Merge branch 'android14-6.1' into 'android14-6.1-lts'

Catches the android14-6.1-lts branch up with the android14-6.1 branch,
which has had a lot of changes that are needed here to resolve future
LTS merges and to ensure that the ABI is kept stable.

It contains the following commits:

* 9fd41ac172 ANDROID: Delete build.config.gki.aarch64.16k.
* 073df44c36 FROMGIT: usb: typec: tcpm: Refactor the PPS APDO selection
* 078410e73f UPSTREAM: usb: typec: tcpm: Fix response to vsafe0V event
* 722f6cc09c ANDROID: Revert "ANDROID: allmodconfig: disable WERROR"
* c2611a04b9 ANDROID: GKI: update symbol list file for xiaomi
* 34fde9ec08 FROMGIT: usb: typec: tcpm: not sink vbus if operational current is 0mA
* 3ebafb7b46 BACKPORT: FROMGIT: mm: handle faults that merely update the accessed bit under the VMA lock
* 9e066d4b35 FROMLIST: mm: Allow fault_dirty_shared_page() to be called under the VMA lock
* 83ab986324 FROMGIT: mm: handle swap and NUMA PTE faults under the VMA lock
* ffcebdef16 FROMGIT: mm: run the fault-around code under the VMA lock
* 072c35fb69 FROMGIT: mm: move FAULT_FLAG_VMA_LOCK check down from do_fault()
* fa9a8adff0 FROMGIT: mm: move FAULT_FLAG_VMA_LOCK check down in handle_pte_fault()
* dd621869c1 BACKPORT: FROMGIT: mm: handle some PMD faults under the VMA lock
* 8594d6a30f BACKPORT: FROMGIT: mm: handle PUD faults under the VMA lock
* 66cbbe6b31 FROMGIT: mm: move FAULT_FLAG_VMA_LOCK check from handle_mm_fault()
* e26044769f BACKPORT: FROMGIT: mm: allow per-VMA locks on file-backed VMAs
* 4cb518a06f FROMGIT: mm: remove CONFIG_PER_VMA_LOCK ifdefs
* f4b32b7f15 FROMGIT: mm: fix a lockdep issue in vma_assert_write_locked
* 250f19771f FROMGIT: mm: handle userfaults under VMA lock
* e704d0e4f9 FROMGIT: mm: handle swap page faults under per-VMA lock
* f8a65b694b FROMGIT: mm: change folio_lock_or_retry to use vm_fault directly
* 693d905ec0 BACKPORT: FROMGIT: mm: drop per-VMA lock when returning VM_FAULT_RETRY or VM_FAULT_COMPLETED
* 939d4b1ccc BACKPORT: FROMGIT: mm: move vma locking out of vma_prepare and dup_anon_vma
* 0f0b09c02c BACKPORT: FROMGIT: mm: always lock new vma before inserting into vma tree
* a8a479ed96 FROMGIT: mm: lock vma explicitly before doing vm_flags_reset and vm_flags_reset_once
* ad18923856 FROMGIT: mm: replace mmap with vma write lock assertions when operating on a vma
* 5f0ca924aa FROMGIT: mm: for !CONFIG_PER_VMA_LOCK equate write lock assertion for vma and mmap
* abb0f2767e FROMGIT: mm: don't drop VMA locks in mm_drop_all_locks()
* 365af746f5 BACKPORT: riscv: mm: try VMA lock-based page fault handling first
* 3c187b4a12 BACKPORT: FROMGIT: mm: enable page walking API to lock vmas during the walk
* b6093c47fe BACKPORT: mm: lock VMA in dup_anon_vma() before setting ->anon_vma
* 0ee0062c94 UPSTREAM: mm: fix memory ordering for mm_lock_seq and vm_lock_seq
* 3378cbd264 FROMGIT: usb: host: ehci-sched: try to turn on io watchdog as long as periodic_count > 0
* 2d3351bd5e FROMGIT: BACKPORT: usb: ehci: add workaround for chipidea PORTSC.PEC bug
* 7fa8861130 UPSTREAM: tty: n_gsm: fix UAF in gsm_cleanup_mux
* 683966ac69 UPSTREAM: mm/mmap: Fix extra maple tree write
* f86c79eb86 FROMGIT: Multi-gen LRU: skip CMA pages when they are not eligible
* 7ae1e02abb UPSTREAM: mm: skip CMA pages when they are not available
* 7666325265 UPSTREAM: dma-buf: fix an error pointer vs NULL bug
* e61d76121f UPSTREAM: dma-buf: keep the signaling time of merged fences v3
* fda157ce15 UPSTREAM: netfilter: nf_tables: skip bound chain on rule flush
* 110a26edd1 UPSTREAM: net/sched: sch_qfq: account for stab overhead in qfq_enqueue
* 9db1437238 UPSTREAM: net/sched: sch_qfq: refactor parsing of netlink parameters
* 7688102949 UPSTREAM: netfilter: nft_set_pipapo: fix improper element removal
* 37f4509407 ANDROID: Add checkpatch target.
* d7dacaa439 UPSTREAM: USB: Gadget: core: Help prevent panic during UVC unconfigure
* 4dc009c3a8 ANDROID: GKI: Update symbols to symbol list
* fadc35923d ANDROID: vendor_hook: fix the error record position of mutex
* 3fc69d3f70 ANDROID: ABI: add allowed list for galaxy
* a5a662187f ANDROID: gfp: add __GFP_CMA in gfpflag_names
* b520b90913 ANDROID: ABI: Update to fix slab-out-of-bounds in xhci_vendor_get_ops
* c2cbb3cc24 ANDROID: usb: host: fix slab-out-of-bounds in xhci_vendor_get_ops
* 64787ee451 ANDROID: GKI: update pixel symbol list for xhci
* b0c06048a8 FROMGIT: fs: drop_caches: draining pages before dropping caches
* 2f76bb83b1 ANDROID: GKI: update symbol list file for xiaomi
* 8e86825eec ANDROID: uid_sys_stats: Use a single work for deferred updates
* 960d9828ee ANDROID: ABI: Update symbol for Exynos SoC
* 3926cc6ef8 ANDROID: GKI: Add symbols to symbol list for vivo
* dbb09068c1 ANDROID: vendor_hooks: Add tune scan type hook in get_scan_count()
* 5e1d25ac2a FROMGIT: BACKPORT: Multi-gen LRU: Fix can_swap in lru_gen_look_around()
* addf1a9a65 FROMGIT: Multi-gen LRU: Avoid race in inc_min_seq()
* a7adb98897 FROMGIT: Multi-gen LRU: Fix per-zone reclaim
* 03812b904e ANDROID: ABI: update symbol list for galaxy
* b283f9b41f ANDROID: oplus: Update the ABI xml and symbol list
* c3d26e2b5a ANDROID: vendor_hooks: Add hooks for lookaround
* 29e2f3e3d1 ANDROID: ABI: Update STG ABI to format version 2
* 3bd3d13701 ANDROID: ABI: Update symbol list for imx
* ad0b008167 FROMGIT: erofs: fix wrong primary bvec selection on deduplicated extents
* 126ef64cba UPSTREAM: media: Add ABGR64_12 video format
* 86e2e8fd05 BACKPORT: media: Add BGR48_12 video format
* 892293272c UPSTREAM: media: Add YUV48_12 video format
* b2cf7e4268 UPSTREAM: media: Add Y212 v4l2 format info
* 0f3f7a21af UPSTREAM: media: Add Y210, Y212 and Y216 formats
* ca7b45b128 UPSTREAM: media: Add Y012 video format
* 343b85ecad UPSTREAM: media: Add P012 and P012M video format
* 7beed73af0 ANDROID: GKI: Create symbol files in include/config
* 295e779e8f ANDROID: fuse-bpf: Use stored bpf for create_open
* 74d9daa59a ANDROID: fuse-bpf: Add bpf to negative fuse_dentry
* 6aef06abba ANDROID: fuse-bpf: Check inode not null
* 4bbda90bd8 ANDROID: fuse-bpf: Fix flock test compile error
* 84ac22a0d3 ANDROID: fuse-bpf: Add partial ioctl support
* e341d2312c ANDROID: ABI: Update oplus symbol list
* f5c707dc65 UPSTREAM: mm/mempolicy: Take VMA lock before replacing policy
* 890b1aabb1 BACKPORT: mm: lock_vma_under_rcu() must check vma->anon_vma under vma lock
* d3b37a712a BACKPORT: FROMGIT: irqchip/gic-v3: Workaround for GIC-700 erratum 2941627
* a89e2cbbc0 ANDROID: GKI: update xiaomi symbol list
* 371f8d901a UPSTREAM: mm: lock newly mapped VMA with corrected ordering
* 0d9960403c UPSTREAM: fork: lock VMAs of the parent process when forking
* e3601b25ae UPSTREAM: mm: lock newly mapped VMA which can be modified after it becomes visible
* 05f7c7fe72 UPSTREAM: mm: lock a vma before stack expansion
* c0ba567af1 ANDROID: GKI: bring back find_extend_vma()
* 188ce9572f BACKPORT: mm: always expand the stack with the mmap write lock held
* 74efdc0966 BACKPORT: execve: expand new process stack manually ahead of time
* c8ad906849 ANDROID: abi_gki_aarch64_qcom: ufshcd_mcq_poll_cqe_lock
* 1afccd4255 UPSTREAM: mm: make find_extend_vma() fail if write lock not held
* 4087cac574 UPSTREAM: powerpc/mm: convert coprocessor fault to lock_mm_and_find_vma()
* 6c33246824 UPSTREAM: mm/fault: convert remaining simple cases to lock_mm_and_find_vma()
* add0a1ea04 UPSTREAM: arm/mm: Convert to using lock_mm_and_find_vma()
* 9f136450af UPSTREAM: riscv/mm: Convert to using lock_mm_and_find_vma()
* 053053fc68 UPSTREAM: mips/mm: Convert to using lock_mm_and_find_vma()
* 9cdce804c0 UPSTREAM: powerpc/mm: Convert to using lock_mm_and_find_vma()
* 1016faf509 BACKPORT: arch/arm64/mm/fault: Fix undeclared variable error in do_page_fault()
* 89298b8b3c BACKPORT: arm64/mm: Convert to using lock_mm_and_find_vma()
* cf70cb4f1f UPSTREAM: mm: make the page fault mmap locking killable
* 544ae28cf6 ANDROID: Inherit "user-aware property" across rtmutex.
* 5e4a5dc820 BACKPORT: blk-crypto: use dynamic lock class for blk_crypto_profile::lock
* db2c29e53d ANDROID: ABI: update symbol list for Xclipse GPU
* 7edb035c79 ANDROID: drm/ttm: export ttm_tt_unpopulate()
* b61f298c0d ANDROID: GKI: Add ABI symbol list(devlink) for MTK
* ec419af28f ANDROID: devlink: Select CONFIG_NET_DEVLINK in Kconfig.gki
* 1e114e6efa ANDROID: KVM: arm64: Fix memory ordering for pKVM module callbacks
* 3803ae4a28 BACKPORT: mm: introduce new 'lock_mm_and_find_vma()' page fault helper
* 66b5ad3507 BACKPORT: maple_tree: fix potential out-of-bounds access in mas_wr_end_piv()
* 19dd4101e0 UPSTREAM: x86/smp: Cure kexec() vs. mwait_play_dead() breakage
* 26260c4bd1 UPSTREAM: x86/smp: Use dedicated cache-line for mwait_play_dead()
* d8cb0365cb UPSTREAM: x86/smp: Remove pointless wmb()s from native_stop_other_cpus()
* 6744547e95 UPSTREAM: x86/smp: Dont access non-existing CPUID leaf
* ba2ccba863 UPSTREAM: x86/smp: Make stop_other_cpus() more robust
* 5c9836e66d UPSTREAM: x86/microcode/AMD: Load late on both threads too
* 53048f151c BACKPORT: mm, hwpoison: when copy-on-write hits poison, take page offline
* a2dff37b0c UPSTREAM: mm, hwpoison: try to recover from copy-on write faults
* 466448f55f BACKPORT: mm/mmap: Fix error return in do_vmi_align_munmap()
* 41b30362e9 BACKPORT: mm/mmap: Fix error path in do_vmi_align_munmap()
* d45a054f9c UPSTREAM: HID: logitech-hidpp: add HIDPP_QUIRK_DELAYED_INIT for the T651.
* 0e477a82e6 UPSTREAM: HID: hidraw: fix data race on device refcount
* af2d741bf3 UPSTREAM: can: isotp: isotp_sendmsg(): fix return error fix on TX path
* 5887040491 UPSTREAM: fbdev: fix potential OOB read in fast_imageblit()
* 6c48edb9c9 ANDROID: GKI: add function symbols for unisoc
* 342aff08ae ANDROID: cgroup: Cleanup android_rvh_cgroup_force_kthread_migration
* fcdea346bb UPSTREAM: net/sched: cls_fw: Fix improper refcount update leads to use-after-free
* f091cc7434 UPSTREAM: netfilter: nf_tables: fix chain binding transaction logic
* 1bb5e7fb37 ANDROID: abi_gki_aarch64_qcom: update abi

Change-Id: I6f86301f218a60c00d03e09a4e3bfebe42bad0d5
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
commit 8b02e8901d
Greg Kroah-Hartman, 2023-08-23 18:27:38 +00:00
162 changed files with 10067 additions and 989 deletions


@ -6,6 +6,7 @@ load("//build/bazel_common_rules/dist:dist.bzl", "copy_to_dist_dir")
load("//build/kernel/kleaf:common_kernels.bzl", "define_common_kernels")
load(
"//build/kernel/kleaf:kernel.bzl",
"checkpatch",
"ddk_headers",
"kernel_abi",
"kernel_build",
@ -40,6 +41,11 @@ _GKI_X86_64_MAKE_GOALS = [
"modules",
]
checkpatch(
name = "checkpatch",
checkpatch_pl = "scripts/checkpatch.pl",
)
write_file(
name = "gki_system_dlkm_modules",
out = "android/gki_system_dlkm_modules",


@ -139,6 +139,9 @@ stable kernels.
| ARM            | MMU-500         | #841119,826419  | N/A                         |
+----------------+-----------------+-----------------+-----------------------------+
| ARM            | GIC-700         | #2941627        | ARM64_ERRATUM_2941627       |
+----------------+-----------------+-----------------+-----------------------------+
| Broadcom       | Brahma-B53      | N/A             | ARM64_ERRATUM_845719        |
+----------------+-----------------+-----------------+-----------------------------+
| Broadcom       | Brahma-B53      | N/A             | ARM64_ERRATUM_843419        |


@ -257,12 +257,45 @@ the second byte and Y'\ :sub:`7-0` in the third byte.
- The padding bits contain undefined values that must be ignored by all
applications and drivers.
The next table lists the packed YUV 4:4:4 formats with 12 bits per component.
Expand the bits per component to 16 bits, data in the high bits, zeros in the low bits,
arranged in little endian order, storing 1 pixel in 6 bytes.
.. flat-table:: Packed YUV 4:4:4 Image Formats (12bpc)
:header-rows: 1
:stub-columns: 0
* - Identifier
- Code
- Byte 1-0
- Byte 3-2
- Byte 5-4
- Byte 7-6
- Byte 9-8
- Byte 11-10
* .. _V4L2-PIX-FMT-YUV48-12:
- ``V4L2_PIX_FMT_YUV48_12``
- 'Y312'
- Y'\ :sub:`0`
- Cb\ :sub:`0`
- Cr\ :sub:`0`
- Y'\ :sub:`1`
- Cb\ :sub:`1`
- Cr\ :sub:`1`
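
For illustration only (not part of the patch), a minimal sketch of this
packing for one pixel, assuming 12-bit input samples; the same
expand-to-16-bits rule is shared by the other 12-bit formats added in
this series (BGR48_12, ABGR64_12, Y012, P012):

#include <stdint.h>

/* Pack one 12-bit Y'CbCr 4:4:4 sample into the 6-byte YUV48_12 layout:
 * each component is widened to 16 bits with the data in the high 12
 * bits and zeros in the low 4 bits, stored little-endian, in Y' Cb Cr
 * order. */
static void pack_yuv48_12(uint8_t *dst, uint16_t y, uint16_t cb, uint16_t cr)
{
	const uint16_t comp[3] = {
		(uint16_t)(y  << 4),	/* data in bits 15-4 */
		(uint16_t)(cb << 4),
		(uint16_t)(cr << 4),
	};
	int i;

	for (i = 0; i < 3; i++) {
		dst[2 * i]     = (uint8_t)(comp[i] & 0xff); /* low byte first */
		dst[2 * i + 1] = (uint8_t)(comp[i] >> 8);
	}
}
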
4:2:2 Subsampling
=================
These formats, commonly referred to as YUYV or YUY2, subsample the chroma
components horizontally by 2, storing 2 pixels in 4 bytes.
components horizontally by 2, storing 2 pixels in a container. The container
is 32-bits for 8-bit formats, and 64-bits for 10+-bit formats.
The packed YUYV formats with more than 8 bits per component are stored as four
16-bit little-endian words. Each word's most significant bits contain one
component, and the least significant bits are zero padding.
.. raw:: latex
@ -270,7 +303,7 @@ components horizontally by 2, storing 2 pixels in 4 bytes.
.. tabularcolumns:: |p{3.4cm}|p{1.2cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|
.. flat-table:: Packed YUV 4:2:2 Formats
.. flat-table:: Packed YUV 4:2:2 Formats in 32-bit container
:header-rows: 1
:stub-columns: 0
@ -337,6 +370,46 @@ components horizontally by 2, storing 2 pixels in 4 bytes.
- Y'\ :sub:`3`
- Cb\ :sub:`2`
.. tabularcolumns:: |p{3.4cm}|p{1.2cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|
.. flat-table:: Packed YUV 4:2:2 Formats in 64-bit container
:header-rows: 1
:stub-columns: 0
* - Identifier
- Code
- Word 0
- Word 1
- Word 2
- Word 3
* .. _V4L2-PIX-FMT-Y210:
- ``V4L2_PIX_FMT_Y210``
- 'Y210'
- Y'\ :sub:`0` (bits 15-6)
- Cb\ :sub:`0` (bits 15-6)
- Y'\ :sub:`1` (bits 15-6)
- Cr\ :sub:`0` (bits 15-6)
* .. _V4L2-PIX-FMT-Y212:
- ``V4L2_PIX_FMT_Y212``
- 'Y212'
- Y'\ :sub:`0` (bits 15-4)
- Cb\ :sub:`0` (bits 15-4)
- Y'\ :sub:`1` (bits 15-4)
- Cr\ :sub:`0` (bits 15-4)
* .. _V4L2-PIX-FMT-Y216:
- ``V4L2_PIX_FMT_Y216``
- 'Y216'
- Y'\ :sub:`0` (bits 15-0)
- Cb\ :sub:`0` (bits 15-0)
- Y'\ :sub:`1` (bits 15-0)
- Cr\ :sub:`0` (bits 15-0)
.. raw:: latex
\normalsize
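
As an illustrative sketch (not from the patch) of the 64-bit container:
recovering the first pixel pair from a Y210 buffer, where each 16-bit
little-endian word carries its sample in bits 15-6:

#include <stdint.h>

/* Load one 16-bit little-endian word. */
static uint16_t le16(const uint8_t *p)
{
	return (uint16_t)(p[0] | (p[1] << 8));
}

/* Unpack the first Y210 pixel pair: four little-endian words holding
 * Y'0, Cb0, Y'1, Cr0, each with the sample in bits 15-6 and zero
 * padding in bits 5-0. */
static void unpack_y210(const uint8_t *src, uint16_t *y0, uint16_t *cb0,
			uint16_t *y1, uint16_t *cr0)
{
	*y0  = le16(src + 0) >> 6;	/* drop the 6 padding bits */
	*cb0 = le16(src + 2) >> 6;
	*y1  = le16(src + 4) >> 6;
	*cr0 = le16(src + 6) >> 6;
}
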


@ -762,6 +762,48 @@ nomenclature that instead use the order of components as seen in a 24- or
\normalsize
12 Bits Per Component
==============================
These formats store an RGB triplet in six or eight bytes, with 12 bits per component.
Expand the bits per component to 16 bits, data in the high bits, zeros in the low bits,
arranged in little endian order.
.. raw:: latex
\small
.. flat-table:: RGB Formats With 12 Bits Per Component
:header-rows: 1
* - Identifier
- Code
- Byte 1-0
- Byte 3-2
- Byte 5-4
- Byte 7-6
* .. _V4L2-PIX-FMT-BGR48-12:
- ``V4L2_PIX_FMT_BGR48_12``
- 'B312'
- B\ :sub:`15-4`
- G\ :sub:`15-4`
- R\ :sub:`15-4`
-
* .. _V4L2-PIX-FMT-ABGR64-12:
- ``V4L2_PIX_FMT_ABGR64_12``
- 'B412'
- B\ :sub:`15-4`
- G\ :sub:`15-4`
- R\ :sub:`15-4`
- A\ :sub:`15-4`
.. raw:: latex
\normalsize
Deprecated RGB Formats
======================


@ -103,6 +103,17 @@ are often referred to as greyscale formats.
- ...
- ...
* .. _V4L2-PIX-FMT-Y012:
- ``V4L2_PIX_FMT_Y012``
- 'Y012'
- Y'\ :sub:`0`\ [3:0] `0000`
- Y'\ :sub:`0`\ [11:4]
- ...
- ...
- ...
* .. _V4L2-PIX-FMT-Y14:
- ``V4L2_PIX_FMT_Y14``
@ -146,3 +157,7 @@ are often referred to as greyscale formats.
than 16 bits. For example, 10 bits per pixel uses values in the range 0 to
1023. For the IPU3_Y10 format 25 pixels are packed into 32 bytes, which
leaves the 6 most significant bits of the last byte padded with 0.
For Y012 and Y12 formats, Y012 places its data in the 12 high bits, with
padding zeros in the 4 low bits, in contrast to the Y12 format, which has
its padding located in the most significant bits of the 16 bit word.
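
Put differently (illustrative sketch, not from the patch), converting
between the two 12-bit greyscale layouts is just a 4-bit shift:

#include <stdint.h>

/* Y012 keeps the sample in bits 15-4 of each little-endian 16-bit
 * word; Y12 keeps it in bits 11-0. Only the padding position differs. */
static inline uint16_t y012_to_y12(uint16_t w)  { return w >> 4; }
static inline uint16_t y12_to_y012(uint16_t w)  { return (uint16_t)(w << 4); }
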


@ -123,6 +123,20 @@ All components are stored with the same number of bits per component.
- Cb, Cr
- Yes
- 4x4 tiles
* - V4L2_PIX_FMT_P012
- 'P012'
- 12
- 4:2:0
- Cb, Cr
- Yes
- Linear
* - V4L2_PIX_FMT_P012M
- 'PM12'
- 12
- 4:2:0
- Cb, Cr
- No
- Linear
* - V4L2_PIX_FMT_NV16
- 'NV16'
- 8
@ -586,6 +600,86 @@ Data in the 10 high bits, zeros in the 6 low bits, arranged in little endian ord
- Cb\ :sub:`11`
- Cr\ :sub:`11`
.. _V4L2-PIX-FMT-P012:
.. _V4L2-PIX-FMT-P012M:
P012 and P012M
--------------
P012 is like NV12 with 12 bits per component, expanded to 16 bits.
Data in the 12 high bits, zeros in the 4 low bits, arranged in little endian order.
.. flat-table:: Sample 4x4 P012 Image
:header-rows: 0
:stub-columns: 0
* - start + 0:
- Y'\ :sub:`00`
- Y'\ :sub:`01`
- Y'\ :sub:`02`
- Y'\ :sub:`03`
* - start + 8:
- Y'\ :sub:`10`
- Y'\ :sub:`11`
- Y'\ :sub:`12`
- Y'\ :sub:`13`
* - start + 16:
- Y'\ :sub:`20`
- Y'\ :sub:`21`
- Y'\ :sub:`22`
- Y'\ :sub:`23`
* - start + 24:
- Y'\ :sub:`30`
- Y'\ :sub:`31`
- Y'\ :sub:`32`
- Y'\ :sub:`33`
* - start + 32:
- Cb\ :sub:`00`
- Cr\ :sub:`00`
- Cb\ :sub:`01`
- Cr\ :sub:`01`
* - start + 40:
- Cb\ :sub:`10`
- Cr\ :sub:`10`
- Cb\ :sub:`11`
- Cr\ :sub:`11`
.. flat-table:: Sample 4x4 P012M Image
:header-rows: 0
:stub-columns: 0
* - start0 + 0:
- Y'\ :sub:`00`
- Y'\ :sub:`01`
- Y'\ :sub:`02`
- Y'\ :sub:`03`
* - start0 + 8:
- Y'\ :sub:`10`
- Y'\ :sub:`11`
- Y'\ :sub:`12`
- Y'\ :sub:`13`
* - start0 + 16:
- Y'\ :sub:`20`
- Y'\ :sub:`21`
- Y'\ :sub:`22`
- Y'\ :sub:`23`
* - start0 + 24:
- Y'\ :sub:`30`
- Y'\ :sub:`31`
- Y'\ :sub:`32`
- Y'\ :sub:`33`
* -
* - start1 + 0:
- Cb\ :sub:`00`
- Cr\ :sub:`00`
- Cb\ :sub:`01`
- Cr\ :sub:`01`
* - start1 + 8:
- Cb\ :sub:`10`
- Cr\ :sub:`10`
- Cb\ :sub:`11`
- Cr\ :sub:`11`
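
A small addressing sketch (illustrative, matching the sample images
above) for single-planar P012, where every sample takes 2 bytes and the
interleaved, 2x2-subsampled Cb/Cr plane follows the luma plane:

#include <stddef.h>

/* Byte offset of the luma sample at (row, col). */
static size_t p012_y_offset(size_t stride, unsigned int row, unsigned int col)
{
	return (size_t)row * stride + 2 * col;
}

/* Byte offset of the Cb sample covering (row, col); Cr follows 2 bytes
 * later. height is the luma plane height in rows. */
static size_t p012_cb_offset(size_t stride, unsigned int height,
			     unsigned int row, unsigned int col)
{
	size_t uv_base = (size_t)height * stride;	/* chroma plane start */

	return uv_base + (size_t)(row / 2) * stride + 4 * (col / 2);
}
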
Fully Planar YUV Formats
========================

(File diff suppressed because it is too large.)


@ -42,6 +42,7 @@
blocking_notifier_chain_register
blocking_notifier_chain_unregister
bpf_trace_run1
bpf_trace_run10
bpf_trace_run2
bpf_trace_run3
bpf_trace_run4
@ -325,6 +326,7 @@
fd_install
fget
_find_first_bit
_find_first_zero_bit
_find_last_bit
_find_next_and_bit
_find_next_bit
@ -701,6 +703,7 @@
___ratelimit
raw_notifier_call_chain
raw_notifier_chain_register
raw_notifier_chain_unregister
_raw_read_lock
_raw_read_unlock
_raw_spin_lock
@ -1025,7 +1028,6 @@
ww_mutex_unlock
# required by cfg80211.ko
bpf_trace_run10
csum_partial
debugfs_rename
__dev_change_net_namespace
@ -1227,8 +1229,10 @@
match_string
memory_read_from_buffer
migrate_swap
perf_event_create_kernel_counter
perf_event_enable
perf_event_read_local
pick_highest_pushable_task
raw_notifier_chain_unregister
raw_spin_rq_lock_nested
raw_spin_rq_unlock
_raw_write_trylock
@ -1272,6 +1276,7 @@
__traceiter_android_vh_binder_restore_priority
__traceiter_android_vh_binder_set_priority
__traceiter_android_vh_binder_wakeup_ilocked
__traceiter_android_vh_jiffies_update
__traceiter_android_vh_scheduler_tick
__traceiter_android_vh_syscall_prctl_finished
__traceiter_binder_transaction_received
@ -1302,6 +1307,7 @@
__tracepoint_android_vh_binder_restore_priority
__tracepoint_android_vh_binder_set_priority
__tracepoint_android_vh_binder_wakeup_ilocked
__tracepoint_android_vh_jiffies_update
__tracepoint_android_vh_scheduler_tick
__tracepoint_android_vh_syscall_prctl_finished
__tracepoint_binder_transaction_received
@ -2048,6 +2054,9 @@
# required by scsc_wlan.ko
arp_tbl
__cpuhp_remove_state
__cpuhp_state_add_instance
__cpuhp_state_remove_instance
dev_addr_mod
dev_alloc_name
__dev_queue_xmit
@ -2056,6 +2065,7 @@
dql_reset
dst_release
ether_setup
__find_nth_bit
for_each_kernel_tracepoint
in4_pton
in6_pton
@ -2200,7 +2210,6 @@
drm_syncobj_get_handle
drm_syncobj_replace_fence
__fdget
_find_first_zero_bit
__folio_put
get_random_u32
__get_task_comm
@ -2261,7 +2270,6 @@
__traceiter_gpu_mem_total
__tracepoint_android_vh_meminfo_proc_show
__tracepoint_gpu_mem_total
ttm_bo_eviction_valuable
ttm_bo_init_reserved
ttm_bo_kmap
ttm_bo_kunmap
@ -2304,6 +2312,7 @@
ttm_resource_manager_usage
ttm_sg_tt_init
ttm_tt_fini
ttm_tt_unpopulate
vm_get_page_prot
__wake_up_locked
ww_mutex_lock_interruptible
@ -2575,5 +2584,6 @@
__skb_get_hash
__skb_gso_segment
tasklet_unlock_wait
ttm_bo_eviction_valuable
ufshcd_mcq_poll_cqe_nolock
unregister_netdevice_many


@ -35,6 +35,7 @@
class_create_file_ns
class_find_device
class_remove_file_ns
cleancache_register_ops
__const_udelay
copy_from_kernel_nofault
cpu_hwcaps
@ -99,6 +100,8 @@
__free_pages
free_pages
free_pages_exact
fsnotify
__fsnotify_parent
generic_file_read_iter
generic_mii_ioctl
generic_perform_write
@ -149,6 +152,8 @@
kasan_flag_enabled
kasprintf
kernel_cpustat
kernel_neon_begin
kernel_neon_end
kernfs_find_and_get_ns
kfree
__kfree_skb
@ -164,6 +169,7 @@
kobject_put
kstrdup
kstrtoint
kstrtos16
kstrtouint
kstrtoull
kthread_create_on_node
@ -257,6 +263,7 @@
register_reboot_notifier
register_restart_handler
register_syscore_ops
regulator_get_current_limit
remove_cpu
rtc_class_open
rtc_read_time
@ -277,6 +284,9 @@
single_open
single_release
skb_copy_ubufs
smpboot_register_percpu_thread
smpboot_unregister_percpu_thread
snd_soc_add_card_controls
snd_soc_find_dai
snd_soc_info_volsw_sx
snd_soc_put_volsw_sx
@ -285,6 +295,7 @@
sprintf
sscanf
__stack_chk_fail
stack_trace_save_regs
stpcpy
strcmp
strim
@ -306,6 +317,12 @@
system_long_wq
system_unbound_wq
sys_tz
tcp_register_congestion_control
tcp_reno_cong_avoid
tcp_reno_ssthresh
tcp_reno_undo_cwnd
tcp_slow_start
tcp_unregister_congestion_control
time64_to_tm
__traceiter_android_rvh_arm64_serror_panic
__traceiter_android_rvh_die_kernel_fault
@ -339,6 +356,7 @@
__traceiter_android_vh_try_to_freeze_todo
__traceiter_android_vh_try_to_freeze_todo_unfrozen
__traceiter_android_vh_watchdog_timer_softlockup
__traceiter_android_vh_wq_lockup_pool
__traceiter_block_rq_insert
__traceiter_console
__traceiter_hrtimer_expire_entry
@ -380,6 +398,7 @@
__tracepoint_android_vh_try_to_freeze_todo
__tracepoint_android_vh_try_to_freeze_todo_unfrozen
__tracepoint_android_vh_watchdog_timer_softlockup
__tracepoint_android_vh_wq_lockup_pool
__tracepoint_block_rq_insert
__tracepoint_console
__tracepoint_hrtimer_expire_entry
@ -399,6 +418,7 @@
up_write
usb_alloc_dev
usb_gstrings_attach
usb_set_configuration
usbnet_get_endpoints
usbnet_link_change
usb_set_device_state


@ -822,6 +822,7 @@
flush_delayed_work
flush_work
__flush_workqueue
__folio_put
fortify_panic
fput
free_candev
@ -969,7 +970,9 @@
i2c_smbus_read_i2c_block_data
i2c_smbus_write_byte
i2c_smbus_write_byte_data
__i2c_smbus_xfer
i2c_smbus_xfer
__i2c_transfer
i2c_transfer
i2c_transfer_buffer_flags
i2c_unregister_device
@ -1143,6 +1146,7 @@
kstrtoull
kthread_bind
kthread_create_on_node
kthread_freezable_should_stop
kthread_park
kthread_parkme
kthread_should_park
@ -1324,6 +1328,9 @@
nsecs_to_jiffies
ns_to_timespec64
__num_online_cpus
nvmem_cell_get
nvmem_cell_put
nvmem_cell_read
nvmem_cell_read_u32
nvmem_cell_read_u64
nvmem_device_read
@ -1377,6 +1384,7 @@
of_gen_pool_get
of_get_child_by_name
of_get_compatible_child
of_get_cpu_node
of_get_display_timing
of_get_i2c_adapter_by_node
of_get_mac_address
@ -1442,6 +1450,8 @@
of_usb_update_otg_caps
oops_in_progress
open_candev
page_pinner_inited
__page_pinner_put_page
page_pool_alloc_pages
page_pool_create
page_pool_destroy
@ -1586,6 +1596,7 @@
platform_irqchip_probe
platform_irq_count
platform_msi_create_irq_domain
pm_genpd_add_subdomain
pm_genpd_init
pm_genpd_remove
pm_genpd_remove_device
@ -1597,6 +1608,7 @@
pm_runtime_forbid
pm_runtime_force_resume
pm_runtime_force_suspend
pm_runtime_get_if_active
__pm_runtime_idle
pm_runtime_no_callbacks
__pm_runtime_resume
@ -1796,10 +1808,14 @@
rtc_time64_to_tm
rtc_tm_to_time64
rtc_update_irq
rt_mutex_lock
rt_mutex_trylock
rt_mutex_unlock
rtnl_is_locked
rtnl_lock
rtnl_unlock
sched_clock
sched_setattr_nocheck
sched_set_fifo_low
schedule
schedule_hrtimeout
@ -2248,6 +2264,7 @@
__v4l2_device_register_subdev_nodes
v4l2_device_unregister
v4l2_device_unregister_subdev
v4l2_enum_dv_timings_cap
v4l2_event_dequeue
v4l2_event_pending
v4l2_event_queue
@ -2298,6 +2315,7 @@
v4l2_m2m_unregister_media_controller
v4l2_m2m_update_start_streaming_state
v4l2_m2m_update_stop_streaming_state
v4l2_match_dv_timings
v4l2_s_parm_cap
v4l2_src_change_event_subscribe
v4l2_subdev_call_wrappers
@ -2414,6 +2432,7 @@
xdp_do_redirect
xdp_master_redirect
xdp_return_frame
xdp_return_frame_rx_napi
xdp_rxq_info_is_reg
__xdp_rxq_info_reg
xdp_rxq_info_reg_mem_model


@ -416,6 +416,7 @@
device_release_driver
device_remove_bin_file
device_remove_file
device_remove_file_self
device_rename
__device_reset
device_set_of_node_from_dev
@ -429,6 +430,22 @@
_dev_info
__dev_kfree_skb_any
__dev_kfree_skb_irq
devlink_alloc_ns
devlink_flash_update_status_notify
devlink_fmsg_binary_pair_nest_end
devlink_fmsg_binary_pair_nest_start
devlink_fmsg_binary_put
devlink_free
devlink_health_report
devlink_health_reporter_create
devlink_health_reporter_destroy
devlink_health_reporter_priv
devlink_health_reporter_state_update
devlink_priv
devlink_region_create
devlink_region_destroy
devlink_register
devlink_unregister
dev_load
devm_add_action
__devm_alloc_percpu


@ -86,6 +86,7 @@
tcf_exts_validate
tcf_queue_work
__traceiter_android_rvh_post_init_entity_util_avg
__traceiter_android_rvh_rtmutex_force_update
__traceiter_android_vh_account_process_tick_gran
__traceiter_android_vh_account_task_time
__traceiter_android_vh_do_futex
@ -99,11 +100,6 @@
__traceiter_android_vh_record_pcpu_rwsem_starttime
__traceiter_android_vh_record_rtmutex_lock_starttime
__traceiter_android_vh_record_rwsem_lock_starttime
__tracepoint_android_vh_record_mutex_lock_starttime
__tracepoint_android_vh_record_pcpu_rwsem_starttime
__tracepoint_android_vh_record_rtmutex_lock_starttime
__tracepoint_android_vh_record_rwsem_lock_starttime
__trace_puts
__traceiter_android_vh_alter_mutex_list_add
__traceiter_android_vh_binder_free_proc
__traceiter_android_vh_binder_has_work_ilocked
@ -121,8 +117,11 @@
__traceiter_android_vh_binder_thread_release
__traceiter_android_vh_binder_wait_for_work
__traceiter_android_vh_cgroup_set_task
__traceiter_android_vh_check_folio_look_around_ref
__traceiter_android_vh_dup_task_struct
__traceiter_android_vh_exit_signal
__traceiter_android_vh_look_around
__traceiter_android_vh_look_around_migrate_folio
__traceiter_android_vh_mem_cgroup_id_remove
__traceiter_android_vh_mem_cgroup_css_offline
__traceiter_android_vh_mem_cgroup_css_online
@ -136,6 +135,7 @@
__traceiter_android_vh_cleanup_old_buffers_bypass
__traceiter_android_vh_dm_bufio_shrink_scan_bypass
__traceiter_android_vh_mutex_unlock_slowpath
__traceiter_android_vh_rtmutex_waiter_prio
__traceiter_android_vh_rwsem_can_spin_on_owner
__traceiter_android_vh_rwsem_opt_spin_finish
__traceiter_android_vh_rwsem_opt_spin_start
@ -143,6 +143,7 @@
__traceiter_android_vh_sched_stat_runtime_rt
__traceiter_android_vh_shrink_node_memcgs
__traceiter_android_vh_sync_txn_recvd
__traceiter_android_vh_task_blocks_on_rtmutex
__traceiter_block_bio_queue
__traceiter_block_getrq
__traceiter_block_rq_complete
@ -156,7 +157,9 @@
__traceiter_sched_stat_wait
__traceiter_sched_waking
__traceiter_task_rename
__traceiter_android_vh_test_clear_look_around_ref
__tracepoint_android_rvh_post_init_entity_util_avg
__tracepoint_android_rvh_rtmutex_force_update
__tracepoint_android_vh_account_process_tick_gran
__tracepoint_android_vh_account_task_time
__tracepoint_android_vh_alter_mutex_list_add
@ -176,6 +179,7 @@
__tracepoint_android_vh_binder_thread_release
__tracepoint_android_vh_binder_wait_for_work
__tracepoint_android_vh_cgroup_set_task
__tracepoint_android_vh_check_folio_look_around_ref
__tracepoint_android_vh_do_futex
__tracepoint_android_vh_dup_task_struct
__tracepoint_android_vh_exit_signal
@ -191,6 +195,8 @@
__tracepoint_android_vh_futex_wake_traverse_plist
__tracepoint_android_vh_futex_wake_up_q_finish
__tracepoint_android_vh_irqtime_account_process_tick
__tracepoint_android_vh_look_around
__tracepoint_android_vh_look_around_migrate_folio
__tracepoint_android_vh_mutex_can_spin_on_owner
__tracepoint_android_vh_mutex_opt_spin_finish
__tracepoint_android_vh_mutex_opt_spin_start
@ -198,6 +204,11 @@
__tracepoint_android_vh_cleanup_old_buffers_bypass
__tracepoint_android_vh_dm_bufio_shrink_scan_bypass
__tracepoint_android_vh_mutex_unlock_slowpath
__tracepoint_android_vh_record_mutex_lock_starttime
__tracepoint_android_vh_record_pcpu_rwsem_starttime
__tracepoint_android_vh_record_rtmutex_lock_starttime
__tracepoint_android_vh_record_rwsem_lock_starttime
__tracepoint_android_vh_rtmutex_waiter_prio
__tracepoint_android_vh_rwsem_can_spin_on_owner
__tracepoint_android_vh_rwsem_opt_spin_finish
__tracepoint_android_vh_rwsem_opt_spin_start
@ -205,6 +216,8 @@
__tracepoint_android_vh_sched_stat_runtime_rt
__tracepoint_android_vh_shrink_node_memcgs
__tracepoint_android_vh_sync_txn_recvd
__tracepoint_android_vh_task_blocks_on_rtmutex
__tracepoint_android_vh_test_clear_look_around_ref
__tracepoint_block_bio_queue
__tracepoint_block_getrq
__tracepoint_block_rq_complete
@ -218,6 +231,7 @@
__tracepoint_sched_stat_wait
__tracepoint_sched_waking
__tracepoint_task_rename
__trace_puts
try_to_free_mem_cgroup_pages
typec_mux_get_drvdata
unregister_memory_notifier
@ -227,3 +241,4 @@
wait_for_completion_killable_timeout
wakeup_source_remove
wq_worker_comm
zero_pfn


@ -2290,6 +2290,9 @@
__xfrm_state_destroy
xfrm_state_lookup_byspi
xfrm_stateonly_find
xhci_address_device
xhci_bus_resume
xhci_bus_suspend
xhci_gen_setup
xhci_init_driver
xhci_resume


@ -1544,6 +1544,7 @@
iommu_group_get_iommudata
iommu_group_put
iommu_group_ref_get
iommu_group_remove_device
iommu_group_set_iommudata
iommu_iova_to_phys
iommu_map
@ -3647,6 +3648,7 @@
ufshcd_hold
ufshcd_mcq_config_esi
ufshcd_mcq_enable_esi
ufshcd_mcq_poll_cqe_lock
ufshcd_mcq_poll_cqe_nolock
ufshcd_mcq_write_cqis
ufshcd_pltfrm_init


@ -574,6 +574,8 @@
skb_unlink
sk_error_report
sk_free
snd_ctl_find_id
snd_info_get_line
snprintf
sock_alloc_send_pskb
sock_create_kern
@ -1578,6 +1580,11 @@
spi_controller_suspend
spi_finalize_current_transfer
# required by sprd-audio-codec.ko
regulator_register
snd_pcm_rate_bit_to_rate
snd_pcm_rate_to_rate_bit
# required by sprd-bc1p2.ko
kthread_flush_worker
__kthread_init_worker
@ -1662,14 +1669,18 @@
drm_poll
drm_read
drm_release
drm_send_event_timestamp_locked
drm_vblank_init
mipi_dsi_host_register
mipi_dsi_host_unregister
mipi_dsi_set_maximum_return_packet_size
of_drm_find_bridge
of_get_drm_display_mode
of_graph_get_port_by_id
of_graph_get_remote_node
__platform_register_drivers
platform_unregister_drivers
regmap_get_reg_stride
# required by sprd-iommu.ko
iommu_device_register
@ -1761,6 +1772,9 @@
devm_watchdog_register_device
watchdog_init_timeout
# required by sprdbt_tty.ko
tty_port_link_device
# required by sysdump.ko
android_rvh_probe_register
input_close_device


@ -419,6 +419,7 @@
__traceiter_android_vh_try_to_freeze_todo
__traceiter_android_vh_try_to_freeze_todo_unfrozen
__traceiter_android_vh_try_to_unmap_one
__traceiter_android_vh_tune_scan_type
__traceiter_android_vh_ufs_check_int_errors
__traceiter_android_vh_ufs_clock_scaling
__traceiter_android_vh_ufs_compl_command
@ -588,6 +589,7 @@
__tracepoint_android_vh_try_to_unmap_one
__tracepoint_android_vh_try_to_freeze_todo
__tracepoint_android_vh_try_to_freeze_todo_unfrozen
__tracepoint_android_vh_tune_scan_type
__tracepoint_android_vh_ufs_check_int_errors
__tracepoint_android_vh_ufs_clock_scaling
__tracepoint_android_vh_ufs_compl_command


@ -218,6 +218,12 @@
kernfs_path_from_node
blkcg_activate_policy
#required by mq-deadline module
blk_mq_debugfs_rq_show
seq_list_start
seq_list_next
__blk_mq_debugfs_rq_show
#required by metis.ko module
__traceiter_android_vh_rwsem_read_wait_start
__traceiter_android_vh_rwsem_write_wait_start
@ -310,3 +316,19 @@
# required by SAGT module
__traceiter_android_rvh_before_do_sched_yield
__tracepoint_android_rvh_before_do_sched_yield
#required by minetwork.ko
sock_wake_async
bpf_map_put
bpf_map_inc
__dev_direct_xmit
napi_busy_loop
int_active_memcg
bpf_redirect_info
dma_need_sync
page_pool_put_page_bulk
build_skb_around
#required by xm_ispv4_pcie.ko
pci_ioremap_bar
pci_disable_pcie_error_reporting


@ -28,6 +28,7 @@ config ALPHA
select GENERIC_SMP_IDLE_THREAD
select HAVE_ARCH_AUDITSYSCALL
select HAVE_MOD_ARCH_SPECIFIC
select LOCK_MM_AND_FIND_VMA
select MODULES_USE_ELF_RELA
select ODD_RT_SIGACTION
select OLD_SIGSUSPEND


@ -119,20 +119,12 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
flags |= FAULT_FLAG_USER;
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
retry:
mmap_read_lock(mm);
vma = find_vma(mm, address);
vma = lock_mm_and_find_vma(mm, address, regs);
if (!vma)
goto bad_area;
if (vma->vm_start <= address)
goto good_area;
if (!(vma->vm_flags & VM_GROWSDOWN))
goto bad_area;
if (expand_stack(vma, address))
goto bad_area;
goto bad_area_nosemaphore;
/* Ok, we have a good vm_area for this memory access, so
we can handle it. */
good_area:
si_code = SEGV_ACCERR;
if (cause < 0) {
if (!(vma->vm_flags & VM_EXEC))
@ -189,6 +181,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
bad_area:
mmap_read_unlock(mm);
bad_area_nosemaphore:
if (user_mode(regs))
goto do_sigsegv;
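
All of the per-architecture conversions in this merge lean on the same
helper; roughly, as a simplified sketch of its contract (not the exact
upstream implementation):

/*
 * Sketch of lock_mm_and_find_vma(): take the mmap read lock, refusing
 * kernel-mode faults that come from code without an exception-table
 * entry; look up the VMA; and grow the stack when the address falls
 * under a VM_GROWSDOWN mapping. Returns the VMA with the mmap lock
 * held, or NULL with the lock released, which is why the callers
 * above branch to bad_area_nosemaphore on failure.
 */
struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
					    unsigned long addr,
					    struct pt_regs *regs)
{
	struct vm_area_struct *vma;

	if (!get_mmap_lock_carefully(mm, regs))
		return NULL;

	vma = find_vma(mm, addr);
	if (likely(vma && vma->vm_start <= addr))
		return vma;		/* common case */

	if (vma && (vma->vm_flags & VM_GROWSDOWN)) {
		/* The real helper upgrades to the mmap write lock
		 * before expanding the stack; simplified here. */
		if (!expand_stack_locked(vma, addr))
			return vma;
	}
	mmap_read_unlock(mm);
	return NULL;
}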


@ -41,6 +41,7 @@ config ARC
select HAVE_PERF_EVENTS
select HAVE_SYSCALL_TRACEPOINTS
select IRQ_DOMAIN
select LOCK_MM_AND_FIND_VMA
select MODULES_USE_ELF_RELA
select OF
select OF_EARLY_FLATTREE


@ -113,15 +113,9 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
retry:
mmap_read_lock(mm);
vma = find_vma(mm, address);
vma = lock_mm_and_find_vma(mm, address, regs);
if (!vma)
goto bad_area;
if (unlikely(address < vma->vm_start)) {
if (!(vma->vm_flags & VM_GROWSDOWN) || expand_stack(vma, address))
goto bad_area;
}
goto bad_area_nosemaphore;
/*
* vm_area is good, now check permissions for this memory access
@ -161,6 +155,7 @@ void do_page_fault(unsigned long address, struct pt_regs *regs)
bad_area:
mmap_read_unlock(mm);
bad_area_nosemaphore:
/*
* Major/minor page fault accounting
* (in case of retry we only land here once)


@ -122,6 +122,7 @@ config ARM
select HAVE_UID16
select HAVE_VIRT_CPU_ACCOUNTING_GEN
select IRQ_FORCED_THREADING
select LOCK_MM_AND_FIND_VMA
select MODULES_USE_ELF_REL
select NEED_DMA_MAP_STATE
select OF_EARLY_FLATTREE if OF


@ -231,37 +231,11 @@ static inline bool is_permission_fault(unsigned int fsr)
return false;
}
static vm_fault_t __kprobes
__do_page_fault(struct mm_struct *mm, unsigned long addr, unsigned int flags,
unsigned long vma_flags, struct pt_regs *regs)
{
struct vm_area_struct *vma = find_vma(mm, addr);
if (unlikely(!vma))
return VM_FAULT_BADMAP;
if (unlikely(vma->vm_start > addr)) {
if (!(vma->vm_flags & VM_GROWSDOWN))
return VM_FAULT_BADMAP;
if (addr < FIRST_USER_ADDRESS)
return VM_FAULT_BADMAP;
if (expand_stack(vma, addr))
return VM_FAULT_BADMAP;
}
/*
* ok, we have a good vm_area for this memory access, check the
* permissions on the VMA allow for the fault which occurred.
*/
if (!(vma->vm_flags & vma_flags))
return VM_FAULT_BADACCESS;
return handle_mm_fault(vma, addr & PAGE_MASK, flags, regs);
}
static int __kprobes
do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
{
struct mm_struct *mm = current->mm;
struct vm_area_struct *vma;
int sig, code;
vm_fault_t fault;
unsigned int flags = FAULT_FLAG_DEFAULT;
@ -300,31 +274,21 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
/*
* As per x86, we may deadlock here. However, since the kernel only
* validly references user space from well defined areas of the code,
* we can bug out early if this is from code which shouldn't.
*/
if (!mmap_read_trylock(mm)) {
if (!user_mode(regs) && !search_exception_tables(regs->ARM_pc))
goto no_context;
retry:
mmap_read_lock(mm);
} else {
/*
* The above down_read_trylock() might have succeeded in
* which case, we'll have missed the might_sleep() from
* down_read()
*/
might_sleep();
#ifdef CONFIG_DEBUG_VM
if (!user_mode(regs) &&
!search_exception_tables(regs->ARM_pc))
goto no_context;
#endif
vma = lock_mm_and_find_vma(mm, addr, regs);
if (unlikely(!vma)) {
fault = VM_FAULT_BADMAP;
goto bad_area;
}
fault = __do_page_fault(mm, addr, flags, vm_flags, regs);
/*
* ok, we have a good vm_area for this memory access, check the
* permissions on the VMA allow for the fault which occurred.
*/
if (!(vma->vm_flags & vm_flags))
fault = VM_FAULT_BADACCESS;
else
fault = handle_mm_fault(vma, addr & PAGE_MASK, flags, regs);
/* If we need to retry but a fatal signal is pending, handle the
* signal first. We do not need to release the mmap_lock because
@ -355,6 +319,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
if (likely(!(fault & (VM_FAULT_ERROR | VM_FAULT_BADMAP | VM_FAULT_BADACCESS))))
return 0;
bad_area:
/*
* If we are in kernel mode at this point, we
* have no context to handle this fault with.


@ -216,6 +216,7 @@ config ARM64
select IRQ_DOMAIN
select IRQ_FORCED_THREADING
select KASAN_VMALLOC if KASAN
select LOCK_MM_AND_FIND_VMA
select MODULES_USE_ELF_RELA
select NEED_DMA_MAP_STATE
select NEED_SG_DMA_LENGTH


@ -39,7 +39,12 @@ static bool (*default_trap_handler)(struct kvm_cpu_context *host_ctxt);
int __pkvm_register_host_smc_handler(bool (*cb)(struct kvm_cpu_context *))
{
return cmpxchg(&default_host_smc_handler, NULL, cb) ? -EBUSY : 0;
/*
* Paired with smp_load_acquire(&default_host_smc_handler) in
* handle_host_smc(). Ensure memory stores happening during a pKVM module
* init are observed before executing the callback.
*/
return cmpxchg_release(&default_host_smc_handler, NULL, cb) ? -EBUSY : 0;
}
int __pkvm_register_default_trap_handler(bool (*cb)(struct kvm_cpu_context *))
@ -1376,7 +1381,7 @@ static void handle_host_smc(struct kvm_cpu_context *host_ctxt)
handled = kvm_host_psci_handler(host_ctxt);
if (!handled)
handled = kvm_host_ffa_handler(host_ctxt);
if (!handled && READ_ONCE(default_host_smc_handler))
if (!handled && smp_load_acquire(&default_host_smc_handler))
handled = default_host_smc_handler(host_ctxt);
if (!handled)
__kvm_hyp_host_forward_smc(host_ctxt);


@ -28,14 +28,19 @@ struct kvm_host_psci_config __ro_after_init kvm_host_psci_config;
static void (*pkvm_psci_notifier)(enum pkvm_psci_notification, struct kvm_cpu_context *);
static void pkvm_psci_notify(enum pkvm_psci_notification notif, struct kvm_cpu_context *host_ctxt)
{
if (READ_ONCE(pkvm_psci_notifier))
if (smp_load_acquire(&pkvm_psci_notifier))
pkvm_psci_notifier(notif, host_ctxt);
}
#ifdef CONFIG_MODULES
int __pkvm_register_psci_notifier(void (*cb)(enum pkvm_psci_notification, struct kvm_cpu_context *))
{
return cmpxchg(&pkvm_psci_notifier, NULL, cb) ? -EBUSY : 0;
/*
* Paired with smp_load_acquire(&pkvm_psci_notifier) in
* pkvm_psci_notify(). Ensure memory stores happening during a pKVM module
* init are observed before executing the callback.
*/
return cmpxchg_release(&pkvm_psci_notifier, NULL, cb) ? -EBUSY : 0;
}
#endif


@ -35,7 +35,8 @@ static inline void __hyp_putx4n(unsigned long x, int n)
static inline bool hyp_serial_enabled(void)
{
return !!READ_ONCE(__hyp_putc);
/* Paired with __pkvm_register_serial_driver()'s cmpxchg */
return !!smp_load_acquire(&__hyp_putc);
}
void hyp_puts(const char *s)
@ -64,5 +65,10 @@ void hyp_putc(char c)
int __pkvm_register_serial_driver(void (*cb)(char))
{
return cmpxchg(&__hyp_putc, NULL, cb) ? -EBUSY : 0;
/*
* Paired with smp_load_acquire(&__hyp_putc) in
* hyp_serial_enabled(). Ensure memory stores happening during a pKVM
* module init are observed before executing the callback.
*/
return cmpxchg_release(&__hyp_putc, NULL, cb) ? -EBUSY : 0;
}
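
For context on the pattern shared by these three pKVM hooks, here is an
illustrative userspace C11 analogue (not kernel code): the release
store at registration pairs with the acquire load at the call site, so
everything the module's init code wrote beforehand is visible by the
time the callback can run.

#include <stdatomic.h>
#include <stddef.h>

static _Atomic(void (*)(char)) putc_cb;
static int module_state;	/* stand-in for state set up at init */

static int register_cb(void (*cb)(char))
{
	void (*expected)(char) = NULL;

	module_state = 42;	/* must be visible before cb is published */

	/* Release semantics order the store above before the pointer
	 * becomes visible, like cmpxchg_release() in the hunks above. */
	return atomic_compare_exchange_strong_explicit(
			&putc_cb, &expected, cb,
			memory_order_release, memory_order_relaxed) ? 0 : -1;
}

static void maybe_putc(char c)
{
	/* Acquire pairs with the release above, like smp_load_acquire(). */
	void (*cb)(char) = atomic_load_explicit(&putc_cb, memory_order_acquire);

	if (cb)
		cb(c);	/* guaranteed to observe module_state == 42 */
}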


@ -502,27 +502,14 @@ static void do_bad_area(unsigned long far, unsigned long esr,
#define VM_FAULT_BADMAP ((__force vm_fault_t)0x010000)
#define VM_FAULT_BADACCESS ((__force vm_fault_t)0x020000)
static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
static vm_fault_t __do_page_fault(struct mm_struct *mm,
struct vm_area_struct *vma, unsigned long addr,
unsigned int mm_flags, unsigned long vm_flags,
struct pt_regs *regs)
{
struct vm_area_struct *vma = find_vma(mm, addr);
if (unlikely(!vma))
return VM_FAULT_BADMAP;
/*
* Ok, we have a good vm_area for this memory access, so we can handle
* it.
*/
if (unlikely(vma->vm_start > addr)) {
if (!(vma->vm_flags & VM_GROWSDOWN))
return VM_FAULT_BADMAP;
if (expand_stack(vma, addr))
return VM_FAULT_BADMAP;
}
/*
* Check that the permissions on the VMA allow for the fault which
* occurred.
*/
@ -554,9 +541,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
unsigned long vm_flags;
unsigned int mm_flags = FAULT_FLAG_DEFAULT;
unsigned long addr = untagged_addr(far);
#ifdef CONFIG_PER_VMA_LOCK
struct vm_area_struct *vma;
#endif
if (kprobe_page_fault(regs, esr))
return 0;
@ -614,7 +599,6 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
#ifdef CONFIG_PER_VMA_LOCK
if (!(mm_flags & FAULT_FLAG_USER))
goto lock_mmap;
@ -627,7 +611,8 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
goto lock_mmap;
}
fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
vma_end_read(vma);
if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
vma_end_read(vma);
if (!(fault & VM_FAULT_RETRY)) {
count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
@ -642,32 +627,15 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
return 0;
}
lock_mmap:
#endif /* CONFIG_PER_VMA_LOCK */
/*
* As per x86, we may deadlock here. However, since the kernel only
* validly references user space from well defined areas of the code,
* we can bug out early if this is from code which shouldn't.
*/
if (!mmap_read_trylock(mm)) {
if (!user_mode(regs) && !search_exception_tables(regs->pc))
goto no_context;
retry:
mmap_read_lock(mm);
} else {
/*
* The above mmap_read_trylock() might have succeeded in which
* case, we'll have missed the might_sleep() from down_read().
*/
might_sleep();
#ifdef CONFIG_DEBUG_VM
if (!user_mode(regs) && !search_exception_tables(regs->pc)) {
mmap_read_unlock(mm);
goto no_context;
}
#endif
vma = lock_mm_and_find_vma(mm, addr, regs);
if (unlikely(!vma)) {
fault = VM_FAULT_BADMAP;
goto done;
}
fault = __do_page_fault(mm, addr, mm_flags, vm_flags, regs);
fault = __do_page_fault(mm, vma, addr, mm_flags, vm_flags, regs);
/* Quick path to respond to signals */
if (fault_signal_pending(fault, regs)) {
@ -686,9 +654,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
}
mmap_read_unlock(mm);
#ifdef CONFIG_PER_VMA_LOCK
done:
#endif
/*
* Handle the "normal" (no error) case first.
*/


@ -96,6 +96,7 @@ config CSKY
select HAVE_RSEQ
select HAVE_STACKPROTECTOR
select HAVE_SYSCALL_TRACEPOINTS
select LOCK_MM_AND_FIND_VMA
select MAY_HAVE_SPARSE_IRQ
select MODULES_USE_ELF_RELA if MODULES
select OF


@ -97,13 +97,12 @@ static inline void mm_fault_error(struct pt_regs *regs, unsigned long addr, vm_f
BUG();
}
static inline void bad_area(struct pt_regs *regs, struct mm_struct *mm, int code, unsigned long addr)
static inline void bad_area_nosemaphore(struct pt_regs *regs, struct mm_struct *mm, int code, unsigned long addr)
{
/*
* Something tried to access memory that isn't in our memory map.
* Fix it, but check if it's kernel or user first.
*/
mmap_read_unlock(mm);
/* User mode accesses just cause a SIGSEGV */
if (user_mode(regs)) {
do_trap(regs, SIGSEGV, code, addr);
@ -238,20 +237,9 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
if (is_write(regs))
flags |= FAULT_FLAG_WRITE;
retry:
mmap_read_lock(mm);
vma = find_vma(mm, addr);
vma = lock_mm_and_find_vma(mm, addr, regs);
if (unlikely(!vma)) {
bad_area(regs, mm, code, addr);
return;
}
if (likely(vma->vm_start <= addr))
goto good_area;
if (unlikely(!(vma->vm_flags & VM_GROWSDOWN))) {
bad_area(regs, mm, code, addr);
return;
}
if (unlikely(expand_stack(vma, addr))) {
bad_area(regs, mm, code, addr);
bad_area_nosemaphore(regs, mm, code, addr);
return;
}
@ -259,11 +247,11 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
* Ok, we have a good vm_area for this memory access, so
* we can handle it.
*/
good_area:
code = SEGV_ACCERR;
if (unlikely(access_error(regs, vma))) {
bad_area(regs, mm, code, addr);
mmap_read_unlock(mm);
bad_area_nosemaphore(regs, mm, code, addr);
return;
}


@ -28,6 +28,7 @@ config HEXAGON
select GENERIC_SMP_IDLE_THREAD
select STACKTRACE_SUPPORT
select GENERIC_CLOCKEVENTS_BROADCAST
select LOCK_MM_AND_FIND_VMA
select MODULES_USE_ELF_RELA
select GENERIC_CPU_DEVICES
select ARCH_WANT_LD_ORPHAN_WARN


@ -57,21 +57,10 @@ void do_page_fault(unsigned long address, long cause, struct pt_regs *regs)
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
retry:
mmap_read_lock(mm);
vma = find_vma(mm, address);
if (!vma)
goto bad_area;
vma = lock_mm_and_find_vma(mm, address, regs);
if (unlikely(!vma))
goto bad_area_nosemaphore;
if (vma->vm_start <= address)
goto good_area;
if (!(vma->vm_flags & VM_GROWSDOWN))
goto bad_area;
if (expand_stack(vma, address))
goto bad_area;
good_area:
/* Address space is OK. Now check access rights. */
si_code = SEGV_ACCERR;
@ -140,6 +129,7 @@ void do_page_fault(unsigned long address, long cause, struct pt_regs *regs)
bad_area:
mmap_read_unlock(mm);
bad_area_nosemaphore:
if (user_mode(regs)) {
force_sig_fault(SIGSEGV, si_code, (void __user *)address);
return;


@ -110,10 +110,12 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re
* register backing store that needs to expand upwards, in
this case vma will be null, but prev_vma will be non-null
*/
if (( !vma && prev_vma ) || (address < vma->vm_start) )
goto check_expansion;
if (( !vma && prev_vma ) || (address < vma->vm_start) ) {
vma = expand_stack(mm, address);
if (!vma)
goto bad_area_nosemaphore;
}
good_area:
code = SEGV_ACCERR;
/* OK, we've got a good vm_area for this memory area. Check the access permissions: */
@ -174,35 +176,9 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re
mmap_read_unlock(mm);
return;
check_expansion:
if (!(prev_vma && (prev_vma->vm_flags & VM_GROWSUP) && (address == prev_vma->vm_end))) {
if (!vma)
goto bad_area;
if (!(vma->vm_flags & VM_GROWSDOWN))
goto bad_area;
if (REGION_NUMBER(address) != REGION_NUMBER(vma->vm_start)
|| REGION_OFFSET(address) >= RGN_MAP_LIMIT)
goto bad_area;
if (expand_stack(vma, address))
goto bad_area;
} else {
vma = prev_vma;
if (REGION_NUMBER(address) != REGION_NUMBER(vma->vm_start)
|| REGION_OFFSET(address) >= RGN_MAP_LIMIT)
goto bad_area;
/*
* Since the register backing store is accessed sequentially,
* we disallow growing it by more than a page at a time.
*/
if (address > vma->vm_end + PAGE_SIZE - sizeof(long))
goto bad_area;
if (expand_upwards(vma, address))
goto bad_area;
}
goto good_area;
bad_area:
mmap_read_unlock(mm);
bad_area_nosemaphore:
if ((isr & IA64_ISR_SP)
|| ((isr & IA64_ISR_NA) && (isr & IA64_ISR_CODE_MASK) == IA64_ISR_CODE_LFETCH))
{


@ -107,6 +107,7 @@ config LOONGARCH
select HAVE_VIRT_CPU_ACCOUNTING_GEN if !SMP
select IRQ_FORCED_THREADING
select IRQ_LOONGARCH_CPU
select LOCK_MM_AND_FIND_VMA
select MMU_GATHER_MERGE_VMAS if MMU
select MODULES_USE_ELF_RELA if MODULES
select NEED_PER_CPU_EMBED_FIRST_CHUNK


@ -166,22 +166,18 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
retry:
mmap_read_lock(mm);
vma = find_vma(mm, address);
if (!vma)
goto bad_area;
if (vma->vm_start <= address)
goto good_area;
if (!(vma->vm_flags & VM_GROWSDOWN))
goto bad_area;
if (!expand_stack(vma, address))
goto good_area;
vma = lock_mm_and_find_vma(mm, address, regs);
if (unlikely(!vma))
goto bad_area_nosemaphore;
goto good_area;
/*
* Something tried to access memory that isn't in our memory map..
* Fix it, but check if it's kernel or user first..
*/
bad_area:
mmap_read_unlock(mm);
bad_area_nosemaphore:
do_sigsegv(regs, write, address, si_code);
return;


@ -105,8 +105,9 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
if (address + 256 < rdusp())
goto map_err;
}
if (expand_stack(vma, address))
goto map_err;
vma = expand_stack(mm, address);
if (!vma)
goto map_err_nosemaphore;
/*
* Ok, we have a good vm_area for this memory access, so
@ -193,10 +194,12 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
goto send_sig;
map_err:
mmap_read_unlock(mm);
map_err_nosemaphore:
current->thread.signo = SIGSEGV;
current->thread.code = SEGV_MAPERR;
current->thread.faddr = address;
goto send_sig;
return send_fault_sig(regs);
acc_err:
current->thread.signo = SIGSEGV;


@ -192,8 +192,9 @@ void do_page_fault(struct pt_regs *regs, unsigned long address,
&& (kernel_mode(regs) || !store_updates_sp(regs)))
goto bad_area;
}
if (expand_stack(vma, address))
goto bad_area;
vma = expand_stack(mm, address);
if (!vma)
goto bad_area_nosemaphore;
good_area:
code = SEGV_ACCERR;


@ -94,6 +94,7 @@ config MIPS
select HAVE_VIRT_CPU_ACCOUNTING_GEN if 64BIT || !SMP
select IRQ_FORCED_THREADING
select ISA if EISA
select LOCK_MM_AND_FIND_VMA
select MODULES_USE_ELF_REL if MODULES
select MODULES_USE_ELF_RELA if MODULES && 64BIT
select PERF_USE_VMALLOC


@ -99,21 +99,13 @@ static void __do_page_fault(struct pt_regs *regs, unsigned long write,
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
retry:
mmap_read_lock(mm);
vma = find_vma(mm, address);
vma = lock_mm_and_find_vma(mm, address, regs);
if (!vma)
goto bad_area;
if (vma->vm_start <= address)
goto good_area;
if (!(vma->vm_flags & VM_GROWSDOWN))
goto bad_area;
if (expand_stack(vma, address))
goto bad_area;
goto bad_area_nosemaphore;
/*
* Ok, we have a good vm_area for this memory access, so
* we can handle it..
*/
good_area:
si_code = SEGV_ACCERR;
if (write) {


@ -16,6 +16,7 @@ config NIOS2
select HAVE_ARCH_TRACEHOOK
select HAVE_ARCH_KGDB
select IRQ_DOMAIN
select LOCK_MM_AND_FIND_VMA
select MODULES_USE_ELF_RELA
select OF
select OF_EARLY_FLATTREE


@ -86,27 +86,14 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause,
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
if (!mmap_read_trylock(mm)) {
if (!user_mode(regs) && !search_exception_tables(regs->ea))
goto bad_area_nosemaphore;
retry:
mmap_read_lock(mm);
}
vma = find_vma(mm, address);
vma = lock_mm_and_find_vma(mm, address, regs);
if (!vma)
goto bad_area;
if (vma->vm_start <= address)
goto good_area;
if (!(vma->vm_flags & VM_GROWSDOWN))
goto bad_area;
if (expand_stack(vma, address))
goto bad_area;
goto bad_area_nosemaphore;
/*
* Ok, we have a good vm_area for this memory access, so
* we can handle it..
*/
good_area:
code = SEGV_ACCERR;
switch (cause) {


@ -127,8 +127,9 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long address,
if (address + PAGE_SIZE < regs->sp)
goto bad_area;
}
if (expand_stack(vma, address))
goto bad_area;
vma = expand_stack(mm, address);
if (!vma)
goto bad_area_nosemaphore;
/*
* Ok, we have a good vm_area for this memory access, so


@ -288,15 +288,19 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
retry:
mmap_read_lock(mm);
vma = find_vma_prev(mm, address, &prev_vma);
if (!vma || address < vma->vm_start)
goto check_expansion;
if (!vma || address < vma->vm_start) {
if (!prev_vma || !(prev_vma->vm_flags & VM_GROWSUP))
goto bad_area;
vma = expand_stack(mm, address);
if (!vma)
goto bad_area_nosemaphore;
}
/*
* Ok, we have a good vm_area for this memory access. We still need to
* check the access permissions.
*/
good_area:
if ((vma->vm_flags & acc_type) != acc_type)
goto bad_area;
@ -342,17 +346,13 @@ void do_page_fault(struct pt_regs *regs, unsigned long code,
mmap_read_unlock(mm);
return;
check_expansion:
vma = prev_vma;
if (vma && (expand_stack(vma, address) == 0))
goto good_area;
/*
* Something tried to access memory that isn't in our memory map..
*/
bad_area:
mmap_read_unlock(mm);
bad_area_nosemaphore:
if (user_mode(regs)) {
int signo, si_code;
@ -444,7 +444,7 @@ handle_nadtlb_fault(struct pt_regs *regs)
{
unsigned long insn = regs->iir;
int breg, treg, xreg, val = 0;
struct vm_area_struct *vma, *prev_vma;
struct vm_area_struct *vma;
struct task_struct *tsk;
struct mm_struct *mm;
unsigned long address;
@ -480,7 +480,7 @@ handle_nadtlb_fault(struct pt_regs *regs)
/* Search for VMA */
address = regs->ior;
mmap_read_lock(mm);
vma = find_vma_prev(mm, address, &prev_vma);
vma = vma_lookup(mm, address);
mmap_read_unlock(mm);
/*
@ -489,7 +489,6 @@ handle_nadtlb_fault(struct pt_regs *regs)
*/
acc_type = (insn & 0x40) ? VM_WRITE : VM_READ;
if (vma
&& address >= vma->vm_start
&& (vma->vm_flags & acc_type) == acc_type)
val = 1;
}


@ -257,6 +257,7 @@ config PPC
select IRQ_DOMAIN
select IRQ_FORCED_THREADING
select KASAN_VMALLOC if KASAN && MODULES
select LOCK_MM_AND_FIND_VMA
select MMU_GATHER_PAGE_SIZE
select MMU_GATHER_RCU_TABLE_FREE
select MMU_GATHER_MERGE_VMAS


@ -410,6 +410,7 @@ static int kvmppc_memslot_page_merge(struct kvm *kvm,
ret = H_STATE;
break;
}
vma_start_write(vma);
/* Copy vm_flags to avoid partial modifications in ksm_madvise */
vm_flags = vma->vm_flags;
ret = ksm_madvise(vma, vma->vm_start, vma->vm_end,


@ -143,6 +143,7 @@ static int subpage_walk_pmd_entry(pmd_t *pmd, unsigned long addr,
static const struct mm_walk_ops subpage_walk_ops = {
.pmd_entry = subpage_walk_pmd_entry,
.walk_lock = PGWALK_WRLOCK_VERIFY,
};
static void subpage_mark_vma_nohuge(struct mm_struct *mm, unsigned long addr,


@ -33,19 +33,11 @@ int copro_handle_mm_fault(struct mm_struct *mm, unsigned long ea,
if (mm->pgd == NULL)
return -EFAULT;
mmap_read_lock(mm);
ret = -EFAULT;
vma = find_vma(mm, ea);
vma = lock_mm_and_find_vma(mm, ea, NULL);
if (!vma)
goto out_unlock;
if (ea < vma->vm_start) {
if (!(vma->vm_flags & VM_GROWSDOWN))
goto out_unlock;
if (expand_stack(vma, ea))
goto out_unlock;
}
return -EFAULT;
ret = -EFAULT;
is_write = dsisr & DSISR_ISSTORE;
if (is_write) {
if (!(vma->vm_flags & VM_WRITE))


@ -84,11 +84,6 @@ static int __bad_area(struct pt_regs *regs, unsigned long address, int si_code)
return __bad_area_nosemaphore(regs, address, si_code);
}
static noinline int bad_area(struct pt_regs *regs, unsigned long address)
{
return __bad_area(regs, address, SEGV_MAPERR);
}
static noinline int bad_access_pkey(struct pt_regs *regs, unsigned long address,
struct vm_area_struct *vma)
{
@ -474,7 +469,6 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
if (is_exec)
flags |= FAULT_FLAG_INSTRUCTION;
#ifdef CONFIG_PER_VMA_LOCK
if (!(flags & FAULT_FLAG_USER))
goto lock_mmap;
@ -494,7 +488,8 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
}
fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
vma_end_read(vma);
if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
vma_end_read(vma);
if (!(fault & VM_FAULT_RETRY)) {
count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
@ -506,7 +501,6 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
return user_mode(regs) ? 0 : SIGBUS;
lock_mmap:
#endif /* CONFIG_PER_VMA_LOCK */
/* When running in the kernel we expect faults to occur only to
* addresses in user space. All other faults represent errors in the
@ -515,40 +509,12 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
* we will deadlock attempting to validate the fault against the
* address space. Luckily the kernel only validly references user
* space from well defined areas of code, which are listed in the
* exceptions table.
*
* As the vast majority of faults will be valid we will only perform
* the source reference check when there is a possibility of a deadlock.
* Attempt to lock the address space, if we cannot we then validate the
* source. If this is invalid we can skip the address space check,
* thus avoiding the deadlock.
* exceptions table. lock_mm_and_find_vma() handles that logic.
*/
if (unlikely(!mmap_read_trylock(mm))) {
if (!is_user && !search_exception_tables(regs->nip))
return bad_area_nosemaphore(regs, address);
retry:
mmap_read_lock(mm);
} else {
/*
* The above down_read_trylock() might have succeeded in
* which case we'll have missed the might_sleep() from
* down_read():
*/
might_sleep();
}
vma = find_vma(mm, address);
vma = lock_mm_and_find_vma(mm, address, regs);
if (unlikely(!vma))
return bad_area(regs, address);
if (unlikely(vma->vm_start > address)) {
if (unlikely(!(vma->vm_flags & VM_GROWSDOWN)))
return bad_area(regs, address);
if (unlikely(expand_stack(vma, address)))
return bad_area(regs, address);
}
return bad_area_nosemaphore(regs, address);
if (unlikely(access_pkey_error(is_write, is_exec,
(error_code & DSISR_KEYFAULT), vma)))
@ -584,9 +550,7 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
mmap_read_unlock(current->mm);
#ifdef CONFIG_PER_VMA_LOCK
done:
#endif
if (unlikely(fault & VM_FAULT_ERROR))
return mm_fault_error(regs, address, fault);


@ -40,6 +40,7 @@ config RISCV
select ARCH_SUPPORTS_DEBUG_PAGEALLOC if MMU
select ARCH_SUPPORTS_HUGETLBFS if MMU
select ARCH_SUPPORTS_PAGE_TABLE_CHECK if MMU
select ARCH_SUPPORTS_PER_VMA_LOCK if MMU
select ARCH_USE_MEMTEST
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
@ -114,6 +115,7 @@ config RISCV
select HAVE_RSEQ
select IRQ_DOMAIN
select IRQ_FORCED_THREADING
select LOCK_MM_AND_FIND_VMA
select MODULES_USE_ELF_RELA if MODULES
select MODULE_SECTIONS if MODULES
select OF


@ -83,13 +83,13 @@ static inline void mm_fault_error(struct pt_regs *regs, unsigned long addr, vm_f
BUG();
}
static inline void bad_area(struct pt_regs *regs, struct mm_struct *mm, int code, unsigned long addr)
static inline void
bad_area_nosemaphore(struct pt_regs *regs, int code, unsigned long addr)
{
/*
* Something tried to access memory that isn't in our memory map.
* Fix it, but check if it's kernel or user first.
*/
mmap_read_unlock(mm);
/* User mode accesses just cause a SIGSEGV */
if (user_mode(regs)) {
do_trap(regs, SIGSEGV, code, addr);
@ -99,6 +99,15 @@ static inline void bad_area(struct pt_regs *regs, struct mm_struct *mm, int code
no_context(regs, addr);
}
static inline void
bad_area(struct pt_regs *regs, struct mm_struct *mm, int code,
unsigned long addr)
{
mmap_read_unlock(mm);
bad_area_nosemaphore(regs, code, addr);
}
static inline void vmalloc_fault(struct pt_regs *regs, int code, unsigned long addr)
{
pgd_t *pgd, *pgd_k;
@ -280,24 +289,40 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
flags |= FAULT_FLAG_WRITE;
else if (cause == EXC_INST_PAGE_FAULT)
flags |= FAULT_FLAG_INSTRUCTION;
if (!(flags & FAULT_FLAG_USER))
goto lock_mmap;
vma = lock_vma_under_rcu(mm, addr);
if (!vma)
goto lock_mmap;
if (unlikely(access_error(cause, vma))) {
vma_end_read(vma);
goto lock_mmap;
}
fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
vma_end_read(vma);
if (!(fault & VM_FAULT_RETRY)) {
count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
goto done;
}
count_vm_vma_lock_event(VMA_LOCK_RETRY);
if (fault_signal_pending(fault, regs)) {
if (!user_mode(regs))
no_context(regs, addr);
return;
}
lock_mmap:
retry:
mmap_read_lock(mm);
vma = find_vma(mm, addr);
vma = lock_mm_and_find_vma(mm, addr, regs);
if (unlikely(!vma)) {
tsk->thread.bad_cause = cause;
bad_area(regs, mm, code, addr);
return;
}
if (likely(vma->vm_start <= addr))
goto good_area;
if (unlikely(!(vma->vm_flags & VM_GROWSDOWN))) {
tsk->thread.bad_cause = cause;
bad_area(regs, mm, code, addr);
return;
}
if (unlikely(expand_stack(vma, addr))) {
tsk->thread.bad_cause = cause;
bad_area(regs, mm, code, addr);
bad_area_nosemaphore(regs, code, addr);
return;
}
@ -305,7 +330,6 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
* Ok, we have a good vm_area for this memory access, so
* we can handle it.
*/
good_area:
code = SEGV_ACCERR;
if (unlikely(access_error(cause, vma))) {
@ -346,6 +370,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
mmap_read_unlock(mm);
done:
if (unlikely(fault & VM_FAULT_ERROR)) {
tsk->thread.bad_cause = cause;
mm_fault_error(regs, addr, fault);


@ -102,6 +102,7 @@ static const struct mm_walk_ops pageattr_ops = {
.pmd_entry = pageattr_pmd_entry,
.pte_entry = pageattr_pte_entry,
.pte_hole = pageattr_pte_hole,
.walk_lock = PGWALK_RDLOCK,
};
static int __set_memory(unsigned long addr, int numpages, pgprot_t set_mask,


@ -403,7 +403,6 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
access = VM_WRITE;
if (access == VM_WRITE)
flags |= FAULT_FLAG_WRITE;
#ifdef CONFIG_PER_VMA_LOCK
if (!(flags & FAULT_FLAG_USER))
goto lock_mmap;
vma = lock_vma_under_rcu(mm, address);
@ -414,7 +413,8 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
goto lock_mmap;
}
fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
vma_end_read(vma);
if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
vma_end_read(vma);
if (!(fault & VM_FAULT_RETRY)) {
count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
goto out;
@ -426,7 +426,6 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
goto out;
}
lock_mmap:
#endif /* CONFIG_PER_VMA_LOCK */
mmap_read_lock(mm);
gmap = NULL;
@ -453,8 +452,9 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
if (unlikely(vma->vm_start > address)) {
if (!(vma->vm_flags & VM_GROWSDOWN))
goto out_up;
if (expand_stack(vma, address))
goto out_up;
vma = expand_stack(mm, address);
if (!vma)
goto out;
}
/*


@ -2510,6 +2510,7 @@ static int thp_split_walk_pmd_entry(pmd_t *pmd, unsigned long addr,
static const struct mm_walk_ops thp_split_walk_ops = {
.pmd_entry = thp_split_walk_pmd_entry,
.walk_lock = PGWALK_WRLOCK_VERIFY,
};
static inline void thp_split_mm(struct mm_struct *mm)
@ -2554,6 +2555,7 @@ static int __zap_zero_pages(pmd_t *pmd, unsigned long start,
static const struct mm_walk_ops zap_zero_walk_ops = {
.pmd_entry = __zap_zero_pages,
.walk_lock = PGWALK_WRLOCK,
};
/*
@ -2655,6 +2657,7 @@ static const struct mm_walk_ops enable_skey_walk_ops = {
.hugetlb_entry = __s390_enable_skey_hugetlb,
.pte_entry = __s390_enable_skey_pte,
.pmd_entry = __s390_enable_skey_pmd,
.walk_lock = PGWALK_WRLOCK,
};
int s390_enable_skey(void)
@ -2692,6 +2695,7 @@ static int __s390_reset_cmma(pte_t *pte, unsigned long addr,
static const struct mm_walk_ops reset_cmma_walk_ops = {
.pte_entry = __s390_reset_cmma,
.walk_lock = PGWALK_WRLOCK,
};
void s390_reset_cmma(struct mm_struct *mm)
@ -2728,6 +2732,7 @@ static int s390_gather_pages(pte_t *ptep, unsigned long addr,
static const struct mm_walk_ops gather_pages_ops = {
.pte_entry = s390_gather_pages,
.walk_lock = PGWALK_RDLOCK,
};
/*

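The .walk_lock member being added to each s390 mm_walk_ops instance comes from the "mm: enable page walking API to lock vmas during the walk" backport: it declares what locking the callbacks rely on, so the walk core can assert it and, where requested, write-lock each VMA before the callbacks run. A sketch of a hypothetical ops table (my_pte_entry and my_walk_ops are placeholder names, not part of the patch):

	static int my_pte_entry(pte_t *pte, unsigned long addr,
				unsigned long next, struct mm_walk *walk)
	{
		/* inspect or modify the PTE here */
		return 0;
	}

	static const struct mm_walk_ops my_walk_ops = {
		.pte_entry = my_pte_entry,
		/*
		 * PGWALK_RDLOCK:        read-only walk, mmap read lock suffices
		 * PGWALK_WRLOCK:        walk modifies mappings, write-lock each
		 *                       VMA before the callbacks see it
		 * PGWALK_WRLOCK_VERIFY: caller already write-locked the VMAs,
		 *                       only assert that this holds
		 */
		.walk_lock = PGWALK_RDLOCK,
	};

The riscv and s390 choices above follow that split: read-only walks (pageattr_ops, gather_pages_ops) take PGWALK_RDLOCK, walks that rewrite mappings take PGWALK_WRLOCK, and thp_split_walk_ops uses PGWALK_WRLOCK_VERIFY because its caller locks the VMAs itself.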

@ -56,6 +56,7 @@ config SUPERH
select HAVE_STACKPROTECTOR
select HAVE_SYSCALL_TRACEPOINTS
select IRQ_FORCED_THREADING
select LOCK_MM_AND_FIND_VMA
select MODULES_USE_ELF_RELA
select NEED_SG_DMA_LENGTH
select NO_DMA if !MMU && !DMA_COHERENT


@ -439,21 +439,9 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
}
retry:
mmap_read_lock(mm);
vma = find_vma(mm, address);
vma = lock_mm_and_find_vma(mm, address, regs);
if (unlikely(!vma)) {
bad_area(regs, error_code, address);
return;
}
if (likely(vma->vm_start <= address))
goto good_area;
if (unlikely(!(vma->vm_flags & VM_GROWSDOWN))) {
bad_area(regs, error_code, address);
return;
}
if (unlikely(expand_stack(vma, address))) {
bad_area(regs, error_code, address);
bad_area_nosemaphore(regs, error_code, address);
return;
}
@ -461,7 +449,6 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
* Ok, we have a good vm_area for this memory access, so
* we can handle it..
*/
good_area:
if (unlikely(access_error(error_code, vma))) {
bad_area_access_error(regs, error_code, address);
return;


@ -56,6 +56,7 @@ config SPARC32
select DMA_DIRECT_REMAP
select GENERIC_ATOMIC64
select HAVE_UID16
select LOCK_MM_AND_FIND_VMA
select OLD_SIGACTION
select ZONE_DMA


@ -143,28 +143,19 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
if (pagefault_disabled() || !mm)
goto no_context;
if (!from_user && address >= PAGE_OFFSET)
goto no_context;
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
retry:
mmap_read_lock(mm);
if (!from_user && address >= PAGE_OFFSET)
goto bad_area;
vma = find_vma(mm, address);
vma = lock_mm_and_find_vma(mm, address, regs);
if (!vma)
goto bad_area;
if (vma->vm_start <= address)
goto good_area;
if (!(vma->vm_flags & VM_GROWSDOWN))
goto bad_area;
if (expand_stack(vma, address))
goto bad_area;
goto bad_area_nosemaphore;
/*
* Ok, we have a good vm_area for this memory access, so
* we can handle it..
*/
good_area:
code = SEGV_ACCERR;
if (write) {
if (!(vma->vm_flags & VM_WRITE))
@ -318,17 +309,9 @@ static void force_user_fault(unsigned long address, int write)
code = SEGV_MAPERR;
mmap_read_lock(mm);
vma = find_vma(mm, address);
vma = lock_mm_and_find_vma(mm, address, regs);
if (!vma)
goto bad_area;
if (vma->vm_start <= address)
goto good_area;
if (!(vma->vm_flags & VM_GROWSDOWN))
goto bad_area;
if (expand_stack(vma, address))
goto bad_area;
good_area:
goto bad_area_nosemaphore;
code = SEGV_ACCERR;
if (write) {
if (!(vma->vm_flags & VM_WRITE))
@ -347,6 +330,7 @@ static void force_user_fault(unsigned long address, int write)
return;
bad_area:
mmap_read_unlock(mm);
bad_area_nosemaphore:
__do_fault_siginfo(code, SIGSEGV, tsk->thread.kregs, address);
return;


@ -383,8 +383,9 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs)
goto bad_area;
}
}
if (expand_stack(vma, address))
goto bad_area;
vma = expand_stack(mm, address);
if (!vma)
goto bad_area_nosemaphore;
/*
* Ok, we have a good vm_area for this memory access, so
* we can handle it..
@ -482,8 +483,9 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs)
* Fix it, but check if it's kernel or user first..
*/
bad_area:
insn = get_fault_insn(regs, insn);
mmap_read_unlock(mm);
bad_area_nosemaphore:
insn = get_fault_insn(regs, insn);
handle_kernel_fault:
do_kernel_fault(regs, si_code, fault_code, insn, address);


@ -47,14 +47,15 @@ int handle_page_fault(unsigned long address, unsigned long ip,
vma = find_vma(mm, address);
if (!vma)
goto out;
else if (vma->vm_start <= address)
if (vma->vm_start <= address)
goto good_area;
else if (!(vma->vm_flags & VM_GROWSDOWN))
if (!(vma->vm_flags & VM_GROWSDOWN))
goto out;
else if (is_user && !ARCH_IS_STACKGROW(address))
goto out;
else if (expand_stack(vma, address))
if (is_user && !ARCH_IS_STACKGROW(address))
goto out;
vma = expand_stack(mm, address);
if (!vma)
goto out_nosemaphore;
good_area:
*code_out = SEGV_ACCERR;


@ -272,6 +272,7 @@ config X86
select HAVE_GENERIC_VDSO
select HOTPLUG_SMT if SMP
select IRQ_FORCED_THREADING
select LOCK_MM_AND_FIND_VMA
select NEED_PER_CPU_EMBED_FIRST_CHUNK
select NEED_PER_CPU_PAGE_FIRST_CHUNK
select NEED_SG_DMA_LENGTH


@ -96,4 +96,6 @@ static inline bool intel_cpu_signatures_match(unsigned int s1, unsigned int p1,
extern u64 x86_read_arch_cap_msr(void);
extern struct cpumask cpus_stop_mask;
#endif /* _ASM_X86_CPU_H */


@ -132,6 +132,8 @@ void wbinvd_on_cpu(int cpu);
int wbinvd_on_all_cpus(void);
void cond_wakeup_cpu0(void);
void smp_kick_mwait_play_dead(void);
void native_smp_send_reschedule(int cpu);
void native_send_call_func_ipi(const struct cpumask *mask);
void native_send_call_func_single_ipi(int cpu);


@ -705,7 +705,7 @@ static enum ucode_state apply_microcode_amd(int cpu)
rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
/* need to apply patch? */
if (rev >= mc_amd->hdr.patch_id) {
if (rev > mc_amd->hdr.patch_id) {
ret = UCODE_OK;
goto out;
}


@ -744,15 +744,26 @@ bool xen_set_default_idle(void)
}
#endif
struct cpumask cpus_stop_mask;
void __noreturn stop_this_cpu(void *dummy)
{
struct cpuinfo_x86 *c = this_cpu_ptr(&cpu_info);
unsigned int cpu = smp_processor_id();
local_irq_disable();
/*
* Remove this CPU:
* Remove this CPU from the online mask and disable it
* unconditionally. This might be redundant in case that the reboot
* vector was handled late and stop_other_cpus() sent an NMI.
*
* According to SDM and APM NMIs can be accepted even after soft
* disabling the local APIC.
*/
set_cpu_online(smp_processor_id(), false);
set_cpu_online(cpu, false);
disable_local_APIC();
mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
mcheck_cpu_clear(c);
/*
* Use wbinvd on processors that support SME. This provides support
@ -766,8 +777,17 @@ void __noreturn stop_this_cpu(void *dummy)
* Test the CPUID bit directly because the machine might've cleared
* X86_FEATURE_SME due to cmdline options.
*/
if (cpuid_eax(0x8000001f) & BIT(0))
if (c->extended_cpuid_level >= 0x8000001f && (cpuid_eax(0x8000001f) & BIT(0)))
native_wbinvd();
/*
* This brings a cache line back and dirties it, but
* native_stop_other_cpus() will overwrite cpus_stop_mask after it
* observed that all CPUs reported stop. This write will invalidate
* the related cache line on this CPU.
*/
cpumask_clear_cpu(cpu, &cpus_stop_mask);
for (;;) {
/*
* Use native_halt() so that memory contents don't change


@ -21,12 +21,14 @@
#include <linux/interrupt.h>
#include <linux/cpu.h>
#include <linux/gfp.h>
#include <linux/kexec.h>
#include <asm/mtrr.h>
#include <asm/tlbflush.h>
#include <asm/mmu_context.h>
#include <asm/proto.h>
#include <asm/apic.h>
#include <asm/cpu.h>
#include <asm/idtentry.h>
#include <asm/nmi.h>
#include <asm/mce.h>
@ -146,34 +148,47 @@ static int register_stop_handler(void)
static void native_stop_other_cpus(int wait)
{
unsigned long flags;
unsigned long timeout;
unsigned int cpu = smp_processor_id();
unsigned long flags, timeout;
if (reboot_force)
return;
/*
* Use an own vector here because smp_call_function
* does lots of things not suitable in a panic situation.
*/
/* Only proceed if this is the first CPU to reach this code */
if (atomic_cmpxchg(&stopping_cpu, -1, cpu) != -1)
return;
/* For kexec, ensure that offline CPUs are out of MWAIT and in HLT */
if (kexec_in_progress)
smp_kick_mwait_play_dead();
/*
* We start by using the REBOOT_VECTOR irq.
* The irq is treated as a sync point to allow critical
* regions of code on other cpus to release their spin locks
* and re-enable irqs. Jumping straight to an NMI might
* accidentally cause deadlocks with further shutdown/panic
* code. By syncing, we give the cpus up to one second to
* finish their work before we force them off with the NMI.
* 1) Send an IPI on the reboot vector to all other CPUs.
*
* The other CPUs should react on it after leaving critical
* sections and re-enabling interrupts. They might still hold
* locks, but there is nothing which can be done about that.
*
* 2) Wait for all other CPUs to report that they reached the
* HLT loop in stop_this_cpu()
*
* 3) If #2 timed out send an NMI to the CPUs which did not
* yet report
*
* 4) Wait for all other CPUs to report that they reached the
* HLT loop in stop_this_cpu()
*
* #3 can obviously race against a CPU reaching the HLT loop late.
* That CPU will have reported already and the "have all CPUs
* reached HLT" condition will be true despite the fact that the
* other CPU is still handling the NMI. Again, there is no
* protection against that as "disabled" APICs still respond to
* NMIs.
*/
if (num_online_cpus() > 1) {
/* did someone beat us here? */
if (atomic_cmpxchg(&stopping_cpu, -1, safe_smp_processor_id()) != -1)
return;
/* sync above data before sending IRQ */
wmb();
cpumask_copy(&cpus_stop_mask, cpu_online_mask);
cpumask_clear_cpu(cpu, &cpus_stop_mask);
if (!cpumask_empty(&cpus_stop_mask)) {
apic_send_IPI_allbutself(REBOOT_VECTOR);
/*
@ -183,24 +198,22 @@ static void native_stop_other_cpus(int wait)
* CPUs reach shutdown state.
*/
timeout = USEC_PER_SEC;
while (num_online_cpus() > 1 && timeout--)
while (!cpumask_empty(&cpus_stop_mask) && timeout--)
udelay(1);
}
/* if the REBOOT_VECTOR didn't work, try with the NMI */
if (num_online_cpus() > 1) {
if (!cpumask_empty(&cpus_stop_mask)) {
/*
* If NMI IPI is enabled, try to register the stop handler
* and send the IPI. In any case try to wait for the other
* CPUs to stop.
*/
if (!smp_no_nmi_ipi && !register_stop_handler()) {
/* Sync above data before sending IRQ */
wmb();
pr_emerg("Shutting down cpus with NMI\n");
apic_send_IPI_allbutself(NMI_VECTOR);
for_each_cpu(cpu, &cpus_stop_mask)
apic->send_IPI(cpu, NMI_VECTOR);
}
/*
* Don't wait longer than 10 ms if the caller didn't
@ -208,7 +221,7 @@ static void native_stop_other_cpus(int wait)
* one or more CPUs do not reach shutdown state.
*/
timeout = USEC_PER_MSEC * 10;
while (num_online_cpus() > 1 && (wait || timeout--))
while (!cpumask_empty(&cpus_stop_mask) && (wait || timeout--))
udelay(1);
}
@ -216,6 +229,12 @@ static void native_stop_other_cpus(int wait)
disable_local_APIC();
mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
local_irq_restore(flags);
/*
* Ensure that the cpus_stop_mask cache lines are invalidated on
* the other CPUs. See comment vs. SME in stop_this_cpu().
*/
cpumask_clear(&cpus_stop_mask);
}
/*

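Taken together, the process.c and smp.c hunks replace num_online_cpus() polling with an explicit cpus_stop_mask handshake. A condensed sketch of both sides (declarations, barriers and the second wait loop elided; see the hunks for the full logic):

	/* initiator: native_stop_other_cpus() */
	cpumask_copy(&cpus_stop_mask, cpu_online_mask);
	cpumask_clear_cpu(smp_processor_id(), &cpus_stop_mask);

	apic_send_IPI_allbutself(REBOOT_VECTOR);		/* step 1 */
	timeout = USEC_PER_SEC;
	while (!cpumask_empty(&cpus_stop_mask) && timeout--)	/* step 2 */
		udelay(1);
	for_each_cpu(cpu, &cpus_stop_mask)			/* step 3: NMI stragglers */
		apic->send_IPI(cpu, NMI_VECTOR);
	/* step 4: wait again, then cpumask_clear() the mask */

	/* responder: stop_this_cpu() */
	set_cpu_online(cpu, false);
	disable_local_APIC();
	cpumask_clear_cpu(cpu, &cpus_stop_mask);	/* report "reached HLT" */
	for (;;)
		native_halt();

Tracking completion in a dedicated mask rather than the online count is what lets the kexec path below reuse the same reporting for CPUs parked in mwait_play_dead().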

@ -53,6 +53,7 @@
#include <linux/tboot.h>
#include <linux/gfp.h>
#include <linux/cpuidle.h>
#include <linux/kexec.h>
#include <linux/numa.h>
#include <linux/pgtable.h>
#include <linux/overflow.h>
@ -99,6 +100,20 @@ EXPORT_PER_CPU_SYMBOL(cpu_die_map);
DEFINE_PER_CPU_READ_MOSTLY(struct cpuinfo_x86, cpu_info);
EXPORT_PER_CPU_SYMBOL(cpu_info);
struct mwait_cpu_dead {
unsigned int control;
unsigned int status;
};
#define CPUDEAD_MWAIT_WAIT 0xDEADBEEF
#define CPUDEAD_MWAIT_KEXEC_HLT 0x4A17DEAD
/*
* Cache line aligned data for mwait_play_dead(). Separate on purpose so
* that it's unlikely to be touched by other CPUs.
*/
static DEFINE_PER_CPU_ALIGNED(struct mwait_cpu_dead, mwait_cpu_dead);
/* Logical package management. We might want to allocate that dynamically */
unsigned int __max_logical_packages __read_mostly;
EXPORT_SYMBOL(__max_logical_packages);
@ -155,6 +170,10 @@ static void smp_callin(void)
{
int cpuid;
/* Mop up eventual mwait_play_dead() wreckage */
this_cpu_write(mwait_cpu_dead.status, 0);
this_cpu_write(mwait_cpu_dead.control, 0);
/*
* If waken up by an INIT in an 82489DX configuration
* cpu_callout_mask guarantees we don't get here before
@ -1746,10 +1765,10 @@ EXPORT_SYMBOL_GPL(cond_wakeup_cpu0);
*/
static inline void mwait_play_dead(void)
{
struct mwait_cpu_dead *md = this_cpu_ptr(&mwait_cpu_dead);
unsigned int eax, ebx, ecx, edx;
unsigned int highest_cstate = 0;
unsigned int highest_subcstate = 0;
void *mwait_ptr;
int i;
if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
@ -1784,12 +1803,9 @@ static inline void mwait_play_dead(void)
(highest_subcstate - 1);
}
/*
* This should be a memory location in a cache line which is
* unlikely to be touched by other processors. The actual
* content is immaterial as it is not actually modified in any way.
*/
mwait_ptr = &current_thread_info()->flags;
/* Set up state for the kexec() hack below */
md->status = CPUDEAD_MWAIT_WAIT;
md->control = CPUDEAD_MWAIT_WAIT;
wbinvd();
@ -1802,16 +1818,63 @@ static inline void mwait_play_dead(void)
* case where we return around the loop.
*/
mb();
clflush(mwait_ptr);
clflush(md);
mb();
__monitor(mwait_ptr, 0, 0);
__monitor(md, 0, 0);
mb();
__mwait(eax, 0);
if (READ_ONCE(md->control) == CPUDEAD_MWAIT_KEXEC_HLT) {
/*
* Kexec is about to happen. Don't go back into mwait() as
* the kexec kernel might overwrite text and data including
* page tables and stack. So mwait() would resume when the
* monitor cache line is written to and then the CPU goes
* south due to overwritten text, page tables and stack.
*
* Note: This does _NOT_ protect against a stray MCE, NMI,
* SMI. They will resume execution at the instruction
* following the HLT instruction and run into the problem
* which this is trying to prevent.
*/
WRITE_ONCE(md->status, CPUDEAD_MWAIT_KEXEC_HLT);
while(1)
native_halt();
}
cond_wakeup_cpu0();
}
}
/*
* Kick all "offline" CPUs out of mwait on kexec(). See comment in
* mwait_play_dead().
*/
void smp_kick_mwait_play_dead(void)
{
u32 newstate = CPUDEAD_MWAIT_KEXEC_HLT;
struct mwait_cpu_dead *md;
unsigned int cpu, i;
for_each_cpu_andnot(cpu, cpu_present_mask, cpu_online_mask) {
md = per_cpu_ptr(&mwait_cpu_dead, cpu);
/* Does it sit in mwait_play_dead() ? */
if (READ_ONCE(md->status) != CPUDEAD_MWAIT_WAIT)
continue;
/* Wait up to 5ms */
for (i = 0; READ_ONCE(md->status) != newstate && i < 1000; i++) {
/* Bring it out of mwait */
WRITE_ONCE(md->control, newstate);
udelay(5);
}
if (READ_ONCE(md->status) != newstate)
pr_err_once("CPU%u is stuck in mwait_play_dead()\n", cpu);
}
}
void hlt_play_dead(void)
{
if (__this_cpu_read(cpu_info.x86) >= 4)

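The mwait_cpu_dead mailbox introduced above is a MONITOR/MWAIT wakeup channel: an offlined CPU arms the monitor on its own cache-line-aligned per-CPU slot, and smp_kick_mwait_play_dead() wakes it simply by writing that slot. Reduced to its essentials (C-state selection, fences and the cond_wakeup_cpu0() path omitted):

	/* parked CPU, inside mwait_play_dead() */
	md->status  = CPUDEAD_MWAIT_WAIT;
	md->control = CPUDEAD_MWAIT_WAIT;
	for (;;) {
		__monitor(md, 0, 0);	/* arm the monitor on the mailbox line */
		__mwait(eax, 0);	/* wakes when md is written (or spuriously) */
		if (READ_ONCE(md->control) == CPUDEAD_MWAIT_KEXEC_HLT) {
			WRITE_ONCE(md->status, CPUDEAD_MWAIT_KEXEC_HLT);
			while (1)
				native_halt();	/* mwait could resume into overwritten text */
		}
	}

	/* kexec path, per offline CPU, in smp_kick_mwait_play_dead() */
	WRITE_ONCE(md->control, CPUDEAD_MWAIT_KEXEC_HLT);	/* the wakeup write */
	/* then poll md->status for up to 5ms, warn if the CPU never reports */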

@ -901,12 +901,6 @@ __bad_area(struct pt_regs *regs, unsigned long error_code,
__bad_area_nosemaphore(regs, error_code, address, pkey, si_code);
}
static noinline void
bad_area(struct pt_regs *regs, unsigned long error_code, unsigned long address)
{
__bad_area(regs, error_code, address, 0, SEGV_MAPERR);
}
static inline bool bad_area_access_from_pkeys(unsigned long error_code,
struct vm_area_struct *vma)
{
@ -1355,7 +1349,6 @@ void do_user_addr_fault(struct pt_regs *regs,
}
#endif
#ifdef CONFIG_PER_VMA_LOCK
if (!(flags & FAULT_FLAG_USER))
goto lock_mmap;
@ -1368,7 +1361,8 @@ void do_user_addr_fault(struct pt_regs *regs,
goto lock_mmap;
}
fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
vma_end_read(vma);
if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
vma_end_read(vma);
if (!(fault & VM_FAULT_RETRY)) {
count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
@ -1385,53 +1379,11 @@ void do_user_addr_fault(struct pt_regs *regs,
return;
}
lock_mmap:
#endif /* CONFIG_PER_VMA_LOCK */
/*
* Kernel-mode access to the user address space should only occur
* on well-defined single instructions listed in the exception
* tables. But, an erroneous kernel fault occurring outside one of
* those areas which also holds mmap_lock might deadlock attempting
* to validate the fault against the address space.
*
* Only do the expensive exception table search when we might be at
* risk of a deadlock. This happens if we
* 1. Failed to acquire mmap_lock, and
* 2. The access did not originate in userspace.
*/
if (unlikely(!mmap_read_trylock(mm))) {
if (!user_mode(regs) && !search_exception_tables(regs->ip)) {
/*
* Fault from code in kernel from
* which we do not expect faults.
*/
bad_area_nosemaphore(regs, error_code, address);
return;
}
retry:
mmap_read_lock(mm);
} else {
/*
* The above down_read_trylock() might have succeeded in
* which case we'll have missed the might_sleep() from
* down_read():
*/
might_sleep();
}
vma = find_vma(mm, address);
vma = lock_mm_and_find_vma(mm, address, regs);
if (unlikely(!vma)) {
bad_area(regs, error_code, address);
return;
}
if (likely(vma->vm_start <= address))
goto good_area;
if (unlikely(!(vma->vm_flags & VM_GROWSDOWN))) {
bad_area(regs, error_code, address);
return;
}
if (unlikely(expand_stack(vma, address))) {
bad_area(regs, error_code, address);
bad_area_nosemaphore(regs, error_code, address);
return;
}
@ -1439,7 +1391,6 @@ void do_user_addr_fault(struct pt_regs *regs,
* Ok, we have a good vm_area for this memory access, so
* we can handle it..
*/
good_area:
if (unlikely(access_error(error_code, vma))) {
bad_area_access_error(regs, error_code, address, vma);
return;
@ -1487,9 +1438,7 @@ void do_user_addr_fault(struct pt_regs *regs,
}
mmap_read_unlock(mm);
#ifdef CONFIG_PER_VMA_LOCK
done:
#endif
if (likely(!(fault & VM_FAULT_ERROR)))
return;


@ -49,6 +49,7 @@ config XTENSA
select HAVE_SYSCALL_TRACEPOINTS
select HAVE_VIRT_CPU_ACCOUNTING_GEN
select IRQ_DOMAIN
select LOCK_MM_AND_FIND_VMA
select MODULES_USE_ELF_RELA
select PERF_USE_VMALLOC
select TRACE_IRQFLAGS_SUPPORT


@ -130,23 +130,14 @@ void do_page_fault(struct pt_regs *regs)
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
retry:
mmap_read_lock(mm);
vma = find_vma(mm, address);
vma = lock_mm_and_find_vma(mm, address, regs);
if (!vma)
goto bad_area;
if (vma->vm_start <= address)
goto good_area;
if (!(vma->vm_flags & VM_GROWSDOWN))
goto bad_area;
if (expand_stack(vma, address))
goto bad_area;
goto bad_area_nosemaphore;
/* Ok, we have a good vm_area for this memory access, so
* we can handle it..
*/
good_area:
code = SEGV_ACCERR;
if (is_write) {
@ -205,6 +196,7 @@ void do_page_fault(struct pt_regs *regs)
*/
bad_area:
mmap_read_unlock(mm);
bad_area_nosemaphore:
if (user_mode(regs)) {
current->thread.bad_vaddr = address;
current->thread.error_code = is_write;


@ -79,7 +79,18 @@ int blk_crypto_profile_init(struct blk_crypto_profile *profile,
unsigned int slot_hashtable_size;
memset(profile, 0, sizeof(*profile));
/*
* profile->lock of an underlying device can nest inside profile->lock
* of a device-mapper device, so use a dynamic lock class to avoid
* false-positive lockdep reports.
*/
#ifdef CONFIG_LOCKDEP
lockdep_register_key(&profile->lockdep_key);
__init_rwsem(&profile->lock, "&profile->lock", &profile->lockdep_key);
#else
init_rwsem(&profile->lock);
#endif
if (num_slots == 0)
return 0;
@ -89,7 +100,7 @@ int blk_crypto_profile_init(struct blk_crypto_profile *profile,
profile->slots = kvcalloc(num_slots, sizeof(profile->slots[0]),
GFP_KERNEL);
if (!profile->slots)
return -ENOMEM;
goto err_destroy;
profile->num_slots = num_slots;
@ -443,6 +454,9 @@ void blk_crypto_profile_destroy(struct blk_crypto_profile *profile)
{
if (!profile)
return;
#ifdef CONFIG_LOCKDEP
lockdep_unregister_key(&profile->lockdep_key);
#endif
kvfree(profile->slot_hashtable);
kvfree_sensitive(profile->slots,
sizeof(profile->slots[0]) * profile->num_slots);

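The blk-crypto hunk is the standard recipe whenever instances of the same lock type may nest, here a device-mapper profile's lock taken inside an underlying device's: register a per-instance lockdep key so each rwsem gets its own class. The same pattern as a generic sketch, with a hypothetical struct my_obj standing in for blk_crypto_profile:

	struct my_obj {
		struct rw_semaphore lock;
	#ifdef CONFIG_LOCKDEP
		struct lock_class_key lockdep_key;
	#endif
	};

	static void my_obj_init(struct my_obj *obj)
	{
	#ifdef CONFIG_LOCKDEP
		/* one class per instance: nesting obj A's lock inside obj B's
		 * no longer looks recursive to lockdep */
		lockdep_register_key(&obj->lockdep_key);
		__init_rwsem(&obj->lock, "&obj->lock", &obj->lockdep_key);
	#else
		init_rwsem(&obj->lock);
	#endif
	}

	static void my_obj_destroy(struct my_obj *obj)
	{
	#ifdef CONFIG_LOCKDEP
		lockdep_unregister_key(&obj->lockdep_key);
	#endif
	}

The matching lockdep_unregister_key() in blk_crypto_profile_destroy(), plus the new err_destroy path, keeps the key's lifetime aligned with the lock's, which lockdep requires.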

@ -4,7 +4,6 @@ POST_DEFCONFIG_CMDS="update_config"
function update_config() {
${KERNEL_DIR}/scripts/config --file ${OUT_DIR}/.config \
-e UNWINDER_FRAME_POINTER \
-d WERROR \
-d SAMPLES \
-d BPFILTER \
-e RANDSTRUCT_NONE \


@ -1,5 +0,0 @@
. ${ROOT_DIR}/${KERNEL_DIR}/build.config.gki.aarch64
DEFCONFIG=16k_gki_defconfig
PRE_DEFCONFIG_CMDS="mkdir -p \${OUT_DIR}/arch/arm64/configs/ && cat ${ROOT_DIR}/${KERNEL_DIR}/arch/arm64/configs/gki_defconfig ${ROOT_DIR}/${KERNEL_DIR}/arch/arm64/configs/16k_gki.fragment > \${OUT_DIR}/arch/arm64/configs/${DEFCONFIG};"
POST_DEFCONFIG_CMDS=""


@ -88,6 +88,8 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_do_send_sig_info);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_mutex_wait_start);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_mutex_wait_finish);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_mutex_init);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_task_blocks_on_rtmutex);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_rtmutex_waiter_prio);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_rtmutex_wait_start);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_rtmutex_wait_finish);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_mutex_opt_spin_start);
@ -313,3 +315,8 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_rmqueue_smallest_bypass);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_free_one_page_bypass);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_regmap_update);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_enable_thermal_genl_check);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_check_folio_look_around_ref);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_look_around);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_look_around_migrate_folio);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_test_clear_look_around_ref);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_tune_scan_type);


@ -66,18 +66,36 @@ struct dma_fence *__dma_fence_unwrap_merge(unsigned int num_fences,
{
struct dma_fence_array *result;
struct dma_fence *tmp, **array;
ktime_t timestamp;
unsigned int i;
size_t count;
count = 0;
timestamp = ns_to_ktime(0);
for (i = 0; i < num_fences; ++i) {
dma_fence_unwrap_for_each(tmp, &iter[i], fences[i])
if (!dma_fence_is_signaled(tmp))
dma_fence_unwrap_for_each(tmp, &iter[i], fences[i]) {
if (!dma_fence_is_signaled(tmp)) {
++count;
} else if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT,
&tmp->flags)) {
if (ktime_after(tmp->timestamp, timestamp))
timestamp = tmp->timestamp;
} else {
/*
* Use the current time if the fence is
* currently signaling.
*/
timestamp = ktime_get();
}
}
}
/*
* If we couldn't find a pending fence just return a private signaled
* fence with the timestamp of the last signaled one.
*/
if (count == 0)
return dma_fence_get_stub();
return dma_fence_allocate_private_stub(timestamp);
array = kmalloc_array(count, sizeof(*array), GFP_KERNEL);
if (!array)
@ -138,7 +156,7 @@ struct dma_fence *__dma_fence_unwrap_merge(unsigned int num_fences,
} while (tmp);
if (count == 0) {
tmp = dma_fence_get_stub();
tmp = dma_fence_allocate_private_stub(ktime_get());
goto return_tmp;
}


@ -150,16 +150,17 @@ EXPORT_SYMBOL(dma_fence_get_stub);
/**
* dma_fence_allocate_private_stub - return a private, signaled fence
* @timestamp: timestamp when the fence was signaled
*
* Return a newly allocated and signaled stub fence.
*/
struct dma_fence *dma_fence_allocate_private_stub(void)
struct dma_fence *dma_fence_allocate_private_stub(ktime_t timestamp)
{
struct dma_fence *fence;
fence = kzalloc(sizeof(*fence), GFP_KERNEL);
if (fence == NULL)
return ERR_PTR(-ENOMEM);
return NULL;
dma_fence_init(fence,
&dma_fence_stub_ops,
@ -169,7 +170,7 @@ struct dma_fence *dma_fence_allocate_private_stub(void)
set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
&fence->flags);
dma_fence_signal(fence);
dma_fence_signal_timestamp(fence, timestamp);
return fence;
}


@ -353,10 +353,10 @@ EXPORT_SYMBOL(drm_syncobj_replace_fence);
*/
static int drm_syncobj_assign_null_handle(struct drm_syncobj *syncobj)
{
struct dma_fence *fence = dma_fence_allocate_private_stub();
struct dma_fence *fence = dma_fence_allocate_private_stub(ktime_get());
if (IS_ERR(fence))
return PTR_ERR(fence);
if (!fence)
return -ENOMEM;
drm_syncobj_replace_fence(syncobj, fence);
dma_fence_put(fence);


@ -370,6 +370,7 @@ void ttm_tt_unpopulate(struct ttm_device *bdev, struct ttm_tt *ttm)
ttm->page_flags &= ~TTM_TT_FLAG_PRIV_POPULATED;
}
EXPORT_SYMBOL_GPL(ttm_tt_unpopulate);
#ifdef CONFIG_DEBUG_FS


@ -4348,7 +4348,7 @@ static const struct hid_device_id hidpp_devices[] = {
{ /* wireless touchpad T651 */
HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH,
USB_DEVICE_ID_LOGITECH_T651),
.driver_data = HIDPP_QUIRK_CLASS_WTP },
.driver_data = HIDPP_QUIRK_CLASS_WTP | HIDPP_QUIRK_DELAYED_INIT },
{ /* Mouse Logitech Anywhere MX */
LDJ_DEVICE(0x1017), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_1P0 },
{ /* Mouse logitech M560 */


@ -272,7 +272,12 @@ static int hidraw_open(struct inode *inode, struct file *file)
goto out;
}
down_read(&minors_rwsem);
/*
* Technically not writing to the hidraw_table but a write lock is
* required to protect the device refcount. This is symmetrical to
* hidraw_release().
*/
down_write(&minors_rwsem);
if (!hidraw_table[minor] || !hidraw_table[minor]->exist) {
err = -ENODEV;
goto out_unlock;
@ -301,7 +306,7 @@ static int hidraw_open(struct inode *inode, struct file *file)
spin_unlock_irqrestore(&hidraw_table[minor]->list_lock, flags);
file->private_data = list;
out_unlock:
up_read(&minors_rwsem);
up_write(&minors_rwsem);
out:
if (err < 0)
kfree(list);


@ -485,8 +485,8 @@ static void do_fault(struct work_struct *work)
flags |= FAULT_FLAG_REMOTE;
mmap_read_lock(mm);
vma = find_extend_vma(mm, address);
if (!vma || address < vma->vm_start)
vma = vma_lookup(mm, address);
if (!vma)
/* failed to get a vma in the right range */
goto out;


@ -203,7 +203,7 @@ iommu_sva_handle_iopf(struct iommu_fault *fault, void *data)
mmap_read_lock(mm);
vma = find_extend_vma(mm, prm->addr);
vma = vma_lookup(mm, prm->addr);
if (!vma)
/* Unmapped area */
goto out_put_mm;


@ -51,6 +51,8 @@ struct redist_region {
static struct gic_chip_data_v3 gic_data __read_mostly;
static DEFINE_STATIC_KEY_TRUE(supports_deactivate_key);
static DEFINE_STATIC_KEY_FALSE(gic_arm64_2941627_erratum);
#define GIC_ID_NR (1U << GICD_TYPER_ID_BITS(gic_data.rdists.gicd_typer))
#define GIC_LINE_NR min(GICD_TYPER_SPIS(gic_data.rdists.gicd_typer), 1020U)
#define GIC_ESPI_NR GICD_TYPER_ESPIS(gic_data.rdists.gicd_typer)
@ -548,10 +550,39 @@ static void gic_irq_nmi_teardown(struct irq_data *d)
gic_irq_set_prio(d, GICD_INT_DEF_PRI);
}
static bool gic_arm64_erratum_2941627_needed(struct irq_data *d)
{
enum gic_intid_range range;
if (!static_branch_unlikely(&gic_arm64_2941627_erratum))
return false;
range = get_intid_range(d);
/*
* The workaround is needed if the IRQ is an SPI and
* the target cpu is different from the one we are
* executing on.
*/
return (range == SPI_RANGE || range == ESPI_RANGE) &&
!cpumask_test_cpu(raw_smp_processor_id(),
irq_data_get_effective_affinity_mask(d));
}
static void gic_eoi_irq(struct irq_data *d)
{
write_gicreg(gic_irq(d), ICC_EOIR1_EL1);
isb();
if (gic_arm64_erratum_2941627_needed(d)) {
/*
* Make sure the GIC stream deactivate packet
* issued by ICC_EOIR1_EL1 has completed before
* deactivating through GICD_IACTIVER.
*/
dsb(sy);
gic_poke_irq(d, GICD_ICACTIVER);
}
}
static void gic_eoimode1_eoi_irq(struct irq_data *d)
@ -562,7 +593,11 @@ static void gic_eoimode1_eoi_irq(struct irq_data *d)
*/
if (gic_irq(d) >= 8192 || irqd_is_forwarded_to_vcpu(d))
return;
gic_write_dir(gic_irq(d));
if (!gic_arm64_erratum_2941627_needed(d))
gic_write_dir(gic_irq(d));
else
gic_poke_irq(d, GICD_ICACTIVER);
}
static int gic_set_type(struct irq_data *d, unsigned int type)
@ -1757,6 +1792,12 @@ static bool gic_enable_quirk_hip06_07(void *data)
return false;
}
static bool gic_enable_quirk_arm64_2941627(void *data)
{
static_branch_enable(&gic_arm64_2941627_erratum);
return true;
}
static const struct gic_quirk gic_quirks[] = {
{
.desc = "GICv3: Qualcomm MSM8996 broken firmware",
@ -1793,6 +1834,25 @@ static const struct gic_quirk gic_quirks[] = {
.mask = 0xe8f00fff,
.init = gic_enable_quirk_cavium_38539,
},
{
/*
* GIC-700: 2941627 workaround - IP variant [0,1]
*
*/
.desc = "GICv3: ARM64 erratum 2941627",
.iidr = 0x0400043b,
.mask = 0xff0e0fff,
.init = gic_enable_quirk_arm64_2941627,
},
{
/*
* GIC-700: 2941627 workaround - IP variant [2]
*/
.desc = "GICv3: ARM64 erratum 2941627",
.iidr = 0x0402043b,
.mask = 0xff0f0fff,
.init = gic_enable_quirk_arm64_2941627,
},
{
}
};


@ -252,12 +252,16 @@ const struct v4l2_format_info *v4l2_format_info(u32 format)
{ .format = V4L2_PIX_FMT_RGB565, .pixel_enc = V4L2_PIXEL_ENC_RGB, .mem_planes = 1, .comp_planes = 1, .bpp = { 2, 0, 0, 0 }, .hdiv = 1, .vdiv = 1 },
{ .format = V4L2_PIX_FMT_RGB555, .pixel_enc = V4L2_PIXEL_ENC_RGB, .mem_planes = 1, .comp_planes = 1, .bpp = { 2, 0, 0, 0 }, .hdiv = 1, .vdiv = 1 },
{ .format = V4L2_PIX_FMT_BGR666, .pixel_enc = V4L2_PIXEL_ENC_RGB, .mem_planes = 1, .comp_planes = 1, .bpp = { 4, 0, 0, 0 }, .hdiv = 1, .vdiv = 1 },
{ .format = V4L2_PIX_FMT_BGR48_12, .pixel_enc = V4L2_PIXEL_ENC_RGB, .mem_planes = 1, .comp_planes = 1, .bpp = { 6, 0, 0, 0 }, .hdiv = 1, .vdiv = 1 },
{ .format = V4L2_PIX_FMT_ABGR64_12, .pixel_enc = V4L2_PIXEL_ENC_RGB, .mem_planes = 1, .comp_planes = 1, .bpp = { 8, 0, 0, 0 }, .hdiv = 1, .vdiv = 1 },
/* YUV packed formats */
{ .format = V4L2_PIX_FMT_YUYV, .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 1, .comp_planes = 1, .bpp = { 2, 0, 0, 0 }, .hdiv = 2, .vdiv = 1 },
{ .format = V4L2_PIX_FMT_YVYU, .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 1, .comp_planes = 1, .bpp = { 2, 0, 0, 0 }, .hdiv = 2, .vdiv = 1 },
{ .format = V4L2_PIX_FMT_UYVY, .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 1, .comp_planes = 1, .bpp = { 2, 0, 0, 0 }, .hdiv = 2, .vdiv = 1 },
{ .format = V4L2_PIX_FMT_VYUY, .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 1, .comp_planes = 1, .bpp = { 2, 0, 0, 0 }, .hdiv = 2, .vdiv = 1 },
{ .format = V4L2_PIX_FMT_Y212, .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 1, .comp_planes = 1, .bpp = { 4, 0, 0, 0 }, .hdiv = 2, .vdiv = 1 },
{ .format = V4L2_PIX_FMT_YUV48_12, .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 1, .comp_planes = 1, .bpp = { 6, 0, 0, 0 }, .hdiv = 1, .vdiv = 1 },
/* YUV planar formats */
{ .format = V4L2_PIX_FMT_NV12, .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 1, .comp_planes = 2, .bpp = { 1, 2, 0, 0 }, .hdiv = 2, .vdiv = 2 },
@ -267,6 +271,7 @@ const struct v4l2_format_info *v4l2_format_info(u32 format)
{ .format = V4L2_PIX_FMT_NV24, .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 1, .comp_planes = 2, .bpp = { 1, 2, 0, 0 }, .hdiv = 1, .vdiv = 1 },
{ .format = V4L2_PIX_FMT_NV42, .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 1, .comp_planes = 2, .bpp = { 1, 2, 0, 0 }, .hdiv = 1, .vdiv = 1 },
{ .format = V4L2_PIX_FMT_P010, .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 1, .comp_planes = 2, .bpp = { 2, 2, 0, 0 }, .hdiv = 2, .vdiv = 1 },
{ .format = V4L2_PIX_FMT_P012, .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 1, .comp_planes = 2, .bpp = { 2, 4, 0, 0 }, .hdiv = 2, .vdiv = 2 },
{ .format = V4L2_PIX_FMT_YUV410, .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 1, .comp_planes = 3, .bpp = { 1, 1, 1, 0 }, .hdiv = 4, .vdiv = 4 },
{ .format = V4L2_PIX_FMT_YVU410, .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 1, .comp_planes = 3, .bpp = { 1, 1, 1, 0 }, .hdiv = 4, .vdiv = 4 },
@ -292,6 +297,7 @@ const struct v4l2_format_info *v4l2_format_info(u32 format)
{ .format = V4L2_PIX_FMT_NV21M, .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 2, .comp_planes = 2, .bpp = { 1, 2, 0, 0 }, .hdiv = 2, .vdiv = 2 },
{ .format = V4L2_PIX_FMT_NV16M, .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 2, .comp_planes = 2, .bpp = { 1, 2, 0, 0 }, .hdiv = 2, .vdiv = 1 },
{ .format = V4L2_PIX_FMT_NV61M, .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 2, .comp_planes = 2, .bpp = { 1, 2, 0, 0 }, .hdiv = 2, .vdiv = 1 },
{ .format = V4L2_PIX_FMT_P012M, .pixel_enc = V4L2_PIXEL_ENC_YUV, .mem_planes = 2, .comp_planes = 2, .bpp = { 2, 4, 0, 0 }, .hdiv = 2, .vdiv = 2 },
/* Bayer RGB formats */
{ .format = V4L2_PIX_FMT_SBGGR8, .pixel_enc = V4L2_PIXEL_ENC_BAYER, .mem_planes = 1, .comp_planes = 1, .bpp = { 1, 0, 0, 0 }, .hdiv = 1, .vdiv = 1 },


@ -1304,11 +1304,14 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
case V4L2_PIX_FMT_BGRX32: descr = "32-bit XBGR 8-8-8-8"; break;
case V4L2_PIX_FMT_RGBA32: descr = "32-bit RGBA 8-8-8-8"; break;
case V4L2_PIX_FMT_RGBX32: descr = "32-bit RGBX 8-8-8-8"; break;
case V4L2_PIX_FMT_BGR48_12: descr = "12-bit Depth BGR"; break;
case V4L2_PIX_FMT_ABGR64_12: descr = "12-bit Depth BGRA"; break;
case V4L2_PIX_FMT_GREY: descr = "8-bit Greyscale"; break;
case V4L2_PIX_FMT_Y4: descr = "4-bit Greyscale"; break;
case V4L2_PIX_FMT_Y6: descr = "6-bit Greyscale"; break;
case V4L2_PIX_FMT_Y10: descr = "10-bit Greyscale"; break;
case V4L2_PIX_FMT_Y12: descr = "12-bit Greyscale"; break;
case V4L2_PIX_FMT_Y012: descr = "12-bit Greyscale (bits 15-4)"; break;
case V4L2_PIX_FMT_Y14: descr = "14-bit Greyscale"; break;
case V4L2_PIX_FMT_Y16: descr = "16-bit Greyscale"; break;
case V4L2_PIX_FMT_Y16_BE: descr = "16-bit Greyscale BE"; break;
@ -1347,6 +1350,7 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
case V4L2_PIX_FMT_YUV420: descr = "Planar YUV 4:2:0"; break;
case V4L2_PIX_FMT_HI240: descr = "8-bit Dithered RGB (BTTV)"; break;
case V4L2_PIX_FMT_M420: descr = "YUV 4:2:0 (M420)"; break;
case V4L2_PIX_FMT_YUV48_12: descr = "12-bit YUV 4:4:4 Packed"; break;
case V4L2_PIX_FMT_NV12: descr = "Y/UV 4:2:0"; break;
case V4L2_PIX_FMT_NV21: descr = "Y/VU 4:2:0"; break;
case V4L2_PIX_FMT_NV16: descr = "Y/UV 4:2:2"; break;
@ -1354,6 +1358,7 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
case V4L2_PIX_FMT_NV24: descr = "Y/UV 4:4:4"; break;
case V4L2_PIX_FMT_NV42: descr = "Y/VU 4:4:4"; break;
case V4L2_PIX_FMT_P010: descr = "10-bit Y/UV 4:2:0"; break;
case V4L2_PIX_FMT_P012: descr = "12-bit Y/UV 4:2:0"; break;
case V4L2_PIX_FMT_NV12_4L4: descr = "Y/UV 4:2:0 (4x4 Linear)"; break;
case V4L2_PIX_FMT_NV12_16L16: descr = "Y/UV 4:2:0 (16x16 Linear)"; break;
case V4L2_PIX_FMT_NV12_32L32: descr = "Y/UV 4:2:0 (32x32 Linear)"; break;
@ -1364,6 +1369,7 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
case V4L2_PIX_FMT_NV61M: descr = "Y/VU 4:2:2 (N-C)"; break;
case V4L2_PIX_FMT_NV12MT: descr = "Y/UV 4:2:0 (64x32 MB, N-C)"; break;
case V4L2_PIX_FMT_NV12MT_16X16: descr = "Y/UV 4:2:0 (16x16 MB, N-C)"; break;
case V4L2_PIX_FMT_P012M: descr = "12-bit Y/UV 4:2:0 (N-C)"; break;
case V4L2_PIX_FMT_YUV420M: descr = "Planar YUV 4:2:0 (N-C)"; break;
case V4L2_PIX_FMT_YVU420M: descr = "Planar YVU 4:2:0 (N-C)"; break;
case V4L2_PIX_FMT_YUV422M: descr = "Planar YUV 4:2:2 (N-C)"; break;
@ -1448,6 +1454,9 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
case V4L2_PIX_FMT_NV12M_8L128: descr = "NV12M (8x128 Linear)"; break;
case V4L2_PIX_FMT_NV12_10BE_8L128: descr = "10-bit NV12 (8x128 Linear, BE)"; break;
case V4L2_PIX_FMT_NV12M_10BE_8L128: descr = "10-bit NV12M (8x128 Linear, BE)"; break;
case V4L2_PIX_FMT_Y210: descr = "10-bit YUYV Packed"; break;
case V4L2_PIX_FMT_Y212: descr = "12-bit YUYV Packed"; break;
case V4L2_PIX_FMT_Y216: descr = "16-bit YUYV Packed"; break;
default:
/* Compressed formats */


@ -629,7 +629,6 @@ static const struct proc_ops uid_procstat_fops = {
};
struct update_stats_work {
struct work_struct work;
uid_t uid;
#ifdef CONFIG_UID_SYS_STATS_DEBUG
struct task_struct *task;
@ -637,38 +636,46 @@ struct update_stats_work {
struct task_io_accounting ioac;
u64 utime;
u64 stime;
struct update_stats_work *next;
};
static atomic_long_t work_usw;
static void update_stats_workfn(struct work_struct *work)
{
struct update_stats_work *usw =
container_of(work, struct update_stats_work, work);
struct update_stats_work *usw;
struct uid_entry *uid_entry;
struct task_entry *task_entry __maybe_unused;
rt_mutex_lock(&uid_lock);
uid_entry = find_uid_entry(usw->uid);
if (!uid_entry)
goto exit;
while ((usw = (struct update_stats_work *)atomic_long_read(&work_usw))) {
if (atomic_long_cmpxchg(&work_usw, (long)usw, (long)(usw->next)) != (long)usw)
continue;
uid_entry->utime += usw->utime;
uid_entry->stime += usw->stime;
uid_entry = find_uid_entry(usw->uid);
if (!uid_entry)
goto next;
uid_entry->utime += usw->utime;
uid_entry->stime += usw->stime;
#ifdef CONFIG_UID_SYS_STATS_DEBUG
task_entry = find_task_entry(uid_entry, usw->task);
if (!task_entry)
goto exit;
add_uid_tasks_io_stats(task_entry, &usw->ioac,
UID_STATE_DEAD_TASKS);
task_entry = find_task_entry(uid_entry, usw->task);
if (!task_entry)
goto next;
add_uid_tasks_io_stats(task_entry, &usw->ioac,
UID_STATE_DEAD_TASKS);
#endif
__add_uid_io_stats(uid_entry, &usw->ioac, UID_STATE_DEAD_TASKS);
exit:
__add_uid_io_stats(uid_entry, &usw->ioac, UID_STATE_DEAD_TASKS);
next:
#ifdef CONFIG_UID_SYS_STATS_DEBUG
put_task_struct(usw->task);
#endif
kfree(usw);
}
rt_mutex_unlock(&uid_lock);
#ifdef CONFIG_UID_SYS_STATS_DEBUG
put_task_struct(usw->task);
#endif
kfree(usw);
}
static DECLARE_WORK(update_stats_work, update_stats_workfn);
static int process_notifier(struct notifier_block *self,
unsigned long cmd, void *v)
@ -687,7 +694,6 @@ static int process_notifier(struct notifier_block *self,
usw = kmalloc(sizeof(struct update_stats_work), GFP_KERNEL);
if (usw) {
INIT_WORK(&usw->work, update_stats_workfn);
usw->uid = uid;
#ifdef CONFIG_UID_SYS_STATS_DEBUG
usw->task = get_task_struct(task);
@ -698,7 +704,9 @@ static int process_notifier(struct notifier_block *self,
*/
usw->ioac = task->ioac;
task_cputime_adjusted(task, &usw->utime, &usw->stime);
schedule_work(&usw->work);
usw->next = (struct update_stats_work *)atomic_long_xchg(&work_usw,
(long)usw);
schedule_work(&update_stats_work);
}
return NOTIFY_OK;
}

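The uid_sys_stats rework above replaces one work item per exiting task with a single static work item draining a lock-free list: producers push with atomic_long_xchg(), and the lone worker pops with atomic_long_cmpxchg(). The same pattern as a standalone C11 sketch (all names hypothetical; it is only safe because there is exactly one consumer):

#include <stdatomic.h>
#include <stdlib.h>

struct node {
	struct node *next;
	/* payload fields would go here */
};

static _Atomic(struct node *) head;

/* producer, cf. process_notifier(): publish with one atomic swap */
static void push(struct node *n)
{
	n->next = atomic_exchange(&head, n);
}

/* single consumer, cf. update_stats_workfn(): pop one node at a time */
static void drain(void)
{
	struct node *n;

	while ((n = atomic_load(&head)) != NULL) {
		/* a concurrent push moves head and fails the CAS; retry */
		if (!atomic_compare_exchange_strong(&head, &n, n->next))
			continue;
		/* ... consume n's payload ... */
		free(n);
	}
}

Freeing inside drain() is safe for the same reason it is in the driver: only the single consumer unlinks nodes, so a popped pointer cannot be recycled and re-pushed between the load and the compare-exchange.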

@ -2508,8 +2508,10 @@ static void gsm_cleanup_mux(struct gsm_mux *gsm, bool disc)
gsm->has_devices = false;
}
for (i = NUM_DLCI - 1; i >= 0; i--)
if (gsm->dlci[i])
if (gsm->dlci[i]) {
gsm_dlci_release(gsm->dlci[i]);
gsm->dlci[i] = NULL;
}
mutex_unlock(&gsm->mutex);
/* Now wipe the queues */
tty_ldisc_flush(gsm->tty);


@ -795,6 +795,9 @@ EXPORT_SYMBOL_GPL(usb_gadget_disconnect);
* usb_gadget_activate() is called. For example, user mode components may
* need to be activated before the system can talk to hosts.
*
* This routine may sleep; it must not be called in interrupt context
* (such as from within a gadget driver's disconnect() callback).
*
* Returns zero on success, else negative errno.
*/
int usb_gadget_deactivate(struct usb_gadget *gadget)
@ -833,6 +836,8 @@ EXPORT_SYMBOL_GPL(usb_gadget_deactivate);
* This routine activates gadget which was previously deactivated with
* usb_gadget_deactivate() call. It calls usb_gadget_connect() if needed.
*
* This routine may sleep; it must not be called in interrupt context.
*
* Returns zero on success, else negative errno.
*/
int usb_gadget_activate(struct usb_gadget *gadget)
@ -1625,7 +1630,11 @@ static void gadget_unbind_driver(struct device *dev)
usb_gadget_disable_async_callbacks(udc);
if (gadget->irq)
synchronize_irq(gadget->irq);
mutex_unlock(&udc->connect_lock);
udc->driver->unbind(gadget);
mutex_lock(&udc->connect_lock);
usb_gadget_udc_stop_locked(udc);
mutex_unlock(&udc->connect_lock);


@ -755,10 +755,14 @@ static irqreturn_t ehci_irq (struct usb_hcd *hcd)
/* normal [4.15.1.2] or error [4.15.1.1] completion */
if (likely ((status & (STS_INT|STS_ERR)) != 0)) {
if (likely ((status & STS_ERR) == 0))
if (likely ((status & STS_ERR) == 0)) {
INCR(ehci->stats.normal);
else
} else {
/* Force to check port status */
if (ehci->has_fsl_port_bug)
status |= STS_PCD;
INCR(ehci->stats.error);
}
bh = 1;
}


@ -674,7 +674,8 @@ ehci_hub_status_data (struct usb_hcd *hcd, char *buf)
if ((temp & mask) != 0 || test_bit(i, &ehci->port_c_suspend)
|| (ehci->reset_done[i] && time_after_eq(
jiffies, ehci->reset_done[i]))) {
jiffies, ehci->reset_done[i]))
|| ehci_has_ci_pec_bug(ehci, temp)) {
if (i < 7)
buf [0] |= 1 << (i + 1);
else
@ -875,6 +876,13 @@ int ehci_hub_control(
if (temp & PORT_PEC)
status |= USB_PORT_STAT_C_ENABLE << 16;
if (ehci_has_ci_pec_bug(ehci, temp)) {
status |= USB_PORT_STAT_C_ENABLE << 16;
ehci_info(ehci,
"PE is cleared by HW port:%d PORTSC:%08x\n",
wIndex + 1, temp);
}
if ((temp & PORT_OCC) && (!ignore_oc && !ehci->spurious_oc)){
status |= USB_PORT_STAT_C_OVERCURRENT << 16;


@ -490,13 +490,14 @@ static int tt_no_collision(
static void enable_periodic(struct ehci_hcd *ehci)
{
if (ehci->periodic_count++)
return;
goto out;
/* Stop waiting to turn off the periodic schedule */
ehci->enabled_hrtimer_events &= ~BIT(EHCI_HRTIMER_DISABLE_PERIODIC);
/* Don't start the schedule until PSS is 0 */
ehci_poll_PSS(ehci);
out:
turn_on_io_watchdog(ehci);
}


@ -707,6 +707,15 @@ ehci_port_speed(struct ehci_hcd *ehci, unsigned int portsc)
*/
#define ehci_has_fsl_susp_errata(e) ((e)->has_fsl_susp_errata)
/*
* Some Freescale/NXP processors using ChipIdea IP have a bug in which
* disabling the port (PE is cleared) does not cause PEC to be asserted
* when frame babble is detected.
*/
#define ehci_has_ci_pec_bug(e, portsc) \
((e)->has_fsl_port_bug && ((e)->command & CMD_PSE) \
&& !(portsc & PORT_PEC) && !(portsc & PORT_PE))
/*
* While most USB host controllers implement their registers in
* little-endian format, a minority (celleb companion chip) implement


@ -188,11 +188,10 @@ EXPORT_SYMBOL_GPL(xhci_plat_register_vendor_ops);
static int xhci_vendor_init(struct xhci_hcd *xhci)
{
struct xhci_vendor_ops *ops = xhci_vendor_get_ops(xhci);
struct xhci_plat_priv *priv = xhci_to_priv(xhci);
struct xhci_vendor_ops *ops = NULL;
if (xhci_plat_vendor_overwrite.vendor_ops)
ops = priv->vendor_ops = xhci_plat_vendor_overwrite.vendor_ops;
ops = xhci->vendor_ops = xhci_plat_vendor_overwrite.vendor_ops;
if (ops && ops->vendor_init)
return ops->vendor_init(xhci);
@ -202,12 +201,11 @@ static int xhci_vendor_init(struct xhci_hcd *xhci)
static void xhci_vendor_cleanup(struct xhci_hcd *xhci)
{
struct xhci_vendor_ops *ops = xhci_vendor_get_ops(xhci);
struct xhci_plat_priv *priv = xhci_to_priv(xhci);
if (ops && ops->vendor_cleanup)
ops->vendor_cleanup(xhci);
priv->vendor_ops = NULL;
xhci->vendor_ops = NULL;
}
static int xhci_plat_probe(struct platform_device *pdev)


@ -13,7 +13,6 @@
struct xhci_plat_priv {
const char *firmware_name;
unsigned long long quirks;
struct xhci_vendor_ops *vendor_ops;
struct xhci_vendor_data *vendor_data;
int (*plat_setup)(struct usb_hcd *);
void (*plat_start)(struct usb_hcd *);


@ -25,7 +25,6 @@
#include "xhci-trace.h"
#include "xhci-debugfs.h"
#include "xhci-dbgcap.h"
#include "xhci-plat.h"
#define DRIVER_AUTHOR "Sarah Sharp"
#define DRIVER_DESC "'eXtensible' Host Controller (xHC) Driver"
@ -4517,7 +4516,7 @@ static int __maybe_unused xhci_change_max_exit_latency(struct xhci_hcd *xhci,
struct xhci_vendor_ops *xhci_vendor_get_ops(struct xhci_hcd *xhci)
{
return xhci_to_priv(xhci)->vendor_ops;
return xhci->vendor_ops;
}
EXPORT_SYMBOL_GPL(xhci_vendor_get_ops);


@ -1941,7 +1941,9 @@ struct xhci_hcd {
void *dbc;
ANDROID_KABI_RESERVE(1);
/* Used for bug 194461020 */
ANDROID_KABI_USE(1, struct xhci_vendor_ops *vendor_ops);
ANDROID_KABI_RESERVE(2);
ANDROID_KABI_RESERVE(3);
ANDROID_KABI_RESERVE(4);

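For context on the ANDROID_KABI_* lines above: ANDROID_KABI_RESERVE(n) pads exported structs with spare u64 slots while the GKI ABI is frozen, and ANDROID_KABI_USE(n, member) overlays a real member on one slot so the struct's size and offsets never change. Roughly (a simplified expansion; the real macros in include/linux/android_kabi.h also emit size checks, and the struct name here is illustrative):

	struct kabi_layout_sketch {
		void *dbc;
		union {				/* ANDROID_KABI_USE(1, ...) */
			struct xhci_vendor_ops *vendor_ops;	/* new member */
			struct {
				u64 android_kabi_reserved1;	/* old placeholder */
			};
		};
		u64 android_kabi_reserved2;	/* ANDROID_KABI_RESERVE(2) */
		u64 android_kabi_reserved3;
		u64 android_kabi_reserved4;
	};

This is why the vendor_ops pointer could move from struct xhci_plat_priv into struct xhci_hcd without breaking the frozen ABI.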

@ -3262,23 +3262,12 @@ static int tcpm_pd_select_pdo(struct tcpm_port *port, int *sink_pdo,
return ret;
}
#define min_pps_apdo_current(x, y) \
min(pdo_pps_apdo_max_current(x), pdo_pps_apdo_max_current(y))
static unsigned int tcpm_pd_select_pps_apdo(struct tcpm_port *port)
{
unsigned int i, j, max_mw = 0, max_mv = 0;
unsigned int min_src_mv, max_src_mv, src_ma, src_mw;
unsigned int min_snk_mv, max_snk_mv;
unsigned int max_op_mv;
u32 pdo, src, snk;
unsigned int src_pdo = 0, snk_pdo = 0;
unsigned int i, src_ma, max_temp_mw = 0, max_op_ma, op_mw;
unsigned int src_pdo = 0;
u32 pdo, src;
/*
* Select the source PPS APDO providing the most power while staying
* within the board's limits. We skip the first PDO as this is always
* 5V 3A.
*/
for (i = 1; i < port->nr_source_caps; ++i) {
pdo = port->source_caps[i];
@ -3289,54 +3278,17 @@ static unsigned int tcpm_pd_select_pps_apdo(struct tcpm_port *port)
continue;
}
min_src_mv = pdo_pps_apdo_min_voltage(pdo);
max_src_mv = pdo_pps_apdo_max_voltage(pdo);
if (port->pps_data.req_out_volt > pdo_pps_apdo_max_voltage(pdo) ||
port->pps_data.req_out_volt < pdo_pps_apdo_min_voltage(pdo))
continue;
src_ma = pdo_pps_apdo_max_current(pdo);
src_mw = (src_ma * max_src_mv) / 1000;
/*
* Now search through the sink PDOs to find a matching
* PPS APDO. Again skip the first sink PDO as this will
* always be 5V 3A.
*/
for (j = 1; j < port->nr_snk_pdo; j++) {
pdo = port->snk_pdo[j];
switch (pdo_type(pdo)) {
case PDO_TYPE_APDO:
if (pdo_apdo_type(pdo) != APDO_TYPE_PPS) {
tcpm_log(port,
"Not PPS APDO (sink), ignoring");
continue;
}
min_snk_mv =
pdo_pps_apdo_min_voltage(pdo);
max_snk_mv =
pdo_pps_apdo_max_voltage(pdo);
break;
default:
tcpm_log(port,
"Not APDO type (sink), ignoring");
continue;
}
if (min_src_mv <= max_snk_mv &&
max_src_mv >= min_snk_mv) {
max_op_mv = min(max_src_mv, max_snk_mv);
src_mw = (max_op_mv * src_ma) / 1000;
/* Prefer higher voltages if available */
if ((src_mw == max_mw &&
max_op_mv > max_mv) ||
src_mw > max_mw) {
src_pdo = i;
snk_pdo = j;
max_mw = src_mw;
max_mv = max_op_mv;
}
}
max_op_ma = min(src_ma, port->pps_data.req_op_curr);
op_mw = max_op_ma * port->pps_data.req_out_volt / 1000;
if (op_mw > max_temp_mw) {
src_pdo = i;
max_temp_mw = op_mw;
}
break;
default:
tcpm_log(port, "Not APDO type (source), ignoring");
@ -3346,16 +3298,10 @@ static unsigned int tcpm_pd_select_pps_apdo(struct tcpm_port *port)
if (src_pdo) {
src = port->source_caps[src_pdo];
snk = port->snk_pdo[snk_pdo];
port->pps_data.req_min_volt = max(pdo_pps_apdo_min_voltage(src),
pdo_pps_apdo_min_voltage(snk));
port->pps_data.req_max_volt = min(pdo_pps_apdo_max_voltage(src),
pdo_pps_apdo_max_voltage(snk));
port->pps_data.req_max_curr = min_pps_apdo_current(src, snk);
port->pps_data.req_out_volt = min(port->pps_data.req_max_volt,
max(port->pps_data.req_min_volt,
port->pps_data.req_out_volt));
port->pps_data.req_min_volt = pdo_pps_apdo_min_voltage(src);
port->pps_data.req_max_volt = pdo_pps_apdo_max_voltage(src);
port->pps_data.req_max_curr = pdo_pps_apdo_max_current(src);
port->pps_data.req_op_curr = min(port->pps_data.req_max_curr,
port->pps_data.req_op_curr);
}
@ -3473,32 +3419,16 @@ static int tcpm_pd_send_request(struct tcpm_port *port)
static int tcpm_pd_build_pps_request(struct tcpm_port *port, u32 *rdo)
{
unsigned int out_mv, op_ma, op_mw, max_mv, max_ma, flags;
enum pd_pdo_type type;
unsigned int src_pdo_index;
u32 pdo;
src_pdo_index = tcpm_pd_select_pps_apdo(port);
if (!src_pdo_index)
return -EOPNOTSUPP;
pdo = port->source_caps[src_pdo_index];
type = pdo_type(pdo);
switch (type) {
case PDO_TYPE_APDO:
if (pdo_apdo_type(pdo) != APDO_TYPE_PPS) {
tcpm_log(port, "Invalid APDO selected!");
return -EINVAL;
}
max_mv = port->pps_data.req_max_volt;
max_ma = port->pps_data.req_max_curr;
out_mv = port->pps_data.req_out_volt;
op_ma = port->pps_data.req_op_curr;
break;
default:
tcpm_log(port, "Invalid PDO selected!");
return -EINVAL;
}
max_mv = port->pps_data.req_max_volt;
max_ma = port->pps_data.req_max_curr;
out_mv = port->pps_data.req_out_volt;
op_ma = port->pps_data.req_op_curr;
flags = RDO_USB_COMM | RDO_NO_SUSPEND;
@ -4326,7 +4256,9 @@ static void run_state_machine(struct tcpm_port *port)
if (port->slow_charger_loop && (current_lim > PD_P_SNK_STDBY_MW / 5))
current_lim = PD_P_SNK_STDBY_MW / 5;
tcpm_set_current_limit(port, current_lim, 5000);
tcpm_set_charge(port, true);
/* Not sink vbus if operational current is 0mA */
tcpm_set_charge(port, !!pdo_max_current(port->snk_pdo[0]));
if (!port->pd_supported)
tcpm_set_state(port, SNK_READY, 0);
else
@ -4615,7 +4547,8 @@ static void run_state_machine(struct tcpm_port *port)
tcpm_set_current_limit(port,
tcpm_get_current_limit(port),
5000);
tcpm_set_charge(port, true);
/* Not sink vbus if operational current is 0mA */
tcpm_set_charge(port, !!pdo_max_current(port->snk_pdo[0]));
}
if (port->ams == HARD_RESET)
tcpm_ams_finish(port);
@ -5392,6 +5325,10 @@ static void _tcpm_pd_vbus_off(struct tcpm_port *port)
/* Do nothing, vbus drop expected */
break;
case SNK_HARD_RESET_WAIT_VBUS:
/* Do nothing, it's OK to receive vbus off events */
break;
default:
if (port->pwr_role == TYPEC_SINK && port->attached)
tcpm_set_state(port, SNK_UNATTACHED, tcpm_wait_for_discharge(port));
@ -5443,6 +5380,9 @@ static void _tcpm_pd_vbus_vsafe0v(struct tcpm_port *port)
case SNK_DEBOUNCED:
/*Do nothing, still waiting for VSAFE5V for connect */
break;
case SNK_HARD_RESET_WAIT_VBUS:
/* Do nothing, it's OK to receive vbus off events */
break;
default:
if (port->pwr_role == TYPEC_SINK && port->auto_vbus_discharge_enabled)
tcpm_set_state(port, SNK_UNATTACHED, 0);
@ -5959,12 +5899,6 @@ static int tcpm_pps_set_out_volt(struct tcpm_port *port, u16 req_out_volt)
goto port_unlock;
}
if (req_out_volt < port->pps_data.min_volt ||
req_out_volt > port->pps_data.max_volt) {
ret = -EINVAL;
goto port_unlock;
}
target_mw = (port->current_limit * req_out_volt) / 1000;
if (target_mw < port->operating_snk_mw) {
ret = -EINVAL;
@ -6493,11 +6427,7 @@ static int tcpm_psy_set_prop(struct power_supply *psy,
ret = tcpm_psy_set_online(port, val);
break;
case POWER_SUPPLY_PROP_VOLTAGE_NOW:
if (val->intval < port->pps_data.min_volt * 1000 ||
val->intval > port->pps_data.max_volt * 1000)
ret = -EINVAL;
else
ret = tcpm_pps_set_out_volt(port, val->intval / 1000);
ret = tcpm_pps_set_out_volt(port, val->intval / 1000);
break;
case POWER_SUPPLY_PROP_CURRENT_NOW:
if (val->intval > port->pps_data.max_curr * 1000)

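After this refactor, tcpm_pd_select_pps_apdo() no longer cross-matches sink APDOs at all: each source PPS APDO is scored at the voltage and current the port actually requested. A worked example with hypothetical numbers:

	/* hypothetical request: 9 V at up to 3 A */
	unsigned int req_out_volt = 9000;			/* mV */
	unsigned int req_op_curr  = 3000;			/* mA */
	unsigned int src_ma       = 2250;			/* mA, this APDO's limit */

	unsigned int max_op_ma = min(src_ma, req_op_curr);	/* = 2250 mA */
	unsigned int op_mw = max_op_ma * req_out_volt / 1000;	/* = 20250 mW */
	/* the APDO yielding the largest op_mw becomes src_pdo */

APDOs whose voltage window does not contain req_out_volt are skipped before this computation, which replaces the old "highest wattage, then highest voltage" tie-breaking.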

@ -189,7 +189,7 @@ static void fast_imageblit(const struct fb_image *image, struct fb_info *p,
u32 fgx = fgcolor, bgx = bgcolor, bpp = p->var.bits_per_pixel;
u32 ppw = 32/bpp, spitch = (image->width + 7)/8;
u32 bit_mask, eorx, shift;
const char *s = image->data, *src;
const u8 *s = image->data, *src;
u32 *dst;
const u32 *tab;
size_t tablen;

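The fbcon one-liner is a signedness fix: plain char may be signed, and fast_imageblit() widens bytes from image->data into 32-bit shift and table-index arithmetic, so glyph bytes of 0x80 and above would sign-extend and corrupt the result. A standalone illustration (values hypothetical; on unsigned-char ABIs both lines print the same):

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
	const char s_char[]  = { (char)0xAA };	/* glyph byte with the high bit set */
	const uint8_t s_u8[] = { 0xAA };

	uint32_t bad  = (uint32_t)s_char[0];	/* 0xffffffaa if char is signed */
	uint32_t good = (uint32_t)s_u8[0];	/* always 0x000000aa */

	printf("char: %08" PRIx32 "  u8: %08" PRIx32 "\n", bad, good);
	return 0;
}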

@ -315,10 +315,10 @@ create_elf_tables(struct linux_binprm *bprm, const struct elfhdr *exec,
* Grow the stack manually; some architectures have a limit on how
* far ahead a user-space access may be in order to grow the stack.
*/
if (mmap_read_lock_killable(mm))
if (mmap_write_lock_killable(mm))
return -EINTR;
vma = find_extend_vma(mm, bprm->p);
mmap_read_unlock(mm);
vma = find_extend_vma_locked(mm, bprm->p);
mmap_write_unlock(mm);
if (!vma)
return -EFAULT;


@ -10,6 +10,7 @@
#include <linux/writeback.h>
#include <linux/sysctl.h>
#include <linux/gfp.h>
#include <linux/swap.h>
#include "internal.h"
/* A global variable is a bit ugly, but it keeps the code simple */
@ -59,6 +60,7 @@ int drop_caches_sysctl_handler(struct ctl_table *table, int write,
static int stfu;
if (sysctl_drop_caches & 1) {
lru_add_drain_all();
iterate_supers(drop_pagecache_sb, NULL);
count_vm_event(DROP_PAGECACHE);
}


@ -1013,9 +1013,11 @@ static void z_erofs_do_decompressed_bvec(struct z_erofs_decompress_backend *be,
struct z_erofs_bvec *bvec)
{
struct z_erofs_bvec_item *item;
unsigned int pgnr;
if (!((bvec->offset + be->pcl->pageofs_out) & ~PAGE_MASK)) {
unsigned int pgnr;
if (!((bvec->offset + be->pcl->pageofs_out) & ~PAGE_MASK) &&
(bvec->end == PAGE_SIZE ||
bvec->offset + bvec->end == be->pcl->length)) {
pgnr = (bvec->offset + be->pcl->pageofs_out) >> PAGE_SHIFT;
DBG_BUGON(pgnr >= be->nr_pages);

Some files were not shown because too many files have changed in this diff.