Merge branch 'android14-6.1' into branch 'android14-6.1-lts'

This catches the android14-6.1-lts branch up with the changes that have so
far only gone into the android14-6.1 branch, to make testing easier and to
track more symbols properly.

This includes the following commits:

* 171c27ba1f BACKPORT: usb: gadget: uvc: Add missing initialization of ssp config descriptor
* bb0173a1da BACKPORT: usb: gadget: unconditionally allocate hs/ss descriptor in bind operation
* 5c4815f5b6 UPSTREAM: usb: gadget: f_uvc: change endpoint allocation in uvc_function_bind()
* 5a05f2e755 UPSTREAM: usb: gadget: function: Remove unused declarations
* defd93f219 UPSTREAM: usb: gadget: uvc: clean up comments and styling in video_pump
* 82fe654f56 UPSTREAM: mm/page_alloc: use write_seqlock_irqsave() instead write_seqlock() + local_irq_save().
* ed6694a682 UPSTREAM: cpuidle: teo: Update idle duration estimate when choosing shallower state
* d8e99e1af8 BACKPORT: Revert "PCI: dwc: Wait for link up only if link is started"
* 841ad9b9b3 UPSTREAM: ravb: Fix use-after-free issue in ravb_tx_timeout_work()
* 17e456ce41 UPSTREAM: ravb: Fix up dma_free_coherent() call in ravb_remove()
* 5ba644e8a0 BACKPORT: usb: typec: altmodes/displayport: Signal hpd low when exiting mode
* 9e4f6e1ef8 ANDROID: KVM: arm64: Fix KVM_HOST_S2_DEFAULT_MMIO_PTE encoding
* 5418491fa5 ANDROID: Update the ABI symbol list
* b821a3c8fc ANDROID: fs/proc: Perform priority inheritance around access_remote_vm()
* 37c1a91404 UPSTREAM: serial: 8250_dw: fall back to poll if there's no interrupt
*   35361bdac2 Merge "Merge tag 'android14-6.1.43_r00' into android14-6.1" into android14-6.1
|\
| * 769612f594 Merge tag 'android14-6.1.43_r00' into android14-6.1
* | 034b4b4f1b ANDROID: Update the ABI representation
* | 0947464633 ANDROID: power: Add vendor hook for suspend
|/
* b783e85610 ANDROID: Update the ABI symbol list
* 2c609cab0b UPSTREAM: of: reserved-mem: print out reserved-mem details during boot
* ff2563f384 ANDROID: GKI: Update symbol list for xiaomi "abi_gki_aarch64_xiaomi"
* 7542b3bef7 ANDROID: Update symbols list and ABI for qcom
* 63d4231d85 ANDROID: fuse-bpf: Add NULL pointer check in fuse_entry_revalidate
* 09641ca77f ANDROID: GKI: Update oplus symbol list update oplus symbol list for Addding hooks for adjusting alloc_flags
* 0b20035778 ANDROID: vendor_hooks: Add hooks for adjusting alloc_flags
* 367ce30ddc UPSTREAM: libceph: harden msgr2.1 frame segment length checks
* debc1e0486 ANDROID: Update the ABI symbol list
* 401b78ce87 ANDROID: mm: Add vendor hook in filemap_get_folio()
* 1b3269beea UPSTREAM: netfilter: ipset: Fix race between IPSET_CMD_CREATE and IPSET_CMD_SWAP
* a9c65c7efb UPSTREAM: netfilter: ipset: Add schedule point in call_ad().
* cd4ea97d2a UPSTREAM: net: xfrm: Fix xfrm_address_filter OOB read
* a4ccba8bdc UPSTREAM: igb: set max size RX buffer when store bad packet is enabled
* 8a67c06094 ANDROID: GKI: fix ABI breakage in struct hid_device
* 28ee91ed2b UPSTREAM: HID: input: map battery system charging
* 2dd1c535d1 FROMGIT: maple_tree: add GFP_KERNEL to allocations in mas_expected_entries()
* faa4efd6b1 UPSTREAM: maple_tree: replace data before marking dead in split and spanning store
* 47e3b4920d UPSTREAM: maple_tree: change mas_adopt_children() parent usage
* e0f829b74b UPSTREAM: maple_tree: introduce mas_tree_parent() definition
* e69d6570ed UPSTREAM: maple_tree: introduce mas_put_in_tree()
* d2e45cee2d UPSTREAM: maple_tree: reorder replacement of nodes to avoid live lock
* 545cc51b9f ANDROID: GKI: add allowed list for Exynosauto SoC
* f51787dfb7 ANDROID: Update the ABI symbol list
* 1b71e8ef45 ANDROID: Update the ABI symbol list
* 908a530787 ANDROID: KVM: Update nVHE stack size to 8KB
* 53771c1826 ANDROID: Update the ABI symbol list
* a22ff19ff6 ANDROID: mm: Add vendor hook in rmqueue()
* 09ca291e0a FROMLIST: virt: geniezone: Add memory pin/unpin support
* 7cc3767c2a FROMLIST: virt: geniezone: Add block-based demand paging support
* 3fcc07ee5f FROMLIST: virt: geniezone: Add demand paging support
* 6a1a30896d ANDROID: virt: geniezone: Refactoring memory region support
* 9f64b18da1 ANDROID: virt: geniezone: Refactor code comments from mainline v6 accordingly
* 544b128747 ANDROID: virt: geniezone: Refactoring vgic to align with upstream v6
* f9291d7af0 ANDROID: virt: geniezone: Refactoring vcpu to align with upstream v6
* e348fe6d2d ANDROID: virt: geniezone: Refactoring vm capability to align with upstream v6
* fb3444af07 ANDROID: virt: geniezone: Refactoring irqfd to align with upstream v6
* 7e1cb3bdec ANDROID: sched: Add EXPORT_SYMBOL_GPL for sched_wakeup
* 73cee74111 ANDROID: vendor_hooks: Export direct reclaim trace points
* fca353bdc0 ANDROID: mm: freeing MIGRATE_ISOLATE page instantly
* 08351370ec ANDROID: KVM: arm64: Allow setting device attr in stage-2 PTEs
* b25aabd50a ANDROID: KVM: arm64: Fix hyp tracing build dependencies
* f82e080810 ANDROID: abi_gki_aarch64_qcom: update abi symbols
* 2fff9f7cd4 ANDROID: vendor hooks: Enable Vendor hook to register smmu driver to dedicated iommu bus defined by vendor.
* fadd504206 UPSTREAM: netfilter: xt_sctp: validate the flag_info count
* 1c90408931 UPSTREAM: mm/mglru: make memcg_lru->lock irq safe
* 87cd3d689e UPSTREAM: iommu/amd: Fix possible memory leak of 'domain'
* e5f37a2c46 UPSTREAM: selftests/tc-testing: Remove configs that no longer exist
* 7c793b4d8f ANDROID: abi_gki_aarch64_qcom: update abi symbols
* bf51ba7b3c ANDROID: ABI: Update symbol list for imx
* 1e6a9aeb14 ANDROID: GKI: add allowed list for Exynosauto SoC
* a338830fde UPSTREAM: ufs: core: wlun send SSU timeout recovery
* fd2e98c6f5 UPSTREAM: PM: domains: fix integer overflow issues in genpd_parse_state()
* e3e2ece8a0 ANDROID: mm: vh for compaction begin/end
* 2176509c4d UPSTREAM: netfilter: xt_u32: validate user space input
* 132b47119e UPSTREAM: netfilter: nfnetlink_osf: avoid OOB read
* 8c3b0a3493 UPSTREAM: ipv4: fix null-deref in ipv4_link_failure
* 4181951d21 UPSTREAM: net/sched: Retire rsvp classifier
* acb0728638 UPSTREAM: usb: core: stop USB enumeration if too many retries
* 8b1bd87917 ANDROID: KVM: arm64: Add missing hyp events for forwarded SMCs
* f4812c6864 ANDROID: KVM: arm64: Store hyp address in the host fp state array
* 6334225e9b ANDROID: KVM: arm64: Allocate host fp/simd state later in initialization
* 83ebd50235 UPSTREAM: netfilter: nf_tables: disallow rule removal from chain binding
* 7d088a3e4f UPSTREAM: fs/smb/client: Reset password pointer to NULL
* 2807a43b69 ANDROID: Update the ABI symbol list
* 368b752997 FROMGIT: usb: typec: ucsi: Clear EVENT_PENDING bit if ucsi_send_command fails
* 4fcc13c1ff ANDROID: mm: add missing check in the backport for handling faults under VMA lock
* 1fe248991f ANDROID: Update the ABI symbol list
* 4301901382 ANDROID: Update STG for ANDROID_KABI_USE(1, unsigned int saved_state)
* 22cd8e0def FROMGIT: freezer,sched: Use saved_state to reduce some spurious wakeups
* 457e65696a BACKPORT: FROMGIT: sched/core: Remove ifdeffery for saved_state
* 3437652fa2 BACKPORT: erofs: set block size to the on-disk block size
* e84c93fd42 BACKPORT: erofs: avoid hardcoded blocksize for subpage block support
* 36496d09e8 BACKPORT: erofs: get rid of z_erofs_do_map_blocks() forward declaration
* cee0694362 BACKPORT: erofs: get rid of erofs_inode_datablocks()
* f7d9c7d0b4 BACKPORT: erofs: simplify iloc()
* 7d42260e5c ANDROID: Update the ABI symbol list
* 324c8522f9 ANDROID: Update symbol list for mtk
* 30d86f760c ANDROID: mm: Add vendor hooks for recording when kswapd finishing the reclaim job
* 0deb7bb73e ANDROID: mm: Add vendor hooks for __alloc_pages_slowpath
* 5c2855fbce ANDROID: mm: Add vendor hook for compact pages work.
* 4e10001b7c ANDROID: Update the ABI symbol list
* 2434dece1f FROMGIT: usb: gadget: u_serial: Add null pointer check in gserial_suspend
* 5f8aa27248 ANDROID: Update the ABI symbol list
* f7e7874d9b BACKPORT: usb: typec: bus: verify partner exists in typec_altmode_attention
* 5cb3b26d79 ANDROID: ABI: Update the pixel symbol list and stg
* cf1ba6a102 UPSTREAM: shmem: fix smaps BUG sleeping while atomic
* 52824b718c UPSTREAM: blk-ioprio: Introduce promote-to-rt policy
* dce1834895 ANDROID: ABI: Update oplus symbol list
* 89815ec103 ANDROID: GKI: export symbols to do reverse mapping within memcg and modify lru stats
* 45fe413fdf ANDROID: gki_defconfig: Enable CONFIG_BLK_CGROUP_IOPRIO
* c240f4ed00 ANDROID: gunyah: Convert mutex_lock_interruptible to mutex_lock
* 6305df8009 UPSTREAM: bpf, sockmap: fix deadlocks in the sockhash and sockmap
* 7999b48d76 UPSTREAM: net: sched: sch_qfq: Fix UAF in qfq_dequeue()
* 709dc094e3 UPSTREAM: ARM: ptrace: Restore syscall skipping for tracers
* ea494b2716 UPSTREAM: ARM: ptrace: Restore syscall restart tracing
* b374d94195 Revert "BACKPORT: FROMGIT: usb: gadget: udc: Handle gadget_connect failure during bind operation"
* ae5ea9043d ANDROID: Move microdroid and crashdump defconfigs to common
* b548c046c7 UPSTREAM: net: prevent skb corruption on frag list segmentation
* 060ebb378d ANDROID: ABI: Update oplus symbol list
* f451f4a599 ANDROID: vendor_hooks: Add hooks for oem percpu-rwsem optimaton
* a3cb85bffe ANDROID: ABI: Update oplus symbol list
* 740a51391b ANDROID: vendor_hooks: Add hooks for binder
* c6724bfeda ANDROID: uid_sys_stat: instead update_io_stats_uid_locked to update_io_stats_uid
* 97f2f8a065 ANDROID: uid_sys_stat: split the global lock uid_lock to the fine-grained locks for each hlist in hash_table.
* 9290fc3e8d ANDROID: Flush deferred probe list before dropping host priv
* 6625133137 ANDROID: KVM: arm64: Don't force pte mappings in [n]VHE guest stage-2
* 2f2c035453 UPSTREAM: usb: gadget: u_serial: Add null pointer check in gs_start_io
* ac9005946a UPSTREAM: sched: Consider task_struct::saved_state in wait_task_inactive()
* b52b33e912 UPSTREAM: sched: Unconditionally use full-fat wait_task_inactive()
* 8465ef2b4f ANDROID: GKI: Update symbol list for ASUS
* 1e4c6e5048 UPSTREAM: tty: n_gsm: fix the UAF caused by race condition in gsm_cleanup_mux
* 40b46d8656 UPSTREAM: netfilter: nf_tables: prevent OOB access in nft_byteorder_eval
* d8f69aade5 UPSTREAM: iommu/of: mark an unused function as __maybe_unused
* a032fbc776 UPSTREAM: iommu: dma: Use of_iommu_get_resv_regions()
* 693c712967 UPSTREAM: iommu: Implement of_iommu_get_resv_regions()
* e9603e85ac UPSTREAM: dt-bindings: reserved-memory: Document iommu-addresses
* 64ed291347 UPSTREAM: of: Introduce of_translate_dma_region()
* 536996aa30 ANDROID: GKI: Add rockchip fragment and build.config
* 6a10b34387 ANDROID: GKI: Add symbols for rockchip v4l2
* 3e3c6debe4 ANDROID: GKI: Add hid and usb symbols for rockchip
* 53162778e7 ANDROID: GKI: Add cdc symbols for rockchip
* b09b06dcf1 ANDROID: GKI: Add symbols for rockchip sdhci
* 62d64a59d9 ANDROID: GKI: Add symbols for rockchip devfreq
* 9c9ee611cf ANDROID: GKI: Add crypto symbols for rockchip
* 7246ecec46 ANDROID: GKI: Add rockchip drm symbols and abi
* 2f3d6aa0c9 ANDROID: GKI: Add initial abi for rockchip
* 1e26ba1901 ANDROID: GKI: Add initial rockchip symbol list
* 404360f6d3 FROMLIST: clk: clk-fractional-divider: Export clk_fractional_divider_general_approximation API
* c3d6c235b2 UPSTREAM: net/sched: sch_hfsc: Ensure inner classes have fsc curve
* d3212c2dba UPSTREAM: sched/rt: Fix bad task migration for rt tasks
* 215e38e517 ANDROID: GKI: Add ASUS symbol list
* e52e60e3ed UPSTREAM: tcpm: Avoid soft reset when partner does not support get_status
* bbc9d3bc0b ANDROID: vendor_hooks: mm: Add tune_swappiness vendor hook in get_swappiness()
* 7024c9cd28 ANDROID: ABI: Update symbols to unisoc whitelist
* de3e9f3111 ANDROID: ABI: Add to QCOM symbols list
* 85902d60cd ANDROID: ABI: update symbol list for galaxy
* c2ac612610 BACKPORT: printk: ringbuffer: Fix truncating buffer size min_t cast
* 7579b22626 ANDROID: GKI: Add symbols to symbol list for oplus
* 6e5f182128 ANDROID: signal: Add vendor hook for memory reap
* 3a51a61927 ANDROID: abi_gki_aarch64_qcom: white list symbols for mglru overshoot
* 0500235e3f ANDROID: vendor_hook: Add vendor hook to decide scan abort policy
* e6ed59127c UPSTREAM: af_unix: Fix null-ptr-deref in unix_stream_sendpage().
* 2eb5b31ac1 FROMLIST: ufs: core: fix abnormal scale up after last cmd finish
* 89434cbd2d FROMLIST: ufs: core: fix abnormal scale up after scale down
* e490b62fed FROMLIST: ufs: core: only suspend clock scaling if scale down
* 3ffb038098 ANDROID: GKI: update ABI definition
* e2fa9ebcae UPSTREAM: zsmalloc: allow only one active pool compaction context
* 478ec4dbea ANDROID: GKI: Update Tuxera symbol list
* cd94fe67fd ANDROID: ABI: Update symbols to qcom whitelist
* 68eefde2d3 UPSTREAM: usb: typec: tcpm: set initial svdm version based on pd revision
* a68bd01493 ANDROID: KVM: arm64: Don't update IOMMUs for share/unshare
* 20ecb229c5 ANDROID: cpuidle: teo: Export a function that allows modifying util_threshold
* 2490ab50e7 ANDROID: sched: Add vendor hook for rt util update
* 6d97f75abc ANDROID: sched: Add vendor hook for util-update related functions
* e08c5de06e ANDROID: sched: Add vendor hooks for override sugov behavior
* 5762974151 ANDROID: Add new hook to enable overriding uclamp_validate()
* b57e3c1d99 ANDROID: sched/uclamp: Don't enable uclamp_is_used static key by in-kernel requests
* 2b25d535d0 ANDROID: topology: Add vendor hook for use_amu_fie
* eb9686932b ANDROID: sched: Export symbols needed for vendor hooks
* 84131c988b ANDROID: Update symbol list for Exynos Auto SoCs
* 3367abadff UPSTREAM: netfilter: nf_tables: deactivate catchall elements in next generation
* a891f77b7b ANDROID: GKI: Update symbols to symbol list
* 4d8d9522db ANDROID: GKI: Export four symbols in file net/core/net-trace.c
* 3973acfed0 UPSTREAM: blk-ioc: fix recursive spin_lock/unlock_irq() in ioc_clear_queue()
* 523bfe8539 ANDROID: fuse-bpf: Align data structs for 32-bit kernels
* 9f5a84b955 ANDROID: GKI: Update symbol list for xiaomi
* 176d72d941 ANDROID: vendor_hooks: export cgroup_threadgroup_rwsem
* 1fb9e95d46 ANDROID: GKI: add symbol list file for meizu
* 8fb9de0877 ANDROID: fuse-bpf: Get correct inode in mkdir
* 0fdb44964c ANDROID: ABI: Update allowed list for QCOM
* 404522c763 UPSTREAM: blk-ioc: protect ioc_destroy_icq() by 'queue_lock'
* bd0308e36b ANDROID: GKI: Update symbols to symbol list
* 87647c0c54 ANDROID: uid_sys_stats: Use llist for deferred work
* 4b3ab91671 UPSTREAM: net: nfc: Fix use-after-free caused by nfc_llcp_find_local
* c603880bd5 UPSTREAM: netfilter: nf_tables: disallow rule addition to bound chain via NFTA_RULE_CHAIN_ID
* d95b2b008e UPSTREAM: net: tap_open(): set sk_uid from current_fsuid()
* b15c3a3df0 UPSTREAM: usb: typec: ucsi: Fix command cancellation
* 0c34d588af UPSTREAM: locks: fix KASAN: use-after-free in trace_event_raw_event_filelock_lock
* 20266a0652 ANDROID: kleaf: Remove ptp_kvm.ko from i386 modules
* ce18fe6f29 ANDROID: GKI: Add symbols to symbol list for oplus
* 8e6550add2 ANDROID: vendor_hooks: Add tune swappiness hook in get_scan_count()
* dd87a7122c ANDROID: GKI: Update symbol list for VIVO
* 638804ea1c ANDROID: kleaf: get_gki_modules_list add i386 option
* 264e2973a4 ANDROID: arm as an option for get_gki_modules_list
* 37edfbc5c4 UPSTREAM: um: Only disable SSE on clang to work around old GCC bugs
* 2a13641a14 ANDROID: GKI: Update abi_gki_aarch64_qcom for page_owner symbols
* f08623648a ANDROID: mm: Export page_owner_inited and __set_page_owner
* e44e3955f7 ANDROID: Use alias for old rules.
* 67018dd4e4 ANDROID: virt: geniezone: Enable as GKI module for arm64
* 9a399ca713 ANDROID: Add arch specific gki module list targets
* 3e079b7691 FROMLIST: virt: geniezone: Add dtb config support
* 39bd65ec1d FROMLIST: virt: geniezone: Add memory region support
* c26057e351 FROMLIST: virt: geniezone: Add ioeventfd support
* e73a5222e6 FROMLIST: virt: geniezone: Add irqfd support
* 7427b76faa FROMLIST: virt: geniezone: Add irqchip support for virtual interrupt injection
* 540cff0872 FROMLIST: virt: geniezone: Add vcpu support
* 6ce86d075e FROMLIST: virt: geniezone: Add GenieZone hypervisor support
* 40107a0081 FROMLIST: dt-bindings: hypervisor: Add MediaTek GenieZone hypervisor
* beaffb638b FROMLIST: docs: geniezone: Introduce GenieZone hypervisor
* e0c4636bd2 UPSTREAM: net/sched: cls_route: No longer copy tcf_result on update to avoid use-after-free
* ec1f17ddac UPSTREAM: net: tun_chr_open(): set sk_uid from current_fsuid()
* 0adc759b0c UPSTREAM: exfat: check if filename entries exceeds max filename length
* f4ba064f76 UPSTREAM: net/sched: cls_fw: No longer copy tcf_result on update to avoid use-after-free
* 5b0878fc61 ANDROID: abi_gki_aarch64_qcom: update abi symbols
* 7551a1a2a1 ANDROID: cgroup: Add android_rvh_cgroup_force_kthread_migration
* cd018c99fa FROMGIT: pstore/ram: Check start of empty przs during init
* ffaab71302 UPSTREAM: erofs: avoid infinite loop in z_erofs_do_read_page() when reading beyond EOF
* 8497f46a87 UPSTREAM: erofs: avoid useless loops in z_erofs_pcluster_readmore() when reading beyond EOF
* 2f805fb912 UPSTREAM: erofs: Fix detection of atomic context
* cc6111a287 UPSTREAM: erofs: fix compact 4B support for 16k block size
* f11ccb03a0 UPSTREAM: erofs: kill hooked chains to avoid loops on deduplicated compressed images
* 7521b904dc UPSTREAM: erofs: fix potential overflow calculating xattr_isize
* 6ec6eee87e UPSTREAM: erofs: stop parsing non-compact HEAD index if clusterofs is invalid
* 9089c10d9c UPSTREAM: erofs: initialize packed inode after root inode is assigned
* 797dac42cc ANDROID: GKI: Update ABI for zsmalloc fixes
* cb440cecb2 BACKPORT: zsmalloc: fix races between modifications of fullness and isolated
* c0e84be923 ANDROID: ABI: Update symbols to unisoc whitelist for A14-6.1
* 5ef132d564 UPSTREAM: zsmalloc: consolidate zs_pool's migrate_lock and size_class's locks
* ec6b3d552a UPSTREAM: netfilter: nfnetlink_log: always add a timestamp
* 4db95aa21a ANDROID: virt: gunyah: Do not allocate irq for GH_RM_RESOURCE_NO_VIRQ
* 2d1d3be2ba ANDROID: GKI: Add Tuxera symbol list
* 20d8a89758 ANDROID: ABI: Update oplus symbol list
* 7afa84fbb9 ANDROID: vendor_hooks: Add hooks for waking up and exiting control
* 9ca47685c5 ANDROID: GKI: Update symbol list for xiaomi
* 2d7f87b0ff ANDROID: vendor_hooks:vendor hook for percpu-rwsem
* 63af84cffe ANDROID: fips140: fix the error injection module parameters
* 71bedf9d9c BACKPORT: blk-crypto: dynamically allocate fallback profile
* 086befddbe UPSTREAM: net/sched: cls_u32: No longer copy tcf_result on update to avoid use-after-free
* ecd8d8a208 UPSTREAM: Bluetooth: L2CAP: Fix use-after-free in l2cap_sock_ready_cb
* 6923dcc21d UPSTREAM: media: usb: siano: Fix warning due to null work_func_t function pointer

Change-Id: Idc01a15f70d151d08c30ee23c2939260764e428b
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
This commit is contained in:
Greg Kroah-Hartman 2023-10-31 16:41:59 +00:00
commit d07ffd5565
182 changed files with 22709 additions and 1366 deletions


@@ -15,7 +15,7 @@ load(
"kernel_modules_install",
"merged_kernel_uapi_headers",
)
load(":modules.bzl", "COMMON_GKI_MODULES_LIST")
load(":modules.bzl", "get_gki_modules_list")
package(
default_visibility = [
@@ -47,10 +47,49 @@ checkpatch(
checkpatch_pl = "scripts/checkpatch.pl",
)
write_file(
# Deprecated - Use arch specific files from below.
alias(
name = "gki_system_dlkm_modules",
out = "android/gki_system_dlkm_modules",
content = COMMON_GKI_MODULES_LIST + [
actual = "gki_system_dlkm_modules_arm64",
deprecation = """
Common list for all architectures is deprecated.
Instead use the file corresponding to the architecture used:
i.e. `gki_system_dlkm_modules_{arch}`
""",
)
alias(
name = "android/gki_system_dlkm_modules",
actual = "android/gki_system_dlkm_modules_arm64",
deprecation = """
Common list for all architectures is deprecated.
Instead use the file corresponding to the architecture used:
i.e. `gki_system_dlkm_modules_{arch}`
""",
)
write_file(
name = "gki_system_dlkm_modules_arm64",
out = "android/gki_system_dlkm_modules_arm64",
content = get_gki_modules_list("arm64") + [
# Ensure new line at the end.
"",
],
)
write_file(
name = "gki_system_dlkm_modules_x86_64",
out = "android/gki_system_dlkm_modules_x86_64",
content = get_gki_modules_list("x86_64") + [
# Ensure new line at the end.
"",
],
)
write_file(
name = "gki_system_dlkm_modules_riscv64",
out = "android/gki_system_dlkm_modules_riscv64",
content = get_gki_modules_list("riscv64") + [
# Ensure new line at the end.
"",
],
@@ -60,16 +99,20 @@ filegroup(
name = "aarch64_additional_kmi_symbol_lists",
srcs = [
# keep sorted
"android/abi_gki_aarch64_asus",
"android/abi_gki_aarch64_db845c",
"android/abi_gki_aarch64_exynos",
"android/abi_gki_aarch64_exynosauto",
"android/abi_gki_aarch64_galaxy",
"android/abi_gki_aarch64_honor",
"android/abi_gki_aarch64_imx",
"android/abi_gki_aarch64_meizu",
"android/abi_gki_aarch64_mtk",
"android/abi_gki_aarch64_oplus",
"android/abi_gki_aarch64_pixel",
"android/abi_gki_aarch64_qcom",
"android/abi_gki_aarch64_rockchip",
"android/abi_gki_aarch64_tuxera",
"android/abi_gki_aarch64_unisoc",
"android/abi_gki_aarch64_virtual_device",
"android/abi_gki_aarch64_vivo",
@@ -81,7 +124,7 @@ filegroup(
define_common_kernels(target_configs = {
"kernel_aarch64": {
"kmi_symbol_list_strict_mode": True,
"module_implicit_outs": COMMON_GKI_MODULES_LIST,
"module_implicit_outs": get_gki_modules_list("arm64"),
"kmi_symbol_list": "android/abi_gki_aarch64",
"kmi_symbol_list_add_only": True,
"additional_kmi_symbol_lists": [":aarch64_additional_kmi_symbol_lists"],
@@ -91,12 +134,12 @@ define_common_kernels(target_configs = {
},
"kernel_aarch64_16k": {
"kmi_symbol_list_strict_mode": False,
"module_implicit_outs": COMMON_GKI_MODULES_LIST,
"module_implicit_outs": get_gki_modules_list("arm64"),
"make_goals": _GKI_AARCH64_MAKE_GOALS,
},
"kernel_aarch64_debug": {
"kmi_symbol_list_strict_mode": False,
"module_implicit_outs": COMMON_GKI_MODULES_LIST,
"module_implicit_outs": get_gki_modules_list("arm64"),
"kmi_symbol_list": "android/abi_gki_aarch64",
"kmi_symbol_list_add_only": True,
"additional_kmi_symbol_lists": [":aarch64_additional_kmi_symbol_lists"],
@@ -106,19 +149,19 @@ define_common_kernels(target_configs = {
},
"kernel_riscv64": {
"kmi_symbol_list_strict_mode": False,
"module_implicit_outs": COMMON_GKI_MODULES_LIST,
"module_implicit_outs": get_gki_modules_list("riscv64"),
"make_goals": _GKI_RISCV64_MAKE_GOALS,
},
"kernel_x86_64": {
"kmi_symbol_list_strict_mode": False,
"module_implicit_outs": COMMON_GKI_MODULES_LIST,
"module_implicit_outs": get_gki_modules_list("x86_64"),
"protected_exports_list": "android/abi_gki_protected_exports_x86_64",
"protected_modules_list": "android/gki_x86_64_protected_modules",
"make_goals": _GKI_X86_64_MAKE_GOALS,
},
"kernel_x86_64_debug": {
"kmi_symbol_list_strict_mode": False,
"module_implicit_outs": COMMON_GKI_MODULES_LIST,
"module_implicit_outs": get_gki_modules_list("x86_64"),
"protected_exports_list": "android/abi_gki_protected_exports_x86_64",
"protected_modules_list": "android/gki_x86_64_protected_modules",
"make_goals": _GKI_X86_64_MAKE_GOALS,
@@ -575,7 +618,7 @@ kernel_build(
"modules",
"rk3399-rock-pi-4b.dtb",
],
module_outs = COMMON_GKI_MODULES_LIST + _ROCKPI4_MODULE_OUTS + _ROCKPI4_WATCHDOG_MODULE_OUTS,
module_outs = get_gki_modules_list("arm64") + _ROCKPI4_MODULE_OUTS + _ROCKPI4_WATCHDOG_MODULE_OUTS,
visibility = ["//visibility:private"],
)
@@ -598,7 +641,7 @@ kernel_build(
"modules",
"rk3399-rock-pi-4b.dtb",
],
module_outs = COMMON_GKI_MODULES_LIST + _ROCKPI4_MODULE_OUTS,
module_outs = get_gki_modules_list("arm64") + _ROCKPI4_MODULE_OUTS,
visibility = ["//visibility:private"],
)


@@ -264,6 +264,17 @@ Description:
attached to the port will not be detected, initialized,
or enumerated.
What: /sys/bus/usb/devices/.../<hub_interface>/port<X>/early_stop
Date: Sep 2022
Contact: Ray Chi <raychi@google.com>
Description:
Some USB hosts have watchdog mechanisms that may force the device
into ramdump if port initialization takes too long. This attribute
limits each port to two connection attempts so that port
initialization fails quickly. In addition, if a port marked with
early_stop has failed to initialize, it will ignore all future
connections until this attribute is cleared.
What: /sys/bus/usb/devices/.../<hub_interface>/port<X>/state
Date: June 2023
Contact: Roy Luo <royluo@google.com>


@@ -2015,31 +2015,33 @@ that attribute:
no-change
Do not modify the I/O priority class.
none-to-rt
For requests that do not have an I/O priority class (NONE),
change the I/O priority class into RT. Do not modify
the I/O priority class of other requests.
promote-to-rt
For requests that have a non-RT I/O priority class, change it into RT.
Also change the priority level of these requests to 4. Do not modify
the I/O priority of requests that have priority class RT.
restrict-to-be
For requests that do not have an I/O priority class or that have I/O
priority class RT, change it into BE. Do not modify the I/O priority
class of requests that have priority class IDLE.
priority class RT, change it into BE. Also change the priority level
of these requests to 0. Do not modify the I/O priority class of
requests that have priority class IDLE.
idle
Change the I/O priority class of all requests into IDLE, the lowest
I/O priority class.
none-to-rt
Deprecated. Just an alias for promote-to-rt.
The following numerical values are associated with the I/O priority policies:
+-------------+---+
+----------------+---+
| no-change | 0 |
+-------------+---+
| none-to-rt | 1 |
+-------------+---+
+----------------+---+
| rt-to-be | 2 |
+-------------+---+
+----------------+---+
| all-to-idle | 3 |
+-------------+---+
+----------------+---+
The numerical value that corresponds to each I/O priority class is as follows:
@@ -2055,9 +2057,13 @@ The numerical value that corresponds to each I/O priority class is as follows:
The algorithm to set the I/O priority class for a request is as follows:
- Translate the I/O priority class policy into a number.
- Change the request I/O priority class into the maximum of the I/O priority
class policy number and the numerical I/O priority class.
- If I/O priority class policy is promote-to-rt, change the request I/O
priority class to IOPRIO_CLASS_RT and change the request I/O priority
level to 4.
- If the I/O priority class policy is not promote-to-rt, translate the I/O priority
class policy into a number, then change the request I/O priority class
into the maximum of the I/O priority class policy number and the numerical
I/O priority class.
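As a rough illustration of the selection rule described above, here is a small Python model. This is not kernel code: the policy and class numbers follow the tables in this document, and the level handling is inferred from the prose.

```python
# Illustrative model of the blk-ioprio policy algorithm described above.
# Policy numbers come from the documentation table; class numbers follow
# the usual NONE < RT < BE < IDLE ordering.

POLICY_NUM = {
    "no-change": 0,
    "promote-to-rt": 1,
    "none-to-rt": 1,      # deprecated alias for promote-to-rt
    "restrict-to-be": 2,
    "idle": 3,
}

# Numeric I/O priority classes.
IOPRIO_CLASS_NONE, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE = 0, 1, 2, 3

def effective_class(policy, req_class, req_level):
    """Return the (class, level) applied to a request under the given policy."""
    if policy in ("promote-to-rt", "none-to-rt"):
        # Promote any non-RT request to RT at level 4; leave RT requests alone.
        if req_class != IOPRIO_CLASS_RT:
            return IOPRIO_CLASS_RT, 4
        return req_class, req_level
    # Otherwise the new class is the maximum of the policy number and the
    # request's numeric class; restrict-to-be also resets the level to 0.
    new_class = max(POLICY_NUM[policy], req_class)
    if policy == "restrict-to-be" and new_class != req_class:
        return new_class, 0
    return new_class, req_level
```

For example, under restrict-to-be an RT request becomes BE at level 0, while an IDLE request is left at IDLE, matching the policy descriptions above.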
PID
---


@@ -0,0 +1,31 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/hypervisor/mediatek,geniezone-hyp.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: MediaTek GenieZone hypervisor
maintainers:
- Yingshiuan Pan <yingshiuan.pan@mediatek.com>
description:
This interface is designed for integrating GenieZone hypervisor into Android
Virtualization Framework (AVF) along with Crosvm as a VMM.
It acts like a wrapper for every hypercall to GenieZone hypervisor in
order to control guest VM lifecycles and virtual interrupt injections.
properties:
compatible:
const: mediatek,geniezone-hyp
required:
- compatible
additionalProperties: false
examples:
- |
hypervisor {
compatible = "mediatek,geniezone-hyp";
};


@@ -52,6 +52,30 @@ properties:
Address and Length pairs. Specifies regions of memory that are
acceptable to allocate from.
iommu-addresses:
$ref: /schemas/types.yaml#/definitions/phandle-array
description: >
A list of phandle and specifier pairs that describe static IO virtual
address space mappings and carveouts associated with a given reserved
memory region. The phandle in the first cell refers to the device for
which the mapping or carveout is to be created.
The specifier consists of an address/size pair and denotes the IO
virtual address range of the region for the given device. The exact
format depends on the values of the "#address-cells" and "#size-cells"
properties of the device referenced via the phandle.
When used in combination with a "reg" property, an IOVA mapping is to
be established for this memory region. One example where this can be
useful is to create an identity mapping for physical memory that the
firmware has configured some hardware to access (such as a bootsplash
framebuffer).
If no "reg" property is specified, the "iommu-addresses" property
defines carveout regions in the IOVA space for the given device. This
can be useful if a certain memory region should not be mapped through
the IOMMU.
no-map:
type: boolean
description: >
@@ -89,12 +113,69 @@ allOf:
- no-map
oneOf:
- oneOf:
- required:
- reg
- required:
- size
- oneOf:
# IOMMU reservations
- required:
- iommu-addresses
# IOMMU mappings
- required:
- reg
- iommu-addresses
additionalProperties: true
examples:
- |
/ {
compatible = "foo";
model = "foo";
#address-cells = <2>;
#size-cells = <2>;
reserved-memory {
#address-cells = <2>;
#size-cells = <2>;
ranges;
adsp_resv: reservation-adsp {
/*
* Restrict IOVA mappings for ADSP buffers to the 512 MiB region
* from 0x40000000 - 0x5fffffff. Anything outside is reserved by
* the ADSP for I/O memory and private memory allocations.
*/
iommu-addresses = <&adsp 0x0 0x00000000 0x00 0x40000000>,
<&adsp 0x0 0x60000000 0xff 0xa0000000>;
};
fb: framebuffer@90000000 {
reg = <0x0 0x90000000 0x0 0x00800000>;
iommu-addresses = <&dc0 0x0 0x90000000 0x0 0x00800000>;
};
};
bus@0 {
#address-cells = <1>;
#size-cells = <1>;
ranges = <0x0 0x0 0x0 0x40000000>;
adsp: adsp@2990000 {
reg = <0x2990000 0x2000>;
memory-region = <&adsp_resv>;
};
dc0: display@15200000 {
reg = <0x15200000 0x10000>;
memory-region = <&fb>;
};
};
};
...


@@ -34,8 +34,14 @@ Here is the main features of EROFS:
- Little endian on-disk design;
- 4KiB block size and 32-bit block addresses, therefore 16TiB address space
at most for now;
- Block-based distribution and file-based distribution over fscache are
supported;
- Support multiple devices to refer to external blobs, which can be used
for container images;
- 32-bit block addresses for each device, therefore 16TiB address space at
most with 4KiB block size for now;
- Two inode layouts for different requirements:


@@ -0,0 +1,86 @@
.. SPDX-License-Identifier: GPL-2.0
======================
GenieZone Introduction
======================
Overview
========
The GenieZone hypervisor (gzvm) is a type-1 hypervisor that supports various
virtual machine types and provides security features such as TEE-like scenarios
and secure boot. It can create guest VMs for security use cases and has
virtualization capabilities for both the platform and interrupts. Although the
hypervisor can be booted independently, it requires the assistance of the
GenieZone hypervisor kernel driver (gzvm-ko) to leverage the Linux kernel's
abilities for vCPU scheduling, memory management, inter-VM communication, and
virtio backend support.
Supported Architecture
======================
GenieZone now only supports MediaTek ARM64 SoC.
Features
========
- vCPU Management
The VM manager aims to provide vCPUs on the basis of time-sharing physical CPUs.
It requires the Linux kernel in the host VM for vCPU scheduling and VM power
management.
- Memory Management
For security reasons, direct use of physical memory by VMs is forbidden; access
is dictated by the privilege models managed by the GenieZone hypervisor. With
the help of gzvm-ko, the hypervisor is able to manipulate memory as objects.
- Virtual Platform
We emulate a virtual mobile platform for the guest OS running in a guest VM.
The platform supports various architecture-defined devices, such as the
virtual arch timer, GIC, MMIO, PSCI, and exception watching.
- Inter-VM Communication
Communication among guest VMs is provided mainly via RPC. More communication
mechanisms based on VirtIO-vsock are planned for the future.
- Device Virtualization
The solution is provided using the well-known VirtIO. gzvm-ko redirects MMIO
traps back to the VMM, where the virtual devices are mostly emulated.
Ioeventfd is implemented using eventfd for signaling the host VM that some IO
events in guest VMs need to be processed.
- Interrupt virtualization
All Interrupts during some guest VMs running would be handled by GenieZone
hypervisor with the help of gzvm-ko, both virtual and physical ones. In case
there's no guest VM running out there, physical interrupts would be handled by
host VM directly for performance reason. Irqfd is also implemented using
eventfd for accepting vIRQ requests in gzvm-ko.
Platform architecture component
===============================

- vm

  The vm component is responsible for setting up the capabilities and memory
  management of protected VMs. The capabilities mainly cover lifecycle
  control and boot context initialization, while memory management is
  tightly integrated with the ARM two-stage translation tables to convert VA
  to IPA to PA under the security measures required by protected VMs.

- vcpu

  The vcpu component is the core of virtualizing an aarch64 physical CPU; it
  controls the vCPU lifecycle, including creation, running and destruction.
  With a self-defined exit handler, the vm component is able to act
  accordingly before being terminated.

- vgic

  The vgic component exposes control interfaces to the Linux kernel via
  irqchip, and is intended to support all of SPI, PPI and SGI. For virtual
  interrupts, the GenieZone hypervisor writes to the list registers and
  triggers vIRQ injection in guest VMs via the GIC.
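The VA -> IPA -> PA conversion mentioned for the vm component can be
sketched with two toy page tables (the addresses and page size here are
illustrative only, not GenieZone's actual layout):

```python
PAGE = 0x1000  # assume 4 KiB pages

# Stage 1 (guest-managed): VA page -> IPA page. Hypothetical mapping.
stage1 = {0x1000: 0x8000}
# Stage 2 (hypervisor-managed): IPA page -> PA page. Hypothetical mapping.
stage2 = {0x8000: 0x40000}

def translate(va):
    """Walk both stages: VA -> IPA (stage 1), then IPA -> PA (stage 2)."""
    off = va % PAGE
    ipa = stage1[va - off] + off
    pa = stage2[ipa - off] + off
    return pa

print(hex(translate(0x1234)))  # 0x40234
```

The point of the split is that the guest only ever controls stage 1; stage 2
stays under the hypervisor's control, which is what lets it enforce the
security measures required by protected VMs.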

View File

@ -16,6 +16,7 @@ Linux Virtualization Support
coco/sev-guest
hyperv/index
gunyah/index
geniezone/introduction
.. only:: html and subproject

View File

@ -8665,6 +8665,19 @@ F: include/vdso/
F: kernel/time/vsyscall.c
F: lib/vdso/
GENIEZONE HYPERVISOR DRIVER
M: Yingshiuan Pan <yingshiuan.pan@mediatek.com>
M: Ze-Yu Wang <ze-yu.wang@mediatek.com>
M: Yi-De Wu <yi-de.wu@mediatek.com>
F: Documentation/devicetree/bindings/hypervisor/mediatek,geniezone-hyp.yaml
F: Documentation/virt/geniezone/
F: arch/arm64/geniezone/
F: arch/arm64/include/uapi/asm/gzvm_arch.h
F: drivers/virt/geniezone/
F: include/linux/gzvm_drv.h
F: include/uapi/asm-generic/gzvm_arch.h
F: include/uapi/linux/gzvm.h
GENWQE (IBM Generic Workqueue Card)
M: Frank Haverkamp <haver@linux.ibm.com>
S: Supported

File diff suppressed because it is too large

View File

@ -0,0 +1,11 @@
[abi_symbol_list]
# aura sync
hid_unregister_driver
hid_hw_raw_request
hid_open_report
hid_hw_start
hid_hw_stop
__hid_register_driver
hid_hw_output_report
hid_hw_open
hid_hw_close

File diff suppressed because it is too large

View File

@ -359,6 +359,7 @@
__traceiter_android_vh_wq_lockup_pool
__traceiter_block_rq_insert
__traceiter_console
__traceiter_error_report_end
__traceiter_hrtimer_expire_entry
__traceiter_hrtimer_expire_exit
__traceiter_irq_handler_entry
@ -400,6 +401,7 @@
__tracepoint_android_vh_watchdog_timer_softlockup
__tracepoint_android_vh_wq_lockup_pool
__tracepoint_block_rq_insert
__tracepoint_error_report_end
__tracepoint_console
__tracepoint_hrtimer_expire_entry
__tracepoint_hrtimer_expire_exit

View File

@ -492,6 +492,9 @@
dma_get_sgtable_attrs
dma_get_slave_channel
dma_heap_add
dma_heap_buffer_alloc
dma_heap_buffer_free
dma_heap_find
dma_heap_get_dev
dma_heap_get_drvdata
dma_heap_get_name
@ -1164,6 +1167,8 @@
kvfree
kvfree_call_rcu
kvmalloc_node
led_classdev_register_ext
led_classdev_unregister
led_init_default_state_get
__list_add_valid
__list_del_entry_valid
@ -1820,6 +1825,7 @@
schedule
schedule_hrtimeout
schedule_timeout
schedule_timeout_idle
schedule_timeout_uninterruptible
scmi_driver_register
scmi_driver_unregister

View File

@ -0,0 +1,14 @@
[abi_symbol_list]
__traceiter_android_vh_tune_scan_type
__traceiter_android_vh_tune_swappiness
__tracepoint_android_vh_tune_swappiness
__tracepoint_android_vh_tune_scan_type
__traceiter_android_rvh_sk_alloc
__traceiter_android_rvh_sk_free
__tracepoint_android_rvh_sk_alloc
__tracepoint_android_rvh_sk_free
__traceiter_android_vh_alloc_pages_slowpath
__tracepoint_android_vh_tune_swappiness
__tracepoint_android_vh_tune_scan_type
__tracepoint_android_vh_alloc_pages_slowpath

View File

@ -2645,12 +2645,17 @@
__traceiter_android_vh_check_bpf_syscall
__traceiter_android_vh_check_file_open
__traceiter_android_vh_check_mmap_file
__traceiter_android_vh_compaction_exit
__traceiter_android_vh_compaction_try_to_compact_pages_exit
__traceiter_android_vh_cpufreq_fast_switch
__traceiter_android_vh_cpu_idle_enter
__traceiter_android_vh_cpu_idle_exit
__traceiter_android_vh_iommu_iovad_alloc_iova
__traceiter_android_vh_iommu_iovad_free_iova
__traceiter_android_vh_is_fpsimd_save
__traceiter_android_vh_mm_alloc_pages_direct_reclaim_enter
__traceiter_android_vh_mm_alloc_pages_direct_reclaim_exit
__traceiter_android_vh_mm_alloc_pages_may_oom_exit
__traceiter_android_vh_rwsem_init
__traceiter_android_vh_rwsem_wake
__traceiter_android_vh_rwsem_write_finished
@ -2661,6 +2666,7 @@
__traceiter_android_vh_show_suspend_epoch_val
__traceiter_android_vh_syscall_prctl_finished
__traceiter_android_vh_ufs_clock_scaling
__traceiter_android_vh_vmscan_kswapd_done
__traceiter_cpu_frequency
__traceiter_gpu_mem_total
__traceiter_ipi_entry
@ -2740,12 +2746,17 @@
__tracepoint_android_vh_check_bpf_syscall
__tracepoint_android_vh_check_file_open
__tracepoint_android_vh_check_mmap_file
__tracepoint_android_vh_compaction_exit
__tracepoint_android_vh_compaction_try_to_compact_pages_exit
__tracepoint_android_vh_cpufreq_fast_switch
__tracepoint_android_vh_cpu_idle_enter
__tracepoint_android_vh_cpu_idle_exit
__tracepoint_android_vh_iommu_iovad_alloc_iova
__tracepoint_android_vh_iommu_iovad_free_iova
__tracepoint_android_vh_is_fpsimd_save
__tracepoint_android_vh_mm_alloc_pages_direct_reclaim_enter
__tracepoint_android_vh_mm_alloc_pages_direct_reclaim_exit
__tracepoint_android_vh_mm_alloc_pages_may_oom_exit
__tracepoint_android_vh_rwsem_init
__tracepoint_android_vh_rwsem_wake
__tracepoint_android_vh_rwsem_write_finished
@ -2756,6 +2767,7 @@
__tracepoint_android_vh_show_suspend_epoch_val
__tracepoint_android_vh_syscall_prctl_finished
__tracepoint_android_vh_ufs_clock_scaling
__tracepoint_android_vh_vmscan_kswapd_done
__tracepoint_cpu_frequency
__tracepoint_gpu_mem_total
__tracepoint_ipi_entry

View File

@ -20,6 +20,9 @@
down_read_trylock
drm_crtc_vblank_waitqueue
filp_close
folio_add_lru
folio_mapping
folio_referenced
for_each_kernel_tracepoint
freq_qos_add_notifier
freq_qos_remove_notifier
@ -32,22 +35,31 @@
iio_channel_get
iio_channel_release
iio_get_channel_type
ip_local_deliver
ip6_local_out
ip6_route_me_harder
ip_route_me_harder
ipv6_find_hdr
iov_iter_advance
is_ashmem_file
jiffies_64_to_clock_t
kick_process
ktime_get_coarse_real_ts64
mem_cgroup_update_lru_size
memory_cgrp_subsys
memory_cgrp_subsys_enabled_key
mem_cgroup_from_id
mipi_dsi_generic_write
mmc_wait_for_cmd
__mod_lruvec_state
__mod_zone_page_state
nf_ct_attach
nf_ct_delete
nf_register_net_hook
nf_register_net_hooks
nf_unregister_net_hook
nf_unregister_net_hooks
nr_running
of_css
__page_file_index
__page_mapcount
@ -56,6 +68,7 @@
prepare_to_wait_exclusive
proc_symlink
public_key_verify_signature
put_pages_list
radix_tree_lookup_slot
radix_tree_replace_slot
_raw_write_trylock
@ -63,8 +76,10 @@
register_tcf_proto_ops
regulator_map_voltage_linear_range
remove_proc_subtree
root_mem_cgroup
rtc_read_alarm
rtc_set_alarm
__rtnl_link_unregister
sdio_memcpy_fromio
sdio_memcpy_toio
sdio_set_block_size
@ -90,6 +105,9 @@
__traceiter_android_vh_account_process_tick_gran
__traceiter_android_vh_account_task_time
__traceiter_android_vh_do_futex
__traceiter_android_vh_exit_check
__traceiter_android_vh_exit_signal_whether_wake
__traceiter_android_vh_freeze_whether_wake
__traceiter_android_vh_futex_sleep_start
__traceiter_android_vh_futex_wait_end
__traceiter_android_vh_futex_wait_start
@ -98,6 +116,7 @@
__traceiter_android_vh_futex_wake_up_q_finish
__traceiter_android_vh_record_mutex_lock_starttime
__traceiter_android_vh_record_pcpu_rwsem_starttime
__traceiter_android_vh_percpu_rwsem_wq_add
__traceiter_android_vh_record_rtmutex_lock_starttime
__traceiter_android_vh_record_rwsem_lock_starttime
__traceiter_android_vh_alter_mutex_list_add
@ -120,6 +139,7 @@
__traceiter_android_vh_check_folio_look_around_ref
__traceiter_android_vh_dup_task_struct
__traceiter_android_vh_exit_signal
__traceiter_android_vh_killed_process
__traceiter_android_vh_look_around
__traceiter_android_vh_look_around_migrate_folio
__traceiter_android_vh_mem_cgroup_id_remove
@ -140,6 +160,7 @@
__traceiter_android_vh_rwsem_opt_spin_finish
__traceiter_android_vh_rwsem_opt_spin_start
__traceiter_android_vh_rwsem_wake_finish
__traceiter_android_vh_adjust_alloc_flags
__traceiter_android_vh_sched_stat_runtime_rt
__traceiter_android_vh_shrink_node_memcgs
__traceiter_android_vh_sync_txn_recvd
@ -150,6 +171,10 @@
__traceiter_block_rq_issue
__traceiter_block_rq_merge
__traceiter_block_rq_requeue
__traceiter_net_dev_queue
__traceiter_net_dev_xmit
__traceiter_netif_receive_skb
__traceiter_netif_rx
__traceiter_sched_stat_blocked
__traceiter_sched_stat_iowait
__traceiter_sched_stat_runtime
@ -158,6 +183,12 @@
__traceiter_sched_waking
__traceiter_task_rename
__traceiter_android_vh_test_clear_look_around_ref
__traceiter_android_vh_tune_swappiness
__traceiter_android_vh_alloc_oem_binder_struct
__traceiter_android_vh_binder_transaction_received
__traceiter_android_vh_free_oem_binder_struct
__traceiter_android_vh_binder_special_task
__traceiter_android_vh_binder_free_buf
__tracepoint_android_rvh_post_init_entity_util_avg
__tracepoint_android_rvh_rtmutex_force_update
__tracepoint_android_vh_account_process_tick_gran
@ -182,12 +213,16 @@
__tracepoint_android_vh_check_folio_look_around_ref
__tracepoint_android_vh_do_futex
__tracepoint_android_vh_dup_task_struct
__tracepoint_android_vh_exit_check
__tracepoint_android_vh_exit_signal
__tracepoint_android_vh_killed_process
__tracepoint_android_vh_exit_signal_whether_wake
__tracepoint_android_vh_mem_cgroup_id_remove
__tracepoint_android_vh_mem_cgroup_css_offline
__tracepoint_android_vh_mem_cgroup_css_online
__tracepoint_android_vh_mem_cgroup_free
__tracepoint_android_vh_mem_cgroup_alloc
__tracepoint_android_vh_freeze_whether_wake
__tracepoint_android_vh_futex_sleep_start
__tracepoint_android_vh_futex_wait_end
__tracepoint_android_vh_futex_wait_start
@ -206,6 +241,7 @@
__tracepoint_android_vh_mutex_unlock_slowpath
__tracepoint_android_vh_record_mutex_lock_starttime
__tracepoint_android_vh_record_pcpu_rwsem_starttime
__tracepoint_android_vh_percpu_rwsem_wq_add
__tracepoint_android_vh_record_rtmutex_lock_starttime
__tracepoint_android_vh_record_rwsem_lock_starttime
__tracepoint_android_vh_rtmutex_waiter_prio
@ -213,17 +249,23 @@
__tracepoint_android_vh_rwsem_opt_spin_finish
__tracepoint_android_vh_rwsem_opt_spin_start
__tracepoint_android_vh_rwsem_wake_finish
__tracepoint_android_vh_adjust_alloc_flags
__tracepoint_android_vh_sched_stat_runtime_rt
__tracepoint_android_vh_shrink_node_memcgs
__tracepoint_android_vh_sync_txn_recvd
__tracepoint_android_vh_task_blocks_on_rtmutex
__tracepoint_android_vh_test_clear_look_around_ref
__tracepoint_android_vh_tune_swappiness
__tracepoint_block_bio_queue
__tracepoint_block_getrq
__tracepoint_block_rq_complete
__tracepoint_block_rq_issue
__tracepoint_block_rq_merge
__tracepoint_block_rq_requeue
__tracepoint_net_dev_queue
__tracepoint_net_dev_xmit
__tracepoint_netif_receive_skb
__tracepoint_netif_rx
__tracepoint_sched_stat_blocked
__tracepoint_sched_stat_iowait
__tracepoint_sched_stat_runtime
@ -231,6 +273,11 @@
__tracepoint_sched_stat_wait
__tracepoint_sched_waking
__tracepoint_task_rename
__tracepoint_android_vh_alloc_oem_binder_struct
__tracepoint_android_vh_binder_transaction_received
__tracepoint_android_vh_free_oem_binder_struct
__tracepoint_android_vh_binder_special_task
__tracepoint_android_vh_binder_free_buf
__trace_puts
try_to_free_mem_cgroup_pages
typec_mux_get_drvdata
@ -240,5 +287,6 @@
wait_for_completion_io_timeout
wait_for_completion_killable_timeout
wakeup_source_remove
wake_up_state
wq_worker_comm
zero_pfn

View File

@ -1,4 +1,5 @@
[abi_symbol_list]
activate_task
add_cpu
add_timer
add_timer_on
@ -30,6 +31,7 @@
__arch_clear_user
__arch_copy_from_user
__arch_copy_to_user
arch_freq_scale
arch_timer_read_counter
argv_free
argv_split
@ -42,6 +44,7 @@
atomic_notifier_chain_register
atomic_notifier_chain_unregister
autoremove_wake_function
available_idle_cpu
backlight_device_set_brightness
badblocks_check
badblocks_clear
@ -49,13 +52,19 @@
badblocks_init
badblocks_set
badblocks_show
balance_push_callback
bcmp
bdev_end_io_acct
bdev_nr_zones
bdev_start_io_acct
bin2hex
bio_add_page
bio_alloc_bioset
bio_chain
bio_endio
bio_end_io_acct_remapped
bio_init
bio_put
bio_start_io_acct
__bitmap_and
__bitmap_andnot
@ -76,6 +85,9 @@
bitmap_zalloc
blk_abort_request
__blk_alloc_disk
blk_check_plugged
blkdev_get_by_dev
blkdev_put
blk_execute_rq_nowait
__blk_mq_alloc_disk
blk_mq_alloc_tag_set
@ -114,6 +126,8 @@
blocking_notifier_chain_unregister
bpf_trace_run1
bpf_trace_run10
bpf_trace_run11
bpf_trace_run12
bpf_trace_run2
bpf_trace_run3
bpf_trace_run4
@ -143,11 +157,13 @@
cdev_device_del
cdev_init
__check_object_size
check_preempt_curr
__class_create
class_destroy
class_interface_unregister
__class_register
class_unregister
cleancache_register_ops
clear_page
__ClearPageMovable
clk_disable
@ -204,22 +220,36 @@
_copy_from_iter
__copy_overflow
_copy_to_iter
__cpu_active_mask
cpu_all_bits
cpu_bit_bitmap
cpufreq_add_update_util_hook
cpufreq_cpu_get
cpufreq_cpu_get_raw
cpufreq_cpu_put
cpufreq_disable_fast_switch
cpufreq_driver_fast_switch
cpufreq_driver_resolve_freq
__cpufreq_driver_target
cpufreq_driver_target
cpufreq_enable_fast_switch
cpufreq_freq_transition_begin
cpufreq_freq_transition_end
cpufreq_frequency_table_verify
cpufreq_generic_attr
cpufreq_get
cpufreq_get_policy
cpufreq_policy_transition_delay_us
cpufreq_quick_get
cpufreq_register_driver
cpufreq_register_governor
cpufreq_register_notifier
cpufreq_remove_update_util_hook
cpufreq_table_index_unsorted
cpufreq_this_cpu_can_update
cpufreq_update_util_data
cpu_hotplug_disable
cpu_hotplug_enable
__cpuhp_remove_state
__cpuhp_setup_state
__cpuhp_setup_state_cpuslocked
@ -227,15 +257,19 @@
__cpuhp_state_remove_instance
cpuhp_tasks_frozen
cpu_hwcaps
cpuidle_driver_state_disabled
cpuidle_get_driver
cpu_latency_qos_add_request
cpu_latency_qos_remove_request
cpu_latency_qos_update_request
cpumask_local_spread
cpu_number
__cpu_online_mask
cpu_pm_register_notifier
cpu_pm_unregister_notifier
__cpu_possible_mask
__cpu_present_mask
cpupri_find_fitness
cpu_scale
cpus_read_lock
cpus_read_unlock
@ -275,6 +309,7 @@
csum_partial
csum_tcpudp_nofold
_ctype
deactivate_task
debugfs_attr_read
debugfs_attr_write
debugfs_create_atomic_t
@ -282,6 +317,7 @@
debugfs_create_devm_seqfile
debugfs_create_dir
debugfs_create_file
debugfs_create_file_unsafe
debugfs_create_size_t
debugfs_create_symlink
debugfs_create_u16
@ -326,6 +362,7 @@
__dev_get_by_index
dev_get_by_index
dev_get_by_name
dev_get_stats
device_add
device_add_disk
device_add_groups
@ -340,6 +377,7 @@
device_get_child_node_count
device_get_dma_attr
device_get_match_data
device_get_named_child_node
device_get_next_child_node
device_initialize
device_link_add
@ -394,13 +432,17 @@
devm_ioremap_resource
devm_ioremap_wc
devm_iounmap
__devm_irq_alloc_descs
devm_kasprintf
devm_kfree
devm_kmalloc
devm_kmemdup
devm_krealloc
devm_kstrdup
devm_kstrdup_const
devm_led_classdev_register_ext
devm_memremap
devm_memunmap
devm_mfd_add_devices
devm_nvmem_register
__devm_of_phy_provider_register
@ -420,18 +462,21 @@
__devm_regmap_init
__devm_regmap_init_i2c
__devm_regmap_init_spi
__devm_regmap_init_spmi_ext
devm_regulator_bulk_get
devm_regulator_get
devm_regulator_get_exclusive
devm_regulator_get_optional
devm_regulator_put
devm_regulator_register
devm_request_any_context_irq
__devm_request_region
devm_request_threaded_irq
devm_rtc_device_register
devm_snd_soc_register_component
devm_thermal_of_cooling_device_register
devm_thermal_of_zone_register
devm_thermal_of_zone_unregister
devm_usb_get_phy_by_phandle
_dev_notice
dev_pm_domain_attach_by_name
@ -459,6 +504,7 @@
__devres_alloc_node
devres_free
dev_set_name
dev_vprintk_emit
_dev_warn
disable_irq
disable_irq_nosync
@ -486,6 +532,7 @@
dmabuf_page_pool_free
dmabuf_page_pool_get_size
dma_buf_put
dma_buf_set_name
dma_buf_unmap_attachment
dma_buf_vmap
dma_buf_vunmap
@ -542,19 +589,25 @@
drain_workqueue
driver_register
driver_unregister
drm_add_edid_modes
drm_add_modes_noedid
drm_atomic_add_affected_connectors
drm_atomic_add_affected_planes
drm_atomic_bridge_chain_disable
drm_atomic_bridge_chain_post_disable
drm_atomic_commit
drm_atomic_get_connector_state
drm_atomic_get_crtc_state
drm_atomic_get_new_connector_for_encoder
drm_atomic_get_new_private_obj_state
drm_atomic_get_old_connector_for_encoder
drm_atomic_get_old_private_obj_state
drm_atomic_get_plane_state
drm_atomic_get_private_obj_state
drm_atomic_helper_bridge_destroy_state
drm_atomic_helper_bridge_duplicate_state
drm_atomic_helper_bridge_reset
drm_atomic_helper_calc_timestamping_constants
drm_atomic_helper_check_modeset
drm_atomic_helper_check_planes
drm_atomic_helper_check_plane_state
@ -567,7 +620,10 @@
drm_atomic_helper_commit_planes
drm_atomic_helper_commit_tail
__drm_atomic_helper_connector_destroy_state
drm_atomic_helper_connector_destroy_state
__drm_atomic_helper_connector_duplicate_state
drm_atomic_helper_connector_duplicate_state
drm_atomic_helper_connector_reset
__drm_atomic_helper_crtc_destroy_state
__drm_atomic_helper_crtc_duplicate_state
__drm_atomic_helper_crtc_reset
@ -583,6 +639,7 @@
drm_atomic_helper_setup_commit
drm_atomic_helper_shutdown
drm_atomic_helper_swap_state
drm_atomic_helper_update_legacy_modeset_state
drm_atomic_helper_update_plane
drm_atomic_helper_wait_for_dependencies
drm_atomic_helper_wait_for_flip_done
@ -610,12 +667,17 @@
drm_connector_list_iter_next
drm_connector_register
drm_connector_unregister
drm_connector_update_edid_property
drm_crtc_add_crc_entry
drm_crtc_arm_vblank_event
drm_crtc_cleanup
__drm_crtc_commit_free
drm_crtc_commit_wait
drm_crtc_enable_color_mgmt
drm_crtc_handle_vblank
drm_crtc_init_with_planes
drm_crtc_send_vblank_event
drm_crtc_vblank_count
drm_crtc_vblank_count_and_time
drm_crtc_vblank_get
drm_crtc_vblank_off
@ -623,10 +685,20 @@
drm_crtc_vblank_put
drm_crtc_wait_one_vblank
___drm_dbg
__drm_debug
drm_detect_monitor_audio
__drm_dev_dbg
drm_dev_printk
drm_dev_put
drm_dev_register
drm_dev_unregister
drm_display_mode_from_cea_vic
drm_display_mode_to_videomode
drm_do_get_edid
drm_edid_duplicate
drm_edid_get_monitor_name
drm_edid_is_valid
drm_edid_to_sad
drm_encoder_cleanup
drm_encoder_init
__drm_err
@ -648,6 +720,7 @@
drm_gem_private_object_init
drm_gem_vm_close
drm_gem_vm_open
drm_get_edid
drm_get_format_info
drm_helper_mode_fill_fb_struct
drm_helper_probe_single_connector_modes
@ -655,10 +728,13 @@
drm_kms_helper_hotplug_event
drm_kms_helper_poll_fini
drm_kms_helper_poll_init
drm_match_cea_mode
drmm_kmalloc
drmm_mode_config_init
drm_mode_config_reset
drm_mode_convert_to_umode
drm_mode_copy
drm_mode_destroy
drm_mode_duplicate
drm_mode_equal
drm_mode_equal_no_clocks
@ -672,9 +748,11 @@
drm_modeset_drop_locks
drm_modeset_lock
drm_modeset_lock_all_ctx
drm_modeset_lock_single_interruptible
drm_modeset_unlock
drm_mode_vrefresh
drm_object_attach_property
drm_object_property_set_value
drm_open
drm_panel_add
drm_panel_disable
@ -724,10 +802,13 @@
dump_backtrace
dump_stack
dw_handle_msi_irq
dw_pcie_find_capability
dw_pcie_host_init
dw_pcie_read
dw_pcie_read_dbi
dw_pcie_setup_rc
dw_pcie_write
dw_pcie_write_dbi
__dynamic_dev_dbg
__dynamic_pr_debug
em_cpu_get
@ -755,6 +836,9 @@
__fdget
fd_install
fget
file_path
filp_close
filp_open_block
find_extend_vma
_find_first_and_bit
_find_first_bit
@ -765,12 +849,14 @@
_find_next_bit
_find_next_zero_bit
find_pid_ns
find_task_by_vpid
find_vma_intersection
finish_wait
flush_dcache_page
flush_delayed_work
flush_work
__flush_workqueue
__folio_lock
__folio_put
folio_wait_bit
fortify_panic
@ -791,6 +877,9 @@
freq_qos_add_request
freq_qos_remove_request
freq_qos_update_request
fs_bio_set
fsnotify
__fsnotify_parent
full_name_hash
fwnode_get_name
fwnode_gpiod_get_index
@ -823,6 +912,22 @@
get_cpu_iowait_time_us
get_device
__get_free_pages
get_governor_parent_kobj
gether_cleanup
gether_connect
gether_disconnect
gether_get_dev_addr
gether_get_host_addr
gether_get_host_addr_u8
gether_get_ifname
gether_get_qmult
gether_register_netdev
gether_set_dev_addr
gether_set_gadget
gether_set_host_addr
gether_set_ifname
gether_set_qmult
gether_setup_name_default
get_net_ns_by_fd
get_net_ns_by_pid
get_pid_task
@ -832,6 +937,8 @@
__get_random_u32_below
get_random_u8
get_sg_io_hdr
__get_task_comm
get_task_cred
get_thermal_instance
get_unused_fd_flags
get_user_pages
@ -839,6 +946,10 @@
get_vaddr_frames
gic_nonsecure_priorities
glob_match
gov_attr_set_get
gov_attr_set_init
gov_attr_set_put
governor_sysfs_ops
gpiochip_generic_config
gpiochip_generic_free
gpiochip_generic_request
@ -871,6 +982,7 @@
handle_simple_irq
handle_sysrq
hashlen_string
have_governor_per_policy
hex2bin
hex_dump_to_buffer
hex_to_bin
@ -888,6 +1000,7 @@
hwrng_register
hwrng_unregister
i2c_adapter_type
i2c_add_adapter
i2c_add_numbered_adapter
i2c_bus_type
i2c_del_adapter
@ -965,6 +1078,7 @@
interval_tree_iter_first
interval_tree_iter_next
interval_tree_remove
int_pow
int_sqrt
int_to_scsilun
iomem_resource
@ -1015,7 +1129,9 @@
irq_domain_get_irq_data
irq_domain_remove
irq_domain_set_info
irq_domain_simple_ops
irq_domain_xlate_twocell
irq_force_affinity
irq_get_irq_data
irq_modify_status
irq_of_parse_and_map
@ -1027,6 +1143,8 @@
irq_set_irq_type
irq_set_irq_wake
irq_to_desc
irq_work_queue
irq_work_sync
is_vmalloc_addr
jiffies
jiffies64_to_msecs
@ -1039,6 +1157,7 @@
kernel_param_lock
kernel_param_unlock
kernel_restart
kernfs_path_from_node
key_create_or_update
key_put
keyring_alloc
@ -1064,6 +1183,7 @@
kmem_cache_destroy
kmem_cache_free
kmemdup
kmemdup_nul
kobject_add
kobject_create_and_add
kobject_del
@ -1154,6 +1274,7 @@
mbox_request_channel
mbox_send_message
memchr
memchr_inv
memcmp
memcpy
__memcpy_fromio
@ -1202,6 +1323,7 @@
__msecs_to_jiffies
msleep
msleep_interruptible
mtree_load
__mutex_init
mutex_is_locked
mutex_lock
@ -1238,6 +1360,8 @@
netlink_unregister_notifier
net_ns_type_operations
net_ratelimit
nf_register_net_hooks
nf_unregister_net_hooks
nla_find
nla_memcpy
__nla_parse
@ -1252,6 +1376,7 @@
noop_llseek
nr_cpu_ids
nr_irqs
ns_capable
nsec_to_clock_t
ns_to_timespec64
__num_online_cpus
@ -1284,6 +1409,7 @@
of_find_node_by_phandle
of_find_node_by_type
of_find_node_opts_by_path
of_find_node_with_property
of_find_property
of_fwnode_ops
of_genpd_add_provider_simple
@ -1495,17 +1621,23 @@
prepare_to_wait_event
print_hex_dump
_printk
_printk_deferred
proc_create
proc_create_data
proc_create_single_data
proc_dointvec
proc_dostring
proc_douintvec_minmax
proc_mkdir
proc_mkdir_data
proc_remove
proc_set_size
proc_symlink
pskb_expand_head
__pskb_pull_tail
___pskb_trim
push_cpu_stop
__put_cred
put_device
put_disk
put_iova_domain
@ -1537,6 +1669,8 @@
_raw_spin_lock_bh
_raw_spin_lock_irq
_raw_spin_lock_irqsave
raw_spin_rq_lock_nested
raw_spin_rq_unlock
_raw_spin_trylock
_raw_spin_unlock
_raw_spin_unlock_bh
@ -1634,7 +1768,10 @@
__request_percpu_irq
__request_region
request_threaded_irq
resched_curr
reserve_iova
return_address
reweight_task
rfkill_alloc
rfkill_blocked
rfkill_destroy
@ -1651,6 +1788,7 @@
rht_bucket_nested_insert
__root_device_register
root_device_unregister
root_task_group
round_jiffies
round_jiffies_relative
round_jiffies_up
@ -1668,12 +1806,18 @@
rt_mutex_unlock
rtnl_is_locked
rtnl_lock
rtnl_trylock
rtnl_unlock
runqueues
sched_clock
sched_feat_keys
sched_setattr_nocheck
sched_set_fifo
sched_set_normal
sched_setscheduler
sched_setscheduler_nocheck
sched_show_task
sched_uclamp_used
schedule
schedule_timeout
schedule_timeout_interruptible
@ -1722,6 +1866,7 @@
set_page_dirty
set_page_dirty_lock
__SetPageMovable
set_task_cpu
set_user_nice
sg_alloc_table
sg_alloc_table_from_pages_segment
@ -1789,6 +1934,7 @@
snd_jack_set_key
snd_pcm_format_physical_width
snd_pcm_format_width
snd_pcm_hw_constraint_integer
snd_pcm_hw_constraint_list
snd_pcm_lib_free_pages
snd_pcm_lib_ioctl
@ -1845,6 +1991,7 @@
snd_soc_register_card
snd_soc_register_component
snd_soc_runtime_set_dai_fmt
snd_soc_set_runtime_hwparams
snd_soc_unregister_card
snd_soc_unregister_component
snprintf
@ -1867,6 +2014,10 @@
spi_sync
spi_sync_locked
spi_unregister_controller
spmi_controller_add
spmi_controller_alloc
spmi_controller_remove
__spmi_driver_register
sprintf
sprint_symbol
srcu_init_notifier_head
@ -1876,9 +2027,11 @@
sscanf
__stack_chk_fail
static_key_disable
static_key_enable
static_key_slow_dec
static_key_slow_inc
stop_machine
stop_one_cpu_nowait
strcasecmp
strcat
strchr
@ -1892,6 +2045,7 @@
strlen
strncasecmp
strncat
strnchr
strncmp
strncpy
strncpy_from_user
@ -1904,6 +2058,8 @@
strsep
strspn
strstr
submit_bio
submit_bio_wait
subsys_system_register
suspend_set_ops
__sw_hweight16
@ -1917,6 +2073,8 @@
synchronize_net
synchronize_rcu
syscon_regmap_lookup_by_phandle
sysctl_sched_features
sysctl_sched_latency
sysfs_add_file_to_group
sysfs_add_link_to_group
sysfs_create_file_ns
@ -1946,12 +2104,14 @@
system_wq
sys_tz
task_active_pid_ns
__tasklet_hi_schedule
tasklet_init
tasklet_kill
__tasklet_schedule
tasklet_setup
tasklet_unlock_wait
__task_pid_nr_ns
task_rq_lock
tcpci_get_tcpm_port
tcpci_irq
tcpci_register_port
@ -1962,17 +2122,25 @@
tcpm_pd_transmit_complete
tcpm_port_clean
tcpm_port_is_toggling
tcpm_register_port
tcpm_sink_frs
tcpm_sourcing_vbus
tcpm_tcpc_reset
tcpm_unregister_port
tcpm_vbus_change
teo_cpu_get_util_threshold
teo_cpu_set_util_threshold
thermal_cdev_update
thermal_cooling_device_unregister
thermal_of_cooling_device_register
thermal_pressure
thermal_zone_device_disable
thermal_zone_device_enable
thermal_zone_device_register
thermal_zone_device_unregister
thermal_zone_device_update
thermal_zone_get_temp
thermal_zone_get_zone_by_name
thread_group_cputime_adjusted
time64_to_tm
topology_update_thermal_pressure
@ -1986,17 +2154,69 @@
trace_event_raw_init
trace_event_reg
trace_handle_return
__traceiter_android_rvh_attach_entity_load_avg
__traceiter_android_rvh_audio_usb_offload_disconnect
__traceiter_android_rvh_can_migrate_task
__traceiter_android_rvh_cgroup_force_kthread_migration
__traceiter_android_rvh_check_preempt_wakeup
__traceiter_android_rvh_cpu_overutilized
__traceiter_android_rvh_dequeue_task
__traceiter_android_rvh_dequeue_task_fair
__traceiter_android_rvh_detach_entity_load_avg
__traceiter_android_rvh_enqueue_task
__traceiter_android_rvh_enqueue_task_fair
__traceiter_android_rvh_find_lowest_rq
__traceiter_android_rvh_irqs_disable
__traceiter_android_rvh_irqs_enable
__traceiter_android_rvh_post_init_entity_util_avg
__traceiter_android_rvh_preempt_disable
__traceiter_android_rvh_preempt_enable
__traceiter_android_rvh_prepare_prio_fork
__traceiter_android_rvh_remove_entity_load_avg
__traceiter_android_rvh_rtmutex_prepare_setprio
__traceiter_android_rvh_sched_newidle_balance
__traceiter_android_rvh_select_task_rq_fair
__traceiter_android_rvh_select_task_rq_rt
__traceiter_android_rvh_set_cpus_allowed_by_task
__traceiter_android_rvh_set_iowait
__traceiter_android_rvh_setscheduler
__traceiter_android_rvh_set_task_cpu
__traceiter_android_rvh_set_user_nice
__traceiter_android_rvh_typec_tcpci_get_vbus
__traceiter_android_rvh_uclamp_eff_get
__traceiter_android_rvh_update_blocked_fair
__traceiter_android_rvh_update_load_avg
__traceiter_android_rvh_update_misfit_status
__traceiter_android_rvh_update_rt_rq_load_avg
__traceiter_android_vh_arch_set_freq_scale
__traceiter_android_vh_audio_usb_offload_connect
__traceiter_android_vh_binder_restore_priority
__traceiter_android_vh_binder_set_priority
__traceiter_android_vh_cpu_idle_enter
__traceiter_android_vh_cpu_idle_exit
__traceiter_android_vh_dump_throttled_rt_tasks
__traceiter_android_vh_dup_task_struct
__traceiter_android_vh_early_resume_begin
__traceiter_android_vh_enable_thermal_genl_check
__traceiter_android_vh_filemap_get_folio
__traceiter_android_vh_ipi_stop
__traceiter_android_vh_meminfo_proc_show
__traceiter_android_vh_mm_compaction_begin
__traceiter_android_vh_mm_compaction_end
__traceiter_android_vh_prio_inheritance
__traceiter_android_vh_prio_restore
__traceiter_android_vh_resume_end
__traceiter_android_vh_rmqueue
__traceiter_android_vh_scheduler_tick
__traceiter_android_vh_setscheduler_uclamp
__traceiter_android_vh_si_meminfo_adjust
__traceiter_android_vh_sysrq_crash
__traceiter_android_vh_typec_store_partner_src_caps
__traceiter_android_vh_typec_tcpci_override_toggling
__traceiter_android_vh_typec_tcpm_get_timer
__traceiter_android_vh_typec_tcpm_log
__traceiter_android_vh_typec_tcpm_modify_src_caps
__traceiter_android_vh_uclamp_validate
__traceiter_android_vh_ufs_check_int_errors
__traceiter_android_vh_ufs_compl_command
__traceiter_android_vh_ufs_fill_prdt
@ -2006,7 +2226,9 @@
__traceiter_android_vh_ufs_send_uic_command
__traceiter_android_vh_ufs_update_sdev
__traceiter_android_vh_ufs_update_sysfs
__traceiter_android_vh_use_amu_fie
__traceiter_clock_set_rate
__traceiter_cpu_frequency
__traceiter_device_pm_callback_end
__traceiter_device_pm_callback_start
__traceiter_gpu_mem_total
@ -2017,22 +2239,86 @@
__traceiter_mmap_lock_acquire_returned
__traceiter_mmap_lock_released
__traceiter_mmap_lock_start_locking
__traceiter_mm_vmscan_direct_reclaim_begin
__traceiter_mm_vmscan_direct_reclaim_end
__traceiter_pelt_cfs_tp
__traceiter_pelt_dl_tp
__traceiter_pelt_irq_tp
__traceiter_pelt_rt_tp
__traceiter_pelt_se_tp
__traceiter_sched_cpu_capacity_tp
__traceiter_sched_overutilized_tp
__traceiter_sched_switch
__traceiter_sched_util_est_cfs_tp
__traceiter_sched_util_est_se_tp
__traceiter_sched_wakeup
__traceiter_suspend_resume
__traceiter_workqueue_execute_end
__traceiter_workqueue_execute_start
trace_output_call
__tracepoint_android_rvh_attach_entity_load_avg
__tracepoint_android_rvh_audio_usb_offload_disconnect
__tracepoint_android_rvh_can_migrate_task
__tracepoint_android_rvh_cgroup_force_kthread_migration
__tracepoint_android_rvh_check_preempt_wakeup
__tracepoint_android_rvh_cpu_overutilized
__tracepoint_android_rvh_dequeue_task
__tracepoint_android_rvh_dequeue_task_fair
__tracepoint_android_rvh_detach_entity_load_avg
__tracepoint_android_rvh_enqueue_task
__tracepoint_android_rvh_enqueue_task_fair
__tracepoint_android_rvh_find_lowest_rq
__tracepoint_android_rvh_irqs_disable
__tracepoint_android_rvh_irqs_enable
__tracepoint_android_rvh_post_init_entity_util_avg
__tracepoint_android_rvh_preempt_disable
__tracepoint_android_rvh_preempt_enable
__tracepoint_android_rvh_prepare_prio_fork
__tracepoint_android_rvh_remove_entity_load_avg
__tracepoint_android_rvh_rtmutex_prepare_setprio
__tracepoint_android_rvh_sched_newidle_balance
__tracepoint_android_rvh_select_task_rq_fair
__tracepoint_android_rvh_select_task_rq_rt
__tracepoint_android_rvh_set_cpus_allowed_by_task
__tracepoint_android_rvh_set_iowait
__tracepoint_android_rvh_setscheduler
__tracepoint_android_rvh_set_task_cpu
__tracepoint_android_rvh_set_user_nice
__tracepoint_android_rvh_typec_tcpci_get_vbus
__tracepoint_android_rvh_uclamp_eff_get
__tracepoint_android_rvh_update_blocked_fair
__tracepoint_android_rvh_update_load_avg
__tracepoint_android_rvh_update_misfit_status
__tracepoint_android_rvh_update_rt_rq_load_avg
__tracepoint_android_vh_arch_set_freq_scale
__tracepoint_android_vh_audio_usb_offload_connect
__tracepoint_android_vh_binder_restore_priority
__tracepoint_android_vh_binder_set_priority
__tracepoint_android_vh_cpu_idle_enter
__tracepoint_android_vh_cpu_idle_exit
__tracepoint_android_vh_dump_throttled_rt_tasks
__tracepoint_android_vh_dup_task_struct
__tracepoint_android_vh_early_resume_begin
__tracepoint_android_vh_enable_thermal_genl_check
__tracepoint_android_vh_filemap_get_folio
__tracepoint_android_vh_ipi_stop
__tracepoint_android_vh_meminfo_proc_show
__tracepoint_android_vh_mm_compaction_begin
__tracepoint_android_vh_mm_compaction_end
__tracepoint_android_vh_prio_inheritance
__tracepoint_android_vh_prio_restore
__tracepoint_android_vh_resume_end
__tracepoint_android_vh_rmqueue
__tracepoint_android_vh_scheduler_tick
__tracepoint_android_vh_setscheduler_uclamp
__tracepoint_android_vh_si_meminfo_adjust
__tracepoint_android_vh_sysrq_crash
__tracepoint_android_vh_typec_store_partner_src_caps
__tracepoint_android_vh_typec_tcpci_override_toggling
__tracepoint_android_vh_typec_tcpm_get_timer
__tracepoint_android_vh_typec_tcpm_log
__tracepoint_android_vh_typec_tcpm_modify_src_caps
__tracepoint_android_vh_uclamp_validate
__tracepoint_android_vh_ufs_check_int_errors
__tracepoint_android_vh_ufs_compl_command
__tracepoint_android_vh_ufs_fill_prdt
@@ -2042,7 +2328,9 @@
__tracepoint_android_vh_ufs_send_uic_command
__tracepoint_android_vh_ufs_update_sdev
__tracepoint_android_vh_ufs_update_sysfs
__tracepoint_android_vh_use_amu_fie
__tracepoint_clock_set_rate
__tracepoint_cpu_frequency
__tracepoint_device_pm_callback_end
__tracepoint_device_pm_callback_start
__tracepoint_gpu_mem_total
@@ -2053,9 +2341,21 @@
__tracepoint_mmap_lock_acquire_returned
__tracepoint_mmap_lock_released
__tracepoint_mmap_lock_start_locking
__tracepoint_mm_vmscan_direct_reclaim_begin
__tracepoint_mm_vmscan_direct_reclaim_end
__tracepoint_pelt_cfs_tp
__tracepoint_pelt_dl_tp
__tracepoint_pelt_irq_tp
__tracepoint_pelt_rt_tp
__tracepoint_pelt_se_tp
tracepoint_probe_register
tracepoint_probe_unregister
__tracepoint_sched_cpu_capacity_tp
__tracepoint_sched_overutilized_tp
__tracepoint_sched_switch
__tracepoint_sched_util_est_cfs_tp
__tracepoint_sched_util_est_se_tp
__tracepoint_sched_wakeup
__tracepoint_suspend_resume
__tracepoint_workqueue_execute_end
__tracepoint_workqueue_execute_start
@@ -2093,8 +2393,10 @@
uart_unregister_driver
uart_update_timeout
uart_write_wakeup
uclamp_eff_value
__udelay
udp4_hwcsum
ufshcd_auto_hibern8_update
ufshcd_bkops_ctrl
ufshcd_hold
ufshcd_pltfrm_init
@@ -2130,29 +2432,42 @@
unregister_virtio_driver
up
update_devfreq
___update_load_avg
___update_load_sum
update_rq_clock
up_read
up_write
usb_add_function
usb_add_hcd
usb_assign_descriptors
usb_copy_descriptors
__usb_create_hcd
usb_disabled
usb_enable_autosuspend
usb_ep_alloc_request
usb_ep_autoconfig
usb_ep_disable
usb_ep_enable
usb_ep_free_request
usb_ep_queue
usb_free_all_descriptors
usb_function_register
usb_function_unregister
usb_gadget_activate
usb_gadget_deactivate
usb_gadget_set_state
usb_gstrings_attach
usb_hcd_is_primary_hcd
usb_hcd_platform_shutdown
usb_hub_find_child
usb_interface_id
usb_os_desc_prepare_interf_dir
usb_otg_state_string
usb_put_function_instance
usb_put_hcd
usb_register_notify
usb_remove_hcd
usb_role_string
usb_role_switch_get_drvdata
usb_role_switch_register
usb_role_switch_unregister
@@ -2247,6 +2562,7 @@
vmalloc_user
vmap
vmf_insert_pfn_prot
vm_iomap_memory
vprintk
vprintk_emit
vring_del_virtqueue
@@ -2278,15 +2594,20 @@
wireless_nlevent_flush
woken_wake_function
work_busy
__write_overflow_field
__xa_alloc
xa_clear_mark
xa_destroy
__xa_erase
xa_erase
xa_find
xa_find_after
xa_get_mark
xa_load
xa_set_mark
xas_find
xas_pause
__xa_store
__xfrm_state_destroy
xfrm_state_lookup_byspi
xfrm_stateonly_find
@@ -2294,7 +2615,18 @@
xhci_bus_resume
xhci_bus_suspend
xhci_gen_setup
xhci_get_endpoint_index
xhci_init_driver
xhci_resume
xhci_run
xhci_suspend
zs_compact
zs_create_pool
zs_destroy_pool
zs_free
zs_get_total_pages
zs_huge_class_size
zs_malloc
zs_map_object
zs_pool_stats
zs_unmap_object


@@ -73,6 +73,7 @@
bin2hex
bio_endio
bio_end_io_acct_remapped
bio_split
bio_start_io_acct
bitmap_allocate_region
__bitmap_and
@@ -94,6 +95,8 @@
bit_wait_timeout
__blk_alloc_disk
blkdev_get_by_dev
blk_crypto_keyslot_index
blk_crypto_register
blk_execute_rq
blk_execute_rq_nowait
__blk_mq_alloc_disk
@@ -342,6 +345,7 @@
copy_from_kernel_nofault
copy_page
__copy_overflow
_copy_to_iter
__cpu_active_mask
cpu_bit_bitmap
@@ -667,6 +671,7 @@
devm_rtc_allocate_device
__devm_rtc_register_device
devm_snd_soc_register_card
devm_snd_soc_register_component
devm_thermal_of_cooling_device_register
devm_thermal_of_zone_register
devm_usb_get_phy_by_node
@@ -735,6 +740,22 @@
divider_recalc_rate
divider_ro_round_rate_parent
divider_round_rate_parent
dm_bufio_client_create
dm_bufio_client_destroy
dm_bufio_mark_buffer_dirty
dm_bufio_new
dm_bufio_read
dm_bufio_release
dm_bufio_write_dirty_buffers
dm_disk
dm_get_device
dm_kobject_release
dm_read_arg_group
dm_register_target
dm_shift_arg
dm_table_get_md
dm_table_get_mode
dm_unregister_target
dma_alloc_attrs
dma_alloc_noncontiguous
dma_alloc_pages
@@ -854,8 +875,11 @@
drm_atomic_helper_commit_modeset_enables
drm_atomic_helper_commit_planes
__drm_atomic_helper_connector_destroy_state
drm_atomic_helper_connector_destroy_state
__drm_atomic_helper_connector_duplicate_state
drm_atomic_helper_connector_duplicate_state
__drm_atomic_helper_connector_reset
drm_atomic_helper_connector_reset
__drm_atomic_helper_crtc_destroy_state
__drm_atomic_helper_crtc_duplicate_state
drm_atomic_helper_dirtyfb
@@ -929,6 +953,7 @@
drm_dev_register
drm_dev_unregister
drm_display_mode_from_cea_vic
drm_do_get_edid
drm_edid_duplicate
drm_edid_get_monitor_name
drm_edid_is_valid
@@ -1331,6 +1356,7 @@
hci_uart_unregister_device
hci_unregister_cb
hci_unregister_dev
hdmi_audio_infoframe_init
hex2bin
hex_asc_upper
hex_dump_to_buffer
@@ -1900,9 +1926,13 @@
migrate_pages
migrate_swap
__migrate_task
mipi_dsi_attach
mipi_dsi_create_packet
mipi_dsi_dcs_set_display_brightness
mipi_dsi_dcs_set_tear_off
mipi_dsi_detach
mipi_dsi_device_register_full
mipi_dsi_device_unregister
mipi_dsi_host_register
mipi_dsi_host_unregister
misc_deregister
@@ -2202,6 +2232,7 @@
page_ext_put
page_is_ram
page_mapping
page_owner_inited
page_pinner_inited
__page_pinner_put_page
page_pool_alloc_pages
@@ -2414,6 +2445,8 @@
__pm_runtime_use_autosuspend
__pm_stay_awake
pm_stay_awake
pm_suspend_global_flags
pm_suspend_target_state
pm_system_wakeup
pm_wakeup_dev_event
pm_wakeup_ws_event
@@ -2877,6 +2910,7 @@
set_normalized_timespec64
set_page_dirty_lock
__SetPageMovable
__set_page_owner
set_task_cpu
setup_udp_tunnel_sock
set_user_nice
@@ -2980,6 +3014,8 @@
smp_call_function_single
smp_call_function_single_async
snapshot_get_image_size
snd_ctl_add
snd_ctl_new1
snd_ctl_remove
snd_hwdep_new
snd_info_create_card_entry
@@ -2988,7 +3024,12 @@
snd_info_register
snd_interval_refine
snd_jack_set_key
snd_pcm_add_chmap_ctls
snd_pcm_create_iec958_consumer_default
snd_pcm_fill_iec958_consumer
snd_pcm_fill_iec958_consumer_hw_params
snd_pcm_format_width
snd_pcm_hw_constraint_eld
_snd_pcm_hw_params_any
snd_pcm_set_managed_buffer
snd_pcm_std_chmaps
@@ -3269,6 +3310,7 @@
__traceiter_android_rvh_before_do_sched_yield
__traceiter_android_rvh_build_perf_domains
__traceiter_android_rvh_can_migrate_task
__traceiter_android_rvh_cgroup_force_kthread_migration
__traceiter_android_rvh_check_preempt_tick
__traceiter_android_rvh_check_preempt_wakeup
__traceiter_android_rvh_check_preempt_wakeup_ignore
@@ -3340,11 +3382,13 @@
__traceiter_android_rvh_update_thermal_stats
__traceiter_android_rvh_util_est_update
__traceiter_android_rvh_wake_up_new_task
__traceiter_android_vh_alter_mutex_list_add
__traceiter_android_vh_audio_usb_offload_connect
__traceiter_android_vh_binder_restore_priority
__traceiter_android_vh_binder_set_priority
__traceiter_android_vh_binder_wakeup_ilocked
__traceiter_android_vh_build_sched_domains
__traceiter_android_vh_bus_iommu_probe
__traceiter_android_vh_check_hibernation_swap
__traceiter_android_vh_check_uninterrupt_tasks
__traceiter_android_vh_check_uninterrupt_tasks_done
@@ -3378,6 +3422,7 @@
__traceiter_android_vh_rproc_recovery_set
__traceiter_android_vh_save_cpu_resume
__traceiter_android_vh_save_hib_resume_bdev
__traceiter_android_vh_scan_abort_check_wmarks
__traceiter_android_vh_scheduler_tick
__traceiter_android_vh_setscheduler_uclamp
__traceiter_android_vh_show_resume_epoch_val
@@ -3413,6 +3458,7 @@
__tracepoint_android_rvh_before_do_sched_yield
__tracepoint_android_rvh_build_perf_domains
__tracepoint_android_rvh_can_migrate_task
__tracepoint_android_rvh_cgroup_force_kthread_migration
__tracepoint_android_rvh_check_preempt_tick
__tracepoint_android_rvh_check_preempt_wakeup
__tracepoint_android_rvh_check_preempt_wakeup_ignore
@@ -3484,11 +3530,13 @@
__tracepoint_android_rvh_update_thermal_stats
__tracepoint_android_rvh_util_est_update
__tracepoint_android_rvh_wake_up_new_task
__tracepoint_android_vh_alter_mutex_list_add
__tracepoint_android_vh_audio_usb_offload_connect
__tracepoint_android_vh_binder_restore_priority
__tracepoint_android_vh_binder_set_priority
__tracepoint_android_vh_binder_wakeup_ilocked
__tracepoint_android_vh_build_sched_domains
__tracepoint_android_vh_bus_iommu_probe
__tracepoint_android_vh_check_hibernation_swap
__tracepoint_android_vh_check_uninterrupt_tasks
__tracepoint_android_vh_check_uninterrupt_tasks_done
@@ -3522,6 +3570,7 @@
__tracepoint_android_vh_rproc_recovery_set
__tracepoint_android_vh_save_cpu_resume
__tracepoint_android_vh_save_hib_resume_bdev
__tracepoint_android_vh_scan_abort_check_wmarks
__tracepoint_android_vh_scheduler_tick
__tracepoint_android_vh_setscheduler_uclamp
__tracepoint_android_vh_show_resume_epoch_val
@@ -3883,6 +3932,7 @@
vhost_dev_init
vhost_dev_ioctl
vhost_dev_stop
vhost_dev_flush
vhost_disable_notify
vhost_enable_notify
vhost_get_vq_desc

File diff suppressed because it is too large


@@ -0,0 +1,280 @@
[abi_symbol_list]
alt_cb_patch_nops
__arch_copy_from_user
__arch_copy_to_user
autoremove_wake_function
balance_dirty_pages_ratelimited
bcmp
__bforget
__bh_read_batch
bio_add_page
bio_alloc_bioset
bio_put
__bitmap_weight
bit_waitqueue
blkdev_issue_discard
blkdev_issue_flush
blk_finish_plug
blk_start_plug
__blockdev_direct_IO
block_dirty_folio
block_invalidate_folio
block_is_partially_uptodate
__breadahead
__bread_gfp
__brelse
buffer_migrate_folio
call_rcu
capable
capable_wrt_inode_uidgid
__check_object_size
clean_bdev_aliases
clear_inode
clear_page
clear_page_dirty_for_io
copy_page_from_iter_atomic
cpu_hwcaps
create_empty_buffers
current_umask
d_add
d_add_ci
d_instantiate
d_make_root
d_obtain_alias
down_read
down_write
down_write_trylock
dput
drop_nlink
d_splice_alias
dump_stack
end_buffer_read_sync
end_buffer_write_sync
end_page_writeback
errseq_set
fault_in_iov_iter_readable
fault_in_safe_writeable
fget
fiemap_fill_next_extent
fiemap_prep
file_check_and_advance_wb_err
filemap_add_folio
filemap_dirty_folio
filemap_fault
filemap_fdatawait_range
filemap_fdatawrite
filemap_fdatawrite_range
filemap_flush
__filemap_set_wb_err
filemap_write_and_wait_range
file_remove_privs
file_update_time
file_write_and_wait_range
finish_wait
flush_dcache_page
__folio_alloc
__folio_cancel_dirty
__folio_lock
__folio_put
folio_wait_bit
folio_write_one
fortify_panic
fput
freezer_active
freezing_slow_path
fs_bio_set
generic_error_remove_page
generic_file_direct_write
generic_file_llseek
generic_file_mmap
generic_file_open
generic_file_read_iter
generic_file_splice_read
generic_fillattr
generic_perform_write
generic_read_dir
generic_write_checks
__getblk_gfp
gic_nonsecure_priorities
grab_cache_page_write_begin
iget5_locked
igrab
ihold
ilookup5
inc_nlink
in_group_p
__init_rwsem
init_special_inode
init_wait_entry
__init_waitqueue_head
inode_dio_wait
inode_init_once
inode_init_owner
inode_maybe_inc_iversion
inode_newsize_ok
inode_set_flags
__insert_inode_hash
invalidate_bdev
invalidate_inode_pages2_range
invalidate_mapping_pages
io_schedule
iov_iter_advance
iov_iter_alignment
iov_iter_get_pages2
iov_iter_single_seg_count
iput
is_bad_inode
iter_file_splice_write
iunique
jiffies
jiffies_to_msecs
kasan_flag_enabled
kfree
kill_block_super
__kmalloc
kmalloc_caches
kmalloc_trace
kmem_cache_alloc
kmem_cache_alloc_lru
kmem_cache_create
kmem_cache_create_usercopy
kmem_cache_destroy
kmem_cache_free
krealloc
kthread_complete_and_exit
kthread_create_on_node
kthread_should_stop
kthread_stop
ktime_get_coarse_real_ts64
kvfree
__list_add_valid
__list_del_entry_valid
load_nls
load_nls_default
__lock_buffer
make_bad_inode
mark_buffer_async_write
mark_buffer_dirty
mark_buffer_write_io_error
__mark_inode_dirty
mark_page_accessed
memcmp
memcpy
memmove
memset
mktime64
mnt_drop_write_file
mnt_want_write_file
mount_bdev
mpage_readahead
mpage_read_folio
__msecs_to_jiffies
__mutex_init
mutex_lock
mutex_trylock
mutex_unlock
new_inode
notify_change
pagecache_get_page
page_cache_next_miss
page_cache_prev_miss
page_pinner_inited
__page_pinner_put_page
pagevec_lookup_range_tag
__pagevec_release
page_zero_new_buffers
__percpu_down_read
preempt_schedule
preempt_schedule_notrace
prepare_to_wait
prepare_to_wait_event
_printk
__printk_ratelimit
___ratelimit
_raw_read_lock
_raw_read_lock_irqsave
_raw_read_unlock
_raw_read_unlock_irqrestore
_raw_spin_lock
_raw_spin_lock_irqsave
_raw_spin_unlock
_raw_spin_unlock_irqrestore
_raw_write_lock
_raw_write_lock_irqsave
_raw_write_unlock
_raw_write_unlock_irqrestore
rcu_barrier
rcuwait_wake_up
readahead_gfp_mask
read_cache_page
redirty_page_for_writepage
__refrigerator
register_filesystem
__remove_inode_hash
sb_min_blocksize
sb_set_blocksize
schedule
schedule_timeout
schedule_timeout_interruptible
security_inode_init_security
seq_printf
setattr_prepare
set_freezable
set_nlink
set_page_dirty
__set_page_dirty_nobuffers
set_page_writeback
set_user_nice
simple_strtol
simple_strtoul
simple_strtoull
snprintf
sprintf
sscanf
__stack_chk_fail
strchr
strcmp
strlen
strncasecmp
strncmp
strsep
strstr
submit_bh
submit_bio
sync_blockdev
__sync_dirty_buffer
sync_dirty_buffer
sync_filesystem
sync_inode_metadata
sys_tz
tag_pages_for_writeback
time64_to_tm
timestamp_truncate
touch_atime
_trace_android_vh_record_pcpu_rwsem_starttime
_trace_android_vh_record_pcpu_rwsem_time_early
truncate_inode_pages
truncate_inode_pages_final
truncate_pagecache
truncate_setsize
try_to_writeback_inodes_sb
unload_nls
unlock_buffer
unlock_new_inode
unlock_page
unregister_filesystem
up_read
up_write
vfree
vfs_fsync_range
__vmalloc
vmalloc
vsnprintf
vzalloc
__wait_on_buffer
wake_bit_function
__wake_up
wake_up_process
__warn_printk
write_inode_now
xa_load


@@ -412,6 +412,7 @@
param_ops_int
param_ops_uint
pcpu_nr_pages
percpu_counter_batch
__per_cpu_offset
perf_trace_buf_alloc
perf_trace_run_bpf_submit
@@ -1912,6 +1913,8 @@
# required by trusty-log.ko
vm_map_ram
vm_unmap_ram
# required by sprd_time_sync_cp.ko
pvclock_gtod_register_notifier
# required by trusty-pm.ko
unregister_syscore_ops


@@ -807,6 +807,7 @@
blk_bio_list_merge
blk_execute_rq
blk_execute_rq_nowait
blk_fill_rwbs
blk_mq_alloc_request
blk_mq_alloc_sq_tag_set
blk_mq_alloc_tag_set


@@ -332,3 +332,12 @@
#required by xm_ispv4_pcie.ko
pci_ioremap_bar
pci_disable_pcie_error_reporting
#required by lock_optimization module
__traceiter_android_vh_record_pcpu_rwsem_time_early
__tracepoint_android_vh_record_pcpu_rwsem_time_early
cgroup_threadgroup_rwsem
#required by zram.ko
bioset_init
bioset_exit


@@ -1,3 +1,4 @@
arch/arm64/geniezone/gzvm.ko
drivers/bluetooth/btbcm.ko
drivers/bluetooth/btqca.ko
drivers/bluetooth/btsdio.ko


@@ -5,6 +5,7 @@ obj-$(CONFIG_XEN) += xen/
obj-$(subst m,y,$(CONFIG_HYPERV)) += hyperv/
obj-$(CONFIG_GUNYAH) += gunyah/
obj-$(CONFIG_CRYPTO) += crypto/
obj-$(CONFIG_MTK_GZVM) += geniezone/
# for cleaning
subdir- += boot


@@ -95,6 +95,7 @@ CONFIG_MODPROBE_PATH="/system/bin/modprobe"
CONFIG_BLK_DEV_ZONED=y
CONFIG_BLK_DEV_THROTTLING=y
CONFIG_BLK_CGROUP_IOCOST=y
CONFIG_BLK_CGROUP_IOPRIO=y
CONFIG_BLK_INLINE_ENCRYPTION=y
CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK=y
CONFIG_IOSCHED_BFQ=y
@@ -551,6 +552,7 @@ CONFIG_GUNYAH=y
CONFIG_GUNYAH_VCPU=y
CONFIG_GUNYAH_IRQFD=y
CONFIG_GUNYAH_IOEVENTFD=y
CONFIG_MTK_GZVM=m
CONFIG_VHOST_VSOCK=y
CONFIG_STAGING=y
CONFIG_ASHMEM=y


@@ -0,0 +1,358 @@
# CONFIG_MODULE_SIG_ALL is not set
CONFIG_PWRSEQ_SIMPLE=m
CONFIG_AP6XXX=m
CONFIG_ARCH_ROCKCHIP=y
CONFIG_ARM_ROCKCHIP_BUS_DEVFREQ=m
CONFIG_ARM_ROCKCHIP_CPUFREQ=m
CONFIG_ARM_ROCKCHIP_DMC_DEVFREQ=m
CONFIG_BACKLIGHT_PWM=m
CONFIG_BATTERY_CW2015=m
CONFIG_BATTERY_CW2017=m
CONFIG_BATTERY_CW221X=m
CONFIG_BATTERY_RK817=m
CONFIG_BATTERY_RK818=m
CONFIG_BMA2XX_ACC=m
CONFIG_CHARGER_BQ25700=m
CONFIG_CHARGER_BQ25890=m
CONFIG_CHARGER_RK817=m
CONFIG_CHARGER_RK818=m
CONFIG_CHARGER_SC89890=m
CONFIG_CHARGER_SGM41542=m
CONFIG_CHR_DEV_SG=m
CONFIG_COMMON_CLK_PWM=m
CONFIG_COMMON_CLK_RK808=m
CONFIG_COMMON_CLK_ROCKCHIP=m
CONFIG_COMMON_CLK_SCMI=m
CONFIG_COMPASS_AK8963=m
CONFIG_COMPASS_AK8975=m
CONFIG_COMPASS_DEVICE=m
CONFIG_CPUFREQ_DT=m
CONFIG_CPU_FREQ_GOV_ONDEMAND=m
CONFIG_CPU_FREQ_GOV_USERSPACE=m
CONFIG_CPU_PX30=y
CONFIG_CPU_RK3399=y
CONFIG_CPU_RK3562=y
CONFIG_CPU_RK3568=y
CONFIG_CPU_RK3588=y
CONFIG_CRYPTO_AES_ARM64_CE_CCM=m
CONFIG_CRYPTO_DEV_ROCKCHIP=m
CONFIG_CRYPTO_DEV_ROCKCHIP_DEV=m
CONFIG_CRYPTO_SHA1_ARM64_CE=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_DEVFREQ_EVENT_ROCKCHIP_NOCP=m
CONFIG_DMABUF_HEAPS_CMA=m
CONFIG_DMABUF_HEAPS_SYSTEM=m
CONFIG_DRAGONRISE_FF=y
CONFIG_DRM_DISPLAY_CONNECTOR=m
CONFIG_DRM_DW_HDMI_CEC=m
CONFIG_DRM_DW_HDMI_I2S_AUDIO=m
CONFIG_DRM_MAXIM_MAX96745=m
CONFIG_DRM_MAXIM_MAX96755F=m
CONFIG_DRM_PANEL_SIMPLE=m
CONFIG_DRM_RK1000_TVE=m
CONFIG_DRM_RK630_TVE=m
CONFIG_DRM_ROCKCHIP=m
CONFIG_DRM_ROCKCHIP_RK618=m
CONFIG_DRM_ROCKCHIP_RK628=m
CONFIG_DRM_ROHM_BU18XL82=m
CONFIG_DRM_SII902X=m
CONFIG_DTC_SYMBOLS=y
# CONFIG_DWMAC_GENERIC is not set
# CONFIG_DWMAC_IPQ806X is not set
# CONFIG_DWMAC_QCOM_ETHQOS is not set
# CONFIG_DWMAC_SUN8I is not set
# CONFIG_DWMAC_SUNXI is not set
CONFIG_DW_WATCHDOG=m
CONFIG_FIQ_DEBUGGER=m
CONFIG_FIQ_DEBUGGER_CONSOLE=y
CONFIG_FIQ_DEBUGGER_CONSOLE_DEFAULT_ENABLE=y
CONFIG_FIQ_DEBUGGER_NO_SLEEP=y
CONFIG_FIQ_DEBUGGER_TRUST_ZONE=y
CONFIG_GPIO_ROCKCHIP=m
CONFIG_GREENASIA_FF=y
CONFIG_GSENSOR_DEVICE=m
CONFIG_GS_DA223=m
CONFIG_GS_KXTJ9=m
CONFIG_GS_LIS3DH=m
CONFIG_GS_LSM303D=m
CONFIG_GS_MC3230=m
CONFIG_GS_MMA7660=m
CONFIG_GS_MMA8452=m
CONFIG_GS_MXC6655XA=m
CONFIG_GS_SC7660=m
CONFIG_GS_SC7A20=m
CONFIG_GS_SC7A30=m
CONFIG_GYROSCOPE_DEVICE=m
CONFIG_GYRO_EWTSA=m
CONFIG_GYRO_L3G20D=m
CONFIG_GYRO_L3G4200D=m
CONFIG_GYRO_LSM330=m
CONFIG_GYRO_MPU6500=m
CONFIG_GYRO_MPU6880=m
CONFIG_HALL_DEVICE=m
CONFIG_HID_A4TECH=m
CONFIG_HID_ACRUX=m
CONFIG_HID_ACRUX_FF=y
CONFIG_HID_ALPS=m
CONFIG_HID_APPLEIR=m
CONFIG_HID_AUREAL=m
CONFIG_HID_BELKIN=m
CONFIG_HID_CHERRY=m
CONFIG_HID_CHICONY=m
CONFIG_HID_CYPRESS=m
CONFIG_HID_DRAGONRISE=m
CONFIG_HID_EMS_FF=m
CONFIG_HID_EZKEY=m
CONFIG_HID_GREENASIA=m
CONFIG_HID_GYRATION=m
CONFIG_HID_HOLTEK=m
CONFIG_HID_ICADE=m
CONFIG_HID_KENSINGTON=m
CONFIG_HID_KEYTOUCH=m
CONFIG_HID_KYE=m
CONFIG_HID_LCPOWER=m
CONFIG_HID_LENOVO=m
CONFIG_HID_MONTEREY=m
CONFIG_HID_NTRIG=m
CONFIG_HID_ORTEK=m
CONFIG_HID_PANTHERLORD=m
CONFIG_HID_PETALYNX=m
CONFIG_HID_PRIMAX=m
CONFIG_HID_SAITEK=m
CONFIG_HID_SAMSUNG=m
CONFIG_HID_SMARTJOYPLUS=m
CONFIG_HID_SPEEDLINK=m
CONFIG_HID_STEELSERIES=m
CONFIG_HID_SUNPLUS=m
CONFIG_HID_THINGM=m
CONFIG_HID_THRUSTMASTER=m
CONFIG_HID_TIVO=m
CONFIG_HID_TOPSEED=m
CONFIG_HID_TWINHAN=m
CONFIG_HID_WALTOP=m
CONFIG_HID_ZEROPLUS=m
CONFIG_HID_ZYDACRON=m
CONFIG_HS_MH248=m
CONFIG_HW_RANDOM_ROCKCHIP=m
CONFIG_I2C_CHARDEV=m
CONFIG_I2C_GPIO=m
CONFIG_I2C_HID_OF=m
CONFIG_I2C_RK3X=m
CONFIG_IEP=m
CONFIG_IIO_BUFFER_CB=m
CONFIG_INPUT_RK805_PWRKEY=m
CONFIG_KEYBOARD_ADC=m
CONFIG_LEDS_GPIO=m
CONFIG_LEDS_RGB13H=m
CONFIG_LEDS_TRIGGER_BACKLIGHT=m
CONFIG_LEDS_TRIGGER_DEFAULT_ON=m
CONFIG_LEDS_TRIGGER_HEARTBEAT=m
CONFIG_LIGHT_DEVICE=m
CONFIG_LSM330_ACC=m
CONFIG_LS_CM3217=m
CONFIG_LS_CM3218=m
CONFIG_LS_STK3410=m
CONFIG_LS_UCS14620=m
CONFIG_MALI_BIFROST=m
CONFIG_MALI_BIFROST_DEBUG=y
CONFIG_MALI_BIFROST_EXPERT=y
CONFIG_MALI_CSF_SUPPORT=y
CONFIG_MALI_PLATFORM_NAME="rk"
CONFIG_MALI_PWRSOFT_765=y
CONFIG_MFD_RK618=m
CONFIG_MFD_RK628=m
CONFIG_MFD_RK630_I2C=m
CONFIG_MFD_RK806_SPI=m
CONFIG_MFD_RK808=m
CONFIG_MMC_DW=m
CONFIG_MMC_DW_ROCKCHIP=m
CONFIG_MMC_SDHCI_OF_ARASAN=m
CONFIG_MMC_SDHCI_OF_DWCMSHC=m
CONFIG_MPU6500_ACC=m
CONFIG_MPU6880_ACC=m
CONFIG_NVMEM_ROCKCHIP_EFUSE=m
CONFIG_NVMEM_ROCKCHIP_OTP=m
CONFIG_OPTEE=m
CONFIG_PANTHERLORD_FF=y
CONFIG_PCIEASPM_EXT=m
CONFIG_PCIE_DW_ROCKCHIP=m
CONFIG_PCIE_ROCKCHIP_HOST=m
CONFIG_PHY_ROCKCHIP_CSI2_DPHY=m
CONFIG_PHY_ROCKCHIP_DP=m
CONFIG_PHY_ROCKCHIP_EMMC=m
CONFIG_PHY_ROCKCHIP_INNO_DSIDPHY=m
CONFIG_PHY_ROCKCHIP_INNO_HDMI=m
CONFIG_PHY_ROCKCHIP_INNO_USB2=m
CONFIG_PHY_ROCKCHIP_INNO_USB3=m
CONFIG_PHY_ROCKCHIP_NANENG_COMBO_PHY=m
CONFIG_PHY_ROCKCHIP_NANENG_EDP=m
CONFIG_PHY_ROCKCHIP_PCIE=m
CONFIG_PHY_ROCKCHIP_SAMSUNG_DCPHY=m
CONFIG_PHY_ROCKCHIP_SAMSUNG_HDPTX=m
CONFIG_PHY_ROCKCHIP_SAMSUNG_HDPTX_HDMI=m
CONFIG_PHY_ROCKCHIP_SNPS_PCIE3=m
CONFIG_PHY_ROCKCHIP_TYPEC=m
CONFIG_PHY_ROCKCHIP_USB=m
CONFIG_PHY_ROCKCHIP_USBDP=m
CONFIG_PINCTRL_RK805=m
CONFIG_PINCTRL_RK806=m
CONFIG_PINCTRL_ROCKCHIP=m
CONFIG_PL330_DMA=m
CONFIG_PROXIMITY_DEVICE=m
CONFIG_PS_STK3410=m
CONFIG_PS_UCS14620=m
CONFIG_PWM_ROCKCHIP=m
CONFIG_REGULATOR_ACT8865=m
CONFIG_REGULATOR_FAN53555=m
CONFIG_REGULATOR_GPIO=m
CONFIG_REGULATOR_LP8752=m
CONFIG_REGULATOR_MP8865=m
CONFIG_REGULATOR_PWM=m
CONFIG_REGULATOR_RK806=m
CONFIG_REGULATOR_RK808=m
CONFIG_REGULATOR_RK860X=m
CONFIG_REGULATOR_TPS65132=m
CONFIG_REGULATOR_WL2868C=m
CONFIG_REGULATOR_XZ3216=m
CONFIG_RFKILL_RK=m
CONFIG_RK_CONSOLE_THREAD=y
CONFIG_RK_HEADSET=m
CONFIG_ROCKCHIP_ANALOGIX_DP=y
CONFIG_ROCKCHIP_CDN_DP=y
CONFIG_ROCKCHIP_CPUINFO=m
CONFIG_ROCKCHIP_DEBUG=m
CONFIG_ROCKCHIP_DW_DP=y
CONFIG_ROCKCHIP_DW_HDCP2=m
CONFIG_ROCKCHIP_DW_HDMI=y
CONFIG_ROCKCHIP_DW_MIPI_DSI=y
CONFIG_ROCKCHIP_GRF=m
CONFIG_ROCKCHIP_INNO_HDMI=y
CONFIG_ROCKCHIP_IODOMAIN=m
CONFIG_ROCKCHIP_IOMMU=m
CONFIG_ROCKCHIP_IPA=m
CONFIG_ROCKCHIP_LVDS=y
CONFIG_ROCKCHIP_MPP_AV1DEC=y
CONFIG_ROCKCHIP_MPP_IEP2=y
CONFIG_ROCKCHIP_MPP_JPGDEC=y
CONFIG_ROCKCHIP_MPP_RKVDEC=y
CONFIG_ROCKCHIP_MPP_RKVDEC2=y
CONFIG_ROCKCHIP_MPP_RKVENC=y
CONFIG_ROCKCHIP_MPP_RKVENC2=y
CONFIG_ROCKCHIP_MPP_SERVICE=m
CONFIG_ROCKCHIP_MPP_VDPU1=y
CONFIG_ROCKCHIP_MPP_VDPU2=y
CONFIG_ROCKCHIP_MPP_VEPU1=y
CONFIG_ROCKCHIP_MPP_VEPU2=y
CONFIG_ROCKCHIP_MULTI_RGA=m
CONFIG_ROCKCHIP_OPP=m
CONFIG_ROCKCHIP_PHY=m
CONFIG_ROCKCHIP_PM_DOMAINS=m
CONFIG_ROCKCHIP_PVTM=m
CONFIG_ROCKCHIP_RAM_VENDOR_STORAGE=m
CONFIG_ROCKCHIP_REMOTECTL=m
CONFIG_ROCKCHIP_REMOTECTL_PWM=m
CONFIG_ROCKCHIP_RGB=y
CONFIG_ROCKCHIP_RKNPU=m
CONFIG_ROCKCHIP_SARADC=m
CONFIG_ROCKCHIP_SIP=m
CONFIG_ROCKCHIP_SUSPEND_MODE=m
CONFIG_ROCKCHIP_SYSTEM_MONITOR=m
CONFIG_ROCKCHIP_THERMAL=m
CONFIG_ROCKCHIP_TIMER=m
CONFIG_ROCKCHIP_VENDOR_STORAGE=m
CONFIG_ROCKCHIP_VENDOR_STORAGE_UPDATE_LOADER=y
CONFIG_RTC_DRV_HYM8563=m
CONFIG_RTC_DRV_RK808=m
CONFIG_SENSOR_DEVICE=m
CONFIG_SMARTJOYPLUS_FF=y
CONFIG_SND_SIMPLE_CARD=m
CONFIG_SND_SOC_AW883XX=m
CONFIG_SND_SOC_BT_SCO=m
CONFIG_SND_SOC_CX2072X=m
CONFIG_SND_SOC_DUMMY_CODEC=m
CONFIG_SND_SOC_ES7202=m
CONFIG_SND_SOC_ES7210=m
CONFIG_SND_SOC_ES7243E=m
CONFIG_SND_SOC_ES8311=m
CONFIG_SND_SOC_ES8316=m
CONFIG_SND_SOC_ES8323=m
CONFIG_SND_SOC_ES8326=m
CONFIG_SND_SOC_ES8396=m
CONFIG_SND_SOC_RK3328=m
CONFIG_SND_SOC_RK817=m
CONFIG_SND_SOC_RK_CODEC_DIGITAL=m
CONFIG_SND_SOC_RK_DSM=m
CONFIG_SND_SOC_ROCKCHIP=m
CONFIG_SND_SOC_ROCKCHIP_HDMI=m
CONFIG_SND_SOC_ROCKCHIP_I2S=m
CONFIG_SND_SOC_ROCKCHIP_I2S_TDM=m
CONFIG_SND_SOC_ROCKCHIP_MULTICODECS=m
CONFIG_SND_SOC_ROCKCHIP_PDM=m
CONFIG_SND_SOC_ROCKCHIP_SAI=m
CONFIG_SND_SOC_ROCKCHIP_SPDIF=m
CONFIG_SND_SOC_ROCKCHIP_SPDIFRX=m
CONFIG_SND_SOC_RT5640=m
CONFIG_SND_SOC_SPDIF=m
CONFIG_SPI_ROCKCHIP=m
CONFIG_SPI_SPIDEV=m
CONFIG_STMMAC_ETH=m
CONFIG_SW_SYNC=m
CONFIG_SYSCON_REBOOT_MODE=m
CONFIG_TEE=m
CONFIG_TEST_POWER=m
CONFIG_TOUCHSCREEN_ELAN5515=m
CONFIG_TOUCHSCREEN_GSL3673=m
CONFIG_TOUCHSCREEN_GSLX680_PAD=m
CONFIG_TOUCHSCREEN_GT1X=m
CONFIG_TYPEC_FUSB302=m
CONFIG_TYPEC_HUSB311=m
CONFIG_UCS12CM0=m
CONFIG_USB_DWC2=m
CONFIG_USB_NET_CDC_MBIM=m
CONFIG_USB_NET_DM9601=m
CONFIG_USB_NET_GL620A=m
CONFIG_USB_NET_KALMIA=m
CONFIG_USB_NET_MCS7830=m
CONFIG_USB_NET_PLUSB=m
CONFIG_USB_NET_SMSC75XX=m
CONFIG_USB_NET_SMSC95XX=m
CONFIG_USB_OHCI_HCD=m
# CONFIG_USB_OHCI_HCD_PCI is not set
CONFIG_USB_OHCI_HCD_PLATFORM=m
CONFIG_USB_PRINTER=m
CONFIG_USB_TRANCEVIBRATOR=m
CONFIG_VIDEO_AW36518=m
CONFIG_VIDEO_AW8601=m
CONFIG_VIDEO_CN3927V=m
CONFIG_VIDEO_DW9714=m
CONFIG_VIDEO_FP5510=m
CONFIG_VIDEO_GC2145=m
CONFIG_VIDEO_GC2385=m
CONFIG_VIDEO_GC4C33=m
CONFIG_VIDEO_GC8034=m
CONFIG_VIDEO_IMX415=m
CONFIG_VIDEO_LT6911UXC=m
CONFIG_VIDEO_LT7911D=m
CONFIG_VIDEO_NVP6188=m
CONFIG_VIDEO_OV02B10=m
CONFIG_VIDEO_OV13850=m
CONFIG_VIDEO_OV13855=m
CONFIG_VIDEO_OV50C40=m
CONFIG_VIDEO_OV5695=m
CONFIG_VIDEO_OV8858=m
CONFIG_VIDEO_RK628_BT1120=m
CONFIG_VIDEO_RK628_CSI=m
CONFIG_VIDEO_RK_IRCUT=m
CONFIG_VIDEO_ROCKCHIP_CIF=m
CONFIG_VIDEO_ROCKCHIP_HDMIRX=m
CONFIG_VIDEO_ROCKCHIP_ISP=m
CONFIG_VIDEO_ROCKCHIP_ISPP=m
CONFIG_VIDEO_ROCKCHIP_RKISP1=m
CONFIG_VIDEO_S5K3L6XX=m
CONFIG_VIDEO_S5KJN1=m
CONFIG_VIDEO_SGM3784=m
CONFIG_VIDEO_THCV244=m
CONFIG_VL6180=m
CONFIG_WIFI_BUILD_MODULE=y
CONFIG_WL_ROCKCHIP=m
# CONFIG_USB_DUMMY_HCD is not set


@@ -0,0 +1,9 @@
# SPDX-License-Identifier: GPL-2.0-only
#
# Main Makefile for gzvm; it includes drivers/virt/geniezone/Makefile
#
include $(srctree)/drivers/virt/geniezone/Makefile
gzvm-y += vm.o vcpu.o vgic.o
obj-$(CONFIG_MTK_GZVM) += gzvm.o


@@ -0,0 +1,114 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2023 MediaTek Inc.
*/
#ifndef __GZVM_ARCH_COMMON_H__
#define __GZVM_ARCH_COMMON_H__
#include <linux/arm-smccc.h>
enum {
GZVM_FUNC_CREATE_VM = 0,
GZVM_FUNC_DESTROY_VM = 1,
GZVM_FUNC_CREATE_VCPU = 2,
GZVM_FUNC_DESTROY_VCPU = 3,
GZVM_FUNC_SET_MEMREGION = 4,
GZVM_FUNC_RUN = 5,
GZVM_FUNC_GET_ONE_REG = 8,
GZVM_FUNC_SET_ONE_REG = 9,
GZVM_FUNC_IRQ_LINE = 10,
GZVM_FUNC_CREATE_DEVICE = 11,
GZVM_FUNC_PROBE = 12,
GZVM_FUNC_ENABLE_CAP = 13,
GZVM_FUNC_INFORM_EXIT = 14,
GZVM_FUNC_MEMREGION_PURPOSE = 15,
GZVM_FUNC_SET_DTB_CONFIG = 16,
GZVM_FUNC_MAP_GUEST = 17,
GZVM_FUNC_MAP_GUEST_BLOCK = 18,
NR_GZVM_FUNC,
};
#define SMC_ENTITY_MTK 59
#define GZVM_FUNCID_START (0x1000)
#define GZVM_HCALL_ID(func) \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_SMC_32, \
SMC_ENTITY_MTK, (GZVM_FUNCID_START + (func)))
#define MT_HVC_GZVM_CREATE_VM GZVM_HCALL_ID(GZVM_FUNC_CREATE_VM)
#define MT_HVC_GZVM_DESTROY_VM GZVM_HCALL_ID(GZVM_FUNC_DESTROY_VM)
#define MT_HVC_GZVM_CREATE_VCPU GZVM_HCALL_ID(GZVM_FUNC_CREATE_VCPU)
#define MT_HVC_GZVM_DESTROY_VCPU GZVM_HCALL_ID(GZVM_FUNC_DESTROY_VCPU)
#define MT_HVC_GZVM_SET_MEMREGION GZVM_HCALL_ID(GZVM_FUNC_SET_MEMREGION)
#define MT_HVC_GZVM_RUN GZVM_HCALL_ID(GZVM_FUNC_RUN)
#define MT_HVC_GZVM_GET_ONE_REG GZVM_HCALL_ID(GZVM_FUNC_GET_ONE_REG)
#define MT_HVC_GZVM_SET_ONE_REG GZVM_HCALL_ID(GZVM_FUNC_SET_ONE_REG)
#define MT_HVC_GZVM_IRQ_LINE GZVM_HCALL_ID(GZVM_FUNC_IRQ_LINE)
#define MT_HVC_GZVM_CREATE_DEVICE GZVM_HCALL_ID(GZVM_FUNC_CREATE_DEVICE)
#define MT_HVC_GZVM_PROBE GZVM_HCALL_ID(GZVM_FUNC_PROBE)
#define MT_HVC_GZVM_ENABLE_CAP GZVM_HCALL_ID(GZVM_FUNC_ENABLE_CAP)
#define MT_HVC_GZVM_INFORM_EXIT GZVM_HCALL_ID(GZVM_FUNC_INFORM_EXIT)
#define MT_HVC_GZVM_MEMREGION_PURPOSE GZVM_HCALL_ID(GZVM_FUNC_MEMREGION_PURPOSE)
#define MT_HVC_GZVM_SET_DTB_CONFIG GZVM_HCALL_ID(GZVM_FUNC_SET_DTB_CONFIG)
#define MT_HVC_GZVM_MAP_GUEST GZVM_HCALL_ID(GZVM_FUNC_MAP_GUEST)
#define MT_HVC_GZVM_MAP_GUEST_BLOCK GZVM_HCALL_ID(GZVM_FUNC_MAP_GUEST_BLOCK)
#define GIC_V3_NR_LRS 16
/**
 * gzvm_hypcall_wrapper() - the wrapper for hvc calls
 * @a0-a7: arguments passed in registers 0 to 7
 * @res: result values from registers 0 to 3
 *
 * Return: 0 on success, or a negative Linux errno converted from the
 * GenieZone errno.
 */
static inline int gzvm_hypcall_wrapper(unsigned long a0, unsigned long a1,
unsigned long a2, unsigned long a3,
unsigned long a4, unsigned long a5,
unsigned long a6, unsigned long a7,
struct arm_smccc_res *res)
{
arm_smccc_hvc(a0, a1, a2, a3, a4, a5, a6, a7, res);
return gzvm_err_to_errno(res->a0);
}
static inline u16 get_vmid_from_tuple(unsigned int tuple)
{
return (u16)(tuple >> 16);
}
static inline u16 get_vcpuid_from_tuple(unsigned int tuple)
{
return (u16)(tuple & 0xffff);
}
/**
 * struct gzvm_vcpu_hwstate: Sync architecture state back to host for handling
 * @nr_lrs: The number of available LRs (list registers) in the SoC.
 * @__pad: Explicit padding so the actual layout is unambiguous and matches
 * the hypervisor's view of the struct.
 * @lr: The array of LRs (list registers).
 *
 * - Keep the same layout as the hypervisor's data struct.
 * - Sync list registers back for acking virtual device interrupt status.
 */
struct gzvm_vcpu_hwstate {
__le32 nr_lrs;
__le32 __pad;
__le64 lr[GIC_V3_NR_LRS];
};
static inline unsigned int
assemble_vm_vcpu_tuple(u16 vmid, u16 vcpuid)
{
return ((unsigned int)vmid << 16 | vcpuid);
}
static inline void
disassemble_vm_vcpu_tuple(unsigned int tuple, u16 *vmid, u16 *vcpuid)
{
*vmid = get_vmid_from_tuple(tuple);
*vcpuid = get_vcpuid_from_tuple(tuple);
}
#endif /* __GZVM_ARCH_COMMON_H__ */


@@ -0,0 +1,80 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2023 MediaTek Inc.
*/
#include <linux/arm-smccc.h>
#include <linux/err.h>
#include <linux/uaccess.h>
#include <linux/gzvm.h>
#include <linux/gzvm_drv.h>
#include "gzvm_arch_common.h"
int gzvm_arch_vcpu_update_one_reg(struct gzvm_vcpu *vcpu, __u64 reg_id,
bool is_write, __u64 *data)
{
struct arm_smccc_res res;
unsigned long a1;
int ret;
a1 = assemble_vm_vcpu_tuple(vcpu->gzvm->vm_id, vcpu->vcpuid);
if (!is_write) {
ret = gzvm_hypcall_wrapper(MT_HVC_GZVM_GET_ONE_REG,
a1, reg_id, 0, 0, 0, 0, 0, &res);
if (ret == 0)
*data = res.a1;
} else {
ret = gzvm_hypcall_wrapper(MT_HVC_GZVM_SET_ONE_REG,
a1, reg_id, *data, 0, 0, 0, 0, &res);
}
return ret;
}
int gzvm_arch_vcpu_run(struct gzvm_vcpu *vcpu, __u64 *exit_reason)
{
struct arm_smccc_res res;
unsigned long a1;
int ret;
a1 = assemble_vm_vcpu_tuple(vcpu->gzvm->vm_id, vcpu->vcpuid);
ret = gzvm_hypcall_wrapper(MT_HVC_GZVM_RUN, a1, 0, 0, 0, 0, 0,
0, &res);
*exit_reason = res.a1;
return ret;
}
int gzvm_arch_destroy_vcpu(u16 vm_id, int vcpuid)
{
struct arm_smccc_res res;
unsigned long a1;
a1 = assemble_vm_vcpu_tuple(vm_id, vcpuid);
gzvm_hypcall_wrapper(MT_HVC_GZVM_DESTROY_VCPU, a1, 0, 0, 0, 0, 0, 0,
&res);
return 0;
}
/**
 * gzvm_arch_create_vcpu() - Call into the GenieZone hypervisor to create a vcpu
 * @vm_id: vm id
 * @vcpuid: vcpu id
 * @run: Virtual address of vcpu->run
 *
 * Return: 0 on success, or a negative Linux errno converted from the
 * GenieZone errno.
 */
int gzvm_arch_create_vcpu(u16 vm_id, int vcpuid, void *run)
{
struct arm_smccc_res res;
unsigned long a1, a2;
int ret;
a1 = assemble_vm_vcpu_tuple(vm_id, vcpuid);
a2 = (__u64)virt_to_phys(run);
ret = gzvm_hypcall_wrapper(MT_HVC_GZVM_CREATE_VCPU, a1, a2, 0, 0, 0, 0,
0, &res);
return ret;
}


@@ -0,0 +1,50 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2023 MediaTek Inc.
*/
#include <linux/irqchip/arm-gic-v3.h>
#include <linux/gzvm.h>
#include <linux/gzvm_drv.h>
#include "gzvm_arch_common.h"
int gzvm_arch_create_device(u16 vm_id, struct gzvm_create_device *gzvm_dev)
{
struct arm_smccc_res res;
return gzvm_hypcall_wrapper(MT_HVC_GZVM_CREATE_DEVICE, vm_id,
virt_to_phys(gzvm_dev), 0, 0, 0, 0, 0,
&res);
}
/**
 * gzvm_arch_inject_irq() - Inject virtual interrupt to a VM
 * @gzvm: Pointer to struct gzvm
 * @vcpu_idx: vcpu index, only valid for PPIs
 * @irq: *SPI* irq number (excluding the offset value `32`)
 * @level: true to assert the interrupt line, false to deassert it
 *
 * Return:
 * * 0 - Success.
 * * Negative - Failure.
 */
int gzvm_arch_inject_irq(struct gzvm *gzvm, unsigned int vcpu_idx,
u32 irq, bool level)
{
unsigned long a1 = assemble_vm_vcpu_tuple(gzvm->vm_id, vcpu_idx);
struct arm_smccc_res res;
/*
 * The VMM's virtual device irq numbers start from 0, but ARM's shared
 * peripheral interrupt (SPI) numbers start from 32. The hypervisor adds
 * the offset of 32.
 */
gzvm_hypcall_wrapper(MT_HVC_GZVM_IRQ_LINE, a1, irq, level,
0, 0, 0, 0, &res);
if (res.a0) {
pr_err("Failed to set IRQ level (%d) to irq#%u on vcpu %d with ret=%d\n",
level, irq, vcpu_idx, (int)res.a0);
return -EFAULT;
}
return 0;
}

arch/arm64/geniezone/vm.c

@@ -0,0 +1,380 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2023 MediaTek Inc.
*/
#include <linux/arm-smccc.h>
#include <linux/err.h>
#include <linux/uaccess.h>
#include <linux/gzvm.h>
#include <linux/gzvm_drv.h>
#include "gzvm_arch_common.h"
#define PAR_PA47_MASK ((((1UL << 48) - 1) >> 12) << 12)
int gzvm_arch_inform_exit(u16 vm_id)
{
struct arm_smccc_res res;
arm_smccc_hvc(MT_HVC_GZVM_INFORM_EXIT, vm_id, 0, 0, 0, 0, 0, 0, &res);
if (res.a0 == 0)
return 0;
return -ENXIO;
}
int gzvm_arch_probe(void)
{
struct arm_smccc_res res;
arm_smccc_hvc(MT_HVC_GZVM_PROBE, 0, 0, 0, 0, 0, 0, 0, &res);
if (res.a0)
return -ENXIO;
return 0;
}
int gzvm_arch_set_memregion(u16 vm_id, size_t buf_size,
phys_addr_t region)
{
struct arm_smccc_res res;
return gzvm_hypcall_wrapper(MT_HVC_GZVM_SET_MEMREGION, vm_id,
buf_size, region, 0, 0, 0, 0, &res);
}
static int gzvm_cap_vm_gpa_size(void __user *argp)
{
__u64 value = CONFIG_ARM64_PA_BITS;
if (copy_to_user(argp, &value, sizeof(__u64)))
return -EFAULT;
return 0;
}
int gzvm_arch_check_extension(struct gzvm *gzvm, __u64 cap, void __user *argp)
{
int ret;
switch (cap) {
case GZVM_CAP_PROTECTED_VM: {
__u64 success = 1;
if (copy_to_user(argp, &success, sizeof(__u64)))
return -EFAULT;
return 0;
}
case GZVM_CAP_VM_GPA_SIZE: {
ret = gzvm_cap_vm_gpa_size(argp);
return ret;
}
default:
break;
}
return -EOPNOTSUPP;
}
/**
* gzvm_arch_create_vm() - create vm
* @vm_type: VM type. Only supports Linux VM now.
*
* Return:
* * positive value - VM ID
* * -ENOMEM - Memory not enough for storing VM data
*/
int gzvm_arch_create_vm(unsigned long vm_type)
{
struct arm_smccc_res res;
int ret;
ret = gzvm_hypcall_wrapper(MT_HVC_GZVM_CREATE_VM, vm_type, 0, 0, 0, 0,
0, 0, &res);
return ret ? ret : res.a1;
}
int gzvm_arch_destroy_vm(u16 vm_id)
{
struct arm_smccc_res res;
return gzvm_hypcall_wrapper(MT_HVC_GZVM_DESTROY_VM, vm_id, 0, 0, 0, 0,
0, 0, &res);
}
int gzvm_arch_memregion_purpose(struct gzvm *gzvm,
struct gzvm_userspace_memory_region *mem)
{
struct arm_smccc_res res;
return gzvm_hypcall_wrapper(MT_HVC_GZVM_MEMREGION_PURPOSE, gzvm->vm_id,
mem->guest_phys_addr, mem->memory_size,
mem->flags, 0, 0, 0, &res);
}
int gzvm_arch_set_dtb_config(struct gzvm *gzvm, struct gzvm_dtb_config *cfg)
{
struct arm_smccc_res res;
return gzvm_hypcall_wrapper(MT_HVC_GZVM_SET_DTB_CONFIG, gzvm->vm_id,
cfg->dtb_addr, cfg->dtb_size, 0, 0, 0, 0,
&res);
}
static int gzvm_vm_arch_enable_cap(struct gzvm *gzvm,
struct gzvm_enable_cap *cap,
struct arm_smccc_res *res)
{
return gzvm_hypcall_wrapper(MT_HVC_GZVM_ENABLE_CAP, gzvm->vm_id,
cap->cap, cap->args[0], cap->args[1],
cap->args[2], cap->args[3], cap->args[4],
res);
}
/**
* gzvm_vm_ioctl_get_pvmfw_size() - Get pvmfw size from hypervisor, return
* in x1, and return to userspace in args
* @gzvm: Pointer to struct gzvm.
* @cap: Pointer to struct gzvm_enable_cap.
* @argp: Pointer to struct gzvm_enable_cap in user space.
*
* Return:
* * 0 - Success
* * -EINVAL - Hypervisor returned invalid results
* * -EFAULT - Failed to copy back to the userspace buffer
*/
static int gzvm_vm_ioctl_get_pvmfw_size(struct gzvm *gzvm,
struct gzvm_enable_cap *cap,
void __user *argp)
{
struct arm_smccc_res res = {0};
if (gzvm_vm_arch_enable_cap(gzvm, cap, &res) != 0)
return -EINVAL;
cap->args[1] = res.a1;
if (copy_to_user(argp, cap, sizeof(*cap)))
return -EFAULT;
return 0;
}
/**
* fill_constituents() - Populate pa to buffer until full
* @consti: Pointer to struct mem_region_addr_range.
* @consti_cnt: Constituent count.
* @max_nr_consti: Maximum number of constituent count.
* @gfn: Guest frame number.
* @total_pages: Total page numbers.
* @slot: Pointer to struct gzvm_memslot.
*
* Return: how many pages we've filled in, negative on error
*/
static int fill_constituents(struct mem_region_addr_range *consti,
int *consti_cnt, int max_nr_consti, u64 gfn,
u32 total_pages, struct gzvm_memslot *slot)
{
u64 pfn, prev_pfn, gfn_end;
int nr_pages = 1;
int i = 0;
if (unlikely(total_pages == 0))
return -EINVAL;
gfn_end = gfn + total_pages;
/* entry 0 */
if (gzvm_gfn_to_pfn_memslot(slot, gfn, &pfn) != 0)
return -EFAULT;
consti[0].address = PFN_PHYS(pfn);
consti[0].pg_cnt = 1;
gfn++;
prev_pfn = pfn;
while (i < max_nr_consti && gfn < gfn_end) {
if (gzvm_gfn_to_pfn_memslot(slot, gfn, &pfn) != 0)
return -EFAULT;
if (pfn == (prev_pfn + 1)) {
consti[i].pg_cnt++;
} else {
i++;
if (i >= max_nr_consti)
break;
consti[i].address = PFN_PHYS(pfn);
consti[i].pg_cnt = 1;
}
prev_pfn = pfn;
gfn++;
nr_pages++;
}
if (i != max_nr_consti)
i++;
*consti_cnt = i;
return nr_pages;
}
/**
* populate_mem_region() - Iterate over all memslots and populate pa to buffer until it's full
* @gzvm: Pointer to struct gzvm.
*
* Return: 0 on success, negative on error
*/
static int populate_mem_region(struct gzvm *gzvm)
{
int slot_cnt = 0;
while (slot_cnt < GZVM_MAX_MEM_REGION && gzvm->memslot[slot_cnt].npages != 0) {
struct gzvm_memslot *memslot = &gzvm->memslot[slot_cnt];
struct gzvm_memory_region_ranges *region;
int max_nr_consti, remain_pages;
u64 gfn, gfn_end;
u32 buf_size;
buf_size = PAGE_SIZE * 2;
region = alloc_pages_exact(buf_size, GFP_KERNEL);
if (!region)
return -ENOMEM;
max_nr_consti = (buf_size - sizeof(*region)) /
sizeof(struct mem_region_addr_range);
region->slot = memslot->slot_id;
remain_pages = memslot->npages;
gfn = memslot->base_gfn;
gfn_end = gfn + remain_pages;
while (gfn < gfn_end) {
int nr_pages;
nr_pages = fill_constituents(region->constituents,
&region->constituent_cnt,
max_nr_consti, gfn,
remain_pages, memslot);
if (nr_pages < 0) {
pr_err("Failed to fill constituents\n");
free_pages_exact(region, buf_size);
return -EFAULT;
}
region->gpa = PFN_PHYS(gfn);
region->total_pages = nr_pages;
remain_pages -= nr_pages;
gfn += nr_pages;
if (gzvm_arch_set_memregion(gzvm->vm_id, buf_size,
virt_to_phys(region))) {
pr_err("Failed to register memregion to hypervisor\n");
free_pages_exact(region, buf_size);
return -EFAULT;
}
}
free_pages_exact(region, buf_size);
++slot_cnt;
}
return 0;
}
/**
* gzvm_vm_ioctl_cap_pvm() - Proceed GZVM_CAP_PROTECTED_VM's subcommands
* @gzvm: Pointer to struct gzvm.
* @cap: Pointer to struct gzvm_enable_cap.
* @argp: Pointer to struct gzvm_enable_cap in user space.
*
* Return:
* * 0 - Succeed
* * -EINVAL - Invalid subcommand or arguments
*/
static int gzvm_vm_ioctl_cap_pvm(struct gzvm *gzvm,
struct gzvm_enable_cap *cap,
void __user *argp)
{
struct arm_smccc_res res = {0};
int ret;
switch (cap->args[0]) {
case GZVM_CAP_PVM_SET_PVMFW_GPA:
fallthrough;
case GZVM_CAP_PVM_SET_PROTECTED_VM:
/*
* If the hypervisor doesn't support block-based demand paging, we
* populate memory in advance to improve performance for protected VM.
*/
if (gzvm->demand_page_gran == PAGE_SIZE)
populate_mem_region(gzvm);
ret = gzvm_vm_arch_enable_cap(gzvm, cap, &res);
return ret;
case GZVM_CAP_PVM_GET_PVMFW_SIZE:
ret = gzvm_vm_ioctl_get_pvmfw_size(gzvm, cap, argp);
return ret;
default:
break;
}
return -EINVAL;
}
int gzvm_vm_ioctl_arch_enable_cap(struct gzvm *gzvm,
struct gzvm_enable_cap *cap,
void __user *argp)
{
struct arm_smccc_res res = {0};
int ret;
switch (cap->cap) {
case GZVM_CAP_PROTECTED_VM:
ret = gzvm_vm_ioctl_cap_pvm(gzvm, cap, argp);
return ret;
case GZVM_CAP_BLOCK_BASED_DEMAND_PAGING:
ret = gzvm_vm_arch_enable_cap(gzvm, cap, &res);
return ret;
default:
break;
}
return -EINVAL;
}
/**
* gzvm_hva_to_pa_arch() - converts hva to pa in an arch-specific way
* @hva: Host virtual address.
*
* Return: GZVM_PA_ERR_BAD for translation error
*/
u64 gzvm_hva_to_pa_arch(u64 hva)
{
unsigned long flags;
u64 par;
local_irq_save(flags);
asm volatile("at s1e1r, %0" :: "r" (hva));
isb();
par = read_sysreg_par();
local_irq_restore(flags);
if (par & SYS_PAR_EL1_F)
return GZVM_PA_ERR_BAD;
par = par & PAR_PA47_MASK;
if (!par)
return GZVM_PA_ERR_BAD;
return par;
}
int gzvm_arch_map_guest(u16 vm_id, int memslot_id, u64 pfn, u64 gfn,
u64 nr_pages)
{
struct arm_smccc_res res;
return gzvm_hypcall_wrapper(MT_HVC_GZVM_MAP_GUEST, vm_id, memslot_id,
pfn, gfn, nr_pages, 0, 0, &res);
}
int gzvm_arch_map_guest_block(u16 vm_id, int memslot_id, u64 gfn, u64 nr_pages)
{
struct arm_smccc_res res;
return gzvm_hypcall_wrapper(MT_HVC_GZVM_MAP_GUEST_BLOCK, vm_id,
memslot_id, gfn, nr_pages, 0, 0, 0, &res);
}

@@ -259,6 +259,8 @@ extern unsigned long kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[];
DECLARE_KVM_NVHE_SYM(__per_cpu_start);
DECLARE_KVM_NVHE_SYM(__per_cpu_end);
extern unsigned long kvm_nvhe_sym(kvm_arm_hyp_host_fp_state)[];
DECLARE_KVM_HYP_SYM(__bp_harden_hyp_vecs);
#define __bp_harden_hyp_vecs CHOOSE_HYP_SYM(__bp_harden_hyp_vecs)

@@ -234,7 +234,7 @@ enum kvm_pgtable_prot {
#define KVM_HOST_S2_DEFAULT_MMIO_PTE \
(KVM_HOST_S2_DEFAULT_MEM_PTE | \
KVM_PTE_LEAF_ATTR_HI_S2_XN)
FIELD_PREP(KVM_PTE_LEAF_ATTR_HI_S2_XN, KVM_PTE_LEAF_ATTR_HI_S2_XN_XN))
#define PAGE_HYP KVM_PGTABLE_PROT_RW
#define PAGE_HYP_EXEC (KVM_PGTABLE_PROT_R | KVM_PGTABLE_PROT_X)

@@ -414,10 +414,4 @@ static inline size_t pkvm_host_fp_state_size(void)
return sizeof(struct user_fpsimd_state);
}
static inline unsigned long hyp_host_fp_pages(unsigned long nr_cpus)
{
return PAGE_ALIGN(size_mul(nr_cpus, pkvm_host_fp_state_size())) >>
PAGE_SHIFT;
}
#endif /* __ARM64_KVM_PKVM_H__ */

@@ -113,13 +113,21 @@
#define OVERFLOW_STACK_SIZE SZ_4K
#if PAGE_SIZE == SZ_4K
#define NVHE_STACK_SHIFT (PAGE_SHIFT + 1)
#else
#define NVHE_STACK_SHIFT PAGE_SHIFT
#endif
#define NVHE_STACK_SIZE (UL(1) << NVHE_STACK_SHIFT)
/*
* With the minimum frame size of [x29, x30], exactly half the combined
* sizes of the hyp and overflow stacks is the maximum size needed to
* save the unwound stacktrace; plus an additional entry to delimit the
* end.
*/
#define NVHE_STACKTRACE_SIZE ((OVERFLOW_STACK_SIZE + PAGE_SIZE) / 2 + sizeof(long))
#define NVHE_STACKTRACE_SIZE ((OVERFLOW_STACK_SIZE + NVHE_STACK_SIZE) / 2 + sizeof(long))
/*
* Alignment of kernel segments (e.g. .text, .data).

@@ -47,7 +47,7 @@ static inline void kvm_nvhe_unwind_init(struct unwind_state *state,
DECLARE_KVM_NVHE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack);
DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_stacktrace_info, kvm_stacktrace_info);
DECLARE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
DECLARE_PER_CPU(unsigned long, kvm_arm_hyp_stack_base);
void kvm_nvhe_dump_backtrace(unsigned long hyp_offset);

@@ -113,7 +113,7 @@ KVM_NVHE_ALIAS(__hyp_data_start);
KVM_NVHE_ALIAS(__hyp_data_end);
KVM_NVHE_ALIAS(__hyp_rodata_start);
KVM_NVHE_ALIAS(__hyp_rodata_end);
#ifdef CONFIG_FTRACE
#ifdef CONFIG_TRACING
KVM_NVHE_ALIAS(__hyp_event_ids_start);
KVM_NVHE_ALIAS(__hyp_event_ids_end);
#endif

@@ -22,6 +22,8 @@
#include <asm/cputype.h>
#include <asm/topology.h>
#include <trace/hooks/topology.h>
#ifdef CONFIG_ACPI
static bool __init acpi_cpu_is_threaded(int cpu)
{
@@ -151,6 +153,11 @@ static void amu_scale_freq_tick(void)
{
u64 prev_core_cnt, prev_const_cnt;
u64 core_cnt, const_cnt, scale;
bool use_amu_fie = true;
trace_android_vh_use_amu_fie(&use_amu_fie);
if (!use_amu_fie)
return;
prev_const_cnt = this_cpu_read(arch_const_cycles_prev);
prev_core_cnt = this_cpu_read(arch_core_cycles_prev);

@@ -50,7 +50,7 @@ static enum kvm_mode kvm_mode = KVM_MODE_DEFAULT;
DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector);
DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_base);
DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
DECLARE_KVM_NVHE_PER_CPU(int, hyp_cpu_number);
@@ -1646,6 +1646,11 @@ static unsigned long nvhe_percpu_order(void)
return size ? get_order(size) : 0;
}
static inline size_t pkvm_host_fp_state_order(void)
{
return get_order(pkvm_host_fp_state_size());
}
/* A lookup table holding the hypervisor VA for each vector slot */
static void *hyp_spectre_vector_selector[BP_HARDEN_EL2_SLOTS];
@@ -2008,8 +2013,10 @@ static void teardown_hyp_mode(void)
free_hyp_pgds();
for_each_possible_cpu(cpu) {
free_page(per_cpu(kvm_arm_hyp_stack_page, cpu));
free_pages(per_cpu(kvm_arm_hyp_stack_base, cpu), NVHE_STACK_SHIFT - PAGE_SHIFT);
free_pages(kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu], nvhe_percpu_order());
free_pages(kvm_nvhe_sym(kvm_arm_hyp_host_fp_state)[cpu],
pkvm_host_fp_state_order());
}
}
@@ -2096,6 +2103,48 @@ static int kvm_hyp_init_protection(u32 hyp_va_bits)
return 0;
}
static int init_pkvm_host_fp_state(void)
{
int cpu;
if (!is_protected_kvm_enabled())
return 0;
/* Allocate pages for protected-mode host-fp state. */
for_each_possible_cpu(cpu) {
struct page *page;
unsigned long addr;
page = alloc_pages(GFP_KERNEL, pkvm_host_fp_state_order());
if (!page)
return -ENOMEM;
addr = (unsigned long)page_address(page);
kvm_nvhe_sym(kvm_arm_hyp_host_fp_state)[cpu] = addr;
}
/*
* Don't map the pages in hyp since these are only used in protected
* mode, which will (re)create its own mapping when initialized.
*/
return 0;
}
/*
* Finalizes the initialization of hyp mode, once everything else is initialized
* and the initialization process cannot fail.
*/
static void finalize_init_hyp_mode(void)
{
int cpu;
for_each_possible_cpu(cpu) {
kvm_nvhe_sym(kvm_arm_hyp_host_fp_state)[cpu] =
kern_hyp_va(kvm_nvhe_sym(kvm_arm_hyp_host_fp_state)[cpu]);
}
}
/**
* Initializes Hyp mode on all online CPUs
*/
@@ -2123,15 +2172,15 @@ static int init_hyp_mode(void)
* Allocate stack pages for Hypervisor-mode
*/
for_each_possible_cpu(cpu) {
unsigned long stack_page;
unsigned long stack_base;
stack_page = __get_free_page(GFP_KERNEL);
if (!stack_page) {
stack_base = __get_free_pages(GFP_KERNEL, NVHE_STACK_SHIFT - PAGE_SHIFT);
if (!stack_base) {
err = -ENOMEM;
goto out_err;
}
per_cpu(kvm_arm_hyp_stack_page, cpu) = stack_page;
per_cpu(kvm_arm_hyp_stack_base, cpu) = stack_base;
}
/*
@@ -2207,7 +2256,7 @@ static int init_hyp_mode(void)
*/
for_each_possible_cpu(cpu) {
struct kvm_nvhe_init_params *params = per_cpu_ptr_nvhe_sym(kvm_init_params, cpu);
char *stack_page = (char *)per_cpu(kvm_arm_hyp_stack_page, cpu);
char *stack_base = (char *)per_cpu(kvm_arm_hyp_stack_base, cpu);
unsigned long hyp_addr;
/*
@@ -2215,7 +2264,7 @@ static int init_hyp_mode(void)
* and guard page. The allocation is also aligned based on
* the order of its size.
*/
err = hyp_alloc_private_va_range(PAGE_SIZE * 2, &hyp_addr);
err = hyp_alloc_private_va_range(NVHE_STACK_SIZE * 2, &hyp_addr);
if (err) {
kvm_err("Cannot allocate hyp stack guard page\n");
goto out_err;
@@ -2226,12 +2275,12 @@ static int init_hyp_mode(void)
* at the higher address and leave the lower guard page
* unbacked.
*
* Any valid stack address now has the PAGE_SHIFT bit as 1
* Any valid stack address now has the NVHE_STACK_SHIFT bit as 1
* and addresses corresponding to the guard page have the
* PAGE_SHIFT bit as 0 - this is used for overflow detection.
* NVHE_STACK_SHIFT bit as 0 - this is used for overflow detection.
*/
err = __create_hyp_mappings(hyp_addr + PAGE_SIZE, PAGE_SIZE,
__pa(stack_page), PAGE_HYP);
err = __create_hyp_mappings(hyp_addr + NVHE_STACK_SIZE, NVHE_STACK_SIZE,
__pa(stack_base), PAGE_HYP);
if (err) {
kvm_err("Cannot map hyp stack\n");
goto out_err;
@@ -2243,9 +2292,9 @@ static int init_hyp_mode(void)
* __hyp_pa() won't do the right thing there, since the stack
* has been mapped in the flexible private VA space.
*/
params->stack_pa = __pa(stack_page);
params->stack_pa = __pa(stack_base);
params->stack_hyp_va = hyp_addr + (2 * PAGE_SIZE);
params->stack_hyp_va = hyp_addr + (2 * NVHE_STACK_SIZE);
}
for_each_possible_cpu(cpu) {
@@ -2263,6 +2312,10 @@ static int init_hyp_mode(void)
cpu_prepare_hyp_mode(cpu);
}
err = init_pkvm_host_fp_state();
if (err)
goto out_err;
kvm_hyp_init_symbols();
/* TODO: Real .h interface */
@@ -2421,6 +2474,13 @@ int kvm_arch_init(void *opaque)
kvm_info("Hyp mode initialized successfully\n");
}
/*
* This should be called after initialization is done and failure isn't
* possible anymore.
*/
if (!in_hyp_mode)
finalize_init_hyp_mode();
return 0;
out_hyp:

@@ -10,7 +10,7 @@ int main(void)
DEFINE(STRUCT_HYP_PAGE_SIZE, sizeof(struct hyp_page));
DEFINE(PKVM_HYP_VM_SIZE, sizeof(struct pkvm_hyp_vm));
DEFINE(PKVM_HYP_VCPU_SIZE, sizeof(struct pkvm_hyp_vcpu));
#ifdef CONFIG_FTRACE
#ifdef CONFIG_TRACING
DEFINE(STRUCT_HYP_BUFFER_PAGE_SIZE, sizeof(struct hyp_buffer_page));
#endif
return 0;

@@ -82,8 +82,6 @@ struct pkvm_hyp_vm {
struct pkvm_hyp_vcpu *vcpus[];
};
extern void *host_fp_state;
static inline struct pkvm_hyp_vm *
pkvm_hyp_vcpu_to_hyp_vm(struct pkvm_hyp_vcpu *hyp_vcpu)
{
@@ -107,7 +105,6 @@ extern phys_addr_t pvmfw_base;
extern phys_addr_t pvmfw_size;
void pkvm_hyp_vm_table_init(void *tbl);
void pkvm_hyp_host_fp_init(void *host_fp);
int __pkvm_init_vm(struct kvm *host_kvm, unsigned long vm_hva,
unsigned long pgd_hva, unsigned long last_ran_hva);

@@ -154,12 +154,12 @@ SYM_FUNC_END(__host_hvc)
/*
* Test whether the SP has overflowed, without corrupting a GPR.
* nVHE hypervisor stacks are aligned so that the PAGE_SHIFT bit
* nVHE hypervisor stacks are aligned so that the NVHE_STACK_SHIFT bit
* of SP should always be 1.
*/
add sp, sp, x0 // sp' = sp + x0
sub x0, sp, x0 // x0' = sp' - x0 = (sp + x0) - x0 = sp
tbz x0, #PAGE_SHIFT, .L__hyp_sp_overflow\@
tbz x0, #NVHE_STACK_SHIFT, .L__hyp_sp_overflow\@
sub x0, sp, x0 // x0'' = sp' - x0' = (sp + x0) - sp = x0
sub sp, sp, x0 // sp'' = sp' - x0 = (sp + x0) - x0 = sp

@@ -1383,11 +1383,15 @@ static void handle_host_smc(struct kvm_cpu_context *host_ctxt)
handled = kvm_host_ffa_handler(host_ctxt);
if (!handled && smp_load_acquire(&default_host_smc_handler))
handled = default_host_smc_handler(host_ctxt);
if (!handled)
__kvm_hyp_host_forward_smc(host_ctxt);
trace_host_smc(func_id, !handled);
if (!handled) {
trace_hyp_exit();
__kvm_hyp_host_forward_smc(host_ctxt);
trace_hyp_enter();
}
/* SMC was trapped, move ELR past the current PC. */
kvm_skip_host_instr();
}

@@ -1048,9 +1048,20 @@ static int __host_check_page_state_range(u64 addr, u64 size,
static int __host_set_page_state_range(u64 addr, u64 size,
enum pkvm_page_state state)
{
bool update_iommu = true;
enum kvm_pgtable_prot prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, state);
return host_stage2_idmap_locked(addr, size, prot, true);
/*
* Sharing and unsharing host pages shouldn't change the IOMMU page tables,
* so avoid the extra page-table walks for the IOMMU. Note that this will
* not hold once device assignment is supported, since the guest might then
* have DMA access; as Android 14 doesn't support device assignment, this
* is fine.
*/
if ((state == PKVM_PAGE_OWNED) || (state == PKVM_PAGE_SHARED_OWNED))
update_iommu = false;
return host_stage2_idmap_locked(addr, size, prot, update_iommu);
}
static int host_request_owned_transition(u64 *completer_addr,
@@ -2038,6 +2049,7 @@ static int restrict_host_page_perms(u64 addr, kvm_pte_t pte, u32 level, enum kvm
}
#define MODULE_PROT_ALLOWLIST (KVM_PGTABLE_PROT_RWX | \
KVM_PGTABLE_PROT_DEVICE |\
KVM_PGTABLE_PROT_NC | \
KVM_PGTABLE_PROT_PXN | \
KVM_PGTABLE_PROT_UXN)

@@ -41,17 +41,15 @@ static DEFINE_PER_CPU(struct pkvm_hyp_vcpu *, loaded_hyp_vcpu);
*
* Only valid when (fp_state == FP_STATE_GUEST_OWNED) in the hyp vCPU structure.
*/
void *host_fp_state;
unsigned long __ro_after_init kvm_arm_hyp_host_fp_state[NR_CPUS];
static void *__get_host_fpsimd_bytes(void)
{
void *state = host_fp_state +
size_mul(pkvm_host_fp_state_size(), hyp_smp_processor_id());
if (state < host_fp_state)
return NULL;
return state;
/*
* The addresses in this array have been converted to hyp addresses
* in finalize_init_hyp_mode().
*/
return (void *)kvm_arm_hyp_host_fp_state[hyp_smp_processor_id()];
}
struct user_fpsimd_state *get_host_fpsimd_state(struct kvm_vcpu *vcpu)
@@ -295,12 +293,6 @@ void pkvm_hyp_vm_table_init(void *tbl)
vm_table = tbl;
}
void pkvm_hyp_host_fp_init(void *host_fp)
{
WARN_ON(host_fp_state);
host_fp_state = host_fp;
}
/*
* Return the hyp vm structure corresponding to the handle.
*/

@@ -34,7 +34,6 @@ static void *vm_table_base;
static void *hyp_pgt_base;
static void *host_s2_pgt_base;
static void *ffa_proxy_pages;
static void *hyp_host_fp_base;
static struct kvm_pgtable_mm_ops pkvm_pgtable_mm_ops;
static struct hyp_pool hpool;
@@ -69,10 +68,21 @@ static int divide_memory_pool(void *virt, unsigned long size)
if (!ffa_proxy_pages)
return -ENOMEM;
nr_pages = hyp_host_fp_pages(hyp_nr_cpus);
hyp_host_fp_base = hyp_early_alloc_contig(nr_pages);
if (!hyp_host_fp_base)
return -ENOMEM;
return 0;
}
static int create_hyp_host_fp_mappings(void)
{
void *start, *end;
int ret, i;
for (i = 0; i < hyp_nr_cpus; i++) {
start = (void *)kern_hyp_va(kvm_arm_hyp_host_fp_state[i]);
end = start + PAGE_ALIGN(pkvm_host_fp_state_size());
ret = pkvm_create_mappings(start, end, PAGE_HYP);
if (ret)
return ret;
}
return 0;
}
@@ -140,7 +150,7 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
* and guard page. The allocation is also aligned based on
* the order of its size.
*/
ret = pkvm_alloc_private_va_range(PAGE_SIZE * 2, &hyp_addr);
ret = pkvm_alloc_private_va_range(NVHE_STACK_SIZE * 2, &hyp_addr);
if (ret)
return ret;
@@ -149,21 +159,23 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
* at the higher address and leave the lower guard page
* unbacked.
*
* Any valid stack address now has the PAGE_SHIFT bit as 1
* Any valid stack address now has the NVHE_STACK_SHIFT bit as 1
* and addresses corresponding to the guard page have the
* PAGE_SHIFT bit as 0 - this is used for overflow detection.
* NVHE_STACK_SHIFT bit as 0 - this is used for overflow detection.
*/
hyp_spin_lock(&pkvm_pgd_lock);
ret = kvm_pgtable_hyp_map(&pkvm_pgtable, hyp_addr + PAGE_SIZE,
PAGE_SIZE, params->stack_pa, PAGE_HYP);
ret = kvm_pgtable_hyp_map(&pkvm_pgtable, hyp_addr + NVHE_STACK_SIZE,
NVHE_STACK_SIZE, params->stack_pa, PAGE_HYP);
hyp_spin_unlock(&pkvm_pgd_lock);
if (ret)
return ret;
/* Update stack_hyp_va to end of the stack's private VA range */
params->stack_hyp_va = hyp_addr + (2 * PAGE_SIZE);
params->stack_hyp_va = hyp_addr + (2 * NVHE_STACK_SIZE);
}
create_hyp_host_fp_mappings();
/*
* Map the host sections RO in the hypervisor, but transfer the
* ownership from the host to the hypervisor itself to make sure they
@@ -405,7 +417,6 @@ void __noreturn __pkvm_init_finalise(void)
goto out;
pkvm_hyp_vm_table_init(vm_table_base);
pkvm_hyp_host_fp_init(hyp_host_fp_base);
out:
/*
* We tail-called to here from handle___pkvm_init() and will not return,

@@ -28,7 +28,7 @@ static void hyp_prepare_backtrace(unsigned long fp, unsigned long pc)
struct kvm_nvhe_stacktrace_info *stacktrace_info = this_cpu_ptr(&kvm_stacktrace_info);
struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
stacktrace_info->stack_base = (unsigned long)(params->stack_hyp_va - PAGE_SIZE);
stacktrace_info->stack_base = (unsigned long)(params->stack_hyp_va - NVHE_STACK_SIZE);
stacktrace_info->overflow_stack_base = (unsigned long)this_cpu_ptr(overflow_stack);
stacktrace_info->fp = fp;
stacktrace_info->pc = pc;
@@ -54,7 +54,7 @@ static struct stack_info stackinfo_get_hyp(void)
{
struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
unsigned long high = params->stack_hyp_va;
unsigned long low = high - PAGE_SIZE;
unsigned long low = high - NVHE_STACK_SIZE;
return (struct stack_info) {
.low = low,

@@ -701,7 +701,7 @@ static int get_user_mapping_size(struct kvm *kvm, u64 addr)
static bool stage2_force_pte_cb(u64 addr, u64 end, enum kvm_pgtable_prot prot)
{
return true;
return false;
}
static bool stage2_pte_is_counted(kvm_pte_t pte, u32 level)

@@ -173,7 +173,6 @@ void __init kvm_hyp_reserve(void)
hyp_mem_pages += hyp_vm_table_pages();
hyp_mem_pages += hyp_vmemmap_pages(STRUCT_HYP_PAGE_SIZE);
hyp_mem_pages += hyp_ffa_proxy_pages();
hyp_mem_pages += hyp_host_fp_pages(num_possible_cpus());
/*
* Try to allocate a PMD-aligned region to reduce TLB pressure once
@@ -504,10 +503,6 @@ static int __init finalize_pkvm(void)
if (pkvm_load_early_modules())
pkvm_firmware_rmem_clear();
/* If no DMA protection. */
if (!pkvm_iommu_finalized())
pkvm_firmware_rmem_clear();
/*
* Exclude HYP sections from kmemleak so that they don't get peeked
* at, which would end badly once inaccessible.
@@ -516,6 +511,12 @@ static int __init finalize_pkvm(void)
kmemleak_free_part(__hyp_data_start, __hyp_data_end - __hyp_data_start);
kmemleak_free_part_phys(hyp_mem_base, hyp_mem_size);
flush_deferred_probe_now();
/* If no DMA protection. */
if (!pkvm_iommu_finalized())
pkvm_firmware_rmem_clear();
ret = pkvm_drop_host_privileges();
if (ret) {
pr_err("Failed to de-privilege the host kernel: %d\n", ret);

@@ -50,7 +50,7 @@ static struct stack_info stackinfo_get_hyp(void)
struct kvm_nvhe_stacktrace_info *stacktrace_info
= this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
unsigned long low = (unsigned long)stacktrace_info->stack_base;
unsigned long high = low + PAGE_SIZE;
unsigned long high = low + NVHE_STACK_SIZE;
return (struct stack_info) {
.low = low,
@@ -60,8 +60,8 @@ static struct stack_info stackinfo_get_hyp_kern_va(void)
static struct stack_info stackinfo_get_hyp_kern_va(void)
{
unsigned long low = (unsigned long)*this_cpu_ptr(&kvm_arm_hyp_stack_page);
unsigned long high = low + PAGE_SIZE;
unsigned long low = (unsigned long)*this_cpu_ptr(&kvm_arm_hyp_stack_base);
unsigned long high = low + NVHE_STACK_SIZE;
return (struct stack_info) {
.low = low,

@@ -90,6 +90,7 @@ CONFIG_MODULE_SIG_PROTECT=y
CONFIG_BLK_DEV_ZONED=y
CONFIG_BLK_DEV_THROTTLING=y
CONFIG_BLK_CGROUP_IOCOST=y
CONFIG_BLK_CGROUP_IOPRIO=y
CONFIG_BLK_INLINE_ENCRYPTION=y
CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK=y
CONFIG_IOSCHED_BFQ=y

@@ -77,6 +77,10 @@ static void ioc_destroy_icq(struct io_cq *icq)
struct elevator_type *et = q->elevator->type;
lockdep_assert_held(&ioc->lock);
lockdep_assert_held(&q->queue_lock);
if (icq->flags & ICQ_DESTROYED)
return;
radix_tree_delete(&ioc->icq_tree, icq->q->id);
hlist_del_init(&icq->ioc_node);
@@ -128,11 +132,6 @@ static void ioc_release_fn(struct work_struct *work)
spin_lock(&q->queue_lock);
spin_lock(&ioc->lock);
/*
* The icq may have been destroyed when the ioc lock
* was released.
*/
if (!(icq->flags & ICQ_DESTROYED))
ioc_destroy_icq(icq);
spin_unlock(&q->queue_lock);
@@ -171,23 +170,20 @@ static bool ioc_delay_free(struct io_context *ioc)
*/
void ioc_clear_queue(struct request_queue *q)
{
LIST_HEAD(icq_list);
spin_lock_irq(&q->queue_lock);
list_splice_init(&q->icq_list, &icq_list);
spin_unlock_irq(&q->queue_lock);
rcu_read_lock();
while (!list_empty(&icq_list)) {
while (!list_empty(&q->icq_list)) {
struct io_cq *icq =
list_entry(icq_list.next, struct io_cq, q_node);
list_first_entry(&q->icq_list, struct io_cq, q_node);
spin_lock_irq(&icq->ioc->lock);
if (!(icq->flags & ICQ_DESTROYED))
/*
* Other contexts won't hold the ioc lock while waiting for the
* queue_lock; see ioc_release_fn() for details.
*/
spin_lock(&icq->ioc->lock);
ioc_destroy_icq(icq);
spin_unlock_irq(&icq->ioc->lock);
spin_unlock(&icq->ioc->lock);
}
rcu_read_unlock();
spin_unlock_irq(&q->queue_lock);
}
#else /* CONFIG_BLK_ICQ */
static inline void ioc_exit_icqs(struct io_context *ioc)

@@ -23,25 +23,28 @@
/**
* enum prio_policy - I/O priority class policy.
* @POLICY_NO_CHANGE: (default) do not modify the I/O priority class.
* @POLICY_NONE_TO_RT: modify IOPRIO_CLASS_NONE into IOPRIO_CLASS_RT.
* @POLICY_PROMOTE_TO_RT: promote any I/O priority class other than IOPRIO_CLASS_RT to IOPRIO_CLASS_RT.
* @POLICY_RESTRICT_TO_BE: modify IOPRIO_CLASS_NONE and IOPRIO_CLASS_RT into
* IOPRIO_CLASS_BE.
* @POLICY_ALL_TO_IDLE: change the I/O priority class into IOPRIO_CLASS_IDLE.
* @POLICY_NONE_TO_RT: an alias for POLICY_PROMOTE_TO_RT.
*
* See also <linux/ioprio.h>.
*/
enum prio_policy {
POLICY_NO_CHANGE = 0,
POLICY_NONE_TO_RT = 1,
POLICY_PROMOTE_TO_RT = 1,
POLICY_RESTRICT_TO_BE = 2,
POLICY_ALL_TO_IDLE = 3,
POLICY_NONE_TO_RT = 4,
};
static const char *policy_name[] = {
[POLICY_NO_CHANGE] = "no-change",
[POLICY_NONE_TO_RT] = "none-to-rt",
[POLICY_PROMOTE_TO_RT] = "promote-to-rt",
[POLICY_RESTRICT_TO_BE] = "restrict-to-be",
[POLICY_ALL_TO_IDLE] = "idle",
[POLICY_NONE_TO_RT] = "none-to-rt",
};
static struct blkcg_policy ioprio_policy;
@@ -189,6 +192,20 @@ void blkcg_set_ioprio(struct bio *bio)
if (!blkcg || blkcg->prio_policy == POLICY_NO_CHANGE)
return;
if (blkcg->prio_policy == POLICY_PROMOTE_TO_RT ||
blkcg->prio_policy == POLICY_NONE_TO_RT) {
/*
* For RT threads, the default priority level is 4 because task_nice
* is 0. Non-RT I/O priorities are promoted to the RT class at that
* default level 4; requests that are already RT class but need a
* higher I/O priority can raise it with ioprio_set().
*/
if (IOPRIO_PRIO_CLASS(bio->bi_ioprio) != IOPRIO_CLASS_RT)
bio->bi_ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_RT, 4);
return;
}
/*
* Except for IOPRIO_CLASS_NONE, higher I/O priority numbers
* correspond to a lower priority. Hence, the max_t() below selects

build.config.rockchip Normal file

@@ -0,0 +1,15 @@
. ${ROOT_DIR}/${KERNEL_DIR}/build.config.common
. ${ROOT_DIR}/${KERNEL_DIR}/build.config.aarch64
BUILD_INITRAMFS=1
LZ4_RAMDISK=1
DEFCONFIG=rockchip_gki_defconfig
FRAGMENT_CONFIG=${KERNEL_DIR}/arch/arm64/configs/rockchip_gki.fragment
PRE_DEFCONFIG_CMDS="KCONFIG_CONFIG=${ROOT_DIR}/${KERNEL_DIR}/arch/arm64/configs/${DEFCONFIG} ${ROOT_DIR}/${KERNEL_DIR}/scripts/kconfig/merge_config.sh -m -r ${ROOT_DIR}/${KERNEL_DIR}/arch/arm64/configs/gki_defconfig ${ROOT_DIR}/${FRAGMENT_CONFIG}"
POST_DEFCONFIG_CMDS="rm ${ROOT_DIR}/${KERNEL_DIR}/arch/arm64/configs/${DEFCONFIG}"
DTC_INCLUDE=${ROOT_DIR}/${KERNEL_DIR}/arch/arm64/boot/dts/rockchip
FILES="${FILES}
arch/arm64/boot/dts/rockchip/rk3588*.dtb
"

@@ -17,9 +17,9 @@
* related macros to be expanded as they would be for built-in code; e.g.,
* module_init() adds the function to the .initcalls section of the binary.
*
* The .c file that contains the real module_init() for fips140.ko is then
* responsible for redefining MODULE, and the real module_init() is responsible
* for executing all the initcalls that were collected into .initcalls.
* The .c files that contain the real module_init, module license, and module
* parameters for fips140.ko are then responsible for redefining MODULE. The
* real module_init executes all initcalls that were collected into .initcalls.
*/
#undef MODULE

@@ -20,6 +20,14 @@
__inline_maybe_unused notrace
#undef BUILD_FIPS140_KO
/*
* Since this .c file contains real module parameters for fips140.ko, it needs
* to be compiled normally, so undo the hacks that were done in fips140-defs.h.
*/
#define MODULE
#undef KBUILD_MODFILE
#undef __DISABLE_EXPORTS
#include <linux/cdev.h>
#include <linux/fs.h>
#include <linux/module.h>

@@ -1761,6 +1761,7 @@ static void binder_free_transaction(struct binder_transaction *t)
{
struct binder_proc *target_proc = t->to_proc;
trace_android_vh_free_oem_binder_struct(t);
if (target_proc) {
binder_inner_proc_lock(target_proc);
target_proc->outstanding_txns--;
@@ -2931,6 +2932,7 @@ static int binder_proc_transaction(struct binder_transaction *t,
bool pending_async = false;
struct binder_transaction *t_outdated = NULL;
bool skip = false;
bool enqueue_task = true;
BUG_ON(!node);
binder_node_lock(node);
@@ -2970,6 +2972,9 @@ static int binder_proc_transaction(struct binder_transaction *t,
binder_transaction_priority(thread, t, node);
binder_enqueue_thread_work_ilocked(thread, &t->work);
} else if (!pending_async) {
trace_android_vh_binder_special_task(t, proc, thread,
&t->work, &proc->todo, !oneway, &enqueue_task);
if (enqueue_task)
binder_enqueue_work_ilocked(&t->work, &proc->todo);
} else {
if ((t->flags & TF_UPDATE_TXN) && proc->is_frozen) {
@@ -2983,6 +2988,9 @@ static int binder_proc_transaction(struct binder_transaction *t,
proc->outstanding_txns--;
}
}
trace_android_vh_binder_special_task(t, proc, thread,
&t->work, &node->async_todo, !oneway, &enqueue_task);
if (enqueue_task)
binder_enqueue_work_ilocked(&t->work, &node->async_todo);
}
@@ -3460,6 +3468,7 @@ static void binder_transaction(struct binder_proc *proc,
t->buffer->target_node = target_node;
t->buffer->clear_on_free = !!(t->flags & TF_CLEAR_BUF);
trace_binder_transaction_alloc_buf(t->buffer);
trace_android_vh_alloc_oem_binder_struct(tr, t, target_proc);
if (binder_alloc_copy_user_to_buffer(
&target_proc->alloc,
@@ -3964,6 +3973,9 @@ binder_free_buf(struct binder_proc *proc,
struct binder_thread *thread,
struct binder_buffer *buffer, bool is_failure)
{
bool enqueue_task = true;
trace_android_vh_binder_free_buf(proc, thread, buffer);
binder_inner_proc_lock(proc);
if (buffer->transaction) {
buffer->transaction->buffer = NULL;
@@ -3983,8 +3995,10 @@ binder_free_buf(struct binder_proc *proc,
if (!w) {
buf_node->has_async_transaction = false;
} else {
binder_enqueue_work_ilocked(
w, &proc->todo);
trace_android_vh_binder_special_task(NULL, proc, thread, w,
&proc->todo, false, &enqueue_task);
if (enqueue_task)
binder_enqueue_work_ilocked(w, &proc->todo);
binder_wakeup_proc_ilocked(proc);
}
binder_node_inner_unlock(buf_node);
@@ -4926,6 +4940,7 @@ static int binder_thread_read(struct binder_proc *proc,
ptr += trsize;
trace_binder_transaction_received(t);
trace_android_vh_binder_transaction_received(t, proc, thread, cmd);
binder_stat_br(proc, thread, cmd);
binder_debug(BINDER_DEBUG_TRANSACTION,
"%d:%d %s %d %d:%d, cmd %u size %zd-%zd ptr %016llx-%016llx\n",


@@ -67,7 +67,8 @@
#include <trace/hooks/psi.h>
#include <trace/hooks/bl_hib.h>
#include <trace/hooks/regmap.h>
#include <trace/hooks/compaction.h>
#include <trace/hooks/suspend.h>
/*
* Export tracepoints that act as a bare tracehook (ie: have no trace event
* associated with them) to allow external modules to probe them.
@@ -85,6 +86,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_set_priority);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_restore_priority);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_wakeup_ilocked);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_do_send_sig_info);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_killed_process);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_mutex_wait_start);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_mutex_wait_finish);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_mutex_init);
@@ -136,6 +138,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_ufs_update_sysfs);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_ufs_send_command);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_ufs_compl_command);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_cgroup_set_task);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_cgroup_force_kthread_migration);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_syscall_prctl_finished);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_ufs_send_uic_command);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_ufs_send_tm_command);
@@ -180,12 +183,14 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_record_mutex_lock_starttime);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_record_rtmutex_lock_starttime);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_record_rwsem_lock_starttime);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_record_pcpu_rwsem_starttime);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_percpu_rwsem_wq_add);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_set_module_core_rw_nx);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_set_module_init_rw_nx);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_set_module_permit_before_init);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_set_module_permit_after_init);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_selinux_is_initialized);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_shmem_get_folio);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_record_pcpu_rwsem_time_early);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_check_mmap_file);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_check_file_open);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_check_bpf_syscall);
@@ -249,6 +254,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_set_readahead_gfp_mask);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_alter_mutex_list_add);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_mutex_unlock_slowpath);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_rwsem_wake_finish);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_adjust_alloc_flags);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_looper_state_registered);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_thread_read);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_free_proc);
@@ -320,3 +326,28 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_look_around);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_look_around_migrate_folio);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_test_clear_look_around_ref);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_tune_scan_type);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_tune_swappiness);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_exit_signal_whether_wake);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_exit_check);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_freeze_whether_wake);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_use_amu_fie);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_scan_abort_check_wmarks);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_alloc_oem_binder_struct);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_transaction_received);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_free_oem_binder_struct);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_special_task);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_free_buf);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_compaction_exit);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_compaction_try_to_compact_pages_exit);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_mm_alloc_pages_direct_reclaim_enter);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_mm_alloc_pages_direct_reclaim_exit);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_mm_alloc_pages_may_oom_exit);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_vmscan_kswapd_done);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_mm_compaction_begin);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_mm_compaction_end);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_bus_iommu_probe);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_rmqueue);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_resume_begin);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_resume_end);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_early_resume_begin);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_filemap_get_folio);


@@ -34,6 +34,12 @@ static DEFINE_PER_CPU(u32, freq_factor) = 1;
static bool supports_scale_freq_counters(const struct cpumask *cpus)
{
bool use_amu_fie = true;
trace_android_vh_use_amu_fie(&use_amu_fie);
if (!use_amu_fie)
return false;
return cpumask_subset(cpus, &scale_freq_counters_mask);
}


@@ -761,6 +761,29 @@ void wait_for_device_probe(void)
}
EXPORT_SYMBOL_GPL(wait_for_device_probe);
/**
* flush_deferred_probe_now - flush the deferred probe list immediately
*
* This function should be used sparingly. It's meant for when we need to flush
* the deferred probe list at earlier initcall levels. Really meant only for KVM
* needs. This function should never be exported because it makes no sense for
* modules to call this.
*/
void flush_deferred_probe_now(void)
{
/*
* This really should not be used if deferred probing has already been
* enabled.
*/
if (WARN_ON(driver_deferred_probe_enable))
return;
driver_deferred_probe_enable = true;
driver_deferred_probe_trigger();
wait_for_device_probe();
driver_deferred_probe_enable = false;
}
static int __driver_probe_device(struct device_driver *drv, struct device *dev)
{
int ret = 0;


@@ -126,6 +126,7 @@ void clk_fractional_divider_general_approximation(struct clk_hw *hw,
GENMASK(fd->mwidth - 1, 0), GENMASK(fd->nwidth - 1, 0),
m, n);
}
EXPORT_SYMBOL_GPL(clk_fractional_divider_general_approximation);
static long clk_fd_round_rate(struct clk_hw *hw, unsigned long rate,
unsigned long *parent_rate)


@@ -202,6 +202,19 @@ struct teo_cpu {
static DEFINE_PER_CPU(struct teo_cpu, teo_cpus);
unsigned long teo_cpu_get_util_threshold(int cpu)
{
struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, cpu);
return cpu_data->util_threshold;
}
EXPORT_SYMBOL_GPL(teo_cpu_get_util_threshold);
void teo_cpu_set_util_threshold(int cpu, unsigned long util)
{
struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, cpu);
cpu_data->util_threshold = util;
}
EXPORT_SYMBOL_GPL(teo_cpu_set_util_threshold);
/**
* teo_cpu_is_utilized - Check if the CPU's util is above the threshold
* @cpu: Target CPU
@@ -397,14 +410,24 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
* the shallowest non-polling state and exit.
*/
if (drv->state_count < 3 && cpu_data->utilized) {
for (i = 0; i < drv->state_count; ++i) {
if (!dev->states_usage[i].disable &&
!(drv->states[i].flags & CPUIDLE_FLAG_POLLING)) {
idx = i;
/* The CPU is utilized, so assume a short idle duration. */
duration_ns = teo_middle_of_bin(0, drv);
/*
* If state 0 is enabled and it is not a polling one, select it
* right away unless the scheduler tick has been stopped, in
* which case care needs to be taken to leave the CPU in a deep
* enough state in case it is not woken up any time soon after
* all. If state 1 is disabled, though, state 0 must be used
* anyway.
*/
if ((!idx && !(drv->states[0].flags & CPUIDLE_FLAG_POLLING) &&
teo_time_ok(duration_ns)) || dev->states_usage[1].disable)
idx = 0;
else /* Assume that state 1 is not a polling one and use it. */
idx = 1;
goto end;
}
}
}
/*
* Find the deepest idle state whose target residency does not exceed
@@ -539,10 +562,20 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
/*
* If the CPU is being utilized over the threshold, choose a shallower
* non-polling state to improve latency
* non-polling state to improve latency, unless the scheduler tick has
* been stopped already and the shallower state's target residency is
* not sufficiently large.
*/
if (cpu_data->utilized)
idx = teo_find_shallower_state(drv, dev, idx, duration_ns, true);
if (cpu_data->utilized) {
s64 span_ns;
i = teo_find_shallower_state(drv, dev, idx, duration_ns, true);
span_ns = teo_middle_of_bin(i, drv);
if (teo_time_ok(span_ns)) {
idx = i;
duration_ns = span_ns;
}
}
end:
/*


@@ -1,5 +1,6 @@
CONFIG_KUNIT=y
CONFIG_USB=y
CONFIG_USB_HID=y
CONFIG_HID_BATTERY_STRENGTH=y
CONFIG_HID_UCLOGIC=y
CONFIG_HID_KUNIT_TEST=y


@@ -1263,6 +1263,7 @@ config HID_MCP2221
config HID_KUNIT_TEST
tristate "KUnit tests for HID" if !KUNIT_ALL_TESTS
depends on KUNIT
depends on HID_BATTERY_STRENGTH
depends on HID_UCLOGIC
default KUNIT_ALL_TESTS
help


@@ -0,0 +1,80 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* HID to Linux Input mapping
*
* Copyright (c) 2022 José Expósito <jose.exposito89@gmail.com>
*/
#include <kunit/test.h>
static void hid_test_input_set_battery_charge_status(struct kunit *test)
{
struct hid_device *dev;
bool handled;
dev = kunit_kzalloc(test, sizeof(*dev), GFP_KERNEL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev);
handled = hidinput_set_battery_charge_status(dev, HID_DG_HEIGHT, 0);
KUNIT_EXPECT_FALSE(test, handled);
KUNIT_EXPECT_EQ(test, dev->battery_charge_status, POWER_SUPPLY_STATUS_UNKNOWN);
handled = hidinput_set_battery_charge_status(dev, HID_BAT_CHARGING, 0);
KUNIT_EXPECT_TRUE(test, handled);
KUNIT_EXPECT_EQ(test, dev->battery_charge_status, POWER_SUPPLY_STATUS_DISCHARGING);
handled = hidinput_set_battery_charge_status(dev, HID_BAT_CHARGING, 1);
KUNIT_EXPECT_TRUE(test, handled);
KUNIT_EXPECT_EQ(test, dev->battery_charge_status, POWER_SUPPLY_STATUS_CHARGING);
}
static void hid_test_input_get_battery_property(struct kunit *test)
{
struct power_supply *psy;
struct hid_device *dev;
union power_supply_propval val;
int ret;
dev = kunit_kzalloc(test, sizeof(*dev), GFP_KERNEL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev);
dev->battery_avoid_query = true;
psy = kunit_kzalloc(test, sizeof(*psy), GFP_KERNEL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, psy);
psy->drv_data = dev;
dev->battery_status = HID_BATTERY_UNKNOWN;
dev->battery_charge_status = POWER_SUPPLY_STATUS_CHARGING;
ret = hidinput_get_battery_property(psy, POWER_SUPPLY_PROP_STATUS, &val);
KUNIT_EXPECT_EQ(test, ret, 0);
KUNIT_EXPECT_EQ(test, val.intval, POWER_SUPPLY_STATUS_UNKNOWN);
dev->battery_status = HID_BATTERY_REPORTED;
dev->battery_charge_status = POWER_SUPPLY_STATUS_CHARGING;
ret = hidinput_get_battery_property(psy, POWER_SUPPLY_PROP_STATUS, &val);
KUNIT_EXPECT_EQ(test, ret, 0);
KUNIT_EXPECT_EQ(test, val.intval, POWER_SUPPLY_STATUS_CHARGING);
dev->battery_status = HID_BATTERY_REPORTED;
dev->battery_charge_status = POWER_SUPPLY_STATUS_DISCHARGING;
ret = hidinput_get_battery_property(psy, POWER_SUPPLY_PROP_STATUS, &val);
KUNIT_EXPECT_EQ(test, ret, 0);
KUNIT_EXPECT_EQ(test, val.intval, POWER_SUPPLY_STATUS_DISCHARGING);
}
static struct kunit_case hid_input_tests[] = {
KUNIT_CASE(hid_test_input_set_battery_charge_status),
KUNIT_CASE(hid_test_input_get_battery_property),
{ }
};
static struct kunit_suite hid_input_test_suite = {
.name = "hid_input",
.test_cases = hid_input_tests,
};
kunit_test_suite(hid_input_test_suite);
MODULE_DESCRIPTION("HID input KUnit tests");
MODULE_LICENSE("GPL");
MODULE_AUTHOR("José Expósito <jose.exposito89@gmail.com>");


@@ -492,7 +492,7 @@ static int hidinput_get_battery_property(struct power_supply *psy,
if (dev->battery_status == HID_BATTERY_UNKNOWN)
val->intval = POWER_SUPPLY_STATUS_UNKNOWN;
else
val->intval = POWER_SUPPLY_STATUS_DISCHARGING;
val->intval = dev->battery_charge_status;
break;
case POWER_SUPPLY_PROP_SCOPE:
@@ -560,6 +560,7 @@ static int hidinput_setup_battery(struct hid_device *dev, unsigned report_type,
dev->battery_max = max;
dev->battery_report_type = report_type;
dev->battery_report_id = field->report->id;
dev->battery_charge_status = POWER_SUPPLY_STATUS_DISCHARGING;
/*
* Stylus is normally not connected to the device and thus we
@@ -626,6 +627,20 @@ static void hidinput_update_battery(struct hid_device *dev, int value)
power_supply_changed(dev->battery);
}
}
static bool hidinput_set_battery_charge_status(struct hid_device *dev,
unsigned int usage, int value)
{
switch (usage) {
case HID_BAT_CHARGING:
dev->battery_charge_status = value ?
POWER_SUPPLY_STATUS_CHARGING :
POWER_SUPPLY_STATUS_DISCHARGING;
return true;
}
return false;
}
#else /* !CONFIG_HID_BATTERY_STRENGTH */
static int hidinput_setup_battery(struct hid_device *dev, unsigned report_type,
struct hid_field *field, bool is_percentage)
@@ -640,6 +655,12 @@ static void hidinput_cleanup_battery(struct hid_device *dev)
static void hidinput_update_battery(struct hid_device *dev, int value)
{
}
static bool hidinput_set_battery_charge_status(struct hid_device *dev,
unsigned int usage, int value)
{
return false;
}
#endif /* CONFIG_HID_BATTERY_STRENGTH */
static bool hidinput_field_in_collection(struct hid_device *device, struct hid_field *field,
@@ -1239,6 +1260,9 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
hidinput_setup_battery(device, HID_INPUT_REPORT, field, true);
usage->type = EV_PWR;
return;
case HID_BAT_CHARGING:
usage->type = EV_PWR;
return;
}
goto unknown;
case HID_UP_CAMERA:
@@ -1491,7 +1515,11 @@ void hidinput_hid_event(struct hid_device *hid, struct hid_field *field, struct
return;
if (usage->type == EV_PWR) {
bool handled = hidinput_set_battery_charge_status(hid, usage->hid, value);
if (!handled)
hidinput_update_battery(hid, value);
return;
}
@@ -2356,3 +2384,7 @@ void hidinput_disconnect(struct hid_device *hid)
cancel_work_sync(&hid->led_work);
}
EXPORT_SYMBOL_GPL(hidinput_disconnect);
#ifdef CONFIG_HID_KUNIT_TEST
#include "hid-input-test.c"
#endif


@@ -2060,10 +2060,6 @@ static struct protection_domain *protection_domain_alloc(unsigned int type)
int mode = DEFAULT_PGTABLE_LEVEL;
int ret;
domain = kzalloc(sizeof(*domain), GFP_KERNEL);
if (!domain)
return NULL;
/*
* Force IOMMU v1 page table when iommu=pt and
* when allocating domain for pass-through devices.
@@ -2079,6 +2075,10 @@ static struct protection_domain *protection_domain_alloc(unsigned int type)
return NULL;
}
domain = kzalloc(sizeof(*domain), GFP_KERNEL);
if (!domain)
return NULL;
switch (pgtable) {
case AMD_IOMMU_V1:
ret = protection_domain_init_v1(domain, mode);


@@ -23,6 +23,7 @@
#include <linux/memremap.h>
#include <linux/mm.h>
#include <linux/mutex.h>
#include <linux/of_iommu.h>
#include <linux/pci.h>
#include <linux/scatterlist.h>
#include <linux/spinlock.h>
@@ -392,6 +393,8 @@ void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list)
if (!is_of_node(dev_iommu_fwspec_get(dev)->iommu_fwnode))
iort_iommu_get_resv_regions(dev, list);
if (dev->of_node)
of_iommu_get_resv_regions(dev, list);
}
EXPORT_SYMBOL(iommu_dma_get_resv_regions);
@@ -776,6 +779,7 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
order_size = 1U << order;
if (order_mask > order_size)
alloc_flags |= __GFP_NORETRY;
trace_android_vh_adjust_alloc_flags(order, &alloc_flags);
page = alloc_pages_node(nid, alloc_flags, order);
if (!page)
continue;


@@ -30,6 +30,7 @@
#include <linux/cc_platform.h>
#include <trace/events/iommu.h>
#include <linux/sched/mm.h>
#include <trace/hooks/iommu.h>
#include "dma-iommu.h"
@@ -223,7 +224,8 @@ int iommu_device_register(struct iommu_device *iommu,
* already the de-facto behaviour, since any possible combination of
* existing drivers would compete for at least the PCI or platform bus.
*/
if (iommu_buses[0]->iommu_ops && iommu_buses[0]->iommu_ops != ops)
if (iommu_buses[0]->iommu_ops && iommu_buses[0]->iommu_ops != ops
&& !trace_android_vh_bus_iommu_probe_enabled())
return -EBUSY;
iommu->ops = ops;
@@ -235,6 +237,11 @@ int iommu_device_register(struct iommu_device *iommu,
spin_unlock(&iommu_device_lock);
for (int i = 0; i < ARRAY_SIZE(iommu_buses) && !err; i++) {
bool skip = false;
trace_android_vh_bus_iommu_probe(iommu, iommu_buses[i], &skip);
if (skip)
continue;
iommu_buses[i]->iommu_ops = ops;
err = bus_iommu_probe(iommu_buses[i]);
}


@@ -11,6 +11,7 @@
#include <linux/module.h>
#include <linux/msi.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_iommu.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
@@ -172,3 +173,98 @@ const struct iommu_ops *of_iommu_configure(struct device *dev,
return ops;
}
static enum iommu_resv_type __maybe_unused
iommu_resv_region_get_type(struct device *dev,
struct resource *phys,
phys_addr_t start, size_t length)
{
phys_addr_t end = start + length - 1;
/*
* IOMMU regions without an associated physical region cannot be
* mapped and are simply reservations.
*/
if (phys->start >= phys->end)
return IOMMU_RESV_RESERVED;
/* may be IOMMU_RESV_DIRECT_RELAXABLE for certain cases */
if (start == phys->start && end == phys->end)
return IOMMU_RESV_DIRECT;
dev_warn(dev, "treating non-direct mapping [%pr] -> [%pap-%pap] as reservation\n", &phys,
&start, &end);
return IOMMU_RESV_RESERVED;
}
/**
* of_iommu_get_resv_regions - reserved region driver helper for device tree
* @dev: device for which to get reserved regions
* @list: reserved region list
*
* IOMMU drivers can use this to implement their .get_resv_regions() callback
* for memory regions attached to a device tree node. See the reserved-memory
* device tree bindings on how to use these:
*
* Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
*/
void of_iommu_get_resv_regions(struct device *dev, struct list_head *list)
{
#if IS_ENABLED(CONFIG_OF_ADDRESS)
struct of_phandle_iterator it;
int err;
of_for_each_phandle(&it, err, dev->of_node, "memory-region", NULL, 0) {
const __be32 *maps, *end;
struct resource phys;
int size;
memset(&phys, 0, sizeof(phys));
/*
* The "reg" property is optional and can be omitted by reserved-memory regions
* that represent reservations in the IOVA space, which are regions that should
* not be mapped.
*/
if (of_find_property(it.node, "reg", NULL)) {
err = of_address_to_resource(it.node, 0, &phys);
if (err < 0) {
dev_err(dev, "failed to parse memory region %pOF: %d\n",
it.node, err);
continue;
}
}
maps = of_get_property(it.node, "iommu-addresses", &size);
if (!maps)
continue;
end = maps + size / sizeof(__be32);
while (maps < end) {
struct device_node *np;
u32 phandle;
phandle = be32_to_cpup(maps++);
np = of_find_node_by_phandle(phandle);
if (np == dev->of_node) {
int prot = IOMMU_READ | IOMMU_WRITE;
struct iommu_resv_region *region;
enum iommu_resv_type type;
phys_addr_t iova;
size_t length;
maps = of_translate_dma_region(np, maps, &iova, &length);
type = iommu_resv_region_get_type(dev, &phys, iova, length);
region = iommu_alloc_resv_region(iova, length, prot, type,
GFP_KERNEL);
if (region)
list_add_tail(&region->list, list);
}
}
}
#endif
}
EXPORT_SYMBOL(of_iommu_get_resv_regions);


@@ -19,20 +19,24 @@
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/llist.h>
#include <linux/mm.h>
#include <linux/proc_fs.h>
#include <linux/profile.h>
#include <linux/rtmutex.h>
#include <linux/sched/cputime.h>
#include <linux/seq_file.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/spinlock_types.h>
#define UID_HASH_BITS 10
#define UID_HASH_NUMS (1 << UID_HASH_BITS)
DECLARE_HASHTABLE(hash_table, UID_HASH_BITS);
/*
* uid_lock[bkt] ensures the consistency of hash_table[bkt]
*/
spinlock_t uid_lock[UID_HASH_NUMS];
static DEFINE_RT_MUTEX(uid_lock);
static struct proc_dir_entry *cpu_parent;
static struct proc_dir_entry *io_parent;
static struct proc_dir_entry *proc_parent;
@@ -77,6 +81,32 @@ struct uid_entry {
#endif
};
static inline int trylock_uid(uid_t uid)
{
return spin_trylock(
&uid_lock[hash_min(uid, HASH_BITS(hash_table))]);
}
static inline void lock_uid(uid_t uid)
{
spin_lock(&uid_lock[hash_min(uid, HASH_BITS(hash_table))]);
}
static inline void unlock_uid(uid_t uid)
{
spin_unlock(&uid_lock[hash_min(uid, HASH_BITS(hash_table))]);
}
static inline void lock_uid_by_bkt(u32 bkt)
{
spin_lock(&uid_lock[bkt]);
}
static inline void unlock_uid_by_bkt(u32 bkt)
{
spin_unlock(&uid_lock[bkt]);
}
static u64 compute_write_bytes(struct task_io_accounting *ioac)
{
if (ioac->write_bytes <= ioac->cancelled_write_bytes)
@@ -332,24 +362,29 @@ static int uid_cputime_show(struct seq_file *m, void *v)
struct user_namespace *user_ns = current_user_ns();
u64 utime;
u64 stime;
unsigned long bkt;
u32 bkt;
uid_t uid;
rt_mutex_lock(&uid_lock);
hash_for_each(hash_table, bkt, uid_entry, hash) {
for (bkt = 0, uid_entry = NULL; uid_entry == NULL &&
bkt < HASH_SIZE(hash_table); bkt++) {
lock_uid_by_bkt(bkt);
hlist_for_each_entry(uid_entry, &hash_table[bkt], hash) {
uid_entry->active_stime = 0;
uid_entry->active_utime = 0;
}
unlock_uid_by_bkt(bkt);
}
rcu_read_lock();
do_each_thread(temp, task) {
uid = from_kuid_munged(user_ns, task_uid(task));
lock_uid(uid);
if (!uid_entry || uid_entry->uid != uid)
uid_entry = find_or_register_uid(uid);
if (!uid_entry) {
rcu_read_unlock();
rt_mutex_unlock(&uid_lock);
unlock_uid(uid);
pr_err("%s: failed to find the uid_entry for uid %d\n",
__func__, uid);
return -ENOMEM;
@@ -360,10 +395,14 @@ static int uid_cputime_show(struct seq_file *m, void *v)
uid_entry->active_utime += utime;
uid_entry->active_stime += stime;
}
unlock_uid(uid);
} while_each_thread(temp, task);
rcu_read_unlock();
hash_for_each(hash_table, bkt, uid_entry, hash) {
for (bkt = 0, uid_entry = NULL; uid_entry == NULL &&
bkt < HASH_SIZE(hash_table); bkt++) {
lock_uid_by_bkt(bkt);
hlist_for_each_entry(uid_entry, &hash_table[bkt], hash) {
u64 total_utime = uid_entry->utime +
uid_entry->active_utime;
u64 total_stime = uid_entry->stime +
@@ -371,8 +410,9 @@ static int uid_cputime_show(struct seq_file *m, void *v)
seq_printf(m, "%d: %llu %llu\n", uid_entry->uid,
ktime_to_us(total_utime), ktime_to_us(total_stime));
}
unlock_uid_by_bkt(bkt);
}
rt_mutex_unlock(&uid_lock);
return 0;
}
@@ -420,9 +460,8 @@ static ssize_t uid_remove_write(struct file *file,
return -EINVAL;
}
rt_mutex_lock(&uid_lock);
for (; uid_start <= uid_end; uid_start++) {
lock_uid(uid_start);
hash_for_each_possible_safe(hash_table, uid_entry, tmp,
hash, (uid_t)uid_start) {
if (uid_start == uid_entry->uid) {
@@ -431,9 +470,9 @@ static ssize_t uid_remove_write(struct file *file,
kfree(uid_entry);
}
}
unlock_uid(uid_start);
}
rt_mutex_unlock(&uid_lock);
return count;
}
@@ -471,41 +510,59 @@ static void add_uid_io_stats(struct uid_entry *uid_entry,
__add_uid_io_stats(uid_entry, &task->ioac, slot);
}
static void update_io_stats_all_locked(void)
static void update_io_stats_all(void)
{
struct uid_entry *uid_entry = NULL;
struct task_struct *task, *temp;
struct user_namespace *user_ns = current_user_ns();
unsigned long bkt;
u32 bkt;
uid_t uid;
hash_for_each(hash_table, bkt, uid_entry, hash) {
for (bkt = 0, uid_entry = NULL; uid_entry == NULL && bkt < HASH_SIZE(hash_table);
bkt++) {
lock_uid_by_bkt(bkt);
hlist_for_each_entry(uid_entry, &hash_table[bkt], hash) {
memset(&uid_entry->io[UID_STATE_TOTAL_CURR], 0,
sizeof(struct io_stats));
set_io_uid_tasks_zero(uid_entry);
}
unlock_uid_by_bkt(bkt);
}
rcu_read_lock();
do_each_thread(temp, task) {
uid = from_kuid_munged(user_ns, task_uid(task));
lock_uid(uid);
if (!uid_entry || uid_entry->uid != uid)
uid_entry = find_or_register_uid(uid);
if (!uid_entry)
if (!uid_entry) {
unlock_uid(uid);
continue;
}
add_uid_io_stats(uid_entry, task, UID_STATE_TOTAL_CURR);
unlock_uid(uid);
} while_each_thread(temp, task);
rcu_read_unlock();
hash_for_each(hash_table, bkt, uid_entry, hash) {
for (bkt = 0, uid_entry = NULL; uid_entry == NULL && bkt < HASH_SIZE(hash_table);
bkt++) {
lock_uid_by_bkt(bkt);
hlist_for_each_entry(uid_entry, &hash_table[bkt], hash) {
compute_io_bucket_stats(&uid_entry->io[uid_entry->state],
&uid_entry->io[UID_STATE_TOTAL_CURR],
&uid_entry->io[UID_STATE_TOTAL_LAST],
&uid_entry->io[UID_STATE_DEAD_TASKS]);
compute_io_uid_tasks(uid_entry);
}
unlock_uid_by_bkt(bkt);
}
}
#ifndef CONFIG_UID_SYS_STATS_DEBUG
static void update_io_stats_uid(struct uid_entry *uid_entry)
#else
static void update_io_stats_uid_locked(struct uid_entry *uid_entry)
#endif
{
struct task_struct *task, *temp;
struct user_namespace *user_ns = current_user_ns();
@@ -533,13 +590,14 @@ static void update_io_stats_uid_locked(struct uid_entry *uid_entry)
static int uid_io_show(struct seq_file *m, void *v)
{
struct uid_entry *uid_entry;
unsigned long bkt;
u32 bkt;
rt_mutex_lock(&uid_lock);
update_io_stats_all();
for (bkt = 0, uid_entry = NULL; uid_entry == NULL && bkt < HASH_SIZE(hash_table);
bkt++) {
update_io_stats_all_locked();
hash_for_each(hash_table, bkt, uid_entry, hash) {
lock_uid_by_bkt(bkt);
hlist_for_each_entry(uid_entry, &hash_table[bkt], hash) {
seq_printf(m, "%d %llu %llu %llu %llu %llu %llu %llu %llu %llu %llu\n",
uid_entry->uid,
uid_entry->io[UID_STATE_FOREGROUND].rchar,
@@ -555,8 +613,9 @@ static int uid_io_show(struct seq_file *m, void *v)
show_io_uid_tasks(m, uid_entry);
}
unlock_uid_by_bkt(bkt);
}
rt_mutex_unlock(&uid_lock);
return 0;
}
@@ -584,6 +643,9 @@ static ssize_t uid_procstat_write(struct file *file,
uid_t uid;
int argc, state;
char input[128];
#ifndef CONFIG_UID_SYS_STATS_DEBUG
struct uid_entry uid_entry_tmp;
#endif
if (count >= sizeof(input))
return -EINVAL;
@@ -600,24 +662,51 @@ static ssize_t uid_procstat_write(struct file *file,
if (state != UID_STATE_BACKGROUND && state != UID_STATE_FOREGROUND)
return -EINVAL;
rt_mutex_lock(&uid_lock);
lock_uid(uid);
uid_entry = find_or_register_uid(uid);
if (!uid_entry) {
rt_mutex_unlock(&uid_lock);
unlock_uid(uid);
return -EINVAL;
}
if (uid_entry->state == state) {
rt_mutex_unlock(&uid_lock);
unlock_uid(uid);
return count;
}
#ifndef CONFIG_UID_SYS_STATS_DEBUG
/*
* update_io_stats_uid() would hold uid_lock for a long time because it
* calls do_each_thread() to compute uid_entry->io, which can cause
* lock contention.
*
* Copy the entry into uid_entry_tmp and run update_io_stats_uid() on
* the copy with uid_lock dropped, so the lock is not held across the
* expensive computation.
*/
uid_entry_tmp.uid = uid_entry->uid;
memcpy(uid_entry_tmp.io, uid_entry->io,
sizeof(struct io_stats) * UID_STATE_SIZE);
unlock_uid(uid);
update_io_stats_uid(&uid_entry_tmp);
lock_uid(uid);
hlist_for_each_entry(uid_entry, &hash_table[hash_min(uid, HASH_BITS(hash_table))], hash) {
if (uid_entry->uid == uid_entry_tmp.uid) {
memcpy(uid_entry->io, uid_entry_tmp.io,
sizeof(struct io_stats) * UID_STATE_SIZE);
uid_entry->state = state;
break;
}
}
unlock_uid(uid);
#else
update_io_stats_uid_locked(uid_entry);
uid_entry->state = state;
rt_mutex_unlock(&uid_lock);
unlock_uid(uid);
#endif
return count;
}
@@ -636,22 +725,21 @@ struct update_stats_work {
struct task_io_accounting ioac;
u64 utime;
u64 stime;
struct update_stats_work *next;
struct llist_node node;
};
static atomic_long_t work_usw;
static LLIST_HEAD(work_usw);
static void update_stats_workfn(struct work_struct *work)
{
struct update_stats_work *usw;
struct update_stats_work *usw, *t;
struct uid_entry *uid_entry;
struct task_entry *task_entry __maybe_unused;
struct llist_node *node;
rt_mutex_lock(&uid_lock);
while ((usw = (struct update_stats_work *)atomic_long_read(&work_usw))) {
if (atomic_long_cmpxchg(&work_usw, (long)usw, (long)(usw->next)) != (long)usw)
continue;
node = llist_del_all(&work_usw);
llist_for_each_entry_safe(usw, t, node, node) {
lock_uid(usw->uid);
uid_entry = find_uid_entry(usw->uid);
if (!uid_entry)
goto next;
@@ -668,12 +756,13 @@ static void update_stats_workfn(struct work_struct *work)
#endif
__add_uid_io_stats(uid_entry, &usw->ioac, UID_STATE_DEAD_TASKS);
next:
unlock_uid(usw->uid);
#ifdef CONFIG_UID_SYS_STATS_DEBUG
put_task_struct(usw->task);
#endif
kfree(usw);
}
rt_mutex_unlock(&uid_lock);
}
static DECLARE_WORK(update_stats_work, update_stats_workfn);
@@ -689,7 +778,7 @@ static int process_notifier(struct notifier_block *self,
return NOTIFY_OK;
uid = from_kuid_munged(current_user_ns(), task_uid(task));
if (!rt_mutex_trylock(&uid_lock)) {
if (!trylock_uid(uid)) {
struct update_stats_work *usw;
usw = kmalloc(sizeof(struct update_stats_work), GFP_KERNEL);
@@ -704,8 +793,7 @@ static int process_notifier(struct notifier_block *self,
*/
usw->ioac = task->ioac;
task_cputime_adjusted(task, &usw->utime, &usw->stime);
usw->next = (struct update_stats_work *)atomic_long_xchg(&work_usw,
(long)usw);
llist_add(&usw->node, &work_usw);
schedule_work(&update_stats_work);
}
return NOTIFY_OK;
@@ -724,7 +812,7 @@ static int process_notifier(struct notifier_block *self,
add_uid_io_stats(uid_entry, task, UID_STATE_DEAD_TASKS);
exit:
rt_mutex_unlock(&uid_lock);
unlock_uid(uid);
return NOTIFY_OK;
}
@@ -732,9 +820,18 @@ static struct notifier_block process_notifier_block = {
.notifier_call = process_notifier,
};
static void init_hash_table_and_lock(void)
{
int i;
hash_init(hash_table);
for (i = 0; i < UID_HASH_NUMS; i++)
spin_lock_init(&uid_lock[i]);
}
static int __init proc_uid_sys_stats_init(void)
{
hash_init(hash_table);
init_hash_table_and_lock();
cpu_parent = proc_mkdir("uid_cputime", NULL);
if (!cpu_parent) {


@@ -2183,6 +2183,8 @@ static int ravb_close(struct net_device *ndev)
of_phy_deregister_fixed_link(np);
}
cancel_work_sync(&priv->work);
if (info->multi_irqs) {
free_irq(priv->tx_irqs[RAVB_NC], ndev);
free_irq(priv->rx_irqs[RAVB_NC], ndev);
@@ -2907,8 +2909,6 @@ static int ravb_remove(struct platform_device *pdev)
clk_disable_unprepare(priv->gptp_clk);
clk_disable_unprepare(priv->refclk);
dma_free_coherent(ndev->dev.parent, priv->desc_bat_size, priv->desc_bat,
priv->desc_bat_dma);
/* Set reset mode */
ravb_write(ndev, CCC_OPC_RESET, CCC);
unregister_netdev(ndev);
@@ -2916,6 +2916,8 @@ static int ravb_remove(struct platform_device *pdev)
netif_napi_del(&priv->napi[RAVB_NC]);
netif_napi_del(&priv->napi[RAVB_BE]);
ravb_mdio_release(priv);
dma_free_coherent(ndev->dev.parent, priv->desc_bat_size, priv->desc_bat,
priv->desc_bat_dma);
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
reset_control_assert(priv->rstc);


@@ -626,6 +626,47 @@ u64 of_translate_dma_address(struct device_node *dev, const __be32 *in_addr)
}
EXPORT_SYMBOL(of_translate_dma_address);
/**
* of_translate_dma_region - Translate device tree address and size tuple
* @dev: device tree node for which to translate
* @prop: pointer into array of cells
* @start: return value for the start of the DMA range
* @length: return value for the length of the DMA range
*
* Returns a pointer to the cell immediately following the translated DMA region.
*/
const __be32 *of_translate_dma_region(struct device_node *dev, const __be32 *prop,
phys_addr_t *start, size_t *length)
{
struct device_node *parent;
u64 address, size;
int na, ns;
parent = __of_get_dma_parent(dev);
if (!parent)
return NULL;
na = of_bus_n_addr_cells(parent);
ns = of_bus_n_size_cells(parent);
of_node_put(parent);
address = of_translate_dma_address(dev, prop);
if (address == OF_BAD_ADDR)
return NULL;
size = of_read_number(prop + na, ns);
if (start)
*start = address;
if (length)
*length = size;
return prop + na + ns;
}
EXPORT_SYMBOL(of_translate_dma_region);
const __be32 *__of_get_address(struct device_node *dev, int index, int bar_no,
u64 *size, unsigned int *flags)
{

View File

@@ -285,6 +285,16 @@ void __init fdt_init_reserved_mem(void)
else
memblock_phys_free(rmem->base,
rmem->size);
} else {
phys_addr_t end = rmem->base + rmem->size - 1;
bool reusable =
(of_get_flat_dt_prop(node, "reusable", NULL)) != NULL;
pr_info("%pa..%pa ( %lu KB ) %s %s %s\n",
&rmem->base, &end, (unsigned long)(rmem->size / SZ_1K),
nomap ? "nomap" : "map",
reusable ? "reusable" : "non-reusable",
rmem->name ? rmem->name : "unknown");
}
}
}

View File

@@ -479,19 +479,14 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
if (ret)
goto err_free_msi;
if (dw_pcie_link_up(pci)) {
dw_pcie_print_link_status(pci);
} else {
if (!dw_pcie_link_up(pci)) {
ret = dw_pcie_start_link(pci);
if (ret)
goto err_free_msi;
}
if (pci->ops && pci->ops->start_link) {
ret = dw_pcie_wait_for_link(pci);
if (ret)
goto err_stop_link;
}
}
/* Ignore errors, the link may come up later */
dw_pcie_wait_for_link(pci);
bridge->sysdata = pp;

View File

@@ -434,20 +434,9 @@ void dw_pcie_disable_atu(struct dw_pcie *pci, u32 dir, int index)
dw_pcie_writel_atu(pci, dir, index, PCIE_ATU_REGION_CTRL2, 0);
}
void dw_pcie_print_link_status(struct dw_pcie *pci)
{
u32 offset, val;
offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
val = dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKSTA);
dev_info(pci->dev, "PCIe Gen.%u x%u link up\n",
FIELD_GET(PCI_EXP_LNKSTA_CLS, val),
FIELD_GET(PCI_EXP_LNKSTA_NLW, val));
}
int dw_pcie_wait_for_link(struct dw_pcie *pci)
{
u32 offset, val;
int retries;
/* Check if the link is up or not */
@@ -463,7 +452,12 @@ int dw_pcie_wait_for_link(struct dw_pcie *pci)
return -ETIMEDOUT;
}
dw_pcie_print_link_status(pci);
offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
val = dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKSTA);
dev_info(pci->dev, "PCIe Gen.%u x%u link up\n",
FIELD_GET(PCI_EXP_LNKSTA_CLS, val),
FIELD_GET(PCI_EXP_LNKSTA_NLW, val));
return 0;
}

View File

@@ -351,7 +351,6 @@ int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, u8 func_no, int index,
void dw_pcie_disable_atu(struct dw_pcie *pci, u32 dir, int index);
void dw_pcie_setup(struct dw_pcie *pci);
void dw_pcie_iatu_detect(struct dw_pcie *pci);
void dw_pcie_print_link_status(struct dw_pcie *pci);
static inline void dw_pcie_writel_dbi(struct dw_pcie *pci, u32 reg, u32 val)
{

View File

@@ -523,7 +523,10 @@ static int dw8250_probe(struct platform_device *pdev)
if (!regs)
return dev_err_probe(dev, -EINVAL, "no registers defined\n");
irq = platform_get_irq(pdev, 0);
irq = platform_get_irq_optional(pdev, 0);
/* no interrupt -> fall back to polling */
if (irq == -ENXIO)
irq = 0;
if (irq < 0)
return irq;

View File

@@ -295,7 +295,6 @@ static inline void ufshcd_add_delay_before_dme_cmd(struct ufs_hba *hba);
static int ufshcd_host_reset_and_restore(struct ufs_hba *hba);
static void ufshcd_resume_clkscaling(struct ufs_hba *hba);
static void ufshcd_suspend_clkscaling(struct ufs_hba *hba);
static void __ufshcd_suspend_clkscaling(struct ufs_hba *hba);
static int ufshcd_scale_clks(struct ufs_hba *hba, bool scale_up);
static irqreturn_t ufshcd_intr(int irq, void *__hba);
static int ufshcd_change_power_mode(struct ufs_hba *hba,
@@ -1418,9 +1417,10 @@ static void ufshcd_clk_scaling_suspend_work(struct work_struct *work)
return;
}
hba->clk_scaling.is_suspended = true;
hba->clk_scaling.window_start_t = 0;
spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
__ufshcd_suspend_clkscaling(hba);
devfreq_suspend_device(hba->devfreq);
}
static void ufshcd_clk_scaling_resume_work(struct work_struct *work)
@@ -1465,6 +1465,13 @@ static int ufshcd_devfreq_target(struct device *dev,
return 0;
}
/* Skip scaling clock when clock scaling is suspended */
if (hba->clk_scaling.is_suspended) {
spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
dev_warn(hba->dev, "clock scaling is suspended, skip");
return 0;
}
if (!hba->clk_scaling.active_reqs)
sched_clk_scaling_suspend_work = true;
@@ -1496,7 +1503,7 @@ static int ufshcd_devfreq_target(struct device *dev,
ktime_to_us(ktime_sub(ktime_get(), start)), ret);
out:
if (sched_clk_scaling_suspend_work)
if (sched_clk_scaling_suspend_work && !scale_up)
queue_work(hba->clk_scaling.workq,
&hba->clk_scaling.suspend_work);
@@ -1602,16 +1609,6 @@ static void ufshcd_devfreq_remove(struct ufs_hba *hba)
dev_pm_opp_remove(hba->dev, clki->max_freq);
}
static void __ufshcd_suspend_clkscaling(struct ufs_hba *hba)
{
unsigned long flags;
devfreq_suspend_device(hba->devfreq);
spin_lock_irqsave(hba->host->host_lock, flags);
hba->clk_scaling.window_start_t = 0;
spin_unlock_irqrestore(hba->host->host_lock, flags);
}
static void ufshcd_suspend_clkscaling(struct ufs_hba *hba)
{
unsigned long flags;
@@ -1624,11 +1621,12 @@ static void ufshcd_suspend_clkscaling(struct ufs_hba *hba)
if (!hba->clk_scaling.is_suspended) {
suspend = true;
hba->clk_scaling.is_suspended = true;
hba->clk_scaling.window_start_t = 0;
}
spin_unlock_irqrestore(hba->host->host_lock, flags);
if (suspend)
__ufshcd_suspend_clkscaling(hba);
devfreq_suspend_device(hba->devfreq);
}
static void ufshcd_resume_clkscaling(struct ufs_hba *hba)
@@ -7859,6 +7857,20 @@ static int ufshcd_eh_host_reset_handler(struct scsi_cmnd *cmd)
hba = shost_priv(cmd->device->host);
/*
* If runtime PM sent SSU and timed out, scsi_error_handler is stuck
* in this function waiting on flush_work(&hba->eh_work), while
* ufshcd_err_handler (eh_work) is stuck waiting for runtime PM to
* become active. Calling ufshcd_link_recovery instead of scheduling
* eh_work prevents this deadlock.
*/
if (hba->pm_op_in_progress) {
if (ufshcd_link_recovery(hba))
err = FAILED;
return err;
}
spin_lock_irqsave(hba->host->host_lock, flags);
hba->force_reset = true;
ufshcd_schedule_eh_work(hba);

View File

@@ -3103,6 +3103,48 @@ static int hub_port_reset(struct usb_hub *hub, int port1,
return status;
}
/*
* hub_port_stop_enumerate - stop USB enumeration or ignore port events
* @hub: target hub
* @port1: port num of the port
* @retries: port retries number of hub_port_init()
*
* Return:
* true: ignore port actions/events or give up connection attempts.
* false: keep original behavior.
*
* Based on the retry count, this function checks whether a port marked
* with the early_stop attribute should stop enumeration or ignore events.
*
* Note:
* This function changes nothing if early_stop is not set, and it prevents
* all further connection attempts once early_stop is set and the port has
* already made more than one attempt.
*/
static bool hub_port_stop_enumerate(struct usb_hub *hub, int port1, int retries)
{
struct usb_port *port_dev = hub->ports[port1 - 1];
if (port_dev->early_stop) {
if (port_dev->ignore_event)
return true;
/*
* We want unsuccessful attempts to fail quickly.
* Since some devices may need one failure during
* port initialization, we allow two tries but no
* more.
*/
if (retries < 2)
return false;
port_dev->ignore_event = 1;
} else
port_dev->ignore_event = 0;
return port_dev->ignore_event;
}
/* Check if a port is power on */
int usb_port_is_power_on(struct usb_hub *hub, unsigned int portstatus)
{
@@ -4897,6 +4939,11 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
do_new_scheme = use_new_scheme(udev, retry_counter, port_dev);
for (retries = 0; retries < GET_DESCRIPTOR_TRIES; (++retries, msleep(100))) {
if (hub_port_stop_enumerate(hub, port1, retries)) {
retval = -ENODEV;
break;
}
if (do_new_scheme) {
retval = hub_enable_device(udev);
if (retval < 0) {
@@ -5323,6 +5370,11 @@ static void hub_port_connect(struct usb_hub *hub, int port1, u16 portstatus,
status = 0;
for (i = 0; i < PORT_INIT_TRIES; i++) {
if (hub_port_stop_enumerate(hub, port1, i)) {
status = -ENODEV;
break;
}
usb_lock_port(port_dev);
mutex_lock(hcd->address0_mutex);
retry_locked = true;
@@ -5687,6 +5739,10 @@ static void port_event(struct usb_hub *hub, int port1)
if (!pm_runtime_active(&port_dev->dev))
return;
/* skip port actions if ignore_event and early_stop are true */
if (port_dev->ignore_event && port_dev->early_stop)
return;
if (hub_handle_remote_wakeup(hub, port1, portstatus, portchange))
connect_change = 1;
@@ -6010,6 +6066,10 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
mutex_lock(hcd->address0_mutex);
for (i = 0; i < PORT_INIT_TRIES; ++i) {
if (hub_port_stop_enumerate(parent_hub, port1, i)) {
ret = -ENODEV;
break;
}
/* ep0 maxpacket size may change; let the HCD know about it.
* Other endpoints will be handled by re-enumeration. */

View File

@@ -92,6 +92,8 @@ struct usb_hub {
* @is_superspeed cache super-speed status
* @usb3_lpm_u1_permit: whether USB3 U1 LPM is permitted.
* @usb3_lpm_u2_permit: whether USB3 U2 LPM is permitted.
* @early_stop: whether port initialization will be stopped earlier.
* @ignore_event: whether events of the port are ignored.
*/
struct usb_port {
struct usb_device *child;
@@ -107,6 +109,8 @@ struct usb_port {
u32 over_current_count;
u8 portnum;
u32 quirks;
unsigned int early_stop:1;
unsigned int ignore_event:1;
unsigned int is_superspeed:1;
unsigned int usb3_lpm_u1_permit:1;
unsigned int usb3_lpm_u2_permit:1;

View File

@@ -17,6 +17,32 @@ static int usb_port_block_power_off;
static const struct attribute_group *port_dev_group[];
static ssize_t early_stop_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct usb_port *port_dev = to_usb_port(dev);
return sysfs_emit(buf, "%s\n", port_dev->early_stop ? "yes" : "no");
}
static ssize_t early_stop_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
struct usb_port *port_dev = to_usb_port(dev);
bool value;
if (kstrtobool(buf, &value))
return -EINVAL;
if (value)
port_dev->early_stop = 1;
else
port_dev->early_stop = 0;
return count;
}
static DEVICE_ATTR_RW(early_stop);
static ssize_t disable_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
@@ -247,6 +273,7 @@ static struct attribute *port_dev_attrs[] = {
&dev_attr_quirks.attr,
&dev_attr_over_current_count.attr,
&dev_attr_disable.attr,
&dev_attr_early_stop.attr,
NULL,
};

View File

@@ -1023,15 +1023,12 @@ static int f_midi_bind(struct usb_configuration *c, struct usb_function *f)
if (!f->fs_descriptors)
goto fail_f_midi;
if (gadget_is_dualspeed(c->cdev->gadget)) {
bulk_in_desc.wMaxPacketSize = cpu_to_le16(512);
bulk_out_desc.wMaxPacketSize = cpu_to_le16(512);
f->hs_descriptors = usb_copy_descriptors(midi_function);
if (!f->hs_descriptors)
goto fail_f_midi;
}
if (gadget_is_superspeed(c->cdev->gadget)) {
bulk_in_desc.wMaxPacketSize = cpu_to_le16(1024);
bulk_out_desc.wMaxPacketSize = cpu_to_le16(1024);
i = endpoint_descriptor_index;
@@ -1051,13 +1048,6 @@ static int f_midi_bind(struct usb_configuration *c, struct usb_function *f)
if (!f->ss_descriptors)
goto fail_f_midi;
if (gadget_is_superspeed_plus(c->cdev->gadget)) {
f->ssp_descriptors = usb_copy_descriptors(midi_function);
if (!f->ssp_descriptors)
goto fail_f_midi;
}
}
kfree(midi_function);
return 0;

View File

@@ -492,6 +492,7 @@ uvc_copy_descriptors(struct uvc_device *uvc, enum usb_device_speed speed)
void *mem;
switch (speed) {
case USB_SPEED_SUPER_PLUS:
case USB_SPEED_SUPER:
uvc_control_desc = uvc->desc.ss_control;
uvc_streaming_cls = uvc->desc.ss_streaming;
@@ -536,7 +537,8 @@ uvc_copy_descriptors(struct uvc_device *uvc, enum usb_device_speed speed)
+ uvc_control_ep.bLength + uvc_control_cs_ep.bLength
+ uvc_streaming_intf_alt0.bLength;
if (speed == USB_SPEED_SUPER) {
if (speed == USB_SPEED_SUPER ||
speed == USB_SPEED_SUPER_PLUS) {
bytes += uvc_ss_control_comp.bLength;
n_desc = 6;
} else {
@@ -580,7 +582,8 @@ uvc_copy_descriptors(struct uvc_device *uvc, enum usb_device_speed speed)
uvc_control_header->baInterfaceNr[0] = uvc->streaming_intf;
UVC_COPY_DESCRIPTOR(mem, dst, &uvc_control_ep);
if (speed == USB_SPEED_SUPER)
if (speed == USB_SPEED_SUPER
|| speed == USB_SPEED_SUPER_PLUS)
UVC_COPY_DESCRIPTOR(mem, dst, &uvc_ss_control_comp);
UVC_COPY_DESCRIPTOR(mem, dst, &uvc_control_cs_ep);
@@ -673,21 +676,13 @@ uvc_function_bind(struct usb_configuration *c, struct usb_function *f)
}
uvc->control_ep = ep;
if (gadget_is_superspeed(c->cdev->gadget))
ep = usb_ep_autoconfig_ss(cdev->gadget, &uvc_ss_streaming_ep,
&uvc_ss_streaming_comp);
else if (gadget_is_dualspeed(cdev->gadget))
ep = usb_ep_autoconfig(cdev->gadget, &uvc_hs_streaming_ep);
else
ep = usb_ep_autoconfig(cdev->gadget, &uvc_fs_streaming_ep);
if (!ep) {
uvcg_info(f, "Unable to allocate streaming EP\n");
goto error;
}
uvc->video.ep = ep;
uvc_fs_streaming_ep.bEndpointAddress = uvc->video.ep->address;
uvc_hs_streaming_ep.bEndpointAddress = uvc->video.ep->address;
uvc_ss_streaming_ep.bEndpointAddress = uvc->video.ep->address;
@@ -726,21 +721,26 @@ uvc_function_bind(struct usb_configuration *c, struct usb_function *f)
f->fs_descriptors = NULL;
goto error;
}
if (gadget_is_dualspeed(cdev->gadget)) {
f->hs_descriptors = uvc_copy_descriptors(uvc, USB_SPEED_HIGH);
if (IS_ERR(f->hs_descriptors)) {
ret = PTR_ERR(f->hs_descriptors);
f->hs_descriptors = NULL;
goto error;
}
}
if (gadget_is_superspeed(c->cdev->gadget)) {
f->ss_descriptors = uvc_copy_descriptors(uvc, USB_SPEED_SUPER);
if (IS_ERR(f->ss_descriptors)) {
ret = PTR_ERR(f->ss_descriptors);
f->ss_descriptors = NULL;
goto error;
}
f->ssp_descriptors = uvc_copy_descriptors(uvc, USB_SPEED_SUPER_PLUS);
if (IS_ERR(f->ssp_descriptors)) {
ret = PTR_ERR(f->ssp_descriptors);
f->ssp_descriptors = NULL;
goto error;
}
/* Preallocate control endpoint request. */

View File

@@ -20,7 +20,6 @@ struct f_phonet_opts {
struct net_device *gphonet_setup_default(void);
void gphonet_set_gadget(struct net_device *net, struct usb_gadget *g);
int gphonet_register_netdev(struct net_device *net);
int phonet_bind_config(struct usb_configuration *c, struct net_device *dev);
void gphonet_cleanup(struct net_device *dev);
#endif /* __U_PHONET_H */

View File

@@ -538,16 +538,20 @@ static int gs_alloc_requests(struct usb_ep *ep, struct list_head *head,
static int gs_start_io(struct gs_port *port)
{
struct list_head *head = &port->read_pool;
struct usb_ep *ep = port->port_usb->out;
struct usb_ep *ep;
int status;
unsigned started;
if (!port->port_usb || !port->port.tty)
return -EIO;
/* Allocate RX and TX I/O buffers. We can't easily do this much
* earlier (with GFP_KERNEL) because the requests are coupled to
* endpoints, as are the packet sizes we'll be using. Different
* configurations may use different endpoints with a given port;
* and high speed vs full speed changes packet sizes too.
*/
ep = port->port_usb->out;
status = gs_alloc_requests(ep, head, gs_read_complete,
&port->read_allocated);
if (status)

View File

@@ -71,8 +71,4 @@ void gserial_disconnect(struct gserial *);
void gserial_suspend(struct gserial *p);
void gserial_resume(struct gserial *p);
/* functions are bound to configurations by a config or gadget driver */
int gser_bind_config(struct usb_configuration *c, u8 port_num);
int obex_bind_config(struct usb_configuration *c, u8 port_num);
#endif /* __U_SERIAL_H */

View File

@@ -176,8 +176,6 @@ struct uvc_file_handle {
*/
extern void uvc_function_setup_continue(struct uvc_device *uvc);
extern void uvc_endpoint_stream(struct uvc_device *dev);
extern void uvc_function_connect(struct uvc_device *uvc);
extern void uvc_function_disconnect(struct uvc_device *uvc);

View File

@@ -382,13 +382,13 @@ static void uvcg_video_pump(struct work_struct *work)
{
struct uvc_video *video = container_of(work, struct uvc_video, pump);
struct uvc_video_queue *queue = &video->queue;
/* video->max_payload_size is only set when using bulk transfer */
bool is_bulk = video->max_payload_size;
struct usb_request *req = NULL;
struct uvc_buffer *buf;
unsigned long flags;
bool buf_done;
int ret;
bool buf_int;
/* video->max_payload_size is only set when using bulk transfer */
bool is_bulk = video->max_payload_size;
while (video->ep->enabled) {
/*
@@ -414,20 +414,19 @@ static void uvcg_video_pump(struct work_struct *work)
if (buf != NULL) {
video->encode(req, video, buf);
/* Always interrupt for the last request of a video buffer */
buf_int = buf->state == UVC_BUF_STATE_DONE;
buf_done = buf->state == UVC_BUF_STATE_DONE;
} else if (!(queue->flags & UVC_QUEUE_DISCONNECTED) && !is_bulk) {
/*
* No video buffer available; the queue is still connected and
* we're traferring over ISOC. Queue a 0 length request to
* we're transferring over ISOC. Queue a 0 length request to
* prevent missed ISOC transfers.
*/
req->length = 0;
buf_int = false;
buf_done = false;
} else {
/*
* Either queue has been disconnected or no video buffer
* available to bulk transfer. Either way, stop processing
* Either the queue has been disconnected or no video buffer
* available for bulk transfer. Either way, stop processing
* further.
*/
spin_unlock_irqrestore(&queue->irqlock, flags);
@@ -435,11 +434,24 @@ static void uvcg_video_pump(struct work_struct *work)
}
/*
* With usb3 we have more requests. This will decrease the
* interrupt load to a quarter but also catches the corner
* cases, which needs to be handled.
* With USB3 handling more requests at a higher speed, we can't
* afford to generate an interrupt for every request. Decide to
* interrupt:
*
* - When no more requests are available in the free queue, as
* this may be our last chance to refill the endpoint's
* request queue.
*
* - When this is the last request for the video
* buffer, as we want to start sending the next video buffer
* ASAP in case it doesn't get started already in the next
* iteration of this loop.
*
* - Four times over the length of the requests queue (as
* indicated by video->uvc_num_requests), as a trade-off
* between latency and interrupt load.
*/
if (list_empty(&video->req_free) || buf_int ||
if (list_empty(&video->req_free) || buf_done ||
!(video->req_int_count %
DIV_ROUND_UP(video->uvc_num_requests, 4))) {
video->req_int_count = 0;

View File

@@ -1099,16 +1099,12 @@ EXPORT_SYMBOL_GPL(usb_gadget_set_state);
/* ------------------------------------------------------------------------- */
/* Acquire connect_lock before calling this function. */
static int usb_udc_connect_control_locked(struct usb_udc *udc) __must_hold(&udc->connect_lock)
static void usb_udc_connect_control_locked(struct usb_udc *udc) __must_hold(&udc->connect_lock)
{
int ret;
if (udc->vbus)
ret = usb_gadget_connect_locked(udc->gadget);
usb_gadget_connect_locked(udc->gadget);
else
ret = usb_gadget_disconnect_locked(udc->gadget);
return ret;
usb_gadget_disconnect_locked(udc->gadget);
}
static void vbus_event_work(struct work_struct *work)
@@ -1582,21 +1578,12 @@ static int gadget_bind_driver(struct device *dev)
}
usb_gadget_enable_async_callbacks(udc);
udc->allow_connect = true;
ret = usb_udc_connect_control_locked(udc);
if (ret)
goto err_connect_control;
usb_udc_connect_control_locked(udc);
mutex_unlock(&udc->connect_lock);
kobject_uevent(&udc->dev.kobj, KOBJ_CHANGE);
return 0;
err_connect_control:
usb_gadget_disable_async_callbacks(udc);
if (gadget->irq)
synchronize_irq(gadget->irq);
usb_gadget_udc_stop_locked(udc);
err_start:
driver->unbind(udc->gadget);

View File

@@ -302,6 +302,10 @@ static int dp_altmode_vdm(struct typec_altmode *alt,
case CMD_EXIT_MODE:
dp->data.status = 0;
dp->data.conf = 0;
if (dp->hpd) {
dp->hpd = false;
sysfs_notify(&dp->alt->dev.kobj, "displayport", "hpd");
}
break;
case DP_CMD_STATUS_UPDATE:
dp->data.status = *vdo;

View File

@@ -156,7 +156,20 @@ EXPORT_SYMBOL_GPL(typec_altmode_exit);
*/
void typec_altmode_attention(struct typec_altmode *adev, u32 vdo)
{
struct typec_altmode *pdev = &to_altmode(adev)->partner->adev;
struct altmode *partner = to_altmode(adev)->partner;
struct typec_altmode *pdev;
/*
* If partner is NULL then a NULL pointer error occurs when
* dereferencing pdev and its operations. The original upstream commit
* changes the return type so the tcpm can log when this occurs, but
* due to KMI restrictions we can only silently prevent the error for
* now.
*/
if (!partner)
return;
pdev = &partner->adev;
if (pdev->ops && pdev->ops->attention)
pdev->ops->attention(pdev, vdo);

View File

@@ -785,6 +785,7 @@ static void ucsi_handle_connector_change(struct work_struct *work)
if (ret < 0) {
dev_err(ucsi->dev, "%s: GET_CONNECTOR_STATUS failed (%d)\n",
__func__, ret);
clear_bit(EVENT_PENDING, &con->ucsi->flags);
goto out_unlock;
}

View File

@@ -54,4 +54,6 @@ source "drivers/virt/coco/sev-guest/Kconfig"
source "drivers/virt/gunyah/Kconfig"
source "drivers/virt/geniezone/Kconfig"
endif

View File

@@ -0,0 +1,16 @@
# SPDX-License-Identifier: GPL-2.0-only
config MTK_GZVM
tristate "GenieZone Hypervisor driver for guest VM operation"
depends on ARM64
help
This driver, gzvm, enables running guest VMs on the MTK GenieZone
hypervisor. It exports KVM-like interfaces for a VMM (e.g., crosvm)
to operate guest VMs on the GenieZone hypervisor.
The GenieZone hypervisor currently supports only MediaTek SoCs and
the arm64 architecture.
Select M if you want it to be built as a module (gzvm.ko).
If unsure, say N.

View File

@@ -0,0 +1,12 @@
# SPDX-License-Identifier: GPL-2.0-only
#
# Makefile for the GenieZone driver; this file should be included from the
# arch Makefile to avoid two .ko files being generated.
#
GZVM_DIR ?= ../../../drivers/virt/geniezone
gzvm-y := $(GZVM_DIR)/gzvm_main.o $(GZVM_DIR)/gzvm_vm.o \
$(GZVM_DIR)/gzvm_vcpu.o $(GZVM_DIR)/gzvm_irqfd.o \
$(GZVM_DIR)/gzvm_ioeventfd.o $(GZVM_DIR)/gzvm_mmu.o \
$(GZVM_DIR)/gzvm_exception.o

View File

@@ -0,0 +1,12 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2023 MediaTek Inc.
*/
#ifndef __GZ_COMMON_H__
#define __GZ_COMMON_H__
int gzvm_irqchip_inject_irq(struct gzvm *gzvm, unsigned int vcpu_idx,
u32 irq, bool level);
#endif /* __GZ_COMMON_H__ */

View File

@@ -0,0 +1,39 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2023 MediaTek Inc.
*/
#include <linux/device.h>
#include <linux/gzvm_drv.h>
/**
* gzvm_handle_guest_exception() - Handle guest exception
* @vcpu: Pointer to the struct gzvm_vcpu being handled
*
* Return:
* * true - This exception has been processed; no need to return to the VMM.
* * false - This exception has not been processed; userspace handling is required.
*/
bool gzvm_handle_guest_exception(struct gzvm_vcpu *vcpu)
{
int ret;
for (int i = 0; i < ARRAY_SIZE(vcpu->run->exception.reserved); i++) {
if (vcpu->run->exception.reserved[i])
return -EINVAL;
}
switch (vcpu->run->exception.exception) {
case GZVM_EXCEPTION_PAGE_FAULT:
ret = gzvm_handle_page_fault(vcpu);
break;
case GZVM_EXCEPTION_UNKNOWN:
fallthrough;
default:
ret = -EFAULT;
}
if (!ret)
return true;
else
return false;
}
