Merge android-5.4.5 (9cdc723) into msm-5.4
* refs/heads/tmp-9cdc723: Revert "usb: dwc3: gadget: Fix logical condition" Revert "FROMLIST: scsi: ufs-qcom: Adjust bus bandwidth voting and unvoting" Linux 5.4.5 r8169: add missing RX enabling for WoL on RTL8125 net: mscc: ocelot: unregister the PTP clock on deinit ionic: keep users rss hash across lif reset xdp: obtain the mem_id mutex before trying to remove an entry. page_pool: do not release pool until inflight == 0. net/mlx5e: ethtool, Fix analysis of speed setting net/mlx5e: Fix translation of link mode into speed net/mlx5e: Fix freeing flow with kfree() and not kvfree() net/mlx5e: Fix SFF 8472 eeprom length act_ct: support asymmetric conntrack net/mlx5e: Fix TXQ indices to be sequential net: Fixed updating of ethertype in skb_mpls_push() hsr: fix a NULL pointer dereference in hsr_dev_xmit() Fixed updating of ethertype in function skb_mpls_pop gre: refetch erspan header from skb->data after pskb_may_pull() cls_flower: Fix the behavior using port ranges with hw-offload net: sched: allow indirect blocks to bind to clsact in TC net: core: rename indirect block ingress cb function tcp: Protect accesses to .ts_recent_stamp with {READ,WRITE}_ONCE() tcp: tighten acceptance of ACKs not matching a child socket tcp: fix rejected syncookies due to stale timestamps net: ipv6_stub: use ip6_dst_lookup_flow instead of ip6_dst_lookup net: ipv6: add net argument to ip6_dst_lookup_flow net/mlx5e: Query global pause state before setting prio2buffer tipc: fix ordering of tipc module init and exit routine tcp: md5: fix potential overestimation of TCP option space openvswitch: support asymmetric conntrack net/tls: Fix return values to avoid ENOTSUPP net: thunderx: start phy before starting autonegotiation net_sched: validate TCA_KIND attribute in tc_chain_tmplt_add() net: sched: fix dump qlen for sch_mq/sch_mqprio with NOLOCK subqueues net: ethernet: ti: cpsw: fix extra rx interrupt net: dsa: fix flow dissection on Tx path net: bridge: deny dev_set_mac_address() when unregistering mqprio: Fix out-of-bounds access in mqprio_dump inet: protect against too small mtu values. 
ANDROID: add initial ABI whitelist for android-5.4 ANDROID: abi update for 5.4.4 ANDROID: mm: Throttle rss_stat tracepoint FROMLIST: vsprintf: Inline call to ptr_to_hashval UPSTREAM: rss_stat: Add support to detect RSS updates of external mm UPSTREAM: mm: emit tracepoint when RSS changes Linux 5.4.4 EDAC/ghes: Do not warn when incrementing refcount on 0 r8169: fix rtl_hw_jumbo_disable for RTL8168evl workqueue: Fix missing kfree(rescuer) in destroy_workqueue() blk-mq: make sure that line break can be printed ext4: fix leak of quota reservations ext4: fix a bug in ext4_wait_for_tail_page_commit splice: only read in as much information as there is pipe buffer space rtc: disable uie before setting time and enable after USB: dummy-hcd: increase max number of devices to 32 powerpc: Define arch_is_kernel_initmem_freed() for lockdep mm/shmem.c: cast the type of unmap_start to u64 s390/kaslr: store KASLR offset for early dumps s390/smp,vdso: fix ASCE handling firmware: qcom: scm: Ensure 'a0' status code is treated as signed ext4: work around deleting a file with i_nlink == 0 safely mm: memcg/slab: wait for !root kmem_cache refcnt killing on root kmem_cache destruction mfd: rk808: Fix RK818 ID template mm, memfd: fix COW issue on MAP_PRIVATE and F_SEAL_FUTURE_WRITE mappings powerpc: Fix vDSO clock_getres() powerpc: Avoid clang warnings around setjmp and longjmp omap: pdata-quirks: remove openpandora quirks for mmc3 and wl1251 omap: pdata-quirks: revert pandora specific gpiod additions iio: ad7949: fix channels mixups iio: ad7949: kill pointless "readback"-handling code Revert "scsi: qla2xxx: Fix memory leak when sending I/O fails" scsi: qla2xxx: Fix a dma_pool_free() call scsi: qla2xxx: Fix SRB leak on switch command timeout reiserfs: fix extended attributes on the root directory ext4: Fix credit estimate for final inode freeing quota: fix livelock in dquot_writeback_dquots seccomp: avoid overflow in implicit constant conversion ext2: check err when partial != NULL quota: Check that quota is not dirty before release video/hdmi: Fix AVI bar unpack powerpc/xive: Skip ioremap() of ESB pages for LSI interrupts powerpc: Allow flush_icache_range to work across ranges >4GB powerpc/xive: Prevent page fault issues in the machine crash handler powerpc: Allow 64bit VDSO __kernel_sync_dicache to work across ranges >4GB coresight: Serialize enabling/disabling a link device. 
stm class: Lose the protocol driver when dropping its reference ppdev: fix PPGETTIME/PPSETTIME ioctls RDMA/core: Fix ib_dma_max_seg_size() ARM: dts: omap3-tao3530: Fix incorrect MMC card detection GPIO polarity mmc: host: omap_hsmmc: add code for special init of wl1251 to get rid of pandora_wl1251_init_card pinctrl: samsung: Fix device node refcount leaks in S3C64xx wakeup controller init pinctrl: samsung: Fix device node refcount leaks in init code pinctrl: samsung: Fix device node refcount leaks in S3C24xx wakeup controller init pinctrl: samsung: Fix device node refcount leaks in Exynos wakeup controller init pinctrl: samsung: Add of_node_put() before return in error path pinctrl: armada-37xx: Fix irq mask access in armada_37xx_irq_set_type() pinctrl: rza2: Fix gpio name typos ACPI: PM: Avoid attaching ACPI PM domain to certain devices ACPI: EC: Rework flushing of pending work ACPI: bus: Fix NULL pointer check in acpi_bus_get_private_data() ACPI: OSL: only free map once in osl.c ACPI / hotplug / PCI: Allocate resources directly under the non-hotplug bridge ACPI: LPSS: Add dmi quirk for skipping _DEP check for some device-links ACPI: LPSS: Add LNXVIDEO -> BYT I2C1 to lpss_device_links ACPI: LPSS: Add LNXVIDEO -> BYT I2C7 to lpss_device_links ACPI / utils: Move acpi_dev_get_first_match_dev() under CONFIG_ACPI ALSA: hda/realtek - Line-out jack doesn't work on a Dell AIO ALSA: oxfw: fix return value in error path of isochronous resources reservation ALSA: fireface: fix return value in error path of isochronous resources reservation cpufreq: powernv: fix stack bloat and hard limit on number of CPUs PM / devfreq: Lock devfreq in trans_stat_show intel_th: pci: Add Tiger Lake CPU support intel_th: pci: Add Ice Lake CPU support intel_th: Fix a double put_device() in error path powerpc/perf: Disable trace_imc pmu drm/panfrost: Open/close the perfcnt BO perf tests: Fix out of bounds memory access erofs: zero out when listxattr is called with no xattr cpuidle: use first valid target residency as poll time cpuidle: teo: Fix "early hits" handling for disabled idle states cpuidle: teo: Consider hits and misses metrics of disabled states cpuidle: teo: Rename local variable in teo_select() cpuidle: teo: Ignore disabled idle states that are too deep cpuidle: Do not unset the driver if it is there already media: cec.h: CEC_OP_REC_FLAG_ values were swapped media: radio: wl1273: fix interrupt masking on release media: bdisp: fix memleak on release media: vimc: sen: remove unused kthread_sen field media: hantro: Fix picture order count table enable media: hantro: Fix motion vectors usage condition media: hantro: Fix s_fmt for dynamic resolution changes s390/mm: properly clear _PAGE_NOEXEC bit when it is not supported ar5523: check NULL before memcpy() in ar5523_cmd() wil6210: check len before memcpy() calls cgroup: pids: use atomic64_t for pids->limit blk-mq: avoid sysfs buffer overflow with too many CPU cores md: improve handling of bio with REQ_PREFLUSH in md_flush_request() ASoC: fsl_audmix: Add spin lock to protect tdms ASoC: Jack: Fix NULL pointer dereference in snd_soc_jack_report ASoC: rt5645: Fixed typo for buddy jack support. ASoC: rt5645: Fixed buddy jack support. 
workqueue: Fix pwq ref leak in rescuer_thread() workqueue: Fix spurious sanity check failures in destroy_workqueue() dm zoned: reduce overhead of backing device checks dm writecache: handle REQ_FUA hwrng: omap - Fix RNG wait loop timeout ovl: relax WARN_ON() on rename to self ovl: fix corner case of non-unique st_dev;st_ino ovl: fix lookup failure on multi lower squashfs lib: raid6: fix awk build warnings rtlwifi: rtl8192de: Fix missing enable interrupt flag rtlwifi: rtl8192de: Fix missing callback that tests for hw release of buffer rtlwifi: rtl8192de: Fix missing code to retrieve RX buffer address btrfs: record all roots for rename exchange on a subvol Btrfs: send, skip backreference walking for extents with many references btrfs: Remove btrfs_bio::flags member btrfs: Avoid getting stuck during cyclic writebacks Btrfs: fix negative subv_writers counter and data space leak after buffered write Btrfs: fix metadata space leak on fixup worker failure to set range as delalloc btrfs: use refcount_inc_not_zero in kill_all_nodes btrfs: use btrfs_block_group_cache_done in update_block_group btrfs: check page->mapping when loading free space cache iwlwifi: pcie: fix support for transmitting SKBs with fraglist usb: typec: fix use after free in typec_register_port() phy: renesas: rcar-gen3-usb2: Fix sysfs interface of "role" usb: dwc3: ep0: Clear started flag on completion usb: dwc3: gadget: Clear started flag for non-IOC usb: dwc3: gadget: Fix logical condition usb: dwc3: pci: add ID for the Intel Comet Lake -H variant virtio-balloon: fix managed page counts when migrating pages between zones virt_wifi: fix use-after-free in virt_wifi_newlink() mtd: rawnand: Change calculating of position page containing BBM mtd: spear_smi: Fix Write Burst mode brcmfmac: disable PCIe interrupts before bus reset EDAC/altera: Use fast register IO for S10 IRQs tpm: Switch to platform_get_irq_optional() tpm: add check after commands attribs tab allocation usb: mon: Fix a deadlock in usbmon between mmap and read usb: core: urb: fix URB structure initialization function USB: adutux: fix interface sanity check usb: roles: fix a potential use after free USB: serial: io_edgeport: fix epic endpoint lookup USB: idmouse: fix interface sanity checks USB: atm: ueagle-atm: add missing endpoint check iio: adc: ad7124: Enable internal reference iio: adc: ad7606: fix reading unnecessary data from device iio: imu: inv_mpu6050: fix temperature reporting using bad unit iio: humidity: hdc100x: fix IIO_HUMIDITYRELATIVE channel reporting iio: adis16480: Fix scales factors iio: imu: st_lsm6dsx: fix ODR check in st_lsm6dsx_write_raw iio: adis16480: Add debugfs_reg_access entry ARM: dts: pandora-common: define wl1251 as child node of mmc3 usb: common: usb-conn-gpio: Don't log an error on probe deferral interconnect: qcom: qcs404: Walk the list safely on node removal interconnect: qcom: sdm845: Walk the list safely on node removal xhci: make sure interrupts are restored to correct state xhci: handle some XHCI_TRUST_TX_LENGTH quirks cases as default behaviour. 
xhci: Increase STS_HALT timeout in xhci_suspend() xhci: fix USB3 device initiated resume race with roothub autosuspend xhci: Fix memory leak in xhci_add_in_port() usb: xhci: only set D3hot for pci device staging: gigaset: add endpoint-type sanity check staging: gigaset: fix illegal free on probe errors staging: gigaset: fix general protection fault on probe staging: vchiq: call unregister_chrdev_region() when driver registration fails staging: rtl8712: fix interface sanity check staging: rtl8188eu: fix interface sanity check staging: exfat: fix multiple definition error of `rename_file' binder: fix incorrect calculation for num_valid usb: host: xhci-tegra: Correct phy enable sequence usb: Allow USB device to be warm reset in suspended state USB: documentation: flags on usb-storage versus UAS USB: uas: heed CAPACITY_HEURISTICS USB: uas: honor flag to avoid CAPACITY16 media: venus: remove invalid compat_ioctl32 handler ceph: fix compat_ioctl for ceph_dir_operations compat_ioctl: add compat_ptr_ioctl() scsi: qla2xxx: Fix memory leak when sending I/O fails scsi: qla2xxx: Fix double scsi_done for abort path scsi: qla2xxx: Fix driver unload hang scsi: qla2xxx: Do command completion on abort timeout scsi: zfcp: trace channel log even for FCP command responses scsi: lpfc: Fix bad ndlp ptr in xri aborted handling Revert "nvme: Add quirk for Kingston NVME SSD running FW E8FK11.T" nvme: Namepace identification descriptor list is optional usb: gadget: pch_udc: fix use after free usb: gadget: configfs: Fix missing spin_lock_init() BACKPORT: FROMLIST: scsi: ufs: Export query request interfaces ANDROID: update abi with unbindable_ports sysctl BACKPORT: FROMLIST: net: introduce ip_local_unbindable_ports sysctl ANDROID: update abi for 5.4.3 merge ANDROID: update abi_gki_aarch64.xml for ion, drm changes ANDROID: drivers: gpu: drm: export drm_mode_convert_umode symbol ANDROID: ion: flush cache before exporting non-cached buffers Linux 5.4.3 kselftest: Fix NULL INSTALL_PATH for TARGETS runlist perf script: Fix invalid LBR/binary mismatch error EDAC/ghes: Fix locking and memory barrier issues watchdog: aspeed: Fix clock behaviour for ast2600 drm/mcde: Fix an error handling path in 'mcde_probe()' md/raid0: Fix an error message in raid0_make_request() cpufreq: imx-cpufreq-dt: Correct i.MX8MN's default speed grade value ALSA: hda - Fix pending unsol events at shutdown KVM: x86: fix out-of-bounds write in KVM_GET_EMULATED_CPUID (CVE-2019-19332) binder: Handle start==NULL in binder_update_page_range() binder: Prevent repeated use of ->mmap() via NULL mapping binder: Fix race between mmap() and binder_alloc_print_pages() Revert "serial/8250: Add support for NI-Serial PXI/PXIe+485 devices" vcs: prevent write access to vcsu devices thermal: Fix deadlock in thermal thermal_zone_device_check iomap: Fix pipe page leakage during splicing bdev: Refresh bdev size for disks without partitioning bdev: Factor out bdev revalidation into a common helper rfkill: allocate static minor RDMA/qib: Validate ->show()/store() callbacks before calling them can: ucan: fix non-atomic allocation in completion handler spi: Fix NULL pointer when setting SPI_CS_HIGH for GPIO CS spi: Fix SPI_CS_HIGH setting when using native and GPIO CS spi: atmel: Fix CS high support spi: stm32-qspi: Fix kernel oops when unbinding driver spi: spi-fsl-qspi: Clear TDH bits in FLSHCR register crypto: user - fix memory leak in crypto_reportstat crypto: user - fix memory leak in crypto_report crypto: ecdh - fix big endian bug in ECC library crypto: ccp - fix 
uninitialized list head crypto: geode-aes - switch to skcipher for cbc(aes) fallback crypto: af_alg - cast ki_complete ternary op to int crypto: atmel-aes - Fix IV handling when req->nbytes < ivsize crypto: crypto4xx - fix double-free in crypto4xx_destroy_sdr KVM: x86: Grab KVM's srcu lock when setting nested state KVM: x86: Remove a spurious export of a static function KVM: x86: fix presentation of TSX feature in ARCH_CAPABILITIES KVM: x86: do not modify masked bits of shared MSRs KVM: arm/arm64: vgic: Don't rely on the wrong pending table KVM: nVMX: Always write vmcs02.GUEST_CR3 during nested VM-Enter KVM: PPC: Book3S HV: XIVE: Set kvm->arch.xive when VPs are allocated KVM: PPC: Book3S HV: XIVE: Fix potential page leak on error path KVM: PPC: Book3S HV: XIVE: Free previous EQ page when setting up a new one arm64: dts: exynos: Revert "Remove unneeded address space mapping for soc node" arm64: Validate tagged addresses in access_ok() called from kernel threads drm/i810: Prevent underflow in ioctl drm: damage_helper: Fix race checking plane->state->fb drm/msm: fix memleak on release jbd2: Fix possible overflow in jbd2_log_space_left() kernfs: fix ino wrap-around detection nfsd: restore NFSv3 ACL support nfsd: Ensure CLONE persists data and metadata changes to the target file can: slcan: Fix use-after-free Read in slcan_open tty: vt: keyboard: reject invalid keycodes CIFS: Fix SMB2 oplock break processing CIFS: Fix NULL-pointer dereference in smb2_push_mandatory_locks x86/PCI: Avoid AMD FCH XHCI USB PME# from D0 defect x86/mm/32: Sync only to VMALLOC_END in vmalloc_sync_all() media: rc: mark input device as pointing stick Input: Fix memory leak in psxpad_spi_probe coresight: etm4x: Fix input validation for sysfs. Input: goodix - add upside-down quirk for Teclast X89 tablet Input: synaptics-rmi4 - don't increment rmiaddr for SMBus transfers Input: synaptics-rmi4 - re-enable IRQs in f34v7_do_reflash Input: synaptics - switch another X1 Carbon 6 to RMI/SMbus soc: mediatek: cmdq: fixup wrong input order of write api ALSA: hda: Modify stream stripe mask only when needed ALSA: hda - Add mute led support for HP ProBook 645 G4 ALSA: pcm: oss: Avoid potential buffer overflows ALSA: hda/realtek - Fix inverted bass GPIO pin on Acer 8951G ALSA: hda/realtek - Dell headphone has noise on unmute for ALC236 ALSA: hda/realtek - Enable the headset-mic on a Xiaomi's laptop ALSA: hda/realtek - Enable internal speaker of ASUS UX431FLC SUNRPC: Avoid RPC delays when exiting suspend io_uring: ensure req->submit is copied when req is deferred io_uring: fix missing kmap() declaration on powerpc fuse: verify attributes fuse: verify write return fuse: verify nlink fuse: fix leak of fuse_io_priv io_uring: transform send/recvmsg() -ERESTARTSYS to -EINTR io_uring: fix dead-hung for non-iter fixed rw mwifiex: Re-work support for SDIO HW reset serial: ifx6x60: add missed pm_runtime_disable serial: 8250_dw: Avoid double error messaging when IRQ absent serial: stm32: fix clearing interrupt error flags serial: serial_core: Perform NULL checks for break_ctl ops serial: pl011: Fix DMA ->flush_buffer() tty: serial: msm_serial: Fix flow control tty: serial: fsl_lpuart: use the sg count from dma_map_sg serial: 8250-mtk: Use platform_get_irq_optional() for optional irq usb: gadget: u_serial: add missing port entry locking staging/octeon: Use stubs for MIPS && !CAVIUM_OCTEON_SOC mailbox: tegra: Fix superfluous IRQ error message time: Zero the upper 32-bits in __kernel_timespec on 32-bit lp: fix sparc64 LPSETTIMEOUT ioctl sparc64: 
implement ioremap_uc perf scripts python: exported-sql-viewer.py: Fix use of TRUE with SQLite arm64: tegra: Fix 'active-low' warning for Jetson Xavier regulator arm64: tegra: Fix 'active-low' warning for Jetson TX1 regulator rsi: release skb if rsi_prepare_beacon fails FROMLIST: scsi: ufs: Fix ufshcd_hold() caused scheduling while atomic FROMLIST: scsi: ufs: Add dev ref clock gating wait time support FROMLIST: scsi: ufs-qcom: Adjust bus bandwidth voting and unvoting FROMLIST: scsi: ufs: Remove the check before call setup clock notify vops FROMLIST: scsi: ufs: set load before setting voltage in regulators FROMLIST: scsi: ufs: Flush exception event before suspend FROMLIST: scsi: ufs: Do not rely on prefetched data FROMLIST: scsi: ufs: Fix up clock scaling FROMGIT: scsi: ufs: Do not free irq in suspend FROMGIT: scsi: ufs: Do not clear the DL layer timers FROMGIT: scsi: ufs: Release clock if DMA map fails FROMGIT: scsi: ufs: Use DBD setting in mode sense FROMGIT: scsi: core: Adjust DBD setting in MODE SENSE for caching mode page per LLD FROMGIT: scsi: ufs: Complete pending requests in host reset and restore path FROMGIT: scsi: ufs: Avoid messing up the compl_time_stamp of lrbs FROMGIT: scsi: ufs: Update VCCQ2 and VCCQ min/max voltage hard codes FROMGIT: scsi: ufs: Recheck bkops level if bkops is disabled ANDROID: update abi_gki_aarch64.xml for LTO, CFI, and SCS ANDROID: gki_defconfig: enable LTO, CFI, and SCS ANDROID: update abi_gki_aarch64.xml for CONFIG_GNSS ANDROID: cuttlefish_defconfig: Enable CONFIG_GNSS ANDROID: gki_defconfig: enable HID configs UPSTREAM: arm64: Validate tagged addresses in access_ok() called from kernel threads ANDROID: kbuild: limit LTO inlining ANDROID: kbuild: merge module sections with LTO ANDROID: f2fs: fix possible merge of unencrypted with encrypted I/O ANDROID: gki_defconfig: Enable UCLAMP by default ANDROID: make sure proc mount options are applied ANDROID: sound: usb: Add helper APIs to enable audio stream ANDROID: Update ABI representation ANDROID: Don't base allmodconfig on gki_defconfig ANDROID: Disable UNWINDER_ORC for allmodconfig ANDROID: ASoC: Fix 'allmodconfig' build break Linux 5.4.2 platform/x86: hp-wmi: Fix ACPI errors caused by passing 0 as input size platform/x86: hp-wmi: Fix ACPI errors caused by too small buffer HID: core: check whether Usage Page item is after Usage ID items crypto: talitos - Fix build error by selecting LIB_DES Revert "jffs2: Fix possible null-pointer dereferences in jffs2_add_frag_to_fragtree()" ext4: add more paranoia checking in ext4_expand_extra_isize handling r8169: fix resume on cable plug-in r8169: fix jumbo configuration for RTL8168evl selftests: pmtu: use -oneline for ip route list cache tipc: fix link name length check selftests: bpf: correct perror strings selftests: bpf: test_sockmap: handle file creation failures gracefully net/tls: use sg_next() to walk sg entries net/tls: remove the dead inplace_crypto code selftests/tls: add a test for fragmented messages net: skmsg: fix TLS 1.3 crash with full sk_msg net/tls: free the record on encryption error net/tls: take into account that bpf_exec_tx_verdict() may free the record openvswitch: remove another BUG_ON() openvswitch: drop unneeded BUG_ON() in ovs_flow_cmd_build_info() sctp: cache netns in sctp_ep_common slip: Fix use-after-free Read in slip_open sctp: Fix memory leak in sctp_sf_do_5_2_4_dupcook openvswitch: fix flow command message size net: sched: fix `tc -s class show` no bstats on class with nolock subqueues net: psample: fix skb_over_panic net: macb: add 
missed tasklet_kill net: dsa: sja1105: fix sja1105_parse_rgmii_delays() mdio_bus: don't use managed reset-controller macvlan: schedule bc_work even if error gve: Fix the queue page list allocated pages count x86/fpu: Don't cache access to fpu_fpregs_owner_ctx thunderbolt: Power cycle the router if NVM authentication fails mei: me: add comet point V device id mei: bus: prefix device names on bus with the bus name USB: serial: ftdi_sio: add device IDs for U-Blox C099-F9P staging: rtl8723bs: Add 024c:0525 to the list of SDIO device-ids staging: rtl8723bs: Drop ACPI device ids staging: rtl8192e: fix potential use after free staging: wilc1000: fix illegal memory access in wilc_parse_join_bss_param() usb: dwc2: use a longer core rest timeout in dwc2_core_reset() driver core: platform: use the correct callback type for bus_find_device crypto: inside-secure - Fix stability issue with Macchiatobin net: disallow ancillary data for __sys_{send,recv}msg_file() net: separate out the msghdr copy from ___sys_{send,recv}msg() io_uring: async workers should inherit the user creds ANDROID: Update ABI representation UPSTREAM: of: property: Add device link support for interrupt-parent, dmas and -gpio(s) UPSTREAM: of: property: Fix the semantics of of_is_ancestor_of() UPSTREAM: i2c: of: Populate fwnode in of_i2c_get_board_info() UPSTREAM: regulator: core: Don't try to remove device links if add failed UPSTREAM: driver core: Clarify documentation for fwnode_operations.add_links() ANDROID: Update ABI representation ANDROID: gki_defconfig: IIO=y ANDROID: Update ABI representation ANDROID: ASoC: core - add hostless DAI support ANDROID: gki_defconfig: =m's applied for virtio configs in arm64 ANDROID: Update ABI representation after 5.4.1 merge Linux 5.4.1 KVM: PPC: Book3S HV: Flush link stack on guest exit to host kernel powerpc/book3s64: Fix link stack flush on context switch staging: comedi: usbduxfast: usbduxfast_ai_cmdtest rounding error USB: serial: option: add support for Foxconn T77W968 LTE modules USB: serial: option: add support for DW5821e with eSIM support USB: serial: mos7840: fix remote wakeup USB: serial: mos7720: fix remote wakeup USB: serial: mos7840: add USB ID to support Moxa UPort 2210 appledisplay: fix error handling in the scheduled work USB: chaoskey: fix error case of a timeout usb-serial: cp201x: support Mark-10 digital force gauge usbip: Fix uninitialized symbol 'nents' in stub_recv_cmd_submit() usbip: tools: fix fd leakage in the function of read_attr_usbip_status USBIP: add config dependency for SGL_ALLOC ALSA: hda - Disable audio component for legacy Nvidia HDMI codecs media: mceusb: fix out of bounds read in MCE receiver buffer media: imon: invalid dereference in imon_touch_event media: cxusb: detect cxusb_ctrl_msg error in query media: b2c2-flexcop-usb: add sanity checking media: uvcvideo: Fix error path in control parsing failure futex: Prevent exit livelock futex: Provide distinct return value when owner is exiting futex: Add mutex around futex exit futex: Provide state handling for exec() as well futex: Sanitize exit state handling futex: Mark the begin of futex exit explicitly futex: Set task::futex_state to DEAD right after handling futex exit futex: Split futex_mm_release() for exit/exec exit/exec: Seperate mm_release() futex: Replace PF_EXITPIDONE with a state futex: Move futex exit handling into futex code cpufreq: Add NULL checks to show() and store() methods of cpufreq media: usbvision: Fix races among open, close, and disconnect media: usbvision: Fix invalid accesses after 
device disconnect media: vivid: Fix wrong locking that causes race conditions on streaming stop media: vivid: Set vid_cap_streaming and vid_out_streaming to true ALSA: usb-audio: Fix Scarlett 6i6 Gen 2 port data ALSA: usb-audio: Fix NULL dereference at parsing BADD futex: Prevent robust futex exit race x86/entry/32: Fix FIXUP_ESPFIX_STACK with user CR3 x86/pti/32: Calculate the various PTI cpu_entry_area sizes correctly, make the CPU_ENTRY_AREA_PAGES assert precise selftests/x86/sigreturn/32: Invalidate DS and ES when abusing the kernel selftests/x86/mov_ss_trap: Fix the SYSENTER test x86/entry/32: Fix NMI vs ESPFIX x86/entry/32: Unwind the ESPFIX stack earlier on exception entry x86/entry/32: Move FIXUP_FRAME after pushing %fs in SAVE_ALL x86/entry/32: Use %ss segment where required x86/entry/32: Fix IRET exception x86/cpu_entry_area: Add guard page for entry stack on 32bit x86/pti/32: Size initial_page_table correctly x86/doublefault/32: Fix stack canaries in the double fault handler x86/xen/32: Simplify ring check in xen_iret_crit_fixup() x86/xen/32: Make xen_iret_crit_fixup() independent of frame layout x86/stackframe/32: Repair 32-bit Xen PV nbd: prevent memory leak x86/speculation: Fix redundant MDS mitigation message x86/speculation: Fix incorrect MDS/TAA mitigation status x86/insn: Fix awk regexp warnings md/raid10: prevent access of uninitialized resync_pages offset Revert "dm crypt: use WQ_HIGHPRI for the IO and crypt workqueues" Revert "Bluetooth: hci_ll: set operational frequency earlier" ath10k: restore QCA9880-AR1A (v1) detection ath10k: Fix HOST capability QMI incompatibility ath10k: Fix a NULL-ptr-deref bug in ath10k_usb_alloc_urb_from_pipe ath9k_hw: fix uninitialized variable data Bluetooth: Fix invalid-free in bcsp_close() ANDROID: gki_defconfig: enable CONFIG_REGULATOR_FIXED_VOLTAGE FROMLIST: crypto: arm64/sha: fix function types ANDROID: arm64: kvm: disable CFI ANDROID: arm64: add __nocfi to __apply_alternatives ANDROID: arm64: add __pa_function ANDROID: arm64: add __nocfi to functions that jump to a physical address ANDROID: arm64: bpf: implement arch_bpf_jit_check_func ANDROID: bpf: validate bpf_func when BPF_JIT is enabled with CFI ANDROID: add support for Clang's Control Flow Integrity (CFI) ANDROID: arm64: allow LTO_CLANG and THINLTO to be selected FROMLIST: arm64: fix alternatives with LLVM's integrated assembler FROMLIST: arm64: lse: fix LSE atomics with LLVM's integrated assembler ANDROID: arm64: disable HAVE_ARCH_PREL32_RELOCATIONS with LTO_CLANG ANDROID: arm64: vdso: disable LTO ANDROID: irqchip/gic-v3: rename gic_of_init to work around a ThinLTO+CFI bug ANDROID: soc/tegra: disable ARCH_TEGRA_210_SOC with LTO ANDROID: init: ensure initcall ordering with LTO ANDROID: drivers/misc/lkdtm: disable LTO for rodata.o ANDROID: efi/libstub: disable LTO ANDROID: scripts/mod: disable LTO for empty.c ANDROID: kbuild: fix dynamic ftrace with clang LTO ANDROID: kbuild: add support for Clang LTO ANDROID: kbuild: add CONFIG_LD_IS_LLD FROMGIT: driver core: platform: use the correct callback type for bus_find_device FROMLIST: arm64: implement Shadow Call Stack FROMLIST: arm64: disable SCS for hypervisor code FROMLIST: arm64: vdso: disable Shadow Call Stack FROMLIST: arm64: efi: restore x18 if it was corrupted FROMLIST: arm64: preserve x18 when CPU is suspended FROMLIST: arm64: reserve x18 from general allocation with SCS FROMLIST: arm64: disable function graph tracing with SCS FROMLIST: scs: add support for stack usage debugging FROMLIST: scs: add accounting FROMLIST: add 
support for Clang's Shadow Call Stack (SCS) FROMLIST: arm64: kernel: avoid x18 in __cpu_soft_restart FROMLIST: arm64: kvm: stop treating register x18 as caller save FROMLIST: arm64/lib: copy_page: avoid x18 register in assembler code FROMLIST: arm64: mm: avoid x18 in idmap_kpti_install_ng_mappings ANDROID: clang: update to 10.0.1 ANDROID: update ABI representation

Conflicts:
	Documentation/devicetree/bindings
	Documentation/devicetree/bindings/net/wireless/qcom,ath10k.txt
	arch/arm64/Kconfig
	drivers/firmware/qcom_scm-64.c
	drivers/hwtracing/coresight/coresight.c
	drivers/scsi/ufs/ufs.h
	drivers/scsi/ufs/ufshcd.c
	drivers/scsi/ufs/ufshcd.h
	drivers/scsi/ufs/unipro.h
	drivers/staging/android/ion/heaps/ion_cma_heap.c
	drivers/staging/android/ion/heaps/ion_system_heap.c
	drivers/usb/dwc3/ep0.c
	drivers/usb/dwc3/gadget.c
	include/sound/pcm.h
	include/sound/soc.h
	kernel/exit.c
	kernel/sched/core.c

Change-Id: I66ea973ddcafd352ba999a1dc98e04df33397e3b
Signed-off-by: Blagovest Kolenichev <bkolenichev@codeaurora.org>
commit e79e029826
@@ -265,8 +265,11 @@ time with the option "mds=". The valid arguments for this option are:

   ============ =============================================================

-Not specifying this option is equivalent to "mds=full".
-
+Not specifying this option is equivalent to "mds=full". For processors
+that are affected by both TAA (TSX Asynchronous Abort) and MDS,
+specifying just "mds=off" without an accompanying "tsx_async_abort=off"
+will have no effect as the same mitigation is used for both
+vulnerabilities.

 Mitigation selection guide
 --------------------------
@@ -174,7 +174,10 @@ the option "tsx_async_abort=". The valid arguments for this option are:
               CPU is not vulnerable to cross-thread TAA attacks.
   ============ =============================================================

-Not specifying this option is equivalent to "tsx_async_abort=full".
+Not specifying this option is equivalent to "tsx_async_abort=full". For
+processors that are affected by both TAA and MDS, specifying just
+"tsx_async_abort=off" without an accompanying "mds=off" will have no
+effect as the same mitigation is used for both vulnerabilities.

 The kernel command line also allows to control the TSX feature using the
 parameter "tsx=" on CPUs which support TSX control. MSR_IA32_TSX_CTRL is used
@@ -2477,6 +2477,12 @@
			     SMT on vulnerable CPUs
			off        - Unconditionally disable MDS mitigation

+			On TAA-affected machines, mds=off can be prevented by
+			an active TAA mitigation as both vulnerabilities are
+			mitigated with the same mechanism so in order to disable
+			this mitigation, you need to specify tsx_async_abort=off
+			too.
+
			Not specifying this option is equivalent to
			mds=full.
@@ -4941,6 +4947,11 @@
			     vulnerable to cross-thread TAA attacks.
			off        - Unconditionally disable TAA mitigation

+			On MDS-affected machines, tsx_async_abort=off can be
+			prevented by an active MDS mitigation as both vulnerabilities
+			are mitigated with the same mechanism so in order to disable
+			this mitigation, you need to specify mds=off too.
+
			Not specifying this option is equivalent to
			tsx_async_abort=full. On CPUs which are MDS affected
			and deploy MDS mitigation, TAA mitigation is not
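The two kernel-parameters hunks above say the same thing from both directions: on hardware affected by both MDS and TAA, turning either mitigation off on its own is a no-op because the two issues share one mitigation. A minimal kernel command line that actually disables it therefore has to name both switches (parameter names exactly as documented above; how you append them to the boot loader's kernel line is left out):

	mds=off tsx_async_abort=off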
@@ -5100,13 +5111,13 @@
			Flags is a set of characters, each corresponding
			to a common usb-storage quirk flag as follows:
				a = SANE_SENSE (collect more than 18 bytes
-					of sense data);
+					of sense data, not on uas);
				b = BAD_SENSE (don't collect more than 18
-					bytes of sense data);
+					bytes of sense data, not on uas);
				c = FIX_CAPACITY (decrease the reported
					device capacity by one sector);
				d = NO_READ_DISC_INFO (don't use
-					READ_DISC_INFO command);
+					READ_DISC_INFO command, not on uas);
				e = NO_READ_CAPACITY_16 (don't use
					READ_CAPACITY_16 command);
				f = NO_REPORT_OPCODES (don't use report opcodes

@@ -5121,17 +5132,18 @@
				j = NO_REPORT_LUNS (don't use report luns
					command, uas only);
				l = NOT_LOCKABLE (don't try to lock and
-					unlock ejectable media);
+					unlock ejectable media, not on uas);
				m = MAX_SECTORS_64 (don't transfer more
-					than 64 sectors = 32 KB at a time);
+					than 64 sectors = 32 KB at a time,
+					not on uas);
				n = INITIAL_READ10 (force a retry of the
-					initial READ(10) command);
+					initial READ(10) command, not on uas);
				o = CAPACITY_OK (accept the capacity
-					reported by the device);
+					reported by the device, not on uas);
				p = WRITE_CACHE (the device cache is ON
-					by default);
+					by default, not on uas);
				r = IGNORE_RESIDUE (the device reports
-					bogus residue values);
+					bogus residue values, not on uas);
				s = SINGLE_LUN (the device has only one
					Logical Unit);
				t = NO_ATA_1X (don't allow ATA(12) and ATA(16)

@@ -5140,7 +5152,8 @@
				w = NO_WP_DETECT (don't test whether the
					medium is write-protected).
				y = ALWAYS_SYNC (issue a SYNCHRONIZE_CACHE
-					even if the device claims no cache)
+					even if the device claims no cache,
+					not on uas)
			Example: quirks=0419:aaf5:rl,0421:0433:rc

	user_debug=	[KNL,ARM]
@@ -939,6 +939,19 @@ ip_local_reserved_ports - list of comma separated ranges

	Default: Empty

+ip_local_unbindable_ports - list of comma separated ranges
+	Specify the ports which are not directly bind()able.
+
+	Usually you would use this to block the use of ports which
+	are invalid due to something outside of the control of the
+	kernel. For example a port stolen by the nic for serial
+	console, remote power management or debugging.
+
+	There's a relatively high chance you will also want to list
+	these ports in 'ip_local_reserved_ports' to prevent autobinding.
+
+	Default: Empty
+
 ip_unprivileged_port_start - INTEGER
	This is a per-namespace sysctl. It defines the first
	unprivileged port in the network namespace. Privileged ports
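To make the relationship between the two sysctls concrete, here is a hedged usage sketch. It assumes the new knob is exposed under net.ipv4 next to ip_local_reserved_ports, as the FROMLIST "ip_local_unbindable_ports" commit in the shortlog suggests; the port range is made up for illustration:

	# keep autobinding away from ports that belong to something outside the kernel...
	sysctl -w net.ipv4.ip_local_reserved_ports=50000-50010
	# ...and additionally refuse explicit bind() on them, per the new sysctl above
	sysctl -w net.ipv4.ip_local_unbindable_ports=50000-50010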
Makefile (64 lines changed)
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 4
-SUBLEVEL = 0
+SUBLEVEL = 5
 EXTRAVERSION =
 NAME = Kleptomaniac Octopus

@@ -662,6 +662,16 @@ RETPOLINE_VDSO_CFLAGS := $(call cc-option,$(RETPOLINE_VDSO_CFLAGS_GCC),$(call cc
 export RETPOLINE_CFLAGS
 export RETPOLINE_VDSO_CFLAGS

+# Make toolchain changes before including arch/$(SRCARCH)/Makefile to ensure
+# ar/cc/ld-* macros return correct values.
+ifdef CONFIG_LTO_CLANG
+# LTO produces LLVM IR instead of object files. Use llvm-ar and llvm-nm, so we
+# can process these.
+AR		:= llvm-ar
+LLVM_NM		:= llvm-nm
+export LLVM_NM
+endif
+
 include arch/$(SRCARCH)/Makefile

 ifdef need-config

@@ -860,6 +870,55 @@ ifdef CONFIG_LIVEPATCH
 KBUILD_CFLAGS += $(call cc-option, -flive-patching=inline-clone)
 endif

+ifdef CONFIG_SHADOW_CALL_STACK
+CC_FLAGS_SCS	:= -fsanitize=shadow-call-stack
+KBUILD_CFLAGS	+= $(CC_FLAGS_SCS)
+export CC_FLAGS_SCS
+endif
+
+ifdef CONFIG_LTO_CLANG
+ifdef CONFIG_THINLTO
+CC_FLAGS_LTO_CLANG := -flto=thin $(call cc-option, -fsplit-lto-unit)
+KBUILD_LDFLAGS	+= --thinlto-cache-dir=.thinlto-cache
+else
+CC_FLAGS_LTO_CLANG := -flto
+endif
+CC_FLAGS_LTO_CLANG += -fvisibility=default
+
+# Limit inlining across translation units to reduce binary size
+LD_FLAGS_LTO_CLANG := -mllvm -import-instr-limit=5
+
+KBUILD_LDFLAGS += $(LD_FLAGS_LTO_CLANG)
+KBUILD_LDFLAGS_MODULE += $(LD_FLAGS_LTO_CLANG)
+
+KBUILD_LDS_MODULE += $(srctree)/scripts/module-lto.lds
+endif
+
+ifdef CONFIG_LTO
+CC_FLAGS_LTO	:= $(CC_FLAGS_LTO_CLANG)
+KBUILD_CFLAGS	+= $(CC_FLAGS_LTO)
+export CC_FLAGS_LTO
+endif
+
+ifdef CONFIG_CFI_CLANG
+CC_FLAGS_CFI	:= -fsanitize=cfi \
+		   -fno-sanitize-cfi-canonical-jump-tables
+
+ifdef CONFIG_MODULES
+CC_FLAGS_CFI	+= -fsanitize-cfi-cross-dso
+endif
+
+ifdef CONFIG_CFI_PERMISSIVE
+CC_FLAGS_CFI	+= -fsanitize-recover=cfi \
+		   -fno-sanitize-trap=cfi
+endif
+
+# If LTO flags are filtered out, we must also filter out CFI.
+CC_FLAGS_LTO	+= $(CC_FLAGS_CFI)
+KBUILD_CFLAGS	+= $(CC_FLAGS_CFI)
+export CC_FLAGS_CFI
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)

@@ -1695,7 +1754,8 @@ clean: $(clean-dirs)
		-o -name modules.builtin -o -name '.tmp_*.o.*' \
		-o -name '*.c.[012]*.*' \
		-o -name '*.ll' \
-		-o -name '*.gcno' \) -type f -print | xargs rm -f
+		-o -name '*.gcno' \
+		-o -name '*.*.symversions' \) -type f -print | xargs rm -f

 # Generate tags for editors
 # ---------------------------------------------------------------------------
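For orientation, a hedged sketch of how the kbuild hooks above are normally exercised. The CONFIG_ names are exactly the ones tested in the hunk (and enabled by the "ANDROID: gki_defconfig: enable LTO, CFI, and SCS" commit in the shortlog); the defconfig target, toolchain binaries and make invocation are illustrative rather than taken from this tree's build scripts:

	# illustrative .config fragment -- the options consumed by the Makefile hunk above
	CONFIG_LTO_CLANG=y
	CONFIG_THINLTO=y
	CONFIG_CFI_CLANG=y
	# CONFIG_CFI_PERMISSIVE is not set   (permissive mode only reports CFI violations instead of trapping)
	CONFIG_SHADOW_CALL_STACK=y

	# build with the LLVM toolchain so -flto=thin, -fsanitize=cfi and
	# -fsanitize=shadow-call-stack (plus the llvm-ar/llvm-nm overrides) take effect
	make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- CC=clang LD=ld.lld gki_defconfig
	make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- CC=clang LD=ld.lld -j$(nproc)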
abi_gki_aarch64.xml (234477 lines changed; file diff suppressed because it is too large)
abi_gki_aarch64_whitelist (new file, 859 lines)
@@ -0,0 +1,859 @@
[abi_whitelist]
|
||||
add_timer
|
||||
add_uevent_var
|
||||
add_wait_queue
|
||||
alloc_chrdev_region
|
||||
__alloc_disk_node
|
||||
alloc_etherdev_mqs
|
||||
alloc_netdev_mqs
|
||||
alloc_pages_exact
|
||||
__alloc_pages_nodemask
|
||||
__alloc_percpu
|
||||
__alloc_skb
|
||||
alloc_workqueue
|
||||
arch_bpf_jit_check_func
|
||||
__arch_copy_from_user
|
||||
__arch_copy_to_user
|
||||
arm64_const_caps_ready
|
||||
autoremove_wake_function
|
||||
bcmp
|
||||
blk_cleanup_queue
|
||||
blk_execute_rq
|
||||
blk_get_queue
|
||||
blk_get_request
|
||||
blk_mq_alloc_tag_set
|
||||
blk_mq_complete_request
|
||||
__blk_mq_end_request
|
||||
blk_mq_end_request
|
||||
blk_mq_free_tag_set
|
||||
blk_mq_init_queue
|
||||
blk_mq_quiesce_queue
|
||||
blk_mq_requeue_request
|
||||
blk_mq_run_hw_queues
|
||||
blk_mq_start_request
|
||||
blk_mq_start_stopped_hw_queues
|
||||
blk_mq_stop_hw_queue
|
||||
blk_mq_unquiesce_queue
|
||||
blk_mq_virtio_map_queues
|
||||
blk_put_queue
|
||||
blk_put_request
|
||||
blk_queue_alignment_offset
|
||||
blk_queue_bounce_limit
|
||||
blk_queue_can_use_dma_map_merging
|
||||
blk_queue_flag_clear
|
||||
blk_queue_flag_set
|
||||
blk_queue_io_min
|
||||
blk_queue_io_opt
|
||||
blk_queue_logical_block_size
|
||||
blk_queue_max_discard_sectors
|
||||
blk_queue_max_discard_segments
|
||||
blk_queue_max_hw_sectors
|
||||
blk_queue_max_segments
|
||||
blk_queue_max_segment_size
|
||||
blk_queue_max_write_zeroes_sectors
|
||||
blk_queue_physical_block_size
|
||||
blk_queue_rq_timeout
|
||||
blk_queue_write_cache
|
||||
blk_rq_map_kern
|
||||
blk_rq_map_sg
|
||||
blk_status_to_errno
|
||||
blk_update_request
|
||||
bpf_prog_add
|
||||
bpf_prog_put
|
||||
bpf_prog_sub
|
||||
bpf_stats_enabled_key
|
||||
bpf_trace_run10
|
||||
bpf_trace_run2
|
||||
bpf_trace_run8
|
||||
bpf_warn_invalid_xdp_action
|
||||
build_skb
|
||||
bus_register
|
||||
bus_unregister
|
||||
call_netdevice_notifiers
|
||||
call_rcu
|
||||
cancel_delayed_work
|
||||
cancel_delayed_work_sync
|
||||
cancel_work_sync
|
||||
capable
|
||||
cdev_add
|
||||
cdev_alloc
|
||||
cdev_del
|
||||
cdev_device_add
|
||||
cdev_device_del
|
||||
cdev_init
|
||||
cfg80211_connect_done
|
||||
cfg80211_disconnected
|
||||
cfg80211_inform_bss_data
|
||||
cfg80211_put_bss
|
||||
cfg80211_scan_done
|
||||
__cfi_slowpath
|
||||
check_disk_change
|
||||
__check_object_size
|
||||
__class_create
|
||||
class_destroy
|
||||
__class_register
|
||||
class_unregister
|
||||
clear_page
|
||||
clk_disable
|
||||
clk_enable
|
||||
clk_get_rate
|
||||
clk_prepare
|
||||
clk_unprepare
|
||||
complete
|
||||
complete_all
|
||||
completion_done
|
||||
console_suspend_enabled
|
||||
__const_udelay
|
||||
consume_skb
|
||||
_copy_from_iter_full
|
||||
copy_page
|
||||
_copy_to_iter
|
||||
cpu_bit_bitmap
|
||||
cpufreq_generic_attr
|
||||
cpufreq_register_driver
|
||||
cpufreq_unregister_driver
|
||||
__cpuhp_remove_state
|
||||
__cpuhp_setup_state
|
||||
__cpuhp_state_add_instance
|
||||
__cpuhp_state_remove_instance
|
||||
cpu_hwcap_keys
|
||||
cpu_hwcaps
|
||||
cpumask_next
|
||||
cpumask_next_wrap
|
||||
cpu_number
|
||||
__cpu_online_mask
|
||||
cpus_read_lock
|
||||
cpus_read_unlock
|
||||
cpu_topology
|
||||
crypto_ablkcipher_type
|
||||
crypto_dequeue_request
|
||||
crypto_enqueue_request
|
||||
crypto_init_queue
|
||||
crypto_register_alg
|
||||
crypto_unregister_alg
|
||||
datagram_poll
|
||||
debugfs_attr_read
|
||||
debugfs_attr_write
|
||||
debugfs_create_dir
|
||||
debugfs_create_file
|
||||
debugfs_create_file_unsafe
|
||||
debugfs_create_x32
|
||||
debugfs_remove
|
||||
debugfs_remove_recursive
|
||||
debug_smp_processor_id
|
||||
default_llseek
|
||||
default_wake_function
|
||||
delayed_work_timer_fn
|
||||
del_gendisk
|
||||
del_timer
|
||||
del_timer_sync
|
||||
destroy_workqueue
|
||||
dev_add_pack
|
||||
dev_close
|
||||
dev_driver_string
|
||||
_dev_err
|
||||
dev_fwnode
|
||||
__dev_get_by_index
|
||||
dev_get_by_index
|
||||
dev_get_by_index_rcu
|
||||
dev_get_stats
|
||||
device_add
|
||||
device_add_disk
|
||||
device_create
|
||||
device_create_file
|
||||
device_del
|
||||
device_destroy
|
||||
device_for_each_child
|
||||
device_initialize
|
||||
device_init_wakeup
|
||||
device_property_present
|
||||
device_property_read_u32_array
|
||||
device_register
|
||||
device_remove_file
|
||||
device_unregister
|
||||
_dev_info
|
||||
__dev_kfree_skb_any
|
||||
devm_clk_get
|
||||
dev_mc_sync_multiple
|
||||
dev_mc_unsync
|
||||
devm_gpiod_get_index
|
||||
devm_ioremap
|
||||
devm_kasprintf
|
||||
devm_kfree
|
||||
devm_kmalloc
|
||||
devm_platform_ioremap_resource
|
||||
devm_regulator_get_optional
|
||||
__devm_request_region
|
||||
devm_request_threaded_irq
|
||||
__devm_reset_control_get
|
||||
devm_rtc_allocate_device
|
||||
_dev_notice
|
||||
dev_open
|
||||
dev_pm_domain_attach
|
||||
dev_pm_domain_detach
|
||||
dev_printk
|
||||
dev_queue_xmit
|
||||
dev_remove_pack
|
||||
devres_add
|
||||
__devres_alloc_node
|
||||
devres_destroy
|
||||
devres_free
|
||||
dev_set_mtu
|
||||
dev_set_name
|
||||
dev_uc_sync_multiple
|
||||
dev_uc_unsync
|
||||
_dev_warn
|
||||
disable_irq
|
||||
dma_alloc_attrs
|
||||
dma_direct_map_page
|
||||
dma_direct_map_sg
|
||||
dma_direct_sync_sg_for_cpu
|
||||
dma_direct_sync_sg_for_device
|
||||
dma_direct_sync_single_for_cpu
|
||||
dma_direct_sync_single_for_device
|
||||
dma_direct_unmap_page
|
||||
dma_direct_unmap_sg
|
||||
dma_fence_context_alloc
|
||||
dma_fence_enable_sw_signaling
|
||||
dma_fence_init
|
||||
dma_fence_match_context
|
||||
dma_fence_release
|
||||
dma_fence_signal
|
||||
dma_fence_signal_locked
|
||||
dma_fence_wait_timeout
|
||||
dma_free_attrs
|
||||
dma_get_merge_boundary
|
||||
dma_max_mapping_size
|
||||
dma_resv_add_excl_fence
|
||||
dma_resv_add_shared_fence
|
||||
dma_resv_copy_fences
|
||||
dma_resv_fini
|
||||
dma_resv_init
|
||||
dma_resv_reserve_shared
|
||||
dma_resv_test_signaled_rcu
|
||||
dma_resv_wait_timeout_rcu
|
||||
dma_set_coherent_mask
|
||||
dma_set_mask
|
||||
driver_register
|
||||
driver_unregister
|
||||
drm_add_edid_modes
|
||||
drm_add_modes_noedid
|
||||
drm_atomic_helper_check
|
||||
drm_atomic_helper_cleanup_planes
|
||||
drm_atomic_helper_commit
|
||||
drm_atomic_helper_commit_hw_done
|
||||
drm_atomic_helper_commit_modeset_disables
|
||||
drm_atomic_helper_commit_modeset_enables
|
||||
drm_atomic_helper_commit_planes
|
||||
drm_atomic_helper_connector_destroy_state
|
||||
drm_atomic_helper_connector_duplicate_state
|
||||
drm_atomic_helper_connector_reset
|
||||
drm_atomic_helper_crtc_destroy_state
|
||||
drm_atomic_helper_crtc_duplicate_state
|
||||
drm_atomic_helper_crtc_reset
|
||||
drm_atomic_helper_dirtyfb
|
||||
drm_atomic_helper_disable_plane
|
||||
drm_atomic_helper_page_flip
|
||||
drm_atomic_helper_plane_destroy_state
|
||||
drm_atomic_helper_plane_duplicate_state
|
||||
drm_atomic_helper_plane_reset
|
||||
drm_atomic_helper_set_config
|
||||
drm_atomic_helper_shutdown
|
||||
drm_atomic_helper_update_plane
|
||||
drm_atomic_helper_wait_for_vblanks
|
||||
drm_class_device_register
|
||||
drm_class_device_unregister
|
||||
drm_clflush_pages
|
||||
drm_compat_ioctl
|
||||
drm_connector_attach_edid_property
|
||||
drm_connector_attach_encoder
|
||||
drm_connector_cleanup
|
||||
drm_connector_init
|
||||
drm_connector_register
|
||||
drm_connector_unregister
|
||||
drm_connector_update_edid_property
|
||||
drm_crtc_cleanup
|
||||
drm_crtc_init_with_planes
|
||||
drm_crtc_send_vblank_event
|
||||
drm_cvt_mode
|
||||
drm_dbg
|
||||
drm_debugfs_create_files
|
||||
drm_dev_alloc
|
||||
drm_dev_put
|
||||
drm_dev_register
|
||||
drm_dev_set_unique
|
||||
drm_dev_unregister
|
||||
drm_do_get_edid
|
||||
drm_encoder_cleanup
|
||||
drm_encoder_init
|
||||
drm_err
|
||||
drm_framebuffer_init
|
||||
drm_gem_fb_create_handle
|
||||
drm_gem_fb_destroy
|
||||
drm_gem_handle_create
|
||||
drm_gem_object_init
|
||||
drm_gem_object_lookup
|
||||
drm_gem_object_put_unlocked
|
||||
drm_gem_object_release
|
||||
drm_gem_prime_fd_to_handle
|
||||
drm_gem_prime_handle_to_fd
|
||||
drm_gem_prime_mmap
|
||||
drm_helper_hpd_irq_event
|
||||
drm_helper_mode_fill_fb_struct
|
||||
drm_helper_probe_single_connector_modes
|
||||
drm_ioctl
|
||||
drm_kms_helper_hotplug_event
|
||||
drm_mm_init
|
||||
drm_mm_insert_node_in_range
|
||||
drm_mm_print
|
||||
drm_mm_remove_node
|
||||
drm_mm_takedown
|
||||
drm_mode_config_cleanup
|
||||
drm_mode_config_init
|
||||
drm_mode_config_reset
|
||||
drm_mode_probed_add
|
||||
drm_open
|
||||
drm_plane_cleanup
|
||||
drm_poll
|
||||
drm_prime_pages_to_sg
|
||||
drm_printf
|
||||
__drm_printfn_debug
|
||||
drm_put_dev
|
||||
drm_read
|
||||
drm_release
|
||||
drm_set_preferred_mode
|
||||
drm_universal_plane_init
|
||||
drm_vma_offset_add
|
||||
drm_vma_offset_lookup_locked
|
||||
drm_vma_offset_manager_destroy
|
||||
drm_vma_offset_manager_init
|
||||
drm_vma_offset_remove
|
||||
eth_commit_mac_addr_change
|
||||
ether_setup
|
||||
eth_prepare_mac_addr_change
|
||||
__ethtool_get_link_ksettings
|
||||
ethtool_op_get_link
|
||||
ethtool_op_get_ts_info
|
||||
eth_type_trans
|
||||
eth_validate_addr
|
||||
event_triggers_call
|
||||
fasync_helper
|
||||
fd_install
|
||||
find_next_bit
|
||||
finish_wait
|
||||
flow_keys_basic_dissector
|
||||
flush_work
|
||||
flush_workqueue
|
||||
fput
|
||||
free_irq
|
||||
free_netdev
|
||||
__free_pages
|
||||
free_pages_exact
|
||||
free_percpu
|
||||
freezing_slow_path
|
||||
fsl8250_handle_irq
|
||||
generic_file_llseek
|
||||
get_device
|
||||
get_random_bytes
|
||||
__get_task_comm
|
||||
get_unused_fd_flags
|
||||
gpiod_cansleep
|
||||
gpiod_get_raw_value
|
||||
gpiod_get_raw_value_cansleep
|
||||
gpiod_get_value
|
||||
gpiod_get_value_cansleep
|
||||
gpiod_is_active_low
|
||||
gpiod_set_debounce
|
||||
gpiod_to_irq
|
||||
hrtimer_active
|
||||
hrtimer_cancel
|
||||
hrtimer_forward
|
||||
hrtimer_init
|
||||
hrtimer_start_range_ns
|
||||
hvc_alloc
|
||||
hvc_instantiate
|
||||
hvc_kick
|
||||
hvc_poll
|
||||
hvc_remove
|
||||
__hvc_resize
|
||||
hwrng_register
|
||||
hwrng_unregister
|
||||
ida_alloc_range
|
||||
ida_destroy
|
||||
ida_free
|
||||
init_net
|
||||
init_timer_key
|
||||
init_wait_entry
|
||||
__init_waitqueue_head
|
||||
input_alloc_absinfo
|
||||
input_allocate_device
|
||||
input_event
|
||||
input_free_device
|
||||
input_mt_init_slots
|
||||
input_register_device
|
||||
input_set_abs_params
|
||||
input_unregister_device
|
||||
iomem_resource
|
||||
__ioremap
|
||||
iounmap
|
||||
irq_dispose_mapping
|
||||
irq_set_affinity_hint
|
||||
irq_set_irq_wake
|
||||
jiffies
|
||||
jiffies_to_msecs
|
||||
kernel_kobj
|
||||
kfree
|
||||
kfree_skb
|
||||
kill_fasync
|
||||
kimage_vaddr
|
||||
kimage_voffset
|
||||
__kmalloc
|
||||
kmalloc_caches
|
||||
kmalloc_order_trace
|
||||
kmem_cache_alloc
|
||||
kmem_cache_alloc_trace
|
||||
kmem_cache_create
|
||||
kmem_cache_destroy
|
||||
kmem_cache_free
|
||||
kmemdup
|
||||
kobject_del
|
||||
kobject_init_and_add
|
||||
kobject_put
|
||||
kobject_uevent
|
||||
kobject_uevent_env
|
||||
kstrtoull
|
||||
kthread_create_on_node
|
||||
kthread_create_worker
|
||||
kthread_destroy_worker
|
||||
kthread_queue_work
|
||||
kthread_should_stop
|
||||
kthread_stop
|
||||
ktime_get
|
||||
ktime_get_mono_fast_ns
|
||||
ktime_get_real_seconds
|
||||
ktime_get_ts64
|
||||
ktime_get_with_offset
|
||||
kvfree
|
||||
kvmalloc_node
|
||||
kzfree
|
||||
led_classdev_register_ext
|
||||
led_classdev_unregister
|
||||
led_trigger_event
|
||||
led_trigger_register_simple
|
||||
led_trigger_unregister_simple
|
||||
__local_bh_enable_ip
|
||||
lock_sock_nested
|
||||
mark_page_accessed
|
||||
memcpy
|
||||
__memcpy_fromio
|
||||
__memcpy_toio
|
||||
memdup_user
|
||||
memmove
|
||||
memparse
|
||||
memset
|
||||
__memset_io
|
||||
misc_deregister
|
||||
misc_register
|
||||
mod_timer
|
||||
__module_get
|
||||
module_put
|
||||
__msecs_to_jiffies
|
||||
msleep
|
||||
__mutex_init
|
||||
mutex_is_locked
|
||||
mutex_lock
|
||||
mutex_lock_interruptible
|
||||
mutex_trylock
|
||||
mutex_unlock
|
||||
__napi_alloc_skb
|
||||
napi_complete_done
|
||||
napi_consume_skb
|
||||
napi_disable
|
||||
napi_gro_receive
|
||||
napi_hash_del
|
||||
__napi_schedule
|
||||
napi_schedule_prep
|
||||
__netdev_alloc_skb
|
||||
netdev_change_features
|
||||
netdev_err
|
||||
netdev_increment_features
|
||||
netdev_info
|
||||
netdev_lower_state_changed
|
||||
netdev_master_upper_dev_link
|
||||
netdev_notify_peers
|
||||
netdev_pick_tx
|
||||
netdev_rx_handler_register
|
||||
netdev_rx_handler_unregister
|
||||
netdev_upper_dev_link
|
||||
netdev_upper_dev_unlink
|
||||
netdev_warn
|
||||
netif_carrier_off
|
||||
netif_carrier_on
|
||||
netif_device_attach
|
||||
netif_device_detach
|
||||
netif_napi_add
|
||||
netif_napi_del
|
||||
netif_receive_skb
|
||||
netif_rx
|
||||
netif_rx_ni
|
||||
netif_schedule_queue
|
||||
netif_set_real_num_rx_queues
|
||||
netif_set_real_num_tx_queues
|
||||
__netif_set_xps_queue
|
||||
netif_stacked_transfer_operstate
|
||||
netif_tx_stop_all_queues
|
||||
netif_tx_wake_queue
|
||||
netlink_capable
|
||||
__netlink_dump_start
|
||||
net_ratelimit
|
||||
nf_conntrack_destroy
|
||||
nla_memcpy
|
||||
__nla_parse
|
||||
nla_put
|
||||
__nlmsg_put
|
||||
no_llseek
|
||||
nonseekable_open
|
||||
noop_llseek
|
||||
nr_cpu_ids
|
||||
nr_swap_pages
|
||||
nsecs_to_jiffies
|
||||
__num_online_cpus
|
||||
of_address_to_resource
|
||||
of_alias_get_id
|
||||
of_device_get_match_data
|
||||
of_device_is_big_endian
|
||||
of_device_is_compatible
|
||||
of_find_property
|
||||
of_get_child_by_name
|
||||
of_get_next_child
|
||||
of_get_property
|
||||
of_irq_get
|
||||
of_parse_phandle
|
||||
of_property_read_u64
|
||||
of_property_read_variable_u32_array
|
||||
panic
|
||||
param_ops_bool
|
||||
param_ops_int
|
||||
param_ops_uint
|
||||
passthru_features_check
|
||||
pci_alloc_irq_vectors_affinity
|
||||
pci_bus_type
|
||||
pci_disable_device
|
||||
pci_enable_device
|
||||
pci_find_capability
|
||||
pci_find_ext_capability
|
||||
pci_find_next_capability
|
||||
pci_free_irq_vectors
|
||||
pci_iomap_range
|
||||
pci_irq_get_affinity
|
||||
pci_irq_vector
|
||||
pci_read_config_byte
|
||||
pci_read_config_dword
|
||||
__pci_register_driver
|
||||
pci_release_selected_regions
|
||||
pci_request_selected_regions
|
||||
pci_set_master
|
||||
pci_unregister_driver
|
||||
PDE_DATA
|
||||
__per_cpu_offset
|
||||
perf_trace_buf_alloc
|
||||
perf_trace_run_bpf_submit
|
||||
physvirt_offset
|
||||
pipe_lock
|
||||
pipe_unlock
|
||||
platform_device_add
|
||||
platform_device_add_data
|
||||
platform_device_alloc
|
||||
platform_device_del
|
||||
platform_device_put
|
||||
platform_device_register_full
|
||||
platform_device_unregister
|
||||
__platform_driver_register
|
||||
platform_driver_unregister
|
||||
platform_get_irq
|
||||
platform_get_resource
|
||||
pm_generic_resume
|
||||
pm_generic_runtime_resume
|
||||
pm_generic_runtime_suspend
pm_generic_suspend
__pm_runtime_disable
pm_runtime_enable
__pm_runtime_idle
__pm_runtime_resume
pm_runtime_set_autosuspend_delay
__pm_runtime_set_status
__pm_runtime_suspend
__pm_runtime_use_autosuspend
pm_wakeup_dev_event
prandom_u32
preempt_count_add
preempt_count_sub
preempt_schedule
preempt_schedule_notrace
prepare_to_wait
prepare_to_wait_event
printk
proc_create_net_single
proc_mkdir_data
proto_register
proto_unregister
__put_cred
put_device
put_disk
__put_page
put_unused_fd
queue_delayed_work_on
queue_work_on
___ratelimit
_raw_read_lock
_raw_read_unlock
_raw_spin_lock
_raw_spin_lock_bh
_raw_spin_lock_irq
_raw_spin_lock_irqsave
_raw_spin_trylock
_raw_spin_unlock
_raw_spin_unlock_bh
_raw_spin_unlock_irq
_raw_spin_unlock_irqrestore
_raw_write_lock_bh
_raw_write_unlock_bh
rcu_barrier
__rcu_read_lock
__rcu_read_unlock
refcount_dec_and_test_checked
refcount_inc_checked
refcount_inc_not_zero_checked
__refrigerator
register_blkdev
__register_chrdev
register_netdev
register_netdevice
register_netdevice_notifier
register_pernet_subsys
register_pm_notifier
register_shrinker
regulator_count_voltages
regulator_disable
regulator_enable
regulator_get_current_limit
regulator_get_voltage
regulator_is_supported_voltage
regulator_list_voltage
regulator_set_voltage
release_sock
remove_proc_entry
remove_wait_queue
__request_module
request_threaded_irq
reservation_ww_class
reset_control_assert
reset_control_deassert
revalidate_disk
round_jiffies
__rtc_register_device
rtc_time64_to_tm
rtc_tm_to_time64
rtc_update_irq
rtnl_is_locked
rtnl_link_register
rtnl_link_unregister
rtnl_lock
rtnl_register_module
rtnl_unlock
rtnl_unregister
rtnl_unregister_all
sched_clock
sched_setscheduler
schedule
schedule_timeout
scnprintf
security_sock_graft
seq_lseek
seq_printf
seq_putc
seq_puts
seq_read
serial8250_get_port
serial8250_register_8250_port
serial8250_resume_port
serial8250_suspend_port
serial8250_unregister_port
set_disk_ro
set_page_dirty
sg_alloc_table
__sg_alloc_table_from_pages
sg_copy_from_buffer
sg_copy_to_buffer
sg_free_table
sg_init_one
sg_init_table
sg_miter_next
sg_miter_start
sg_miter_stop
sg_nents
sg_nents_for_len
sg_next
shmem_file_setup
shmem_read_mapping_page_gfp
si_mem_available
si_meminfo
simple_attr_open
simple_attr_read
simple_attr_release
simple_attr_write
simple_read_from_buffer
simple_strtoul
single_open
single_release
sk_alloc
skb_add_rx_frag
skb_clone
skb_coalesce_rx_frag
skb_copy
skb_dequeue
__skb_flow_dissect
skb_free_datagram
skb_page_frag_refill
skb_partial_csum_set
skb_put
skb_queue_purge
skb_queue_tail
skb_recv_datagram
skb_to_sgvec
skb_trim
skb_tstamp_tx
sk_free
snprintf
sock_alloc_send_skb
sock_diag_register
sock_diag_save_cookie
sock_diag_unregister
sock_efree
sock_gettstamp
sock_i_ino
sock_init_data
sock_no_accept
sock_no_bind
sock_no_connect
sock_no_getname
sock_no_getsockopt
sock_no_ioctl
sock_no_listen
sock_no_mmap
sock_no_sendpage
sock_no_setsockopt
sock_no_shutdown
sock_no_socketpair
sock_queue_rcv_skb
__sock_recv_ts_and_drops
sock_register
__sock_tx_timestamp
sock_unregister
softnet_data
__splice_from_pipe
sprintf
sscanf
__stack_chk_fail
__stack_chk_guard
strcmp
strcpy
string_get_size
strlcpy
strlen
strncmp
strncpy
strstr
swiotlb_max_segment
sync_file_create
sync_file_get_fence
synchronize_hardirq
synchronize_irq
synchronize_net
synchronize_rcu
sysfs_create_bin_file
sysfs_create_group
__sysfs_match_string
sysfs_remove_bin_file
sysfs_remove_group
system_freezable_wq
system_freezing_cnt
system_wq
__this_cpu_preempt_check
trace_define_field
trace_event_buffer_commit
trace_event_buffer_reserve
trace_event_ignore_this_pid
trace_event_raw_init
trace_event_reg
trace_handle_return
__tracepoint_dma_fence_emit
__tracepoint_xdp_exception
trace_print_symbols_seq
trace_raw_output_prep
trace_seq_printf
try_module_get
unlock_page
unmap_mapping_range
unregister_blkdev
__unregister_chrdev
unregister_chrdev_region
unregister_netdev
unregister_netdevice_many
unregister_netdevice_notifier
unregister_netdevice_queue
unregister_pernet_subsys
unregister_pm_notifier
unregister_shrinker
up_read
usb_add_gadget_udc
usb_add_hcd
usb_create_hcd
usb_create_shared_hcd
usb_del_gadget_udc
usb_disabled
usb_ep_set_maxpacket_limit
usb_gadget_giveback_request
usb_gadget_udc_reset
usb_get_dev
usb_hcd_check_unlink_urb
usb_hcd_giveback_urb
usb_hcd_is_primary_hcd
usb_hcd_link_urb_to_ep
usb_hcd_poll_rh_status
usb_hcd_resume_root_hub
usb_hcd_unlink_urb_from_ep
usb_put_dev
usb_put_hcd
usb_remove_hcd
usleep_range
vabits_actual
vmalloc_to_page
vmap
vmemmap
vmf_insert_mixed
vmf_insert_pfn
vm_get_page_prot
vunmap
wait_for_completion
wait_for_completion_killable
wait_woken
__wake_up
wake_up_process
__warn_printk
wiphy_free
wiphy_new_nm
wiphy_register
wiphy_unregister
woken_wake_function
ww_mutex_lock
ww_mutex_lock_interruptible
ww_mutex_unlock
xdp_convert_zc_to_xdp_frame
xdp_do_flush_map
xdp_do_redirect
xdp_return_frame
xdp_return_frame_rx_napi
xdp_rxq_info_reg
xdp_rxq_info_reg_mem_model
xdp_rxq_info_unreg
arch/Kconfig (104 lines changed)
@@ -521,6 +521,110 @@ config STACKPROTECTOR_STRONG
about 20% of all kernel functions, which increases the kernel code
size by about 2%.

config ARCH_SUPPORTS_SHADOW_CALL_STACK
bool
help
An architecture should select this if it supports Clang's Shadow
Call Stack, has asm/scs.h, and implements runtime support for shadow
stack switching.

config SHADOW_CALL_STACK
bool "Clang Shadow Call Stack"
depends on ARCH_SUPPORTS_SHADOW_CALL_STACK
help
This option enables Clang's Shadow Call Stack, which uses a
shadow stack to protect function return addresses from being
overwritten by an attacker. More information can be found from
Clang's documentation:

https://clang.llvm.org/docs/ShadowCallStack.html

Note that security guarantees in the kernel differ from the ones
documented for user space. The kernel must store addresses of shadow
stacks used by other tasks and interrupt handlers in memory, which
means an attacker capable of reading and writing arbitrary memory may
be able to locate them and hijack control flow by modifying shadow
stacks that are not currently in use.

config SHADOW_CALL_STACK_VMAP
bool "Use virtually mapped shadow call stacks"
depends on SHADOW_CALL_STACK
help
Use virtually mapped shadow call stacks. Selecting this option
provides better stack exhaustion protection, but increases per-thread
memory consumption as a full page is allocated for each shadow stack.

config LTO
bool

config ARCH_SUPPORTS_LTO_CLANG
bool
help
An architecture should select this option if it supports:
- compiling with Clang,
- compiling inline assembly with Clang's integrated assembler,
- and linking with LLD.

config ARCH_SUPPORTS_THINLTO
bool
help
An architecture should select this if it supports Clang ThinLTO.

config THINLTO
bool "Use Clang's ThinLTO (EXPERIMENTAL)"
depends on LTO_CLANG && ARCH_SUPPORTS_THINLTO
default y
help
Use ThinLTO to speed up Link Time Optimization.

choice
prompt "Link-Time Optimization (LTO) (EXPERIMENTAL)"
default LTO_NONE
help
This option turns on Link-Time Optimization (LTO).

config LTO_NONE
bool "None"

config LTO_CLANG
bool "Use Clang's Link Time Optimization (LTO) (EXPERIMENTAL)"
depends on ARCH_SUPPORTS_LTO_CLANG
depends on !KASAN
depends on !FTRACE_MCOUNT_RECORD || HAVE_C_RECORDMCOUNT
depends on CC_IS_CLANG && CLANG_VERSION >= 100000 && LD_IS_LLD
select LTO
help
This option enables Clang's Link Time Optimization (LTO), which allows
the compiler to optimize the kernel globally at link time. If you
enable this option, the compiler generates LLVM IR instead of object
files, and the actual compilation from IR occurs at the LTO link step,
which may take several minutes.

endchoice

config CFI_CLANG
bool "Use Clang's Control Flow Integrity (CFI)"
depends on LTO_CLANG && KALLSYMS
help
This option enables Clang's Control Flow Integrity (CFI), which adds
runtime checking for indirect function calls.

config CFI_CLANG_SHADOW
bool "Use CFI shadow to speed up cross-module checks"
default y
depends on CFI_CLANG
help
If you select this option, the kernel builds a fast look-up table of
CFI check functions in loaded modules to reduce overhead.

config CFI_PERMISSIVE
bool "Use CFI in permissive mode"
depends on CFI_CLANG
help
When selected, Control Flow Integrity (CFI) violations result in a
warning instead of a kernel panic. This option is useful for finding
CFI violations during development.

config HAVE_ARCH_WITHIN_STACK_FRAMES
bool
help
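
The SHADOW_CALL_STACK help text above is brief about the mechanism itself: every instrumented function mirrors its return address onto a second, disjoint stack (reached through the reserved register x18 on arm64), so smashing the ordinary stack frame no longer redirects the return. A rough user-space model of the idea follows; it is illustrative only (the real protection is compiler-generated prologue/epilogue code, and it returns through the shadow copy rather than merely checking it), and every name in it is made up for the sketch.

#include <stdio.h>
#include <stdlib.h>

/* toy shadow call stack: return addresses are mirrored here on entry */
static void *shadow_stack[128];
static int shadow_top;

#define SCS_ENTER() \
	(shadow_stack[shadow_top++] = __builtin_return_address(0))

#define SCS_EXIT()                                                     \
	do {                                                           \
		if (shadow_stack[--shadow_top] !=                      \
		    __builtin_return_address(0))                       \
			abort(); /* return address was tampered with */ \
	} while (0)

static void protected_function(void)
{
	SCS_ENTER();
	/* ... body that might overflow a stack buffer ... */
	SCS_EXIT();
}

int main(void)
{
	protected_function();
	printf("return address verified against the shadow stack\n");
	return 0;
}
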
@ -104,3 +104,4 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
|
||||
dev_info(dev, "use %scoherent DMA ops\n",
|
||||
dev->dma_coherent ? "" : "non");
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(arch_setup_dma_ops);
|
||||
|
@ -226,6 +226,17 @@ usb_host_5v: fixed-regulator-usb_host_5v {
|
||||
gpio = <&gpio6 4 GPIO_ACTIVE_HIGH>; /* GPIO_164 */
|
||||
};
|
||||
|
||||
/* wl1251 wifi+bt module */
|
||||
wlan_en: fixed-regulator-wg7210_en {
|
||||
compatible = "regulator-fixed";
|
||||
regulator-name = "vwlan";
|
||||
regulator-min-microvolt = <1800000>;
|
||||
regulator-max-microvolt = <1800000>;
|
||||
startup-delay-us = <50000>;
|
||||
enable-active-high;
|
||||
gpio = <&gpio1 23 GPIO_ACTIVE_HIGH>;
|
||||
};
|
||||
|
||||
/* wg7210 (wifi+bt module) 32k clock buffer */
|
||||
wg7210_32k: fixed-regulator-wg7210_32k {
|
||||
compatible = "regulator-fixed";
|
||||
@ -522,9 +533,30 @@ &mmc2 {
|
||||
/*wp-gpios = <&gpio4 31 GPIO_ACTIVE_HIGH>;*/ /* GPIO_127 */
|
||||
};
|
||||
|
||||
/* mmc3 is probed using pdata-quirks to pass wl1251 card data */
|
||||
&mmc3 {
|
||||
status = "disabled";
|
||||
vmmc-supply = <&wlan_en>;
|
||||
|
||||
bus-width = <4>;
|
||||
non-removable;
|
||||
ti,non-removable;
|
||||
cap-power-off-card;
|
||||
|
||||
pinctrl-names = "default";
|
||||
pinctrl-0 = <&mmc3_pins>;
|
||||
|
||||
#address-cells = <1>;
|
||||
#size-cells = <0>;
|
||||
|
||||
wlan: wifi@1 {
|
||||
compatible = "ti,wl1251";
|
||||
|
||||
reg = <1>;
|
||||
|
||||
interrupt-parent = <&gpio1>;
|
||||
interrupts = <21 IRQ_TYPE_LEVEL_HIGH>; /* GPIO_21 */
|
||||
|
||||
ti,wl1251-has-eeprom;
|
||||
};
|
||||
};
|
||||
|
||||
/* bluetooth*/
|
||||
|
@ -222,7 +222,7 @@ &mmc1 {
|
||||
pinctrl-0 = <&mmc1_pins>;
|
||||
vmmc-supply = <&vmmc1>;
|
||||
vqmmc-supply = <&vsim>;
|
||||
cd-gpios = <&twl_gpio 0 GPIO_ACTIVE_HIGH>;
|
||||
cd-gpios = <&twl_gpio 0 GPIO_ACTIVE_LOW>;
|
||||
bus-width = <8>;
|
||||
};
|
||||
|
||||
|
@ -7,7 +7,6 @@
|
||||
#include <linux/clk.h>
|
||||
#include <linux/davinci_emac.h>
|
||||
#include <linux/gpio.h>
|
||||
#include <linux/gpio/machine.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/of_platform.h>
|
||||
@ -311,118 +310,15 @@ static void __init omap3_logicpd_torpedo_init(void)
|
||||
}
|
||||
|
||||
/* omap3pandora legacy devices */
|
||||
#define PANDORA_WIFI_IRQ_GPIO 21
|
||||
#define PANDORA_WIFI_NRESET_GPIO 23
|
||||
|
||||
static struct platform_device pandora_backlight = {
|
||||
.name = "pandora-backlight",
|
||||
.id = -1,
|
||||
};
|
||||
|
||||
static struct regulator_consumer_supply pandora_vmmc3_supply[] = {
|
||||
REGULATOR_SUPPLY("vmmc", "omap_hsmmc.2"),
|
||||
};
|
||||
|
||||
static struct regulator_init_data pandora_vmmc3 = {
|
||||
.constraints = {
|
||||
.valid_ops_mask = REGULATOR_CHANGE_STATUS,
|
||||
},
|
||||
.num_consumer_supplies = ARRAY_SIZE(pandora_vmmc3_supply),
|
||||
.consumer_supplies = pandora_vmmc3_supply,
|
||||
};
|
||||
|
||||
static struct fixed_voltage_config pandora_vwlan = {
|
||||
.supply_name = "vwlan",
|
||||
.microvolts = 1800000, /* 1.8V */
|
||||
.startup_delay = 50000, /* 50ms */
|
||||
.init_data = &pandora_vmmc3,
|
||||
};
|
||||
|
||||
static struct platform_device pandora_vwlan_device = {
|
||||
.name = "reg-fixed-voltage",
|
||||
.id = 1,
|
||||
.dev = {
|
||||
.platform_data = &pandora_vwlan,
|
||||
},
|
||||
};
|
||||
|
||||
static struct gpiod_lookup_table pandora_vwlan_gpiod_table = {
|
||||
.dev_id = "reg-fixed-voltage.1",
|
||||
.table = {
|
||||
/*
|
||||
* As this is a low GPIO number it should be at the first
|
||||
* GPIO bank.
|
||||
*/
|
||||
GPIO_LOOKUP("gpio-0-31", PANDORA_WIFI_NRESET_GPIO,
|
||||
NULL, GPIO_ACTIVE_HIGH),
|
||||
{ },
|
||||
},
|
||||
};
|
||||
|
||||
static void pandora_wl1251_init_card(struct mmc_card *card)
|
||||
{
|
||||
/*
|
||||
* We have TI wl1251 attached to MMC3. Pass this information to
|
||||
* SDIO core because it can't be probed by normal methods.
|
||||
*/
|
||||
if (card->type == MMC_TYPE_SDIO || card->type == MMC_TYPE_SD_COMBO) {
|
||||
card->quirks |= MMC_QUIRK_NONSTD_SDIO;
|
||||
card->cccr.wide_bus = 1;
|
||||
card->cis.vendor = 0x104c;
|
||||
card->cis.device = 0x9066;
|
||||
card->cis.blksize = 512;
|
||||
card->cis.max_dtr = 24000000;
|
||||
card->ocr = 0x80;
|
||||
}
|
||||
}
|
||||
|
||||
static struct omap2_hsmmc_info pandora_mmc3[] = {
|
||||
{
|
||||
.mmc = 3,
|
||||
.caps = MMC_CAP_4_BIT_DATA | MMC_CAP_POWER_OFF_CARD,
|
||||
.init_card = pandora_wl1251_init_card,
|
||||
},
|
||||
{} /* Terminator */
|
||||
};
|
||||
|
||||
static void __init pandora_wl1251_init(void)
|
||||
{
|
||||
struct wl1251_platform_data pandora_wl1251_pdata;
|
||||
int ret;
|
||||
|
||||
memset(&pandora_wl1251_pdata, 0, sizeof(pandora_wl1251_pdata));
|
||||
|
||||
pandora_wl1251_pdata.power_gpio = -1;
|
||||
|
||||
ret = gpio_request_one(PANDORA_WIFI_IRQ_GPIO, GPIOF_IN, "wl1251 irq");
|
||||
if (ret < 0)
|
||||
goto fail;
|
||||
|
||||
pandora_wl1251_pdata.irq = gpio_to_irq(PANDORA_WIFI_IRQ_GPIO);
|
||||
if (pandora_wl1251_pdata.irq < 0)
|
||||
goto fail_irq;
|
||||
|
||||
pandora_wl1251_pdata.use_eeprom = true;
|
||||
ret = wl1251_set_platform_data(&pandora_wl1251_pdata);
|
||||
if (ret < 0)
|
||||
goto fail_irq;
|
||||
|
||||
return;
|
||||
|
||||
fail_irq:
|
||||
gpio_free(PANDORA_WIFI_IRQ_GPIO);
|
||||
fail:
|
||||
pr_err("wl1251 board initialisation failed\n");
|
||||
}
|
||||
|
||||
static void __init omap3_pandora_legacy_init(void)
|
||||
{
|
||||
platform_device_register(&pandora_backlight);
|
||||
gpiod_add_lookup_table(&pandora_vwlan_gpiod_table);
|
||||
platform_device_register(&pandora_vwlan_device);
|
||||
omap_hsmmc_init(pandora_mmc3);
|
||||
omap_hsmmc_late_init(pandora_mmc3);
|
||||
pandora_wl1251_init();
|
||||
}
|
||||
#endif /* CONFIG_ARCH_OMAP3 */
|
||||
|
||||
|
@ -209,3 +209,4 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
|
||||
if (!dev->archdata.dma_coherent)
|
||||
set_dma_ops(dev, &arm_nommu_dma_ops);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(arch_setup_dma_ops);
|
||||
|
@ -2320,6 +2320,7 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
|
||||
#endif
|
||||
dev->archdata.dma_ops_setup = true;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(arch_setup_dma_ops);
|
||||
|
||||
void arch_teardown_dma_ops(struct device *dev)
|
||||
{
|
||||
|
@ -66,6 +66,9 @@ config ARM64
|
||||
select ARCH_USE_QUEUED_RWLOCKS
|
||||
select ARCH_USE_QUEUED_SPINLOCKS
|
||||
select ARCH_SUPPORTS_MEMORY_FAILURE
|
||||
select ARCH_SUPPORTS_SHADOW_CALL_STACK if CC_HAVE_SHADOW_CALL_STACK
|
||||
select ARCH_SUPPORTS_LTO_CLANG
|
||||
select ARCH_SUPPORTS_THINLTO
|
||||
select ARCH_SUPPORTS_ATOMIC_RMW
|
||||
select ARCH_SUPPORTS_INT128 if GCC_VERSION >= 50000 || CC_IS_CLANG
|
||||
select ARCH_SUPPORTS_NUMA_BALANCING
|
||||
@ -125,7 +128,7 @@ config ARM64
|
||||
select HAVE_ARCH_KGDB
|
||||
select HAVE_ARCH_MMAP_RND_BITS
|
||||
select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
|
||||
select HAVE_ARCH_PREL32_RELOCATIONS
|
||||
select HAVE_ARCH_PREL32_RELOCATIONS if !LTO_CLANG
|
||||
select HAVE_ARCH_SECCOMP_FILTER
|
||||
select HAVE_ARCH_STACKLEAK
|
||||
select HAVE_ARCH_THREAD_STRUCT_WHITELIST
|
||||
@ -148,7 +151,7 @@ config ARM64
|
||||
select HAVE_FTRACE_MCOUNT_RECORD
|
||||
select HAVE_FUNCTION_TRACER
|
||||
select HAVE_FUNCTION_ERROR_INJECTION
|
||||
select HAVE_FUNCTION_GRAPH_TRACER
|
||||
select HAVE_FUNCTION_GRAPH_TRACER if !SHADOW_CALL_STACK
|
||||
select HAVE_GCC_PLUGINS
|
||||
select HAVE_HW_BREAKPOINT if PERF_EVENTS
|
||||
select HAVE_IRQ_TIME_ACCOUNTING
|
||||
@ -990,6 +993,10 @@ config ARCH_MEMORY_PROBE
|
||||
def_bool y
|
||||
depends on MEMORY_HOTPLUG
|
||||
|
||||
# Supported by clang >= 7.0
|
||||
config CC_HAVE_SHADOW_CALL_STACK
|
||||
def_bool $(cc-option, -fsanitize=shadow-call-stack -ffixed-x18)
|
||||
|
||||
config SECCOMP
|
||||
bool "Enable seccomp to safely compute untrusted bytecode"
|
||||
---help---
|
||||
|
@ -78,6 +78,10 @@ stack_protector_prepare: prepare0
|
||||
include/generated/asm-offsets.h))
|
||||
endif
|
||||
|
||||
ifeq ($(CONFIG_SHADOW_CALL_STACK), y)
|
||||
KBUILD_CFLAGS += -ffixed-x18
|
||||
endif
|
||||
|
||||
ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
|
||||
KBUILD_CPPFLAGS += -mbig-endian
|
||||
CHECKFLAGS += -D__AARCH64EB__
|
||||
|
@ -18,8 +18,8 @@
|
||||
|
||||
/ {
|
||||
compatible = "samsung,exynos5433";
|
||||
#address-cells = <1>;
|
||||
#size-cells = <1>;
|
||||
#address-cells = <2>;
|
||||
#size-cells = <2>;
|
||||
|
||||
interrupt-parent = <&gic>;
|
||||
|
||||
@ -311,7 +311,7 @@ soc: soc {
|
||||
compatible = "simple-bus";
|
||||
#address-cells = <1>;
|
||||
#size-cells = <1>;
|
||||
ranges;
|
||||
ranges = <0x0 0x0 0x0 0x18000000>;
|
||||
|
||||
chipid@10000000 {
|
||||
compatible = "samsung,exynos4210-chipid";
|
||||
|
@ -12,8 +12,8 @@
|
||||
/ {
|
||||
compatible = "samsung,exynos7";
|
||||
interrupt-parent = <&gic>;
|
||||
#address-cells = <1>;
|
||||
#size-cells = <1>;
|
||||
#address-cells = <2>;
|
||||
#size-cells = <2>;
|
||||
|
||||
aliases {
|
||||
pinctrl0 = &pinctrl_alive;
|
||||
@ -98,7 +98,7 @@ soc: soc {
|
||||
compatible = "simple-bus";
|
||||
#address-cells = <1>;
|
||||
#size-cells = <1>;
|
||||
ranges;
|
||||
ranges = <0 0 0 0x18000000>;
|
||||
|
||||
chipid@10000000 {
|
||||
compatible = "samsung,exynos4210-chipid";
|
||||
|
@ -309,9 +309,8 @@ vdd_12v_pcie: regulator@3 {
|
||||
regulator-name = "VDD_12V";
|
||||
regulator-min-microvolt = <1200000>;
|
||||
regulator-max-microvolt = <1200000>;
|
||||
gpio = <&gpio TEGRA194_MAIN_GPIO(A, 1) GPIO_ACTIVE_LOW>;
|
||||
gpio = <&gpio TEGRA194_MAIN_GPIO(A, 1) GPIO_ACTIVE_HIGH>;
|
||||
regulator-boot-on;
|
||||
enable-active-low;
|
||||
};
|
||||
};
|
||||
};
|
||||
|
@ -1612,7 +1612,7 @@ vdd_hdmi: regulator@10 {
|
||||
regulator-name = "VDD_HDMI_5V0";
|
||||
regulator-min-microvolt = <5000000>;
|
||||
regulator-max-microvolt = <5000000>;
|
||||
gpio = <&exp1 12 GPIO_ACTIVE_LOW>;
|
||||
gpio = <&exp1 12 GPIO_ACTIVE_HIGH>;
|
||||
enable-active-high;
|
||||
vin-supply = <&vdd_5v0_sys>;
|
||||
};
|
||||
|
@ -9,10 +9,12 @@ CONFIG_PSI=y
|
||||
CONFIG_IKCONFIG=y
|
||||
CONFIG_IKCONFIG_PROC=y
|
||||
CONFIG_IKHEADERS=m
|
||||
CONFIG_UCLAMP_TASK=y
|
||||
CONFIG_MEMCG=y
|
||||
CONFIG_MEMCG_SWAP=y
|
||||
CONFIG_BLK_CGROUP=y
|
||||
CONFIG_RT_GROUP_SCHED=y
|
||||
CONFIG_UCLAMP_TASK_GROUP=y
|
||||
CONFIG_CGROUP_FREEZER=y
|
||||
CONFIG_CPUSETS=y
|
||||
CONFIG_CGROUP_CPUACCT=y
|
||||
@ -70,6 +72,9 @@ CONFIG_ARM64_CRYPTO=y
|
||||
CONFIG_CRYPTO_SHA2_ARM64_CE=y
|
||||
CONFIG_CRYPTO_AES_ARM64_CE_BLK=y
|
||||
CONFIG_KPROBES=y
|
||||
CONFIG_SHADOW_CALL_STACK=y
|
||||
CONFIG_LTO_CLANG=y
|
||||
CONFIG_CFI_CLANG=y
|
||||
CONFIG_MODULES=y
|
||||
CONFIG_MODULE_UNLOAD=y
|
||||
CONFIG_MODVERSIONS=y
|
||||
@ -186,9 +191,11 @@ CONFIG_NET_CLS_BPF=y
|
||||
CONFIG_NET_EMATCH=y
|
||||
CONFIG_NET_EMATCH_U32=y
|
||||
CONFIG_NET_CLS_ACT=y
|
||||
CONFIG_VSOCKETS=y
|
||||
CONFIG_VIRTIO_VSOCKETS=y
|
||||
CONFIG_VSOCKETS=m
|
||||
CONFIG_VIRTIO_VSOCKETS=m
|
||||
CONFIG_BPF_JIT=y
|
||||
CONFIG_CAN=m
|
||||
CONFIG_CAN_VCAN=m
|
||||
CONFIG_BT=y
|
||||
CONFIG_CFG80211=y
|
||||
# CONFIG_CFG80211_DEFAULT_PS is not set
|
||||
@ -204,11 +211,12 @@ CONFIG_FW_LOADER_USER_HELPER_FALLBACK=y
|
||||
# CONFIG_FW_CACHE is not set
|
||||
# CONFIG_ALLOW_DEV_COREDUMP is not set
|
||||
CONFIG_DEBUG_DEVRES=y
|
||||
CONFIG_GNSS=y
|
||||
CONFIG_ZRAM=y
|
||||
CONFIG_BLK_DEV_LOOP=y
|
||||
CONFIG_BLK_DEV_RAM=y
|
||||
CONFIG_BLK_DEV_RAM_SIZE=8192
|
||||
CONFIG_VIRTIO_BLK=y
|
||||
CONFIG_VIRTIO_BLK=m
|
||||
CONFIG_UID_SYS_STATS=y
|
||||
CONFIG_SCSI=y
|
||||
# CONFIG_SCSI_PROC_FS is not set
|
||||
@ -227,7 +235,7 @@ CONFIG_DM_VERITY_FEC=y
|
||||
CONFIG_DM_BOW=y
|
||||
CONFIG_NETDEVICES=y
|
||||
CONFIG_TUN=y
|
||||
CONFIG_VIRTIO_NET=y
|
||||
CONFIG_VIRTIO_NET=m
|
||||
# CONFIG_ETHERNET is not set
|
||||
CONFIG_PHYLIB=y
|
||||
CONFIG_PPP=y
|
||||
@ -261,7 +269,7 @@ CONFIG_USB_USBNET=y
|
||||
# CONFIG_WLAN_VENDOR_TI is not set
|
||||
# CONFIG_WLAN_VENDOR_ZYDAS is not set
|
||||
# CONFIG_WLAN_VENDOR_QUANTENNA is not set
|
||||
CONFIG_VIRT_WIFI=y
|
||||
CONFIG_VIRT_WIFI=m
|
||||
CONFIG_INPUT_EVDEV=y
|
||||
CONFIG_KEYBOARD_GPIO=y
|
||||
# CONFIG_INPUT_MOUSE is not set
|
||||
@ -279,9 +287,9 @@ CONFIG_SERIAL_OF_PLATFORM=m
|
||||
CONFIG_SERIAL_AMBA_PL011=y
|
||||
CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
|
||||
CONFIG_SERIAL_DEV_BUS=y
|
||||
CONFIG_VIRTIO_CONSOLE=y
|
||||
CONFIG_VIRTIO_CONSOLE=m
|
||||
CONFIG_HW_RANDOM=y
|
||||
CONFIG_HW_RANDOM_VIRTIO=y
|
||||
CONFIG_HW_RANDOM_VIRTIO=m
|
||||
# CONFIG_HW_RANDOM_CAVIUM is not set
|
||||
# CONFIG_DEVPORT is not set
|
||||
# CONFIG_I2C_COMPAT is not set
|
||||
@ -301,12 +309,13 @@ CONFIG_WATCHDOG=y
|
||||
CONFIG_MFD_ACT8945A=y
|
||||
CONFIG_MFD_SYSCON=y
|
||||
CONFIG_REGULATOR=y
|
||||
CONFIG_REGULATOR_FIXED_VOLTAGE=y
|
||||
CONFIG_MEDIA_CAMERA_SUPPORT=y
|
||||
CONFIG_MEDIA_CONTROLLER=y
|
||||
# CONFIG_VGA_ARB is not set
|
||||
CONFIG_DRM=y
|
||||
# CONFIG_DRM_FBDEV_EMULATION is not set
|
||||
CONFIG_DRM_VIRTIO_GPU=y
|
||||
CONFIG_DRM_VIRTIO_GPU=m
|
||||
CONFIG_BACKLIGHT_CLASS_DEVICE=y
|
||||
CONFIG_SOUND=y
|
||||
CONFIG_SND=y
|
||||
@ -325,6 +334,8 @@ CONFIG_HID_ELECOM=y
|
||||
CONFIG_HID_MAGICMOUSE=y
|
||||
CONFIG_HID_MICROSOFT=y
|
||||
CONFIG_HID_MULTITOUCH=y
|
||||
CONFIG_HID_PLANTRONICS=y
|
||||
CONFIG_HID_SONY=y
|
||||
CONFIG_USB_HIDDEV=y
|
||||
CONFIG_USB=y
|
||||
CONFIG_USB_OTG=y
|
||||
@ -347,14 +358,15 @@ CONFIG_LEDS_TRIGGERS=y
|
||||
CONFIG_EDAC=y
|
||||
CONFIG_RTC_CLASS=y
|
||||
# CONFIG_RTC_SYSTOHC is not set
|
||||
CONFIG_RTC_DRV_TEST=m
|
||||
CONFIG_RTC_DRV_PL030=y
|
||||
CONFIG_RTC_DRV_PL031=y
|
||||
CONFIG_DMADEVICES=y
|
||||
CONFIG_UIO=y
|
||||
CONFIG_VIRTIO_PCI=y
|
||||
CONFIG_VIRTIO_PCI=m
|
||||
# CONFIG_VIRTIO_PCI_LEGACY is not set
|
||||
CONFIG_VIRTIO_INPUT=y
|
||||
CONFIG_VIRTIO_MMIO=y
|
||||
CONFIG_VIRTIO_INPUT=m
|
||||
CONFIG_VIRTIO_MMIO=m
|
||||
CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y
|
||||
CONFIG_STAGING=y
|
||||
CONFIG_ASHMEM=y
|
||||
@ -373,6 +385,7 @@ CONFIG_DEVFREQ_GOV_POWERSAVE=y
|
||||
CONFIG_DEVFREQ_GOV_USERSPACE=y
|
||||
CONFIG_DEVFREQ_GOV_PASSIVE=y
|
||||
CONFIG_EXTCON=y
|
||||
CONFIG_IIO=y
|
||||
CONFIG_PWM=y
|
||||
CONFIG_QCOM_PDC=y
|
||||
CONFIG_GENERIC_PHY=y
|
||||
@ -461,7 +474,6 @@ CONFIG_CRYPTO_MD4=y
|
||||
CONFIG_CRYPTO_LZ4=y
|
||||
CONFIG_CRYPTO_ZSTD=y
|
||||
CONFIG_CRYPTO_ANSI_CPRNG=y
|
||||
CONFIG_CRYPTO_DEV_VIRTIO=y
|
||||
CONFIG_CRC_CCITT=y
|
||||
CONFIG_CRC8=y
|
||||
CONFIG_XZ_DEC=y
|
||||
|
@ -28,6 +28,13 @@ struct sha1_ce_state {
|
||||
asmlinkage void sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
|
||||
int blocks);
|
||||
|
||||
static void __sha1_ce_transform(struct sha1_state *sst, u8 const *src,
|
||||
int blocks)
|
||||
{
|
||||
return sha1_ce_transform(container_of(sst, struct sha1_ce_state, sst),
|
||||
src, blocks);
|
||||
}
|
||||
|
||||
const u32 sha1_ce_offsetof_count = offsetof(struct sha1_ce_state, sst.count);
|
||||
const u32 sha1_ce_offsetof_finalize = offsetof(struct sha1_ce_state, finalize);
|
||||
|
||||
@ -41,8 +48,7 @@ static int sha1_ce_update(struct shash_desc *desc, const u8 *data,
|
||||
|
||||
sctx->finalize = 0;
|
||||
kernel_neon_begin();
|
||||
sha1_base_do_update(desc, data, len,
|
||||
(sha1_block_fn *)sha1_ce_transform);
|
||||
sha1_base_do_update(desc, data, len, __sha1_ce_transform);
|
||||
kernel_neon_end();
|
||||
|
||||
return 0;
|
||||
@ -64,10 +70,9 @@ static int sha1_ce_finup(struct shash_desc *desc, const u8 *data,
|
||||
sctx->finalize = finalize;
|
||||
|
||||
kernel_neon_begin();
|
||||
sha1_base_do_update(desc, data, len,
|
||||
(sha1_block_fn *)sha1_ce_transform);
|
||||
sha1_base_do_update(desc, data, len, __sha1_ce_transform);
|
||||
if (!finalize)
|
||||
sha1_base_do_finalize(desc, (sha1_block_fn *)sha1_ce_transform);
|
||||
sha1_base_do_finalize(desc, __sha1_ce_transform);
|
||||
kernel_neon_end();
|
||||
return sha1_base_finish(desc, out);
|
||||
}
|
||||
@ -81,7 +86,7 @@ static int sha1_ce_final(struct shash_desc *desc, u8 *out)
|
||||
|
||||
sctx->finalize = 0;
|
||||
kernel_neon_begin();
|
||||
sha1_base_do_finalize(desc, (sha1_block_fn *)sha1_ce_transform);
|
||||
sha1_base_do_finalize(desc, __sha1_ce_transform);
|
||||
kernel_neon_end();
|
||||
return sha1_base_finish(desc, out);
|
||||
}
|
||||
|
@ -28,6 +28,13 @@ struct sha256_ce_state {
|
||||
asmlinkage void sha2_ce_transform(struct sha256_ce_state *sst, u8 const *src,
|
||||
int blocks);
|
||||
|
||||
static void __sha2_ce_transform(struct sha256_state *sst, u8 const *src,
|
||||
int blocks)
|
||||
{
|
||||
return sha2_ce_transform(container_of(sst, struct sha256_ce_state, sst),
|
||||
src, blocks);
|
||||
}
|
||||
|
||||
const u32 sha256_ce_offsetof_count = offsetof(struct sha256_ce_state,
|
||||
sst.count);
|
||||
const u32 sha256_ce_offsetof_finalize = offsetof(struct sha256_ce_state,
|
||||
@ -35,6 +42,12 @@ const u32 sha256_ce_offsetof_finalize = offsetof(struct sha256_ce_state,
|
||||
|
||||
asmlinkage void sha256_block_data_order(u32 *digest, u8 const *src, int blocks);
|
||||
|
||||
static void __sha256_block_data_order(struct sha256_state *sst, u8 const *src,
|
||||
int blocks)
|
||||
{
|
||||
return sha256_block_data_order(sst->state, src, blocks);
|
||||
}
|
||||
|
||||
static int sha256_ce_update(struct shash_desc *desc, const u8 *data,
|
||||
unsigned int len)
|
||||
{
|
||||
@ -42,12 +55,11 @@ static int sha256_ce_update(struct shash_desc *desc, const u8 *data,
|
||||
|
||||
if (!crypto_simd_usable())
|
||||
return sha256_base_do_update(desc, data, len,
|
||||
(sha256_block_fn *)sha256_block_data_order);
|
||||
__sha256_block_data_order);
|
||||
|
||||
sctx->finalize = 0;
|
||||
kernel_neon_begin();
|
||||
sha256_base_do_update(desc, data, len,
|
||||
(sha256_block_fn *)sha2_ce_transform);
|
||||
sha256_base_do_update(desc, data, len, __sha2_ce_transform);
|
||||
kernel_neon_end();
|
||||
|
||||
return 0;
|
||||
@ -62,9 +74,8 @@ static int sha256_ce_finup(struct shash_desc *desc, const u8 *data,
|
||||
if (!crypto_simd_usable()) {
|
||||
if (len)
|
||||
sha256_base_do_update(desc, data, len,
|
||||
(sha256_block_fn *)sha256_block_data_order);
|
||||
sha256_base_do_finalize(desc,
|
||||
(sha256_block_fn *)sha256_block_data_order);
|
||||
__sha256_block_data_order);
|
||||
sha256_base_do_finalize(desc, __sha256_block_data_order);
|
||||
return sha256_base_finish(desc, out);
|
||||
}
|
||||
|
||||
@ -75,11 +86,9 @@ static int sha256_ce_finup(struct shash_desc *desc, const u8 *data,
|
||||
sctx->finalize = finalize;
|
||||
|
||||
kernel_neon_begin();
|
||||
sha256_base_do_update(desc, data, len,
|
||||
(sha256_block_fn *)sha2_ce_transform);
|
||||
sha256_base_do_update(desc, data, len, __sha2_ce_transform);
|
||||
if (!finalize)
|
||||
sha256_base_do_finalize(desc,
|
||||
(sha256_block_fn *)sha2_ce_transform);
|
||||
sha256_base_do_finalize(desc, __sha2_ce_transform);
|
||||
kernel_neon_end();
|
||||
return sha256_base_finish(desc, out);
|
||||
}
|
||||
@ -89,14 +98,13 @@ static int sha256_ce_final(struct shash_desc *desc, u8 *out)
|
||||
struct sha256_ce_state *sctx = shash_desc_ctx(desc);
|
||||
|
||||
if (!crypto_simd_usable()) {
|
||||
sha256_base_do_finalize(desc,
|
||||
(sha256_block_fn *)sha256_block_data_order);
|
||||
sha256_base_do_finalize(desc, __sha256_block_data_order);
|
||||
return sha256_base_finish(desc, out);
|
||||
}
|
||||
|
||||
sctx->finalize = 0;
|
||||
kernel_neon_begin();
|
||||
sha256_base_do_finalize(desc, (sha256_block_fn *)sha2_ce_transform);
|
||||
sha256_base_do_finalize(desc, __sha2_ce_transform);
|
||||
kernel_neon_end();
|
||||
return sha256_base_finish(desc, out);
|
||||
}
|
||||
|
@ -27,14 +27,26 @@ asmlinkage void sha256_block_data_order(u32 *digest, const void *data,
|
||||
unsigned int num_blks);
|
||||
EXPORT_SYMBOL(sha256_block_data_order);
|
||||
|
||||
static void __sha256_block_data_order(struct sha256_state *sst, u8 const *src,
|
||||
int blocks)
|
||||
{
|
||||
return sha256_block_data_order(sst->state, src, blocks);
|
||||
}
|
||||
|
||||
asmlinkage void sha256_block_neon(u32 *digest, const void *data,
|
||||
unsigned int num_blks);
|
||||
|
||||
static void __sha256_block_neon(struct sha256_state *sst, u8 const *src,
|
||||
int blocks)
|
||||
{
|
||||
return sha256_block_neon(sst->state, src, blocks);
|
||||
}
|
||||
|
||||
static int crypto_sha256_arm64_update(struct shash_desc *desc, const u8 *data,
|
||||
unsigned int len)
|
||||
{
|
||||
return sha256_base_do_update(desc, data, len,
|
||||
(sha256_block_fn *)sha256_block_data_order);
|
||||
__sha256_block_data_order);
|
||||
}
|
||||
|
||||
static int crypto_sha256_arm64_finup(struct shash_desc *desc, const u8 *data,
|
||||
@ -42,9 +54,8 @@ static int crypto_sha256_arm64_finup(struct shash_desc *desc, const u8 *data,
|
||||
{
|
||||
if (len)
|
||||
sha256_base_do_update(desc, data, len,
|
||||
(sha256_block_fn *)sha256_block_data_order);
|
||||
sha256_base_do_finalize(desc,
|
||||
(sha256_block_fn *)sha256_block_data_order);
|
||||
__sha256_block_data_order);
|
||||
sha256_base_do_finalize(desc, __sha256_block_data_order);
|
||||
|
||||
return sha256_base_finish(desc, out);
|
||||
}
|
||||
@ -87,7 +98,7 @@ static int sha256_update_neon(struct shash_desc *desc, const u8 *data,
|
||||
|
||||
if (!crypto_simd_usable())
|
||||
return sha256_base_do_update(desc, data, len,
|
||||
(sha256_block_fn *)sha256_block_data_order);
|
||||
__sha256_block_data_order);
|
||||
|
||||
while (len > 0) {
|
||||
unsigned int chunk = len;
|
||||
@ -103,8 +114,7 @@ static int sha256_update_neon(struct shash_desc *desc, const u8 *data,
|
||||
sctx->count % SHA256_BLOCK_SIZE;
|
||||
|
||||
kernel_neon_begin();
|
||||
sha256_base_do_update(desc, data, chunk,
|
||||
(sha256_block_fn *)sha256_block_neon);
|
||||
sha256_base_do_update(desc, data, chunk, __sha256_block_neon);
|
||||
kernel_neon_end();
|
||||
data += chunk;
|
||||
len -= chunk;
|
||||
@ -118,15 +128,13 @@ static int sha256_finup_neon(struct shash_desc *desc, const u8 *data,
|
||||
if (!crypto_simd_usable()) {
|
||||
if (len)
|
||||
sha256_base_do_update(desc, data, len,
|
||||
(sha256_block_fn *)sha256_block_data_order);
|
||||
sha256_base_do_finalize(desc,
|
||||
(sha256_block_fn *)sha256_block_data_order);
|
||||
__sha256_block_data_order);
|
||||
sha256_base_do_finalize(desc, __sha256_block_data_order);
|
||||
} else {
|
||||
if (len)
|
||||
sha256_update_neon(desc, data, len);
|
||||
kernel_neon_begin();
|
||||
sha256_base_do_finalize(desc,
|
||||
(sha256_block_fn *)sha256_block_neon);
|
||||
sha256_base_do_finalize(desc, __sha256_block_neon);
|
||||
kernel_neon_end();
|
||||
}
|
||||
return sha256_base_finish(desc, out);
|
||||
|
@ -29,16 +29,21 @@ asmlinkage void sha512_ce_transform(struct sha512_state *sst, u8 const *src,
|
||||
|
||||
asmlinkage void sha512_block_data_order(u64 *digest, u8 const *src, int blocks);
|
||||
|
||||
static void __sha512_block_data_order(struct sha512_state *sst, u8 const *src,
|
||||
int blocks)
|
||||
{
|
||||
return sha512_block_data_order(sst->state, src, blocks);
|
||||
}
|
||||
|
||||
static int sha512_ce_update(struct shash_desc *desc, const u8 *data,
|
||||
unsigned int len)
|
||||
{
|
||||
if (!crypto_simd_usable())
|
||||
return sha512_base_do_update(desc, data, len,
|
||||
(sha512_block_fn *)sha512_block_data_order);
|
||||
__sha512_block_data_order);
|
||||
|
||||
kernel_neon_begin();
|
||||
sha512_base_do_update(desc, data, len,
|
||||
(sha512_block_fn *)sha512_ce_transform);
|
||||
sha512_base_do_update(desc, data, len, sha512_ce_transform);
|
||||
kernel_neon_end();
|
||||
|
||||
return 0;
|
||||
@ -50,16 +55,14 @@ static int sha512_ce_finup(struct shash_desc *desc, const u8 *data,
|
||||
if (!crypto_simd_usable()) {
|
||||
if (len)
|
||||
sha512_base_do_update(desc, data, len,
|
||||
(sha512_block_fn *)sha512_block_data_order);
|
||||
sha512_base_do_finalize(desc,
|
||||
(sha512_block_fn *)sha512_block_data_order);
|
||||
__sha512_block_data_order);
|
||||
sha512_base_do_finalize(desc, __sha512_block_data_order);
|
||||
return sha512_base_finish(desc, out);
|
||||
}
|
||||
|
||||
kernel_neon_begin();
|
||||
sha512_base_do_update(desc, data, len,
|
||||
(sha512_block_fn *)sha512_ce_transform);
|
||||
sha512_base_do_finalize(desc, (sha512_block_fn *)sha512_ce_transform);
|
||||
sha512_base_do_update(desc, data, len, sha512_ce_transform);
|
||||
sha512_base_do_finalize(desc, sha512_ce_transform);
|
||||
kernel_neon_end();
|
||||
return sha512_base_finish(desc, out);
|
||||
}
|
||||
@ -67,13 +70,12 @@ static int sha512_ce_finup(struct shash_desc *desc, const u8 *data,
|
||||
static int sha512_ce_final(struct shash_desc *desc, u8 *out)
|
||||
{
|
||||
if (!crypto_simd_usable()) {
|
||||
sha512_base_do_finalize(desc,
|
||||
(sha512_block_fn *)sha512_block_data_order);
|
||||
sha512_base_do_finalize(desc, __sha512_block_data_order);
|
||||
return sha512_base_finish(desc, out);
|
||||
}
|
||||
|
||||
kernel_neon_begin();
|
||||
sha512_base_do_finalize(desc, (sha512_block_fn *)sha512_ce_transform);
|
||||
sha512_base_do_finalize(desc, sha512_ce_transform);
|
||||
kernel_neon_end();
|
||||
return sha512_base_finish(desc, out);
|
||||
}
|
||||
|
@ -20,15 +20,21 @@ MODULE_LICENSE("GPL v2");
|
||||
MODULE_ALIAS_CRYPTO("sha384");
|
||||
MODULE_ALIAS_CRYPTO("sha512");
|
||||
|
||||
asmlinkage void sha512_block_data_order(u32 *digest, const void *data,
|
||||
asmlinkage void sha512_block_data_order(u64 *digest, const void *data,
|
||||
unsigned int num_blks);
|
||||
EXPORT_SYMBOL(sha512_block_data_order);
|
||||
|
||||
static void __sha512_block_data_order(struct sha512_state *sst, u8 const *src,
|
||||
int blocks)
|
||||
{
|
||||
return sha512_block_data_order(sst->state, src, blocks);
|
||||
}
|
||||
|
||||
static int sha512_update(struct shash_desc *desc, const u8 *data,
|
||||
unsigned int len)
|
||||
{
|
||||
return sha512_base_do_update(desc, data, len,
|
||||
(sha512_block_fn *)sha512_block_data_order);
|
||||
__sha512_block_data_order);
|
||||
}
|
||||
|
||||
static int sha512_finup(struct shash_desc *desc, const u8 *data,
|
||||
@ -36,9 +42,8 @@ static int sha512_finup(struct shash_desc *desc, const u8 *data,
|
||||
{
|
||||
if (len)
|
||||
sha512_base_do_update(desc, data, len,
|
||||
(sha512_block_fn *)sha512_block_data_order);
|
||||
sha512_base_do_finalize(desc,
|
||||
(sha512_block_fn *)sha512_block_data_order);
|
||||
__sha512_block_data_order);
|
||||
sha512_base_do_finalize(desc, __sha512_block_data_order);
|
||||
|
||||
return sha512_base_finish(desc, out);
|
||||
}
|
||||
|
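
The sha1/sha256/sha512 glue changes above all follow one pattern: the assembly transform takes a driver-private state type and used to be cast to the generic sha*_block_fn callback type at the call site. An indirect call through a pointer whose type does not match the callee's real prototype is exactly what Clang's CFI rejects, so each driver gains a tiny wrapper with the expected signature instead. A plain-C sketch of the same pattern; all names below are illustrative, not kernel symbols.

#include <stdio.h>

struct sha_state    { unsigned int state[8]; };
struct sha_ce_state { struct sha_state sst; unsigned int finalize; };

typedef void block_fn(struct sha_state *sst, const unsigned char *src,
		      int blocks);

/* stand-in for the asm transform, which takes the container type */
static void ce_transform(struct sha_ce_state *ctx, const unsigned char *src,
			 int blocks)
{
	for (int i = 0; i < blocks; i++)
		ctx->sst.state[0] += src[i];
}

/* correctly typed wrapper, analogous to __sha2_ce_transform() above */
static void ce_transform_wrapper(struct sha_state *sst,
				 const unsigned char *src, int blocks)
{
	/* sst is the first member of struct sha_ce_state */
	ce_transform((struct sha_ce_state *)sst, src, blocks);
}

static void do_update(block_fn *fn, struct sha_state *sst,
		      const unsigned char *src, int blocks)
{
	fn(sst, src, blocks);	/* indirect call: types must match under CFI */
}

int main(void)
{
	static struct sha_ce_state ctx;
	const unsigned char data[4] = { 1, 2, 3, 4 };

	do_update(ce_transform_wrapper, &ctx.sst, data, 4);
	printf("state[0] = %u\n", ctx.sst.state[0]);
	return 0;
}
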
@ -35,13 +35,16 @@ void apply_alternatives_module(void *start, size_t length);
|
||||
static inline void apply_alternatives_module(void *start, size_t length) { }
|
||||
#endif
|
||||
|
||||
#define ALTINSTR_ENTRY(feature,cb) \
|
||||
#define ALTINSTR_ENTRY(feature) \
|
||||
" .word 661b - .\n" /* label */ \
|
||||
" .if " __stringify(cb) " == 0\n" \
|
||||
" .word 663f - .\n" /* new instruction */ \
|
||||
" .else\n" \
|
||||
" .hword " __stringify(feature) "\n" /* feature bit */ \
|
||||
" .byte 662b-661b\n" /* source len */ \
|
||||
" .byte 664f-663f\n" /* replacement len */
|
||||
|
||||
#define ALTINSTR_ENTRY_CB(feature, cb) \
|
||||
" .word 661b - .\n" /* label */ \
|
||||
" .word " __stringify(cb) "- .\n" /* callback */ \
|
||||
" .endif\n" \
|
||||
" .hword " __stringify(feature) "\n" /* feature bit */ \
|
||||
" .byte 662b-661b\n" /* source len */ \
|
||||
" .byte 664f-663f\n" /* replacement len */
|
||||
@ -62,15 +65,14 @@ static inline void apply_alternatives_module(void *start, size_t length) { }
|
||||
*
|
||||
* Alternatives with callbacks do not generate replacement instructions.
|
||||
*/
|
||||
#define __ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg_enabled, cb) \
|
||||
#define __ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg_enabled) \
|
||||
".if "__stringify(cfg_enabled)" == 1\n" \
|
||||
"661:\n\t" \
|
||||
oldinstr "\n" \
|
||||
"662:\n" \
|
||||
".pushsection .altinstructions,\"a\"\n" \
|
||||
ALTINSTR_ENTRY(feature,cb) \
|
||||
ALTINSTR_ENTRY(feature) \
|
||||
".popsection\n" \
|
||||
" .if " __stringify(cb) " == 0\n" \
|
||||
".pushsection .altinstr_replacement, \"a\"\n" \
|
||||
"663:\n\t" \
|
||||
newinstr "\n" \
|
||||
@ -78,17 +80,25 @@ static inline void apply_alternatives_module(void *start, size_t length) { }
|
||||
".popsection\n\t" \
|
||||
".org . - (664b-663b) + (662b-661b)\n\t" \
|
||||
".org . - (662b-661b) + (664b-663b)\n" \
|
||||
".else\n\t" \
|
||||
".endif\n"
|
||||
|
||||
#define __ALTERNATIVE_CFG_CB(oldinstr, feature, cfg_enabled, cb) \
|
||||
".if "__stringify(cfg_enabled)" == 1\n" \
|
||||
"661:\n\t" \
|
||||
oldinstr "\n" \
|
||||
"662:\n" \
|
||||
".pushsection .altinstructions,\"a\"\n" \
|
||||
ALTINSTR_ENTRY_CB(feature, cb) \
|
||||
".popsection\n" \
|
||||
"663:\n\t" \
|
||||
"664:\n\t" \
|
||||
".endif\n" \
|
||||
".endif\n"
|
||||
|
||||
#define _ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg, ...) \
|
||||
__ALTERNATIVE_CFG(oldinstr, newinstr, feature, IS_ENABLED(cfg), 0)
|
||||
__ALTERNATIVE_CFG(oldinstr, newinstr, feature, IS_ENABLED(cfg))
|
||||
|
||||
#define ALTERNATIVE_CB(oldinstr, cb) \
|
||||
__ALTERNATIVE_CFG(oldinstr, "NOT_AN_INSTRUCTION", ARM64_CB_PATCH, 1, cb)
|
||||
__ALTERNATIVE_CFG_CB(oldinstr, ARM64_CB_PATCH, 1, cb)
|
||||
#else
|
||||
|
||||
#include <asm/assembler.h>
|
||||
|
@ -14,6 +14,7 @@
|
||||
static inline void __lse_atomic_##op(int i, atomic_t *v) \
|
||||
{ \
|
||||
asm volatile( \
|
||||
__LSE_PREAMBLE \
|
||||
" " #asm_op " %w[i], %[v]\n" \
|
||||
: [i] "+r" (i), [v] "+Q" (v->counter) \
|
||||
: "r" (v)); \
|
||||
@ -30,6 +31,7 @@ ATOMIC_OP(add, stadd)
|
||||
static inline int __lse_atomic_fetch_##op##name(int i, atomic_t *v) \
|
||||
{ \
|
||||
asm volatile( \
|
||||
__LSE_PREAMBLE \
|
||||
" " #asm_op #mb " %w[i], %w[i], %[v]" \
|
||||
: [i] "+r" (i), [v] "+Q" (v->counter) \
|
||||
: "r" (v) \
|
||||
@ -58,6 +60,7 @@ static inline int __lse_atomic_add_return##name(int i, atomic_t *v) \
|
||||
u32 tmp; \
|
||||
\
|
||||
asm volatile( \
|
||||
__LSE_PREAMBLE \
|
||||
" ldadd" #mb " %w[i], %w[tmp], %[v]\n" \
|
||||
" add %w[i], %w[i], %w[tmp]" \
|
||||
: [i] "+r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp) \
|
||||
@ -77,6 +80,7 @@ ATOMIC_OP_ADD_RETURN( , al, "memory")
|
||||
static inline void __lse_atomic_and(int i, atomic_t *v)
|
||||
{
|
||||
asm volatile(
|
||||
__LSE_PREAMBLE
|
||||
" mvn %w[i], %w[i]\n"
|
||||
" stclr %w[i], %[v]"
|
||||
: [i] "+&r" (i), [v] "+Q" (v->counter)
|
||||
@ -87,6 +91,7 @@ static inline void __lse_atomic_and(int i, atomic_t *v)
|
||||
static inline int __lse_atomic_fetch_and##name(int i, atomic_t *v) \
|
||||
{ \
|
||||
asm volatile( \
|
||||
__LSE_PREAMBLE \
|
||||
" mvn %w[i], %w[i]\n" \
|
||||
" ldclr" #mb " %w[i], %w[i], %[v]" \
|
||||
: [i] "+&r" (i), [v] "+Q" (v->counter) \
|
||||
@ -106,6 +111,7 @@ ATOMIC_FETCH_OP_AND( , al, "memory")
|
||||
static inline void __lse_atomic_sub(int i, atomic_t *v)
|
||||
{
|
||||
asm volatile(
|
||||
__LSE_PREAMBLE
|
||||
" neg %w[i], %w[i]\n"
|
||||
" stadd %w[i], %[v]"
|
||||
: [i] "+&r" (i), [v] "+Q" (v->counter)
|
||||
@ -118,6 +124,7 @@ static inline int __lse_atomic_sub_return##name(int i, atomic_t *v) \
|
||||
u32 tmp; \
|
||||
\
|
||||
asm volatile( \
|
||||
__LSE_PREAMBLE \
|
||||
" neg %w[i], %w[i]\n" \
|
||||
" ldadd" #mb " %w[i], %w[tmp], %[v]\n" \
|
||||
" add %w[i], %w[i], %w[tmp]" \
|
||||
@ -139,6 +146,7 @@ ATOMIC_OP_SUB_RETURN( , al, "memory")
|
||||
static inline int __lse_atomic_fetch_sub##name(int i, atomic_t *v) \
|
||||
{ \
|
||||
asm volatile( \
|
||||
__LSE_PREAMBLE \
|
||||
" neg %w[i], %w[i]\n" \
|
||||
" ldadd" #mb " %w[i], %w[i], %[v]" \
|
||||
: [i] "+&r" (i), [v] "+Q" (v->counter) \
|
||||
@ -159,6 +167,7 @@ ATOMIC_FETCH_OP_SUB( , al, "memory")
|
||||
static inline void __lse_atomic64_##op(s64 i, atomic64_t *v) \
|
||||
{ \
|
||||
asm volatile( \
|
||||
__LSE_PREAMBLE \
|
||||
" " #asm_op " %[i], %[v]\n" \
|
||||
: [i] "+r" (i), [v] "+Q" (v->counter) \
|
||||
: "r" (v)); \
|
||||
@ -175,6 +184,7 @@ ATOMIC64_OP(add, stadd)
|
||||
static inline long __lse_atomic64_fetch_##op##name(s64 i, atomic64_t *v)\
|
||||
{ \
|
||||
asm volatile( \
|
||||
__LSE_PREAMBLE \
|
||||
" " #asm_op #mb " %[i], %[i], %[v]" \
|
||||
: [i] "+r" (i), [v] "+Q" (v->counter) \
|
||||
: "r" (v) \
|
||||
@ -203,6 +213,7 @@ static inline long __lse_atomic64_add_return##name(s64 i, atomic64_t *v)\
|
||||
unsigned long tmp; \
|
||||
\
|
||||
asm volatile( \
|
||||
__LSE_PREAMBLE \
|
||||
" ldadd" #mb " %[i], %x[tmp], %[v]\n" \
|
||||
" add %[i], %[i], %x[tmp]" \
|
||||
: [i] "+r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp) \
|
||||
@ -222,6 +233,7 @@ ATOMIC64_OP_ADD_RETURN( , al, "memory")
|
||||
static inline void __lse_atomic64_and(s64 i, atomic64_t *v)
|
||||
{
|
||||
asm volatile(
|
||||
__LSE_PREAMBLE
|
||||
" mvn %[i], %[i]\n"
|
||||
" stclr %[i], %[v]"
|
||||
: [i] "+&r" (i), [v] "+Q" (v->counter)
|
||||
@ -232,6 +244,7 @@ static inline void __lse_atomic64_and(s64 i, atomic64_t *v)
|
||||
static inline long __lse_atomic64_fetch_and##name(s64 i, atomic64_t *v) \
|
||||
{ \
|
||||
asm volatile( \
|
||||
__LSE_PREAMBLE \
|
||||
" mvn %[i], %[i]\n" \
|
||||
" ldclr" #mb " %[i], %[i], %[v]" \
|
||||
: [i] "+&r" (i), [v] "+Q" (v->counter) \
|
||||
@ -251,6 +264,7 @@ ATOMIC64_FETCH_OP_AND( , al, "memory")
|
||||
static inline void __lse_atomic64_sub(s64 i, atomic64_t *v)
|
||||
{
|
||||
asm volatile(
|
||||
__LSE_PREAMBLE
|
||||
" neg %[i], %[i]\n"
|
||||
" stadd %[i], %[v]"
|
||||
: [i] "+&r" (i), [v] "+Q" (v->counter)
|
||||
@ -263,6 +277,7 @@ static inline long __lse_atomic64_sub_return##name(s64 i, atomic64_t *v) \
|
||||
unsigned long tmp; \
|
||||
\
|
||||
asm volatile( \
|
||||
__LSE_PREAMBLE \
|
||||
" neg %[i], %[i]\n" \
|
||||
" ldadd" #mb " %[i], %x[tmp], %[v]\n" \
|
||||
" add %[i], %[i], %x[tmp]" \
|
||||
@ -284,6 +299,7 @@ ATOMIC64_OP_SUB_RETURN( , al, "memory")
|
||||
static inline long __lse_atomic64_fetch_sub##name(s64 i, atomic64_t *v) \
|
||||
{ \
|
||||
asm volatile( \
|
||||
__LSE_PREAMBLE \
|
||||
" neg %[i], %[i]\n" \
|
||||
" ldadd" #mb " %[i], %[i], %[v]" \
|
||||
: [i] "+&r" (i), [v] "+Q" (v->counter) \
|
||||
@ -305,6 +321,7 @@ static inline s64 __lse_atomic64_dec_if_positive(atomic64_t *v)
|
||||
unsigned long tmp;
|
||||
|
||||
asm volatile(
|
||||
__LSE_PREAMBLE
|
||||
"1: ldr %x[tmp], %[v]\n"
|
||||
" subs %[ret], %x[tmp], #1\n"
|
||||
" b.lt 2f\n"
|
||||
@ -332,6 +349,7 @@ __lse__cmpxchg_case_##name##sz(volatile void *ptr, \
|
||||
unsigned long tmp; \
|
||||
\
|
||||
asm volatile( \
|
||||
__LSE_PREAMBLE \
|
||||
" mov %" #w "[tmp], %" #w "[old]\n" \
|
||||
" cas" #mb #sfx "\t%" #w "[tmp], %" #w "[new], %[v]\n" \
|
||||
" mov %" #w "[ret], %" #w "[tmp]" \
|
||||
@ -379,6 +397,7 @@ __lse__cmpxchg_double##name(unsigned long old1, \
|
||||
register unsigned long x4 asm ("x4") = (unsigned long)ptr; \
|
||||
\
|
||||
asm volatile( \
|
||||
__LSE_PREAMBLE \
|
||||
" casp" #mb "\t%[old1], %[old2], %[new1], %[new2], %[v]\n"\
|
||||
" eor %[old1], %[old1], %[oldval1]\n" \
|
||||
" eor %[old2], %[old2], %[oldval2]\n" \
|
||||
|
@ -6,6 +6,8 @@
|
||||
|
||||
#if defined(CONFIG_AS_LSE) && defined(CONFIG_ARM64_LSE_ATOMICS)
|
||||
|
||||
#define __LSE_PREAMBLE ".arch armv8-a+lse\n"
|
||||
|
||||
#include <linux/compiler_types.h>
|
||||
#include <linux/export.h>
|
||||
#include <linux/jump_label.h>
|
||||
@ -14,8 +16,6 @@
|
||||
#include <asm/atomic_lse.h>
|
||||
#include <asm/cpucaps.h>
|
||||
|
||||
__asm__(".arch_extension lse");
|
||||
|
||||
extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];
|
||||
extern struct static_key_false arm64_const_caps_ready;
|
||||
|
||||
@ -34,7 +34,7 @@ static inline bool system_uses_lse_atomics(void)
|
||||
|
||||
/* In-line patching at runtime */
|
||||
#define ARM64_LSE_ATOMIC_INSN(llsc, lse) \
|
||||
ALTERNATIVE(llsc, lse, ARM64_HAS_LSE_ATOMICS)
|
||||
ALTERNATIVE(llsc, __LSE_PREAMBLE lse, ARM64_HAS_LSE_ATOMICS)
|
||||
|
||||
#else /* CONFIG_AS_LSE && CONFIG_ARM64_LSE_ATOMICS */
|
||||
|
||||
|
@ -312,6 +312,22 @@ static inline void *phys_to_virt(phys_addr_t x)
|
||||
#define virt_to_pfn(x) __phys_to_pfn(__virt_to_phys((unsigned long)(x)))
|
||||
#define sym_to_pfn(x) __phys_to_pfn(__pa_symbol(x))
|
||||
|
||||
/*
|
||||
* With non-canonical CFI jump tables, the compiler replaces function
|
||||
* address references with the address of the function's CFI jump
|
||||
* table entry. This results in __pa_symbol(function) returning the
|
||||
* physical address of the jump table entry, which can lead to address
|
||||
* space confusion since the jump table points to the function's
|
||||
* virtual address. Therefore, use inline assembly to ensure we are
|
||||
* always taking the address of the actual function.
|
||||
*/
|
||||
#define __pa_function(x) ({ \
|
||||
unsigned long addr; \
|
||||
asm("adrp %0, " __stringify(x) "\n\t" \
|
||||
"add %0, %0, :lo12:" __stringify(x) : "=r" (addr)); \
|
||||
__pa_symbol(addr); \
|
||||
})
|
||||
|
||||
/*
|
||||
* virt_to_page(x) convert a _valid_ virtual address to struct page *
|
||||
* virt_addr_valid(x) indicates whether a virtual address is valid
|
||||
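
This is why the TTBR1 replacement, kpti and CPU soft-restart paths later in this diff switch from __pa_symbol() to __pa_function() before branching to code through its physical address. A kernel-context usage sketch, hedged: it will not build outside the kernel, and relocate_code is a hypothetical position-independent routine used only for illustration.

typedef void (relocate_fn_t)(unsigned long arg);

static void __nocfi jump_to_relocated(unsigned long arg)
{
	relocate_fn_t *fn;

	/* take the address of the real code, not of a CFI jump-table stub */
	fn = (void *)__pa_function(relocate_code);

	cpu_install_idmap();	/* make the 1:1 mapping live */
	fn(arg);		/* branch into the physically addressed copy */
	cpu_uninstall_idmap();
}
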
|
@ -141,7 +141,7 @@ static inline void cpu_install_idmap(void)
|
||||
* Atomically replaces the active TTBR1_EL1 PGD with a new VA-compatible PGD,
|
||||
* avoiding the possibility of conflicting TLB entries being allocated.
|
||||
*/
|
||||
static inline void cpu_replace_ttbr1(pgd_t *pgdp)
|
||||
static inline void __nocfi cpu_replace_ttbr1(pgd_t *pgdp)
|
||||
{
|
||||
typedef void (ttbr_replace_func)(phys_addr_t);
|
||||
extern ttbr_replace_func idmap_cpu_replace_ttbr1;
|
||||
@ -162,7 +162,7 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
|
||||
ttbr1 |= TTBR_CNP_BIT;
|
||||
}
|
||||
|
||||
replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);
|
||||
replace_phys = (void *)__pa_function(idmap_cpu_replace_ttbr1);
|
||||
|
||||
cpu_install_idmap();
|
||||
replace_phys(ttbr1);
|
||||
|
arch/arm64/include/asm/scs.h (new file, 37 lines)
@@ -0,0 +1,37 @@
|
||||
/* SPDX-License-Identifier: GPL-2.0 */
|
||||
#ifndef _ASM_SCS_H
|
||||
#define _ASM_SCS_H
|
||||
|
||||
#ifndef __ASSEMBLY__
|
||||
|
||||
#include <linux/scs.h>
|
||||
|
||||
#ifdef CONFIG_SHADOW_CALL_STACK
|
||||
|
||||
extern void scs_init_irq(void);
|
||||
|
||||
static __always_inline void scs_save(struct task_struct *tsk)
|
||||
{
|
||||
void *s;
|
||||
|
||||
asm volatile("mov %0, x18" : "=r" (s));
|
||||
task_set_scs(tsk, s);
|
||||
}
|
||||
|
||||
static inline void scs_overflow_check(struct task_struct *tsk)
|
||||
{
|
||||
if (unlikely(scs_corrupted(tsk)))
|
||||
panic("corrupted shadow stack detected inside scheduler\n");
|
||||
}
|
||||
|
||||
#else /* CONFIG_SHADOW_CALL_STACK */
|
||||
|
||||
static inline void scs_init_irq(void) {}
|
||||
static inline void scs_save(struct task_struct *tsk) {}
|
||||
static inline void scs_overflow_check(struct task_struct *tsk) {}
|
||||
|
||||
#endif /* CONFIG_SHADOW_CALL_STACK */
|
||||
|
||||
#endif /* __ASSEMBLY __ */
|
||||
|
||||
#endif /* _ASM_SCS_H */
|
@ -68,6 +68,10 @@ extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk);
|
||||
|
||||
DECLARE_PER_CPU(unsigned long *, irq_stack_ptr);
|
||||
|
||||
#ifdef CONFIG_SHADOW_CALL_STACK
|
||||
DECLARE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);
|
||||
#endif
|
||||
|
||||
static inline bool on_irq_stack(unsigned long sp,
|
||||
struct stack_info *info)
|
||||
{
|
||||
|
@ -2,7 +2,7 @@
|
||||
#ifndef __ASM_SUSPEND_H
|
||||
#define __ASM_SUSPEND_H
|
||||
|
||||
#define NR_CTX_REGS 12
|
||||
#define NR_CTX_REGS 13
|
||||
#define NR_CALLEE_SAVED_REGS 12
|
||||
|
||||
/*
|
||||
|
@ -41,6 +41,9 @@ struct thread_info {
|
||||
#endif
|
||||
} preempt;
|
||||
};
|
||||
#ifdef CONFIG_SHADOW_CALL_STACK
|
||||
void *shadow_call_stack;
|
||||
#endif
|
||||
};
|
||||
|
||||
#define thread_saved_pc(tsk) \
|
||||
|
@ -62,8 +62,13 @@ static inline unsigned long __range_ok(const void __user *addr, unsigned long si
|
||||
{
|
||||
unsigned long ret, limit = current_thread_info()->addr_limit;
|
||||
|
||||
/*
|
||||
* Asynchronous I/O running in a kernel thread does not have the
|
||||
* TIF_TAGGED_ADDR flag of the process owning the mm, so always untag
|
||||
* the user address before checking.
|
||||
*/
|
||||
if (IS_ENABLED(CONFIG_ARM64_TAGGED_ADDR_ABI) &&
|
||||
test_thread_flag(TIF_TAGGED_ADDR))
|
||||
(current->flags & PF_KTHREAD || test_thread_flag(TIF_TAGGED_ADDR)))
|
||||
addr = untagged_addr(addr);
|
||||
|
||||
__chk_user_ptr(addr);
|
||||
|
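
The comment added above matters because asynchronous I/O can run __range_ok() from a kernel thread on behalf of a task using the arm64 Tagged Address ABI, where user pointers may carry data in bits 63..56. A simplified user-space model of what "untag before checking" means; the kernel's untagged_addr() is more careful and sign-extends from bit 55 so kernel addresses are left intact.

#include <stdint.h>
#include <stdio.h>

static uint64_t untag_user_address(uint64_t addr)
{
	return addr & ~(0xffULL << 56);	/* clear the tag byte */
}

int main(void)
{
	uint64_t tagged = 0x3a00007f12345678ULL;	/* tag 0x3a in the top byte */

	printf("tagged   = %#llx\n", (unsigned long long)tagged);
	printf("untagged = %#llx\n",
	       (unsigned long long)untag_user_address(tagged));
	return 0;
}
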
@ -63,6 +63,7 @@ obj-$(CONFIG_CRASH_CORE) += crash_core.o
|
||||
obj-$(CONFIG_ARM_SDE_INTERFACE) += sdei.o
|
||||
obj-$(CONFIG_ARM64_SSBD) += ssbd.o
|
||||
obj-$(CONFIG_ARM64_PTR_AUTH) += pointer_auth.o
|
||||
obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o
|
||||
|
||||
obj-y += vdso/ probes/
|
||||
obj-$(CONFIG_COMPAT_VDSO) += vdso32/
|
||||
|
@ -144,8 +144,8 @@ static void clean_dcache_range_nopatch(u64 start, u64 end)
|
||||
} while (cur += d_size, cur < end);
|
||||
}
|
||||
|
||||
static void __apply_alternatives(void *alt_region, bool is_module,
|
||||
unsigned long *feature_mask)
|
||||
static void __nocfi __apply_alternatives(void *alt_region, bool is_module,
|
||||
unsigned long *feature_mask)
|
||||
{
|
||||
struct alt_instr *alt;
|
||||
struct alt_region *region = alt_region;
|
||||
|
@ -33,6 +33,9 @@ int main(void)
|
||||
DEFINE(TSK_TI_ADDR_LIMIT, offsetof(struct task_struct, thread_info.addr_limit));
|
||||
#ifdef CONFIG_ARM64_SW_TTBR0_PAN
|
||||
DEFINE(TSK_TI_TTBR0, offsetof(struct task_struct, thread_info.ttbr0));
|
||||
#endif
|
||||
#ifdef CONFIG_SHADOW_CALL_STACK
|
||||
DEFINE(TSK_TI_SCS, offsetof(struct task_struct, thread_info.shadow_call_stack));
|
||||
#endif
|
||||
DEFINE(TSK_STACK, offsetof(struct task_struct, stack));
|
||||
#ifdef CONFIG_STACKPROTECTOR
|
||||
|
@ -42,11 +42,11 @@ ENTRY(__cpu_soft_restart)
|
||||
mov x0, #HVC_SOFT_RESTART
|
||||
hvc #0 // no return
|
||||
|
||||
1: mov x18, x1 // entry
|
||||
1: mov x8, x1 // entry
|
||||
mov x0, x2 // arg0
|
||||
mov x1, x3 // arg1
|
||||
mov x2, x4 // arg2
|
||||
br x18
|
||||
br x8
|
||||
ENDPROC(__cpu_soft_restart)
|
||||
|
||||
.popsection
|
||||
|
@ -13,16 +13,16 @@
|
||||
void __cpu_soft_restart(unsigned long el2_switch, unsigned long entry,
|
||||
unsigned long arg0, unsigned long arg1, unsigned long arg2);
|
||||
|
||||
static inline void __noreturn cpu_soft_restart(unsigned long entry,
|
||||
unsigned long arg0,
|
||||
unsigned long arg1,
|
||||
unsigned long arg2)
|
||||
static inline void __noreturn __nocfi cpu_soft_restart(unsigned long entry,
|
||||
unsigned long arg0,
|
||||
unsigned long arg1,
|
||||
unsigned long arg2)
|
||||
{
|
||||
typeof(__cpu_soft_restart) *restart;
|
||||
|
||||
unsigned long el2_switch = !is_kernel_in_hyp_mode() &&
|
||||
is_hyp_mode_available();
|
||||
restart = (void *)__pa_symbol(__cpu_soft_restart);
|
||||
restart = (void *)__pa_function(__cpu_soft_restart);
|
||||
|
||||
cpu_install_idmap();
|
||||
restart(el2_switch, entry, arg0, arg1, arg2);
|
||||
|
@ -1035,7 +1035,7 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
|
||||
}
|
||||
|
||||
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
|
||||
static void
|
||||
static void __nocfi
|
||||
kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
|
||||
{
|
||||
typedef void (kpti_remap_fn)(int, int, phys_addr_t);
|
||||
@ -1053,7 +1053,7 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
|
||||
if (kpti_applied || kaslr_offset() > 0)
|
||||
return;
|
||||
|
||||
remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
|
||||
remap_fn = (void *)__pa_function(idmap_kpti_install_ng_mappings);
|
||||
|
||||
cpu_install_idmap();
|
||||
remap_fn(cpu, num_online_cpus(), __pa_symbol(swapper_pg_dir));
|
||||
|
@ -34,5 +34,14 @@ ENTRY(__efi_rt_asm_wrapper)
|
||||
ldp x29, x30, [sp], #32
|
||||
b.ne 0f
|
||||
ret
|
||||
0: b efi_handle_corrupted_x18 // tail call
|
||||
0:
|
||||
#ifdef CONFIG_SHADOW_CALL_STACK
|
||||
/*
|
||||
* Restore x18 before returning to instrumented code. This is
|
||||
* safe because the wrapper is called with preemption disabled and
|
||||
* a separate shadow stack is used for interrupts.
|
||||
*/
|
||||
mov x18, x2
|
||||
#endif
|
||||
b efi_handle_corrupted_x18 // tail call
|
||||
ENDPROC(__efi_rt_asm_wrapper)
|
||||
|
@ -172,6 +172,10 @@ alternative_cb_end
|
||||
|
||||
apply_ssbd 1, x22, x23
|
||||
|
||||
#ifdef CONFIG_SHADOW_CALL_STACK
|
||||
ldr x18, [tsk, #TSK_TI_SCS] // Restore shadow call stack
|
||||
str xzr, [tsk, #TSK_TI_SCS] // Limit visibility of saved SCS
|
||||
#endif
|
||||
.else
|
||||
add x21, sp, #S_FRAME_SIZE
|
||||
get_current_task tsk
|
||||
@ -278,6 +282,12 @@ alternative_else_nop_endif
|
||||
ct_user_enter
|
||||
.endif
|
||||
|
||||
#ifdef CONFIG_SHADOW_CALL_STACK
|
||||
.if \el == 0
|
||||
str x18, [tsk, #TSK_TI_SCS] // Save shadow call stack
|
||||
.endif
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_ARM64_SW_TTBR0_PAN
|
||||
/*
|
||||
* Restore access to TTBR0_EL1. If returning to EL0, no need for SPSR
|
||||
@ -383,6 +393,9 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
|
||||
|
||||
.macro irq_stack_entry
|
||||
mov x19, sp // preserve the original sp
|
||||
#ifdef CONFIG_SHADOW_CALL_STACK
|
||||
mov x20, x18 // preserve the original shadow stack
|
||||
#endif
|
||||
|
||||
/*
|
||||
* Compare sp with the base of the task stack.
|
||||
@ -400,15 +413,24 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
|
||||
|
||||
/* switch to the irq stack */
|
||||
mov sp, x26
|
||||
|
||||
#ifdef CONFIG_SHADOW_CALL_STACK
|
||||
/* also switch to the irq shadow stack */
|
||||
ldr_this_cpu x18, irq_shadow_call_stack_ptr, x26
|
||||
#endif
|
||||
|
||||
9998:
|
||||
.endm
|
||||
|
||||
/*
|
||||
* x19 should be preserved between irq_stack_entry and
|
||||
* irq_stack_exit.
|
||||
* The callee-saved regs (x19-x29) should be preserved between
|
||||
* irq_stack_entry and irq_stack_exit.
|
||||
*/
|
||||
.macro irq_stack_exit
|
||||
mov sp, x19
|
||||
#ifdef CONFIG_SHADOW_CALL_STACK
|
||||
mov x18, x20
|
||||
#endif
|
||||
.endm
|
||||
|
||||
/* GPRs used by entry code */
|
||||
@ -1155,6 +1177,11 @@ ENTRY(cpu_switch_to)
|
||||
ldr lr, [x8]
|
||||
mov sp, x9
|
||||
msr sp_el0, x1
|
||||
#ifdef CONFIG_SHADOW_CALL_STACK
|
||||
str x18, [x0, #TSK_TI_SCS]
|
||||
ldr x18, [x1, #TSK_TI_SCS]
|
||||
str xzr, [x1, #TSK_TI_SCS] // limit visibility of saved SCS
|
||||
#endif
|
||||
ret
|
||||
ENDPROC(cpu_switch_to)
|
||||
NOKPROBE(cpu_switch_to)
|
||||
|
@ -27,6 +27,7 @@
|
||||
#include <asm/pgtable-hwdef.h>
|
||||
#include <asm/pgtable.h>
|
||||
#include <asm/page.h>
|
||||
#include <asm/scs.h>
|
||||
#include <asm/smp.h>
|
||||
#include <asm/sysreg.h>
|
||||
#include <asm/thread_info.h>
|
||||
@ -424,6 +425,10 @@ __primary_switched:
|
||||
stp xzr, x30, [sp, #-16]!
|
||||
mov x29, sp
|
||||
|
||||
#ifdef CONFIG_SHADOW_CALL_STACK
|
||||
adr_l x18, init_shadow_call_stack // Set shadow call stack
|
||||
#endif
|
||||
|
||||
str_l x21, __fdt_pointer, x5 // Save FDT pointer
|
||||
|
||||
ldr_l x4, kimage_vaddr // Save the offset between
|
||||
@ -731,6 +736,10 @@ __secondary_switched:
|
||||
ldr x2, [x0, #CPU_BOOT_TASK]
|
||||
cbz x2, __secondary_too_slow
|
||||
msr sp_el0, x2
|
||||
#ifdef CONFIG_SHADOW_CALL_STACK
|
||||
ldr x18, [x2, #TSK_TI_SCS] // set shadow call stack
|
||||
str xzr, [x2, #TSK_TI_SCS] // limit visibility of saved SCS
|
||||
#endif
|
||||
mov x29, #0
|
||||
mov x30, #0
|
||||
b secondary_start_kernel
|
||||
|
@ -21,6 +21,7 @@
|
||||
#include <linux/vmalloc.h>
|
||||
#include <asm/daifflags.h>
|
||||
#include <asm/vmap_stack.h>
|
||||
#include <asm/scs.h>
|
||||
|
||||
unsigned long irq_err_count;
|
||||
|
||||
@ -63,6 +64,7 @@ static void init_irq_stacks(void)
|
||||
void __init init_IRQ(void)
|
||||
{
|
||||
init_irq_stacks();
|
||||
scs_init_irq();
|
||||
irqchip_init();
|
||||
if (!handle_arch_irq)
|
||||
panic("No interrupt controller found.");
|
||||
|
@ -52,6 +52,7 @@
|
||||
#include <asm/mmu_context.h>
|
||||
#include <asm/processor.h>
|
||||
#include <asm/pointer_auth.h>
|
||||
#include <asm/scs.h>
|
||||
#include <asm/stacktrace.h>
|
||||
|
||||
#if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_STACKPROTECTOR_PER_TASK)
|
||||
@ -507,6 +508,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
|
||||
uao_thread_switch(next);
|
||||
ptrauth_thread_switch(next);
|
||||
ssbs_thread_switch(next);
|
||||
scs_overflow_check(next);
|
||||
|
||||
/*
|
||||
* Complete any pending TLB or cache maintenance on this CPU in case
|
||||
|
@ -38,7 +38,8 @@ static int __init cpu_psci_cpu_prepare(unsigned int cpu)
|
||||
|
||||
static int cpu_psci_cpu_boot(unsigned int cpu)
|
||||
{
|
||||
int err = psci_ops.cpu_on(cpu_logical_map(cpu), __pa_symbol(secondary_entry));
|
||||
int err = psci_ops.cpu_on(cpu_logical_map(cpu),
|
||||
__pa_function(secondary_entry));
|
||||
if (err)
|
||||
pr_err("failed to boot CPU%d (%d)\n", cpu, err);
|
||||
|
||||
|
arch/arm64/kernel/scs.c (new file, 40 lines)
@@ -0,0 +1,40 @@
|
||||
// SPDX-License-Identifier: GPL-2.0
|
||||
/*
|
||||
* Shadow Call Stack support.
|
||||
*
|
||||
* Copyright (C) 2019 Google LLC
|
||||
*/
|
||||
|
||||
#include <linux/percpu.h>
|
||||
#include <linux/vmalloc.h>
|
||||
#include <asm/pgtable.h>
|
||||
#include <asm/scs.h>
|
||||
|
||||
DEFINE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);
|
||||
|
||||
#ifndef CONFIG_SHADOW_CALL_STACK_VMAP
|
||||
DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], irq_shadow_call_stack)
|
||||
__aligned(SCS_SIZE);
|
||||
#endif
|
||||
|
||||
void scs_init_irq(void)
|
||||
{
|
||||
int cpu;
|
||||
|
||||
for_each_possible_cpu(cpu) {
|
||||
#ifdef CONFIG_SHADOW_CALL_STACK_VMAP
|
||||
unsigned long *p;
|
||||
|
||||
p = __vmalloc_node_range(PAGE_SIZE, SCS_SIZE,
|
||||
VMALLOC_START, VMALLOC_END,
|
||||
GFP_SCS, PAGE_KERNEL,
|
||||
0, cpu_to_node(cpu),
|
||||
__builtin_return_address(0));
|
||||
|
||||
per_cpu(irq_shadow_call_stack_ptr, cpu) = p;
|
||||
#else
|
||||
per_cpu(irq_shadow_call_stack_ptr, cpu) =
|
||||
per_cpu(irq_shadow_call_stack, cpu);
|
||||
#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */
|
||||
}
|
||||
}
|
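
Note: the new scs.c above only allocates the per-CPU IRQ shadow call stacks (vmapped or from a static per-CPU array). The sketch below is a minimal user-space model of the shadow call stack idea itself: return addresses are pushed onto a separate region through a dedicated pointer (x18 on arm64), so an overflow of a buffer on the ordinary stack cannot redirect returns. Names and sizes are invented; the instructions in the comments are roughly what a compiler with shadow-call-stack instrumentation emits, not code from this patch.

#include <stdint.h>

/* Toy shadow call stack: holds return addresses only. */
static uintptr_t shadow_stack[128];
static uintptr_t *scs_top = shadow_stack;	/* plays the role of x18 */

static void scs_push(uintptr_t retaddr)
{
	*scs_top++ = retaddr;		/* prologue: roughly str x30, [x18], #8 */
}

static uintptr_t scs_pop(void)
{
	return *--scs_top;		/* epilogue: roughly ldr x30, [x18, #-8]! */
}
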
@ -44,6 +44,7 @@
|
||||
#include <asm/pgtable.h>
|
||||
#include <asm/pgalloc.h>
|
||||
#include <asm/processor.h>
|
||||
#include <asm/scs.h>
|
||||
#include <asm/smp_plat.h>
|
||||
#include <asm/sections.h>
|
||||
#include <asm/tlbflush.h>
|
||||
@ -363,6 +364,9 @@ void cpu_die(void)
|
||||
{
|
||||
unsigned int cpu = smp_processor_id();
|
||||
|
||||
/* Save the shadow stack pointer before exiting the idle task */
|
||||
scs_save(current);
|
||||
|
||||
idle_task_exit();
|
||||
|
||||
local_daif_mask();
|
||||
|
@ -88,7 +88,7 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
|
||||
* boot-loader's endianess before jumping. This is mandated by
|
||||
* the boot protocol.
|
||||
*/
|
||||
writeq_relaxed(__pa_symbol(secondary_holding_pen), release_addr);
|
||||
writeq_relaxed(__pa_function(secondary_holding_pen), release_addr);
|
||||
__flush_dcache_area((__force void *)release_addr,
|
||||
sizeof(*release_addr));
|
||||
|
||||
|
@ -25,8 +25,8 @@ ccflags-y += -DDISABLE_BRANCH_PROFILING
|
||||
|
||||
VDSO_LDFLAGS := -Bsymbolic
|
||||
|
||||
CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os
|
||||
KBUILD_CFLAGS += $(DISABLE_LTO)
|
||||
CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS) \
|
||||
$(CC_FLAGS_LTO)
|
||||
KASAN_SANITIZE := n
|
||||
UBSAN_SANITIZE := n
|
||||
OBJECT_FILES_NON_STANDARD := y
|
||||
|
@ -4,6 +4,7 @@
|
||||
#
|
||||
|
||||
ccflags-y += -I $(srctree)/$(src) -I $(srctree)/virt/kvm/arm/vgic
|
||||
CFLAGS_REMOVE_debug.o += $(CC_FLAGS_CFI)
|
||||
|
||||
KVM=../../../virt/kvm
|
||||
|
||||
|
@ -28,3 +28,6 @@ GCOV_PROFILE := n
|
||||
KASAN_SANITIZE := n
|
||||
UBSAN_SANITIZE := n
|
||||
KCOV_INSTRUMENT := n
|
||||
|
||||
# remove SCS and CFI flags from all objects in this directory
|
||||
KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_SCS) $(CC_FLAGS_CFI), $(KBUILD_CFLAGS))
|
||||
|
@ -22,7 +22,12 @@
|
||||
.text
|
||||
.pushsection .hyp.text, "ax"
|
||||
|
||||
/*
|
||||
* We treat x18 as callee-saved as the host may use it as a platform
|
||||
* register (e.g. for shadow call stack).
|
||||
*/
|
||||
.macro save_callee_saved_regs ctxt
|
||||
str x18, [\ctxt, #CPU_XREG_OFFSET(18)]
|
||||
stp x19, x20, [\ctxt, #CPU_XREG_OFFSET(19)]
|
||||
stp x21, x22, [\ctxt, #CPU_XREG_OFFSET(21)]
|
||||
stp x23, x24, [\ctxt, #CPU_XREG_OFFSET(23)]
|
||||
@ -32,6 +37,8 @@
|
||||
.endm
|
||||
|
||||
.macro restore_callee_saved_regs ctxt
|
||||
// We require \ctxt is not x18-x28
|
||||
ldr x18, [\ctxt, #CPU_XREG_OFFSET(18)]
|
||||
ldp x19, x20, [\ctxt, #CPU_XREG_OFFSET(19)]
|
||||
ldp x21, x22, [\ctxt, #CPU_XREG_OFFSET(21)]
|
||||
ldp x23, x24, [\ctxt, #CPU_XREG_OFFSET(23)]
|
||||
@ -48,7 +55,7 @@ ENTRY(__guest_enter)
|
||||
// x0: vcpu
|
||||
// x1: host context
|
||||
// x2-x17: clobbered by macros
|
||||
// x18: guest context
|
||||
// x29: guest context
|
||||
|
||||
// Store the host regs
|
||||
save_callee_saved_regs x1
|
||||
@ -67,31 +74,28 @@ alternative_else_nop_endif
|
||||
ret
|
||||
|
||||
1:
|
||||
add x18, x0, #VCPU_CONTEXT
|
||||
add x29, x0, #VCPU_CONTEXT
|
||||
|
||||
// Macro ptrauth_switch_to_guest format:
|
||||
// ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3)
|
||||
// The below macro to restore guest keys is not implemented in C code
|
||||
// as it may cause Pointer Authentication key signing mismatch errors
|
||||
// when this feature is enabled for kernel code.
|
||||
ptrauth_switch_to_guest x18, x0, x1, x2
|
||||
ptrauth_switch_to_guest x29, x0, x1, x2
|
||||
|
||||
// Restore guest regs x0-x17
|
||||
ldp x0, x1, [x18, #CPU_XREG_OFFSET(0)]
|
||||
ldp x2, x3, [x18, #CPU_XREG_OFFSET(2)]
|
||||
ldp x4, x5, [x18, #CPU_XREG_OFFSET(4)]
|
||||
ldp x6, x7, [x18, #CPU_XREG_OFFSET(6)]
|
||||
ldp x8, x9, [x18, #CPU_XREG_OFFSET(8)]
|
||||
ldp x10, x11, [x18, #CPU_XREG_OFFSET(10)]
|
||||
ldp x12, x13, [x18, #CPU_XREG_OFFSET(12)]
|
||||
ldp x14, x15, [x18, #CPU_XREG_OFFSET(14)]
|
||||
ldp x16, x17, [x18, #CPU_XREG_OFFSET(16)]
|
||||
ldp x0, x1, [x29, #CPU_XREG_OFFSET(0)]
|
||||
ldp x2, x3, [x29, #CPU_XREG_OFFSET(2)]
|
||||
ldp x4, x5, [x29, #CPU_XREG_OFFSET(4)]
|
||||
ldp x6, x7, [x29, #CPU_XREG_OFFSET(6)]
|
||||
ldp x8, x9, [x29, #CPU_XREG_OFFSET(8)]
|
||||
ldp x10, x11, [x29, #CPU_XREG_OFFSET(10)]
|
||||
ldp x12, x13, [x29, #CPU_XREG_OFFSET(12)]
|
||||
ldp x14, x15, [x29, #CPU_XREG_OFFSET(14)]
|
||||
ldp x16, x17, [x29, #CPU_XREG_OFFSET(16)]
|
||||
|
||||
// Restore guest regs x19-x29, lr
|
||||
restore_callee_saved_regs x18
|
||||
|
||||
// Restore guest reg x18
|
||||
ldr x18, [x18, #CPU_XREG_OFFSET(18)]
|
||||
// Restore guest regs x18-x29, lr
|
||||
restore_callee_saved_regs x29
|
||||
|
||||
// Do not touch any register after this!
|
||||
eret
|
||||
@ -114,7 +118,7 @@ ENTRY(__guest_exit)
|
||||
// Retrieve the guest regs x0-x1 from the stack
|
||||
ldp x2, x3, [sp], #16 // x0, x1
|
||||
|
||||
// Store the guest regs x0-x1 and x4-x18
|
||||
// Store the guest regs x0-x1 and x4-x17
|
||||
stp x2, x3, [x1, #CPU_XREG_OFFSET(0)]
|
||||
stp x4, x5, [x1, #CPU_XREG_OFFSET(4)]
|
||||
stp x6, x7, [x1, #CPU_XREG_OFFSET(6)]
|
||||
@ -123,9 +127,8 @@ ENTRY(__guest_exit)
|
||||
stp x12, x13, [x1, #CPU_XREG_OFFSET(12)]
|
||||
stp x14, x15, [x1, #CPU_XREG_OFFSET(14)]
|
||||
stp x16, x17, [x1, #CPU_XREG_OFFSET(16)]
|
||||
str x18, [x1, #CPU_XREG_OFFSET(18)]
|
||||
|
||||
// Store the guest regs x19-x29, lr
|
||||
// Store the guest regs x18-x29, lr
|
||||
save_callee_saved_regs x1
|
||||
|
||||
get_host_ctxt x2, x3
|
||||
|
@ -34,45 +34,45 @@ alternative_else_nop_endif
|
||||
ldp x14, x15, [x1, #96]
|
||||
ldp x16, x17, [x1, #112]
|
||||
|
||||
mov x18, #(PAGE_SIZE - 128)
|
||||
add x0, x0, #256
|
||||
add x1, x1, #128
|
||||
1:
|
||||
subs x18, x18, #128
|
||||
tst x0, #(PAGE_SIZE - 1)
|
||||
|
||||
alternative_if ARM64_HAS_NO_HW_PREFETCH
|
||||
prfm pldl1strm, [x1, #384]
|
||||
alternative_else_nop_endif
|
||||
|
||||
stnp x2, x3, [x0]
|
||||
stnp x2, x3, [x0, #-256]
|
||||
ldp x2, x3, [x1]
|
||||
stnp x4, x5, [x0, #16]
|
||||
stnp x4, x5, [x0, #16 - 256]
|
||||
ldp x4, x5, [x1, #16]
|
||||
stnp x6, x7, [x0, #32]
|
||||
stnp x6, x7, [x0, #32 - 256]
|
||||
ldp x6, x7, [x1, #32]
|
||||
stnp x8, x9, [x0, #48]
|
||||
stnp x8, x9, [x0, #48 - 256]
|
||||
ldp x8, x9, [x1, #48]
|
||||
stnp x10, x11, [x0, #64]
|
||||
stnp x10, x11, [x0, #64 - 256]
|
||||
ldp x10, x11, [x1, #64]
|
||||
stnp x12, x13, [x0, #80]
|
||||
stnp x12, x13, [x0, #80 - 256]
|
||||
ldp x12, x13, [x1, #80]
|
||||
stnp x14, x15, [x0, #96]
|
||||
stnp x14, x15, [x0, #96 - 256]
|
||||
ldp x14, x15, [x1, #96]
|
||||
stnp x16, x17, [x0, #112]
|
||||
stnp x16, x17, [x0, #112 - 256]
|
||||
ldp x16, x17, [x1, #112]
|
||||
|
||||
add x0, x0, #128
|
||||
add x1, x1, #128
|
||||
|
||||
b.gt 1b
|
||||
b.ne 1b
|
||||
|
||||
stnp x2, x3, [x0]
|
||||
stnp x4, x5, [x0, #16]
|
||||
stnp x6, x7, [x0, #32]
|
||||
stnp x8, x9, [x0, #48]
|
||||
stnp x10, x11, [x0, #64]
|
||||
stnp x12, x13, [x0, #80]
|
||||
stnp x14, x15, [x0, #96]
|
||||
stnp x16, x17, [x0, #112]
|
||||
stnp x2, x3, [x0, #-256]
|
||||
stnp x4, x5, [x0, #16 - 256]
|
||||
stnp x6, x7, [x0, #32 - 256]
|
||||
stnp x8, x9, [x0, #48 - 256]
|
||||
stnp x10, x11, [x0, #64 - 256]
|
||||
stnp x12, x13, [x0, #80 - 256]
|
||||
stnp x14, x15, [x0, #96 - 256]
|
||||
stnp x16, x17, [x0, #112 - 256]
|
||||
|
||||
ret
|
||||
ENDPROC(copy_page)
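
Note: the copy_page rewrite above frees up x18 (reserved elsewhere in this series as the platform register for the shadow call stack): instead of counting bytes in x18, the destination is advanced up front, stores go to negative offsets, and the loop ends when the destination is page aligned again. The load-ahead/store-behind pipelining is unchanged; conceptually it is the following C model (illustrative only, assuming a 4 KiB page and 128-byte blocks, not the kernel routine):

#include <string.h>

#define PAGE_SZ   4096
#define BLOCK_SZ  128

/* Conceptual model: load the next 128-byte block while the previous
 * one is being stored, then drain the final block after the loop. */
static void copy_page_model(unsigned char *dst, const unsigned char *src)
{
	unsigned char block[BLOCK_SZ];
	size_t off;

	memcpy(block, src, BLOCK_SZ);				/* prime: first load */
	for (off = BLOCK_SZ; off < PAGE_SZ; off += BLOCK_SZ) {
		memcpy(dst + off - BLOCK_SZ, block, BLOCK_SZ);	/* store previous block */
		memcpy(block, src + off, BLOCK_SZ);		/* load next block */
	}
	memcpy(dst + PAGE_SZ - BLOCK_SZ, block, BLOCK_SZ);	/* drain the last block */
}
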
|
||||
|
@ -57,3 +57,4 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
|
||||
dev->dma_ops = &xen_swiotlb_dma_ops;
|
||||
#endif
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(arch_setup_dma_ops);
|
||||
|
@ -95,6 +95,8 @@ ENDPROC(cpu_soft_restart)
|
||||
* cpu_do_suspend - save CPU registers context
|
||||
*
|
||||
* x0: virtual address of context pointer
|
||||
*
|
||||
* This must be kept in sync with struct cpu_suspend_ctx in <asm/suspend.h>.
|
||||
*/
|
||||
ENTRY(cpu_do_suspend)
|
||||
mrs x2, tpidr_el0
|
||||
@ -119,6 +121,11 @@ alternative_endif
|
||||
stp x8, x9, [x0, #48]
|
||||
stp x10, x11, [x0, #64]
|
||||
stp x12, x13, [x0, #80]
|
||||
/*
|
||||
* Save x18 as it may be used as a platform register, e.g. by shadow
|
||||
* call stack.
|
||||
*/
|
||||
str x18, [x0, #96]
|
||||
ret
|
||||
ENDPROC(cpu_do_suspend)
|
||||
|
||||
@ -135,6 +142,13 @@ ENTRY(cpu_do_resume)
|
||||
ldp x9, x10, [x0, #48]
|
||||
ldp x11, x12, [x0, #64]
|
||||
ldp x13, x14, [x0, #80]
|
||||
/*
|
||||
* Restore x18, as it may be used as a platform register, and clear
|
||||
* the buffer to minimize the risk of exposure when used for shadow
|
||||
* call stack.
|
||||
*/
|
||||
ldr x18, [x0, #96]
|
||||
str xzr, [x0, #96]
|
||||
msr tpidr_el0, x2
|
||||
msr tpidrro_el0, x3
|
||||
msr contextidr_el1, x4
|
||||
@ -296,15 +310,15 @@ ENTRY(idmap_kpti_install_ng_mappings)
|
||||
/* We're the boot CPU. Wait for the others to catch up */
|
||||
sevl
|
||||
1: wfe
|
||||
ldaxr w18, [flag_ptr]
|
||||
eor w18, w18, num_cpus
|
||||
cbnz w18, 1b
|
||||
ldaxr w17, [flag_ptr]
|
||||
eor w17, w17, num_cpus
|
||||
cbnz w17, 1b
|
||||
|
||||
/* We need to walk swapper, so turn off the MMU. */
|
||||
pre_disable_mmu_workaround
|
||||
mrs x18, sctlr_el1
|
||||
bic x18, x18, #SCTLR_ELx_M
|
||||
msr sctlr_el1, x18
|
||||
mrs x17, sctlr_el1
|
||||
bic x17, x17, #SCTLR_ELx_M
|
||||
msr sctlr_el1, x17
|
||||
isb
|
||||
|
||||
/* Everybody is enjoying the idmap, so we can rewrite swapper. */
|
||||
@ -327,9 +341,9 @@ skip_pgd:
|
||||
isb
|
||||
|
||||
/* We're done: fire up the MMU again */
|
||||
mrs x18, sctlr_el1
|
||||
orr x18, x18, #SCTLR_ELx_M
|
||||
msr sctlr_el1, x18
|
||||
mrs x17, sctlr_el1
|
||||
orr x17, x17, #SCTLR_ELx_M
|
||||
msr sctlr_el1, x17
|
||||
isb
|
||||
|
||||
/*
|
||||
@ -399,34 +413,9 @@ skip_pte:
|
||||
b.ne do_pte
|
||||
b next_pmd
|
||||
|
||||
/* Secondary CPUs end up here */
|
||||
__idmap_kpti_secondary:
|
||||
/* Uninstall swapper before surgery begins */
|
||||
__idmap_cpu_set_reserved_ttbr1 x18, x17
|
||||
|
||||
/* Increment the flag to let the boot CPU we're ready */
|
||||
1: ldxr w18, [flag_ptr]
|
||||
add w18, w18, #1
|
||||
stxr w17, w18, [flag_ptr]
|
||||
cbnz w17, 1b
|
||||
|
||||
/* Wait for the boot CPU to finish messing around with swapper */
|
||||
sevl
|
||||
1: wfe
|
||||
ldxr w18, [flag_ptr]
|
||||
cbnz w18, 1b
|
||||
|
||||
/* All done, act like nothing happened */
|
||||
offset_ttbr1 swapper_ttb, x18
|
||||
msr ttbr1_el1, swapper_ttb
|
||||
isb
|
||||
ret
|
||||
|
||||
.unreq cpu
|
||||
.unreq num_cpus
|
||||
.unreq swapper_pa
|
||||
.unreq swapper_ttb
|
||||
.unreq flag_ptr
|
||||
.unreq cur_pgdp
|
||||
.unreq end_pgdp
|
||||
.unreq pgd
|
||||
@ -439,6 +428,32 @@ __idmap_kpti_secondary:
|
||||
.unreq cur_ptep
|
||||
.unreq end_ptep
|
||||
.unreq pte
|
||||
|
||||
/* Secondary CPUs end up here */
|
||||
__idmap_kpti_secondary:
|
||||
/* Uninstall swapper before surgery begins */
|
||||
__idmap_cpu_set_reserved_ttbr1 x16, x17
|
||||
|
||||
/* Increment the flag to let the boot CPU we're ready */
|
||||
1: ldxr w16, [flag_ptr]
|
||||
add w16, w16, #1
|
||||
stxr w17, w16, [flag_ptr]
|
||||
cbnz w17, 1b
|
||||
|
||||
/* Wait for the boot CPU to finish messing around with swapper */
|
||||
sevl
|
||||
1: wfe
|
||||
ldxr w16, [flag_ptr]
|
||||
cbnz w16, 1b
|
||||
|
||||
/* All done, act like nothing happened */
|
||||
offset_ttbr1 swapper_ttb, x16
|
||||
msr ttbr1_el1, swapper_ttb
|
||||
isb
|
||||
ret
|
||||
|
||||
.unreq swapper_ttb
|
||||
.unreq flag_ptr
|
||||
ENDPROC(idmap_kpti_install_ng_mappings)
|
||||
.popsection
|
||||
#endif
|
||||
|
@ -976,3 +976,14 @@ void bpf_jit_free_exec(void *addr)
|
||||
{
|
||||
return vfree(addr);
|
||||
}
|
||||
|
||||
#ifdef CONFIG_CFI_CLANG
|
||||
bool arch_bpf_jit_check_func(const struct bpf_prog *prog)
|
||||
{
|
||||
const uintptr_t func = (const uintptr_t)prog->bpf_func;
|
||||
|
||||
/* bpf_func must be correctly aligned and within the BPF JIT region */
|
||||
return (func >= BPF_JIT_REGION_START && func < BPF_JIT_REGION_END &&
|
||||
IS_ALIGNED(func, sizeof(u32)));
|
||||
}
|
||||
#endif
|
||||
|
@ -147,4 +147,5 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
|
||||
{
|
||||
dev->dma_coherent = coherent;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(arch_setup_dma_ops);
|
||||
#endif
|
||||
|
@ -152,9 +152,12 @@ void _kvmppc_save_tm_pr(struct kvm_vcpu *vcpu, u64 guest_msr);
|
||||
/* Patch sites */
|
||||
extern s32 patch__call_flush_count_cache;
|
||||
extern s32 patch__flush_count_cache_return;
|
||||
extern s32 patch__flush_link_stack_return;
|
||||
extern s32 patch__call_kvm_flush_link_stack;
|
||||
extern s32 patch__memset_nocache, patch__memcpy_nocache;
|
||||
|
||||
extern long flush_count_cache;
|
||||
extern long kvm_flush_link_stack;
|
||||
|
||||
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
|
||||
void kvmppc_save_tm_hv(struct kvm_vcpu *vcpu, u64 msr, bool preserve_nv);
|
||||
|
@ -5,8 +5,22 @@
|
||||
|
||||
#include <linux/elf.h>
|
||||
#include <linux/uaccess.h>
|
||||
|
||||
#define arch_is_kernel_initmem_freed arch_is_kernel_initmem_freed
|
||||
|
||||
#include <asm-generic/sections.h>
|
||||
|
||||
extern bool init_mem_is_free;
|
||||
|
||||
static inline int arch_is_kernel_initmem_freed(unsigned long addr)
|
||||
{
|
||||
if (!init_mem_is_free)
|
||||
return 0;
|
||||
|
||||
return addr >= (unsigned long)__init_begin &&
|
||||
addr < (unsigned long)__init_end;
|
||||
}
|
||||
|
||||
extern char __head_end[];
|
||||
|
||||
#ifdef __powerpc64__
|
||||
|
@ -81,6 +81,9 @@ static inline bool security_ftr_enabled(unsigned long feature)
|
||||
// Software required to flush count cache on context switch
|
||||
#define SEC_FTR_FLUSH_COUNT_CACHE 0x0000000000000400ull
|
||||
|
||||
// Software required to flush link stack on context switch
|
||||
#define SEC_FTR_FLUSH_LINK_STACK 0x0000000000001000ull
|
||||
|
||||
|
||||
// Features enabled by default
|
||||
#define SEC_FTR_DEFAULT \
|
||||
|
@ -82,6 +82,7 @@ struct vdso_data {
|
||||
__s32 wtom_clock_nsec; /* Wall to monotonic clock nsec */
|
||||
__s64 wtom_clock_sec; /* Wall to monotonic clock sec */
|
||||
struct timespec stamp_xtime; /* xtime as at tb_orig_stamp */
|
||||
__u32 hrtimer_res; /* hrtimer resolution */
|
||||
__u32 syscall_map_64[SYSCALL_MAP_SIZE]; /* map of syscalls */
|
||||
__u32 syscall_map_32[SYSCALL_MAP_SIZE]; /* map of syscalls */
|
||||
};
|
||||
@ -103,6 +104,7 @@ struct vdso_data {
|
||||
__s32 wtom_clock_nsec;
|
||||
struct timespec stamp_xtime; /* xtime as at tb_orig_stamp */
|
||||
__u32 stamp_sec_fraction; /* fractional seconds of stamp_xtime */
|
||||
__u32 hrtimer_res; /* hrtimer resolution */
|
||||
__u32 syscall_map_32[SYSCALL_MAP_SIZE]; /* map of syscalls */
|
||||
__u32 dcache_block_size; /* L1 d-cache block size */
|
||||
__u32 icache_block_size; /* L1 i-cache block size */
|
||||
|
@ -5,8 +5,8 @@
|
||||
|
||||
CFLAGS_ptrace.o += -DUTS_MACHINE='"$(UTS_MACHINE)"'
|
||||
|
||||
# Disable clang warning for using setjmp without setjmp.h header
|
||||
CFLAGS_crash.o += $(call cc-disable-warning, builtin-requires-header)
|
||||
# Avoid clang warnings around longjmp/setjmp declarations
|
||||
CFLAGS_crash.o += -ffreestanding
|
||||
|
||||
ifdef CONFIG_PPC64
|
||||
CFLAGS_prom_init.o += $(NO_MINIMAL_TOC)
|
||||
|
@ -387,6 +387,7 @@ int main(void)
|
||||
OFFSET(WTOM_CLOCK_NSEC, vdso_data, wtom_clock_nsec);
|
||||
OFFSET(STAMP_XTIME, vdso_data, stamp_xtime);
|
||||
OFFSET(STAMP_SEC_FRAC, vdso_data, stamp_sec_fraction);
|
||||
OFFSET(CLOCK_HRTIMER_RES, vdso_data, hrtimer_res);
|
||||
OFFSET(CFG_ICACHE_BLOCKSZ, vdso_data, icache_block_size);
|
||||
OFFSET(CFG_DCACHE_BLOCKSZ, vdso_data, dcache_block_size);
|
||||
OFFSET(CFG_ICACHE_LOGBLOCKSZ, vdso_data, icache_log_block_size);
|
||||
@ -417,7 +418,6 @@ int main(void)
|
||||
DEFINE(CLOCK_REALTIME_COARSE, CLOCK_REALTIME_COARSE);
|
||||
DEFINE(CLOCK_MONOTONIC_COARSE, CLOCK_MONOTONIC_COARSE);
|
||||
DEFINE(NSEC_PER_SEC, NSEC_PER_SEC);
|
||||
DEFINE(CLOCK_REALTIME_RES, MONOTONIC_RES_NSEC);
|
||||
|
||||
#ifdef CONFIG_BUG
|
||||
DEFINE(BUG_ENTRY_SIZE, sizeof(struct bug_entry));
|
||||
|
@ -537,6 +537,7 @@ flush_count_cache:
|
||||
/* Save LR into r9 */
|
||||
mflr r9
|
||||
|
||||
// Flush the link stack
|
||||
.rept 64
|
||||
bl .+4
|
||||
.endr
|
||||
@ -546,6 +547,11 @@ flush_count_cache:
|
||||
.balign 32
|
||||
/* Restore LR */
|
||||
1: mtlr r9
|
||||
|
||||
// If we're just flushing the link stack, return here
|
||||
3: nop
|
||||
patch_site 3b patch__flush_link_stack_return
|
||||
|
||||
li r9,0x7fff
|
||||
mtctr r9
|
||||
|
||||
|
@ -82,7 +82,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
|
||||
subf r8,r6,r4 /* compute length */
|
||||
add r8,r8,r5 /* ensure we get enough */
|
||||
lwz r9,DCACHEL1LOGBLOCKSIZE(r10) /* Get log-2 of cache block size */
|
||||
srw. r8,r8,r9 /* compute line count */
|
||||
srd. r8,r8,r9 /* compute line count */
|
||||
beqlr /* nothing to do? */
|
||||
mtctr r8
|
||||
1: dcbst 0,r6
|
||||
@ -98,7 +98,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE)
|
||||
subf r8,r6,r4 /* compute length */
|
||||
add r8,r8,r5
|
||||
lwz r9,ICACHEL1LOGBLOCKSIZE(r10) /* Get log-2 of Icache block size */
|
||||
srw. r8,r8,r9 /* compute line count */
|
||||
srd. r8,r8,r9 /* compute line count */
|
||||
beqlr /* nothing to do? */
|
||||
mtctr r8
|
||||
2: icbi 0,r6
|
||||
|
@ -24,6 +24,7 @@ enum count_cache_flush_type {
|
||||
COUNT_CACHE_FLUSH_HW = 0x4,
|
||||
};
|
||||
static enum count_cache_flush_type count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
|
||||
static bool link_stack_flush_enabled;
|
||||
|
||||
bool barrier_nospec_enabled;
|
||||
static bool no_nospec;
|
||||
@ -212,11 +213,19 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, c
|
||||
|
||||
if (ccd)
|
||||
seq_buf_printf(&s, "Indirect branch cache disabled");
|
||||
|
||||
if (link_stack_flush_enabled)
|
||||
seq_buf_printf(&s, ", Software link stack flush");
|
||||
|
||||
} else if (count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
|
||||
seq_buf_printf(&s, "Mitigation: Software count cache flush");
|
||||
|
||||
if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
|
||||
seq_buf_printf(&s, " (hardware accelerated)");
|
||||
|
||||
if (link_stack_flush_enabled)
|
||||
seq_buf_printf(&s, ", Software link stack flush");
|
||||
|
||||
} else if (btb_flush_enabled) {
|
||||
seq_buf_printf(&s, "Mitigation: Branch predictor state flush");
|
||||
} else {
|
||||
@ -377,18 +386,49 @@ static __init int stf_barrier_debugfs_init(void)
|
||||
device_initcall(stf_barrier_debugfs_init);
|
||||
#endif /* CONFIG_DEBUG_FS */
|
||||
|
||||
static void no_count_cache_flush(void)
|
||||
{
|
||||
count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
|
||||
pr_info("count-cache-flush: software flush disabled.\n");
|
||||
}
|
||||
|
||||
static void toggle_count_cache_flush(bool enable)
|
||||
{
|
||||
if (!enable || !security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE)) {
|
||||
if (!security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE) &&
|
||||
!security_ftr_enabled(SEC_FTR_FLUSH_LINK_STACK))
|
||||
enable = false;
|
||||
|
||||
if (!enable) {
|
||||
patch_instruction_site(&patch__call_flush_count_cache, PPC_INST_NOP);
|
||||
count_cache_flush_type = COUNT_CACHE_FLUSH_NONE;
|
||||
pr_info("count-cache-flush: software flush disabled.\n");
|
||||
#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
|
||||
patch_instruction_site(&patch__call_kvm_flush_link_stack, PPC_INST_NOP);
|
||||
#endif
|
||||
pr_info("link-stack-flush: software flush disabled.\n");
|
||||
link_stack_flush_enabled = false;
|
||||
no_count_cache_flush();
|
||||
return;
|
||||
}
|
||||
|
||||
// This enables the branch from _switch to flush_count_cache
|
||||
patch_branch_site(&patch__call_flush_count_cache,
|
||||
(u64)&flush_count_cache, BRANCH_SET_LINK);
|
||||
|
||||
#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
|
||||
// This enables the branch from guest_exit_cont to kvm_flush_link_stack
|
||||
patch_branch_site(&patch__call_kvm_flush_link_stack,
|
||||
(u64)&kvm_flush_link_stack, BRANCH_SET_LINK);
|
||||
#endif
|
||||
|
||||
pr_info("link-stack-flush: software flush enabled.\n");
|
||||
link_stack_flush_enabled = true;
|
||||
|
||||
// If we just need to flush the link stack, patch an early return
|
||||
if (!security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE)) {
|
||||
patch_instruction_site(&patch__flush_link_stack_return, PPC_INST_BLR);
|
||||
no_count_cache_flush();
|
||||
return;
|
||||
}
|
||||
|
||||
if (!security_ftr_enabled(SEC_FTR_BCCTR_FLUSH_ASSIST)) {
|
||||
count_cache_flush_type = COUNT_CACHE_FLUSH_SW;
|
||||
pr_info("count-cache-flush: full software flush sequence enabled.\n");
|
||||
@ -407,11 +447,20 @@ void setup_count_cache_flush(void)
|
||||
if (no_spectrev2 || cpu_mitigations_off()) {
|
||||
if (security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED) ||
|
||||
security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED))
|
||||
pr_warn("Spectre v2 mitigations not under software control, can't disable\n");
|
||||
pr_warn("Spectre v2 mitigations not fully under software control, can't disable\n");
|
||||
|
||||
enable = false;
|
||||
}
|
||||
|
||||
/*
|
||||
* There's no firmware feature flag/hypervisor bit to tell us we need to
|
||||
* flush the link stack on context switch. So we set it here if we see
|
||||
* either of the Spectre v2 mitigations that aim to protect userspace.
|
||||
*/
|
||||
if (security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED) ||
|
||||
security_ftr_enabled(SEC_FTR_FLUSH_COUNT_CACHE))
|
||||
security_ftr_set(SEC_FTR_FLUSH_LINK_STACK);
|
||||
|
||||
toggle_count_cache_flush(enable);
|
||||
}
|
||||
|
||||
|
@ -959,6 +959,7 @@ void update_vsyscall(struct timekeeper *tk)
|
||||
vdso_data->wtom_clock_nsec = tk->wall_to_monotonic.tv_nsec;
|
||||
vdso_data->stamp_xtime = xt;
|
||||
vdso_data->stamp_sec_fraction = frac_sec;
|
||||
vdso_data->hrtimer_res = hrtimer_resolution;
|
||||
smp_wmb();
|
||||
++(vdso_data->tb_update_count);
|
||||
}
|
||||
|
@ -156,12 +156,15 @@ V_FUNCTION_BEGIN(__kernel_clock_getres)
|
||||
cror cr0*4+eq,cr0*4+eq,cr1*4+eq
|
||||
bne cr0,99f
|
||||
|
||||
mflr r12
|
||||
.cfi_register lr,r12
|
||||
bl __get_datapage@local /* get data page */
|
||||
lwz r5, CLOCK_HRTIMER_RES(r3)
|
||||
mtlr r12
|
||||
li r3,0
|
||||
cmpli cr0,r4,0
|
||||
crclr cr0*4+so
|
||||
beqlr
|
||||
lis r5,CLOCK_REALTIME_RES@h
|
||||
ori r5,r5,CLOCK_REALTIME_RES@l
|
||||
stw r3,TSPC32_TV_SEC(r4)
|
||||
stw r5,TSPC32_TV_NSEC(r4)
|
||||
blr
|
||||
|
@ -35,7 +35,7 @@ V_FUNCTION_BEGIN(__kernel_sync_dicache)
|
||||
subf r8,r6,r4 /* compute length */
|
||||
add r8,r8,r5 /* ensure we get enough */
|
||||
lwz r9,CFG_DCACHE_LOGBLOCKSZ(r10)
|
||||
srw. r8,r8,r9 /* compute line count */
|
||||
srd. r8,r8,r9 /* compute line count */
|
||||
crclr cr0*4+so
|
||||
beqlr /* nothing to do? */
|
||||
mtctr r8
|
||||
@ -52,7 +52,7 @@ V_FUNCTION_BEGIN(__kernel_sync_dicache)
|
||||
subf r8,r6,r4 /* compute length */
|
||||
add r8,r8,r5
|
||||
lwz r9,CFG_ICACHE_LOGBLOCKSZ(r10)
|
||||
srw. r8,r8,r9 /* compute line count */
|
||||
srd. r8,r8,r9 /* compute line count */
|
||||
crclr cr0*4+so
|
||||
beqlr /* nothing to do? */
|
||||
mtctr r8
|
||||
|
@ -186,12 +186,15 @@ V_FUNCTION_BEGIN(__kernel_clock_getres)
|
||||
cror cr0*4+eq,cr0*4+eq,cr1*4+eq
|
||||
bne cr0,99f
|
||||
|
||||
mflr r12
|
||||
.cfi_register lr,r12
|
||||
bl V_LOCAL_FUNC(__get_datapage)
|
||||
lwz r5, CLOCK_HRTIMER_RES(r3)
|
||||
mtlr r12
|
||||
li r3,0
|
||||
cmpldi cr0,r4,0
|
||||
crclr cr0*4+so
|
||||
beqlr
|
||||
lis r5,CLOCK_REALTIME_RES@h
|
||||
ori r5,r5,CLOCK_REALTIME_RES@l
|
||||
std r3,TSPC64_TV_SEC(r4)
|
||||
std r5,TSPC64_TV_NSEC(r4)
|
||||
blr
|
||||
|
@ -11,6 +11,7 @@
|
||||
*/
|
||||
|
||||
#include <asm/ppc_asm.h>
|
||||
#include <asm/code-patching-asm.h>
|
||||
#include <asm/kvm_asm.h>
|
||||
#include <asm/reg.h>
|
||||
#include <asm/mmu.h>
|
||||
@ -1487,6 +1488,13 @@ guest_exit_cont: /* r9 = vcpu, r12 = trap, r13 = paca */
|
||||
1:
|
||||
#endif /* CONFIG_KVM_XICS */
|
||||
|
||||
/*
|
||||
* Possibly flush the link stack here, before we do a blr in
|
||||
* guest_exit_short_path.
|
||||
*/
|
||||
1: nop
|
||||
patch_site 1b patch__call_kvm_flush_link_stack
|
||||
|
||||
/* If we came in through the P9 short path, go back out to C now */
|
||||
lwz r0, STACK_SLOT_SHORT_PATH(r1)
|
||||
cmpwi r0, 0
|
||||
@ -1963,6 +1971,28 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
|
||||
mtlr r0
|
||||
blr
|
||||
|
||||
.balign 32
|
||||
.global kvm_flush_link_stack
|
||||
kvm_flush_link_stack:
|
||||
/* Save LR into r0 */
|
||||
mflr r0
|
||||
|
||||
/* Flush the link stack. On Power8 it's up to 32 entries in size. */
|
||||
.rept 32
|
||||
bl .+4
|
||||
.endr
|
||||
|
||||
/* And on Power9 it's up to 64. */
|
||||
BEGIN_FTR_SECTION
|
||||
.rept 32
|
||||
bl .+4
|
||||
.endr
|
||||
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
|
||||
|
||||
/* Restore LR */
|
||||
mtlr r0
|
||||
blr
|
||||
|
||||
kvmppc_guest_external:
|
||||
/* External interrupt, first check for host_ipi. If this is
|
||||
* set, we know the host wants us out so let's do it now
|
||||
|
@ -2005,6 +2005,10 @@ static int kvmppc_xive_create(struct kvm_device *dev, u32 type)
|
||||
|
||||
pr_devel("Creating xive for partition\n");
|
||||
|
||||
/* Already there ? */
|
||||
if (kvm->arch.xive)
|
||||
return -EEXIST;
|
||||
|
||||
xive = kvmppc_xive_get_device(kvm, type);
|
||||
if (!xive)
|
||||
return -ENOMEM;
|
||||
@ -2014,12 +2018,6 @@ static int kvmppc_xive_create(struct kvm_device *dev, u32 type)
|
||||
xive->kvm = kvm;
|
||||
mutex_init(&xive->lock);
|
||||
|
||||
/* Already there ? */
|
||||
if (kvm->arch.xive)
|
||||
ret = -EEXIST;
|
||||
else
|
||||
kvm->arch.xive = xive;
|
||||
|
||||
/* We use the default queue size set by the host */
|
||||
xive->q_order = xive_native_default_eq_shift();
|
||||
if (xive->q_order < PAGE_SHIFT)
|
||||
@ -2039,6 +2037,7 @@ static int kvmppc_xive_create(struct kvm_device *dev, u32 type)
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
kvm->arch.xive = xive;
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -50,6 +50,24 @@ static void kvmppc_xive_native_cleanup_queue(struct kvm_vcpu *vcpu, int prio)
|
||||
}
|
||||
}
|
||||
|
||||
static int kvmppc_xive_native_configure_queue(u32 vp_id, struct xive_q *q,
|
||||
u8 prio, __be32 *qpage,
|
||||
u32 order, bool can_escalate)
|
||||
{
|
||||
int rc;
|
||||
__be32 *qpage_prev = q->qpage;
|
||||
|
||||
rc = xive_native_configure_queue(vp_id, q, prio, qpage, order,
|
||||
can_escalate);
|
||||
if (rc)
|
||||
return rc;
|
||||
|
||||
if (qpage_prev)
|
||||
put_page(virt_to_page(qpage_prev));
|
||||
|
||||
return rc;
|
||||
}
|
||||
|
||||
void kvmppc_xive_native_cleanup_vcpu(struct kvm_vcpu *vcpu)
|
||||
{
|
||||
struct kvmppc_xive_vcpu *xc = vcpu->arch.xive_vcpu;
|
||||
@ -582,19 +600,14 @@ static int kvmppc_xive_native_set_queue_config(struct kvmppc_xive *xive,
|
||||
q->guest_qaddr = 0;
|
||||
q->guest_qshift = 0;
|
||||
|
||||
rc = xive_native_configure_queue(xc->vp_id, q, priority,
|
||||
NULL, 0, true);
|
||||
rc = kvmppc_xive_native_configure_queue(xc->vp_id, q, priority,
|
||||
NULL, 0, true);
|
||||
if (rc) {
|
||||
pr_err("Failed to reset queue %d for VCPU %d: %d\n",
|
||||
priority, xc->server_num, rc);
|
||||
return rc;
|
||||
}
|
||||
|
||||
if (q->qpage) {
|
||||
put_page(virt_to_page(q->qpage));
|
||||
q->qpage = NULL;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
@ -624,12 +637,6 @@ static int kvmppc_xive_native_set_queue_config(struct kvmppc_xive *xive,
|
||||
|
||||
srcu_idx = srcu_read_lock(&kvm->srcu);
|
||||
gfn = gpa_to_gfn(kvm_eq.qaddr);
|
||||
page = gfn_to_page(kvm, gfn);
|
||||
if (is_error_page(page)) {
|
||||
srcu_read_unlock(&kvm->srcu, srcu_idx);
|
||||
pr_err("Couldn't get queue page %llx!\n", kvm_eq.qaddr);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
page_size = kvm_host_page_size(kvm, gfn);
|
||||
if (1ull << kvm_eq.qshift > page_size) {
|
||||
@ -638,6 +645,13 @@ static int kvmppc_xive_native_set_queue_config(struct kvmppc_xive *xive,
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
page = gfn_to_page(kvm, gfn);
|
||||
if (is_error_page(page)) {
|
||||
srcu_read_unlock(&kvm->srcu, srcu_idx);
|
||||
pr_err("Couldn't get queue page %llx!\n", kvm_eq.qaddr);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
qaddr = page_to_virt(page) + (kvm_eq.qaddr & ~PAGE_MASK);
|
||||
srcu_read_unlock(&kvm->srcu, srcu_idx);
|
||||
|
||||
@ -653,8 +667,8 @@ static int kvmppc_xive_native_set_queue_config(struct kvmppc_xive *xive,
|
||||
* OPAL level because the use of END ESBs is not supported by
|
||||
* Linux.
|
||||
*/
|
||||
rc = xive_native_configure_queue(xc->vp_id, q, priority,
|
||||
(__be32 *) qaddr, kvm_eq.qshift, true);
|
||||
rc = kvmppc_xive_native_configure_queue(xc->vp_id, q, priority,
|
||||
(__be32 *) qaddr, kvm_eq.qshift, true);
|
||||
if (rc) {
|
||||
pr_err("Failed to configure queue %d for VCPU %d: %d\n",
|
||||
priority, xc->server_num, rc);
|
||||
@ -1081,7 +1095,6 @@ static int kvmppc_xive_native_create(struct kvm_device *dev, u32 type)
|
||||
dev->private = xive;
|
||||
xive->dev = dev;
|
||||
xive->kvm = kvm;
|
||||
kvm->arch.xive = xive;
|
||||
mutex_init(&xive->mapping_lock);
|
||||
mutex_init(&xive->lock);
|
||||
|
||||
@ -1102,6 +1115,7 @@ static int kvmppc_xive_native_create(struct kvm_device *dev, u32 type)
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
kvm->arch.xive = xive;
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -285,7 +285,14 @@ static int opal_imc_counters_probe(struct platform_device *pdev)
|
||||
domain = IMC_DOMAIN_THREAD;
|
||||
break;
|
||||
case IMC_TYPE_TRACE:
|
||||
domain = IMC_DOMAIN_TRACE;
|
||||
/*
|
||||
* FIXME. Using trace_imc events to monitor application
|
||||
* or KVM thread performance can cause a checkstop
|
||||
* (system crash).
|
||||
* Disable it for now.
|
||||
*/
|
||||
pr_info_once("IMC: disabling trace_imc PMU\n");
|
||||
domain = -1;
|
||||
break;
|
||||
default:
|
||||
pr_warn("IMC Unknown Device type \n");
|
||||
|
@ -1035,6 +1035,15 @@ static int xive_irq_alloc_data(unsigned int virq, irq_hw_number_t hw)
|
||||
xd->target = XIVE_INVALID_TARGET;
|
||||
irq_set_handler_data(virq, xd);
|
||||
|
||||
/*
|
||||
* Turn OFF by default the interrupt being mapped. A side
|
||||
* effect of this check is the mapping the ESB page of the
|
||||
* interrupt in the Linux address space. This prevents page
|
||||
* fault issues in the crash handler which masks all
|
||||
* interrupts.
|
||||
*/
|
||||
xive_esb_read(xd, XIVE_ESB_SET_PQ_01);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -392,20 +392,28 @@ static int xive_spapr_populate_irq_data(u32 hw_irq, struct xive_irq_data *data)
|
||||
data->esb_shift = esb_shift;
|
||||
data->trig_page = trig_page;
|
||||
|
||||
data->hw_irq = hw_irq;
|
||||
|
||||
/*
|
||||
* No chip-id for the sPAPR backend. This has an impact how we
|
||||
* pick a target. See xive_pick_irq_target().
|
||||
*/
|
||||
data->src_chip = XIVE_INVALID_CHIP_ID;
|
||||
|
||||
/*
|
||||
* When the H_INT_ESB flag is set, the H_INT_ESB hcall should
|
||||
* be used for interrupt management. Skip the remapping of the
|
||||
* ESB pages which are not available.
|
||||
*/
|
||||
if (data->flags & XIVE_IRQ_FLAG_H_INT_ESB)
|
||||
return 0;
|
||||
|
||||
data->eoi_mmio = ioremap(data->eoi_page, 1u << data->esb_shift);
|
||||
if (!data->eoi_mmio) {
|
||||
pr_err("Failed to map EOI page for irq 0x%x\n", hw_irq);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
data->hw_irq = hw_irq;
|
||||
|
||||
/* Full function page supports trigger */
|
||||
if (flags & XIVE_SRC_TRIGGER) {
|
||||
data->trig_mmio = data->eoi_mmio;
|
||||
|
@ -1,8 +1,8 @@
|
||||
# SPDX-License-Identifier: GPL-2.0
|
||||
# Makefile for xmon
|
||||
|
||||
# Disable clang warning for using setjmp without setjmp.h header
|
||||
subdir-ccflags-y := $(call cc-disable-warning, builtin-requires-header)
|
||||
# Avoid clang warnings around longjmp/setjmp declarations
|
||||
subdir-ccflags-y := -ffreestanding
|
||||
|
||||
GCOV_PROFILE := n
|
||||
KCOV_INSTRUMENT := n
|
||||
|
@ -170,6 +170,11 @@ void startup_kernel(void)
|
||||
handle_relocs(__kaslr_offset);
|
||||
|
||||
if (__kaslr_offset) {
|
||||
/*
|
||||
* Save KASLR offset for early dumps, before vmcore_info is set.
|
||||
* Mark as uneven to distinguish from real vmcore_info pointer.
|
||||
*/
|
||||
S390_lowcore.vmcore_info = __kaslr_offset | 0x1UL;
|
||||
/* Clear non-relocated kernel */
|
||||
if (IS_ENABLED(CONFIG_KERNEL_UNCOMPRESSED))
|
||||
memset(img, 0, vmlinux.image_size);
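
Note: startup_kernel() above parks the KASLR offset in S390_lowcore.vmcore_info with the low bit set ("mark as uneven"), so early dump tools can tell the tagged offset apart from the properly aligned vmcoreinfo pointer written later by arch_crash_save_vmcoreinfo(). A small illustration of the low-bit tagging trick (values invented):

#include <stdio.h>

int main(void)
{
	/* A real vmcoreinfo note pointer is at least 2-byte aligned,
	 * so bit 0 is free to mean "this is only a KASLR offset". */
	unsigned long kaslr_offset = 0x1c340000UL;	/* example value */
	unsigned long slot = kaslr_offset | 0x1UL;	/* tagged */

	if (slot & 0x1UL)
		printf("early dump: KASLR offset 0x%lx\n", slot & ~0x1UL);
	else
		printf("vmcoreinfo note at 0x%lx\n", slot);
	return 0;
}
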
|
||||
|
@ -1173,8 +1173,6 @@ void gmap_pmdp_idte_global(struct mm_struct *mm, unsigned long vmaddr);
|
||||
static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
|
||||
pte_t *ptep, pte_t entry)
|
||||
{
|
||||
if (!MACHINE_HAS_NX)
|
||||
pte_val(entry) &= ~_PAGE_NOEXEC;
|
||||
if (pte_present(entry))
|
||||
pte_val(entry) &= ~_PAGE_UNUSED;
|
||||
if (mm_has_pgste(mm))
|
||||
@ -1191,6 +1189,8 @@ static inline pte_t mk_pte_phys(unsigned long physpage, pgprot_t pgprot)
|
||||
{
|
||||
pte_t __pte;
|
||||
pte_val(__pte) = physpage + pgprot_val(pgprot);
|
||||
if (!MACHINE_HAS_NX)
|
||||
pte_val(__pte) &= ~_PAGE_NOEXEC;
|
||||
return pte_mkyoung(__pte);
|
||||
}
|
||||
|
||||
|
@ -254,10 +254,10 @@ void arch_crash_save_vmcoreinfo(void)
|
||||
VMCOREINFO_SYMBOL(lowcore_ptr);
|
||||
VMCOREINFO_SYMBOL(high_memory);
|
||||
VMCOREINFO_LENGTH(lowcore_ptr, NR_CPUS);
|
||||
mem_assign_absolute(S390_lowcore.vmcore_info, paddr_vmcoreinfo_note());
|
||||
vmcoreinfo_append_str("SDMA=%lx\n", __sdma);
|
||||
vmcoreinfo_append_str("EDMA=%lx\n", __edma);
|
||||
vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
|
||||
mem_assign_absolute(S390_lowcore.vmcore_info, paddr_vmcoreinfo_note());
|
||||
}
|
||||
|
||||
void machine_shutdown(void)
|
||||
|
@ -262,10 +262,13 @@ static void pcpu_prepare_secondary(struct pcpu *pcpu, int cpu)
|
||||
lc->spinlock_index = 0;
|
||||
lc->percpu_offset = __per_cpu_offset[cpu];
|
||||
lc->kernel_asce = S390_lowcore.kernel_asce;
|
||||
lc->user_asce = S390_lowcore.kernel_asce;
|
||||
lc->machine_flags = S390_lowcore.machine_flags;
|
||||
lc->user_timer = lc->system_timer =
|
||||
lc->steal_timer = lc->avg_steal_timer = 0;
|
||||
__ctl_store(lc->cregs_save_area, 0, 15);
|
||||
lc->cregs_save_area[1] = lc->kernel_asce;
|
||||
lc->cregs_save_area[7] = lc->vdso_asce;
|
||||
save_access_regs((unsigned int *) lc->access_regs_save_area);
|
||||
memcpy(lc->stfle_fac_list, S390_lowcore.stfle_fac_list,
|
||||
sizeof(lc->stfle_fac_list));
|
||||
@ -816,6 +819,8 @@ static void smp_init_secondary(void)
|
||||
|
||||
S390_lowcore.last_update_clock = get_tod_clock();
|
||||
restore_access_regs(S390_lowcore.access_regs_save_area);
|
||||
set_cpu_flag(CIF_ASCE_PRIMARY);
|
||||
set_cpu_flag(CIF_ASCE_SECONDARY);
|
||||
cpu_init();
|
||||
preempt_disable();
|
||||
init_cpu_timer();
|
||||
|
@ -407,6 +407,7 @@ static inline void __iomem *ioremap(unsigned long offset, unsigned long size)
|
||||
}
|
||||
|
||||
#define ioremap_nocache(X,Y) ioremap((X),(Y))
|
||||
#define ioremap_uc(X,Y) ioremap((X),(Y))
|
||||
#define ioremap_wc(X,Y) ioremap((X),(Y))
|
||||
#define ioremap_wt(X,Y) ioremap((X),(Y))
|
||||
|
||||
|
@ -10,9 +10,11 @@ CONFIG_PSI=y
|
||||
CONFIG_IKCONFIG=y
|
||||
CONFIG_IKCONFIG_PROC=y
|
||||
CONFIG_IKHEADERS=m
|
||||
CONFIG_UCLAMP_TASK=y
|
||||
CONFIG_MEMCG=y
|
||||
CONFIG_MEMCG_SWAP=y
|
||||
CONFIG_RT_GROUP_SCHED=y
|
||||
CONFIG_UCLAMP_TASK_GROUP=y
|
||||
CONFIG_CGROUP_FREEZER=y
|
||||
CONFIG_CPUSETS=y
|
||||
CONFIG_CGROUP_CPUACCT=y
|
||||
@ -43,6 +45,7 @@ CONFIG_PARAVIRT=y
|
||||
CONFIG_NR_CPUS=32
|
||||
CONFIG_EFI=y
|
||||
CONFIG_CPU_FREQ_TIMES=y
|
||||
CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL=y
|
||||
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
|
||||
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
|
||||
CONFIG_CPUFREQ_DUMMY=m
|
||||
@ -180,6 +183,7 @@ CONFIG_FW_LOADER_USER_HELPER_FALLBACK=y
|
||||
# CONFIG_FW_CACHE is not set
|
||||
# CONFIG_ALLOW_DEV_COREDUMP is not set
|
||||
CONFIG_DEBUG_DEVRES=y
|
||||
CONFIG_GNSS=y
|
||||
CONFIG_OF=y
|
||||
CONFIG_ZRAM=y
|
||||
CONFIG_BLK_DEV_LOOP=y
|
||||
@ -250,6 +254,7 @@ CONFIG_SERIAL_8250=y
|
||||
CONFIG_SERIAL_8250_CONSOLE=y
|
||||
CONFIG_SERIAL_OF_PLATFORM=m
|
||||
CONFIG_SERIAL_DEV_BUS=y
|
||||
CONFIG_VIRTIO_CONSOLE=m
|
||||
CONFIG_HW_RANDOM=y
|
||||
CONFIG_HW_RANDOM_VIRTIO=m
|
||||
# CONFIG_I2C_COMPAT is not set
|
||||
@ -260,6 +265,7 @@ CONFIG_GPIOLIB=y
|
||||
CONFIG_DEVFREQ_THERMAL=y
|
||||
# CONFIG_X86_PKG_TEMP_THERMAL is not set
|
||||
CONFIG_REGULATOR=y
|
||||
CONFIG_REGULATOR_FIXED_VOLTAGE=y
|
||||
CONFIG_MEDIA_CAMERA_SUPPORT=y
|
||||
CONFIG_DRM=y
|
||||
# CONFIG_DRM_FBDEV_EMULATION is not set
|
||||
@ -281,6 +287,8 @@ CONFIG_HID_ELECOM=y
|
||||
CONFIG_HID_MAGICMOUSE=y
|
||||
CONFIG_HID_MICROSOFT=y
|
||||
CONFIG_HID_MULTITOUCH=y
|
||||
CONFIG_HID_PLANTRONICS=y
|
||||
CONFIG_HID_SONY=y
|
||||
CONFIG_USB_HIDDEV=y
|
||||
CONFIG_USB=y
|
||||
CONFIG_USB_GADGET=y
|
||||
@ -294,6 +302,8 @@ CONFIG_USB_CONFIGFS_F_MIDI=y
|
||||
CONFIG_MMC=m
|
||||
# CONFIG_PWRSEQ_EMMC is not set
|
||||
# CONFIG_PWRSEQ_SIMPLE is not set
|
||||
CONFIG_MMC_SDHCI=m
|
||||
CONFIG_MMC_SDHCI_PLTFM=m
|
||||
CONFIG_NEW_LEDS=y
|
||||
CONFIG_LEDS_CLASS=y
|
||||
CONFIG_LEDS_TRIGGERS=y
|
||||
@ -311,6 +321,7 @@ CONFIG_ION=y
|
||||
CONFIG_ION_SYSTEM_HEAP=y
|
||||
CONFIG_ION_SYSTEM_CONTIG_HEAP=y
|
||||
CONFIG_PM_DEVFREQ=y
|
||||
CONFIG_IIO=y
|
||||
CONFIG_ANDROID=y
|
||||
CONFIG_ANDROID_BINDER_IPC=y
|
||||
CONFIG_EXT4_FS=y
|
||||
|
@ -172,7 +172,7 @@
|
||||
ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_PTI
|
||||
.if \no_user_check == 0
|
||||
/* coming from usermode? */
|
||||
testl $SEGMENT_RPL_MASK, PT_CS(%esp)
|
||||
testl $USER_SEGMENT_RPL_MASK, PT_CS(%esp)
|
||||
jz .Lend_\@
|
||||
.endif
|
||||
/* On user-cr3? */
|
||||
@ -205,64 +205,76 @@
|
||||
#define CS_FROM_ENTRY_STACK (1 << 31)
|
||||
#define CS_FROM_USER_CR3 (1 << 30)
|
||||
#define CS_FROM_KERNEL (1 << 29)
|
||||
#define CS_FROM_ESPFIX (1 << 28)
|
||||
|
||||
.macro FIXUP_FRAME
|
||||
/*
|
||||
* The high bits of the CS dword (__csh) are used for CS_FROM_*.
|
||||
* Clear them in case hardware didn't do this for us.
|
||||
*/
|
||||
andl $0x0000ffff, 3*4(%esp)
|
||||
andl $0x0000ffff, 4*4(%esp)
|
||||
|
||||
#ifdef CONFIG_VM86
|
||||
testl $X86_EFLAGS_VM, 4*4(%esp)
|
||||
testl $X86_EFLAGS_VM, 5*4(%esp)
|
||||
jnz .Lfrom_usermode_no_fixup_\@
|
||||
#endif
|
||||
testl $SEGMENT_RPL_MASK, 3*4(%esp)
|
||||
testl $USER_SEGMENT_RPL_MASK, 4*4(%esp)
|
||||
jnz .Lfrom_usermode_no_fixup_\@
|
||||
|
||||
orl $CS_FROM_KERNEL, 3*4(%esp)
|
||||
orl $CS_FROM_KERNEL, 4*4(%esp)
|
||||
|
||||
/*
|
||||
* When we're here from kernel mode; the (exception) stack looks like:
|
||||
*
|
||||
* 5*4(%esp) - <previous context>
|
||||
* 4*4(%esp) - flags
|
||||
* 3*4(%esp) - cs
|
||||
* 2*4(%esp) - ip
|
||||
* 1*4(%esp) - orig_eax
|
||||
* 0*4(%esp) - gs / function
|
||||
* 6*4(%esp) - <previous context>
|
||||
* 5*4(%esp) - flags
|
||||
* 4*4(%esp) - cs
|
||||
* 3*4(%esp) - ip
|
||||
* 2*4(%esp) - orig_eax
|
||||
* 1*4(%esp) - gs / function
|
||||
* 0*4(%esp) - fs
|
||||
*
|
||||
* Lets build a 5 entry IRET frame after that, such that struct pt_regs
|
||||
* is complete and in particular regs->sp is correct. This gives us
|
||||
* the original 5 enties as gap:
|
||||
* the original 6 enties as gap:
|
||||
*
|
||||
* 12*4(%esp) - <previous context>
|
||||
* 11*4(%esp) - gap / flags
|
||||
* 10*4(%esp) - gap / cs
|
||||
* 9*4(%esp) - gap / ip
|
||||
* 8*4(%esp) - gap / orig_eax
|
||||
* 7*4(%esp) - gap / gs / function
|
||||
* 6*4(%esp) - ss
|
||||
* 5*4(%esp) - sp
|
||||
* 4*4(%esp) - flags
|
||||
* 3*4(%esp) - cs
|
||||
* 2*4(%esp) - ip
|
||||
* 1*4(%esp) - orig_eax
|
||||
* 0*4(%esp) - gs / function
|
||||
* 14*4(%esp) - <previous context>
|
||||
* 13*4(%esp) - gap / flags
|
||||
* 12*4(%esp) - gap / cs
|
||||
* 11*4(%esp) - gap / ip
|
||||
* 10*4(%esp) - gap / orig_eax
|
||||
* 9*4(%esp) - gap / gs / function
|
||||
* 8*4(%esp) - gap / fs
|
||||
* 7*4(%esp) - ss
|
||||
* 6*4(%esp) - sp
|
||||
* 5*4(%esp) - flags
|
||||
* 4*4(%esp) - cs
|
||||
* 3*4(%esp) - ip
|
||||
* 2*4(%esp) - orig_eax
|
||||
* 1*4(%esp) - gs / function
|
||||
* 0*4(%esp) - fs
|
||||
*/
|
||||
|
||||
pushl %ss # ss
|
||||
pushl %esp # sp (points at ss)
|
||||
addl $6*4, (%esp) # point sp back at the previous context
|
||||
pushl 6*4(%esp) # flags
|
||||
pushl 6*4(%esp) # cs
|
||||
pushl 6*4(%esp) # ip
|
||||
pushl 6*4(%esp) # orig_eax
|
||||
pushl 6*4(%esp) # gs / function
|
||||
addl $7*4, (%esp) # point sp back at the previous context
|
||||
pushl 7*4(%esp) # flags
|
||||
pushl 7*4(%esp) # cs
|
||||
pushl 7*4(%esp) # ip
|
||||
pushl 7*4(%esp) # orig_eax
|
||||
pushl 7*4(%esp) # gs / function
|
||||
pushl 7*4(%esp) # fs
|
||||
.Lfrom_usermode_no_fixup_\@:
|
||||
.endm
|
||||
|
||||
.macro IRET_FRAME
|
||||
/*
|
||||
* We're called with %ds, %es, %fs, and %gs from the interrupted
|
||||
* frame, so we shouldn't use them. Also, we may be in ESPFIX
|
||||
* mode and therefore have a nonzero SS base and an offset ESP,
|
||||
* so any attempt to access the stack needs to use SS. (except for
|
||||
* accesses through %esp, which automatically use SS.)
|
||||
*/
|
||||
testl $CS_FROM_KERNEL, 1*4(%esp)
|
||||
jz .Lfinished_frame_\@
|
||||
|
||||
@ -276,31 +288,40 @@
|
||||
movl 5*4(%esp), %eax # (modified) regs->sp
|
||||
|
||||
movl 4*4(%esp), %ecx # flags
|
||||
movl %ecx, -4(%eax)
|
||||
movl %ecx, %ss:-1*4(%eax)
|
||||
|
||||
movl 3*4(%esp), %ecx # cs
|
||||
andl $0x0000ffff, %ecx
|
||||
movl %ecx, -8(%eax)
|
||||
movl %ecx, %ss:-2*4(%eax)
|
||||
|
||||
movl 2*4(%esp), %ecx # ip
|
||||
movl %ecx, -12(%eax)
|
||||
movl %ecx, %ss:-3*4(%eax)
|
||||
|
||||
movl 1*4(%esp), %ecx # eax
|
||||
movl %ecx, -16(%eax)
|
||||
movl %ecx, %ss:-4*4(%eax)
|
||||
|
||||
popl %ecx
|
||||
lea -16(%eax), %esp
|
||||
lea -4*4(%eax), %esp
|
||||
popl %eax
|
||||
.Lfinished_frame_\@:
|
||||
.endm
|
||||
|
||||
.macro SAVE_ALL pt_regs_ax=%eax switch_stacks=0 skip_gs=0
|
||||
.macro SAVE_ALL pt_regs_ax=%eax switch_stacks=0 skip_gs=0 unwind_espfix=0
|
||||
cld
|
||||
.if \skip_gs == 0
|
||||
PUSH_GS
|
||||
.endif
|
||||
FIXUP_FRAME
|
||||
pushl %fs
|
||||
|
||||
pushl %eax
|
||||
movl $(__KERNEL_PERCPU), %eax
|
||||
movl %eax, %fs
|
||||
.if \unwind_espfix > 0
|
||||
UNWIND_ESPFIX_STACK
|
||||
.endif
|
||||
popl %eax
|
||||
|
||||
FIXUP_FRAME
|
||||
pushl %es
|
||||
pushl %ds
|
||||
pushl \pt_regs_ax
|
||||
@ -313,8 +334,6 @@
|
||||
movl $(__USER_DS), %edx
|
||||
movl %edx, %ds
|
||||
movl %edx, %es
|
||||
movl $(__KERNEL_PERCPU), %edx
|
||||
movl %edx, %fs
|
||||
.if \skip_gs == 0
|
||||
SET_KERNEL_GS %edx
|
||||
.endif
|
||||
@ -324,8 +343,8 @@
|
||||
.endif
|
||||
.endm
|
||||
|
||||
.macro SAVE_ALL_NMI cr3_reg:req
|
||||
SAVE_ALL
|
||||
.macro SAVE_ALL_NMI cr3_reg:req unwind_espfix=0
|
||||
SAVE_ALL unwind_espfix=\unwind_espfix
|
||||
|
||||
BUG_IF_WRONG_CR3
|
||||
|
||||
@ -357,6 +376,7 @@
|
||||
2: popl %es
|
||||
3: popl %fs
|
||||
POP_GS \pop
|
||||
IRET_FRAME
|
||||
.pushsection .fixup, "ax"
|
||||
4: movl $0, (%esp)
|
||||
jmp 1b
|
||||
@ -395,7 +415,8 @@
|
||||
|
||||
.macro CHECK_AND_APPLY_ESPFIX
|
||||
#ifdef CONFIG_X86_ESPFIX32
|
||||
#define GDT_ESPFIX_SS PER_CPU_VAR(gdt_page) + (GDT_ENTRY_ESPFIX_SS * 8)
|
||||
#define GDT_ESPFIX_OFFSET (GDT_ENTRY_ESPFIX_SS * 8)
|
||||
#define GDT_ESPFIX_SS PER_CPU_VAR(gdt_page) + GDT_ESPFIX_OFFSET
|
||||
|
||||
ALTERNATIVE "jmp .Lend_\@", "", X86_BUG_ESPFIX
|
||||
|
||||
@ -1075,7 +1096,6 @@ restore_all:
|
||||
/* Restore user state */
|
||||
RESTORE_REGS pop=4 # skip orig_eax/error_code
|
||||
.Lirq_return:
|
||||
IRET_FRAME
|
||||
/*
|
||||
* ARCH_HAS_MEMBARRIER_SYNC_CORE rely on IRET core serialization
|
||||
* when returning from IPI handler and when returning from
|
||||
@ -1128,30 +1148,43 @@ ENDPROC(entry_INT80_32)
|
||||
* We can't call C functions using the ESPFIX stack. This code reads
|
||||
* the high word of the segment base from the GDT and swiches to the
|
||||
* normal stack and adjusts ESP with the matching offset.
|
||||
*
|
||||
* We might be on user CR3 here, so percpu data is not mapped and we can't
|
||||
* access the GDT through the percpu segment. Instead, use SGDT to find
|
||||
* the cpu_entry_area alias of the GDT.
|
||||
*/
|
||||
#ifdef CONFIG_X86_ESPFIX32
|
||||
/* fixup the stack */
|
||||
mov GDT_ESPFIX_SS + 4, %al /* bits 16..23 */
|
||||
mov GDT_ESPFIX_SS + 7, %ah /* bits 24..31 */
|
||||
pushl %ecx
|
||||
subl $2*4, %esp
|
||||
sgdt (%esp)
|
||||
movl 2(%esp), %ecx /* GDT address */
|
||||
/*
|
||||
* Careful: ECX is a linear pointer, so we need to force base
|
||||
* zero. %cs is the only known-linear segment we have right now.
|
||||
*/
|
||||
mov %cs:GDT_ESPFIX_OFFSET + 4(%ecx), %al /* bits 16..23 */
|
||||
mov %cs:GDT_ESPFIX_OFFSET + 7(%ecx), %ah /* bits 24..31 */
|
||||
shl $16, %eax
|
||||
addl $2*4, %esp
|
||||
popl %ecx
|
||||
addl %esp, %eax /* the adjusted stack pointer */
|
||||
pushl $__KERNEL_DS
|
||||
pushl %eax
|
||||
lss (%esp), %esp /* switch to the normal stack segment */
|
||||
#endif
|
||||
.endm
|
||||
|
||||
.macro UNWIND_ESPFIX_STACK
|
||||
/* It's safe to clobber %eax, all other regs need to be preserved */
|
||||
#ifdef CONFIG_X86_ESPFIX32
|
||||
movl %ss, %eax
|
||||
/* see if on espfix stack */
|
||||
cmpw $__ESPFIX_SS, %ax
|
||||
jne 27f
|
||||
movl $__KERNEL_DS, %eax
|
||||
movl %eax, %ds
|
||||
movl %eax, %es
|
||||
jne .Lno_fixup_\@
|
||||
/* switch to normal stack */
|
||||
FIXUP_ESPFIX_STACK
|
||||
27:
|
||||
.Lno_fixup_\@:
|
||||
#endif
|
||||
.endm
|
||||
|
||||
@ -1341,11 +1374,6 @@ END(spurious_interrupt_bug)
|
||||
|
||||
#ifdef CONFIG_XEN_PV
|
||||
ENTRY(xen_hypervisor_callback)
|
||||
pushl $-1 /* orig_ax = -1 => not a system call */
|
||||
SAVE_ALL
|
||||
ENCODE_FRAME_POINTER
|
||||
TRACE_IRQS_OFF
|
||||
|
||||
/*
|
||||
* Check to see if we got the event in the critical
|
||||
* region in xen_iret_direct, after we've reenabled
|
||||
@ -1353,16 +1381,17 @@ ENTRY(xen_hypervisor_callback)
|
||||
* iret instruction's behaviour where it delivers a
|
||||
* pending interrupt when enabling interrupts:
|
||||
*/
|
||||
movl PT_EIP(%esp), %eax
|
||||
cmpl $xen_iret_start_crit, %eax
|
||||
cmpl $xen_iret_start_crit, (%esp)
|
||||
jb 1f
|
||||
cmpl $xen_iret_end_crit, %eax
|
||||
cmpl $xen_iret_end_crit, (%esp)
|
||||
jae 1f
|
||||
|
||||
jmp xen_iret_crit_fixup
|
||||
|
||||
ENTRY(xen_do_upcall)
|
||||
1: mov %esp, %eax
|
||||
call xen_iret_crit_fixup
|
||||
1:
|
||||
pushl $-1 /* orig_ax = -1 => not a system call */
|
||||
SAVE_ALL
|
||||
ENCODE_FRAME_POINTER
|
||||
TRACE_IRQS_OFF
|
||||
mov %esp, %eax
|
||||
call xen_evtchn_do_upcall
|
||||
#ifndef CONFIG_PREEMPTION
|
||||
call xen_maybe_preempt_hcall
|
||||
@ -1449,10 +1478,9 @@ END(page_fault)
|
||||
|
||||
common_exception_read_cr2:
|
||||
/* the function address is in %gs's slot on the stack */
|
||||
SAVE_ALL switch_stacks=1 skip_gs=1
|
||||
SAVE_ALL switch_stacks=1 skip_gs=1 unwind_espfix=1
|
||||
|
||||
ENCODE_FRAME_POINTER
|
||||
UNWIND_ESPFIX_STACK
|
||||
|
||||
/* fixup %gs */
|
||||
GS_TO_REG %ecx
|
||||
@ -1474,9 +1502,8 @@ END(common_exception_read_cr2)
|
||||
|
||||
common_exception:
|
||||
/* the function address is in %gs's slot on the stack */
|
||||
SAVE_ALL switch_stacks=1 skip_gs=1
|
||||
SAVE_ALL switch_stacks=1 skip_gs=1 unwind_espfix=1
|
||||
ENCODE_FRAME_POINTER
|
||||
UNWIND_ESPFIX_STACK
|
||||
|
||||
/* fixup %gs */
|
||||
GS_TO_REG %ecx
|
||||
@ -1515,6 +1542,10 @@ ENTRY(nmi)
|
||||
ASM_CLAC
|
||||
|
||||
#ifdef CONFIG_X86_ESPFIX32
|
||||
/*
|
||||
* ESPFIX_SS is only ever set on the return to user path
|
||||
* after we've switched to the entry stack.
|
||||
*/
|
||||
pushl %eax
|
||||
movl %ss, %eax
|
||||
cmpw $__ESPFIX_SS, %ax
|
||||
@ -1550,6 +1581,11 @@ ENTRY(nmi)
|
||||
movl %ebx, %esp
|
||||
|
||||
.Lnmi_return:
|
||||
#ifdef CONFIG_X86_ESPFIX32
|
||||
testl $CS_FROM_ESPFIX, PT_CS(%esp)
|
||||
jnz .Lnmi_from_espfix
|
||||
#endif
|
||||
|
||||
CHECK_AND_APPLY_ESPFIX
|
||||
RESTORE_ALL_NMI cr3_reg=%edi pop=4
|
||||
jmp .Lirq_return
|
||||
@ -1557,23 +1593,42 @@ ENTRY(nmi)
|
||||
#ifdef CONFIG_X86_ESPFIX32
|
||||
.Lnmi_espfix_stack:
|
||||
/*
|
||||
* create the pointer to lss back
|
||||
* Create the pointer to LSS back
|
||||
*/
|
||||
pushl %ss
|
||||
pushl %esp
|
||||
addl $4, (%esp)
|
||||
/* copy the iret frame of 12 bytes */
|
||||
.rept 3
|
||||
pushl 16(%esp)
|
||||
.endr
|
||||
pushl %eax
|
||||
SAVE_ALL_NMI cr3_reg=%edi
|
||||
|
||||
/* Copy the (short) IRET frame */
|
||||
pushl 4*4(%esp) # flags
|
||||
pushl 4*4(%esp) # cs
|
||||
pushl 4*4(%esp) # ip
|
||||
|
||||
pushl %eax # orig_ax
|
||||
|
||||
SAVE_ALL_NMI cr3_reg=%edi unwind_espfix=1
|
||||
ENCODE_FRAME_POINTER
|
||||
FIXUP_ESPFIX_STACK # %eax == %esp
|
||||
|
||||
/* clear CS_FROM_KERNEL, set CS_FROM_ESPFIX */
|
||||
xorl $(CS_FROM_ESPFIX | CS_FROM_KERNEL), PT_CS(%esp)
|
||||
|
||||
xorl %edx, %edx # zero error code
|
||||
call do_nmi
|
||||
movl %esp, %eax # pt_regs pointer
|
||||
jmp .Lnmi_from_sysenter_stack
|
||||
|
||||
.Lnmi_from_espfix:
|
||||
RESTORE_ALL_NMI cr3_reg=%edi
|
||||
lss 12+4(%esp), %esp # back to espfix stack
|
||||
/*
|
||||
* Because we cleared CS_FROM_KERNEL, IRET_FRAME 'forgot' to
|
||||
* fix up the gap and long frame:
|
||||
*
|
||||
* 3 - original frame (exception)
|
||||
* 2 - ESPFIX block (above)
|
||||
* 6 - gap (FIXUP_FRAME)
|
||||
* 5 - long frame (FIXUP_FRAME)
|
||||
* 1 - orig_ax
|
||||
*/
|
||||
lss (1+5+6)*4(%esp), %esp # back to espfix stack
|
||||
jmp .Lirq_return
|
||||
#endif
|
||||
END(nmi)
|
||||
|
@ -78,8 +78,12 @@ struct cpu_entry_area {
|
||||
|
||||
/*
|
||||
* The GDT is just below entry_stack and thus serves (on x86_64) as
|
||||
* a a read-only guard page.
|
||||
* a read-only guard page. On 32-bit the GDT must be writeable, so
|
||||
* it needs an extra guard page.
|
||||
*/
|
||||
#ifdef CONFIG_X86_32
|
||||
char guard_entry_stack[PAGE_SIZE];
|
||||
#endif
|
||||
struct entry_stack_page entry_stack_page;
|
||||
|
||||
/*
|
||||
@ -94,7 +98,6 @@ struct cpu_entry_area {
|
||||
*/
|
||||
struct cea_exception_stacks estacks;
|
||||
#endif
|
||||
#ifdef CONFIG_CPU_SUP_INTEL
|
||||
/*
|
||||
* Per CPU debug store for Intel performance monitoring. Wastes a
|
||||
* full page at the moment.
|
||||
@ -105,11 +108,13 @@ struct cpu_entry_area {
|
||||
* Reserve enough fixmap PTEs.
|
||||
*/
|
||||
struct debug_store_buffers cpu_debug_buffers;
|
||||
#endif
|
||||
};
|
||||
|
||||
#define CPU_ENTRY_AREA_SIZE (sizeof(struct cpu_entry_area))
|
||||
#define CPU_ENTRY_AREA_TOT_SIZE (CPU_ENTRY_AREA_SIZE * NR_CPUS)
|
||||
#define CPU_ENTRY_AREA_SIZE (sizeof(struct cpu_entry_area))
|
||||
#define CPU_ENTRY_AREA_ARRAY_SIZE (CPU_ENTRY_AREA_SIZE * NR_CPUS)
|
||||
|
||||
/* Total size includes the readonly IDT mapping page as well: */
|
||||
#define CPU_ENTRY_AREA_TOTAL_SIZE (CPU_ENTRY_AREA_ARRAY_SIZE + PAGE_SIZE)
|
||||
|
||||
DECLARE_PER_CPU(struct cpu_entry_area *, cpu_entry_area);
|
||||
DECLARE_PER_CPU(struct cea_exception_stacks *, cea_exception_stacks);
|
||||
@ -117,13 +122,14 @@ DECLARE_PER_CPU(struct cea_exception_stacks *, cea_exception_stacks);
|
||||
extern void setup_cpu_entry_areas(void);
|
||||
extern void cea_set_pte(void *cea_vaddr, phys_addr_t pa, pgprot_t flags);
|
||||
|
||||
/* Single page reserved for the readonly IDT mapping: */
|
||||
#define CPU_ENTRY_AREA_RO_IDT CPU_ENTRY_AREA_BASE
|
||||
#define CPU_ENTRY_AREA_PER_CPU (CPU_ENTRY_AREA_RO_IDT + PAGE_SIZE)
|
||||
|
||||
#define CPU_ENTRY_AREA_RO_IDT_VADDR ((void *)CPU_ENTRY_AREA_RO_IDT)
|
||||
|
||||
#define CPU_ENTRY_AREA_MAP_SIZE \
|
||||
(CPU_ENTRY_AREA_PER_CPU + CPU_ENTRY_AREA_TOT_SIZE - CPU_ENTRY_AREA_BASE)
|
||||
(CPU_ENTRY_AREA_PER_CPU + CPU_ENTRY_AREA_ARRAY_SIZE - CPU_ENTRY_AREA_BASE)
|
||||
|
||||
extern struct cpu_entry_area *get_cpu_entry_area(int cpu);
|
||||
|
||||
|
@ -509,7 +509,7 @@ static inline void __fpu_invalidate_fpregs_state(struct fpu *fpu)
|
||||
|
||||
static inline int fpregs_state_valid(struct fpu *fpu, unsigned int cpu)
|
||||
{
|
||||
return fpu == this_cpu_read_stable(fpu_fpregs_owner_ctx) && cpu == fpu->last_cpu;
|
||||
return fpu == this_cpu_read(fpu_fpregs_owner_ctx) && cpu == fpu->last_cpu;
|
||||
}
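
Note: the fpregs_state_valid() change above swaps this_cpu_read_stable() for this_cpu_read(): the "stable" variant lets the compiler reuse an earlier load of fpu_fpregs_owner_ctx, which is wrong here because the owner can change underneath this check. A stripped-down illustration of a cached read going stale versus a forced re-read (invented names, plain C using the kernel-style typeof extension):

/* Illustration only: "owner" stands in for the per-CPU
 * fpu_fpregs_owner_ctx and may be changed by other contexts
 * (interrupt, context switch) while this code runs. */
struct fpu_model { int last_cpu; };

static struct fpu_model *owner;

#define FRESH_READ(x) (*(volatile typeof(x) *)&(x))	/* like this_cpu_read() */

static int state_valid_cached(struct fpu_model *fpu, int cpu)
{
	/* The compiler is free to reuse an older load here, which is
	 * what this_cpu_read_stable() permits: a possibly stale answer. */
	return owner == fpu && cpu == fpu->last_cpu;
}

static int state_valid_fresh(struct fpu_model *fpu, int cpu)
{
	/* Force a reload at the point of use. */
	return FRESH_READ(owner) == fpu && cpu == fpu->last_cpu;
}
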
|
||||
|
||||
/*
|
||||
|
@ -44,11 +44,11 @@ extern bool __vmalloc_start_set; /* set once high_memory is set */
|
||||
* Define this here and validate with BUILD_BUG_ON() in pgtable_32.c
|
||||
* to avoid include recursion hell
|
||||
*/
|
||||
#define CPU_ENTRY_AREA_PAGES (NR_CPUS * 40)
|
||||
#define CPU_ENTRY_AREA_PAGES (NR_CPUS * 39)
|
||||
|
||||
#define CPU_ENTRY_AREA_BASE \
|
||||
((FIXADDR_TOT_START - PAGE_SIZE * (CPU_ENTRY_AREA_PAGES + 1)) \
|
||||
& PMD_MASK)
|
||||
/* The +1 is for the readonly IDT page: */
|
||||
#define CPU_ENTRY_AREA_BASE \
|
||||
((FIXADDR_TOT_START - PAGE_SIZE*(CPU_ENTRY_AREA_PAGES+1)) & PMD_MASK)
|
||||
|
||||
#define LDT_BASE_ADDR \
|
||||
((CPU_ENTRY_AREA_BASE - PAGE_SIZE) & PMD_MASK)
|
||||
|
@@ -31,6 +31,18 @@
 */
#define SEGMENT_RPL_MASK		0x3

/*
 * When running on Xen PV, the actual privilege level of the kernel is 1,
 * not 0. Testing the Requested Privilege Level in a segment selector to
 * determine whether the context is user mode or kernel mode with
 * SEGMENT_RPL_MASK is wrong because the PV kernel's privilege level
 * matches the 0x3 mask.
 *
 * Testing with USER_SEGMENT_RPL_MASK is valid for both native and Xen PV
 * kernels because privilege level 2 is never used.
 */
#define USER_SEGMENT_RPL_MASK	0x2

/* User mode is privilege level 3: */
#define USER_RPL		0x3
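
Note: the comment above in numbers: a Xen PV kernel runs with %cs RPL 1, so masking with SEGMENT_RPL_MASK (0x3) gives a non-zero result and the kernel is mistaken for user mode, while USER_SEGMENT_RPL_MASK (0x2) stays zero for rings 0 and 1 and is non-zero only for ring 3. A small self-contained check (selector values are illustrative, not the real GDT layout):

#include <stdio.h>

#define SEGMENT_RPL_MASK	0x3
#define USER_SEGMENT_RPL_MASK	0x2

int main(void)
{
	/* The low two bits of a segment selector are the RPL. */
	unsigned short native_kernel_cs = 0x10;	/* RPL 0 */
	unsigned short xenpv_kernel_cs  = 0x61;	/* RPL 1 */
	unsigned short user_cs          = 0x73;	/* RPL 3 */

	printf("0x3 mask: native=%d xenpv=%d user=%d (xenpv wrongly 'user')\n",
	       native_kernel_cs & SEGMENT_RPL_MASK ? 1 : 0,
	       xenpv_kernel_cs  & SEGMENT_RPL_MASK ? 1 : 0,
	       user_cs          & SEGMENT_RPL_MASK ? 1 : 0);
	printf("0x2 mask: native=%d xenpv=%d user=%d (only ring 3 is 'user')\n",
	       native_kernel_cs & USER_SEGMENT_RPL_MASK ? 1 : 0,
	       xenpv_kernel_cs  & USER_SEGMENT_RPL_MASK ? 1 : 0,
	       user_cs          & USER_SEGMENT_RPL_MASK ? 1 : 0);
	return 0;
}
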
|
||||
|
||||
|
@ -39,6 +39,7 @@ static void __init spectre_v2_select_mitigation(void);
|
||||
static void __init ssb_select_mitigation(void);
|
||||
static void __init l1tf_select_mitigation(void);
|
||||
static void __init mds_select_mitigation(void);
|
||||
static void __init mds_print_mitigation(void);
|
||||
static void __init taa_select_mitigation(void);
|
||||
|
||||
/* The base value of the SPEC_CTRL MSR that always has to be preserved. */
|
||||
@ -108,6 +109,12 @@ void __init check_bugs(void)
|
||||
mds_select_mitigation();
|
||||
taa_select_mitigation();
|
||||
|
||||
/*
|
||||
* As MDS and TAA mitigations are inter-related, print MDS
|
||||
* mitigation until after TAA mitigation selection is done.
|
||||
*/
|
||||
mds_print_mitigation();
|
||||
|
||||
arch_smt_update();
|
||||
|
||||
#ifdef CONFIG_X86_32
|
||||
@ -245,6 +252,12 @@ static void __init mds_select_mitigation(void)
|
||||
(mds_nosmt || cpu_mitigations_auto_nosmt()))
|
||||
cpu_smt_disable(false);
|
||||
}
|
||||
}
|
||||
|
||||
static void __init mds_print_mitigation(void)
|
||||
{
|
||||
if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
|
||||
return;
|
||||
|
||||
pr_info("%s\n", mds_strings[mds_mitigation]);
|
||||
}
|
||||
@ -304,8 +317,12 @@ static void __init taa_select_mitigation(void)
|
||||
return;
|
||||
}
|
||||
|
||||
/* TAA mitigation is turned off on the cmdline (tsx_async_abort=off) */
|
||||
if (taa_mitigation == TAA_MITIGATION_OFF)
|
||||
/*
|
||||
* TAA mitigation via VERW is turned off if both
|
||||
* tsx_async_abort=off and mds=off are specified.
|
||||
*/
|
||||
if (taa_mitigation == TAA_MITIGATION_OFF &&
|
||||
mds_mitigation == MDS_MITIGATION_OFF)
|
||||
goto out;
|
||||
|
||||
if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
|
||||
@ -339,6 +356,15 @@ static void __init taa_select_mitigation(void)
|
||||
if (taa_nosmt || cpu_mitigations_auto_nosmt())
|
||||
cpu_smt_disable(false);
|
||||
|
||||
/*
|
||||
* Update MDS mitigation, if necessary, as the mds_user_clear is
|
||||
* now enabled for TAA mitigation.
|
||||
*/
|
||||
if (mds_mitigation == MDS_MITIGATION_OFF &&
|
||||
boot_cpu_has_bug(X86_BUG_MDS)) {
|
||||
mds_mitigation = MDS_MITIGATION_FULL;
|
||||
mds_select_mitigation();
|
||||
}
|
||||
out:
|
||||
pr_info("%s\n", taa_strings[taa_mitigation]);
|
||||
}
|
||||
|
@@ -65,6 +65,9 @@ struct x86_hw_tss doublefault_tss __cacheline_aligned = {
     .ss     = __KERNEL_DS,
     .ds     = __USER_DS,
     .fs     = __KERNEL_PERCPU,
+#ifndef CONFIG_X86_32_LAZY_GS
+    .gs     = __KERNEL_STACK_CANARY,
+#endif
 
     .__cr3  = __pa_nodebug(swapper_pg_dir),
 };
@@ -571,6 +571,16 @@ ENTRY(initial_page_table)
 # error "Kernel PMDs should be 1, 2 or 3"
 # endif
     .align PAGE_SIZE        /* needs to be page-sized too */
+
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+    /*
+     * PTI needs another page so sync_initial_pagetable() works correctly
+     * and does not scribble over the data which is placed behind the
+     * actual initial_page_table. See clone_pgd_range().
+     */
+    .fill 1024, 4, 0
+#endif
+
 #endif
 
 .data
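Aside (illustration, not part of the merge diff): the comment above says PTI needs a second page so a clone_pgd_range()-style copy cannot scribble past initial_page_table. A very loose user-space sketch of that sizing argument only; the 1024-entry count comes from the .fill directive, everything else here is made up.

#include <string.h>
#include <assert.h>

#define PTRS_PER_PGD	1024	/* 32-bit: 1024 4-byte entries per page table */

/* Destination sized for the kernel table *and* the extra PTI page. */
static unsigned int initial_page_table[2 * PTRS_PER_PGD];
static unsigned int swapper_pg_dir[2 * PTRS_PER_PGD];

/* Rough analogue of clone_pgd_range(): copy 'count' PGD entries. */
static void clone_pgd_range(unsigned int *dst, const unsigned int *src, int count)
{
	memcpy(dst, src, count * sizeof(*dst));
}

int main(void)
{
	/* The sync copies a full page of entries into the second half; without
	 * the extra page this write would land on whatever the linker placed
	 * directly behind initial_page_table. */
	clone_pgd_range(initial_page_table + PTRS_PER_PGD,
			swapper_pg_dir + PTRS_PER_PGD, PTRS_PER_PGD);
	assert(sizeof(initial_page_table) >= 2 * PTRS_PER_PGD * sizeof(unsigned int));
	return 0;
}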
@@ -504,7 +504,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 
     r = -E2BIG;
 
-    if (*nent >= maxnent)
+    if (WARN_ON(*nent >= maxnent))
         goto out;
 
     do_host_cpuid(entry, function, 0);
@@ -810,6 +810,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 function,
 static int do_cpuid_func(struct kvm_cpuid_entry2 *entry, u32 func,
                          int *nent, int maxnent, unsigned int type)
 {
+    if (*nent >= maxnent)
+        return -E2BIG;
+
     if (type == KVM_GET_EMULATED_CPUID)
         return __do_cpuid_func_emulated(entry, func, nent, maxnent);
 
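Aside (illustration, not part of the merge diff): the cpuid.c change validates *nent against maxnent once at the outer entry point and demotes the inner check to a WARN_ON-style sanity check, so nothing is ever written past the caller-supplied array. A stand-alone sketch of that pattern with hypothetical names (fill_entries, fill_one_entry):

#include <stdio.h>
#include <assert.h>

struct cpuid_entry { unsigned int function, index; };

/* Inner helper: callers must have checked capacity; assert as a safety net. */
static int fill_one_entry(struct cpuid_entry *entries, int *nent, int maxnent,
			  unsigned int function)
{
	assert(*nent < maxnent);	/* analogous to WARN_ON(*nent >= maxnent) */
	entries[*nent].function = function;
	entries[*nent].index = 0;
	(*nent)++;
	return 0;
}

/* Outer entry point: the one place the -E2BIG style check is made. */
static int fill_entries(struct cpuid_entry *entries, int *nent, int maxnent,
			unsigned int function)
{
	if (*nent >= maxnent)
		return -1;	/* stand-in for -E2BIG */
	return fill_one_entry(entries, nent, maxnent, function);
}

int main(void)
{
	struct cpuid_entry buf[2];
	int nent = 0;

	assert(fill_entries(buf, &nent, 2, 0x0) == 0);
	assert(fill_entries(buf, &nent, 2, 0x1) == 0);
	assert(fill_entries(buf, &nent, 2, 0x2) == -1);	/* out of space, no write */
	printf("filled %d entries\n", nent);
	return 0;
}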
@@ -2418,6 +2418,16 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
                           entry_failure_code))
         return -EINVAL;
 
+    /*
+     * Immediately write vmcs02.GUEST_CR3. It will be propagated to vmcs12
+     * on nested VM-Exit, which can occur without actually running L2 and
+     * thus without hitting vmx_set_cr3(), e.g. if L1 is entering L2 with
+     * vmcs12.GUEST_ACTIVITYSTATE=HLT, in which case KVM will intercept the
+     * transition to HLT instead of running L2.
+     */
+    if (enable_ept)
+        vmcs_writel(GUEST_CR3, vmcs12->guest_cr3);
+
     /* Late preparation of GUEST_PDPTRs now that EFER and CRs are set. */
     if (load_guest_pdptrs_vmcs12 && nested_cpu_has_ept(vmcs12) &&
         is_pae_paging(vcpu)) {
@@ -2995,6 +2995,7 @@ u64 construct_eptp(struct kvm_vcpu *vcpu, unsigned long root_hpa)
 void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
 {
     struct kvm *kvm = vcpu->kvm;
+    bool update_guest_cr3 = true;
     unsigned long guest_cr3;
     u64 eptp;
 
@@ -3011,15 +3012,18 @@ void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
         spin_unlock(&to_kvm_vmx(kvm)->ept_pointer_lock);
     }
 
-    if (enable_unrestricted_guest || is_paging(vcpu) ||
-        is_guest_mode(vcpu))
+    /* Loading vmcs02.GUEST_CR3 is handled by nested VM-Enter. */
+    if (is_guest_mode(vcpu))
+        update_guest_cr3 = false;
+    else if (enable_unrestricted_guest || is_paging(vcpu))
         guest_cr3 = kvm_read_cr3(vcpu);
     else
         guest_cr3 = to_kvm_vmx(kvm)->ept_identity_map_addr;
     ept_load_pdptrs(vcpu);
     }
 
-    vmcs_writel(GUEST_CR3, guest_cr3);
+    if (update_guest_cr3)
+        vmcs_writel(GUEST_CR3, guest_cr3);
 }
 
 int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
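Aside (illustration, not part of the merge diff): together with the nested.c hunk above, vmx_set_cr3() now skips the GUEST_CR3 write while the vCPU is in guest mode, since nested VM-Enter already loaded vmcs02.GUEST_CR3. A reduced sketch of that control flow; struct vcpu and set_cr3() here are stand-ins, not KVM's types.

#include <stdio.h>
#include <stdbool.h>

struct vcpu {
	bool guest_mode;		/* stand-in for is_guest_mode(vcpu) */
	bool paging;			/* stand-in for is_paging(vcpu) */
	unsigned long cr3;
	unsigned long vmcs_guest_cr3;	/* stand-in for the GUEST_CR3 VMCS field */
};

static void set_cr3(struct vcpu *v, unsigned long identity_map_addr)
{
	bool update_guest_cr3 = true;
	unsigned long guest_cr3 = 0;

	/* Loading vmcs02.GUEST_CR3 is handled by nested VM-Enter. */
	if (v->guest_mode)
		update_guest_cr3 = false;
	else if (v->paging)
		guest_cr3 = v->cr3;
	else
		guest_cr3 = identity_map_addr;

	if (update_guest_cr3)
		v->vmcs_guest_cr3 = guest_cr3;
}

int main(void)
{
	struct vcpu v = { .guest_mode = true, .paging = true,
			  .cr3 = 0x1000, .vmcs_guest_cr3 = 0xabc };

	set_cr3(&v, 0x2000);
	/* In guest mode the previously-loaded vmcs02 value is left alone. */
	printf("GUEST_CR3 = %#lx\n", v.vmcs_guest_cr3);
	return 0;
}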
@@ -300,13 +300,14 @@ int kvm_set_shared_msr(unsigned slot, u64 value, u64 mask)
     struct kvm_shared_msrs *smsr = per_cpu_ptr(shared_msrs, cpu);
     int err;
 
-    if (((value ^ smsr->values[slot].curr) & mask) == 0)
+    value = (value & mask) | (smsr->values[slot].host & ~mask);
+    if (value == smsr->values[slot].curr)
         return 0;
-    smsr->values[slot].curr = value;
     err = wrmsrl_safe(shared_msrs_global.msrs[slot], value);
     if (err)
         return 1;
 
+    smsr->values[slot].curr = value;
     if (!smsr->registered) {
         smsr->urn.on_user_return = kvm_on_user_return;
         user_return_notifier_register(&smsr->urn);
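Aside (illustration, not part of the merge diff): the kvm_set_shared_msr() hunk merges the caller's bits under mask with the preserved host bits, and only caches the new value after the MSR write succeeds. A user-space sketch of that merge; fake_wrmsr() stands in for wrmsrl_safe().

#include <stdio.h>
#include <stdint.h>

struct shared_msr { uint64_t host, curr; };

/* Stand-in for wrmsrl_safe(): pretend the write fails for one value. */
static int fake_wrmsr(uint64_t value)
{
	return value == 0xdead ? 1 : 0;
}

static int set_shared_msr(struct shared_msr *m, uint64_t value, uint64_t mask)
{
	/* Keep host-owned bits outside 'mask', take caller bits inside it. */
	value = (value & mask) | (m->host & ~mask);
	if (value == m->curr)
		return 0;
	if (fake_wrmsr(value))
		return 1;
	/* Only cache the value once the hardware write went through. */
	m->curr = value;
	return 0;
}

int main(void)
{
	struct shared_msr m = { .host = 0xff00, .curr = 0xff00 };

	set_shared_msr(&m, 0x00ab, 0x00ff);
	printf("curr = %#llx\n", (unsigned long long)m.curr);	/* 0xffab */
	return 0;
}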
@@ -1327,10 +1328,15 @@ static u64 kvm_get_arch_capabilities(void)
      * If TSX is disabled on the system, guests are also mitigated against
      * TAA and clear CPU buffer mitigation is not required for guests.
      */
-    if (boot_cpu_has_bug(X86_BUG_TAA) && boot_cpu_has(X86_FEATURE_RTM) &&
-        (data & ARCH_CAP_TSX_CTRL_MSR))
+    if (!boot_cpu_has(X86_FEATURE_RTM))
+        data &= ~ARCH_CAP_TAA_NO;
+    else if (!boot_cpu_has_bug(X86_BUG_TAA))
+        data |= ARCH_CAP_TAA_NO;
+    else if (data & ARCH_CAP_TSX_CTRL_MSR)
         data &= ~ARCH_CAP_MDS_NO;
 
+    /* KVM does not emulate MSR_IA32_TSX_CTRL. */
+    data &= ~ARCH_CAP_TSX_CTRL_MSR;
     return data;
 }
 
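Aside (illustration, not part of the merge diff): the ARCH_CAPABILITIES hunk derives the TAA_NO/MDS_NO bits reported to the guest from the host's RTM and TAA state, and hides TSX_CTRL because KVM does not emulate that MSR. A sketch of the same bit logic; the bit positions are believed to match the real MSR layout but are only illustrative here.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Illustrative bit positions; the authoritative layout is in the SDM. */
#define ARCH_CAP_MDS_NO		(1ULL << 5)
#define ARCH_CAP_TSX_CTRL_MSR	(1ULL << 7)
#define ARCH_CAP_TAA_NO		(1ULL << 8)

static uint64_t adjust_arch_capabilities(uint64_t data, bool host_has_rtm,
					 bool host_has_taa_bug)
{
	if (!host_has_rtm)
		data &= ~ARCH_CAP_TAA_NO;	/* guest sees RTM=0, TAA_NO would mislead */
	else if (!host_has_taa_bug)
		data |= ARCH_CAP_TAA_NO;
	else if (data & ARCH_CAP_TSX_CTRL_MSR)
		data &= ~ARCH_CAP_MDS_NO;

	/* KVM does not emulate MSR_IA32_TSX_CTRL. */
	data &= ~ARCH_CAP_TSX_CTRL_MSR;
	return data;
}

int main(void)
{
	uint64_t host = ARCH_CAP_MDS_NO | ARCH_CAP_TSX_CTRL_MSR;

	/* TSX_CTRL present on a TAA-affected host with RTM: MDS_NO is dropped
	 * and the TSX_CTRL bit itself is never shown to the guest. */
	printf("%#llx\n", (unsigned long long)adjust_arch_capabilities(host, true, true));
	return 0;
}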
@@ -4421,6 +4427,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
     case KVM_SET_NESTED_STATE: {
         struct kvm_nested_state __user *user_kvm_nested_state = argp;
         struct kvm_nested_state kvm_state;
+        int idx;
 
         r = -EINVAL;
         if (!kvm_x86_ops->set_nested_state)
@@ -4444,7 +4451,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
             && !(kvm_state.flags & KVM_STATE_NESTED_GUEST_MODE))
             break;
 
+        idx = srcu_read_lock(&vcpu->kvm->srcu);
         r = kvm_x86_ops->set_nested_state(vcpu, user_kvm_nested_state, &kvm_state);
+        srcu_read_unlock(&vcpu->kvm->srcu, idx);
         break;
     }
     case KVM_GET_SUPPORTED_HV_CPUID: {
@@ -178,7 +178,9 @@ static __init void setup_cpu_entry_area_ptes(void)
 #ifdef CONFIG_X86_32
     unsigned long start, end;
 
-    BUILD_BUG_ON(CPU_ENTRY_AREA_PAGES * PAGE_SIZE < CPU_ENTRY_AREA_MAP_SIZE);
+    /* The +1 is for the readonly IDT: */
+    BUILD_BUG_ON((CPU_ENTRY_AREA_PAGES+1)*PAGE_SIZE != CPU_ENTRY_AREA_MAP_SIZE);
+    BUILD_BUG_ON(CPU_ENTRY_AREA_TOTAL_SIZE != CPU_ENTRY_AREA_MAP_SIZE);
     BUG_ON(CPU_ENTRY_AREA_BASE & ~PMD_MASK);
 
     start = CPU_ENTRY_AREA_BASE;
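Aside (illustration, not part of the merge diff): the last hunk tightens the BUILD_BUG_ON from a "not smaller than" check to an exact-size check. The same precise compile-time check can be written in plain C with _Static_assert; the page count and map size here are made up for the sketch.

#define PAGE_SIZE		4096u
#define CPU_ENTRY_AREA_PAGES	39u			/* made-up page count */
#define CPU_ENTRY_AREA_MAP_SIZE	(40u * PAGE_SIZE)	/* defined independently elsewhere */

/* Exact-size check: catches the mapping being too small *or* too large. */
_Static_assert((CPU_ENTRY_AREA_PAGES + 1) * PAGE_SIZE == CPU_ENTRY_AREA_MAP_SIZE,
	       "cpu_entry_area map size mismatch");

int main(void)
{
	return 0;
}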
Some files were not shown because too many files have changed in this diff.