Merge 6.1.77 into android14-6.1-lts

Changes in 6.1.77
	asm-generic: make sparse happy with odd-sized put_unaligned_*()
	powerpc/mm: Fix null-pointer dereference in pgtable_cache_add
	arm64: irq: set the correct node for VMAP stack
	drivers/perf: pmuv3: don't expose SW_INCR event in sysfs
	powerpc: Fix build error due to is_valid_bugaddr()
	powerpc/mm: Fix build failures due to arch_reserved_kernel_pages()
	powerpc/64s: Fix CONFIG_NUMA=n build due to create_section_mapping()
	x86/boot: Ignore NMIs during very early boot
	powerpc: pmd_move_must_withdraw() is only needed for CONFIG_TRANSPARENT_HUGEPAGE
	powerpc/lib: Validate size for vector operations
	x86/mce: Mark fatal MCE's page as poison to avoid panic in the kdump kernel
	perf/core: Fix narrow startup race when creating the perf nr_addr_filters sysfs file
	debugobjects: Stop accessing objects after releasing hash bucket lock
	regulator: core: Only increment use_count when enable_count changes
	audit: Send netlink ACK before setting connection in auditd_set
	ACPI: video: Add quirk for the Colorful X15 AT 23 Laptop
	PNP: ACPI: fix fortify warning
	ACPI: extlog: fix NULL pointer dereference check
	ACPI: NUMA: Fix the logic of getting the fake_pxm value
	PM / devfreq: Synchronize devfreq_monitor_[start/stop]
	ACPI: APEI: set memory failure flags as MF_ACTION_REQUIRED on synchronous events
	FS:JFS:UBSAN:array-index-out-of-bounds in dbAdjTree
	UBSAN: array-index-out-of-bounds in dtSplitRoot
	jfs: fix slab-out-of-bounds Read in dtSearch
	jfs: fix array-index-out-of-bounds in dbAdjTree
	jfs: fix uaf in jfs_evict_inode
	pstore/ram: Fix crash when setting number of cpus to an odd number
	crypto: octeontx2 - Fix cptvf driver cleanup
	erofs: fix ztailpacking for subpage compressed blocks
	crypto: stm32/crc32 - fix parsing list of devices
	afs: fix the usage of read_seqbegin_or_lock() in afs_lookup_volume_rcu()
	afs: fix the usage of read_seqbegin_or_lock() in afs_find_server*()
	rxrpc_find_service_conn_rcu: fix the usage of read_seqbegin_or_lock()
	jfs: fix array-index-out-of-bounds in diNewExt
	arch: consolidate arch_irq_work_raise prototypes
	s390/vfio-ap: fix sysfs status attribute for AP queue devices
	s390/ptrace: handle setting of fpc register correctly
	KVM: s390: fix setting of fpc register
	SUNRPC: Fix a suspicious RCU usage warning
	ecryptfs: Reject casefold directory inodes
	ext4: fix inconsistent between segment fstrim and full fstrim
	ext4: unify the type of flexbg_size to unsigned int
	ext4: remove unnecessary check from alloc_flex_gd()
	ext4: avoid online resizing failures due to oversized flex bg
	wifi: rt2x00: restart beacon queue when hardware reset
	selftests/bpf: satisfy compiler by having explicit return in btf test
	selftests/bpf: Fix pyperf180 compilation failure with clang18
	wifi: rt2x00: correct wrong BBP register in RxDCOC calibration
	selftests/bpf: Fix issues in setup_classid_environment()
	soc: xilinx: Fix for call trace due to the usage of smp_processor_id()
	soc: xilinx: fix unhandled SGI warning message
	scsi: lpfc: Fix possible file string name overflow when updating firmware
	PCI: Add no PM reset quirk for NVIDIA Spectrum devices
	bonding: return -ENOMEM instead of BUG in alb_upper_dev_walk
	net: usb: ax88179_178a: avoid two consecutive device resets
	scsi: mpi3mr: Add PCI checks where SAS5116 diverges from SAS4116
	scsi: arcmsr: Support new PCI device IDs 1883 and 1886
	ARM: dts: imx7d: Fix coresight funnel ports
	ARM: dts: imx7s: Fix lcdif compatible
	ARM: dts: imx7s: Fix nand-controller #size-cells
	wifi: ath9k: Fix potential array-index-out-of-bounds read in ath9k_htc_txstatus()
	wifi: ath11k: fix race due to setting ATH11K_FLAG_EXT_IRQ_ENABLED too early
	bpf: Check rcu_read_lock_trace_held() before calling bpf map helpers
	scsi: libfc: Don't schedule abort twice
	scsi: libfc: Fix up timeout error in fc_fcp_rec_error()
	bpf: Set uattr->batch.count as zero before batched update or deletion
	wifi: wfx: fix possible NULL pointer dereference in wfx_set_mfp_ap()
	ARM: dts: rockchip: fix rk3036 hdmi ports node
	ARM: dts: imx25/27-eukrea: Fix RTC node name
	ARM: dts: imx: Use flash@0,0 pattern
	ARM: dts: imx27: Fix sram node
	ARM: dts: imx1: Fix sram node
	net: phy: at803x: fix passing the wrong reference for config_intr
	ionic: pass opcode to devcmd_wait
	ionic: bypass firmware cmds when stuck in reset
	block/rnbd-srv: Check for unlikely string overflow
	ARM: dts: imx25: Fix the iim compatible string
	ARM: dts: imx25/27: Pass timing0
	ARM: dts: imx27-apf27dev: Fix LED name
	ARM: dts: imx23-sansa: Use preferred i2c-gpios properties
	ARM: dts: imx23/28: Fix the DMA controller node name
	scsi: hisi_sas: Set .phy_attached before notifing phyup event HISI_PHYE_PHY_UP_PM
	ice: fix ICE_AQ_VSI_Q_OPT_RSS_* register values
	net: atlantic: eliminate double free in error handling logic
	net: dsa: mv88e6xxx: Fix mv88e6352_serdes_get_stats error path
	block: prevent an integer overflow in bvec_try_merge_hw_page
	md: Whenassemble the array, consult the superblock of the freshest device
	arm64: dts: qcom: msm8996: Fix 'in-ports' is a required property
	arm64: dts: qcom: msm8998: Fix 'out-ports' is a required property
	ice: fix pre-shifted bit usage
	arm64: dts: amlogic: fix format for s4 uart node
	wifi: rtl8xxxu: Add additional USB IDs for RTL8192EU devices
	libbpf: Fix NULL pointer dereference in bpf_object__collect_prog_relos
	wifi: rtlwifi: rtl8723{be,ae}: using calculate_bit_shift()
	wifi: cfg80211: free beacon_ies when overridden from hidden BSS
	Bluetooth: qca: Set both WIDEBAND_SPEECH and LE_STATES quirks for QCA2066
	Bluetooth: hci_sync: fix BR/EDR wakeup bug
	Bluetooth: L2CAP: Fix possible multiple reject send
	net/smc: disable SEID on non-s390 archs where virtual ISM may be used
	bridge: cfm: fix enum typo in br_cc_ccm_tx_parse
	i40e: Fix VF disable behavior to block all traffic
	octeontx2-af: Fix max NPC MCAM entry check while validating ref_entry
	net: dsa: qca8k: put MDIO bus OF node on qca8k_mdio_register() failure
	f2fs: fix to check return value of f2fs_reserve_new_block()
	ALSA: hda: Refer to correct stream index at loops
	ASoC: doc: Fix undefined SND_SOC_DAPM_NOPM argument
	fast_dput(): handle underflows gracefully
	RDMA/IPoIB: Fix error code return in ipoib_mcast_join
	drm/panel-edp: Add override_edid_mode quirk for generic edp
	drm/bridge: anx7625: Fix Set HPD irq detect window to 2ms
	drm/amd/display: Fix tiled display misalignment
	f2fs: fix write pointers on zoned device after roll forward
	ASoC: amd: Add new dmi entries for acp5x platform
	drm/drm_file: fix use of uninitialized variable
	drm/framebuffer: Fix use of uninitialized variable
	drm/mipi-dsi: Fix detach call without attach
	media: stk1160: Fixed high volume of stk1160_dbg messages
	media: rockchip: rga: fix swizzling for RGB formats
	PCI: add INTEL_HDA_ARL to pci_ids.h
	ALSA: hda: Intel: add HDA_ARL PCI ID support
	media: rkisp1: Drop IRQF_SHARED
	media: rkisp1: Fix IRQ handler return values
	media: rkisp1: Store IRQ lines
	media: rkisp1: Fix IRQ disable race issue
	hwmon: (nct6775) Fix fan speed set failure in automatic mode
	f2fs: fix to tag gcing flag on page during block migration
	drm/exynos: Call drm_atomic_helper_shutdown() at shutdown/unbind time
	IB/ipoib: Fix mcast list locking
	media: amphion: remove mutext lock in condition of wait_event
	media: ddbridge: fix an error code problem in ddb_probe
	media: i2c: imx335: Fix hblank min/max values
	drm/amd/display: For prefetch mode > 0, extend prefetch if possible
	drm/msm/dpu: Ratelimit framedone timeout msgs
	drm/msm/dpu: fix writeback programming for YUV cases
	drm/amdgpu: fix ftrace event amdgpu_bo_move always move on same heap
	clk: hi3620: Fix memory leak in hi3620_mmc_clk_init()
	clk: mmp: pxa168: Fix memory leak in pxa168_clk_init()
	watchdog: it87_wdt: Keep WDTCTRL bit 3 unmodified for IT8784/IT8786
	drm/amd/display: make flip_timestamp_in_us a 64-bit variable
	clk: imx: clk-imx8qxp: fix LVDS bypass, pixel and phy clocks
	drm/amdgpu: Fix ecc irq enable/disable unpaired
	drm/amdgpu: Let KFD sync with VM fences
	drm/amdgpu: Fix '*fw' from request_firmware() not released in 'amdgpu_ucode_request()'
	drm/amdgpu: Drop 'fence' check in 'to_amdgpu_amdkfd_fence()'
	drm/amdkfd: Fix iterator used outside loop in 'kfd_add_peer_prop()'
	ALSA: hda/conexant: Fix headset auto detect fail in cx8070 and SN6140
	leds: trigger: panic: Don't register panic notifier if creating the trigger failed
	um: Fix naming clash between UML and scheduler
	um: Don't use vfprintf() for os_info()
	um: net: Fix return type of uml_net_start_xmit()
	um: time-travel: fix time corruption
	i3c: master: cdns: Update maximum prescaler value for i2c clock
	xen/gntdev: Fix the abuse of underlying struct page in DMA-buf import
	mfd: ti_am335x_tscadc: Fix TI SoC dependencies
	mailbox: arm_mhuv2: Fix a bug for mhuv2_sender_interrupt
	PCI: Only override AMD USB controller if required
	PCI: switchtec: Fix stdev_release() crash after surprise hot remove
	perf cs-etm: Bump minimum OpenCSD version to ensure a bugfix is present
	usb: hub: Replace hardcoded quirk value with BIT() macro
	usb: hub: Add quirk to decrease IN-ep poll interval for Microchip USB491x hub
	selftests/sgx: Fix linker script asserts
	tty: allow TIOCSLCKTRMIOS with CAP_CHECKPOINT_RESTORE
	fs/kernfs/dir: obey S_ISGID
	spmi: mediatek: Fix UAF on device remove
	PCI: Fix 64GT/s effective data rate calculation
	PCI/AER: Decode Requester ID when no error info found
	9p: Fix initialisation of netfs_inode for 9p
	misc: lis3lv02d_i2c: Add missing setting of the reg_ctrl callback
	libsubcmd: Fix memory leak in uniq()
	drm/amdkfd: Fix lock dependency warning
	drm/amdkfd: Fix lock dependency warning with srcu
	virtio_net: Fix "‘%d’ directive writing between 1 and 11 bytes into a region of size 10" warnings
	blk-mq: fix IO hang from sbitmap wakeup race
	ceph: reinitialize mds feature bit even when session in open
	ceph: fix deadlock or deadcode of misusing dget()
	ceph: fix invalid pointer access if get_quota_realm return ERR_PTR
	drm/amd/powerplay: Fix kzalloc parameter 'ATOM_Tonga_PPM_Table' in 'get_platform_power_management_table()'
	drm/amdgpu: Fix with right return code '-EIO' in 'amdgpu_gmc_vram_checking()'
	drm/amdgpu: Release 'adev->pm.fw' before return in 'amdgpu_device_need_post()'
	drm/amdkfd: Fix 'node' NULL check in 'svm_range_get_range_boundaries()'
	perf: Fix the nr_addr_filters fix
	wifi: cfg80211: fix RCU dereference in __cfg80211_bss_update
	drm: using mul_u32_u32() requires linux/math64.h
	scsi: isci: Fix an error code problem in isci_io_request_build()
	regulator: ti-abb: don't use devm_platform_ioremap_resource_byname for shared interrupt register
	scsi: core: Move scsi_host_busy() out of host lock for waking up EH handler
	HID: hidraw: fix a problem of memory leak in hidraw_release()
	selftests: net: give more time for GRO aggregation
	ip6_tunnel: make sure to pull inner header in __ip6_tnl_rcv()
	ipv4: raw: add drop reasons
	ipmr: fix kernel panic when forwarding mcast packets
	net: lan966x: Fix port configuration when using SGMII interface
	tcp: add sanity checks to rx zerocopy
	ixgbe: Refactor returning internal error codes
	ixgbe: Refactor overtemp event handling
	ixgbe: Fix an error handling path in ixgbe_read_iosf_sb_reg_x550()
	net: dsa: qca8k: fix illegal usage of GPIO
	ipv6: Ensure natural alignment of const ipv6 loopback and router addresses
	llc: call sock_orphan() at release time
	bridge: mcast: fix disabled snooping after long uptime
	selftests: net: add missing config for GENEVE
	netfilter: conntrack: correct window scaling with retransmitted SYN
	netfilter: nf_tables: restrict tunnel object to NFPROTO_NETDEV
	netfilter: nf_log: replace BUG_ON by WARN_ON_ONCE when putting logger
	netfilter: nft_ct: sanitize layer 3 and 4 protocol number in custom expectations
	net: ipv4: fix a memleak in ip_setup_cork
	af_unix: fix lockdep positive in sk_diag_dump_icons()
	selftests: net: fix available tunnels detection
	net: sysfs: Fix /sys/class/net/<iface> path
	selftests: team: Add missing config options
	selftests: bonding: Check initial state
	arm64: irq: set the correct node for shadow call stack
	mm, kmsan: fix infinite recursion due to RCU critical section
	Revert "drm/amd/display: Disable PSR-SU on Parade 0803 TCON again"
	drm/msm/dsi: Enable runtime PM
	LoongArch/smp: Call rcutree_report_cpu_starting() at tlb_init()
	gve: Fix use-after-free vulnerability
	bonding: remove print in bond_verify_device_path
	ASoC: codecs: lpass-wsa-macro: fix compander volume hack
	ASoC: codecs: wsa883x: fix PA volume control
	drm/amdgpu: Fix missing error code in 'gmc_v6/7/8/9_0_hw_init()'
	Linux 6.1.77

Change-Id: I8d69fc7831db64d8a0fad88a318f03052f8bbf69
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>

@@ -1,4 +1,4 @@
-What:		/sys/class/<iface>/queues/rx-<queue>/rps_cpus
+What:		/sys/class/net/<iface>/queues/rx-<queue>/rps_cpus
 Date:		March 2010
 KernelVersion:	2.6.35
 Contact:	netdev@vger.kernel.org
@@ -8,7 +8,7 @@ Description:
 		network device queue. Possible values depend on the number
 		of available CPU(s) in the system.
 
-What:		/sys/class/<iface>/queues/rx-<queue>/rps_flow_cnt
+What:		/sys/class/net/<iface>/queues/rx-<queue>/rps_flow_cnt
 Date:		April 2010
 KernelVersion:	2.6.35
 Contact:	netdev@vger.kernel.org
@@ -16,7 +16,7 @@ Description:
 		Number of Receive Packet Steering flows being currently
 		processed by this particular network device receive queue.
 
-What:		/sys/class/<iface>/queues/tx-<queue>/tx_timeout
+What:		/sys/class/net/<iface>/queues/tx-<queue>/tx_timeout
 Date:		November 2011
 KernelVersion:	3.3
 Contact:	netdev@vger.kernel.org
@@ -24,7 +24,7 @@ Description:
 		Indicates the number of transmit timeout events seen by this
 		network interface transmit queue.
 
-What:		/sys/class/<iface>/queues/tx-<queue>/tx_maxrate
+What:		/sys/class/net/<iface>/queues/tx-<queue>/tx_maxrate
 Date:		March 2015
 KernelVersion:	4.1
 Contact:	netdev@vger.kernel.org
@@ -32,7 +32,7 @@ Description:
 		A Mbps max-rate set for the queue, a value of zero means disabled,
 		default is disabled.
 
-What:		/sys/class/<iface>/queues/tx-<queue>/xps_cpus
+What:		/sys/class/net/<iface>/queues/tx-<queue>/xps_cpus
 Date:		November 2010
 KernelVersion:	2.6.38
 Contact:	netdev@vger.kernel.org
@@ -42,7 +42,7 @@ Description:
 		network device transmit queue. Possible vaules depend on the
 		number of available CPU(s) in the system.
 
-What:		/sys/class/<iface>/queues/tx-<queue>/xps_rxqs
+What:		/sys/class/net/<iface>/queues/tx-<queue>/xps_rxqs
 Date:		June 2018
 KernelVersion:	4.18.0
 Contact:	netdev@vger.kernel.org
@@ -53,7 +53,7 @@ Description:
 		number of available receive queue(s) in the network device.
 		Default is disabled.
 
-What:		/sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/hold_time
+What:		/sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/hold_time
 Date:		November 2011
 KernelVersion:	3.3
 Contact:	netdev@vger.kernel.org
@@ -62,7 +62,7 @@ Description:
 		of this particular network device transmit queue.
 		Default value is 1000.
 
-What:		/sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/inflight
+What:		/sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/inflight
 Date:		November 2011
 KernelVersion:	3.3
 Contact:	netdev@vger.kernel.org
@@ -70,7 +70,7 @@ Description:
 		Indicates the number of bytes (objects) in flight on this
 		network device transmit queue.
 
-What:		/sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/limit
+What:		/sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/limit
 Date:		November 2011
 KernelVersion:	3.3
 Contact:	netdev@vger.kernel.org
@@ -79,7 +79,7 @@ Description:
 		on this network device transmit queue. This value is clamped
 		to be within the bounds defined by limit_max and limit_min.
 
-What:		/sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/limit_max
+What:		/sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/limit_max
 Date:		November 2011
 KernelVersion:	3.3
 Contact:	netdev@vger.kernel.org
@@ -88,7 +88,7 @@ Description:
 		queued on this network device transmit queue. See
 		include/linux/dynamic_queue_limits.h for the default value.
 
-What:		/sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/limit_min
+What:		/sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/limit_min
 Date:		November 2011
 KernelVersion:	3.3
 Contact:	netdev@vger.kernel.org
@@ -234,7 +234,7 @@ corresponding soft power control. In this case it is necessary to create
 a virtual widget - a widget with no control bits e.g.
 ::
 
-  SND_SOC_DAPM_MIXER("AC97 Mixer", SND_SOC_DAPM_NOPM, 0, 0, NULL, 0),
+  SND_SOC_DAPM_MIXER("AC97 Mixer", SND_SOC_NOPM, 0, 0, NULL, 0),
 
 This can be used to merge to signal paths together in software.
 
Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 6
 PATCHLEVEL = 1
-SUBLEVEL = 76
+SUBLEVEL = 77
 EXTRAVERSION =
 NAME = Curry Ramen
 
@@ -65,7 +65,7 @@ &weim {
 	pinctrl-0 = <&pinctrl_weim>;
 	status = "okay";
 
-	nor: nor@0,0 {
+	nor: flash@0,0 {
 		compatible = "cfi-flash";
 		reg = <0 0x00000000 0x02000000>;
 		bank-width = <4>;
|
@ -45,7 +45,7 @@ &weim {
|
|||||||
pinctrl-0 = <&pinctrl_weim>;
|
pinctrl-0 = <&pinctrl_weim>;
|
||||||
status = "okay";
|
status = "okay";
|
||||||
|
|
||||||
nor: nor@0,0 {
|
nor: flash@0,0 {
|
||||||
compatible = "cfi-flash";
|
compatible = "cfi-flash";
|
||||||
reg = <0 0x00000000 0x02000000>;
|
reg = <0 0x00000000 0x02000000>;
|
||||||
bank-width = <2>;
|
bank-width = <2>;
|
||||||
|
@ -268,9 +268,12 @@ weim: weim@220000 {
|
|||||||
status = "disabled";
|
status = "disabled";
|
||||||
};
|
};
|
||||||
|
|
||||||
esram: esram@300000 {
|
esram: sram@300000 {
|
||||||
compatible = "mmio-sram";
|
compatible = "mmio-sram";
|
||||||
reg = <0x00300000 0x20000>;
|
reg = <0x00300000 0x20000>;
|
||||||
|
ranges = <0 0x00300000 0x20000>;
|
||||||
|
#address-cells = <1>;
|
||||||
|
#size-cells = <1>;
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
|
@ -175,10 +175,8 @@ i2c-0 {
|
|||||||
#address-cells = <1>;
|
#address-cells = <1>;
|
||||||
#size-cells = <0>;
|
#size-cells = <0>;
|
||||||
compatible = "i2c-gpio";
|
compatible = "i2c-gpio";
|
||||||
gpios = <
|
sda-gpios = <&gpio1 24 0>;
|
||||||
&gpio1 24 0 /* SDA */
|
scl-gpios = <&gpio1 22 0>;
|
||||||
&gpio1 22 0 /* SCL */
|
|
||||||
>;
|
|
||||||
i2c-gpio,delay-us = <2>; /* ~100 kHz */
|
i2c-gpio,delay-us = <2>; /* ~100 kHz */
|
||||||
};
|
};
|
||||||
|
|
||||||
@ -186,10 +184,8 @@ i2c-1 {
|
|||||||
#address-cells = <1>;
|
#address-cells = <1>;
|
||||||
#size-cells = <0>;
|
#size-cells = <0>;
|
||||||
compatible = "i2c-gpio";
|
compatible = "i2c-gpio";
|
||||||
gpios = <
|
sda-gpios = <&gpio0 31 0>;
|
||||||
&gpio0 31 0 /* SDA */
|
scl-gpios = <&gpio0 30 0>;
|
||||||
&gpio0 30 0 /* SCL */
|
|
||||||
>;
|
|
||||||
i2c-gpio,delay-us = <2>; /* ~100 kHz */
|
i2c-gpio,delay-us = <2>; /* ~100 kHz */
|
||||||
|
|
||||||
touch: touch@20 {
|
touch: touch@20 {
|
||||||
|
@ -414,7 +414,7 @@ emi@80020000 {
|
|||||||
status = "disabled";
|
status = "disabled";
|
||||||
};
|
};
|
||||||
|
|
||||||
dma_apbx: dma-apbx@80024000 {
|
dma_apbx: dma-controller@80024000 {
|
||||||
compatible = "fsl,imx23-dma-apbx";
|
compatible = "fsl,imx23-dma-apbx";
|
||||||
reg = <0x80024000 0x2000>;
|
reg = <0x80024000 0x2000>;
|
||||||
interrupts = <7 5 9 26
|
interrupts = <7 5 9 26
|
||||||
|
@ -27,7 +27,7 @@ &i2c1 {
|
|||||||
pinctrl-0 = <&pinctrl_i2c1>;
|
pinctrl-0 = <&pinctrl_i2c1>;
|
||||||
status = "okay";
|
status = "okay";
|
||||||
|
|
||||||
pcf8563@51 {
|
rtc@51 {
|
||||||
compatible = "nxp,pcf8563";
|
compatible = "nxp,pcf8563";
|
||||||
reg = <0x51>;
|
reg = <0x51>;
|
||||||
};
|
};
|
||||||
|
@ -16,7 +16,7 @@ cmo_qvga: display {
|
|||||||
bus-width = <18>;
|
bus-width = <18>;
|
||||||
display-timings {
|
display-timings {
|
||||||
native-mode = <&qvga_timings>;
|
native-mode = <&qvga_timings>;
|
||||||
qvga_timings: 320x240 {
|
qvga_timings: timing0 {
|
||||||
clock-frequency = <6500000>;
|
clock-frequency = <6500000>;
|
||||||
hactive = <320>;
|
hactive = <320>;
|
||||||
vactive = <240>;
|
vactive = <240>;
|
||||||
|
@ -16,7 +16,7 @@ dvi_svga: display {
|
|||||||
bus-width = <18>;
|
bus-width = <18>;
|
||||||
display-timings {
|
display-timings {
|
||||||
native-mode = <&dvi_svga_timings>;
|
native-mode = <&dvi_svga_timings>;
|
||||||
dvi_svga_timings: 800x600 {
|
dvi_svga_timings: timing0 {
|
||||||
clock-frequency = <40000000>;
|
clock-frequency = <40000000>;
|
||||||
hactive = <800>;
|
hactive = <800>;
|
||||||
vactive = <600>;
|
vactive = <600>;
|
||||||
|
@ -16,7 +16,7 @@ dvi_vga: display {
|
|||||||
bus-width = <18>;
|
bus-width = <18>;
|
||||||
display-timings {
|
display-timings {
|
||||||
native-mode = <&dvi_vga_timings>;
|
native-mode = <&dvi_vga_timings>;
|
||||||
dvi_vga_timings: 640x480 {
|
dvi_vga_timings: timing0 {
|
||||||
clock-frequency = <31250000>;
|
clock-frequency = <31250000>;
|
||||||
hactive = <640>;
|
hactive = <640>;
|
||||||
vactive = <480>;
|
vactive = <480>;
|
||||||
|
@ -78,7 +78,7 @@ wvga: display {
|
|||||||
bus-width = <18>;
|
bus-width = <18>;
|
||||||
display-timings {
|
display-timings {
|
||||||
native-mode = <&wvga_timings>;
|
native-mode = <&wvga_timings>;
|
||||||
wvga_timings: 640x480 {
|
wvga_timings: timing0 {
|
||||||
hactive = <640>;
|
hactive = <640>;
|
||||||
vactive = <480>;
|
vactive = <480>;
|
||||||
hback-porch = <45>;
|
hback-porch = <45>;
|
||||||
|
@ -543,7 +543,7 @@ pwm1: pwm@53fe0000 {
|
|||||||
};
|
};
|
||||||
|
|
||||||
iim: efuse@53ff0000 {
|
iim: efuse@53ff0000 {
|
||||||
compatible = "fsl,imx25-iim", "fsl,imx27-iim";
|
compatible = "fsl,imx25-iim";
|
||||||
reg = <0x53ff0000 0x4000>;
|
reg = <0x53ff0000 0x4000>;
|
||||||
interrupts = <19>;
|
interrupts = <19>;
|
||||||
clocks = <&clks 99>;
|
clocks = <&clks 99>;
|
||||||
|
@ -16,7 +16,7 @@ display: display {
|
|||||||
fsl,pcr = <0xfae80083>; /* non-standard but required */
|
fsl,pcr = <0xfae80083>; /* non-standard but required */
|
||||||
display-timings {
|
display-timings {
|
||||||
native-mode = <&timing0>;
|
native-mode = <&timing0>;
|
||||||
timing0: 800x480 {
|
timing0: timing0 {
|
||||||
clock-frequency = <33000033>;
|
clock-frequency = <33000033>;
|
||||||
hactive = <800>;
|
hactive = <800>;
|
||||||
vactive = <480>;
|
vactive = <480>;
|
||||||
@ -47,7 +47,7 @@ leds {
|
|||||||
pinctrl-names = "default";
|
pinctrl-names = "default";
|
||||||
pinctrl-0 = <&pinctrl_gpio_leds>;
|
pinctrl-0 = <&pinctrl_gpio_leds>;
|
||||||
|
|
||||||
user {
|
led-user {
|
||||||
label = "Heartbeat";
|
label = "Heartbeat";
|
||||||
gpios = <&gpio6 14 GPIO_ACTIVE_HIGH>;
|
gpios = <&gpio6 14 GPIO_ACTIVE_HIGH>;
|
||||||
linux,default-trigger = "heartbeat";
|
linux,default-trigger = "heartbeat";
|
||||||
|
@ -33,7 +33,7 @@ &i2c1 {
|
|||||||
pinctrl-0 = <&pinctrl_i2c1>;
|
pinctrl-0 = <&pinctrl_i2c1>;
|
||||||
status = "okay";
|
status = "okay";
|
||||||
|
|
||||||
pcf8563@51 {
|
rtc@51 {
|
||||||
compatible = "nxp,pcf8563";
|
compatible = "nxp,pcf8563";
|
||||||
reg = <0x51>;
|
reg = <0x51>;
|
||||||
};
|
};
|
||||||
@ -90,7 +90,7 @@ &usbotg {
|
|||||||
&weim {
|
&weim {
|
||||||
status = "okay";
|
status = "okay";
|
||||||
|
|
||||||
nor: nor@0,0 {
|
nor: flash@0,0 {
|
||||||
#address-cells = <1>;
|
#address-cells = <1>;
|
||||||
#size-cells = <1>;
|
#size-cells = <1>;
|
||||||
compatible = "cfi-flash";
|
compatible = "cfi-flash";
|
||||||
|
@ -16,7 +16,7 @@ display0: CMO-QVGA {
|
|||||||
|
|
||||||
display-timings {
|
display-timings {
|
||||||
native-mode = <&timing0>;
|
native-mode = <&timing0>;
|
||||||
timing0: 320x240 {
|
timing0: timing0 {
|
||||||
clock-frequency = <6500000>;
|
clock-frequency = <6500000>;
|
||||||
hactive = <320>;
|
hactive = <320>;
|
||||||
vactive = <240>;
|
vactive = <240>;
|
||||||
|
@ -19,7 +19,7 @@ display: display {
|
|||||||
fsl,pcr = <0xf0c88080>; /* non-standard but required */
|
fsl,pcr = <0xf0c88080>; /* non-standard but required */
|
||||||
display-timings {
|
display-timings {
|
||||||
native-mode = <&timing0>;
|
native-mode = <&timing0>;
|
||||||
timing0: 640x480 {
|
timing0: timing0 {
|
||||||
hactive = <640>;
|
hactive = <640>;
|
||||||
vactive = <480>;
|
vactive = <480>;
|
||||||
hback-porch = <112>;
|
hback-porch = <112>;
|
||||||
|
@ -19,7 +19,7 @@ display0: LQ035Q7 {
|
|||||||
|
|
||||||
display-timings {
|
display-timings {
|
||||||
native-mode = <&timing0>;
|
native-mode = <&timing0>;
|
||||||
timing0: 240x320 {
|
timing0: timing0 {
|
||||||
clock-frequency = <5500000>;
|
clock-frequency = <5500000>;
|
||||||
hactive = <240>;
|
hactive = <240>;
|
||||||
vactive = <320>;
|
vactive = <320>;
|
||||||
|
@ -322,7 +322,7 @@ &usbotg {
|
|||||||
&weim {
|
&weim {
|
||||||
status = "okay";
|
status = "okay";
|
||||||
|
|
||||||
nor: nor@0,0 {
|
nor: flash@0,0 {
|
||||||
compatible = "cfi-flash";
|
compatible = "cfi-flash";
|
||||||
reg = <0 0x00000000 0x02000000>;
|
reg = <0 0x00000000 0x02000000>;
|
||||||
bank-width = <2>;
|
bank-width = <2>;
|
||||||
|
@ -588,6 +588,9 @@ weim: weim@d8002000 {
|
|||||||
iram: sram@ffff4c00 {
|
iram: sram@ffff4c00 {
|
||||||
compatible = "mmio-sram";
|
compatible = "mmio-sram";
|
||||||
reg = <0xffff4c00 0xb400>;
|
reg = <0xffff4c00 0xb400>;
|
||||||
|
ranges = <0 0xffff4c00 0xb400>;
|
||||||
|
#address-cells = <1>;
|
||||||
|
#size-cells = <1>;
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
|
@ -994,7 +994,7 @@ etm: etm@80022000 {
|
|||||||
status = "disabled";
|
status = "disabled";
|
||||||
};
|
};
|
||||||
|
|
||||||
dma_apbx: dma-apbx@80024000 {
|
dma_apbx: dma-controller@80024000 {
|
||||||
compatible = "fsl,imx28-dma-apbx";
|
compatible = "fsl,imx28-dma-apbx";
|
||||||
reg = <0x80024000 0x2000>;
|
reg = <0x80024000 0x2000>;
|
||||||
interrupts = <78 79 66 0
|
interrupts = <78 79 66 0
|
||||||
|
@ -208,9 +208,6 @@ fec2: ethernet@30bf0000 {
|
|||||||
};
|
};
|
||||||
|
|
||||||
&ca_funnel_in_ports {
|
&ca_funnel_in_ports {
|
||||||
#address-cells = <1>;
|
|
||||||
#size-cells = <0>;
|
|
||||||
|
|
||||||
port@1 {
|
port@1 {
|
||||||
reg = <1>;
|
reg = <1>;
|
||||||
ca_funnel_in_port1: endpoint {
|
ca_funnel_in_port1: endpoint {
|
||||||
|
@ -190,7 +190,11 @@ funnel@30041000 {
|
|||||||
clock-names = "apb_pclk";
|
clock-names = "apb_pclk";
|
||||||
|
|
||||||
ca_funnel_in_ports: in-ports {
|
ca_funnel_in_ports: in-ports {
|
||||||
port {
|
#address-cells = <1>;
|
||||||
|
#size-cells = <0>;
|
||||||
|
|
||||||
|
port@0 {
|
||||||
|
reg = <0>;
|
||||||
ca_funnel_in_port0: endpoint {
|
ca_funnel_in_port0: endpoint {
|
||||||
remote-endpoint = <&etm0_out_port>;
|
remote-endpoint = <&etm0_out_port>;
|
||||||
};
|
};
|
||||||
@ -814,7 +818,7 @@ csi_from_csi_mux: endpoint {
|
|||||||
};
|
};
|
||||||
|
|
||||||
lcdif: lcdif@30730000 {
|
lcdif: lcdif@30730000 {
|
||||||
compatible = "fsl,imx7d-lcdif", "fsl,imx28-lcdif";
|
compatible = "fsl,imx7d-lcdif", "fsl,imx6sx-lcdif";
|
||||||
reg = <0x30730000 0x10000>;
|
reg = <0x30730000 0x10000>;
|
||||||
interrupts = <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
|
interrupts = <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
|
||||||
clocks = <&clks IMX7D_LCDIF_PIXEL_ROOT_CLK>,
|
clocks = <&clks IMX7D_LCDIF_PIXEL_ROOT_CLK>,
|
||||||
@ -1279,7 +1283,7 @@ dma_apbh: dma-apbh@33000000 {
|
|||||||
gpmi: nand-controller@33002000{
|
gpmi: nand-controller@33002000{
|
||||||
compatible = "fsl,imx7d-gpmi-nand";
|
compatible = "fsl,imx7d-gpmi-nand";
|
||||||
#address-cells = <1>;
|
#address-cells = <1>;
|
||||||
#size-cells = <1>;
|
#size-cells = <0>;
|
||||||
reg = <0x33002000 0x2000>, <0x33004000 0x4000>;
|
reg = <0x33002000 0x2000>, <0x33004000 0x4000>;
|
||||||
reg-names = "gpmi-nand", "bch";
|
reg-names = "gpmi-nand", "bch";
|
||||||
interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
|
interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
|
||||||
|
@ -402,12 +402,20 @@ hdmi: hdmi@20034000 {
|
|||||||
pinctrl-0 = <&hdmi_ctl>;
|
pinctrl-0 = <&hdmi_ctl>;
|
||||||
status = "disabled";
|
status = "disabled";
|
||||||
|
|
||||||
hdmi_in: port {
|
ports {
|
||||||
#address-cells = <1>;
|
#address-cells = <1>;
|
||||||
#size-cells = <0>;
|
#size-cells = <0>;
|
||||||
hdmi_in_vop: endpoint@0 {
|
|
||||||
|
hdmi_in: port@0 {
|
||||||
reg = <0>;
|
reg = <0>;
|
||||||
remote-endpoint = <&vop_out_hdmi>;
|
|
||||||
|
hdmi_in_vop: endpoint {
|
||||||
|
remote-endpoint = <&vop_out_hdmi>;
|
||||||
|
};
|
||||||
|
};
|
||||||
|
|
||||||
|
hdmi_out: port@1 {
|
||||||
|
reg = <1>;
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
|
@ -9,6 +9,4 @@ static inline bool arch_irq_work_has_interrupt(void)
|
|||||||
return is_smp();
|
return is_smp();
|
||||||
}
|
}
|
||||||
|
|
||||||
extern void arch_irq_work_raise(void);
|
|
||||||
|
|
||||||
#endif /* _ASM_ARM_IRQ_WORK_H */
|
#endif /* _ASM_ARM_IRQ_WORK_H */
|
||||||
|
@ -15,7 +15,7 @@ / {
|
|||||||
#size-cells = <2>;
|
#size-cells = <2>;
|
||||||
|
|
||||||
aliases {
|
aliases {
|
||||||
serial0 = &uart_B;
|
serial0 = &uart_b;
|
||||||
};
|
};
|
||||||
|
|
||||||
memory@0 {
|
memory@0 {
|
||||||
@ -25,6 +25,6 @@ memory@0 {
|
|||||||
|
|
||||||
};
|
};
|
||||||
|
|
||||||
&uart_B {
|
&uart_b {
|
||||||
status = "okay";
|
status = "okay";
|
||||||
};
|
};
|
||||||
|
@ -118,14 +118,14 @@ gpio_intc: interrupt-controller@4080 {
|
|||||||
<10 11 12 13 14 15 16 17 18 19 20 21>;
|
<10 11 12 13 14 15 16 17 18 19 20 21>;
|
||||||
};
|
};
|
||||||
|
|
||||||
uart_B: serial@7a000 {
|
uart_b: serial@7a000 {
|
||||||
compatible = "amlogic,meson-s4-uart",
|
compatible = "amlogic,meson-s4-uart",
|
||||||
"amlogic,meson-ao-uart";
|
"amlogic,meson-ao-uart";
|
||||||
reg = <0x0 0x7a000 0x0 0x18>;
|
reg = <0x0 0x7a000 0x0 0x18>;
|
||||||
interrupts = <GIC_SPI 169 IRQ_TYPE_EDGE_RISING>;
|
interrupts = <GIC_SPI 169 IRQ_TYPE_EDGE_RISING>;
|
||||||
status = "disabled";
|
|
||||||
clocks = <&xtal>, <&xtal>, <&xtal>;
|
clocks = <&xtal>, <&xtal>, <&xtal>;
|
||||||
clock-names = "xtal", "pclk", "baud";
|
clock-names = "xtal", "pclk", "baud";
|
||||||
|
status = "disabled";
|
||||||
};
|
};
|
||||||
|
|
||||||
reset: reset-controller@2000 {
|
reset: reset-controller@2000 {
|
||||||
|
@ -390,6 +390,19 @@ memory@80000000 {
|
|||||||
reg = <0x0 0x80000000 0x0 0x0>;
|
reg = <0x0 0x80000000 0x0 0x0>;
|
||||||
};
|
};
|
||||||
|
|
||||||
|
etm {
|
||||||
|
compatible = "qcom,coresight-remote-etm";
|
||||||
|
|
||||||
|
out-ports {
|
||||||
|
port {
|
||||||
|
modem_etm_out_funnel_in2: endpoint {
|
||||||
|
remote-endpoint =
|
||||||
|
<&funnel_in2_in_modem_etm>;
|
||||||
|
};
|
||||||
|
};
|
||||||
|
};
|
||||||
|
};
|
||||||
|
|
||||||
psci {
|
psci {
|
||||||
compatible = "arm,psci-1.0";
|
compatible = "arm,psci-1.0";
|
||||||
method = "smc";
|
method = "smc";
|
||||||
@ -2565,6 +2578,14 @@ funnel@3023000 {
|
|||||||
clocks = <&rpmcc RPM_QDSS_CLK>, <&rpmcc RPM_QDSS_A_CLK>;
|
clocks = <&rpmcc RPM_QDSS_CLK>, <&rpmcc RPM_QDSS_A_CLK>;
|
||||||
clock-names = "apb_pclk", "atclk";
|
clock-names = "apb_pclk", "atclk";
|
||||||
|
|
||||||
|
in-ports {
|
||||||
|
port {
|
||||||
|
funnel_in2_in_modem_etm: endpoint {
|
||||||
|
remote-endpoint =
|
||||||
|
<&modem_etm_out_funnel_in2>;
|
||||||
|
};
|
||||||
|
};
|
||||||
|
};
|
||||||
|
|
||||||
out-ports {
|
out-ports {
|
||||||
port {
|
port {
|
||||||
|
@ -1903,9 +1903,11 @@ etm5: etm@7c40000 {
|
|||||||
|
|
||||||
cpu = <&CPU4>;
|
cpu = <&CPU4>;
|
||||||
|
|
||||||
port{
|
out-ports {
|
||||||
etm4_out: endpoint {
|
port{
|
||||||
remote-endpoint = <&apss_funnel_in4>;
|
etm4_out: endpoint {
|
||||||
|
remote-endpoint = <&apss_funnel_in4>;
|
||||||
|
};
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
@ -1920,9 +1922,11 @@ etm6: etm@7d40000 {
|
|||||||
|
|
||||||
cpu = <&CPU5>;
|
cpu = <&CPU5>;
|
||||||
|
|
||||||
port{
|
out-ports {
|
||||||
etm5_out: endpoint {
|
port{
|
||||||
remote-endpoint = <&apss_funnel_in5>;
|
etm5_out: endpoint {
|
||||||
|
remote-endpoint = <&apss_funnel_in5>;
|
||||||
|
};
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
@ -1937,9 +1941,11 @@ etm7: etm@7e40000 {
|
|||||||
|
|
||||||
cpu = <&CPU6>;
|
cpu = <&CPU6>;
|
||||||
|
|
||||||
port{
|
out-ports {
|
||||||
etm6_out: endpoint {
|
port{
|
||||||
remote-endpoint = <&apss_funnel_in6>;
|
etm6_out: endpoint {
|
||||||
|
remote-endpoint = <&apss_funnel_in6>;
|
||||||
|
};
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
@ -1954,9 +1960,11 @@ etm8: etm@7f40000 {
|
|||||||
|
|
||||||
cpu = <&CPU7>;
|
cpu = <&CPU7>;
|
||||||
|
|
||||||
port{
|
out-ports {
|
||||||
etm7_out: endpoint {
|
port{
|
||||||
remote-endpoint = <&apss_funnel_in7>;
|
etm7_out: endpoint {
|
||||||
|
remote-endpoint = <&apss_funnel_in7>;
|
||||||
|
};
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
|
@ -2,8 +2,6 @@
|
|||||||
#ifndef __ASM_IRQ_WORK_H
|
#ifndef __ASM_IRQ_WORK_H
|
||||||
#define __ASM_IRQ_WORK_H
|
#define __ASM_IRQ_WORK_H
|
||||||
|
|
||||||
extern void arch_irq_work_raise(void);
|
|
||||||
|
|
||||||
static inline bool arch_irq_work_has_interrupt(void)
|
static inline bool arch_irq_work_has_interrupt(void)
|
||||||
{
|
{
|
||||||
return true;
|
return true;
|
||||||
|
@ -22,6 +22,7 @@
|
|||||||
#include <linux/vmalloc.h>
|
#include <linux/vmalloc.h>
|
||||||
#include <asm/daifflags.h>
|
#include <asm/daifflags.h>
|
||||||
#include <asm/exception.h>
|
#include <asm/exception.h>
|
||||||
|
#include <asm/numa.h>
|
||||||
#include <asm/vmap_stack.h>
|
#include <asm/vmap_stack.h>
|
||||||
#include <asm/softirq_stack.h>
|
#include <asm/softirq_stack.h>
|
||||||
|
|
||||||
@ -46,17 +47,17 @@ static void init_irq_scs(void)
|
|||||||
|
|
||||||
for_each_possible_cpu(cpu)
|
for_each_possible_cpu(cpu)
|
||||||
per_cpu(irq_shadow_call_stack_ptr, cpu) =
|
per_cpu(irq_shadow_call_stack_ptr, cpu) =
|
||||||
scs_alloc(cpu_to_node(cpu));
|
scs_alloc(early_cpu_to_node(cpu));
|
||||||
}
|
}
|
||||||
|
|
||||||
#ifdef CONFIG_VMAP_STACK
|
#ifdef CONFIG_VMAP_STACK
|
||||||
static void init_irq_stacks(void)
|
static void __init init_irq_stacks(void)
|
||||||
{
|
{
|
||||||
int cpu;
|
int cpu;
|
||||||
unsigned long *p;
|
unsigned long *p;
|
||||||
|
|
||||||
for_each_possible_cpu(cpu) {
|
for_each_possible_cpu(cpu) {
|
||||||
p = arch_alloc_vmap_stack(IRQ_STACK_SIZE, cpu_to_node(cpu));
|
p = arch_alloc_vmap_stack(IRQ_STACK_SIZE, early_cpu_to_node(cpu));
|
||||||
per_cpu(irq_stack_ptr, cpu) = p;
|
per_cpu(irq_stack_ptr, cpu) = p;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
@ -168,7 +168,11 @@ armv8pmu_events_sysfs_show(struct device *dev,
|
|||||||
PMU_EVENT_ATTR_ID(name, armv8pmu_events_sysfs_show, config)
|
PMU_EVENT_ATTR_ID(name, armv8pmu_events_sysfs_show, config)
|
||||||
|
|
||||||
static struct attribute *armv8_pmuv3_event_attrs[] = {
|
static struct attribute *armv8_pmuv3_event_attrs[] = {
|
||||||
ARMV8_EVENT_ATTR(sw_incr, ARMV8_PMUV3_PERFCTR_SW_INCR),
|
/*
|
||||||
|
* Don't expose the sw_incr event in /sys. It's not usable as writes to
|
||||||
|
* PMSWINC_EL0 will trap as PMUSERENR.{SW,EN}=={0,0} and event rotation
|
||||||
|
* means we don't have a fixed event<->counter relationship regardless.
|
||||||
|
*/
|
||||||
ARMV8_EVENT_ATTR(l1i_cache_refill, ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL),
|
ARMV8_EVENT_ATTR(l1i_cache_refill, ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL),
|
||||||
ARMV8_EVENT_ATTR(l1i_tlb_refill, ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL),
|
ARMV8_EVENT_ATTR(l1i_tlb_refill, ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL),
|
||||||
ARMV8_EVENT_ATTR(l1d_cache_refill, ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL),
|
ARMV8_EVENT_ATTR(l1d_cache_refill, ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL),
|
||||||
|
@ -7,5 +7,5 @@ static inline bool arch_irq_work_has_interrupt(void)
|
|||||||
{
|
{
|
||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
extern void arch_irq_work_raise(void);
|
|
||||||
#endif /* __ASM_CSKY_IRQ_WORK_H */
|
#endif /* __ASM_CSKY_IRQ_WORK_H */
|
||||||
|
@ -473,7 +473,6 @@ asmlinkage void start_secondary(void)
|
|||||||
sync_counter();
|
sync_counter();
|
||||||
cpu = raw_smp_processor_id();
|
cpu = raw_smp_processor_id();
|
||||||
set_my_cpu_offset(per_cpu_offset(cpu));
|
set_my_cpu_offset(per_cpu_offset(cpu));
|
||||||
rcu_cpu_starting(cpu);
|
|
||||||
|
|
||||||
cpu_probe();
|
cpu_probe();
|
||||||
constant_clockevent_init();
|
constant_clockevent_init();
|
||||||
|
@ -271,12 +271,16 @@ void setup_tlb_handler(int cpu)
|
|||||||
set_handler(EXCCODE_TLBNR * VECSIZE, handle_tlb_protect, VECSIZE);
|
set_handler(EXCCODE_TLBNR * VECSIZE, handle_tlb_protect, VECSIZE);
|
||||||
set_handler(EXCCODE_TLBNX * VECSIZE, handle_tlb_protect, VECSIZE);
|
set_handler(EXCCODE_TLBNX * VECSIZE, handle_tlb_protect, VECSIZE);
|
||||||
set_handler(EXCCODE_TLBPE * VECSIZE, handle_tlb_protect, VECSIZE);
|
set_handler(EXCCODE_TLBPE * VECSIZE, handle_tlb_protect, VECSIZE);
|
||||||
}
|
} else {
|
||||||
|
int vec_sz __maybe_unused;
|
||||||
|
void *addr __maybe_unused;
|
||||||
|
struct page *page __maybe_unused;
|
||||||
|
|
||||||
|
/* Avoid lockdep warning */
|
||||||
|
rcu_cpu_starting(cpu);
|
||||||
|
|
||||||
#ifdef CONFIG_NUMA
|
#ifdef CONFIG_NUMA
|
||||||
else {
|
vec_sz = sizeof(exception_handlers);
|
||||||
void *addr;
|
|
||||||
struct page *page;
|
|
||||||
const int vec_sz = sizeof(exception_handlers);
|
|
||||||
|
|
||||||
if (pcpu_handlers[cpu])
|
if (pcpu_handlers[cpu])
|
||||||
return;
|
return;
|
||||||
@ -292,8 +296,8 @@ void setup_tlb_handler(int cpu)
|
|||||||
csr_write64(pcpu_handlers[cpu], LOONGARCH_CSR_EENTRY);
|
csr_write64(pcpu_handlers[cpu], LOONGARCH_CSR_EENTRY);
|
||||||
csr_write64(pcpu_handlers[cpu], LOONGARCH_CSR_MERRENTRY);
|
csr_write64(pcpu_handlers[cpu], LOONGARCH_CSR_MERRENTRY);
|
||||||
csr_write64(pcpu_handlers[cpu] + 80*VECSIZE, LOONGARCH_CSR_TLBRENTRY);
|
csr_write64(pcpu_handlers[cpu] + 80*VECSIZE, LOONGARCH_CSR_TLBRENTRY);
|
||||||
}
|
|
||||||
#endif
|
#endif
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
void tlb_init(int cpu)
|
void tlb_init(int cpu)
|
||||||
|
@ -6,6 +6,5 @@ static inline bool arch_irq_work_has_interrupt(void)
|
|||||||
{
|
{
|
||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
extern void arch_irq_work_raise(void);
|
|
||||||
|
|
||||||
#endif /* _ASM_POWERPC_IRQ_WORK_H */
|
#endif /* _ASM_POWERPC_IRQ_WORK_H */
|
||||||
|
@ -417,5 +417,9 @@ extern void *abatron_pteptrs[2];
|
|||||||
#include <asm/nohash/mmu.h>
|
#include <asm/nohash/mmu.h>
|
||||||
#endif
|
#endif
|
||||||
|
|
||||||
|
#if defined(CONFIG_FA_DUMP) || defined(CONFIG_PRESERVE_FA_DUMP)
|
||||||
|
#define __HAVE_ARCH_RESERVED_KERNEL_PAGES
|
||||||
|
#endif
|
||||||
|
|
||||||
#endif /* __KERNEL__ */
|
#endif /* __KERNEL__ */
|
||||||
#endif /* _ASM_POWERPC_MMU_H_ */
|
#endif /* _ASM_POWERPC_MMU_H_ */
|
||||||
|
@ -42,14 +42,6 @@ u64 memory_hotplug_max(void);
|
|||||||
#else
|
#else
|
||||||
#define memory_hotplug_max() memblock_end_of_DRAM()
|
#define memory_hotplug_max() memblock_end_of_DRAM()
|
||||||
#endif /* CONFIG_NUMA */
|
#endif /* CONFIG_NUMA */
|
||||||
#ifdef CONFIG_FA_DUMP
|
|
||||||
#define __HAVE_ARCH_RESERVED_KERNEL_PAGES
|
|
||||||
#endif
|
|
||||||
|
|
||||||
#ifdef CONFIG_MEMORY_HOTPLUG
|
|
||||||
extern int create_section_mapping(unsigned long start, unsigned long end,
|
|
||||||
int nid, pgprot_t prot);
|
|
||||||
#endif
|
|
||||||
|
|
||||||
#endif /* __KERNEL__ */
|
#endif /* __KERNEL__ */
|
||||||
#endif /* _ASM_MMZONE_H_ */
|
#endif /* _ASM_MMZONE_H_ */
|
||||||
|
@ -1439,10 +1439,12 @@ static int emulate_instruction(struct pt_regs *regs)
|
|||||||
return -EINVAL;
|
return -EINVAL;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
#ifdef CONFIG_GENERIC_BUG
|
||||||
int is_valid_bugaddr(unsigned long addr)
|
int is_valid_bugaddr(unsigned long addr)
|
||||||
{
|
{
|
||||||
return is_kernel_addr(addr);
|
return is_kernel_addr(addr);
|
||||||
}
|
}
|
||||||
|
#endif
|
||||||
|
|
||||||
#ifdef CONFIG_MATH_EMULATION
|
#ifdef CONFIG_MATH_EMULATION
|
||||||
static int emulate_math(struct pt_regs *regs)
|
static int emulate_math(struct pt_regs *regs)
|
||||||
|
@ -586,6 +586,8 @@ static int do_fp_load(struct instruction_op *op, unsigned long ea,
|
|||||||
} u;
|
} u;
|
||||||
|
|
||||||
nb = GETSIZE(op->type);
|
nb = GETSIZE(op->type);
|
||||||
|
if (nb > sizeof(u))
|
||||||
|
return -EINVAL;
|
||||||
if (!address_ok(regs, ea, nb))
|
if (!address_ok(regs, ea, nb))
|
||||||
return -EFAULT;
|
return -EFAULT;
|
||||||
rn = op->reg;
|
rn = op->reg;
|
||||||
@ -636,6 +638,8 @@ static int do_fp_store(struct instruction_op *op, unsigned long ea,
|
|||||||
} u;
|
} u;
|
||||||
|
|
||||||
nb = GETSIZE(op->type);
|
nb = GETSIZE(op->type);
|
||||||
|
if (nb > sizeof(u))
|
||||||
|
return -EINVAL;
|
||||||
if (!address_ok(regs, ea, nb))
|
if (!address_ok(regs, ea, nb))
|
||||||
return -EFAULT;
|
return -EFAULT;
|
||||||
rn = op->reg;
|
rn = op->reg;
|
||||||
@ -680,6 +684,9 @@ static nokprobe_inline int do_vec_load(int rn, unsigned long ea,
|
|||||||
u8 b[sizeof(__vector128)];
|
u8 b[sizeof(__vector128)];
|
||||||
} u = {};
|
} u = {};
|
||||||
|
|
||||||
|
if (size > sizeof(u))
|
||||||
|
return -EINVAL;
|
||||||
|
|
||||||
if (!address_ok(regs, ea & ~0xfUL, 16))
|
if (!address_ok(regs, ea & ~0xfUL, 16))
|
||||||
return -EFAULT;
|
return -EFAULT;
|
||||||
/* align to multiple of size */
|
/* align to multiple of size */
|
||||||
@ -707,6 +714,9 @@ static nokprobe_inline int do_vec_store(int rn, unsigned long ea,
|
|||||||
u8 b[sizeof(__vector128)];
|
u8 b[sizeof(__vector128)];
|
||||||
} u;
|
} u;
|
||||||
|
|
||||||
|
if (size > sizeof(u))
|
||||||
|
return -EINVAL;
|
||||||
|
|
||||||
if (!address_ok(regs, ea & ~0xfUL, 16))
|
if (!address_ok(regs, ea & ~0xfUL, 16))
|
||||||
return -EFAULT;
|
return -EFAULT;
|
||||||
/* align to multiple of size */
|
/* align to multiple of size */
|
||||||
|
@ -463,6 +463,7 @@ void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
|
|||||||
set_pte_at(vma->vm_mm, addr, ptep, pte);
|
set_pte_at(vma->vm_mm, addr, ptep, pte);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
|
||||||
/*
|
/*
|
||||||
* For hash translation mode, we use the deposited table to store hash slot
|
* For hash translation mode, we use the deposited table to store hash slot
|
||||||
* information and they are stored at PTRS_PER_PMD offset from related pmd
|
* information and they are stored at PTRS_PER_PMD offset from related pmd
|
||||||
@ -484,6 +485,7 @@ int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
|
|||||||
|
|
||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
|
#endif
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Does the CPU support tlbie?
|
* Does the CPU support tlbie?
|
||||||
|
@ -126,7 +126,7 @@ void pgtable_cache_add(unsigned int shift)
|
|||||||
* as to leave enough 0 bits in the address to contain it. */
|
* as to leave enough 0 bits in the address to contain it. */
|
||||||
unsigned long minalign = max(MAX_PGTABLE_INDEX_SIZE + 1,
|
unsigned long minalign = max(MAX_PGTABLE_INDEX_SIZE + 1,
|
||||||
HUGEPD_SHIFT_MASK + 1);
|
HUGEPD_SHIFT_MASK + 1);
|
||||||
struct kmem_cache *new;
|
struct kmem_cache *new = NULL;
|
||||||
|
|
||||||
/* It would be nice if this was a BUILD_BUG_ON(), but at the
|
/* It would be nice if this was a BUILD_BUG_ON(), but at the
|
||||||
* moment, gcc doesn't seem to recognize is_power_of_2 as a
|
* moment, gcc doesn't seem to recognize is_power_of_2 as a
|
||||||
@ -139,7 +139,8 @@ void pgtable_cache_add(unsigned int shift)
|
|||||||
|
|
||||||
align = max_t(unsigned long, align, minalign);
|
align = max_t(unsigned long, align, minalign);
|
||||||
name = kasprintf(GFP_KERNEL, "pgtable-2^%d", shift);
|
name = kasprintf(GFP_KERNEL, "pgtable-2^%d", shift);
|
||||||
new = kmem_cache_create(name, table_size, align, 0, ctor(shift));
|
if (name)
|
||||||
|
new = kmem_cache_create(name, table_size, align, 0, ctor(shift));
|
||||||
if (!new)
|
if (!new)
|
||||||
panic("Could not allocate pgtable cache for order %d", shift);
|
panic("Could not allocate pgtable cache for order %d", shift);
|
||||||
|
|
||||||
|
@ -179,3 +179,8 @@ static inline bool debug_pagealloc_enabled_or_kfence(void)
|
|||||||
{
|
{
|
||||||
return IS_ENABLED(CONFIG_KFENCE) || debug_pagealloc_enabled();
|
return IS_ENABLED(CONFIG_KFENCE) || debug_pagealloc_enabled();
|
||||||
}
|
}
|
||||||
|
|
||||||
|
#ifdef CONFIG_MEMORY_HOTPLUG
|
||||||
|
int create_section_mapping(unsigned long start, unsigned long end,
|
||||||
|
int nid, pgprot_t prot);
|
||||||
|
#endif
|
||||||
|
@ -6,5 +6,5 @@ static inline bool arch_irq_work_has_interrupt(void)
|
|||||||
{
|
{
|
||||||
return IS_ENABLED(CONFIG_SMP);
|
return IS_ENABLED(CONFIG_SMP);
|
||||||
}
|
}
|
||||||
extern void arch_irq_work_raise(void);
|
|
||||||
#endif /* _ASM_RISCV_IRQ_WORK_H */
|
#endif /* _ASM_RISCV_IRQ_WORK_H */
|
||||||
|
@ -7,6 +7,4 @@ static inline bool arch_irq_work_has_interrupt(void)
|
|||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
|
|
||||||
void arch_irq_work_raise(void);
|
|
||||||
|
|
||||||
#endif /* _ASM_S390_IRQ_WORK_H */
|
#endif /* _ASM_S390_IRQ_WORK_H */
|
||||||
|
@ -385,6 +385,7 @@ static int __poke_user(struct task_struct *child, addr_t addr, addr_t data)
|
|||||||
/*
|
/*
|
||||||
* floating point control reg. is in the thread structure
|
* floating point control reg. is in the thread structure
|
||||||
*/
|
*/
|
||||||
|
save_fpu_regs();
|
||||||
if ((unsigned int) data != 0 ||
|
if ((unsigned int) data != 0 ||
|
||||||
test_fp_ctl(data >> (BITS_PER_LONG - 32)))
|
test_fp_ctl(data >> (BITS_PER_LONG - 32)))
|
||||||
return -EINVAL;
|
return -EINVAL;
|
||||||
@ -741,6 +742,7 @@ static int __poke_user_compat(struct task_struct *child,
|
|||||||
/*
|
/*
|
||||||
* floating point control reg. is in the thread structure
|
* floating point control reg. is in the thread structure
|
||||||
*/
|
*/
|
||||||
|
save_fpu_regs();
|
||||||
if (test_fp_ctl(tmp))
|
if (test_fp_ctl(tmp))
|
||||||
return -EINVAL;
|
return -EINVAL;
|
||||||
child->thread.fpu.fpc = data;
|
child->thread.fpu.fpc = data;
|
||||||
@ -904,9 +906,7 @@ static int s390_fpregs_set(struct task_struct *target,
|
|||||||
int rc = 0;
|
int rc = 0;
|
||||||
freg_t fprs[__NUM_FPRS];
|
freg_t fprs[__NUM_FPRS];
|
||||||
|
|
||||||
if (target == current)
|
save_fpu_regs();
|
||||||
save_fpu_regs();
|
|
||||||
|
|
||||||
if (MACHINE_HAS_VX)
|
if (MACHINE_HAS_VX)
|
||||||
convert_vx_to_fp(fprs, target->thread.fpu.vxrs);
|
convert_vx_to_fp(fprs, target->thread.fpu.vxrs);
|
||||||
else
|
else
|
||||||
|
@ -4138,10 +4138,6 @@ int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
|
|||||||
|
|
||||||
vcpu_load(vcpu);
|
vcpu_load(vcpu);
|
||||||
|
|
||||||
if (test_fp_ctl(fpu->fpc)) {
|
|
||||||
ret = -EINVAL;
|
|
||||||
goto out;
|
|
||||||
}
|
|
||||||
vcpu->run->s.regs.fpc = fpu->fpc;
|
vcpu->run->s.regs.fpc = fpu->fpc;
|
||||||
if (MACHINE_HAS_VX)
|
if (MACHINE_HAS_VX)
|
||||||
convert_fp_to_vx((__vector128 *) vcpu->run->s.regs.vrs,
|
convert_fp_to_vx((__vector128 *) vcpu->run->s.regs.vrs,
|
||||||
@ -4149,7 +4145,6 @@ int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
|
|||||||
else
|
else
|
||||||
memcpy(vcpu->run->s.regs.fprs, &fpu->fprs, sizeof(fpu->fprs));
|
memcpy(vcpu->run->s.regs.fprs, &fpu->fprs, sizeof(fpu->fprs));
|
||||||
|
|
||||||
out:
|
|
||||||
vcpu_put(vcpu);
|
vcpu_put(vcpu);
|
||||||
return ret;
|
return ret;
|
||||||
}
|
}
|
||||||
|
@ -204,7 +204,7 @@ static int uml_net_close(struct net_device *dev)
|
|||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
static int uml_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
|
static netdev_tx_t uml_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
|
||||||
{
|
{
|
||||||
struct uml_net_private *lp = netdev_priv(dev);
|
struct uml_net_private *lp = netdev_priv(dev);
|
||||||
unsigned long flags;
|
unsigned long flags;
|
||||||
|
@ -50,7 +50,7 @@ extern void do_uml_exitcalls(void);
|
|||||||
* Are we disallowed to sleep? Used to choose between GFP_KERNEL and
|
* Are we disallowed to sleep? Used to choose between GFP_KERNEL and
|
||||||
* GFP_ATOMIC.
|
* GFP_ATOMIC.
|
||||||
*/
|
*/
|
||||||
extern int __cant_sleep(void);
|
extern int __uml_cant_sleep(void);
|
||||||
extern int get_current_pid(void);
|
extern int get_current_pid(void);
|
||||||
extern int copy_from_user_proc(void *to, void *from, int size);
|
extern int copy_from_user_proc(void *to, void *from, int size);
|
||||||
extern char *uml_strdup(const char *string);
|
extern char *uml_strdup(const char *string);
|
||||||
|
@ -220,7 +220,7 @@ void arch_cpu_idle(void)
|
|||||||
raw_local_irq_enable();
|
raw_local_irq_enable();
|
||||||
}
|
}
|
||||||
|
|
||||||
int __cant_sleep(void) {
|
int __uml_cant_sleep(void) {
|
||||||
return in_atomic() || irqs_disabled() || in_interrupt();
|
return in_atomic() || irqs_disabled() || in_interrupt();
|
||||||
/* Is in_interrupt() really needed? */
|
/* Is in_interrupt() really needed? */
|
||||||
}
|
}
|
||||||
|
@ -432,9 +432,29 @@ static void time_travel_update_time(unsigned long long next, bool idle)
|
|||||||
time_travel_del_event(&ne);
|
time_travel_del_event(&ne);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static void time_travel_update_time_rel(unsigned long long offs)
|
||||||
|
{
|
||||||
|
unsigned long flags;
|
||||||
|
|
||||||
|
/*
|
||||||
|
* Disable interrupts before calculating the new time so
|
||||||
|
* that a real timer interrupt (signal) can't happen at
|
||||||
|
* a bad time e.g. after we read time_travel_time but
|
||||||
|
* before we've completed updating the time.
|
||||||
|
*/
|
||||||
|
local_irq_save(flags);
|
||||||
|
time_travel_update_time(time_travel_time + offs, false);
|
||||||
|
local_irq_restore(flags);
|
||||||
|
}
|
||||||
|
|
||||||
void time_travel_ndelay(unsigned long nsec)
|
void time_travel_ndelay(unsigned long nsec)
|
||||||
{
|
{
|
||||||
time_travel_update_time(time_travel_time + nsec, false);
|
/*
|
||||||
|
* Not strictly needed to use _rel() version since this is
|
||||||
|
* only used in INFCPU/EXT modes, but it doesn't hurt and
|
||||||
|
* is more readable too.
|
||||||
|
*/
|
||||||
|
time_travel_update_time_rel(nsec);
|
||||||
}
|
}
|
||||||
EXPORT_SYMBOL(time_travel_ndelay);
|
EXPORT_SYMBOL(time_travel_ndelay);
|
||||||
|
|
||||||
@ -568,7 +588,11 @@ static void time_travel_set_start(void)
|
|||||||
#define time_travel_time 0
|
#define time_travel_time 0
|
||||||
#define time_travel_ext_waiting 0
|
#define time_travel_ext_waiting 0
|
||||||
|
|
||||||
static inline void time_travel_update_time(unsigned long long ns, bool retearly)
|
static inline void time_travel_update_time(unsigned long long ns, bool idle)
|
||||||
|
{
|
||||||
|
}
|
||||||
|
|
||||||
|
static inline void time_travel_update_time_rel(unsigned long long offs)
|
||||||
{
|
{
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -720,9 +744,7 @@ static u64 timer_read(struct clocksource *cs)
|
|||||||
*/
|
*/
|
||||||
if (!irqs_disabled() && !in_interrupt() && !in_softirq() &&
|
if (!irqs_disabled() && !in_interrupt() && !in_softirq() &&
|
||||||
!time_travel_ext_waiting)
|
!time_travel_ext_waiting)
|
||||||
time_travel_update_time(time_travel_time +
|
time_travel_update_time_rel(TIMER_MULTIPLIER);
|
||||||
TIMER_MULTIPLIER,
|
|
||||||
false);
|
|
||||||
return time_travel_time / TIMER_MULTIPLIER;
|
return time_travel_time / TIMER_MULTIPLIER;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -46,7 +46,7 @@ int run_helper(void (*pre_exec)(void *), void *pre_data, char **argv)
|
|||||||
unsigned long stack, sp;
|
unsigned long stack, sp;
|
||||||
int pid, fds[2], ret, n;
|
int pid, fds[2], ret, n;
|
||||||
|
|
||||||
stack = alloc_stack(0, __cant_sleep());
|
stack = alloc_stack(0, __uml_cant_sleep());
|
||||||
if (stack == 0)
|
if (stack == 0)
|
||||||
return -ENOMEM;
|
return -ENOMEM;
|
||||||
|
|
||||||
@ -70,7 +70,7 @@ int run_helper(void (*pre_exec)(void *), void *pre_data, char **argv)
|
|||||||
data.pre_data = pre_data;
|
data.pre_data = pre_data;
|
||||||
data.argv = argv;
|
data.argv = argv;
|
||||||
data.fd = fds[1];
|
data.fd = fds[1];
|
||||||
data.buf = __cant_sleep() ? uml_kmalloc(PATH_MAX, UM_GFP_ATOMIC) :
|
data.buf = __uml_cant_sleep() ? uml_kmalloc(PATH_MAX, UM_GFP_ATOMIC) :
|
||||||
uml_kmalloc(PATH_MAX, UM_GFP_KERNEL);
|
uml_kmalloc(PATH_MAX, UM_GFP_KERNEL);
|
||||||
pid = clone(helper_child, (void *) sp, CLONE_VM, &data);
|
pid = clone(helper_child, (void *) sp, CLONE_VM, &data);
|
||||||
if (pid < 0) {
|
if (pid < 0) {
|
||||||
@ -121,7 +121,7 @@ int run_helper_thread(int (*proc)(void *), void *arg, unsigned int flags,
|
|||||||
unsigned long stack, sp;
|
unsigned long stack, sp;
|
||||||
int pid, status, err;
|
int pid, status, err;
|
||||||
|
|
||||||
stack = alloc_stack(0, __cant_sleep());
|
stack = alloc_stack(0, __uml_cant_sleep());
|
||||||
if (stack == 0)
|
if (stack == 0)
|
||||||
return -ENOMEM;
|
return -ENOMEM;
|
||||||
|
|
||||||
|
@@ -173,23 +173,38 @@ __uml_setup("quiet", quiet_cmd_param,
 "quiet\n"
 " Turns off information messages during boot.\n\n");
 
+/*
+ * The os_info/os_warn functions will be called by helper threads. These
+ * have a very limited stack size and using the libc formatting functions
+ * may overflow the stack.
+ * So pull in the kernel vscnprintf and use that instead with a fixed
+ * on-stack buffer.
+ */
+int vscnprintf(char *buf, size_t size, const char *fmt, va_list args);
+
 void os_info(const char *fmt, ...)
 {
+    char buf[256];
    va_list list;
+    int len;
 
    if (quiet_info)
        return;
 
    va_start(list, fmt);
-    vfprintf(stderr, fmt, list);
+    len = vscnprintf(buf, sizeof(buf), fmt, list);
+    fwrite(buf, len, 1, stderr);
    va_end(list);
 }
 
 void os_warn(const char *fmt, ...)
 {
+    char buf[256];
    va_list list;
+    int len;
 
    va_start(list, fmt);
-    vfprintf(stderr, fmt, list);
+    len = vscnprintf(buf, sizeof(buf), fmt, list);
+    fwrite(buf, len, 1, stderr);
    va_end(list);
 }
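As an aside, the fixed on-stack buffer idea above can be sketched in plain user-space C. This is only an illustration of the pattern, not the patch itself: the helper name is made up, and libc's vsnprintf() stands in for the kernel's vscnprintf(), so the length has to be clamped by hand.

/* Build: cc -std=c99 -o log_demo log_demo.c */
#include <stdarg.h>
#include <stdio.h>

/* Hypothetical helper: format into a fixed on-stack buffer instead of
 * letting the formatting path consume an unbounded amount of stack. */
static void log_to_stderr(const char *fmt, ...)
{
    char buf[256];
    va_list ap;
    int len;

    va_start(ap, fmt);
    len = vsnprintf(buf, sizeof(buf), fmt, ap);
    va_end(ap);

    if (len < 0)
        return;
    /* vsnprintf() reports the would-be length; clamp to what fits. */
    if ((size_t)len >= sizeof(buf))
        len = sizeof(buf) - 1;
    fwrite(buf, 1, (size_t)len, stderr);
}

int main(void)
{
    log_to_stderr("booted in %d ms\n", 42);
    return 0;
}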
@@ -393,3 +393,8 @@ void do_boot_page_fault(struct pt_regs *regs, unsigned long error_code)
     */
    kernel_add_identity_map(address, end);
 }
+
+void do_boot_nmi_trap(struct pt_regs *regs, unsigned long error_code)
+{
+    /* Empty handler to ignore NMI during early boot */
+}
@@ -61,6 +61,7 @@ void load_stage2_idt(void)
    boot_idt_desc.address = (unsigned long)boot_idt;
 
    set_idt_entry(X86_TRAP_PF, boot_page_fault);
+    set_idt_entry(X86_TRAP_NMI, boot_nmi_trap);
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
    /*
@@ -70,6 +70,7 @@ SYM_FUNC_END(\name)
    .code64
 
 EXCEPTION_HANDLER boot_page_fault do_boot_page_fault error_code=1
+EXCEPTION_HANDLER boot_nmi_trap do_boot_nmi_trap error_code=0
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 EXCEPTION_HANDLER boot_stage1_vc do_vc_no_ghcb error_code=1
@@ -190,6 +190,7 @@ static inline void cleanup_exception_handling(void) { }
 
 /* IDT Entry Points */
 void boot_page_fault(void);
+void boot_nmi_trap(void);
 void boot_stage1_vc(void);
 void boot_stage2_vc(void);
 
@@ -9,7 +9,6 @@ static inline bool arch_irq_work_has_interrupt(void)
 {
    return boot_cpu_has(X86_FEATURE_APIC);
 }
-extern void arch_irq_work_raise(void);
 #else
 static inline bool arch_irq_work_has_interrupt(void)
 {
@@ -64,6 +64,7 @@ static inline bool kmsan_virt_addr_valid(void *addr)
 {
    unsigned long x = (unsigned long)addr;
    unsigned long y = x - __START_KERNEL_map;
+    bool ret;
 
    /* use the carry flag to determine if x was < __START_KERNEL_map */
    if (unlikely(x > y)) {
@@ -79,7 +80,21 @@ static inline bool kmsan_virt_addr_valid(void *addr)
        return false;
    }
 
-    return pfn_valid(x >> PAGE_SHIFT);
+    /*
+     * pfn_valid() relies on RCU, and may call into the scheduler on exiting
+     * the critical section. However, this would result in recursion with
+     * KMSAN. Therefore, disable preemption here, and re-enable preemption
+     * below while suppressing reschedules to avoid recursion.
+     *
+     * Note, this sacrifices occasionally breaking scheduling guarantees.
+     * Although, a kernel compiled with KMSAN has already given up on any
+     * performance guarantees due to being heavily instrumented.
+     */
+    preempt_disable();
+    ret = pfn_valid(x >> PAGE_SHIFT);
+    preempt_enable_no_resched();
+
+    return ret;
 }
 
 #endif /* !MODULE */
@@ -44,6 +44,7 @@
 #include <linux/sync_core.h>
 #include <linux/task_work.h>
 #include <linux/hardirq.h>
+#include <linux/kexec.h>
 
 #include <asm/intel-family.h>
 #include <asm/processor.h>
@@ -239,6 +240,7 @@ static noinstr void mce_panic(const char *msg, struct mce *final, char *exp)
    struct llist_node *pending;
    struct mce_evt_llist *l;
    int apei_err = 0;
+    struct page *p;
 
    /*
     * Allow instrumentation around external facilities usage. Not that it
@@ -292,6 +294,20 @@ static noinstr void mce_panic(const char *msg, struct mce *final, char *exp)
    if (!fake_panic) {
        if (panic_timeout == 0)
            panic_timeout = mca_cfg.panic_timeout;
+
+        /*
+         * Kdump skips the poisoned page in order to avoid
+         * touching the error bits again. Poison the page even
+         * if the error is fatal and the machine is about to
+         * panic.
+         */
+        if (kexec_crash_loaded()) {
+            if (final && (final->status & MCI_STATUS_ADDRV)) {
+                p = pfn_to_online_page(final->addr >> PAGE_SHIFT);
+                if (p)
+                    SetPageHWPoison(p);
+            }
+        }
        panic(msg);
    } else
        pr_emerg(HW_ERR "Fake kernel panic: %s\n", msg);
@@ -930,7 +930,7 @@ static bool bio_try_merge_hw_seg(struct request_queue *q, struct bio *bio,
 
    if ((addr1 | mask) != (addr2 | mask))
        return false;
-    if (bv->bv_len + len > queue_max_segment_size(q))
+    if (len > queue_max_segment_size(q) - bv->bv_len)
        return false;
    return __bio_try_merge_page(bio, page, len, offset, same_page);
 }
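The rewritten comparison above appears to be about avoiding unsigned wrap-around: adding two lengths can overflow, while comparing against the remaining headroom cannot. A small user-space sketch of the same idea (all names hypothetical, not from this patch):

/* Build: cc -std=c99 -o limit_demo limit_demo.c */
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical check: would adding "len" to an existing segment of size
 * "cur" exceed "max"?  Written so the arithmetic cannot wrap, assuming the
 * caller guarantees cur <= max. */
static bool exceeds_limit(unsigned int cur, unsigned int len, unsigned int max)
{
    return len > max - cur;
}

int main(void)
{
    unsigned int max = 65536;

    /* "cur + len > max" wraps to a small value here and wrongly passes. */
    printf("wrapping form rejects oversized len: %d\n",
           4096u + UINT_MAX > max);
    printf("safe form rejects oversized len:     %d\n",
           exceeds_limit(4096u, UINT_MAX, max));
    return 0;
}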
@@ -1857,6 +1857,22 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
    wait->flags &= ~WQ_FLAG_EXCLUSIVE;
    __add_wait_queue(wq, wait);
 
+    /*
+     * Add one explicit barrier since blk_mq_get_driver_tag() may
+     * not imply barrier in case of failure.
+     *
+     * Order adding us to wait queue and allocating driver tag.
+     *
+     * The pair is the one implied in sbitmap_queue_wake_up() which
+     * orders clearing sbitmap tag bits and waitqueue_active() in
+     * __sbitmap_queue_wake_up(), since waitqueue_active() is lockless
+     *
+     * Otherwise, re-order of adding wait queue and getting driver tag
+     * may cause __sbitmap_queue_wake_up() to wake up nothing because
+     * the waitqueue_active() may not observe us in wait queue.
+     */
+    smp_mb();
+
    /*
     * It's possible that a tag was freed in the window between the
     * allocation failure and adding the hardware queue to the wait
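The barrier pairing described in the comment above ("publish myself to the wait queue, then re-check the resource" vs. "release the resource, then check for waiters") can be sketched with C11 atomics. This is a minimal, hedged illustration of the ordering idea only; the flag names are invented and it is a single-threaded smoke test, not a model of blk-mq:

/* Build: cc -std=c11 -o fence_demo fence_demo.c */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool on_wait_queue;   /* "I am on the wait queue" */
static atomic_bool tag_free;        /* "a tag is free" */

/* Waiter side: publish presence, fence, then re-check the resource. */
static bool waiter_should_sleep(void)
{
    atomic_store_explicit(&on_wait_queue, true, memory_order_relaxed);
    /* Pairs with the fence in waker_must_wake(); without both fences the
     * two sides could each miss the other's store. */
    atomic_thread_fence(memory_order_seq_cst);
    return !atomic_load_explicit(&tag_free, memory_order_relaxed);
}

/* Waker side: release the resource, fence, then check for waiters. */
static bool waker_must_wake(void)
{
    atomic_store_explicit(&tag_free, true, memory_order_relaxed);
    atomic_thread_fence(memory_order_seq_cst);
    return atomic_load_explicit(&on_wait_queue, memory_order_relaxed);
}

int main(void)
{
    printf("waiter sleeps first: %d\n", waiter_should_sleep());
    printf("waker sees waiter:   %d\n", waker_must_wake());
    return 0;
}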
@@ -308,9 +308,10 @@ static int __init extlog_init(void)
 static void __exit extlog_exit(void)
 {
    mce_unregister_decode_chain(&extlog_mce_dec);
-    ((struct extlog_l1_head *)extlog_l1_addr)->flags &= ~FLAG_OS_OPTIN;
-    if (extlog_l1_addr)
+    if (extlog_l1_addr) {
+        ((struct extlog_l1_head *)extlog_l1_addr)->flags &= ~FLAG_OS_OPTIN;
        acpi_os_unmap_iomem(extlog_l1_addr, l1_size);
+    }
    if (elog_addr)
        acpi_os_unmap_iomem(elog_addr, elog_size);
    release_mem_region(elog_base, elog_size);
@@ -513,6 +513,15 @@ static const struct dmi_system_id video_dmi_table[] = {
        DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 3350"),
        },
    },
+    {
+     .callback = video_set_report_key_events,
+     .driver_data = (void *)((uintptr_t)REPORT_BRIGHTNESS_KEY_EVENTS),
+     .ident = "COLORFUL X15 AT 23",
+     .matches = {
+        DMI_MATCH(DMI_SYS_VENDOR, "COLORFUL"),
+        DMI_MATCH(DMI_PRODUCT_NAME, "X15 AT 23"),
+        },
+    },
    /*
     * Some machines change the brightness themselves when a brightness
     * hotkey gets pressed, despite us telling them not to. In this case
@@ -99,6 +99,20 @@ static inline bool is_hest_type_generic_v2(struct ghes *ghes)
    return ghes->generic->header.type == ACPI_HEST_TYPE_GENERIC_ERROR_V2;
 }
 
+/*
+ * A platform may describe one error source for the handling of synchronous
+ * errors (e.g. MCE or SEA), or for handling asynchronous errors (e.g. SCI
+ * or External Interrupt). On x86, the HEST notifications are always
+ * asynchronous, so only SEA on ARM is delivered as a synchronous
+ * notification.
+ */
+static inline bool is_hest_sync_notify(struct ghes *ghes)
+{
+    u8 notify_type = ghes->generic->notify.type;
+
+    return notify_type == ACPI_HEST_NOTIFY_SEA;
+}
+
 /*
  * This driver isn't really modular, however for the time being,
  * continuing to use module_param is the easiest way to remain
@@ -461,7 +475,7 @@ static bool ghes_do_memory_failure(u64 physical_addr, int flags)
 }
 
 static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
-                       int sev)
+                       int sev, bool sync)
 {
    int flags = -1;
    int sec_sev = ghes_severity(gdata->error_severity);
@@ -475,7 +489,7 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
        (gdata->flags & CPER_SEC_ERROR_THRESHOLD_EXCEEDED))
        flags = MF_SOFT_OFFLINE;
    if (sev == GHES_SEV_RECOVERABLE && sec_sev == GHES_SEV_RECOVERABLE)
-        flags = 0;
+        flags = sync ? MF_ACTION_REQUIRED : 0;
 
    if (flags != -1)
        return ghes_do_memory_failure(mem_err->physical_addr, flags);
@@ -483,9 +497,11 @@ static bool ghes_handle_memory_failure(struct acpi_hest_generic_data *gdata,
    return false;
 }
 
-static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int sev)
+static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata,
+                     int sev, bool sync)
 {
    struct cper_sec_proc_arm *err = acpi_hest_get_payload(gdata);
+    int flags = sync ? MF_ACTION_REQUIRED : 0;
    bool queued = false;
    int sec_sev, i;
    char *p;
@@ -510,7 +526,7 @@ static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata, int sev)
         * and don't filter out 'corrected' error here.
         */
        if (is_cache && has_pa) {
-            queued = ghes_do_memory_failure(err_info->physical_fault_addr, 0);
+            queued = ghes_do_memory_failure(err_info->physical_fault_addr, flags);
            p += err_info->length;
            continue;
        }
@@ -631,6 +647,7 @@ static bool ghes_do_proc(struct ghes *ghes,
    const guid_t *fru_id = &guid_null;
    char *fru_text = "";
    bool queued = false;
+    bool sync = is_hest_sync_notify(ghes);
 
    sev = ghes_severity(estatus->error_severity);
    apei_estatus_for_each_section(estatus, gdata) {
@@ -648,13 +665,13 @@ static bool ghes_do_proc(struct ghes *ghes,
            ghes_edac_report_mem_error(sev, mem_err);
 
            arch_apei_report_mem_error(sev, mem_err);
-            queued = ghes_handle_memory_failure(gdata, sev);
+            queued = ghes_handle_memory_failure(gdata, sev, sync);
        }
        else if (guid_equal(sec_type, &CPER_SEC_PCIE)) {
            ghes_handle_aer(gdata);
        }
        else if (guid_equal(sec_type, &CPER_SEC_PROC_ARM)) {
-            queued = ghes_handle_arm_hw_error(gdata, sev);
+            queued = ghes_handle_arm_hw_error(gdata, sev, sync);
        } else {
            void *err = acpi_hest_get_payload(gdata);
 
@@ -183,7 +183,7 @@ static int __init slit_valid(struct acpi_table_slit *slit)
    int i, j;
    int d = slit->locality_count;
    for (i = 0; i < d; i++) {
        for (j = 0; j < d; j++) {
            u8 val = slit->entry[d*i + j];
            if (i == j) {
                if (val != LOCAL_DISTANCE)
@@ -532,7 +532,7 @@ int __init acpi_numa_init(void)
     */
 
    /* fake_pxm is the next unused PXM value after SRAT parsing */
-    for (i = 0, fake_pxm = -1; i < MAX_NUMNODES - 1; i++) {
+    for (i = 0, fake_pxm = -1; i < MAX_NUMNODES; i++) {
        if (node_to_pxm_map[i] > fake_pxm)
            fake_pxm = node_to_pxm_map[i];
    }
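The one-character change above fixes an off-by-one: scanning up to MAX_NUMNODES - 1 skips the last slot, so the real maximum PXM can be missed. A self-contained sketch of the same scan (array contents and names are made up):

/* Build: cc -std=c99 -o maxscan_demo maxscan_demo.c */
#include <stdio.h>

#define NNODES 4    /* stand-in for MAX_NUMNODES */

int main(void)
{
    int node_to_pxm[NNODES] = { 0, 1, 2, 7 };   /* hypothetical mapping */
    int i, max_pxm = -1;

    /* A loop bounded by "i < NNODES - 1" would never inspect index 3 and
     * report 2; scanning the full array finds 7. */
    for (i = 0; i < NNODES; i++)
        if (node_to_pxm[i] > max_pxm)
            max_pxm = node_to_pxm[i];

    printf("largest PXM seen: %d, next unused: %d\n", max_pxm, max_pxm + 1);
    return 0;
}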
@@ -144,7 +144,7 @@ void __init early_map_cpu_to_node(unsigned int cpu, int nid)
 unsigned long __per_cpu_offset[NR_CPUS] __read_mostly;
 EXPORT_SYMBOL(__per_cpu_offset);
 
-static int __init early_cpu_to_node(int cpu)
+int __init early_cpu_to_node(int cpu)
 {
    return cpu_to_node_map[cpu];
 }
@@ -587,6 +587,7 @@ static char *rnbd_srv_get_full_path(struct rnbd_srv_session *srv_sess,
 {
    char *full_path;
    char *a, *b;
+    int len;
 
    full_path = kmalloc(PATH_MAX, GFP_KERNEL);
    if (!full_path)
@@ -598,19 +599,19 @@ static char *rnbd_srv_get_full_path(struct rnbd_srv_session *srv_sess,
     */
    a = strnstr(dev_search_path, "%SESSNAME%", sizeof(dev_search_path));
    if (a) {
-        int len = a - dev_search_path;
+        len = a - dev_search_path;
 
        len = snprintf(full_path, PATH_MAX, "%.*s/%s/%s", len,
                   dev_search_path, srv_sess->sessname, dev_name);
-        if (len >= PATH_MAX) {
-            pr_err("Too long path: %s, %s, %s\n",
-                   dev_search_path, srv_sess->sessname, dev_name);
-            kfree(full_path);
-            return ERR_PTR(-EINVAL);
-        }
    } else {
-        snprintf(full_path, PATH_MAX, "%s/%s",
-             dev_search_path, dev_name);
+        len = snprintf(full_path, PATH_MAX, "%s/%s",
+               dev_search_path, dev_name);
+    }
+    if (len >= PATH_MAX) {
+        pr_err("Too long path: %s, %s, %s\n",
+               dev_search_path, srv_sess->sessname, dev_name);
+        kfree(full_path);
+        return ERR_PTR(-EINVAL);
    }
 
    /* eliminitate duplicated slashes */
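The hunk above hinges on the snprintf() return-value convention: it reports the length the full string would have needed, so a value >= the buffer size means truncation, and now both branches are checked. A minimal user-space sketch of that check (PATH_MAX shrunk and all names invented for the demo):

/* Build: cc -std=c99 -o path_demo path_demo.c */
#include <stdio.h>
#include <stdlib.h>

#define PATH_MAX_DEMO 32    /* deliberately tiny to force truncation */

/* Hypothetical helper: build "<dir>/<name>" and reject it if it did not fit. */
static char *build_path(const char *dir, const char *name)
{
    char *p = malloc(PATH_MAX_DEMO);
    int len;

    if (!p)
        return NULL;
    len = snprintf(p, PATH_MAX_DEMO, "%s/%s", dir, name);
    if (len < 0 || len >= PATH_MAX_DEMO) {
        free(p);            /* same cleanup on every error path */
        return NULL;
    }
    return p;
}

int main(void)
{
    char *ok = build_path("/dev", "sda");
    char *bad = build_path("/a/very/long/device/search/path", "disk0");

    printf("ok:  %s\n", ok ? ok : "(rejected)");
    printf("bad: %s\n", bad ? bad : "(rejected)");
    free(ok);
    free(bad);
    return 0;
}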
@@ -1861,6 +1861,7 @@ static const struct qca_device_data qca_soc_data_wcn3998 = {
 static const struct qca_device_data qca_soc_data_qca6390 = {
    .soc_type = QCA_QCA6390,
    .num_vregs = 0,
+    .capabilities = QCA_CAP_WIDEBAND_SPEECH | QCA_CAP_VALID_LE_STATES,
 };
 
 static const struct qca_device_data qca_soc_data_wcn6750 = {
@@ -467,8 +467,10 @@ static void __init hi3620_mmc_clk_init(struct device_node *node)
        return;
 
    clk_data->clks = kcalloc(num, sizeof(*clk_data->clks), GFP_KERNEL);
-    if (!clk_data->clks)
+    if (!clk_data->clks) {
+        kfree(clk_data);
        return;
+    }
 
    for (i = 0; i < num; i++) {
        struct hisi_mmc_clock *mmc_clk = &hi3620_mmc_clks[i];
@@ -67,6 +67,22 @@ static const char * const lcd_pxl_sels[] = {
    "lcd_pxl_bypass_div_clk",
 };
 
+static const char *const lvds0_sels[] = {
+    "clk_dummy",
+    "clk_dummy",
+    "clk_dummy",
+    "clk_dummy",
+    "mipi0_lvds_bypass_clk",
+};
+
+static const char *const lvds1_sels[] = {
+    "clk_dummy",
+    "clk_dummy",
+    "clk_dummy",
+    "clk_dummy",
+    "mipi1_lvds_bypass_clk",
+};
+
 static const char * const mipi_sels[] = {
    "clk_dummy",
    "clk_dummy",
@@ -201,9 +217,9 @@ static int imx8qxp_clk_probe(struct platform_device *pdev)
    /* MIPI-LVDS SS */
    imx_clk_scu("mipi0_bypass_clk", IMX_SC_R_MIPI_0, IMX_SC_PM_CLK_BYPASS);
    imx_clk_scu("mipi0_pixel_clk", IMX_SC_R_MIPI_0, IMX_SC_PM_CLK_PER);
-    imx_clk_scu("mipi0_lvds_pixel_clk", IMX_SC_R_LVDS_0, IMX_SC_PM_CLK_MISC2);
    imx_clk_scu("mipi0_lvds_bypass_clk", IMX_SC_R_LVDS_0, IMX_SC_PM_CLK_BYPASS);
-    imx_clk_scu("mipi0_lvds_phy_clk", IMX_SC_R_LVDS_0, IMX_SC_PM_CLK_MISC3);
+    imx_clk_scu2("mipi0_lvds_pixel_clk", lvds0_sels, ARRAY_SIZE(lvds0_sels), IMX_SC_R_LVDS_0, IMX_SC_PM_CLK_MISC2);
+    imx_clk_scu2("mipi0_lvds_phy_clk", lvds0_sels, ARRAY_SIZE(lvds0_sels), IMX_SC_R_LVDS_0, IMX_SC_PM_CLK_MISC3);
    imx_clk_scu2("mipi0_dsi_tx_esc_clk", mipi_sels, ARRAY_SIZE(mipi_sels), IMX_SC_R_MIPI_0, IMX_SC_PM_CLK_MST_BUS);
    imx_clk_scu2("mipi0_dsi_rx_esc_clk", mipi_sels, ARRAY_SIZE(mipi_sels), IMX_SC_R_MIPI_0, IMX_SC_PM_CLK_SLV_BUS);
    imx_clk_scu2("mipi0_dsi_phy_clk", mipi_sels, ARRAY_SIZE(mipi_sels), IMX_SC_R_MIPI_0, IMX_SC_PM_CLK_PHY);
@@ -213,9 +229,9 @@ static int imx8qxp_clk_probe(struct platform_device *pdev)
 
    imx_clk_scu("mipi1_bypass_clk", IMX_SC_R_MIPI_1, IMX_SC_PM_CLK_BYPASS);
    imx_clk_scu("mipi1_pixel_clk", IMX_SC_R_MIPI_1, IMX_SC_PM_CLK_PER);
-    imx_clk_scu("mipi1_lvds_pixel_clk", IMX_SC_R_LVDS_1, IMX_SC_PM_CLK_MISC2);
    imx_clk_scu("mipi1_lvds_bypass_clk", IMX_SC_R_LVDS_1, IMX_SC_PM_CLK_BYPASS);
-    imx_clk_scu("mipi1_lvds_phy_clk", IMX_SC_R_LVDS_1, IMX_SC_PM_CLK_MISC3);
+    imx_clk_scu2("mipi1_lvds_pixel_clk", lvds1_sels, ARRAY_SIZE(lvds1_sels), IMX_SC_R_LVDS_1, IMX_SC_PM_CLK_MISC2);
+    imx_clk_scu2("mipi1_lvds_phy_clk", lvds1_sels, ARRAY_SIZE(lvds1_sels), IMX_SC_R_LVDS_1, IMX_SC_PM_CLK_MISC3);
 
    imx_clk_scu2("mipi1_dsi_tx_esc_clk", mipi_sels, ARRAY_SIZE(mipi_sels), IMX_SC_R_MIPI_1, IMX_SC_PM_CLK_MST_BUS);
    imx_clk_scu2("mipi1_dsi_rx_esc_clk", mipi_sels, ARRAY_SIZE(mipi_sels), IMX_SC_R_MIPI_1, IMX_SC_PM_CLK_SLV_BUS);
@@ -306,18 +306,21 @@ static void __init pxa168_clk_init(struct device_node *np)
    pxa_unit->mpmu_base = of_iomap(np, 0);
    if (!pxa_unit->mpmu_base) {
        pr_err("failed to map mpmu registers\n");
+        kfree(pxa_unit);
        return;
    }
 
    pxa_unit->apmu_base = of_iomap(np, 1);
    if (!pxa_unit->apmu_base) {
        pr_err("failed to map apmu registers\n");
+        kfree(pxa_unit);
        return;
    }
 
    pxa_unit->apbc_base = of_iomap(np, 2);
    if (!pxa_unit->apbc_base) {
        pr_err("failed to map apbc registers\n");
+        kfree(pxa_unit);
        return;
    }
 
@@ -419,8 +419,8 @@ int otx2_cptlf_init(struct otx2_cptlfs_info *lfs, u8 eng_grp_mask, int pri,
    return 0;
 
 free_iq:
-    otx2_cpt_free_instruction_queues(lfs);
    cptlf_hw_cleanup(lfs);
+    otx2_cpt_free_instruction_queues(lfs);
 detach_rsrcs:
    otx2_cpt_detach_rsrcs_msg(lfs);
 clear_lfs_num:
@@ -431,11 +431,13 @@ EXPORT_SYMBOL_NS_GPL(otx2_cptlf_init, CRYPTO_DEV_OCTEONTX2_CPT);
 
 void otx2_cptlf_shutdown(struct otx2_cptlfs_info *lfs)
 {
-    lfs->lfs_num = 0;
    /* Cleanup LFs hardware side */
    cptlf_hw_cleanup(lfs);
+    /* Free instruction queues */
+    otx2_cpt_free_instruction_queues(lfs);
    /* Send request to detach LFs */
    otx2_cpt_detach_rsrcs_msg(lfs);
+    lfs->lfs_num = 0;
 }
 EXPORT_SYMBOL_NS_GPL(otx2_cptlf_shutdown, CRYPTO_DEV_OCTEONTX2_CPT);
 
@@ -249,8 +249,11 @@ static void cptvf_lf_shutdown(struct otx2_cptlfs_info *lfs)
    otx2_cptlf_unregister_interrupts(lfs);
    /* Cleanup LFs software side */
    lf_sw_cleanup(lfs);
+    /* Free instruction queues */
+    otx2_cpt_free_instruction_queues(lfs);
    /* Send request to detach LFs */
    otx2_cpt_detach_rsrcs_msg(lfs);
+    lfs->lfs_num = 0;
 }
 
 static int cptvf_lf_init(struct otx2_cptvf_dev *cptvf)
@@ -104,7 +104,7 @@ static struct stm32_crc *stm32_crc_get_next_crc(void)
    struct stm32_crc *crc;
 
    spin_lock_bh(&crc_list.lock);
-    crc = list_first_entry(&crc_list.dev_list, struct stm32_crc, list);
+    crc = list_first_entry_or_null(&crc_list.dev_list, struct stm32_crc, list);
    if (crc)
        list_move_tail(&crc->list, &crc_list.dev_list);
    spin_unlock_bh(&crc_list.lock);
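The change above matters because taking the "first entry" of an empty intrusive list yields a pointer computed from the list head itself, not a real element. A tiny user-space sketch of the safe pattern with an invented struct (this is not the kernel list API, just the same shape):

/* Build: cc -std=c99 -o list_demo list_demo.c */
#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct device { int id; struct list_head node; };

/* Returns the first device, or NULL when the list is empty; blindly doing
 * container_of(head->next, ...) on an empty list would return garbage. */
static struct device *first_device_or_null(struct list_head *head)
{
    if (head->next == head)     /* empty: head points at itself */
        return NULL;
    return container_of(head->next, struct device, node);
}

int main(void)
{
    struct list_head head = { &head, &head };   /* empty list */
    struct device d = { .id = 7 };

    printf("empty list -> %p\n", (void *)first_device_or_null(&head));

    /* insert one element */
    d.node.next = head.next; d.node.prev = &head;
    head.next->prev = &d.node; head.next = &d.node;

    printf("first id   -> %d\n", first_device_or_null(&head)->id);
    return 0;
}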
@@ -333,6 +333,7 @@ aldebaran_mode2_restore_hwcontext(struct amdgpu_reset_control *reset_ctl,
 {
    struct list_head *reset_device_list = reset_context->reset_device_list;
    struct amdgpu_device *tmp_adev = NULL;
+    struct amdgpu_ras *con;
    int r;
 
    if (reset_device_list == NULL)
@@ -358,7 +359,30 @@ aldebaran_mode2_restore_hwcontext(struct amdgpu_reset_control *reset_ctl,
         */
        amdgpu_register_gpu_instance(tmp_adev);
 
-        /* Resume RAS */
+        /* Resume RAS, ecc_irq */
+        con = amdgpu_ras_get_context(tmp_adev);
+        if (!amdgpu_sriov_vf(tmp_adev) && con) {
+            if (tmp_adev->sdma.ras &&
+                tmp_adev->sdma.ras->ras_block.ras_late_init) {
+                r = tmp_adev->sdma.ras->ras_block.ras_late_init(tmp_adev,
+                        &tmp_adev->sdma.ras->ras_block.ras_comm);
+                if (r) {
+                    dev_err(tmp_adev->dev, "SDMA failed to execute ras_late_init! ret:%d\n", r);
+                    goto end;
+                }
+            }
+
+            if (tmp_adev->gfx.ras &&
+                tmp_adev->gfx.ras->ras_block.ras_late_init) {
+                r = tmp_adev->gfx.ras->ras_block.ras_late_init(tmp_adev,
+                        &tmp_adev->gfx.ras->ras_block.ras_comm);
+                if (r) {
+                    dev_err(tmp_adev->dev, "GFX failed to execute ras_late_init! ret:%d\n", r);
+                    goto end;
+                }
+            }
+        }
+
        amdgpu_ras_resume(tmp_adev);
 
        /* Update PSP FW topology after reset */
@@ -90,7 +90,7 @@ struct amdgpu_amdkfd_fence *to_amdgpu_amdkfd_fence(struct dma_fence *f)
        return NULL;
 
    fence = container_of(f, struct amdgpu_amdkfd_fence, base);
-    if (fence && f->ops == &amdkfd_fence_ops)
+    if (f->ops == &amdkfd_fence_ops)
        return fence;
 
    return NULL;
@@ -1310,6 +1310,7 @@ bool amdgpu_device_need_post(struct amdgpu_device *adev)
            return true;
 
        fw_ver = *((uint32_t *)adev->pm.fw->data + 69);
+        release_firmware(adev->pm.fw);
        if (fw_ver < 0x00160e00)
            return true;
    }
@@ -808,19 +808,26 @@ int amdgpu_gmc_vram_checking(struct amdgpu_device *adev)
     * seconds, so here, we just pick up three parts for emulation.
     */
    ret = memcmp(vram_ptr, cptr, 10);
-    if (ret)
-        return ret;
+    if (ret) {
+        ret = -EIO;
+        goto release_buffer;
+    }
 
    ret = memcmp(vram_ptr + (size / 2), cptr, 10);
-    if (ret)
-        return ret;
+    if (ret) {
+        ret = -EIO;
+        goto release_buffer;
+    }
 
    ret = memcmp(vram_ptr + size - 10, cptr, 10);
-    if (ret)
-        return ret;
+    if (ret) {
+        ret = -EIO;
+        goto release_buffer;
+    }
 
+release_buffer:
    amdgpu_bo_free_kernel(&vram_bo, &vram_gpu,
            &vram_ptr);
 
-    return 0;
+    return ret;
 }
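Two ideas in the hunk above are worth spelling out: every exit now funnels through one cleanup label (so the buffer is always released), and a non-zero memcmp() result is translated into a real error code instead of being returned raw. A user-space sketch under those assumptions, with invented names:

/* Build: cc -std=c99 -o verify_demo verify_demo.c */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical verify step: early returns would leak "copy"; a single
 * release label frees it exactly once on every path. */
static int verify_pattern(const unsigned char *buf, size_t size,
              const unsigned char *pattern, size_t plen)
{
    unsigned char *copy = malloc(size);
    int ret;

    if (!copy)
        return -ENOMEM;
    memcpy(copy, buf, size);

    if (memcmp(copy, pattern, plen)) {
        ret = -EIO;     /* report an I/O-style error, not memcmp()'s value */
        goto release_buffer;
    }
    if (memcmp(copy + size - plen, pattern, plen)) {
        ret = -EIO;
        goto release_buffer;
    }
    ret = 0;

release_buffer:
    free(copy);         /* single cleanup point */
    return ret;
}

int main(void)
{
    unsigned char buf[16];
    unsigned char pat[4] = { 0xaa, 0xaa, 0xaa, 0xaa };

    memset(buf, 0xaa, sizeof(buf));
    printf("good buffer: %d\n", verify_pattern(buf, sizeof(buf), pat, sizeof(pat)));
    buf[15] = 0;
    printf("bad buffer:  %d\n", verify_pattern(buf, sizeof(buf), pat, sizeof(pat)));
    return 0;
}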
@@ -1222,19 +1222,15 @@ int amdgpu_bo_get_metadata(struct amdgpu_bo *bo, void *buffer,
  * amdgpu_bo_move_notify - notification about a memory move
  * @bo: pointer to a buffer object
  * @evict: if this move is evicting the buffer from the graphics address space
- * @new_mem: new information of the bufer object
  *
  * Marks the corresponding &amdgpu_bo buffer object as invalid, also performs
  * bookkeeping.
  * TTM driver callback which is called when ttm moves a buffer.
  */
-void amdgpu_bo_move_notify(struct ttm_buffer_object *bo,
-               bool evict,
-               struct ttm_resource *new_mem)
+void amdgpu_bo_move_notify(struct ttm_buffer_object *bo, bool evict)
 {
    struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev);
    struct amdgpu_bo *abo;
-    struct ttm_resource *old_mem = bo->resource;
 
    if (!amdgpu_bo_is_amdgpu_bo(bo))
        return;
@@ -1251,13 +1247,6 @@ void amdgpu_bo_move_notify(struct ttm_buffer_object *bo,
    /* remember the eviction */
    if (evict)
        atomic64_inc(&adev->num_evictions);
-
-    /* update statistics */
-    if (!new_mem)
-        return;
-
-    /* move_notify is called before move happens */
-    trace_amdgpu_bo_move(abo, new_mem->mem_type, old_mem->mem_type);
 }
 
 void amdgpu_bo_get_memory(struct amdgpu_bo *bo, uint64_t *vram_mem,
@@ -312,9 +312,7 @@ int amdgpu_bo_set_metadata (struct amdgpu_bo *bo, void *metadata,
 int amdgpu_bo_get_metadata(struct amdgpu_bo *bo, void *buffer,
               size_t buffer_size, uint32_t *metadata_size,
               uint64_t *flags);
-void amdgpu_bo_move_notify(struct ttm_buffer_object *bo,
-               bool evict,
-               struct ttm_resource *new_mem);
+void amdgpu_bo_move_notify(struct ttm_buffer_object *bo, bool evict);
 void amdgpu_bo_release_notify(struct ttm_buffer_object *bo);
 vm_fault_t amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo);
 void amdgpu_bo_fence(struct amdgpu_bo *bo, struct dma_fence *fence,
@@ -191,7 +191,8 @@ static bool amdgpu_sync_test_fence(struct amdgpu_device *adev,
 
    /* Never sync to VM updates either. */
    if (fence_owner == AMDGPU_FENCE_OWNER_VM &&
-        owner != AMDGPU_FENCE_OWNER_UNDEFINED)
+        owner != AMDGPU_FENCE_OWNER_UNDEFINED &&
+        owner != AMDGPU_FENCE_OWNER_KFD)
        return false;
 
    /* Ignore fences depending on the sync mode */
@@ -555,10 +555,11 @@ static int amdgpu_bo_move(struct ttm_buffer_object *bo, bool evict,
            return r;
    }
 
+    trace_amdgpu_bo_move(abo, new_mem->mem_type, old_mem->mem_type);
 out:
    /* update statistics */
    atomic64_add(bo->base.size, &adev->num_bytes_moved);
-    amdgpu_bo_move_notify(bo, evict, new_mem);
+    amdgpu_bo_move_notify(bo, evict);
    return 0;
 }
 
@@ -1503,7 +1504,7 @@ static int amdgpu_ttm_access_memory(struct ttm_buffer_object *bo,
 static void
 amdgpu_bo_delete_mem_notify(struct ttm_buffer_object *bo)
 {
-    amdgpu_bo_move_notify(bo, false, NULL);
+    amdgpu_bo_move_notify(bo, false);
 }
 
 static struct ttm_device_funcs amdgpu_bo_driver = {
@@ -1110,9 +1110,13 @@ int amdgpu_ucode_request(struct amdgpu_device *adev, const struct firmware **fw,
 
    if (err)
        return -ENODEV;
+
    err = amdgpu_ucode_validate(*fw);
-    if (err)
+    if (err) {
        dev_dbg(adev->dev, "\"%s\" failed to validate\n", fw_name);
+        release_firmware(*fw);
+        *fw = NULL;
+    }
 
    return err;
 }
@@ -1144,6 +1144,10 @@ static int gmc_v10_0_hw_fini(void *handle)
 
    amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
 
+    if (adev->gmc.ecc_irq.funcs &&
+        amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__UMC))
+        amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0);
+
    return 0;
 }
 
@@ -951,6 +951,11 @@ static int gmc_v11_0_hw_fini(void *handle)
    }
 
    amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
+
+    if (adev->gmc.ecc_irq.funcs &&
+        amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__UMC))
+        amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0);
+
    gmc_v11_0_gart_disable(adev);
 
    return 0;
@@ -921,8 +921,8 @@ static int gmc_v6_0_hw_init(void *handle)
 
    if (amdgpu_emu_mode == 1)
        return amdgpu_gmc_vram_checking(adev);
-    else
-        return r;
+
+    return 0;
 }
 
 static int gmc_v6_0_hw_fini(void *handle)
@@ -1110,8 +1110,8 @@ static int gmc_v7_0_hw_init(void *handle)
 
    if (amdgpu_emu_mode == 1)
        return amdgpu_gmc_vram_checking(adev);
-    else
-        return r;
+
+    return 0;
 }
 
 static int gmc_v7_0_hw_fini(void *handle)
@@ -1240,8 +1240,8 @@ static int gmc_v8_0_hw_init(void *handle)
 
    if (amdgpu_emu_mode == 1)
        return amdgpu_gmc_vram_checking(adev);
-    else
-        return r;
+
+    return 0;
 }
 
 static int gmc_v8_0_hw_fini(void *handle)
@@ -1861,8 +1861,8 @@ static int gmc_v9_0_hw_init(void *handle)
 
    if (amdgpu_emu_mode == 1)
        return amdgpu_gmc_vram_checking(adev);
-    else
-        return r;
+
+    return 0;
 }
 
 /**
@@ -1900,6 +1900,10 @@ static int gmc_v9_0_hw_fini(void *handle)
 
    amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
 
+    if (adev->gmc.ecc_irq.funcs &&
+        amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__UMC))
+        amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0);
+
    return 0;
 }
 
@@ -380,14 +380,9 @@ static void svm_range_bo_release(struct kref *kref)
        spin_lock(&svm_bo->list_lock);
    }
    spin_unlock(&svm_bo->list_lock);
-    if (!dma_fence_is_signaled(&svm_bo->eviction_fence->base)) {
-        /* We're not in the eviction worker.
-         * Signal the fence and synchronize with any
-         * pending eviction work.
-         */
+    if (!dma_fence_is_signaled(&svm_bo->eviction_fence->base))
+        /* We're not in the eviction worker. Signal the fence. */
        dma_fence_signal(&svm_bo->eviction_fence->base);
-        cancel_work_sync(&svm_bo->eviction_work);
-    }
    dma_fence_put(&svm_bo->eviction_fence->base);
    amdgpu_bo_unref(&svm_bo->bo);
    kfree(svm_bo);
@@ -2246,8 +2241,10 @@ static void svm_range_deferred_list_work(struct work_struct *work)
        mutex_unlock(&svms->lock);
        mmap_write_unlock(mm);
 
-        /* Pairs with mmget in svm_range_add_list_work */
-        mmput(mm);
+        /* Pairs with mmget in svm_range_add_list_work. If dropping the
+         * last mm refcount, schedule release work to avoid circular locking
+         */
+        mmput_async(mm);
 
        spin_lock(&svms->deferred_list_lock);
    }
@@ -2556,6 +2553,7 @@ svm_range_get_range_boundaries(struct kfd_process *p, int64_t addr,
 {
    struct vm_area_struct *vma;
    struct interval_tree_node *node;
+    struct rb_node *rb_node;
    unsigned long start_limit, end_limit;
 
    vma = find_vma(p->mm, addr << PAGE_SHIFT);
@@ -2578,16 +2576,15 @@ svm_range_get_range_boundaries(struct kfd_process *p, int64_t addr,
    if (node) {
        end_limit = min(end_limit, node->start);
        /* Last range that ends before the fault address */
-        node = container_of(rb_prev(&node->rb),
-                    struct interval_tree_node, rb);
+        rb_node = rb_prev(&node->rb);
    } else {
        /* Last range must end before addr because
         * there was no range after addr
         */
-        node = container_of(rb_last(&p->svms.objects.rb_root),
-                    struct interval_tree_node, rb);
+        rb_node = rb_last(&p->svms.objects.rb_root);
    }
-    if (node) {
+    if (rb_node) {
+        node = container_of(rb_node, struct interval_tree_node, rb);
        if (node->last >= addr) {
            WARN(1, "Overlap with prev node and page fault addr\n");
            return -EFAULT;
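The hunk above works because container_of() applied to a NULL node does not yield NULL; it yields a small bogus pointer that a later NULL check cannot catch. The fix keeps the raw node pointer and converts only after the check. A minimal sketch of that pattern with toy types (nothing below is the real interval-tree API):

/* Build: cc -std=c99 -o rbprev_demo rbprev_demo.c */
#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Toy stand-ins for the tree node and the structure embedding it. */
struct rb_node { struct rb_node *prev; };
struct range { unsigned long last; struct rb_node rb; };

/* rb_prev()/rb_last() may legitimately return NULL at the edge of the tree;
 * check the node pointer before converting it to the containing struct. */
static struct range *prev_range(struct rb_node *rb_node)
{
    if (!rb_node)
        return NULL;
    return container_of(rb_node, struct range, rb);
}

int main(void)
{
    struct range first = { .last = 100, .rb = { .prev = NULL } };
    struct range *p = prev_range(first.rb.prev);

    printf("previous range: %s\n", p ? "found" : "none (NULL handled)");
    return 0;
}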
@@ -3310,13 +3307,14 @@ svm_range_trigger_migration(struct mm_struct *mm, struct svm_range *prange,
 
 int svm_range_schedule_evict_svm_bo(struct amdgpu_amdkfd_fence *fence)
 {
-    if (!fence)
-        return -EINVAL;
-
-    if (dma_fence_is_signaled(&fence->base))
-        return 0;
-
-    if (fence->svm_bo) {
+    /* Dereferencing fence->svm_bo is safe here because the fence hasn't
+     * signaled yet and we're under the protection of the fence->lock.
+     * After the fence is signaled in svm_range_bo_release, we cannot get
+     * here any more.
+     *
+     * Reference is dropped in svm_range_evict_svm_bo_worker.
+     */
+    if (svm_bo_ref_unless_zero(fence->svm_bo)) {
        WRITE_ONCE(fence->svm_bo->evicting, 1);
        schedule_work(&fence->svm_bo->eviction_work);
    }
@@ -3331,8 +3329,6 @@ static void svm_range_evict_svm_bo_worker(struct work_struct *work)
    int r = 0;
 
    svm_bo = container_of(work, struct svm_range_bo, eviction_work);
-    if (!svm_bo_ref_unless_zero(svm_bo))
-        return; /* svm_bo was freed while eviction was pending */
 
    if (mmget_not_zero(svm_bo->eviction_fence->mm)) {
        mm = svm_bo->eviction_fence->mm;
@@ -1513,17 +1513,19 @@ static int kfd_add_peer_prop(struct kfd_topology_device *kdev,
        /* CPU->CPU link*/
        cpu_dev = kfd_topology_device_by_proximity_domain(iolink1->node_to);
        if (cpu_dev) {
-            list_for_each_entry(iolink3, &cpu_dev->io_link_props, list)
-                if (iolink3->node_to == iolink2->node_to)
-                    break;
+            list_for_each_entry(iolink3, &cpu_dev->io_link_props, list) {
+                if (iolink3->node_to != iolink2->node_to)
+                    continue;
 
-            props->weight += iolink3->weight;
-            props->min_latency += iolink3->min_latency;
-            props->max_latency += iolink3->max_latency;
-            props->min_bandwidth = min(props->min_bandwidth,
-                           iolink3->min_bandwidth);
-            props->max_bandwidth = min(props->max_bandwidth,
-                           iolink3->max_bandwidth);
+                props->weight += iolink3->weight;
+                props->min_latency += iolink3->min_latency;
+                props->max_latency += iolink3->max_latency;
+                props->min_bandwidth = min(props->min_bandwidth,
+                               iolink3->min_bandwidth);
+                props->max_bandwidth = min(props->max_bandwidth,
+                               iolink3->max_bandwidth);
+                break;
+            }
        } else {
            WARN(1, "CPU node not found");
        }
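The restructuring above moves the accumulation inside the loop so the iterator is only used when a match was actually found; using a list iterator after the loop is undefined if no element matched. A small stand-alone sketch of the same shape using a plain array (names and data are invented):

/* Build: cc -std=c99 -o search_demo search_demo.c */
#include <stdio.h>

struct iolink { int node_to; int weight; };

/* Accumulate at the match site and break out; a "search, then use the
 * iterator after the loop" pattern would read a stale or past-the-end
 * element when nothing matches. */
static int weight_to_node(const struct iolink *links, int n, int target)
{
    int total = 0;

    for (int i = 0; i < n; i++) {
        if (links[i].node_to != target)
            continue;
        total += links[i].weight;   /* only touched on a real match */
        break;
    }
    return total;
}

int main(void)
{
    struct iolink links[] = { { 1, 10 }, { 2, 20 }, { 3, 30 } };

    printf("weight to node 2: %d\n", weight_to_node(links, 3, 2));
    printf("weight to node 9: %d\n", weight_to_node(links, 3, 9)); /* no match: 0 */
    return 0;
}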
@@ -1903,6 +1903,10 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c
        wait_for_no_pipes_pending(dc, context);
        /* pplib is notified if disp_num changed */
        dc->hwss.optimize_bandwidth(dc, context);
+        /* Need to do otg sync again as otg could be out of sync due to otg
+         * workaround applied during clock update
+         */
+        dc_trigger_sync(dc, context);
    }
 
    if (dc->hwss.update_dsc_pg)
@@ -244,7 +244,7 @@ enum pixel_format {
 #define DC_MAX_DIRTY_RECTS 3
 struct dc_flip_addrs {
    struct dc_plane_address address;
-    unsigned int flip_timestamp_in_us;
+    unsigned long long flip_timestamp_in_us;
    bool flip_immediate;
    /* TODO: add flip duration for FreeSync */
    bool triplebuffer_flips;
@@ -810,6 +810,8 @@ static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerforman
                (v->DRAMSpeedPerState[mode_lib->vba.VoltageLevel] <= MEM_STROBE_FREQ_MHZ ||
                    v->DCFCLKPerState[mode_lib->vba.VoltageLevel] <= DCFCLK_FREQ_EXTRA_PREFETCH_REQ_MHZ) ?
                    mode_lib->vba.ip.min_prefetch_in_strobe_us : 0,
+                mode_lib->vba.PrefetchModePerState[mode_lib->vba.VoltageLevel][mode_lib->vba.maxMpcComb] > 0 || mode_lib->vba.DRAMClockChangeRequirementFinal == false,
+
                /* Output */
                &v->DSTXAfterScaler[k],
                &v->DSTYAfterScaler[k],
@@ -3291,6 +3293,7 @@ void dml32_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
                    v->SwathHeightCThisState[k], v->TWait,
                    (v->DRAMSpeedPerState[i] <= MEM_STROBE_FREQ_MHZ || v->DCFCLKState[i][j] <= DCFCLK_FREQ_EXTRA_PREFETCH_REQ_MHZ) ?
                        mode_lib->vba.ip.min_prefetch_in_strobe_us : 0,
+                    mode_lib->vba.PrefetchModePerState[i][j] > 0 || mode_lib->vba.DRAMClockChangeRequirementFinal == false,
 
                    /* Output */
                    &v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.DSTXAfterScaler[k],
@@ -3418,6 +3418,7 @@ bool dml32_CalculatePrefetchSchedule(
        unsigned int SwathHeightC,
        double TWait,
        double TPreReq,
+        bool ExtendPrefetchIfPossible,
        /* Output */
        double *DSTXAfterScaler,
        double *DSTYAfterScaler,
@@ -3887,12 +3888,32 @@ bool dml32_CalculatePrefetchSchedule(
            /* Clamp to oto for bandwidth calculation */
            LinesForPrefetchBandwidth = dst_y_prefetch_oto;
        } else {
-            *DestinationLinesForPrefetch = dst_y_prefetch_equ;
-            TimeForFetchingMetaPTE = Tvm_equ;
-            TimeForFetchingRowInVBlank = Tr0_equ;
-            *PrefetchBandwidth = prefetch_bw_equ;
-            /* Clamp to equ for bandwidth calculation */
-            LinesForPrefetchBandwidth = dst_y_prefetch_equ;
+            /* For mode programming we want to extend the prefetch as much as possible
+             * (up to oto, or as long as we can for equ) if we're not already applying
+             * the 60us prefetch requirement. This is to avoid intermittent underflow
+             * issues during prefetch.
+             *
+             * The prefetch extension is applied under the following scenarios:
+             * 1. We're in prefetch mode > 0 (i.e. we don't support MCLK switch in blank)
+             * 2. We're using subvp or drr methods of p-state switch, in which case we
+             *    we don't care if prefetch takes up more of the blanking time
+             *
+             * Mode programming typically chooses the smallest prefetch time possible
+             * (i.e. highest bandwidth during prefetch) presumably to create margin between
+             * p-states / c-states that happen in vblank and prefetch. Therefore we only
+             * apply this prefetch extension when p-state in vblank is not required (UCLK
+             * p-states take up the most vblank time).
+             */
+            if (ExtendPrefetchIfPossible && TPreReq == 0 && VStartup < MaxVStartup) {
+                MyError = true;
+            } else {
+                *DestinationLinesForPrefetch = dst_y_prefetch_equ;
+                TimeForFetchingMetaPTE = Tvm_equ;
+                TimeForFetchingRowInVBlank = Tr0_equ;
+                *PrefetchBandwidth = prefetch_bw_equ;
+                /* Clamp to equ for bandwidth calculation */
+                LinesForPrefetchBandwidth = dst_y_prefetch_equ;
+            }
        }
 
        *DestinationLinesToRequestVMInVBlank = dml_ceil(4.0 * TimeForFetchingMetaPTE / LineTime, 1.0) / 4.0;
@@ -744,6 +744,7 @@ bool dml32_CalculatePrefetchSchedule(
        unsigned int SwathHeightC,
        double TWait,
        double TPreReq,
+        bool ExtendPrefetchIfPossible,
        /* Output */
        double *DSTXAfterScaler,
        double *DSTYAfterScaler,
@@ -818,8 +818,6 @@ bool is_psr_su_specific_panel(struct dc_link *link)
                isPSRSUSupported = false;
            else if (dpcd_caps->sink_dev_id_str[1] == 0x08 && dpcd_caps->sink_dev_id_str[0] == 0x03)
                isPSRSUSupported = false;
-            else if (dpcd_caps->sink_dev_id_str[1] == 0x08 && dpcd_caps->sink_dev_id_str[0] == 0x03)
-                isPSRSUSupported = false;
            else if (dpcd_caps->psr_info.force_psrsu_cap == 0x1)
                isPSRSUSupported = true;
        }
@@ -200,7 +200,7 @@ static int get_platform_power_management_table(
        struct pp_hwmgr *hwmgr,
        ATOM_Tonga_PPM_Table *atom_ppm_table)
 {
-    struct phm_ppm_table *ptr = kzalloc(sizeof(ATOM_Tonga_PPM_Table), GFP_KERNEL);
+    struct phm_ppm_table *ptr = kzalloc(sizeof(*ptr), GFP_KERNEL);
    struct phm_ppt_v1_information *pp_table_information =
        (struct phm_ppt_v1_information *)(hwmgr->pptable);
 
Some files were not shown because too many files have changed in this diff.