This is the 6.1.4 stable release
-----BEGIN PGP SIGNATURE-----
iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmO5RX4ACgkQONu9yGCS
aT7NFRAAlqi2Wwx1NZU3HE9nr/fgdGFDlEJbKXP9jiwIjiIx5mAPaysb9uhOG5qj
11uG/S3+xsb5ryhSpCR5I0rPnymK/XWDQtfYlKMuB+bLr+w03BBYzZX5v+YihUmw
mWa32+xiei1KzIjxMGVFciiSYoMxWR5smqe559LPiabB3dcRfRPSHQDJCOWE5T2y
2Tlts/gUpfqMh+MPNQOYgB0TZmUhzin9XW4AcDqLsyupKRLKEusDVIA5QfznuCyN
UFjTem9h+4qPSZOgzdmSX9QljYL8Jqh4gwXUcl4/EUoObPN/tTbjCYZkuyOT4r3F
FsN/w6+32C/TjQSBg50d4yT4TFC4bjnc3VCb2dI+6WazLqAjS7RrkoqTl37K/rwC
Gb3FmjwQNNx/iNq5kM5NvuJgLWuLAVZBn+WxWBk1hqE5f8l0l5/NgxdHfSwjMvJL
toqXT98yoSTMMGaQ8QA4MLh9Gx2CYC0JeyaWnFavMOBs9Oxy+SbNyoUKo0Wn8kHh
z/6/Pdk6Qd3PKv9pXsrTpWAPzQs5w23+XbZ2pJw+K5yVKvmNQJEhSJ9fzFRksldW
ykDMzVmZ0kswcNuJgQFUi4QTu17p3FA4UJG1xvJNCKPsgelLe8153TFDds0yXfu1
IwqZiHpwBF1tyGHYEBdqmivj77eWtNsWMSZHHDN9jJ9sNxS+kSM=
=CeYO
-----END PGP SIGNATURE-----

Merge 6.1.4 into android14-6.1

Changes in 6.1.4
    drm/amdgpu: skip MES for S0ix as well since it's part of GFX
    drm/amdgpu: skip mes self test after s0i3 resume for MES IP v11.0
    media: stv0288: use explicitly signed char
    cxl/region: Fix memdev reuse check
    arm64: dts: qcom: sc8280xp: fix UFS DMA coherency
    arm64: Prohibit instrumentation on arch_stack_walk()
    soc: qcom: Select REMAP_MMIO for LLCC driver
    soc: qcom: Select REMAP_MMIO for ICC_BWMON driver
    kest.pl: Fix grub2 menu handling for rebooting
    ktest.pl minconfig: Unset configs instead of just removing them
    jbd2: use the correct print format
    perf/x86/intel/uncore: Disable I/O stacks to PMU mapping on ICX-D
    perf/x86/intel/uncore: Clear attr_update properly
    arm64: dts: qcom: sdm845-db845c: correct SPI2 pins drive strength
    arm64: dts: qcom: sc8280xp: fix UFS reference clocks
    mmc: sdhci-sprd: Disable CLK_AUTO when the clock is less than 400K
    phy: qcom-qmp-combo: fix out-of-bounds clock access
    drm/amd/pm: update SMU13.0.0 reported maximum shader clock
    drm/amd/pm: correct SMU13.0.0 pstate profiling clock settings
    btrfs: fix uninitialized parent in insert_state
    btrfs: fix extent map use-after-free when handling missing device in read_one_chunk
    btrfs: fix resolving backrefs for inline extent followed by prealloc
    ARM: ux500: do not directly dereference __iomem
    arm64: dts: qcom: sdm850-samsung-w737: correct I2C12 pins drive strength
    random: use rejection sampling for uniform bounded random integers
    x86/fpu/xstate: Fix XSTATE_WARN_ON() to emit relevant diagnostics
    arm64: dts: qcom: sdm850-lenovo-yoga-c630: correct I2C12 pins drive strength
    cxl/region: Fix missing probe failure
    EDAC/mc_sysfs: Increase legacy channel support to 12
    selftests: Use optional USERCFLAGS and USERLDFLAGS
    x86/MCE/AMD: Clear DFR errors found in THR handler
    random: add helpers for random numbers with given floor or range
    PM/devfreq: governor: Add a private governor_data for governor
    cpufreq: Init completion before kobject_init_and_add()
    ext2: unbugger ext2_empty_dir()
    media: s5p-mfc: Fix to handle reference queue during finishing
    media: s5p-mfc: Clear workbit to handle error condition
    media: s5p-mfc: Fix in register read and write for H264
    bpf: Resolve fext program type when checking map compatibility
    ALSA: patch_realtek: Fix Dell Inspiron Plus 16
    ALSA: hda/realtek: Apply dual codec fixup for Dell Latitude laptops
    platform/x86: thinkpad_acpi: Fix max_brightness of thinklight
    platform/x86: ideapad-laptop: Revert "check for touchpad support in _CFG"
    platform/x86: ideapad-laptop: Add new _CFG bit numbers for future use
    platform/x86: ideapad-laptop: support for more special keys in WMI
    ACPI: video: Simplify __acpi_video_get_backlight_type()
    ACPI: video: Prefer native over vendor
    platform/x86: ideapad-laptop: Refactor ideapad_sync_touchpad_state()
    platform/x86: ideapad-laptop: Do not send KEY_TOUCHPAD* events on probe / resume
    platform/x86: ideapad-laptop: Only toggle ps2 aux port on/off on select models
    platform/x86: ideapad-laptop: Send KEY_TOUCHPAD_TOGGLE on some models
    platform/x86: ideapad-laptop: Stop writing VPCCMD_W_TOUCHPAD at probe time
    platform/x86: intel-uncore-freq: add Emerald Rapids support
    ALSA: hda/cirrus: Add extra 10 ms delay to allow PLL settle and lock.
    platform/x86: x86-android-tablets: Add Medion Lifetab S10346 data
    platform/x86: x86-android-tablets: Add Lenovo Yoga Tab 3 (YT3-X90F) charger + fuel-gauge data
    platform/x86: x86-android-tablets: Add Advantech MICA-071 extra button
    HID: Ignore HP Envy x360 eu0009nv stylus battery
    ALSA: usb-audio: Add new quirk FIXED_RATE for JBL Quantum810 Wireless
    fs: dlm: fix sock release if listen fails
    fs: dlm: retry accept() until -EAGAIN or error returns
    mptcp: netlink: fix some error return code
    mptcp: remove MPTCP 'ifdef' in TCP SYN cookies
    mptcp: dedicated request sock for subflow in v6
    mptcp: use proper req destructor for IPv6
    dm cache: Fix ABBA deadlock between shrink_slab and dm_cache_metadata_abort
    dm thin: Fix ABBA deadlock between shrink_slab and dm_pool_abort_metadata
    dm thin: Use last transaction's pmd->root when commit failed
    dm thin: resume even if in FAIL mode
    dm thin: Fix UAF in run_timer_softirq()
    dm integrity: Fix UAF in dm_integrity_dtr()
    dm clone: Fix UAF in clone_dtr()
    dm cache: Fix UAF in destroy()
    dm cache: set needs_check flag after aborting metadata
    ata: ahci: fix enum constants for gcc-13
    PCI/DOE: Fix maximum data object length miscalculation
    tracing/hist: Fix out-of-bound write on 'action_data.var_ref_idx'
    perf/core: Call LSM hook after copying perf_event_attr
    xtensa: add __umulsidi3 helper
    of/kexec: Fix reading 32-bit "linux,initrd-{start,end}" values
    ima: Fix hash dependency to correct algorithm
    KVM: VMX: Resume guest immediately when injecting #GP on ECREATE
    KVM: nVMX: Inject #GP, not #UD, if "generic" VMXON CR0/CR4 check fails
    KVM: x86: fix APICv/x2AVIC disabled when vm reboot by itself
    KVM: nVMX: Properly expose ENABLE_USR_WAIT_PAUSE control to L1
    x86/microcode/intel: Do not retry microcode reloading on the APs
    ftrace/x86: Add back ftrace_expected for ftrace bug reports
    x86/kprobes: Fix kprobes instruction boudary check with CONFIG_RETHUNK
    x86/kprobes: Fix optprobe optimization check with CONFIG_RETHUNK
    tracing: Fix race where eprobes can be called before the event
    powerpc/ftrace: fix syscall tracing on PPC64_ELF_ABI_V1
    tracing: Fix complicated dependency of CONFIG_TRACER_MAX_TRACE
    tracing/hist: Fix wrong return value in parse_action_params()
    tracing/probes: Handle system names with hyphens
    tracing: Fix issue of missing one synthetic field
    tracing: Fix infinite loop in tracing_read_pipe on overflowed print_trace_line
    staging: media: tegra-video: fix chan->mipi value on error
    staging: media: tegra-video: fix device_node use after free
    arm64: dts: mediatek: mt8195-demo: fix the memory size of node secmon
    ARM: 9256/1: NWFPE: avoid compiler-generated __aeabi_uldivmod
    media: dvb-core: Fix double free in dvb_register_device()
    media: dvb-core: Fix UAF due to refcount races at releasing
    cifs: fix confusing debug message
    cifs: fix missing display of three mount options
    cifs: set correct tcon status after initial tree connect
    cifs: set correct ipc status after initial tree connect
    cifs: set correct status of tcon ipc when reconnecting
    ravb: Fix "failed to switch device to config mode" message during unbind
    rtc: ds1347: fix value written to century register
    drm/amdgpu: fix mmhub register base coding error
    block: mq-deadline: Fix dd_finish_request() for zoned devices
    block: mq-deadline: Do not break sequential write streams to zoned HDDs
    md/bitmap: Fix bitmap chunk size overflow issues
    efi: Add iMac Pro 2017 to uefi skip cert quirk
    wifi: wilc1000: sdio: fix module autoloading
    ASoC: jz4740-i2s: Handle independent FIFO flush bits
    ipu3-imgu: Fix NULL pointer dereference in imgu_subdev_set_selection()
    ipmi: fix long wait in unload when IPMI disconnect
    mtd: spi-nor: Check for zero erase size in spi_nor_find_best_erase_type()
    ima: Fix a potential NULL pointer access in ima_restore_measurement_list
    ipmi: fix use after free in _ipmi_destroy_user()
    mtd: spi-nor: gigadevice: gd25q256: replace gd25q256_default_init with gd25q256_post_bfpt
    ima: Fix memory leak in __ima_inode_hash()
    um: virt-pci: Avoid GCC non-NULL warning
    crypto: ccree,hisilicon - Fix dependencies to correct algorithm
    PCI: Fix pci_device_is_present() for VFs by checking PF
    PCI/sysfs: Fix double free in error path
    RISC-V: kexec: Fix memory leak of fdt buffer
    riscv: Fixup compile error with !MMU
    RISC-V: kexec: Fix memory leak of elf header buffer
    riscv: stacktrace: Fixup ftrace_graph_ret_addr retp argument
    riscv: mm: notify remote harts about mmu cache updates
    crypto: n2 - add missing hash statesize
    crypto: ccp - Add support for TEE for PCI ID 0x14CA
    driver core: Fix bus_type.match() error handling in __driver_attach()
    bus: mhi: host: Fix race between channel preparation and M0 event
    phy: qcom-qmp-combo: fix sdm845 reset
    phy: qcom-qmp-combo: fix sc8180x reset
    iommu/amd: Fix ivrs_acpihid cmdline parsing code
    iommu/amd: Fix ill-formed ivrs_ioapic, ivrs_hpet and ivrs_acpihid options
    test_kprobes: Fix implicit declaration error of test_kprobes
    hugetlb: really allocate vma lock for all sharable vmas
    remoteproc: imx_dsp_rproc: Add mutex protection for workqueue
    remoteproc: core: Do pm_relax when in RPROC_OFFLINE state
    remoteproc: imx_rproc: Correct i.MX93 DRAM mapping
    parisc: led: Fix potential null-ptr-deref in start_task()
    parisc: Drop locking in pdc console code
    parisc: Fix locking in pdc_iodc_print() firmware call
    parisc: Add missing FORCE prerequisites in Makefile
    parisc: Drop duplicate kgdb_pdc console
    parisc: Drop PMD_SHIFT from calculation in pgtable.h
    device_cgroup: Roll back to original exceptions after copy failure
    drm/connector: send hotplug uevent on connector cleanup
    drm/vmwgfx: Validate the box size for the snooped cursor
    drm/mgag200: Fix PLL setup for G200_SE_A rev >=4
    drm/etnaviv: move idle mapping reaping into separate function
    drm/i915/dsi: fix VBT send packet port selection for dual link DSI
    drm/ingenic: Fix missing platform_driver_unregister() call in ingenic_drm_init()
    drm/etnaviv: reap idle mapping if it doesn't match the softpin address
    ext4: silence the warning when evicting inode with dioread_nolock
    ext4: add inode table check in __ext4_get_inode_loc to aovid possible infinite loop
    ext4: remove trailing newline from ext4_msg() message
    ext4: correct inconsistent error msg in nojournal mode
    fs: ext4: initialize fsdata in pagecache_write()
    ext4: fix use-after-free in ext4_orphan_cleanup
    ext4: fix undefined behavior in bit shift for ext4_check_flag_values
    ext4: add EXT4_IGET_BAD flag to prevent unexpected bad inode
    ext4: add helper to check quota inums
    ext4: fix bug_on in __es_tree_search caused by bad quota inode
    ext4: fix reserved cluster accounting in __es_remove_extent()
    ext4: journal_path mount options should follow links
    ext4: check and assert if marking an no_delete evicting inode dirty
    ext4: fix bug_on in __es_tree_search caused by bad boot loader inode
    ext4: don't allow journal inode to have encrypt flag
    ext4: disable fast-commit of encrypted dir operations
    ext4: fix leaking uninitialized memory in fast-commit journal
    ext4: don't set up encryption key during jbd2 transaction
    ext4: add missing validation of fast-commit record lengths
    ext4: fix unaligned memory access in ext4_fc_reserve_space()
    ext4: fix off-by-one errors in fast-commit block filling
    ext4: fix uninititialized value in 'ext4_evict_inode'
    ext4: init quota for 'old.inode' in 'ext4_rename'
    ext4: don't fail GETFSUUID when the caller provides a long buffer
    ext4: fix delayed allocation bug in ext4_clu_mapped for bigalloc + inline
    ext4: fix corruption when online resizing a 1K bigalloc fs
    ext4: fix error code return to user-space in ext4_get_branch()
    ext4: fix bad checksum after online resize
    ext4: dont return EINVAL from GETFSUUID when reporting UUID length
    ext4: fix corrupt backup group descriptors after online resize
    ext4: avoid BUG_ON when creating xattrs
    ext4: fix deadlock due to mbcache entry corruption
    ext4: fix kernel BUG in 'ext4_write_inline_data_end()'
    ext4: fix inode leak in ext4_xattr_inode_create() on an error path
    ext4: initialize quota before expanding inode in setproject ioctl
    ext4: avoid unaccounted block allocation when expanding inode
    ext4: allocate extended attribute value in vmalloc area
    drm/i915/ttm: consider CCS for backup objects
    drm/amd/display: Add DCN314 display SG Support
    drm/amdgpu: handle polaris10/11 overlap asics (v2)
    drm/amdgpu: make display pinning more flexible (v2)
    drm/i915: improve the catch-all evict to handle lock contention
    drm/i915/migrate: Account for the reserved_space
    drm/amd/pm: add missing SMU13.0.0 mm_dpm feature mapping
    drm/amd/pm: add missing SMU13.0.7 mm_dpm feature mapping
    drm/amd/pm: bump SMU13.0.0 driver_if header to version 0x34
    drm/amd/pm: correct the fan speed retrieving in PWM for some SMU13 asics
    Linux 6.1.4

Change-Id: I79c26bc73b1275fab4e9984d2a32a8b915bbfa1c
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
commit 66746d4cf7
@@ -2311,7 +2311,13 @@
 			Provide an override to the IOAPIC-ID<->DEVICE-ID
 			mapping provided in the IVRS ACPI table.
 			By default, PCI segment is 0, and can be omitted.
-			For example:
+
+			For example, to map IOAPIC-ID decimal 10 to
+			PCI segment 0x1 and PCI device 00:14.0,
+			write the parameter as:
+				ivrs_ioapic=10@0001:00:14.0
+
+			Deprecated formats:
 			* To map IOAPIC-ID decimal 10 to PCI device 00:14.0
 			  write the parameter as:
 				ivrs_ioapic[10]=00:14.0
@@ -2323,7 +2329,13 @@
 			Provide an override to the HPET-ID<->DEVICE-ID
 			mapping provided in the IVRS ACPI table.
 			By default, PCI segment is 0, and can be omitted.
-			For example:
+
+			For example, to map HPET-ID decimal 10 to
+			PCI segment 0x1 and PCI device 00:14.0,
+			write the parameter as:
+				ivrs_hpet=10@0001:00:14.0
+
+			Deprecated formats:
 			* To map HPET-ID decimal 0 to PCI device 00:14.0
 			  write the parameter as:
 				ivrs_hpet[0]=00:14.0
@@ -2334,15 +2346,20 @@
 	ivrs_acpihid	[HW,X86-64]
 			Provide an override to the ACPI-HID:UID<->DEVICE-ID
 			mapping provided in the IVRS ACPI table.
+			By default, PCI segment is 0, and can be omitted.
 
 			For example, to map UART-HID:UID AMD0020:0 to
 			PCI segment 0x1 and PCI device ID 00:14.5,
 			write the parameter as:
-				ivrs_acpihid[0001:00:14.5]=AMD0020:0
+				ivrs_acpihid=AMD0020:0@0001:00:14.5
 
-			By default, PCI segment is 0, and can be omitted.
-			For example, PCI device 00:14.5 write the parameter as:
+			Deprecated formats:
+			* To map UART-HID:UID AMD0020:0 to PCI segment is 0,
+			  PCI device ID 00:14.5, write the parameter as:
 				ivrs_acpihid[00:14.5]=AMD0020:0
+			* To map UART-HID:UID AMD0020:0 to PCI segment 0x1 and
+			  PCI device ID 00:14.5, write the parameter as:
+				ivrs_acpihid[0001:00:14.5]=AMD0020:0
 
 	js=		[HW,JOY] Analog joystick
 			See Documentation/input/joydev/joystick.rst.
@@ -814,6 +814,7 @@ process the parameters it is given.
 	int fs_lookup_param(struct fs_context *fc,
 			    struct fs_parameter *value,
 			    bool want_bdev,
+			    unsigned int flags,
 			    struct path *_path);
 
 This takes a parameter that carries a string or filename type and attempts
Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 6
 PATCHLEVEL = 1
-SUBLEVEL = 3
+SUBLEVEL = 4
 EXTRAVERSION =
 NAME = Hurr durr I'ma ninja sloth
 
@@ -56,10 +56,10 @@ reserved-memory {
 		#size-cells = <2>;
 		ranges;
 
-		/* 192 KiB reserved for ARM Trusted Firmware (BL31) */
+		/* 2 MiB reserved for ARM Trusted Firmware (BL31) */
 		bl31_secmon_reserved: secmon@54600000 {
 			no-map;
-			reg = <0 0x54600000 0x0 0x30000>;
+			reg = <0 0x54600000 0x0 0x200000>;
 		};
 
 		/* 12 MiB reserved for OP-TEE (BL32)
@@ -855,12 +855,13 @@ ufs_mem_hc: ufs@1d84000 {
 			required-opps = <&rpmhpd_opp_nom>;
 
 			iommus = <&apps_smmu 0xe0 0x0>;
+			dma-coherent;
 
 			clocks = <&gcc GCC_UFS_PHY_AXI_CLK>,
 				 <&gcc GCC_AGGRE_UFS_PHY_AXI_CLK>,
 				 <&gcc GCC_UFS_PHY_AHB_CLK>,
 				 <&gcc GCC_UFS_PHY_UNIPRO_CORE_CLK>,
 				 <&rpmhcc RPMH_CXO_CLK>,
 				 <&gcc GCC_UFS_REF_CLKREF_CLK>,
 				 <&gcc GCC_UFS_PHY_TX_SYMBOL_0_CLK>,
 				 <&gcc GCC_UFS_PHY_RX_SYMBOL_0_CLK>,
 				 <&gcc GCC_UFS_PHY_RX_SYMBOL_1_CLK>;
@@ -891,7 +892,7 @@ ufs_mem_phy: phy@1d87000 {
 			ranges;
 			clock-names = "ref",
 				      "ref_aux";
-			clocks = <&gcc GCC_UFS_REF_CLKREF_CLK>,
+			clocks = <&gcc GCC_UFS_CARD_CLKREF_CLK>,
 				 <&gcc GCC_UFS_PHY_PHY_AUX_CLK>;
 
 			resets = <&ufs_mem_hc 0>;
@@ -923,12 +924,13 @@ ufs_card_hc: ufs@1da4000 {
 			power-domains = <&gcc UFS_CARD_GDSC>;
 
 			iommus = <&apps_smmu 0x4a0 0x0>;
+			dma-coherent;
 
 			clocks = <&gcc GCC_UFS_CARD_AXI_CLK>,
 				 <&gcc GCC_AGGRE_UFS_CARD_AXI_CLK>,
 				 <&gcc GCC_UFS_CARD_AHB_CLK>,
 				 <&gcc GCC_UFS_CARD_UNIPRO_CORE_CLK>,
 				 <&rpmhcc RPMH_CXO_CLK>,
 				 <&gcc GCC_UFS_REF_CLKREF_CLK>,
 				 <&gcc GCC_UFS_CARD_TX_SYMBOL_0_CLK>,
 				 <&gcc GCC_UFS_CARD_RX_SYMBOL_0_CLK>,
 				 <&gcc GCC_UFS_CARD_RX_SYMBOL_1_CLK>;
@@ -959,7 +961,7 @@ ufs_card_phy: phy@1da7000 {
 			ranges;
 			clock-names = "ref",
 				      "ref_aux";
-			clocks = <&gcc GCC_UFS_REF_CLKREF_CLK>,
+			clocks = <&gcc GCC_UFS_1_CARD_CLKREF_CLK>,
 				 <&gcc GCC_UFS_CARD_PHY_AUX_CLK>;
 
 			resets = <&ufs_card_hc 0>;
@@ -1123,7 +1123,10 @@ &wifi {
 
 /* PINCTRL - additions to nodes defined in sdm845.dtsi */
 &qup_spi2_default {
-	drive-strength = <16>;
+	pinconf {
+		pins = "gpio27", "gpio28", "gpio29", "gpio30";
+		drive-strength = <16>;
+	};
 };
 
 &qup_uart3_default{
@@ -487,8 +487,10 @@ pinconf {
 };
 
 &qup_i2c12_default {
-	drive-strength = <2>;
-	bias-disable;
+	pinmux {
+		drive-strength = <2>;
+		bias-disable;
+	};
 };
 
 &qup_uart6_default {
@@ -415,8 +415,10 @@ pinconf {
 };
 
 &qup_i2c12_default {
-	drive-strength = <2>;
-	bias-disable;
+	pinmux {
+		drive-strength = <2>;
+		bias-disable;
+	};
 };
 
 &qup_uart6_default {
@@ -23,8 +23,8 @@
  *
  * The regs must be on a stack currently owned by the calling task.
  */
-static inline void unwind_init_from_regs(struct unwind_state *state,
-					 struct pt_regs *regs)
+static __always_inline void unwind_init_from_regs(struct unwind_state *state,
+						  struct pt_regs *regs)
 {
 	unwind_init_common(state, current);
 
@@ -58,8 +58,8 @@ static __always_inline void unwind_init_from_caller(struct unwind_state *state)
 * duration of the unwind, or the unwind will be bogus. It is never valid to
 * call this for the current task.
 */
-static inline void unwind_init_from_task(struct unwind_state *state,
-					 struct task_struct *task)
+static __always_inline void unwind_init_from_task(struct unwind_state *state,
+						  struct task_struct *task)
 {
 	unwind_init_common(state, task);
 
@@ -187,7 +187,7 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
 			: stackinfo_get_unknown();		\
 	})
 
-noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
+noinline noinstr void arch_stack_walk(stack_trace_consume_fn consume_entry,
 			      void *cookie, struct task_struct *task,
 			      struct pt_regs *regs)
 {
@@ -166,8 +166,8 @@ extern void __update_cache(pte_t pte);
 
 /* This calculates the number of initial pages we need for the initial
  * page tables */
-#if (KERNEL_INITIAL_ORDER) >= (PMD_SHIFT)
-# define PT_INITIAL	(1 << (KERNEL_INITIAL_ORDER - PMD_SHIFT))
+#if (KERNEL_INITIAL_ORDER) >= (PLD_SHIFT + BITS_PER_PTE)
+# define PT_INITIAL	(1 << (KERNEL_INITIAL_ORDER - PLD_SHIFT - BITS_PER_PTE))
 #else
 # define PT_INITIAL	(1)  /* all initial PTEs fit into one page */
 #endif
@@ -1288,9 +1288,8 @@ void pdc_io_reset_devices(void)
 
 #endif /* defined(BOOTLOADER) */
 
-/* locked by pdc_console_lock */
-static int __attribute__((aligned(8)))   iodc_retbuf[32];
-static char __attribute__((aligned(64))) iodc_dbuf[4096];
+/* locked by pdc_lock */
+static char iodc_dbuf[4096] __page_aligned_bss;
 
 /**
  * pdc_iodc_print - Console print using IODC.
@@ -1307,6 +1306,9 @@ int pdc_iodc_print(const unsigned char *str, unsigned count)
 	unsigned int i;
 	unsigned long flags;
 
+	count = min_t(unsigned int, count, sizeof(iodc_dbuf));
+
+	spin_lock_irqsave(&pdc_lock, flags);
 	for (i = 0; i < count;) {
 		switch(str[i]) {
 		case '\n':
@@ -1322,12 +1324,11 @@ int pdc_iodc_print(const unsigned char *str, unsigned count)
 	}
 
 print:
-	spin_lock_irqsave(&pdc_lock, flags);
-	real32_call(PAGE0->mem_cons.iodc_io,
-		    (unsigned long)PAGE0->mem_cons.hpa, ENTRY_IO_COUT,
-		    PAGE0->mem_cons.spa, __pa(PAGE0->mem_cons.dp.layers),
-		    __pa(iodc_retbuf), 0, __pa(iodc_dbuf), i, 0);
-	spin_unlock_irqrestore(&pdc_lock, flags);
+	real32_call(PAGE0->mem_cons.iodc_io,
+		    (unsigned long)PAGE0->mem_cons.hpa, ENTRY_IO_COUT,
+		    PAGE0->mem_cons.spa, __pa(PAGE0->mem_cons.dp.layers),
+		    __pa(pdc_result), 0, __pa(iodc_dbuf), i, 0);
+	spin_unlock_irqrestore(&pdc_lock, flags);
 
 	return i;
 }
@@ -1354,10 +1355,11 @@ int pdc_iodc_getc(void)
 	real32_call(PAGE0->mem_kbd.iodc_io,
 		    (unsigned long)PAGE0->mem_kbd.hpa, ENTRY_IO_CIN,
 		    PAGE0->mem_kbd.spa, __pa(PAGE0->mem_kbd.dp.layers),
-		    __pa(iodc_retbuf), 0, __pa(iodc_dbuf), 1, 0);
+		    __pa(pdc_result), 0, __pa(iodc_dbuf), 1, 0);
 
 	ch = *iodc_dbuf;
-	status = *iodc_retbuf;
+	/* like convert_to_wide() but for first return value only: */
+	status = *(int *)&pdc_result;
 	spin_unlock_irqrestore(&pdc_lock, flags);
 
 	if (status == 0)
@@ -208,23 +208,3 @@ int kgdb_arch_handle_exception(int trap, int signo,
 	}
 	return -1;
 }
-
-/* KGDB console driver which uses PDC to read chars from keyboard */
-
-static void kgdb_pdc_write_char(u8 chr)
-{
-	/* no need to print char. kgdb will do it. */
-}
-
-static struct kgdb_io kgdb_pdc_io_ops = {
-	.name = "kgdb_pdc",
-	.read_char = pdc_iodc_getc,
-	.write_char = kgdb_pdc_write_char,
-};
-
-static int __init kgdb_pdc_init(void)
-{
-	kgdb_register_io_module(&kgdb_pdc_io_ops);
-	return 0;
-}
-early_initcall(kgdb_pdc_init);
@@ -12,37 +12,27 @@
 #include <asm/page.h>		/* for PAGE0 */
 #include <asm/pdc.h>		/* for iodc_call() proto and friends */
 
-static DEFINE_SPINLOCK(pdc_console_lock);
-
 static void pdc_console_write(struct console *co, const char *s, unsigned count)
 {
 	int i = 0;
-	unsigned long flags;
 
-	spin_lock_irqsave(&pdc_console_lock, flags);
 	do {
 		i += pdc_iodc_print(s + i, count - i);
 	} while (i < count);
-	spin_unlock_irqrestore(&pdc_console_lock, flags);
 }
 
 #ifdef CONFIG_KGDB
 static int kgdb_pdc_read_char(void)
 {
-	int c;
-	unsigned long flags;
-
-	spin_lock_irqsave(&pdc_console_lock, flags);
-	c = pdc_iodc_getc();
-	spin_unlock_irqrestore(&pdc_console_lock, flags);
+	int c = pdc_iodc_getc();
 
 	return (c <= 0) ? NO_POLL_CHAR : c;
 }
 
 static void kgdb_pdc_write_char(u8 chr)
 {
-	if (PAGE0->mem_cons.cl_class != CL_DUPLEX)
-		pdc_console_write(NULL, &chr, 1);
+	/* no need to print char as it's shown on standard console */
+	/* pdc_iodc_print(&chr, 1); */
 }
 
 static struct kgdb_io kgdb_pdc_io_ops = {
@@ -26,7 +26,7 @@ $(obj)/vdso32_wrapper.o : $(obj)/vdso32.so FORCE
 
 # Force dependency (incbin is bad)
 # link rule for the .so file, .lds has to be first
-$(obj)/vdso32.so: $(src)/vdso32.lds $(obj-vdso32) $(obj-cvdso32) $(VDSO_LIBGCC)
+$(obj)/vdso32.so: $(src)/vdso32.lds $(obj-vdso32) $(obj-cvdso32) $(VDSO_LIBGCC) FORCE
 	$(call if_changed,vdso32ld)
 
 # assembly rules for the .S files
@@ -38,7 +38,7 @@ $(obj-cvdso32): %.o: %.c FORCE
 
 # actual build commands
 quiet_cmd_vdso32ld = VDSO32L $@
-      cmd_vdso32ld = $(CROSS32CC) $(c_flags) -Wl,-T $^ -o $@
+      cmd_vdso32ld = $(CROSS32CC) $(c_flags) -Wl,-T $(filter-out FORCE, $^) -o $@
 quiet_cmd_vdso32as = VDSO32A $@
       cmd_vdso32as = $(CROSS32CC) $(a_flags) -c -o $@ $<
 quiet_cmd_vdso32cc = VDSO32C $@
@@ -26,7 +26,7 @@ $(obj)/vdso64_wrapper.o : $(obj)/vdso64.so FORCE
 
 # Force dependency (incbin is bad)
 # link rule for the .so file, .lds has to be first
-$(obj)/vdso64.so: $(src)/vdso64.lds $(obj-vdso64) $(VDSO_LIBGCC)
+$(obj)/vdso64.so: $(src)/vdso64.lds $(obj-vdso64) $(VDSO_LIBGCC) FORCE
 	$(call if_changed,vdso64ld)
 
 # assembly rules for the .S files
@@ -35,7 +35,7 @@ $(obj-vdso64): %.o: %.S FORCE
 
 # actual build commands
 quiet_cmd_vdso64ld = VDSO64L $@
-      cmd_vdso64ld = $(CC) $(c_flags) -Wl,-T $^ -o $@
+      cmd_vdso64ld = $(CC) $(c_flags) -Wl,-T $(filter-out FORCE, $^) -o $@
 quiet_cmd_vdso64as = VDSO64A $@
       cmd_vdso64as = $(CC) $(a_flags) -c -o $@ $<
 
@@ -64,17 +64,6 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
  * those.
  */
 #define ARCH_HAS_SYSCALL_MATCH_SYM_NAME
-#ifdef CONFIG_PPC64_ELF_ABI_V1
-static inline bool arch_syscall_match_sym_name(const char *sym, const char *name)
-{
-	/* We need to skip past the initial dot, and the __se_sys alias */
-	return !strcmp(sym + 1, name) ||
-		(!strncmp(sym, ".__se_sys", 9) && !strcmp(sym + 6, name)) ||
-		(!strncmp(sym, ".ppc_", 5) && !strcmp(sym + 5, name + 4)) ||
-		(!strncmp(sym, ".ppc32_", 7) && !strcmp(sym + 7, name + 4)) ||
-		(!strncmp(sym, ".ppc64_", 7) && !strcmp(sym + 7, name + 4));
-}
-#else
 static inline bool arch_syscall_match_sym_name(const char *sym, const char *name)
 {
 	return !strcmp(sym, name) ||
@@ -83,7 +72,6 @@ static inline bool arch_syscall_match_sym_name(const char *sym, const char *name
 		(!strncmp(sym, "ppc32_", 6) && !strcmp(sym + 6, name + 4)) ||
 		(!strncmp(sym, "ppc64_", 6) && !strcmp(sym + 6, name + 4));
 }
-#endif /* CONFIG_PPC64_ELF_ABI_V1 */
 #endif /* CONFIG_FTRACE_SYSCALLS */
 
 #if defined(CONFIG_PPC64) && defined(CONFIG_FUNCTION_TRACER)
@@ -502,7 +502,7 @@ config KEXEC_FILE
 	select KEXEC_CORE
 	select KEXEC_ELF
 	select HAVE_IMA_KEXEC if IMA
-	depends on 64BIT
+	depends on 64BIT && MMU
 	help
 	  This is new version of kexec system call. This system call is
 	  file based and takes file descriptors as system call argument
@@ -39,6 +39,7 @@ crash_setup_regs(struct pt_regs *newregs,
 #define ARCH_HAS_KIMAGE_ARCH
 
 struct kimage_arch {
+	void *fdt; /* For CONFIG_KEXEC_FILE */
 	unsigned long fdt_addr;
 };
 
@@ -62,6 +63,10 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
 				     const Elf_Shdr *relsec,
 				     const Elf_Shdr *symtab);
 #define arch_kexec_apply_relocations_add arch_kexec_apply_relocations_add
+
+struct kimage;
+int arch_kimage_file_post_load_cleanup(struct kimage *image);
+#define arch_kimage_file_post_load_cleanup arch_kimage_file_post_load_cleanup
 #endif
 
 #endif
@@ -19,6 +19,8 @@ typedef struct {
 #ifdef CONFIG_SMP
 	/* A local icache flush is needed before user execution can resume. */
 	cpumask_t icache_stale_mask;
+	/* A local tlb flush is needed before user execution can resume. */
+	cpumask_t tlb_stale_mask;
 #endif
 } mm_context_t;
 
@@ -415,7 +415,7 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 	 * Relying on flush_tlb_fix_spurious_fault would suffice, but
 	 * the extra traps reduce performance.  So, eagerly SFENCE.VMA.
 	 */
-	local_flush_tlb_page(address);
+	flush_tlb_page(vma, address);
 }
 
 static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
@@ -22,6 +22,24 @@ static inline void local_flush_tlb_page(unsigned long addr)
 {
 	ALT_FLUSH_TLB_PAGE(__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory"));
 }
+
+static inline void local_flush_tlb_all_asid(unsigned long asid)
+{
+	__asm__ __volatile__ ("sfence.vma x0, %0"
+			:
+			: "r" (asid)
+			: "memory");
+}
+
+static inline void local_flush_tlb_page_asid(unsigned long addr,
+		unsigned long asid)
+{
+	__asm__ __volatile__ ("sfence.vma %0, %1"
+			:
+			: "r" (addr), "r" (asid)
+			: "memory");
+}
+
 #else /* CONFIG_MMU */
 #define local_flush_tlb_all()			do { } while (0)
 #define local_flush_tlb_page(addr)		do { } while (0)
@@ -21,6 +21,18 @@
 #include <linux/memblock.h>
 #include <asm/setup.h>
 
+int arch_kimage_file_post_load_cleanup(struct kimage *image)
+{
+	kvfree(image->arch.fdt);
+	image->arch.fdt = NULL;
+
+	vfree(image->elf_headers);
+	image->elf_headers = NULL;
+	image->elf_headers_sz = 0;
+
+	return kexec_image_post_load_cleanup_default(image);
+}
+
 static int riscv_kexec_elf_load(struct kimage *image, struct elfhdr *ehdr,
 				struct kexec_elf_info *elf_info, unsigned long old_pbase,
 				unsigned long new_pbase)
@@ -298,6 +310,8 @@ static void *elf_kexec_load(struct kimage *image, char *kernel_buf,
 		pr_err("Error add DTB kbuf ret=%d\n", ret);
 		goto out_free_fdt;
 	}
+	/* Cache the fdt buffer address for memory cleanup */
+	image->arch.fdt = fdt;
 	pr_notice("Loaded device tree at 0x%lx\n", kbuf.mem);
 	goto out;
 
@@ -58,7 +58,7 @@ void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs,
 		} else {
 			fp = frame->fp;
 			pc = ftrace_graph_ret_addr(current, NULL, frame->ra,
-						   (unsigned long *)(fp - 8));
+						   &frame->ra);
 		}
 
 	}
@@ -196,6 +196,16 @@ static void set_mm_asid(struct mm_struct *mm, unsigned int cpu)
 
 	if (need_flush_tlb)
 		local_flush_tlb_all();
+#ifdef CONFIG_SMP
+	else {
+		cpumask_t *mask = &mm->context.tlb_stale_mask;
+
+		if (cpumask_test_cpu(cpu, mask)) {
+			cpumask_clear_cpu(cpu, mask);
+			local_flush_tlb_all_asid(cntx & asid_mask);
+		}
+	}
+#endif
 }
 
 static void set_mm_noasid(struct mm_struct *mm)
@@ -5,23 +5,7 @@
 #include <linux/sched.h>
 #include <asm/sbi.h>
 #include <asm/mmu_context.h>
-
-static inline void local_flush_tlb_all_asid(unsigned long asid)
-{
-	__asm__ __volatile__ ("sfence.vma x0, %0"
-			:
-			: "r" (asid)
-			: "memory");
-}
-
-static inline void local_flush_tlb_page_asid(unsigned long addr,
-		unsigned long asid)
-{
-	__asm__ __volatile__ ("sfence.vma %0, %1"
-			:
-			: "r" (addr), "r" (asid)
-			: "memory");
-}
+#include <asm/tlbflush.h>
 
 void flush_tlb_all(void)
 {
@@ -31,6 +15,7 @@ void flush_tlb_all(void)
 static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
 				  unsigned long size, unsigned long stride)
 {
+	struct cpumask *pmask = &mm->context.tlb_stale_mask;
 	struct cpumask *cmask = mm_cpumask(mm);
 	unsigned int cpuid;
 	bool broadcast;
@@ -44,6 +29,15 @@ static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
 	if (static_branch_unlikely(&use_asid_allocator)) {
 		unsigned long asid = atomic_long_read(&mm->context.id);
 
+		/*
+		 * TLB will be immediately flushed on harts concurrently
+		 * executing this MM context. TLB flush on other harts
+		 * is deferred until this MM context migrates there.
+		 */
+		cpumask_setall(pmask);
+		cpumask_clear_cpu(cpuid, pmask);
+		cpumask_andnot(pmask, pmask, cmask);
+
 		if (broadcast) {
 			sbi_remote_sfence_vma_asid(cmask, start, size, asid);
 		} else if (size <= stride) {
@@ -97,7 +97,8 @@ static int um_pci_send_cmd(struct um_pci_device *dev,
 	}
 
 	buf = get_cpu_var(um_pci_msg_bufs);
-	memcpy(buf, cmd, cmd_size);
+	if (buf)
+		memcpy(buf, cmd, cmd_size);
 
 	if (posted) {
 		u8 *ncmd = kmalloc(cmd_size + extra_size, GFP_ATOMIC);
@@ -182,6 +183,7 @@ static unsigned long um_pci_cfgspace_read(void *priv, unsigned int offset,
 	struct um_pci_message_buffer *buf;
 	u8 *data;
 	unsigned long ret = ULONG_MAX;
+	size_t bytes = sizeof(buf->data);
 
 	if (!dev)
 		return ULONG_MAX;
@@ -189,7 +191,8 @@ static unsigned long um_pci_cfgspace_read(void *priv, unsigned int offset,
 	buf = get_cpu_var(um_pci_msg_bufs);
 	data = buf->data;
 
-	memset(buf->data, 0xff, sizeof(buf->data));
+	if (buf)
+		memset(data, 0xff, bytes);
 
 	switch (size) {
 	case 1:
@@ -204,7 +207,7 @@ static unsigned long um_pci_cfgspace_read(void *priv, unsigned int offset,
 		goto out;
 	}
 
-	if (um_pci_send_cmd(dev, &hdr, sizeof(hdr), NULL, 0, data, 8))
+	if (um_pci_send_cmd(dev, &hdr, sizeof(hdr), NULL, 0, data, bytes))
 		goto out;
 
 	switch (size) {
@@ -2,6 +2,7 @@
 #include <linux/slab.h>
 #include <linux/pci.h>
 #include <asm/apicdef.h>
+#include <asm/intel-family.h>
 #include <linux/io-64-nonatomic-lo-hi.h>
 
 #include <linux/perf_event.h>
@@ -3804,6 +3804,21 @@ static const struct attribute_group *skx_iio_attr_update[] = {
 	NULL,
 };
 
+static void pmu_clear_mapping_attr(const struct attribute_group **groups,
+				   struct attribute_group *ag)
+{
+	int i;
+
+	for (i = 0; groups[i]; i++) {
+		if (groups[i] == ag) {
+			for (i++; groups[i]; i++)
+				groups[i - 1] = groups[i];
+			groups[i - 1] = NULL;
+			break;
+		}
+	}
+}
+
 static int
 pmu_iio_set_mapping(struct intel_uncore_type *type, struct attribute_group *ag)
 {
@@ -3852,7 +3867,7 @@ pmu_iio_set_mapping(struct intel_uncore_type *type, struct attribute_group *ag)
 clear_topology:
 	kfree(type->topology);
 clear_attr_update:
-	type->attr_update = NULL;
+	pmu_clear_mapping_attr(type->attr_update, ag);
 	return ret;
 }
 
@@ -5144,6 +5159,11 @@ static int icx_iio_get_topology(struct intel_uncore_type *type)
 
 static int icx_iio_set_mapping(struct intel_uncore_type *type)
 {
+	/* Detect ICX-D system. This case is not supported */
+	if (boot_cpu_data.x86_model == INTEL_FAM6_ICELAKE_D) {
+		pmu_clear_mapping_attr(type->attr_update, &icx_iio_mapping_group);
+		return -EPERM;
+	}
 	return pmu_iio_set_mapping(type, &icx_iio_mapping_group);
 }
 
@@ -788,6 +788,24 @@ _log_error_bank(unsigned int bank, u32 msr_stat, u32 msr_addr, u64 misc)
 	return status & MCI_STATUS_DEFERRED;
 }
 
+static bool _log_error_deferred(unsigned int bank, u32 misc)
+{
+	if (!_log_error_bank(bank, mca_msr_reg(bank, MCA_STATUS),
+			     mca_msr_reg(bank, MCA_ADDR), misc))
+		return false;
+
+	/*
+	 * Non-SMCA systems don't have MCA_DESTAT/MCA_DEADDR registers.
+	 * Return true here to avoid accessing these registers.
+	 */
+	if (!mce_flags.smca)
+		return true;
+
+	/* Clear MCA_DESTAT if the deferred error was logged from MCA_STATUS. */
+	wrmsrl(MSR_AMD64_SMCA_MCx_DESTAT(bank), 0);
+	return true;
+}
+
 /*
  * We have three scenarios for checking for Deferred errors:
  *
@@ -799,20 +817,9 @@ _log_error_bank(unsigned int bank, u32 msr_stat, u32 msr_addr, u64 misc)
  */
 static void log_error_deferred(unsigned int bank)
 {
-	bool defrd;
-
-	defrd = _log_error_bank(bank, mca_msr_reg(bank, MCA_STATUS),
-				mca_msr_reg(bank, MCA_ADDR), 0);
-
-	if (!mce_flags.smca)
+	if (_log_error_deferred(bank, 0))
 		return;
 
-	/* Clear MCA_DESTAT if we logged the deferred error from MCA_STATUS. */
-	if (defrd) {
-		wrmsrl(MSR_AMD64_SMCA_MCx_DESTAT(bank), 0);
-		return;
-	}
-
 	/*
 	 * Only deferred errors are logged in MCA_DE{STAT,ADDR} so just check
 	 * for a valid error.
@@ -832,7 +839,7 @@ static void amd_deferred_error_interrupt(void)
 
 static void log_error_thresholding(unsigned int bank, u64 misc)
 {
-	_log_error_bank(bank, mca_msr_reg(bank, MCA_STATUS), mca_msr_reg(bank, MCA_ADDR), misc);
+	_log_error_deferred(bank, misc);
 }
 
 static void log_and_reset_block(struct threshold_block *block)
@@ -621,7 +621,6 @@ void load_ucode_intel_ap(void)
 	else
 		iup = &intel_ucode_patch;
 
-reget:
 	if (!*iup) {
 		patch = __load_ucode_intel(&uci);
 		if (!patch)
@@ -632,12 +631,7 @@ void load_ucode_intel_ap(void)
 
 	uci.mc = *iup;
 
-	if (apply_microcode_early(&uci, true)) {
-		/* Mixed-silicon system? Try to refetch the proper patch: */
-		*iup = NULL;
-
-		goto reget;
-	}
+	apply_microcode_early(&uci, true);
 }
 
 static struct microcode_intel *find_patch(struct ucode_cpu_info *uci)
@@ -440,8 +440,8 @@ static void __init __xstate_dump_leaves(void)
 	}
 }
 
-#define XSTATE_WARN_ON(x) do {							\
-	if (WARN_ONCE(x, "XSAVE consistency problem, dumping leaves")) {	\
+#define XSTATE_WARN_ON(x, fmt, ...) do {					\
+	if (WARN_ONCE(x, "XSAVE consistency problem: " fmt, ##__VA_ARGS__)) {	\
 		__xstate_dump_leaves();						\
 	}									\
 } while (0)
@@ -554,8 +554,7 @@ static bool __init check_xstate_against_struct(int nr)
 	    (nr >= XFEATURE_MAX)			||
 	    (nr == XFEATURE_PT_UNIMPLEMENTED_SO_FAR)	||
 	    ((nr >= XFEATURE_RSRVD_COMP_11) && (nr <= XFEATURE_RSRVD_COMP_16))) {
-		WARN_ONCE(1, "no structure for xstate: %d\n", nr);
-		XSTATE_WARN_ON(1);
+		XSTATE_WARN_ON(1, "No structure for xstate: %d\n", nr);
 		return false;
 	}
 	return true;
@@ -598,12 +597,13 @@ static bool __init paranoid_xstate_size_valid(unsigned int kernel_size)
 		 * XSAVES.
 		 */
 		if (!xsaves && xfeature_is_supervisor(i)) {
-			XSTATE_WARN_ON(1);
+			XSTATE_WARN_ON(1, "Got supervisor feature %d, but XSAVES not advertised\n", i);
 			return false;
 		}
 	}
 	size = xstate_calculate_size(fpu_kernel_cfg.max_features, compacted);
-	XSTATE_WARN_ON(size != kernel_size);
+	XSTATE_WARN_ON(size != kernel_size,
+		       "size %u != kernel_size %u\n", size, kernel_size);
 	return size == kernel_size;
 }
 
@@ -217,7 +217,9 @@ void ftrace_replace_code(int enable)
 
 		ret = ftrace_verify_code(rec->ip, old);
 		if (ret) {
+			ftrace_expected = old;
 			ftrace_bug(ret, rec);
+			ftrace_expected = NULL;
 			return;
 		}
 	}
@@ -37,6 +37,7 @@
 #include <linux/extable.h>
 #include <linux/kdebug.h>
 #include <linux/kallsyms.h>
+#include <linux/kgdb.h>
 #include <linux/ftrace.h>
 #include <linux/kasan.h>
 #include <linux/moduleloader.h>
@@ -281,12 +282,15 @@ static int can_probe(unsigned long paddr)
 		if (ret < 0)
 			return 0;
 
+#ifdef CONFIG_KGDB
 		/*
-		 * Another debugging subsystem might insert this breakpoint.
-		 * In that case, we can't recover it.
+		 * If there is a dynamically installed kgdb sw breakpoint,
+		 * this function should not be probed.
 		 */
-		if (insn.opcode.bytes[0] == INT3_INSN_OPCODE)
+		if (insn.opcode.bytes[0] == INT3_INSN_OPCODE &&
+		    kgdb_has_hit_break(addr))
 			return 0;
+#endif
 		addr += insn.length;
 	}
 
@@ -15,6 +15,7 @@
 #include <linux/extable.h>
 #include <linux/kdebug.h>
 #include <linux/kallsyms.h>
+#include <linux/kgdb.h>
 #include <linux/ftrace.h>
 #include <linux/objtool.h>
 #include <linux/pgtable.h>
@@ -279,19 +280,6 @@ static int insn_is_indirect_jump(struct insn *insn)
 	return ret;
 }
 
-static bool is_padding_int3(unsigned long addr, unsigned long eaddr)
-{
-	unsigned char ops;
-
-	for (; addr < eaddr; addr++) {
-		if (get_kernel_nofault(ops, (void *)addr) < 0 ||
-		    ops != INT3_INSN_OPCODE)
-			return false;
-	}
-
-	return true;
-}
-
 /* Decode whole function to ensure any instructions don't jump into target */
 static int can_optimize(unsigned long paddr)
 {
@@ -334,15 +322,15 @@ static int can_optimize(unsigned long paddr)
 		ret = insn_decode_kernel(&insn, (void *)recovered_insn);
 		if (ret < 0)
 			return 0;
-
+#ifdef CONFIG_KGDB
 		/*
-		 * In the case of detecting unknown breakpoint, this could be
-		 * a padding INT3 between functions. Let's check that all the
-		 * rest of the bytes are also INT3.
+		 * If there is a dynamically installed kgdb sw breakpoint,
+		 * this function should not be probed.
 		 */
-		if (insn.opcode.bytes[0] == INT3_INSN_OPCODE)
-			return is_padding_int3(addr, paddr - offset + size) ? 1 : 0;
-
+		if (insn.opcode.bytes[0] == INT3_INSN_OPCODE &&
+		    kgdb_has_hit_break(addr))
+			return 0;
+#endif
 		/* Recover address */
 		insn.kaddr = (void *)addr;
 		insn.next_byte = (void *)(addr + insn.length);
@@ -2722,8 +2722,6 @@ static int kvm_apic_state_fixup(struct kvm_vcpu *vcpu,
 			icr = __kvm_lapic_get_reg64(s->regs, APIC_ICR);
 			__kvm_lapic_set_reg(s->regs, APIC_ICR2, icr >> 32);
 		}
-	} else {
-		kvm_lapic_xapic_id_updated(vcpu->arch.apic);
 	}
 
 	return 0;
@@ -2759,6 +2757,9 @@ int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s)
 	}
 	memcpy(vcpu->arch.apic->regs, s->regs, sizeof(*s));
 
+	if (!apic_x2apic_mode(apic))
+		kvm_lapic_xapic_id_updated(apic);
+
 	atomic_set_release(&apic->vcpu->kvm->arch.apic_map_dirty, DIRTY);
 	kvm_recalculate_apic_map(vcpu->kvm);
 	kvm_apic_set_version(vcpu);
@@ -5100,24 +5100,35 @@ static int handle_vmxon(struct kvm_vcpu *vcpu)
 		| FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX;
 
 	/*
-	 * Note, KVM cannot rely on hardware to perform the CR0/CR4 #UD checks
-	 * that have higher priority than VM-Exit (see Intel SDM's pseudocode
-	 * for VMXON), as KVM must load valid CR0/CR4 values into hardware while
-	 * running the guest, i.e. KVM needs to check the _guest_ values.
+	 * Manually check CR4.VMXE checks, KVM must force CR4.VMXE=1 to enter
+	 * the guest and so cannot rely on hardware to perform the check,
+	 * which has higher priority than VM-Exit (see Intel SDM's pseudocode
+	 * for VMXON).
 	 *
-	 * Rely on hardware for the other two pre-VM-Exit checks, !VM86 and
-	 * !COMPATIBILITY modes.  KVM may run the guest in VM86 to emulate Real
-	 * Mode, but KVM will never take the guest out of those modes.
+	 * Rely on hardware for the other pre-VM-Exit checks, CR0.PE=1, !VM86
+	 * and !COMPATIBILITY modes.  For an unrestricted guest, KVM doesn't
+	 * force any of the relevant guest state.  For a restricted guest, KVM
+	 * does force CR0.PE=1, but only to also force VM86 in order to emulate
+	 * Real Mode, and so there's no need to check CR0.PE manually.
 	 */
-	if (!nested_host_cr0_valid(vcpu, kvm_read_cr0(vcpu)) ||
-	    !nested_host_cr4_valid(vcpu, kvm_read_cr4(vcpu))) {
+	if (!kvm_read_cr4_bits(vcpu, X86_CR4_VMXE)) {
 		kvm_queue_exception(vcpu, UD_VECTOR);
 		return 1;
 	}
 
 	/*
-	 * CPL=0 and all other checks that are lower priority than VM-Exit must
-	 * be checked manually.
+	 * The CPL is checked for "not in VMX operation" and for "in VMX root",
+	 * and has higher priority than the VM-Fail due to being post-VMXON,
+	 * i.e. VMXON #GPs outside of VMX non-root if CPL!=0.  In VMX non-root,
+	 * VMXON causes VM-Exit and KVM unconditionally forwards VMXON VM-Exits
+	 * from L2 to L1, i.e. there's no need to check for the vCPU being in
+	 * VMX non-root.
+	 *
+	 * Forwarding the VM-Exit unconditionally, i.e. without performing the
+	 * #UD checks (see above), is functionally ok because KVM doesn't allow
+	 * L1 to run L2 without CR4.VMXE=0, and because KVM never modifies L2's
+	 * CR0 or CR4, i.e. it's L2's responsibility to emulate #UDs that are
+	 * missed by hardware due to shadowing CR0 and/or CR4.
 	 */
 	if (vmx_get_cpl(vcpu)) {
 		kvm_inject_gp(vcpu, 0);
@@ -5127,6 +5138,17 @@ static int handle_vmxon(struct kvm_vcpu *vcpu)
 	if (vmx->nested.vmxon)
 		return nested_vmx_fail(vcpu, VMXERR_VMXON_IN_VMX_ROOT_OPERATION);
 
+	/*
+	 * Invalid CR0/CR4 generates #GP.  These checks are performed if and
+	 * only if the vCPU isn't already in VMX operation, i.e. effectively
+	 * have lower priority than the VM-Fail above.
+	 */
+	if (!nested_host_cr0_valid(vcpu, kvm_read_cr0(vcpu)) ||
+	    !nested_host_cr4_valid(vcpu, kvm_read_cr4(vcpu))) {
+		kvm_inject_gp(vcpu, 0);
+		return 1;
+	}
+
 	if ((vmx->msr_ia32_feature_control & VMXON_NEEDED_FEATURES)
 	    != VMXON_NEEDED_FEATURES) {
 		kvm_inject_gp(vcpu, 0);
@@ -6808,7 +6830,8 @@ void nested_vmx_setup_ctls_msrs(struct vmcs_config *vmcs_conf, u32 ept_caps)
 		SECONDARY_EXEC_ENABLE_INVPCID |
 		SECONDARY_EXEC_RDSEED_EXITING |
 		SECONDARY_EXEC_XSAVES |
-		SECONDARY_EXEC_TSC_SCALING;
+		SECONDARY_EXEC_TSC_SCALING |
+		SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE;
 
 	/*
 	 * We can emulate "VMCS shadowing," even if the hardware
@@ -182,8 +182,10 @@ static int __handle_encls_ecreate(struct kvm_vcpu *vcpu,
 	/* Enforce CPUID restriction on max enclave size. */
 	max_size_log2 = (attributes & SGX_ATTR_MODE64BIT) ? sgx_12_0->edx >> 8 :
 							    sgx_12_0->edx;
-	if (size >= BIT_ULL(max_size_log2))
+	if (size >= BIT_ULL(max_size_log2)) {
 		kvm_inject_gp(vcpu, 0);
+		return 1;
+	}
 
 	/*
 	 * sgx_virt_ecreate() returns:
@@ -62,6 +62,7 @@ extern int __modsi3(int, int);
 extern int __mulsi3(int, int);
 extern unsigned int __udivsi3(unsigned int, unsigned int);
 extern unsigned int __umodsi3(unsigned int, unsigned int);
+extern unsigned long long __umulsidi3(unsigned int, unsigned int);
 
 EXPORT_SYMBOL(__ashldi3);
 EXPORT_SYMBOL(__ashrdi3);
@@ -71,6 +72,7 @@ EXPORT_SYMBOL(__modsi3);
 EXPORT_SYMBOL(__mulsi3);
 EXPORT_SYMBOL(__udivsi3);
 EXPORT_SYMBOL(__umodsi3);
+EXPORT_SYMBOL(__umulsidi3);
 
 unsigned int __sync_fetch_and_and_4(volatile void *p, unsigned int v)
 {
@@ -5,7 +5,7 @@
 
 lib-y	+= memcopy.o memset.o checksum.o \
 	   ashldi3.o ashrdi3.o lshrdi3.o \
-	   divsi3.o udivsi3.o modsi3.o umodsi3.o mulsi3.o \
+	   divsi3.o udivsi3.o modsi3.o umodsi3.o mulsi3.o umulsidi3.o \
 	   usercopy.o strncpy_user.o strnlen_user.o
 lib-$(CONFIG_PCI) += pci-auto.o
 lib-$(CONFIG_KCSAN) += kcsan-stubs.o
arch/xtensa/lib/umulsidi3.S (new file)
@@ -0,0 +1,230 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later WITH GCC-exception-2.0 */
+#include <linux/linkage.h>
+#include <asm/asmmacro.h>
+#include <asm/core.h>
+
+#if !XCHAL_HAVE_MUL16 && !XCHAL_HAVE_MUL32 && !XCHAL_HAVE_MAC16
+#define XCHAL_NO_MUL 1
+#endif
+
+ENTRY(__umulsidi3)
+
+#ifdef __XTENSA_CALL0_ABI__
+	abi_entry(32)
+	s32i	a12, sp, 16
+	s32i	a13, sp, 20
+	s32i	a14, sp, 24
+	s32i	a15, sp, 28
+#elif XCHAL_NO_MUL
+	/* This is not really a leaf function; allocate enough stack space
+	   to allow CALL12s to a helper function.  */
+	abi_entry(32)
+#else
+	abi_entry_default
+#endif
+
+#ifdef __XTENSA_EB__
+#define wh a2
+#define wl a3
+#else
+#define wh a3
+#define wl a2
+#endif /* __XTENSA_EB__ */
+
+	/* This code is taken from the mulsf3 routine in ieee754-sf.S.
+	   See more comments there.  */
+
+#if XCHAL_HAVE_MUL32_HIGH
+	mull	a6, a2, a3
+	muluh	wh, a2, a3
+	mov	wl, a6
+
+#else /* ! MUL32_HIGH */
+
+#if defined(__XTENSA_CALL0_ABI__) && XCHAL_NO_MUL
+	/* a0 and a8 will be clobbered by calling the multiply function
+	   but a8 is not used here and need not be saved.  */
+	s32i	a0, sp, 0
+#endif
+
+#if XCHAL_HAVE_MUL16 || XCHAL_HAVE_MUL32
+
+#define a2h a4
+#define a3h a5
+
+	/* Get the high halves of the inputs into registers.  */
+	srli	a2h, a2, 16
+	srli	a3h, a3, 16
+
+#define a2l a2
+#define a3l a3
+
+#if XCHAL_HAVE_MUL32 && !XCHAL_HAVE_MUL16
+	/* Clear the high halves of the inputs.  This does not matter
+	   for MUL16 because the high bits are ignored.  */
+	extui	a2, a2, 0, 16
+	extui	a3, a3, 0, 16
+#endif
+#endif /* MUL16 || MUL32 */
+
+
+#if XCHAL_HAVE_MUL16
+
+#define do_mul(dst, xreg, xhalf, yreg, yhalf) \
+	mul16u	dst, xreg ## xhalf, yreg ## yhalf
+
+#elif XCHAL_HAVE_MUL32
+
+#define do_mul(dst, xreg, xhalf, yreg, yhalf) \
+	mull	dst, xreg ## xhalf, yreg ## yhalf
+
+#elif XCHAL_HAVE_MAC16
+
+/* The preprocessor insists on inserting a space when concatenating after
+   a period in the definition of do_mul below.  These macros are a workaround
+   using underscores instead of periods when doing the concatenation.  */
+#define umul_aa_ll umul.aa.ll
+#define umul_aa_lh umul.aa.lh
+#define umul_aa_hl umul.aa.hl
+#define umul_aa_hh umul.aa.hh
+
+#define do_mul(dst, xreg, xhalf, yreg, yhalf) \
+	umul_aa_ ## xhalf ## yhalf	xreg, yreg; \
+	rsr	dst, ACCLO
+
+#else /* no multiply hardware */
+
+#define set_arg_l(dst, src) \
+	extui	dst, src, 0, 16
+#define set_arg_h(dst, src) \
+	srli	dst, src, 16
+
+#ifdef __XTENSA_CALL0_ABI__
+#define do_mul(dst, xreg, xhalf, yreg, yhalf) \
+	set_arg_ ## xhalf (a13, xreg); \
+	set_arg_ ## yhalf (a14, yreg); \
+	call0	.Lmul_mulsi3; \
+	mov	dst, a12
+#else
+#define do_mul(dst, xreg, xhalf, yreg, yhalf) \
+	set_arg_ ## xhalf (a14, xreg); \
+	set_arg_ ## yhalf (a15, yreg); \
+	call12	.Lmul_mulsi3; \
+	mov	dst, a14
+#endif /* __XTENSA_CALL0_ABI__ */
+
+#endif /* no multiply hardware */
+
+	/* Add pp1 and pp2 into a6 with carry-out in a9.  */
+	do_mul(a6, a2, l, a3, h)	/* pp 1 */
+	do_mul(a11, a2, h, a3, l)	/* pp 2 */
+	movi	a9, 0
+	add	a6, a6, a11
+	bgeu	a6, a11, 1f
+	addi	a9, a9, 1
+1:
+	/* Shift the high half of a9/a6 into position in a9.  Note that
+	   this value can be safely incremented without any carry-outs.  */
+	ssai	16
+	src	a9, a9, a6
+
+	/* Compute the low word into a6.  */
+	do_mul(a11, a2, l, a3, l)	/* pp 0 */
+	sll	a6, a6
+	add	a6, a6, a11
+	bgeu	a6, a11, 1f
+	addi	a9, a9, 1
+1:
+	/* Compute the high word into wh.  */
+	do_mul(wh, a2, h, a3, h)	/* pp 3 */
+	add	wh, wh, a9
+	mov	wl, a6
+
+#endif /* !MUL32_HIGH */
+
+#if defined(__XTENSA_CALL0_ABI__) && XCHAL_NO_MUL
+	/* Restore the original return address.  */
+	l32i	a0, sp, 0
+#endif
+#ifdef __XTENSA_CALL0_ABI__
+	l32i	a12, sp, 16
+	l32i	a13, sp, 20
+	l32i	a14, sp, 24
+	l32i	a15, sp, 28
+	abi_ret(32)
+#else
+	abi_ret_default
+#endif
+
+#if XCHAL_NO_MUL
+
+	.macro	do_addx2 dst, as, at, tmp
+#if XCHAL_HAVE_ADDX
+	addx2	\dst, \as, \at
+#else
+	slli	\tmp, \as, 1
+	add	\dst, \tmp, \at
+#endif
+	.endm
+
+	.macro	do_addx4 dst, as, at, tmp
+#if XCHAL_HAVE_ADDX
+	addx4	\dst, \as, \at
+#else
+	slli	\tmp, \as, 2
+	add	\dst, \tmp, \at
+#endif
+	.endm
+
+	.macro	do_addx8 dst, as, at, tmp
+#if XCHAL_HAVE_ADDX
+	addx8	\dst, \as, \at
+#else
+	slli	\tmp, \as, 3
+	add	\dst, \tmp, \at
+#endif
+	.endm
+
+/* For Xtensa processors with no multiply hardware, this simplified
+   version of _mulsi3 is used for multiplying 16-bit chunks of
+   the floating-point mantissas.  When using CALL0, this function
+   uses a custom ABI: the inputs are passed in a13 and a14, the
+   result is returned in a12, and a8 and a15 are clobbered.  */
+	.align	4
+.Lmul_mulsi3:
+	abi_entry_default
+
+	.macro mul_mulsi3_body dst, src1, src2, tmp1, tmp2
+	movi	\dst, 0
+1:	add	\tmp1, \src2, \dst
+	extui	\tmp2, \src1, 0, 1
+	movnez	\dst, \tmp1, \tmp2
+
+	do_addx2 \tmp1, \src2, \dst, \tmp1
+	extui	\tmp2, \src1, 1, 1
+	movnez	\dst, \tmp1, \tmp2
+
+	do_addx4 \tmp1, \src2, \dst, \tmp1
+	extui	\tmp2, \src1, 2, 1
+	movnez	\dst, \tmp1, \tmp2
+
+	do_addx8 \tmp1, \src2, \dst, \tmp1
+	extui	\tmp2, \src1, 3, 1
+	movnez	\dst, \tmp1, \tmp2
+
+	srli	\src1, \src1, 4
+	slli	\src2, \src2, 4
+	bnez	\src1, 1b
+	.endm
+
+#ifdef __XTENSA_CALL0_ABI__
+	mul_mulsi3_body a12, a13, a14, a15, a8
+#else
+	/* The result will be written into a2, so save that argument in a4.  */
+	mov	a4, a2
+	mul_mulsi3_body a2, a4, a3, a5, a6
+#endif
+	abi_ret_default
+#endif /* XCHAL_NO_MUL */
+
+ENDPROC(__umulsidi3)
@@ -130,6 +130,20 @@ static u8 dd_rq_ioclass(struct request *rq)
 	return IOPRIO_PRIO_CLASS(req_get_ioprio(rq));
 }
 
+/*
+ * get the request before `rq' in sector-sorted order
+ */
+static inline struct request *
+deadline_earlier_request(struct request *rq)
+{
+	struct rb_node *node = rb_prev(&rq->rb_node);
+
+	if (node)
+		return rb_entry_rq(node);
+
+	return NULL;
+}
+
 /*
  * get the request after `rq' in sector-sorted order
  */
@@ -277,6 +291,39 @@ static inline int deadline_check_fifo(struct dd_per_prio *per_prio,
 	return 0;
 }
 
+/*
+ * Check if rq has a sequential request preceding it.
+ */
+static bool deadline_is_seq_writes(struct deadline_data *dd, struct request *rq)
+{
+	struct request *prev = deadline_earlier_request(rq);
+
+	if (!prev)
+		return false;
+
+	return blk_rq_pos(prev) + blk_rq_sectors(prev) == blk_rq_pos(rq);
+}
+
+/*
+ * Skip all write requests that are sequential from @rq, even if we cross
+ * a zone boundary.
+ */
+static struct request *deadline_skip_seq_writes(struct deadline_data *dd,
+						struct request *rq)
+{
+	sector_t pos = blk_rq_pos(rq);
+	sector_t skipped_sectors = 0;
+
+	while (rq) {
+		if (blk_rq_pos(rq) != pos + skipped_sectors)
+			break;
+		skipped_sectors += blk_rq_sectors(rq);
+		rq = deadline_latter_request(rq);
+	}
+
+	return rq;
+}
+
 /*
  * For the specified data direction, return the next request to
  * dispatch using arrival ordered lists.
@@ -297,11 +344,16 @@ deadline_fifo_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
 
 	/*
 	 * Look for a write request that can be dispatched, that is one with
-	 * an unlocked target zone.
+	 * an unlocked target zone. For some HDDs, breaking a sequential
+	 * write stream can lead to lower throughput, so make sure to preserve
+	 * sequential write streams, even if that stream crosses into the next
+	 * zones and these zones are unlocked.
 	 */
 	spin_lock_irqsave(&dd->zone_lock, flags);
 	list_for_each_entry(rq, &per_prio->fifo_list[DD_WRITE], queuelist) {
-		if (blk_req_can_dispatch_to_zone(rq))
+		if (blk_req_can_dispatch_to_zone(rq) &&
+		    (blk_queue_nonrot(rq->q) ||
+		     !deadline_is_seq_writes(dd, rq)))
 			goto out;
 	}
 	rq = NULL;
@@ -331,13 +383,19 @@ deadline_next_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
 
 	/*
 	 * Look for a write request that can be dispatched, that is one with
-	 * an unlocked target zone.
+	 * an unlocked target zone. For some HDDs, breaking a sequential
+	 * write stream can lead to lower throughput, so make sure to preserve
+	 * sequential write streams, even if that stream crosses into the next
+	 * zones and these zones are unlocked.
	 */
	spin_lock_irqsave(&dd->zone_lock, flags);
	while (rq) {
		if (blk_req_can_dispatch_to_zone(rq))
			break;
-		rq = deadline_latter_request(rq);
+		if (blk_queue_nonrot(rq->q))
+			rq = deadline_latter_request(rq);
+		else
+			rq = deadline_skip_seq_writes(dd, rq);
	}
	spin_unlock_irqrestore(&dd->zone_lock, flags);
 
@@ -789,6 +847,18 @@ static void dd_prepare_request(struct request *rq)
 	rq->elv.priv[0] = NULL;
 }
 
+static bool dd_has_write_work(struct blk_mq_hw_ctx *hctx)
+{
+	struct deadline_data *dd = hctx->queue->elevator->elevator_data;
+	enum dd_prio p;
+
+	for (p = 0; p <= DD_PRIO_MAX; p++)
+		if (!list_empty_careful(&dd->per_prio[p].fifo_list[DD_WRITE]))
+			return true;
+
+	return false;
+}
+
 /*
  * Callback from inside blk_mq_free_request().
  *
@@ -828,9 +898,10 @@ static void dd_finish_request(struct request *rq)
 
 		spin_lock_irqsave(&dd->zone_lock, flags);
 		blk_req_zone_write_unlock(rq);
-		if (!list_empty(&per_prio->fifo_list[DD_WRITE]))
-			blk_mq_sched_mark_restart_hctx(rq->mq_hctx);
 		spin_unlock_irqrestore(&dd->zone_lock, flags);
+
+		if (dd_has_write_work(rq->mq_hctx))
+			blk_mq_sched_mark_restart_hctx(rq->mq_hctx);
 	}
 }
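
deadline_is_seq_writes() boils down to an extent-contiguity test: a request continues a stream when the previous request's end sector equals its start sector. A standalone sketch of that check (types simplified, names hypothetical):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t sector_t;

/* true when the second extent starts exactly where the first one ends */
static bool extents_contiguous(sector_t pos, sector_t nr_sectors, sector_t next_pos)
{
	return pos + nr_sectors == next_pos;
}

int main(void)
{
	printf("%d\n", extents_contiguous(2048, 8, 2056));   /* 1: sequential */
	printf("%d\n", extents_contiguous(2048, 8, 2064));   /* 0: gap, stream broken */
	return 0;
}
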
@@ -734,6 +734,16 @@ static bool google_cros_ec_present(void)
 	return acpi_dev_found("GOOG0004") || acpi_dev_found("GOOG000C");
 }
 
+/*
+ * Windows 8 and newer no longer use the ACPI video interface, so it often
+ * does not work. So on win8+ systems prefer native brightness control.
+ * Chromebooks should always prefer native backlight control.
+ */
+static bool prefer_native_over_acpi_video(void)
+{
+	return acpi_osi_is_win8() || google_cros_ec_present();
+}
+
 /*
  * Determine which type of backlight interface to use on this system,
  * First check cmdline, then dmi quirks, then do autodetect.
@@ -779,28 +789,16 @@ static enum acpi_backlight_type __acpi_video_get_backlight_type(bool native)
 	if (apple_gmux_backlight_present())
 		return acpi_backlight_apple_gmux;
 
-	/* Chromebooks should always prefer native backlight control. */
-	if (google_cros_ec_present() && native_available)
+	/* Use ACPI video if available, except when native should be preferred. */
+	if ((video_caps & ACPI_VIDEO_BACKLIGHT) &&
+	    !(native_available && prefer_native_over_acpi_video()))
+		return acpi_backlight_video;
+
+	/* Use native if available */
+	if (native_available)
 		return acpi_backlight_native;
 
-	/* On systems with ACPI video use either native or ACPI video. */
-	if (video_caps & ACPI_VIDEO_BACKLIGHT) {
-		/*
-		 * Windows 8 and newer no longer use the ACPI video interface,
-		 * so it often does not work. If the ACPI tables are written
-		 * for win8 and native brightness ctl is available, use that.
-		 *
-		 * The native check deliberately is inside the if acpi-video
-		 * block on older devices without acpi-video support native
-		 * is usually not the best choice.
-		 */
-		if (acpi_osi_is_win8() && native_available)
-			return acpi_backlight_native;
-		else
-			return acpi_backlight_video;
-	}
-
-	/* No ACPI video (old hw), use vendor specific fw methods. */
+	/* No ACPI video/native (old hw), use vendor specific fw methods. */
 	return acpi_backlight_vendor;
 }
 
@@ -812,18 +810,6 @@ EXPORT_SYMBOL(acpi_video_get_backlight_type);
 
 bool acpi_video_backlight_use_native(void)
 {
-	/*
-	 * Call __acpi_video_get_backlight_type() to let it know that
-	 * a native backlight is available.
-	 */
-	__acpi_video_get_backlight_type(true);
-
-	/*
-	 * For now just always return true. There is a whole bunch of laptop
-	 * models where (video_caps & ACPI_VIDEO_BACKLIGHT) is false causing
-	 * __acpi_video_get_backlight_type() to return vendor, while these
-	 * models only have a native backlight control.
-	 */
-	return true;
+	return __acpi_video_get_backlight_type(true) == acpi_backlight_native;
 }
 EXPORT_SYMBOL(acpi_video_backlight_use_native);
@@ -24,6 +24,7 @@
 #include <linux/libata.h>
 #include <linux/phy/phy.h>
 #include <linux/regulator/consumer.h>
+#include <linux/bits.h>
 
 /* Enclosure Management Control */
 #define EM_CTRL_MSG_TYPE		0x000f0000
@@ -53,12 +54,12 @@ enum {
 	AHCI_PORT_PRIV_FBS_DMA_SZ	= AHCI_CMD_SLOT_SZ +
 					  AHCI_CMD_TBL_AR_SZ +
 					  (AHCI_RX_FIS_SZ * 16),
-	AHCI_IRQ_ON_SG		= (1 << 31),
-	AHCI_CMD_ATAPI		= (1 << 5),
-	AHCI_CMD_WRITE		= (1 << 6),
-	AHCI_CMD_PREFETCH	= (1 << 7),
-	AHCI_CMD_RESET		= (1 << 8),
-	AHCI_CMD_CLR_BUSY	= (1 << 10),
+	AHCI_IRQ_ON_SG		= BIT(31),
+	AHCI_CMD_ATAPI		= BIT(5),
+	AHCI_CMD_WRITE		= BIT(6),
+	AHCI_CMD_PREFETCH	= BIT(7),
+	AHCI_CMD_RESET		= BIT(8),
+	AHCI_CMD_CLR_BUSY	= BIT(10),
 
 	RX_FIS_PIO_SETUP	= 0x20,	/* offset of PIO Setup FIS data */
 	RX_FIS_D2H_REG		= 0x40,	/* offset of D2H Register FIS data */
@@ -76,37 +77,37 @@ enum {
 	HOST_CAP2		= 0x24, /* host capabilities, extended */
 
 	/* HOST_CTL bits */
-	HOST_RESET		= (1 << 0),  /* reset controller; self-clear */
-	HOST_IRQ_EN		= (1 << 1),  /* global IRQ enable */
-	HOST_MRSM		= (1 << 2),  /* MSI Revert to Single Message */
-	HOST_AHCI_EN		= (1 << 31), /* AHCI enabled */
+	HOST_RESET		= BIT(0),  /* reset controller; self-clear */
+	HOST_IRQ_EN		= BIT(1),  /* global IRQ enable */
+	HOST_MRSM		= BIT(2),  /* MSI Revert to Single Message */
+	HOST_AHCI_EN		= BIT(31), /* AHCI enabled */
 
 	/* HOST_CAP bits */
-	HOST_CAP_SXS		= (1 << 5),  /* Supports External SATA */
-	HOST_CAP_EMS		= (1 << 6),  /* Enclosure Management support */
-	HOST_CAP_CCC		= (1 << 7),  /* Command Completion Coalescing */
-	HOST_CAP_PART		= (1 << 13), /* Partial state capable */
-	HOST_CAP_SSC		= (1 << 14), /* Slumber state capable */
-	HOST_CAP_PIO_MULTI	= (1 << 15), /* PIO multiple DRQ support */
-	HOST_CAP_FBS		= (1 << 16), /* FIS-based switching support */
-	HOST_CAP_PMP		= (1 << 17), /* Port Multiplier support */
-	HOST_CAP_ONLY		= (1 << 18), /* Supports AHCI mode only */
-	HOST_CAP_CLO		= (1 << 24), /* Command List Override support */
-	HOST_CAP_LED		= (1 << 25), /* Supports activity LED */
-	HOST_CAP_ALPM		= (1 << 26), /* Aggressive Link PM support */
-	HOST_CAP_SSS		= (1 << 27), /* Staggered Spin-up */
-	HOST_CAP_MPS		= (1 << 28), /* Mechanical presence switch */
-	HOST_CAP_SNTF		= (1 << 29), /* SNotification register */
-	HOST_CAP_NCQ		= (1 << 30), /* Native Command Queueing */
-	HOST_CAP_64		= (1 << 31), /* PCI DAC (64-bit DMA) support */
+	HOST_CAP_SXS		= BIT(5),  /* Supports External SATA */
+	HOST_CAP_EMS		= BIT(6),  /* Enclosure Management support */
+	HOST_CAP_CCC		= BIT(7),  /* Command Completion Coalescing */
+	HOST_CAP_PART		= BIT(13), /* Partial state capable */
+	HOST_CAP_SSC		= BIT(14), /* Slumber state capable */
+	HOST_CAP_PIO_MULTI	= BIT(15), /* PIO multiple DRQ support */
+	HOST_CAP_FBS		= BIT(16), /* FIS-based switching support */
+	HOST_CAP_PMP		= BIT(17), /* Port Multiplier support */
+	HOST_CAP_ONLY		= BIT(18), /* Supports AHCI mode only */
+	HOST_CAP_CLO		= BIT(24), /* Command List Override support */
+	HOST_CAP_LED		= BIT(25), /* Supports activity LED */
+	HOST_CAP_ALPM		= BIT(26), /* Aggressive Link PM support */
+	HOST_CAP_SSS		= BIT(27), /* Staggered Spin-up */
+	HOST_CAP_MPS		= BIT(28), /* Mechanical presence switch */
+	HOST_CAP_SNTF		= BIT(29), /* SNotification register */
+	HOST_CAP_NCQ		= BIT(30), /* Native Command Queueing */
+	HOST_CAP_64		= BIT(31), /* PCI DAC (64-bit DMA) support */
 
 	/* HOST_CAP2 bits */
-	HOST_CAP2_BOH		= (1 << 0),  /* BIOS/OS handoff supported */
-	HOST_CAP2_NVMHCI	= (1 << 1),  /* NVMHCI supported */
-	HOST_CAP2_APST		= (1 << 2),  /* Automatic partial to slumber */
-	HOST_CAP2_SDS		= (1 << 3),  /* Support device sleep */
-	HOST_CAP2_SADM		= (1 << 4),  /* Support aggressive DevSlp */
-	HOST_CAP2_DESO		= (1 << 5),  /* DevSlp from slumber only */
+	HOST_CAP2_BOH		= BIT(0),  /* BIOS/OS handoff supported */
+	HOST_CAP2_NVMHCI	= BIT(1),  /* NVMHCI supported */
+	HOST_CAP2_APST		= BIT(2),  /* Automatic partial to slumber */
+	HOST_CAP2_SDS		= BIT(3),  /* Support device sleep */
+	HOST_CAP2_SADM		= BIT(4),  /* Support aggressive DevSlp */
+	HOST_CAP2_DESO		= BIT(5),  /* DevSlp from slumber only */
 
 	/* registers for each SATA port */
 	PORT_LST_ADDR		= 0x00, /* command list DMA addr */
@@ -128,24 +129,24 @@ enum {
 	PORT_DEVSLP		= 0x44, /* device sleep */
 
 	/* PORT_IRQ_{STAT,MASK} bits */
-	PORT_IRQ_COLD_PRES	= (1 << 31), /* cold presence detect */
-	PORT_IRQ_TF_ERR		= (1 << 30), /* task file error */
-	PORT_IRQ_HBUS_ERR	= (1 << 29), /* host bus fatal error */
-	PORT_IRQ_HBUS_DATA_ERR	= (1 << 28), /* host bus data error */
-	PORT_IRQ_IF_ERR		= (1 << 27), /* interface fatal error */
-	PORT_IRQ_IF_NONFATAL	= (1 << 26), /* interface non-fatal error */
-	PORT_IRQ_OVERFLOW	= (1 << 24), /* xfer exhausted available S/G */
-	PORT_IRQ_BAD_PMP	= (1 << 23), /* incorrect port multiplier */
+	PORT_IRQ_COLD_PRES	= BIT(31), /* cold presence detect */
+	PORT_IRQ_TF_ERR		= BIT(30), /* task file error */
+	PORT_IRQ_HBUS_ERR	= BIT(29), /* host bus fatal error */
+	PORT_IRQ_HBUS_DATA_ERR	= BIT(28), /* host bus data error */
+	PORT_IRQ_IF_ERR		= BIT(27), /* interface fatal error */
+	PORT_IRQ_IF_NONFATAL	= BIT(26), /* interface non-fatal error */
+	PORT_IRQ_OVERFLOW	= BIT(24), /* xfer exhausted available S/G */
+	PORT_IRQ_BAD_PMP	= BIT(23), /* incorrect port multiplier */
 
-	PORT_IRQ_PHYRDY		= (1 << 22), /* PhyRdy changed */
-	PORT_IRQ_DMPS		= (1 << 7),  /* mechanical presence status */
-	PORT_IRQ_CONNECT	= (1 << 6),  /* port connect change status */
-	PORT_IRQ_SG_DONE	= (1 << 5),  /* descriptor processed */
-	PORT_IRQ_UNK_FIS	= (1 << 4),  /* unknown FIS rx'd */
-	PORT_IRQ_SDB_FIS	= (1 << 3),  /* Set Device Bits FIS rx'd */
-	PORT_IRQ_DMAS_FIS	= (1 << 2),  /* DMA Setup FIS rx'd */
-	PORT_IRQ_PIOS_FIS	= (1 << 1),  /* PIO Setup FIS rx'd */
-	PORT_IRQ_D2H_REG_FIS	= (1 << 0),  /* D2H Register FIS rx'd */
+	PORT_IRQ_PHYRDY		= BIT(22), /* PhyRdy changed */
+	PORT_IRQ_DMPS		= BIT(7),  /* mechanical presence status */
+	PORT_IRQ_CONNECT	= BIT(6),  /* port connect change status */
+	PORT_IRQ_SG_DONE	= BIT(5),  /* descriptor processed */
+	PORT_IRQ_UNK_FIS	= BIT(4),  /* unknown FIS rx'd */
+	PORT_IRQ_SDB_FIS	= BIT(3),  /* Set Device Bits FIS rx'd */
+	PORT_IRQ_DMAS_FIS	= BIT(2),  /* DMA Setup FIS rx'd */
+	PORT_IRQ_PIOS_FIS	= BIT(1),  /* PIO Setup FIS rx'd */
+	PORT_IRQ_D2H_REG_FIS	= BIT(0),  /* D2H Register FIS rx'd */
 
 	PORT_IRQ_FREEZE		= PORT_IRQ_HBUS_ERR |
 				  PORT_IRQ_IF_ERR |
@@ -161,27 +162,27 @@ enum {
 				  PORT_IRQ_PIOS_FIS | PORT_IRQ_D2H_REG_FIS,
 
 	/* PORT_CMD bits */
-	PORT_CMD_ASP		= (1 << 27), /* Aggressive Slumber/Partial */
-	PORT_CMD_ALPE		= (1 << 26), /* Aggressive Link PM enable */
-	PORT_CMD_ATAPI		= (1 << 24), /* Device is ATAPI */
-	PORT_CMD_FBSCP		= (1 << 22), /* FBS Capable Port */
-	PORT_CMD_ESP		= (1 << 21), /* External Sata Port */
-	PORT_CMD_CPD		= (1 << 20), /* Cold Presence Detection */
-	PORT_CMD_MPSP		= (1 << 19), /* Mechanical Presence Switch */
-	PORT_CMD_HPCP		= (1 << 18), /* HotPlug Capable Port */
-	PORT_CMD_PMP		= (1 << 17), /* PMP attached */
-	PORT_CMD_LIST_ON	= (1 << 15), /* cmd list DMA engine running */
-	PORT_CMD_FIS_ON		= (1 << 14), /* FIS DMA engine running */
-	PORT_CMD_FIS_RX		= (1 << 4),  /* Enable FIS receive DMA engine */
-	PORT_CMD_CLO		= (1 << 3),  /* Command list override */
-	PORT_CMD_POWER_ON	= (1 << 2),  /* Power up device */
-	PORT_CMD_SPIN_UP	= (1 << 1),  /* Spin up device */
-	PORT_CMD_START		= (1 << 0),  /* Enable port DMA engine */
+	PORT_CMD_ASP		= BIT(27), /* Aggressive Slumber/Partial */
+	PORT_CMD_ALPE		= BIT(26), /* Aggressive Link PM enable */
+	PORT_CMD_ATAPI		= BIT(24), /* Device is ATAPI */
+	PORT_CMD_FBSCP		= BIT(22), /* FBS Capable Port */
+	PORT_CMD_ESP		= BIT(21), /* External Sata Port */
+	PORT_CMD_CPD		= BIT(20), /* Cold Presence Detection */
+	PORT_CMD_MPSP		= BIT(19), /* Mechanical Presence Switch */
+	PORT_CMD_HPCP		= BIT(18), /* HotPlug Capable Port */
+	PORT_CMD_PMP		= BIT(17), /* PMP attached */
+	PORT_CMD_LIST_ON	= BIT(15), /* cmd list DMA engine running */
+	PORT_CMD_FIS_ON		= BIT(14), /* FIS DMA engine running */
+	PORT_CMD_FIS_RX		= BIT(4),  /* Enable FIS receive DMA engine */
+	PORT_CMD_CLO		= BIT(3),  /* Command list override */
+	PORT_CMD_POWER_ON	= BIT(2),  /* Power up device */
+	PORT_CMD_SPIN_UP	= BIT(1),  /* Spin up device */
+	PORT_CMD_START		= BIT(0),  /* Enable port DMA engine */
 
-	PORT_CMD_ICC_MASK	= (0xf << 28), /* i/f ICC state mask */
-	PORT_CMD_ICC_ACTIVE	= (0x1 << 28), /* Put i/f in active state */
-	PORT_CMD_ICC_PARTIAL	= (0x2 << 28), /* Put i/f in partial state */
-	PORT_CMD_ICC_SLUMBER	= (0x6 << 28), /* Put i/f in slumber state */
+	PORT_CMD_ICC_MASK	= (0xfu << 28), /* i/f ICC state mask */
+	PORT_CMD_ICC_ACTIVE	= (0x1u << 28), /* Put i/f in active state */
+	PORT_CMD_ICC_PARTIAL	= (0x2u << 28), /* Put i/f in partial state */
+	PORT_CMD_ICC_SLUMBER	= (0x6u << 28), /* Put i/f in slumber state */
 
 	/* PORT_CMD capabilities mask */
 	PORT_CMD_CAP		= PORT_CMD_HPCP | PORT_CMD_MPSP |
@@ -192,9 +193,9 @@ enum {
 	PORT_FBS_ADO_OFFSET	= 12, /* FBS active dev optimization offset */
 	PORT_FBS_DEV_OFFSET	= 8,  /* FBS device to issue offset */
 	PORT_FBS_DEV_MASK	= (0xf << PORT_FBS_DEV_OFFSET), /* FBS.DEV */
-	PORT_FBS_SDE		= (1 << 2), /* FBS single device error */
-	PORT_FBS_DEC		= (1 << 1), /* FBS device error clear */
-	PORT_FBS_EN		= (1 << 0), /* Enable FBS */
+	PORT_FBS_SDE		= BIT(2), /* FBS single device error */
+	PORT_FBS_DEC		= BIT(1), /* FBS device error clear */
+	PORT_FBS_EN		= BIT(0), /* Enable FBS */
 
 	/* PORT_DEVSLP bits */
 	PORT_DEVSLP_DM_OFFSET	= 25, /* DITO multiplier offset */
@@ -202,50 +203,50 @@ enum {
 	PORT_DEVSLP_DITO_OFFSET	= 15, /* DITO offset */
 	PORT_DEVSLP_MDAT_OFFSET	= 10, /* Minimum assertion time */
 	PORT_DEVSLP_DETO_OFFSET	= 2,  /* DevSlp exit timeout */
-	PORT_DEVSLP_DSP		= (1 << 1), /* DevSlp present */
-	PORT_DEVSLP_ADSE	= (1 << 0), /* Aggressive DevSlp enable */
+	PORT_DEVSLP_DSP		= BIT(1), /* DevSlp present */
+	PORT_DEVSLP_ADSE	= BIT(0), /* Aggressive DevSlp enable */
 
 	/* hpriv->flags bits */
 
 #define AHCI_HFLAGS(flags)	.private_data	= (void *)(flags)
 
-	AHCI_HFLAG_NO_NCQ		= (1 << 0),
-	AHCI_HFLAG_IGN_IRQ_IF_ERR	= (1 << 1), /* ignore IRQ_IF_ERR */
-	AHCI_HFLAG_IGN_SERR_INTERNAL	= (1 << 2), /* ignore SERR_INTERNAL */
-	AHCI_HFLAG_32BIT_ONLY		= (1 << 3), /* force 32bit */
-	AHCI_HFLAG_MV_PATA		= (1 << 4), /* PATA port */
-	AHCI_HFLAG_NO_MSI		= (1 << 5), /* no PCI MSI */
-	AHCI_HFLAG_NO_PMP		= (1 << 6), /* no PMP */
-	AHCI_HFLAG_SECT255		= (1 << 8), /* max 255 sectors */
-	AHCI_HFLAG_YES_NCQ		= (1 << 9), /* force NCQ cap on */
-	AHCI_HFLAG_NO_SUSPEND		= (1 << 10), /* don't suspend */
-	AHCI_HFLAG_SRST_TOUT_IS_OFFLINE	= (1 << 11), /* treat SRST timeout as
							link offline */
-	AHCI_HFLAG_NO_SNTF		= (1 << 12), /* no sntf */
-	AHCI_HFLAG_NO_FPDMA_AA		= (1 << 13), /* no FPDMA AA */
-	AHCI_HFLAG_YES_FBS		= (1 << 14), /* force FBS cap on */
-	AHCI_HFLAG_DELAY_ENGINE		= (1 << 15), /* do not start engine on
							port start (wait until
							error-handling stage) */
-	AHCI_HFLAG_NO_DEVSLP		= (1 << 17), /* no device sleep */
-	AHCI_HFLAG_NO_FBS		= (1 << 18), /* no FBS */
+	AHCI_HFLAG_NO_NCQ		= BIT(0),
+	AHCI_HFLAG_IGN_IRQ_IF_ERR	= BIT(1), /* ignore IRQ_IF_ERR */
+	AHCI_HFLAG_IGN_SERR_INTERNAL	= BIT(2), /* ignore SERR_INTERNAL */
+	AHCI_HFLAG_32BIT_ONLY		= BIT(3), /* force 32bit */
+	AHCI_HFLAG_MV_PATA		= BIT(4), /* PATA port */
+	AHCI_HFLAG_NO_MSI		= BIT(5), /* no PCI MSI */
+	AHCI_HFLAG_NO_PMP		= BIT(6), /* no PMP */
+	AHCI_HFLAG_SECT255		= BIT(8), /* max 255 sectors */
+	AHCI_HFLAG_YES_NCQ		= BIT(9), /* force NCQ cap on */
+	AHCI_HFLAG_NO_SUSPEND		= BIT(10), /* don't suspend */
+	AHCI_HFLAG_SRST_TOUT_IS_OFFLINE	= BIT(11), /* treat SRST timeout as
						      link offline */
+	AHCI_HFLAG_NO_SNTF		= BIT(12), /* no sntf */
+	AHCI_HFLAG_NO_FPDMA_AA		= BIT(13), /* no FPDMA AA */
+	AHCI_HFLAG_YES_FBS		= BIT(14), /* force FBS cap on */
+	AHCI_HFLAG_DELAY_ENGINE		= BIT(15), /* do not start engine on
						      port start (wait until
						      error-handling stage) */
+	AHCI_HFLAG_NO_DEVSLP		= BIT(17), /* no device sleep */
+	AHCI_HFLAG_NO_FBS		= BIT(18), /* no FBS */
 
 #ifdef CONFIG_PCI_MSI
-	AHCI_HFLAG_MULTI_MSI		= (1 << 20), /* per-port MSI(-X) */
+	AHCI_HFLAG_MULTI_MSI		= BIT(20), /* per-port MSI(-X) */
 #else
 	/* compile out MSI infrastructure */
 	AHCI_HFLAG_MULTI_MSI		= 0,
 #endif
-	AHCI_HFLAG_WAKE_BEFORE_STOP	= (1 << 22), /* wake before DMA stop */
-	AHCI_HFLAG_YES_ALPM		= (1 << 23), /* force ALPM cap on */
-	AHCI_HFLAG_NO_WRITE_TO_RO	= (1 << 24), /* don't write to read
							only registers */
-	AHCI_HFLAG_USE_LPM_POLICY	= (1 << 25), /* chipset that should use
							SATA_MOBILE_LPM_POLICY
							as default lpm_policy */
-	AHCI_HFLAG_SUSPEND_PHYS		= (1 << 26), /* handle PHYs during
							suspend/resume */
-	AHCI_HFLAG_NO_SXS		= (1 << 28), /* SXS not supported */
+	AHCI_HFLAG_WAKE_BEFORE_STOP	= BIT(22), /* wake before DMA stop */
+	AHCI_HFLAG_YES_ALPM		= BIT(23), /* force ALPM cap on */
+	AHCI_HFLAG_NO_WRITE_TO_RO	= BIT(24), /* don't write to read
						      only registers */
+	AHCI_HFLAG_USE_LPM_POLICY	= BIT(25), /* chipset that should use
						      SATA_MOBILE_LPM_POLICY
						      as default lpm_policy */
+	AHCI_HFLAG_SUSPEND_PHYS		= BIT(26), /* handle PHYs during
						      suspend/resume */
+	AHCI_HFLAG_NO_SXS		= BIT(28), /* SXS not supported */
 
 	/* ap->flags bits */
 
@@ -261,22 +262,22 @@ enum {
 	EM_MAX_RETRY		= 5,
 
 	/* em_ctl bits */
-	EM_CTL_RST		= (1 << 9), /* Reset */
-	EM_CTL_TM		= (1 << 8), /* Transmit Message */
-	EM_CTL_MR		= (1 << 0), /* Message Received */
-	EM_CTL_ALHD		= (1 << 26), /* Activity LED */
-	EM_CTL_XMT		= (1 << 25), /* Transmit Only */
-	EM_CTL_SMB		= (1 << 24), /* Single Message Buffer */
-	EM_CTL_SGPIO		= (1 << 19), /* SGPIO messages supported */
-	EM_CTL_SES		= (1 << 18), /* SES-2 messages supported */
-	EM_CTL_SAFTE		= (1 << 17), /* SAF-TE messages supported */
-	EM_CTL_LED		= (1 << 16), /* LED messages supported */
+	EM_CTL_RST		= BIT(9), /* Reset */
+	EM_CTL_TM		= BIT(8), /* Transmit Message */
+	EM_CTL_MR		= BIT(0), /* Message Received */
+	EM_CTL_ALHD		= BIT(26), /* Activity LED */
+	EM_CTL_XMT		= BIT(25), /* Transmit Only */
+	EM_CTL_SMB		= BIT(24), /* Single Message Buffer */
+	EM_CTL_SGPIO		= BIT(19), /* SGPIO messages supported */
+	EM_CTL_SES		= BIT(18), /* SES-2 messages supported */
+	EM_CTL_SAFTE		= BIT(17), /* SAF-TE messages supported */
+	EM_CTL_LED		= BIT(16), /* LED messages supported */
 
 	/* em message type */
-	EM_MSG_TYPE_LED		= (1 << 0), /* LED */
-	EM_MSG_TYPE_SAFTE	= (1 << 1), /* SAF-TE */
-	EM_MSG_TYPE_SES2	= (1 << 2), /* SES-2 */
-	EM_MSG_TYPE_SGPIO	= (1 << 3), /* SGPIO */
+	EM_MSG_TYPE_LED		= BIT(0), /* LED */
+	EM_MSG_TYPE_SAFTE	= BIT(1), /* SAF-TE */
+	EM_MSG_TYPE_SES2	= BIT(2), /* SES-2 */
+	EM_MSG_TYPE_SGPIO	= BIT(3), /* SGPIO */
 };
 
 struct ahci_cmd_hdr {
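
The (1 << 31) forms replaced throughout this header are undefined behavior in C, since the literal 1 is a signed int and the shift overflows into the sign bit; the kernel's BIT(n) expands to (1UL << (n)) and sidesteps that. A small standalone illustration (the UB form is kept only as a comment):

#include <stdio.h>

#define BIT(nr) (1UL << (nr))   /* mirrors the kernel's definition */

int main(void)
{
	/* int bad = 1 << 31;            signed overflow: undefined behavior */
	unsigned long ok = BIT(31);   /* well-defined: 0x80000000 */

	printf("%#lx\n", ok);
	return 0;
}
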
@@ -1162,7 +1162,11 @@ static int __driver_attach(struct device *dev, void *data)
 		return 0;
 	} else if (ret < 0) {
 		dev_dbg(dev, "Bus failed to match device: %d\n", ret);
-		return ret;
+		/*
+		 * Driver could not match with device, but may match with
+		 * another device on the bus.
+		 */
+		return 0;
 	} /* ret > 0 means positive match */
 
 	if (driver_allows_async_probing(drv)) {
@@ -301,7 +301,8 @@ int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl)
 		read_lock_irq(&mhi_chan->lock);
 
 		/* Only ring DB if ring is not empty */
-		if (tre_ring->base && tre_ring->wp != tre_ring->rp)
+		if (tre_ring->base && tre_ring->wp != tre_ring->rp &&
+		    mhi_chan->ch_state == MHI_CH_STATE_ENABLED)
 			mhi_ring_chan_db(mhi_cntrl, mhi_chan);
 		read_unlock_irq(&mhi_chan->lock);
 	}
@@ -1330,6 +1330,7 @@ static void _ipmi_destroy_user(struct ipmi_user *user)
 	unsigned long flags;
 	struct cmd_rcvr *rcvr;
 	struct cmd_rcvr *rcvrs = NULL;
+	struct module *owner;
 
 	if (!acquire_ipmi_user(user, &i)) {
 		/*
@@ -1392,8 +1393,9 @@ static void _ipmi_destroy_user(struct ipmi_user *user)
 		kfree(rcvr);
 	}
 
+	owner = intf->owner;
 	kref_put(&intf->refcount, intf_free);
-	module_put(intf->owner);
+	module_put(owner);
 }
 
 int ipmi_destroy_user(struct ipmi_user *user)
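
The fix above is the classic save-before-release pattern: once kref_put() may have freed intf, any later intf-> dereference is a use-after-free, so the needed field is copied out first. A generic sketch of the pattern (all names hypothetical):

#include <stdlib.h>

struct owner;                              /* opaque here */

struct iface {
	int refcount;
	struct owner *owner;
};

static void iface_put(struct iface *intf)
{
	if (--intf->refcount == 0)
		free(intf);                /* intf may be gone after this */
}

static void owner_put(struct owner *owner) { (void)owner; }

static void destroy(struct iface *intf)
{
	struct owner *owner = intf->owner; /* copy out before dropping the ref */

	iface_put(intf);
	owner_put(owner);                  /* safe: no intf dereference */
}

int main(void)
{
	struct iface *intf = malloc(sizeof(*intf));

	intf->refcount = 1;
	intf->owner = NULL;
	destroy(intf);
	return 0;
}
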
@@ -2153,6 +2153,20 @@ static int __init init_ipmi_si(void)
 }
 module_init(init_ipmi_si);
 
+static void wait_msg_processed(struct smi_info *smi_info)
+{
+	unsigned long jiffies_now;
+	long time_diff;
+
+	while (smi_info->curr_msg || (smi_info->si_state != SI_NORMAL)) {
+		jiffies_now = jiffies;
+		time_diff = (((long)jiffies_now - (long)smi_info->last_timeout_jiffies)
+			     * SI_USEC_PER_JIFFY);
+		smi_event_handler(smi_info, time_diff);
+		schedule_timeout_uninterruptible(1);
+	}
+}
+
 static void shutdown_smi(void *send_info)
 {
 	struct smi_info *smi_info = send_info;
@@ -2187,16 +2201,13 @@ static void shutdown_smi(void *send_info)
 	 * in the BMC. Note that timers and CPU interrupts are off,
 	 * so no need for locks.
 	 */
-	while (smi_info->curr_msg || (smi_info->si_state != SI_NORMAL)) {
-		poll(smi_info);
-		schedule_timeout_uninterruptible(1);
-	}
+	wait_msg_processed(smi_info);
 
 	if (smi_info->handlers)
 		disable_si_irq(smi_info);
-	while (smi_info->curr_msg || (smi_info->si_state != SI_NORMAL)) {
-		poll(smi_info);
-		schedule_timeout_uninterruptible(1);
-	}
+
+	wait_msg_processed(smi_info);
 
 	if (smi_info->handlers)
 		smi_info->handlers->cleanup(smi_info->si_sm);
@@ -160,6 +160,9 @@ EXPORT_SYMBOL(wait_for_random_bytes);
  *	u8 get_random_u8()
  *	u16 get_random_u16()
  *	u32 get_random_u32()
+ *	u32 get_random_u32_below(u32 ceil)
+ *	u32 get_random_u32_above(u32 floor)
+ *	u32 get_random_u32_inclusive(u32 floor, u32 ceil)
  *	u64 get_random_u64()
  *	unsigned long get_random_long()
  *
@@ -510,6 +513,41 @@ DEFINE_BATCHED_ENTROPY(u16)
 DEFINE_BATCHED_ENTROPY(u32)
 DEFINE_BATCHED_ENTROPY(u64)
 
+u32 __get_random_u32_below(u32 ceil)
+{
+	/*
+	 * This is the slow path for variable ceil. It is still fast, most of
+	 * the time, by doing traditional reciprocal multiplication and
+	 * opportunistically comparing the lower half to ceil itself, before
+	 * falling back to computing a larger bound, and then rejecting samples
+	 * whose lower half would indicate a range indivisible by ceil. The use
+	 * of `-ceil % ceil` is analogous to `2^32 % ceil`, but is computable
+	 * in 32-bits.
+	 */
+	u32 rand = get_random_u32();
+	u64 mult;
+
+	/*
+	 * This function is technically undefined for ceil == 0, and in fact
+	 * for the non-underscored constant version in the header, we build bug
+	 * on that. But for the non-constant case, it's convenient to have that
+	 * evaluate to being a straight call to get_random_u32(), so that
+	 * get_random_u32_inclusive() can work over its whole range without
+	 * undefined behavior.
+	 */
+	if (unlikely(!ceil))
+		return rand;
+
+	mult = (u64)ceil * rand;
+	if (unlikely((u32)mult < ceil)) {
+		u32 bound = -ceil % ceil;
+		while (unlikely((u32)mult < bound))
+			mult = (u64)ceil * get_random_u32();
+	}
+	return mult >> 32;
+}
+EXPORT_SYMBOL(__get_random_u32_below);
+
 #ifdef CONFIG_SMP
 /*
  * This function is called when the CPU is coming up, with entry
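
The routine above is Lemire's nearly divisionless bounded-integer technique: multiply a uniform 32-bit sample by ceil, take the high half as the result, and reject only when the low half lands in the bias region of size 2^32 mod ceil. A minimal userspace sketch of the same math (rand32() is a demo stand-in for get_random_u32(), not a quality generator):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uint32_t rand32(void)
{
	return ((uint32_t)rand() << 16) ^ (uint32_t)rand();  /* demo only */
}

/* uniform integer in [0, ceil), ceil > 0 */
static uint32_t random_below(uint32_t ceil)
{
	uint64_t mult = (uint64_t)ceil * rand32();

	if ((uint32_t)mult < ceil) {               /* maybe in the bias region */
		uint32_t bound = -ceil % ceil;     /* equals 2^32 % ceil */

		while ((uint32_t)mult < bound)     /* reject biased samples */
			mult = (uint64_t)ceil * rand32();
	}
	return mult >> 32;                         /* high half is the result */
}

int main(void)
{
	printf("%u\n", random_below(10));          /* uniform in [0, 10) */
	return 0;
}
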
@@ -1220,6 +1220,7 @@ static struct cpufreq_policy *cpufreq_policy_alloc(unsigned int cpu)
 	if (!zalloc_cpumask_var(&policy->real_cpus, GFP_KERNEL))
 		goto err_free_rcpumask;
 
+	init_completion(&policy->kobj_unregister);
 	ret = kobject_init_and_add(&policy->kobj, &ktype_cpufreq,
 				   cpufreq_global_kobject, "policy%u", cpu);
 	if (ret) {
@@ -1258,7 +1259,6 @@ static struct cpufreq_policy *cpufreq_policy_alloc(unsigned int cpu)
 	init_rwsem(&policy->rwsem);
 	spin_lock_init(&policy->transition_lock);
 	init_waitqueue_head(&policy->transition_wait);
-	init_completion(&policy->kobj_unregister);
 	INIT_WORK(&policy->update, handle_update);
 
 	policy->cpu = cpu;
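
The move matters because a kobject_init_and_add() failure can immediately run the kobject release path, which completes kobj_unregister; touching a not-yet-initialized completion is a bug. A schematic sketch of the ordering rule, with a trivial stand-in for the kernel's completion type (all names hypothetical):

#include <stdbool.h>

struct completion { bool done; };          /* stand-in for the kernel type */

static void init_completion(struct completion *c) { c->done = false; }
static void complete(struct completion *c)        { c->done = true; }

struct policy {
	struct completion kobj_unregister;
};

/* release path that a failed registration can invoke right away */
static void policy_release(struct policy *p)
{
	complete(&p->kobj_unregister);     /* must already be initialized */
}

static int register_policy(struct policy *p, bool fail)
{
	if (fail) {
		policy_release(p);         /* error path runs the release */
		return -1;
	}
	return 0;
}

int main(void)
{
	struct policy p;

	init_completion(&p.kobj_unregister);  /* before registration, as in the fix */
	return register_policy(&p, false);
}
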
@@ -790,8 +790,8 @@ config CRYPTO_DEV_CCREE
 	select CRYPTO_ECB
 	select CRYPTO_CTR
 	select CRYPTO_XTS
-	select CRYPTO_SM4
-	select CRYPTO_SM3
+	select CRYPTO_SM4_GENERIC
+	select CRYPTO_SM3_GENERIC
 	help
 	  Say 'Y' to enable a driver for the REE interface of the Arm
 	  TrustZone CryptoCell family of processors. Currently the
@@ -381,6 +381,15 @@ static const struct psp_vdata pspv3 = {
 	.inten_reg		= 0x10690,
 	.intsts_reg		= 0x10694,
 };
+
+static const struct psp_vdata pspv4 = {
+	.sev			= &sevv2,
+	.tee			= &teev1,
+	.feature_reg		= 0x109fc,
+	.inten_reg		= 0x10690,
+	.intsts_reg		= 0x10694,
+};
+
 #endif
 
 static const struct sp_dev_vdata dev_vdata[] = {
@@ -426,7 +435,7 @@ static const struct sp_dev_vdata dev_vdata[] = {
 	{	/* 5 */
 		.bar = 2,
 #ifdef CONFIG_CRYPTO_DEV_SP_PSP
-		.psp_vdata = &pspv2,
+		.psp_vdata = &pspv4,
 #endif
 	},
 	{	/* 6 */
@@ -26,7 +26,7 @@ config CRYPTO_DEV_HISI_SEC2
 	select CRYPTO_SHA1
 	select CRYPTO_SHA256
 	select CRYPTO_SHA512
-	select CRYPTO_SM4
+	select CRYPTO_SM4_GENERIC
 	depends on PCI && PCI_MSI
 	depends on UACCE || UACCE=n
 	depends on ARM64 || (COMPILE_TEST && 64BIT)
@@ -1229,6 +1229,7 @@ struct n2_hash_tmpl {
 	const u8	*hash_init;
 	u8		hw_op_hashsz;
 	u8		digest_size;
+	u8		statesize;
 	u8		block_size;
 	u8		auth_type;
 	u8		hmac_type;
@@ -1260,6 +1261,7 @@ static const struct n2_hash_tmpl hash_tmpls[] = {
 	  .hmac_type	= AUTH_TYPE_HMAC_MD5,
 	  .hw_op_hashsz	= MD5_DIGEST_SIZE,
 	  .digest_size	= MD5_DIGEST_SIZE,
+	  .statesize	= sizeof(struct md5_state),
 	  .block_size	= MD5_HMAC_BLOCK_SIZE },
 	{ .name		= "sha1",
 	  .hash_zero	= sha1_zero_message_hash,
@@ -1268,6 +1270,7 @@ static const struct n2_hash_tmpl hash_tmpls[] = {
 	  .hmac_type	= AUTH_TYPE_HMAC_SHA1,
 	  .hw_op_hashsz	= SHA1_DIGEST_SIZE,
 	  .digest_size	= SHA1_DIGEST_SIZE,
+	  .statesize	= sizeof(struct sha1_state),
 	  .block_size	= SHA1_BLOCK_SIZE },
 	{ .name		= "sha256",
 	  .hash_zero	= sha256_zero_message_hash,
@@ -1276,6 +1279,7 @@ static const struct n2_hash_tmpl hash_tmpls[] = {
 	  .hmac_type	= AUTH_TYPE_HMAC_SHA256,
 	  .hw_op_hashsz	= SHA256_DIGEST_SIZE,
 	  .digest_size	= SHA256_DIGEST_SIZE,
+	  .statesize	= sizeof(struct sha256_state),
 	  .block_size	= SHA256_BLOCK_SIZE },
 	{ .name		= "sha224",
 	  .hash_zero	= sha224_zero_message_hash,
@@ -1284,6 +1288,7 @@ static const struct n2_hash_tmpl hash_tmpls[] = {
 	  .hmac_type	= AUTH_TYPE_RESERVED,
 	  .hw_op_hashsz	= SHA256_DIGEST_SIZE,
 	  .digest_size	= SHA224_DIGEST_SIZE,
+	  .statesize	= sizeof(struct sha256_state),
 	  .block_size	= SHA224_BLOCK_SIZE },
 };
 #define NUM_HASH_TMPLS ARRAY_SIZE(hash_tmpls)
@@ -1424,6 +1429,7 @@ static int __n2_register_one_ahash(const struct n2_hash_tmpl *tmpl)
 
 	halg = &ahash->halg;
 	halg->digestsize = tmpl->digest_size;
+	halg->statesize = tmpl->statesize;
 
 	base = &halg->base;
 	snprintf(base->cra_name, CRYPTO_MAX_ALG_NAME, "%s", tmpl->name);
@@ -1226,7 +1226,7 @@ static int cxl_region_attach(struct cxl_region *cxlr,
 			struct cxl_endpoint_decoder *cxled_target;
 			struct cxl_memdev *cxlmd_target;
 
-			cxled_target = p->targets[pos];
+			cxled_target = p->targets[i];
 			if (!cxled_target)
 				continue;
 
@@ -1923,6 +1923,9 @@ static int cxl_region_probe(struct device *dev)
 	 */
 	up_read(&cxl_region_rwsem);
 
+	if (rc)
+		return rc;
+
 	switch (cxlr->mode) {
 	case CXL_DECODER_PMEM:
 		return devm_cxl_add_pmem_region(cxlr);
@@ -776,8 +776,7 @@ static void remove_sysfs_files(struct devfreq *devfreq,
  * @dev:	the device to add devfreq feature.
  * @profile:	device-specific profile to run devfreq.
  * @governor_name:	name of the policy to choose frequency.
- * @data:	private data for the governor. The devfreq framework does not
- *		touch this value.
+ * @data:	devfreq driver pass to governors, governor should not change it.
  */
 struct devfreq *devfreq_add_device(struct device *dev,
 				   struct devfreq_dev_profile *profile,
@@ -1011,8 +1010,7 @@ static void devm_devfreq_dev_release(struct device *dev, void *res)
  * @dev:	the device to add devfreq feature.
  * @profile:	device-specific profile to run devfreq.
  * @governor_name:	name of the policy to choose frequency.
- * @data:	private data for the governor. The devfreq framework does not
- *		touch this value.
+ * @data:	devfreq driver pass to governors, governor should not change it.
 *
 * This function manages automatically the memory of devfreq device using device
 * resource management and simplify the free operation for memory of devfreq
@@ -21,7 +21,7 @@ struct userspace_data {
 
 static int devfreq_userspace_func(struct devfreq *df, unsigned long *freq)
 {
-	struct userspace_data *data = df->data;
+	struct userspace_data *data = df->governor_data;
 
 	if (data->valid)
 		*freq = data->user_frequency;
@@ -40,7 +40,7 @@ static ssize_t set_freq_store(struct device *dev, struct device_attribute *attr,
 	int err = 0;
 
 	mutex_lock(&devfreq->lock);
-	data = devfreq->data;
+	data = devfreq->governor_data;
 
 	sscanf(buf, "%lu", &wanted);
 	data->user_frequency = wanted;
@@ -60,7 +60,7 @@ static ssize_t set_freq_show(struct device *dev,
 	int err = 0;
 
 	mutex_lock(&devfreq->lock);
-	data = devfreq->data;
+	data = devfreq->governor_data;
 
 	if (data->valid)
 		err = sprintf(buf, "%lu\n", data->user_frequency);
@@ -91,7 +91,7 @@ static int userspace_init(struct devfreq *devfreq)
 		goto out;
 	}
 	data->valid = false;
-	devfreq->data = data;
+	devfreq->governor_data = data;
 
 	err = sysfs_create_group(&devfreq->dev.kobj, &dev_attr_group);
 out:
@@ -107,8 +107,8 @@ static void userspace_exit(struct devfreq *devfreq)
 	if (devfreq->dev.kobj.sd)
 		sysfs_remove_group(&devfreq->dev.kobj, &dev_attr_group);
 
-	kfree(devfreq->data);
-	devfreq->data = NULL;
+	kfree(devfreq->governor_data);
+	devfreq->governor_data = NULL;
 }
 
 static int devfreq_userspace_handler(struct devfreq *devfreq,
@@ -298,6 +298,14 @@ DEVICE_CHANNEL(ch6_dimm_label, S_IRUGO | S_IWUSR,
 	channel_dimm_label_show, channel_dimm_label_store, 6);
 DEVICE_CHANNEL(ch7_dimm_label, S_IRUGO | S_IWUSR,
 	channel_dimm_label_show, channel_dimm_label_store, 7);
+DEVICE_CHANNEL(ch8_dimm_label, S_IRUGO | S_IWUSR,
+	channel_dimm_label_show, channel_dimm_label_store, 8);
+DEVICE_CHANNEL(ch9_dimm_label, S_IRUGO | S_IWUSR,
+	channel_dimm_label_show, channel_dimm_label_store, 9);
+DEVICE_CHANNEL(ch10_dimm_label, S_IRUGO | S_IWUSR,
+	channel_dimm_label_show, channel_dimm_label_store, 10);
+DEVICE_CHANNEL(ch11_dimm_label, S_IRUGO | S_IWUSR,
+	channel_dimm_label_show, channel_dimm_label_store, 11);
 
 /* Total possible dynamic DIMM Label attribute file table */
 static struct attribute *dynamic_csrow_dimm_attr[] = {
@@ -309,6 +317,10 @@ static struct attribute *dynamic_csrow_dimm_attr[] = {
 	&dev_attr_legacy_ch5_dimm_label.attr.attr,
 	&dev_attr_legacy_ch6_dimm_label.attr.attr,
 	&dev_attr_legacy_ch7_dimm_label.attr.attr,
+	&dev_attr_legacy_ch8_dimm_label.attr.attr,
+	&dev_attr_legacy_ch9_dimm_label.attr.attr,
+	&dev_attr_legacy_ch10_dimm_label.attr.attr,
+	&dev_attr_legacy_ch11_dimm_label.attr.attr,
 	NULL
 };
 
@@ -329,6 +341,14 @@ DEVICE_CHANNEL(ch6_ce_count, S_IRUGO,
 	channel_ce_count_show, NULL, 6);
 DEVICE_CHANNEL(ch7_ce_count, S_IRUGO,
 	channel_ce_count_show, NULL, 7);
+DEVICE_CHANNEL(ch8_ce_count, S_IRUGO,
+	channel_ce_count_show, NULL, 8);
+DEVICE_CHANNEL(ch9_ce_count, S_IRUGO,
+	channel_ce_count_show, NULL, 9);
+DEVICE_CHANNEL(ch10_ce_count, S_IRUGO,
+	channel_ce_count_show, NULL, 10);
+DEVICE_CHANNEL(ch11_ce_count, S_IRUGO,
+	channel_ce_count_show, NULL, 11);
 
 /* Total possible dynamic ce_count attribute file table */
 static struct attribute *dynamic_csrow_ce_count_attr[] = {
@@ -340,6 +360,10 @@ static struct attribute *dynamic_csrow_ce_count_attr[] = {
 	&dev_attr_legacy_ch5_ce_count.attr.attr,
 	&dev_attr_legacy_ch6_ce_count.attr.attr,
 	&dev_attr_legacy_ch7_ce_count.attr.attr,
+	&dev_attr_legacy_ch8_ce_count.attr.attr,
+	&dev_attr_legacy_ch9_ce_count.attr.attr,
+	&dev_attr_legacy_ch10_ce_count.attr.attr,
+	&dev_attr_legacy_ch11_ce_count.attr.attr,
 	NULL
 };
@@ -3005,14 +3005,15 @@ static int amdgpu_device_ip_suspend_phase2(struct amdgpu_device *adev)
 			continue;
 		}
 
-		/* skip suspend of gfx and psp for S0ix
+		/* skip suspend of gfx/mes and psp for S0ix
 		 * gfx is in gfxoff state, so on resume it will exit gfxoff just
 		 * like at runtime. PSP is also part of the always on hardware
 		 * so no need to suspend it.
 		 */
 		if (adev->in_s0ix &&
 		    (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_PSP ||
-		     adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GFX))
+		     adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GFX ||
+		     adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_MES))
 			continue;
 
 		/* XXX handle errors */
@@ -2040,6 +2040,15 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
 			 "See modparam exp_hw_support\n");
 		return -ENODEV;
 	}
+	/* differentiate between P10 and P11 asics with the same DID */
+	if (pdev->device == 0x67FF &&
+	    (pdev->revision == 0xE3 ||
+	     pdev->revision == 0xE7 ||
+	     pdev->revision == 0xF3 ||
+	     pdev->revision == 0xF7)) {
+		flags &= ~AMD_ASIC_MASK;
+		flags |= CHIP_POLARIS10;
+	}
 
 	/* Due to hardware bugs, S/G Display on raven requires a 1:1 IOMMU mapping,
 	 * however, SME requires an indirect IOMMU mapping because the encryption
@@ -2109,12 +2118,12 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
 
 	pci_set_drvdata(pdev, ddev);
 
-	ret = amdgpu_driver_load_kms(adev, ent->driver_data);
+	ret = amdgpu_driver_load_kms(adev, flags);
 	if (ret)
 		goto err_pci;
 
 retry_init:
-	ret = drm_dev_register(ddev, ent->driver_data);
+	ret = drm_dev_register(ddev, flags);
 	if (ret == -EAGAIN && ++retry <= 3) {
 		DRM_INFO("retry init %d\n", retry);
 		/* Don't request EX mode too frequently which is attacking */
@@ -1509,7 +1509,8 @@ u64 amdgpu_bo_gpu_offset_no_check(struct amdgpu_bo *bo)
 uint32_t amdgpu_bo_get_preferred_domain(struct amdgpu_device *adev,
 					uint32_t domain)
 {
-	if (domain == (AMDGPU_GEM_DOMAIN_VRAM | AMDGPU_GEM_DOMAIN_GTT)) {
+	if ((domain == (AMDGPU_GEM_DOMAIN_VRAM | AMDGPU_GEM_DOMAIN_GTT)) &&
+	    ((adev->asic_type == CHIP_CARRIZO) || (adev->asic_type == CHIP_STONEY))) {
 		domain = AMDGPU_GEM_DOMAIN_VRAM;
 		if (adev->gmc.real_vram_size <= AMDGPU_SG_THRESHOLD)
 			domain = AMDGPU_GEM_DOMAIN_GTT;
@@ -1339,7 +1339,8 @@ static int mes_v11_0_late_init(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
-	if (!amdgpu_in_reset(adev) &&
+	/* it's only intended for use in mes_self_test case, not for s0ix and reset */
+	if (!amdgpu_in_reset(adev) && !adev->in_s0ix &&
 	    (adev->ip_versions[GC_HWIP][0] != IP_VERSION(11, 0, 3)))
 		amdgpu_mes_self_test(adev);
@@ -319,7 +319,7 @@ static void mmhub_v2_0_init_cache_regs(struct amdgpu_device *adev)
 
 	tmp = mmMMVM_L2_CNTL5_DEFAULT;
 	tmp = REG_SET_FIELD(tmp, MMVM_L2_CNTL5, L2_CACHE_SMALLK_FRAGMENT_SIZE, 0);
-	WREG32_SOC15(GC, 0, mmMMVM_L2_CNTL5, tmp);
+	WREG32_SOC15(MMHUB, 0, mmMMVM_L2_CNTL5, tmp);
 }
 
 static void mmhub_v2_0_enable_system_domain(struct amdgpu_device *adev)
@@ -243,7 +243,7 @@ static void mmhub_v2_3_init_cache_regs(struct amdgpu_device *adev)
 
 	tmp = mmMMVM_L2_CNTL5_DEFAULT;
 	tmp = REG_SET_FIELD(tmp, MMVM_L2_CNTL5, L2_CACHE_SMALLK_FRAGMENT_SIZE, 0);
-	WREG32_SOC15(GC, 0, mmMMVM_L2_CNTL5, tmp);
+	WREG32_SOC15(MMHUB, 0, mmMMVM_L2_CNTL5, tmp);
 }
 
 static void mmhub_v2_3_enable_system_domain(struct amdgpu_device *adev)
@@ -275,7 +275,7 @@ static void mmhub_v3_0_init_cache_regs(struct amdgpu_device *adev)
 
 	tmp = regMMVM_L2_CNTL5_DEFAULT;
 	tmp = REG_SET_FIELD(tmp, MMVM_L2_CNTL5, L2_CACHE_SMALLK_FRAGMENT_SIZE, 0);
-	WREG32_SOC15(GC, 0, regMMVM_L2_CNTL5, tmp);
+	WREG32_SOC15(MMHUB, 0, regMMVM_L2_CNTL5, tmp);
 }
 
 static void mmhub_v3_0_enable_system_domain(struct amdgpu_device *adev)
@@ -269,7 +269,7 @@ static void mmhub_v3_0_1_init_cache_regs(struct amdgpu_device *adev)
 
 	tmp = regMMVM_L2_CNTL5_DEFAULT;
 	tmp = REG_SET_FIELD(tmp, MMVM_L2_CNTL5, L2_CACHE_SMALLK_FRAGMENT_SIZE, 0);
-	WREG32_SOC15(GC, 0, regMMVM_L2_CNTL5, tmp);
+	WREG32_SOC15(MMHUB, 0, regMMVM_L2_CNTL5, tmp);
 }
 
 static void mmhub_v3_0_1_enable_system_domain(struct amdgpu_device *adev)
@@ -268,7 +268,7 @@ static void mmhub_v3_0_2_init_cache_regs(struct amdgpu_device *adev)
 
 	tmp = regMMVM_L2_CNTL5_DEFAULT;
 	tmp = REG_SET_FIELD(tmp, MMVM_L2_CNTL5, L2_CACHE_SMALLK_FRAGMENT_SIZE, 0);
-	WREG32_SOC15(GC, 0, regMMVM_L2_CNTL5, tmp);
+	WREG32_SOC15(MMHUB, 0, regMMVM_L2_CNTL5, tmp);
 }
 
 static void mmhub_v3_0_2_enable_system_domain(struct amdgpu_device *adev)
@@ -1512,6 +1512,7 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
 	case IP_VERSION(3, 0, 1):
 	case IP_VERSION(3, 1, 2):
 	case IP_VERSION(3, 1, 3):
+	case IP_VERSION(3, 1, 4):
 	case IP_VERSION(3, 1, 5):
 	case IP_VERSION(3, 1, 6):
 		init_data.flags.gpu_vm_support = true;
@@ -522,9 +522,9 @@ typedef enum {
 	TEMP_HOTSPOT_M,
 	TEMP_MEM,
 	TEMP_VR_GFX,
+	TEMP_VR_SOC,
 	TEMP_VR_MEM0,
 	TEMP_VR_MEM1,
-	TEMP_VR_SOC,
 	TEMP_VR_U,
 	TEMP_LIQUID0,
 	TEMP_LIQUID1,
@@ -28,6 +28,7 @@
 #define SMU13_DRIVER_IF_VERSION_INV 0xFFFFFFFF
 #define SMU13_DRIVER_IF_VERSION_YELLOW_CARP 0x04
 #define SMU13_DRIVER_IF_VERSION_ALDE 0x08
+#define SMU13_DRIVER_IF_VERSION_SMU_V13_0_0_0 0x34
 #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_4 0x07
 #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_5 0x04
 #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_0_10 0x32
@@ -289,6 +289,8 @@ int smu_v13_0_check_fw_version(struct smu_context *smu)
 		smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_ALDE;
 		break;
 	case IP_VERSION(13, 0, 0):
+		smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_SMU_V13_0_0_0;
+		break;
 	case IP_VERSION(13, 0, 10):
 		smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_SMU_V13_0_0_10;
 		break;
@@ -187,6 +187,8 @@ static struct cmn2asic_mapping smu_v13_0_0_feature_mask_map[SMU_FEATURE_COUNT] =
 	FEA_MAP(MEM_TEMP_READ),
 	FEA_MAP(ATHUB_MMHUB_PG),
 	FEA_MAP(SOC_PCC),
+	[SMU_FEATURE_DPM_VCLK_BIT] = {1, FEATURE_MM_DPM_BIT},
+	[SMU_FEATURE_DPM_DCLK_BIT] = {1, FEATURE_MM_DPM_BIT},
 };
 
 static struct cmn2asic_mapping smu_v13_0_0_table_map[SMU_TABLE_COUNT] = {
@@ -517,6 +519,23 @@ static int smu_v13_0_0_set_default_dpm_table(struct smu_context *smu)
 						     dpm_table);
 		if (ret)
 			return ret;
+
+		/*
+		 * Update the reported maximum shader clock to the value
+		 * which can be guarded to be achieved on all cards. This
+		 * is aligned with Window setting. And considering that value
+		 * might be not the peak frequency the card can achieve, it
+		 * is normal some real-time clock frequency can overtake this
+		 * labelled maximum clock frequency(for example in pp_dpm_sclk
+		 * sysfs output).
+		 */
+		if (skutable->DriverReportedClocks.GameClockAc &&
+		    (dpm_table->dpm_levels[dpm_table->count - 1].value >
+		     skutable->DriverReportedClocks.GameClockAc)) {
+			dpm_table->dpm_levels[dpm_table->count - 1].value =
+				skutable->DriverReportedClocks.GameClockAc;
+			dpm_table->max = skutable->DriverReportedClocks.GameClockAc;
+		}
 	} else {
 		dpm_table->count = 1;
 		dpm_table->dpm_levels[0].value = smu->smu_table.boot_values.gfxclk / 100;
@@ -779,6 +798,57 @@ static int smu_v13_0_0_get_smu_metrics_data(struct smu_context *smu,
 	return ret;
 }
 
+static int smu_v13_0_0_get_dpm_ultimate_freq(struct smu_context *smu,
+					     enum smu_clk_type clk_type,
+					     uint32_t *min,
+					     uint32_t *max)
+{
+	struct smu_13_0_dpm_context *dpm_context =
+		smu->smu_dpm.dpm_context;
+	struct smu_13_0_dpm_table *dpm_table;
+
+	switch (clk_type) {
+	case SMU_MCLK:
+	case SMU_UCLK:
+		/* uclk dpm table */
+		dpm_table = &dpm_context->dpm_tables.uclk_table;
+		break;
+	case SMU_GFXCLK:
+	case SMU_SCLK:
+		/* gfxclk dpm table */
+		dpm_table = &dpm_context->dpm_tables.gfx_table;
+		break;
+	case SMU_SOCCLK:
+		/* socclk dpm table */
+		dpm_table = &dpm_context->dpm_tables.soc_table;
+		break;
+	case SMU_FCLK:
+		/* fclk dpm table */
+		dpm_table = &dpm_context->dpm_tables.fclk_table;
+		break;
+	case SMU_VCLK:
+	case SMU_VCLK1:
+		/* vclk dpm table */
+		dpm_table = &dpm_context->dpm_tables.vclk_table;
+		break;
+	case SMU_DCLK:
+	case SMU_DCLK1:
+		/* dclk dpm table */
+		dpm_table = &dpm_context->dpm_tables.dclk_table;
+		break;
+	default:
+		dev_err(smu->adev->dev, "Unsupported clock type!\n");
+		return -EINVAL;
+	}
+
+	if (min)
+		*min = dpm_table->min;
+	if (max)
+		*max = dpm_table->max;
+
+	return 0;
+}
+
 static int smu_v13_0_0_read_sensor(struct smu_context *smu,
 				   enum amd_pp_sensors sensor,
 				   void *data,
@@ -1281,9 +1351,17 @@ static int smu_v13_0_0_populate_umd_state_clk(struct smu_context *smu)
 		&dpm_context->dpm_tables.fclk_table;
 	struct smu_umd_pstate_table *pstate_table =
 		&smu->pstate_table;
+	struct smu_table_context *table_context = &smu->smu_table;
+	PPTable_t *pptable = table_context->driver_pptable;
+	DriverReportedClocks_t driver_clocks =
+		pptable->SkuTable.DriverReportedClocks;
 
 	pstate_table->gfxclk_pstate.min = gfx_table->min;
-	pstate_table->gfxclk_pstate.peak = gfx_table->max;
+	if (driver_clocks.GameClockAc &&
+	    (driver_clocks.GameClockAc < gfx_table->max))
+		pstate_table->gfxclk_pstate.peak = driver_clocks.GameClockAc;
+	else
+		pstate_table->gfxclk_pstate.peak = gfx_table->max;
 
 	pstate_table->uclk_pstate.min = mem_table->min;
 	pstate_table->uclk_pstate.peak = mem_table->max;
@@ -1300,12 +1378,12 @@ static int smu_v13_0_0_populate_umd_state_clk(struct smu_context *smu)
 	pstate_table->fclk_pstate.min = fclk_table->min;
 	pstate_table->fclk_pstate.peak = fclk_table->max;
 
-	/*
-	 * For now, just use the mininum clock frequency.
-	 * TODO: update them when the real pstate settings available
-	 */
-	pstate_table->gfxclk_pstate.standard = gfx_table->min;
-	pstate_table->uclk_pstate.standard = mem_table->min;
+	if (driver_clocks.BaseClockAc &&
+	    driver_clocks.BaseClockAc < gfx_table->max)
+		pstate_table->gfxclk_pstate.standard = driver_clocks.BaseClockAc;
+	else
+		pstate_table->gfxclk_pstate.standard = gfx_table->max;
+	pstate_table->uclk_pstate.standard = mem_table->max;
 	pstate_table->socclk_pstate.standard = soc_table->min;
 	pstate_table->vclk_pstate.standard = vclk_table->min;
 	pstate_table->dclk_pstate.standard = dclk_table->min;
@@ -1339,12 +1417,23 @@ static void smu_v13_0_0_get_unique_id(struct smu_context *smu)
 static int smu_v13_0_0_get_fan_speed_pwm(struct smu_context *smu,
 					 uint32_t *speed)
 {
+	int ret;
+
 	if (!speed)
 		return -EINVAL;
 
-	return smu_v13_0_0_get_smu_metrics_data(smu,
-						METRICS_CURR_FANPWM,
-						speed);
+	ret = smu_v13_0_0_get_smu_metrics_data(smu,
+					       METRICS_CURR_FANPWM,
+					       speed);
+	if (ret) {
+		dev_err(smu->adev->dev, "Failed to get fan speed(PWM)!");
+		return ret;
+	}
+
+	/* Convert the PMFW output which is in percent to pwm(255) based */
+	*speed = MIN(*speed * 255 / 100, 255);
+
+	return 0;
 }
 
 static int smu_v13_0_0_get_fan_speed_rpm(struct smu_context *smu,
@@ -1813,7 +1902,7 @@ static const struct pptable_funcs smu_v13_0_0_ppt_funcs = {
 	.get_enabled_mask = smu_cmn_get_enabled_mask,
 	.dpm_set_vcn_enable = smu_v13_0_set_vcn_enable,
 	.dpm_set_jpeg_enable = smu_v13_0_set_jpeg_enable,
-	.get_dpm_ultimate_freq = smu_v13_0_get_dpm_ultimate_freq,
+	.get_dpm_ultimate_freq = smu_v13_0_0_get_dpm_ultimate_freq,
 	.get_vbios_bootup_values = smu_v13_0_get_vbios_bootup_values,
 	.read_sensor = smu_v13_0_0_read_sensor,
 	.feature_is_enabled = smu_cmn_feature_is_enabled,
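
The PWM conversion added to both fan-speed helpers here and below is a straight percent-to-byte rescale clamped at 255. In isolation:

#include <stdint.h>
#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* map a firmware percentage (0-100) onto the 0-255 pwm scale */
static uint32_t percent_to_pwm255(uint32_t pct)
{
	return MIN(pct * 255 / 100, 255);   /* the clamp guards pct > 100 */
}

int main(void)
{
	printf("%u %u %u\n", percent_to_pwm255(0), percent_to_pwm255(50),
	       percent_to_pwm255(100));     /* prints: 0 127 255 */
	return 0;
}
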
@@ -189,6 +189,8 @@ static struct cmn2asic_mapping smu_v13_0_7_feature_mask_map[SMU_FEATURE_COUNT] =
 	FEA_MAP(MEM_TEMP_READ),
 	FEA_MAP(ATHUB_MMHUB_PG),
 	FEA_MAP(SOC_PCC),
+	[SMU_FEATURE_DPM_VCLK_BIT] = {1, FEATURE_MM_DPM_BIT},
+	[SMU_FEATURE_DPM_DCLK_BIT] = {1, FEATURE_MM_DPM_BIT},
 };
 
 static struct cmn2asic_mapping smu_v13_0_7_table_map[SMU_TABLE_COUNT] = {
@@ -1359,12 +1361,23 @@ static int smu_v13_0_7_populate_umd_state_clk(struct smu_context *smu)
 static int smu_v13_0_7_get_fan_speed_pwm(struct smu_context *smu,
 					 uint32_t *speed)
 {
+	int ret;
+
 	if (!speed)
 		return -EINVAL;
 
-	return smu_v13_0_7_get_smu_metrics_data(smu,
-						METRICS_CURR_FANPWM,
-						speed);
+	ret = smu_v13_0_7_get_smu_metrics_data(smu,
+					       METRICS_CURR_FANPWM,
+					       speed);
+	if (ret) {
+		dev_err(smu->adev->dev, "Failed to get fan speed(PWM)!");
+		return ret;
+	}
+
+	/* Convert the PMFW output which is in percent to pwm(255) based */
+	*speed = MIN(*speed * 255 / 100, 255);
+
+	return 0;
 }
 
 static int smu_v13_0_7_get_fan_speed_rpm(struct smu_context *smu,
@@ -582,6 +582,9 @@ void drm_connector_cleanup(struct drm_connector *connector)
 	mutex_destroy(&connector->mutex);
 
 	memset(connector, 0, sizeof(*connector));
+
+	if (dev->registered)
+		drm_sysfs_hotplug_event(dev);
 }
 EXPORT_SYMBOL(drm_connector_cleanup);
@@ -258,7 +258,12 @@ struct etnaviv_vram_mapping *etnaviv_gem_mapping_get(
 		if (mapping->use == 0) {
 			mutex_lock(&mmu_context->lock);
 			if (mapping->context == mmu_context)
-				mapping->use += 1;
+				if (va && mapping->iova != va) {
+					etnaviv_iommu_reap_mapping(mapping);
+					mapping = NULL;
+				} else {
+					mapping->use += 1;
+				}
 			else
 				mapping = NULL;
 			mutex_unlock(&mmu_context->lock);
@@ -135,6 +135,19 @@ static void etnaviv_iommu_remove_mapping(struct etnaviv_iommu_context *context,
 	drm_mm_remove_node(&mapping->vram_node);
 }
 
+void etnaviv_iommu_reap_mapping(struct etnaviv_vram_mapping *mapping)
+{
+	struct etnaviv_iommu_context *context = mapping->context;
+
+	lockdep_assert_held(&context->lock);
+	WARN_ON(mapping->use);
+
+	etnaviv_iommu_remove_mapping(context, mapping);
+	etnaviv_iommu_context_put(mapping->context);
+	mapping->context = NULL;
+	list_del_init(&mapping->mmu_node);
+}
+
 static int etnaviv_iommu_find_iova(struct etnaviv_iommu_context *context,
 				   struct drm_mm_node *node, size_t size)
 {
@@ -202,10 +215,7 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu_context *context,
 	 * this mapping.
 	 */
 	list_for_each_entry_safe(m, n, &list, scan_node) {
-		etnaviv_iommu_remove_mapping(context, m);
-		etnaviv_iommu_context_put(m->context);
-		m->context = NULL;
-		list_del_init(&m->mmu_node);
+		etnaviv_iommu_reap_mapping(m);
 		list_del_init(&m->scan_node);
 	}
 
@@ -257,10 +267,7 @@ static int etnaviv_iommu_insert_exact(struct etnaviv_iommu_context *context,
 	}
 
 	list_for_each_entry_safe(m, n, &scan_list, scan_node) {
-		etnaviv_iommu_remove_mapping(context, m);
-		etnaviv_iommu_context_put(m->context);
-		m->context = NULL;
-		list_del_init(&m->mmu_node);
+		etnaviv_iommu_reap_mapping(m);
 		list_del_init(&m->scan_node);
 	}
@@ -91,6 +91,7 @@ int etnaviv_iommu_map_gem(struct etnaviv_iommu_context *context,
 			  struct etnaviv_vram_mapping *mapping, u64 va);
 void etnaviv_iommu_unmap_gem(struct etnaviv_iommu_context *context,
 			     struct etnaviv_vram_mapping *mapping);
+void etnaviv_iommu_reap_mapping(struct etnaviv_vram_mapping *mapping);
 
 int etnaviv_iommu_get_suballoc_va(struct etnaviv_iommu_context *ctx,
 				  struct etnaviv_vram_mapping *mapping,
@@ -137,9 +137,9 @@ static enum port intel_dsi_seq_port_to_port(struct intel_dsi *intel_dsi,
 		return ffs(intel_dsi->ports) - 1;
 
 	if (seq_port) {
-		if (intel_dsi->ports & PORT_B)
+		if (intel_dsi->ports & BIT(PORT_B))
 			return PORT_B;
-		else if (intel_dsi->ports & PORT_C)
+		else if (intel_dsi->ports & BIT(PORT_C))
 			return PORT_C;
 	}
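
The bug fixed here is testing a bitmask against an enum value instead of the enum's bit: ports is a mask of BIT(port) flags, so `ports & PORT_B` accidentally tests bit 0. A minimal illustration of the pitfall (enum values are illustrative):

#include <assert.h>

#define BIT(nr) (1UL << (nr))

enum port { PORT_A, PORT_B, PORT_C };        /* values 0, 1, 2 */

int main(void)
{
	unsigned long ports = BIT(PORT_C);   /* only port C present */

	assert(ports & BIT(PORT_C));         /* correct test: true */
	assert(!(ports & BIT(PORT_B)));
	/* buggy form: (ports & PORT_C) == (0x4 & 0x2) == 0, silently misses C */
	return 0;
}
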
@@ -729,37 +729,74 @@ static int eb_reserve(struct i915_execbuffer *eb)
 	bool unpinned;
 
 	/*
-	 * Attempt to pin all of the buffers into the GTT.
-	 * This is done in 2 phases:
+	 * We have one more buffers that we couldn't bind, which could be due to
+	 * various reasons. To resolve this we have 4 passes, with every next
+	 * level turning the screws tighter:
 	 *
-	 * 1. Unbind all objects that do not match the GTT constraints for
-	 *    the execbuffer (fenceable, mappable, alignment etc).
-	 * 2. Bind new objects.
+	 * 0. Unbind all objects that do not match the GTT constraints for the
+	 * execbuffer (fenceable, mappable, alignment etc). Bind all new
+	 * objects. This avoids unnecessary unbinding of later objects in order
+	 * to make room for the earlier objects *unless* we need to defragment.
 	 *
-	 * This avoid unnecessary unbinding of later objects in order to make
-	 * room for the earlier objects *unless* we need to defragment.
+	 * 1. Reorder the buffers, where objects with the most restrictive
+	 * placement requirements go first (ignoring fixed location buffers for
+	 * now). For example, objects needing the mappable aperture (the first
+	 * 256M of GTT), should go first vs objects that can be placed just
+	 * about anywhere. Repeat the previous pass.
 	 *
-	 * Defragmenting is skipped if all objects are pinned at a fixed location.
+	 * 2. Consider buffers that are pinned at a fixed location. Also try to
+	 * evict the entire VM this time, leaving only objects that we were
+	 * unable to lock. Try again to bind the buffers. (still using the new
+	 * buffer order).
+	 *
+	 * 3. We likely have object lock contention for one or more stubborn
+	 * objects in the VM, for which we need to evict to make forward
+	 * progress (perhaps we are fighting the shrinker?). When evicting the
+	 * VM this time around, anything that we can't lock we now track using
+	 * the busy_bo, using the full lock (after dropping the vm->mutex to
+	 * prevent deadlocks), instead of trylock. We then continue to evict the
+	 * VM, this time with the stubborn object locked, which we can now
+	 * hopefully unbind (if still bound in the VM). Repeat until the VM is
+	 * evicted. Finally we should be able bind everything.
 	 */
-	for (pass = 0; pass <= 2; pass++) {
+	for (pass = 0; pass <= 3; pass++) {
 		int pin_flags = PIN_USER | PIN_VALIDATE;
 
 		if (pass == 0)
 			pin_flags |= PIN_NONBLOCK;
 
 		if (pass >= 1)
-			unpinned = eb_unbind(eb, pass == 2);
+			unpinned = eb_unbind(eb, pass >= 2);
 
 		if (pass == 2) {
 			err = mutex_lock_interruptible(&eb->context->vm->mutex);
 			if (!err) {
-				err = i915_gem_evict_vm(eb->context->vm, &eb->ww);
+				err = i915_gem_evict_vm(eb->context->vm, &eb->ww, NULL);
 				mutex_unlock(&eb->context->vm->mutex);
 			}
 			if (err)
 				return err;
 		}
 
+		if (pass == 3) {
+retry:
+			err = mutex_lock_interruptible(&eb->context->vm->mutex);
+			if (!err) {
+				struct drm_i915_gem_object *busy_bo = NULL;
+
+				err = i915_gem_evict_vm(eb->context->vm, &eb->ww, &busy_bo);
+				mutex_unlock(&eb->context->vm->mutex);
+				if (err && busy_bo) {
+					err = i915_gem_object_lock(busy_bo, &eb->ww);
+					i915_gem_object_put(busy_bo);
+					if (!err)
+						goto retry;
+				}
+			}
+			if (err)
+				return err;
+		}
+
 		list_for_each_entry(ev, &eb->unbound, bind_link) {
 			err = eb_reserve_vma(eb, ev, pin_flags);
 			if (err)

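The pass-3 comment above describes a generic trylock-then-full-lock retry shape. A compilable userspace sketch of that shape, with pthread mutexes standing in for vm->mutex and the per-object ww locks (all names illustrative, not the i915 API):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct object { pthread_mutex_t lock; bool bound; };

/* Try to evict everything; on contention report the busy object and bail. */
static int evict_all(struct object *objs, int n, struct object **busy)
{
	for (int i = 0; i < n; i++) {
		if (pthread_mutex_trylock(&objs[i].lock) != 0) {
			*busy = &objs[i];	/* caller will take this lock fully */
			return -1;		/* stand-in for -EBUSY */
		}
		objs[i].bound = false;		/* "unbind" */
		pthread_mutex_unlock(&objs[i].lock);
	}
	return 0;
}

static void evict_vm_retry(pthread_mutex_t *vm_lock, struct object *objs, int n)
{
	struct object *busy;
	int err;

retry:
	busy = NULL;
	pthread_mutex_lock(vm_lock);
	err = evict_all(objs, n, &busy);
	pthread_mutex_unlock(vm_lock);

	if (err && busy) {
		/* Full (blocking) lock taken with vm_lock dropped, so no
		 * deadlock; once acquired, the stubborn object can go too. */
		pthread_mutex_lock(&busy->lock);
		busy->bound = false;
		pthread_mutex_unlock(&busy->lock);
		goto retry;
	}
}

int main(void)
{
	pthread_mutex_t vm_lock = PTHREAD_MUTEX_INITIALIZER;
	struct object objs[2] = {
		{ PTHREAD_MUTEX_INITIALIZER, true },
		{ PTHREAD_MUTEX_INITIALIZER, true },
	};

	evict_vm_retry(&vm_lock, objs, 2);
	printf("bound: %d %d\n", objs[0].bound, objs[1].bound);
	return 0;
}
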
@@ -369,7 +369,7 @@ static vm_fault_t vm_fault_gtt(struct vm_fault *vmf)
 		if (vma == ERR_PTR(-ENOSPC)) {
 			ret = mutex_lock_interruptible(&ggtt->vm.mutex);
 			if (!ret) {
-				ret = i915_gem_evict_vm(&ggtt->vm, &ww);
+				ret = i915_gem_evict_vm(&ggtt->vm, &ww, NULL);
 				mutex_unlock(&ggtt->vm.mutex);
 			}
 			if (ret)

@@ -761,6 +761,9 @@ bool i915_gem_object_needs_ccs_pages(struct drm_i915_gem_object *obj)
 	if (!HAS_FLAT_CCS(to_i915(obj->base.dev)))
 		return false;
 
+	if (obj->flags & I915_BO_ALLOC_CCS_AUX)
+		return true;
+
 	for (i = 0; i < obj->mm.n_placements; i++) {
 		/* Compression is not allowed for the objects with smem placement */
 		if (obj->mm.placements[i]->type == INTEL_MEMORY_SYSTEM)

@@ -327,16 +327,18 @@ struct drm_i915_gem_object {
  * dealing with userspace objects the CPU fault handler is free to ignore this.
  */
 #define I915_BO_ALLOC_GPU_ONLY	  BIT(6)
+#define I915_BO_ALLOC_CCS_AUX	  BIT(7)
 #define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | \
 			     I915_BO_ALLOC_VOLATILE | \
 			     I915_BO_ALLOC_CPU_CLEAR | \
 			     I915_BO_ALLOC_USER | \
 			     I915_BO_ALLOC_PM_VOLATILE | \
 			     I915_BO_ALLOC_PM_EARLY | \
-			     I915_BO_ALLOC_GPU_ONLY)
-#define I915_BO_READONLY	  BIT(7)
-#define I915_TILING_QUIRK_BIT	  8 /* unknown swizzling; do not release! */
-#define I915_BO_PROTECTED	  BIT(9)
+			     I915_BO_ALLOC_GPU_ONLY | \
+			     I915_BO_ALLOC_CCS_AUX)
+#define I915_BO_READONLY	  BIT(8)
+#define I915_TILING_QUIRK_BIT	  9 /* unknown swizzling; do not release! */
+#define I915_BO_PROTECTED	  BIT(10)
 /**
  * @mem_flags - Mutable placement-related flags
  *

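Renumbering like this is easy to get wrong: inserting BIT(7) forces every later flag up one position. A small compile-time guard of the kind one might add (illustrative only, not part of the patch; flag names shortened):

#include <assert.h>

#define BIT(n) (1u << (n))

#define ALLOC_GPU_ONLY	BIT(6)
#define ALLOC_CCS_AUX	BIT(7)	/* newly inserted flag */
#define READONLY	BIT(8)	/* was BIT(7); had to move up */
#define PROTECTED	BIT(10)

/* Fails to compile if two flags ever land on the same bit. */
static_assert((ALLOC_CCS_AUX & (ALLOC_GPU_ONLY | READONLY | PROTECTED)) == 0,
	      "object flag bits must not overlap");

int main(void) { return 0; }
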
@@ -50,6 +50,7 @@ static int i915_ttm_backup(struct i915_gem_apply_to_region *apply,
 		container_of(bo->bdev, typeof(*i915), bdev);
 	struct drm_i915_gem_object *backup;
 	struct ttm_operation_ctx ctx = {};
+	unsigned int flags;
 	int err = 0;
 
 	if (bo->resource->mem_type == I915_PL_SYSTEM || obj->ttm.backup)
@@ -65,7 +66,22 @@ static int i915_ttm_backup(struct i915_gem_apply_to_region *apply,
 	if (obj->flags & I915_BO_ALLOC_PM_VOLATILE)
 		return 0;
 
-	backup = i915_gem_object_create_shmem(i915, obj->base.size);
+	/*
+	 * It seems that we might have some framebuffers still pinned at this
+	 * stage, but for such objects we might also need to deal with the CCS
+	 * aux state. Make sure we force the save/restore of the CCS state,
+	 * otherwise we might observe display corruption, when returning from
+	 * suspend.
+	 */
+	flags = 0;
+	if (i915_gem_object_needs_ccs_pages(obj)) {
+		WARN_ON_ONCE(!i915_gem_object_is_framebuffer(obj));
+		WARN_ON_ONCE(!pm_apply->allow_gpu);
+
+		flags = I915_BO_ALLOC_CCS_AUX;
+	}
+	backup = i915_gem_object_create_region(i915->mm.regions[INTEL_REGION_SMEM],
+					       obj->base.size, 0, flags);
 	if (IS_ERR(backup))
 		return PTR_ERR(backup);
 

@@ -341,6 +341,16 @@ static int emit_no_arbitration(struct i915_request *rq)
 	return 0;
 }
 
+static int max_pte_pkt_size(struct i915_request *rq, int pkt)
+{
+	struct intel_ring *ring = rq->ring;
+
+	pkt = min_t(int, pkt, (ring->space - rq->reserved_space) / sizeof(u32) + 5);
+	pkt = min_t(int, pkt, (ring->size - ring->emit) / sizeof(u32) + 5);
+
+	return pkt;
+}
+
 static int emit_pte(struct i915_request *rq,
 		    struct sgt_dma *it,
 		    enum i915_cache_level cache_level,
@@ -387,8 +397,7 @@ static int emit_pte(struct i915_request *rq,
 		return PTR_ERR(cs);
 
 	/* Pack as many PTE updates as possible into a single MI command */
-	pkt = min_t(int, dword_length, ring->space / sizeof(u32) + 5);
-	pkt = min_t(int, pkt, (ring->size - ring->emit) / sizeof(u32) + 5);
+	pkt = max_pte_pkt_size(rq, dword_length);
 
 	hdr = cs;
 	*cs++ = MI_STORE_DATA_IMM | REG_BIT(21); /* as qword elements */
@@ -421,8 +430,7 @@ static int emit_pte(struct i915_request *rq,
 		}
 	}
 
-	pkt = min_t(int, dword_rem, ring->space / sizeof(u32) + 5);
-	pkt = min_t(int, pkt, (ring->size - ring->emit) / sizeof(u32) + 5);
+	pkt = max_pte_pkt_size(rq, dword_rem);
 
 	hdr = cs;
 	*cs++ = MI_STORE_DATA_IMM | REG_BIT(21);

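Besides deduplicating the two call sites, the helper also subtracts rq->reserved_space, which the open-coded clamps did not. A toy userspace version of the same double clamp (made-up numbers; min_t replaced by a plain macro):

#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* pkt is clamped by free ring space minus the reserved tail, and by the
 * room left before the ring wraps (byte counts divided down to dwords,
 * +5 for the command header, as in the kernel helper). */
static int max_pkt_size(int pkt, int space, int reserved, int size, int emit)
{
	pkt = MIN(pkt, (space - reserved) / 4 + 5);
	pkt = MIN(pkt, (size - emit) / 4 + 5);
	return pkt;
}

int main(void)
{
	/* 512 bytes free but 64 reserved; 128 bytes until the ring wraps. */
	printf("pkt = %d\n", max_pkt_size(256, 512, 64, 4096, 3968));
	return 0;
}
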
@@ -416,6 +416,11 @@ int i915_gem_evict_for_node(struct i915_address_space *vm,
  * @vm: Address space to cleanse
  * @ww: An optional struct i915_gem_ww_ctx. If not NULL, i915_gem_evict_vm
  * will be able to evict vma's locked by the ww as well.
+ * @busy_bo: Optional pointer to struct drm_i915_gem_object. If not NULL, then
+ * in the event i915_gem_evict_vm() is unable to trylock an object for eviction,
+ * then @busy_bo will point to it. -EBUSY is also returned. The caller must drop
+ * the vm->mutex, before trying again to acquire the contended lock. The caller
+ * also owns a reference to the object.
  *
  * This function evicts all vmas from a vm.
  *
@@ -425,7 +430,8 @@ int i915_gem_evict_for_node(struct i915_address_space *vm,
  * To clarify: This is for freeing up virtual address space, not for freeing
  * memory in e.g. the shrinker.
  */
-int i915_gem_evict_vm(struct i915_address_space *vm, struct i915_gem_ww_ctx *ww)
+int i915_gem_evict_vm(struct i915_address_space *vm, struct i915_gem_ww_ctx *ww,
+		      struct drm_i915_gem_object **busy_bo)
 {
 	int ret = 0;
 
@@ -457,15 +463,22 @@ int i915_gem_evict_vm(struct i915_address_space *vm, struct i915_gem_ww_ctx *ww)
 			 * the resv is shared among multiple objects, we still
 			 * need the object ref.
 			 */
-			if (dying_vma(vma) ||
+			if (!i915_gem_object_get_rcu(vma->obj) ||
 			    (ww && (dma_resv_locking_ctx(vma->obj->base.resv) == &ww->ctx))) {
 				__i915_vma_pin(vma);
 				list_add(&vma->evict_link, &locked_eviction_list);
 				continue;
 			}
 
-			if (!i915_gem_object_trylock(vma->obj, ww))
+			if (!i915_gem_object_trylock(vma->obj, ww)) {
+				if (busy_bo) {
+					*busy_bo = vma->obj; /* holds ref */
+					ret = -EBUSY;
+					break;
+				}
+				i915_gem_object_put(vma->obj);
 				continue;
+			}
 
 			__i915_vma_pin(vma);
 			list_add(&vma->evict_link, &eviction_list);
@@ -473,25 +486,29 @@ int i915_gem_evict_vm(struct i915_address_space *vm, struct i915_gem_ww_ctx *ww)
 		if (list_empty(&eviction_list) && list_empty(&locked_eviction_list))
 			break;
 
 		ret = 0;
 		/* Unbind locked objects first, before unlocking the eviction_list */
 		list_for_each_entry_safe(vma, vn, &locked_eviction_list, evict_link) {
 			__i915_vma_unpin(vma);
 
-			if (ret == 0)
+			if (ret == 0) {
 				ret = __i915_vma_unbind(vma);
-			if (ret != -EINTR) /* "Get me out of here!" */
-				ret = 0;
+				if (ret != -EINTR) /* "Get me out of here!" */
+					ret = 0;
+			}
+			if (!dying_vma(vma))
+				i915_gem_object_put(vma->obj);
 		}
 
 		list_for_each_entry_safe(vma, vn, &eviction_list, evict_link) {
 			__i915_vma_unpin(vma);
-			if (ret == 0)
+			if (ret == 0) {
 				ret = __i915_vma_unbind(vma);
-			if (ret != -EINTR) /* "Get me out of here!" */
-				ret = 0;
+				if (ret != -EINTR) /* "Get me out of here!" */
+					ret = 0;
+			}
 
 			i915_gem_object_unlock(vma->obj);
 			i915_gem_object_put(vma->obj);
 		}
 	} while (ret == 0);

@@ -11,6 +11,7 @@
 struct drm_mm_node;
 struct i915_address_space;
 struct i915_gem_ww_ctx;
+struct drm_i915_gem_object;
 
 int __must_check i915_gem_evict_something(struct i915_address_space *vm,
 					  struct i915_gem_ww_ctx *ww,
@@ -23,6 +24,7 @@ int __must_check i915_gem_evict_for_node(struct i915_address_space *vm,
 					 struct drm_mm_node *node,
 					 unsigned int flags);
 int i915_gem_evict_vm(struct i915_address_space *vm,
-		      struct i915_gem_ww_ctx *ww);
+		      struct i915_gem_ww_ctx *ww,
+		      struct drm_i915_gem_object **busy_bo);
 
 #endif /* __I915_GEM_EVICT_H__ */

@@ -1569,7 +1569,7 @@ static int __i915_ggtt_pin(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
 			 * locked objects when called from execbuf when pinning
 			 * is removed. This would probably regress badly.
 			 */
-			i915_gem_evict_vm(vm, NULL);
+			i915_gem_evict_vm(vm, NULL, NULL);
 			mutex_unlock(&vm->mutex);
 		}
 	} while (1);

@@ -344,7 +344,7 @@ static int igt_evict_vm(void *arg)
 
 	/* Everything is pinned, nothing should happen */
 	mutex_lock(&ggtt->vm.mutex);
-	err = i915_gem_evict_vm(&ggtt->vm, NULL);
+	err = i915_gem_evict_vm(&ggtt->vm, NULL, NULL);
 	mutex_unlock(&ggtt->vm.mutex);
 	if (err) {
 		pr_err("i915_gem_evict_vm on a full GGTT returned err=%d]\n",
@@ -356,7 +356,7 @@ static int igt_evict_vm(void *arg)
 
 	for_i915_gem_ww(&ww, err, false) {
 		mutex_lock(&ggtt->vm.mutex);
-		err = i915_gem_evict_vm(&ggtt->vm, &ww);
+		err = i915_gem_evict_vm(&ggtt->vm, &ww, NULL);
 		mutex_unlock(&ggtt->vm.mutex);
 	}

@@ -1629,7 +1629,11 @@ static int ingenic_drm_init(void)
 			return err;
 	}
 
-	return platform_driver_register(&ingenic_drm_driver);
+	err = platform_driver_register(&ingenic_drm_driver);
+	if (IS_ENABLED(CONFIG_DRM_INGENIC_IPU) && err)
+		platform_driver_unregister(ingenic_ipu_driver_ptr);
+
+	return err;
 }
 module_init(ingenic_drm_init);

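The shape of this fix is the usual unwind-on-error rule: when a later registration fails, earlier ones must be rolled back before the error is returned. A hedged stand-alone sketch (hypothetical names, not the ingenic driver API):

#include <stdio.h>

static int  register_ipu(void)   { return 0; }
static void unregister_ipu(void) { }
static int  register_drm(void)   { return -19; }	/* pretend it fails (-ENODEV) */

static int init(void)
{
	int err = register_ipu();
	if (err)
		return err;

	err = register_drm();
	if (err)
		unregister_ipu();	/* unwind the earlier registration */

	return err;
}

int main(void)
{
	printf("init: %d\n", init());
	return 0;
}
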
@@ -284,7 +284,8 @@ static void mgag200_g200se_04_pixpllc_atomic_update(struct drm_crtc *crtc,
 	pixpllcp = pixpllc->p - 1;
 	pixpllcs = pixpllc->s;
 
-	xpixpllcm = pixpllcm | ((pixpllcn & BIT(8)) >> 1);
+	// For G200SE A, BIT(7) should be set unconditionally.
+	xpixpllcm = BIT(7) | pixpllcm;
 	xpixpllcn = pixpllcn;
 	xpixpllcp = (pixpllcs << 3) | pixpllcp;

@@ -308,7 +308,8 @@ void vmw_kms_cursor_snoop(struct vmw_surface *srf,
 	if (cmd->dma.guest.ptr.offset % PAGE_SIZE ||
 	    box->x != 0 || box->y != 0 || box->z != 0 ||
 	    box->srcx != 0 || box->srcy != 0 || box->srcz != 0 ||
-	    box->d != 1 || box_count != 1) {
+	    box->d != 1 || box_count != 1 ||
+	    box->w > 64 || box->h > 64) {
 		/* TODO handle none page aligned offsets */
 		/* TODO handle more dst & src != 0 */
 		/* TODO handle more then one copy */

@@ -412,6 +412,7 @@
 #define USB_DEVICE_ID_HP_X2_10_COVER	0x0755
 #define I2C_DEVICE_ID_HP_ENVY_X360_15	0x2d05
 #define I2C_DEVICE_ID_HP_ENVY_X360_15T_DR100	0x29CF
+#define I2C_DEVICE_ID_HP_ENVY_X360_EU0009NV	0x2CF9
 #define I2C_DEVICE_ID_HP_SPECTRE_X360_15	0x2817
 #define USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN	0x2544
 #define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN	0x2706

@@ -380,6 +380,8 @@ static const struct hid_device_id hid_battery_quirks[] = {
 	  HID_BATTERY_QUIRK_IGNORE },
 	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_ENVY_X360_15T_DR100),
 	  HID_BATTERY_QUIRK_IGNORE },
+	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_ENVY_X360_EU0009NV),
+	  HID_BATTERY_QUIRK_IGNORE },
 	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_15),
 	  HID_BATTERY_QUIRK_IGNORE },
 	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN),

@@ -3402,18 +3402,24 @@ static int __init parse_amd_iommu_options(char *str)
 static int __init parse_ivrs_ioapic(char *str)
 {
 	u32 seg = 0, bus, dev, fn;
-	int ret, id, i;
+	int id, i;
 	u32 devid;
 
-	ret = sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn);
-	if (ret != 4) {
-		ret = sscanf(str, "[%d]=%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn);
-		if (ret != 5) {
-			pr_err("Invalid command line: ivrs_ioapic%s\n", str);
-			return 1;
-		}
+	if (sscanf(str, "=%d@%x:%x.%x", &id, &bus, &dev, &fn) == 4 ||
+	    sscanf(str, "=%d@%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5)
+		goto found;
+
+	if (sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn) == 4 ||
+	    sscanf(str, "[%d]=%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5) {
+		pr_warn("ivrs_ioapic%s option format deprecated; use ivrs_ioapic=%d@%04x:%02x:%02x.%d instead\n",
+			str, id, seg, bus, dev, fn);
+		goto found;
 	}
 
+	pr_err("Invalid command line: ivrs_ioapic%s\n", str);
+	return 1;
+
+found:
 	if (early_ioapic_map_size == EARLY_MAP_SIZE) {
 		pr_err("Early IOAPIC map overflow - ignoring ivrs_ioapic%s\n",
 			str);

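The new `id@[seg:]bus:dev.fn` syntax accepted above (and by the hpet variant below) can be checked in isolation; the same pair of sscanf() calls works in plain userspace C (option string made up for the demo):

#include <stdio.h>

int main(void)
{
	/* value part following "ivrs_ioapic", new format: id@[seg:]bus:dev.fn */
	const char *str = "=1@0000:14:00.0";
	unsigned int seg = 0, bus, dev, fn;
	int id;

	if (sscanf(str, "=%d@%x:%x.%x", &id, &bus, &dev, &fn) == 4 ||
	    sscanf(str, "=%d@%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5)
		printf("ioapic %d -> %04x:%02x:%02x.%x\n", id, seg, bus, dev, fn);
	else
		printf("invalid\n");
	return 0;
}
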
@@ -3434,18 +3440,24 @@ static int __init parse_ivrs_ioapic(char *str)
 static int __init parse_ivrs_hpet(char *str)
 {
 	u32 seg = 0, bus, dev, fn;
-	int ret, id, i;
+	int id, i;
 	u32 devid;
 
-	ret = sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn);
-	if (ret != 4) {
-		ret = sscanf(str, "[%d]=%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn);
-		if (ret != 5) {
-			pr_err("Invalid command line: ivrs_hpet%s\n", str);
-			return 1;
-		}
+	if (sscanf(str, "=%d@%x:%x.%x", &id, &bus, &dev, &fn) == 4 ||
+	    sscanf(str, "=%d@%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5)
+		goto found;
+
+	if (sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn) == 4 ||
+	    sscanf(str, "[%d]=%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5) {
+		pr_warn("ivrs_hpet%s option format deprecated; use ivrs_hpet=%d@%04x:%02x:%02x.%d instead\n",
+			str, id, seg, bus, dev, fn);
+		goto found;
 	}
 
+	pr_err("Invalid command line: ivrs_hpet%s\n", str);
+	return 1;
+
+found:
 	if (early_hpet_map_size == EARLY_MAP_SIZE) {
 		pr_err("Early HPET map overflow - ignoring ivrs_hpet%s\n",
 			str);

@@ -3466,19 +3478,36 @@ static int __init parse_ivrs_hpet(char *str)
 static int __init parse_ivrs_acpihid(char *str)
 {
 	u32 seg = 0, bus, dev, fn;
-	char *hid, *uid, *p;
+	char *hid, *uid, *p, *addr;
 	char acpiid[ACPIHID_UID_LEN + ACPIHID_HID_LEN] = {0};
-	int ret, i;
+	int i;
 
-	ret = sscanf(str, "[%x:%x.%x]=%s", &bus, &dev, &fn, acpiid);
-	if (ret != 4) {
-		ret = sscanf(str, "[%x:%x:%x.%x]=%s", &seg, &bus, &dev, &fn, acpiid);
-		if (ret != 5) {
-			pr_err("Invalid command line: ivrs_acpihid(%s)\n", str);
-			return 1;
+	addr = strchr(str, '@');
+	if (!addr) {
+		if (sscanf(str, "[%x:%x.%x]=%s", &bus, &dev, &fn, acpiid) == 4 ||
+		    sscanf(str, "[%x:%x:%x.%x]=%s", &seg, &bus, &dev, &fn, acpiid) == 5) {
+			pr_warn("ivrs_acpihid%s option format deprecated; use ivrs_acpihid=%s@%04x:%02x:%02x.%d instead\n",
+				str, acpiid, seg, bus, dev, fn);
+			goto found;
 		}
+		goto not_found;
 	}
 
+	/* We have the '@', make it the terminator to get just the acpiid */
+	*addr++ = 0;
+
+	if (sscanf(str, "=%s", acpiid) != 1)
+		goto not_found;
+
+	if (sscanf(addr, "%x:%x.%x", &bus, &dev, &fn) == 3 ||
+	    sscanf(addr, "%x:%x:%x.%x", &seg, &bus, &dev, &fn) == 4)
+		goto found;
+
+not_found:
+	pr_err("Invalid command line: ivrs_acpihid%s\n", str);
+	return 1;
+
+found:
 	p = acpiid;
 	hid = strsep(&p, ":");
 	uid = p;
@@ -3488,6 +3517,13 @@ static int __init parse_ivrs_acpihid(char *str)
 		return 1;
 	}
 
+	/*
+	 * Ignore leading zeroes after ':', so e.g., AMDI0095:00
+	 * will match AMDI0095:0 in the second strcmp in acpi_dev_hid_uid_match
+	 */
+	while (*uid == '0' && *(uid + 1))
+		uid++;
+
 	i = early_acpihid_map_size++;
 	memcpy(early_acpihid_map[i].hid, hid, strlen(hid));
 	memcpy(early_acpihid_map[i].uid, uid, strlen(uid));

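The UID normalization added at the end of that hunk is easy to verify on its own. A small userspace demo of the split-and-strip logic (same strsep()/loop shape as above, outside the kernel):

#define _DEFAULT_SOURCE		/* for strsep() on glibc */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char acpiid[] = "AMDI0095:00";
	char *p = acpiid;
	char *hid = strsep(&p, ":");
	char *uid = p;

	/* Skip leading zeroes (but keep a lone "0") so "AMDI0095:00"
	 * matches a firmware UID spelled "AMDI0095:0". */
	while (*uid == '0' && *(uid + 1))
		uid++;

	printf("hid=%s uid=%s\n", hid, uid);	/* hid=AMDI0095 uid=0 */
	return 0;
}
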
@@ -551,11 +551,13 @@ static int __create_persistent_data_objects(struct dm_cache_metadata *cmd,
 	return r;
 }
 
-static void __destroy_persistent_data_objects(struct dm_cache_metadata *cmd)
+static void __destroy_persistent_data_objects(struct dm_cache_metadata *cmd,
+					      bool destroy_bm)
 {
 	dm_sm_destroy(cmd->metadata_sm);
 	dm_tm_destroy(cmd->tm);
-	dm_block_manager_destroy(cmd->bm);
+	if (destroy_bm)
+		dm_block_manager_destroy(cmd->bm);
 }
 
 typedef unsigned long (*flags_mutator)(unsigned long);
@@ -826,7 +828,7 @@ static struct dm_cache_metadata *lookup_or_open(struct block_device *bdev,
 	cmd2 = lookup(bdev);
 	if (cmd2) {
 		mutex_unlock(&table_lock);
-		__destroy_persistent_data_objects(cmd);
+		__destroy_persistent_data_objects(cmd, true);
 		kfree(cmd);
 		return cmd2;
 	}
@@ -874,7 +876,7 @@ void dm_cache_metadata_close(struct dm_cache_metadata *cmd)
 		mutex_unlock(&table_lock);
 
 		if (!cmd->fail_io)
-			__destroy_persistent_data_objects(cmd);
+			__destroy_persistent_data_objects(cmd, true);
 		kfree(cmd);
 	}
 }
@@ -1807,14 +1809,52 @@ int dm_cache_metadata_needs_check(struct dm_cache_metadata *cmd, bool *result)
 
 int dm_cache_metadata_abort(struct dm_cache_metadata *cmd)
 {
-	int r;
+	int r = -EINVAL;
+	struct dm_block_manager *old_bm = NULL, *new_bm = NULL;
+
+	/* fail_io is double-checked with cmd->root_lock held below */
+	if (unlikely(cmd->fail_io))
+		return r;
+
+	/*
+	 * Replacement block manager (new_bm) is created and old_bm destroyed outside of
+	 * cmd root_lock to avoid ABBA deadlock that would result (due to life-cycle of
+	 * shrinker associated with the block manager's bufio client vs cmd root_lock).
+	 * - must take shrinker_rwsem without holding cmd->root_lock
+	 */
+	new_bm = dm_block_manager_create(cmd->bdev, DM_CACHE_METADATA_BLOCK_SIZE << SECTOR_SHIFT,
+					 CACHE_MAX_CONCURRENT_LOCKS);
 
 	WRITE_LOCK(cmd);
-	__destroy_persistent_data_objects(cmd);
-	r = __create_persistent_data_objects(cmd, false);
+	if (cmd->fail_io) {
+		WRITE_UNLOCK(cmd);
+		goto out;
+	}
+
+	__destroy_persistent_data_objects(cmd, false);
+	old_bm = cmd->bm;
+	if (IS_ERR(new_bm)) {
+		DMERR("could not create block manager during abort");
+		cmd->bm = NULL;
+		r = PTR_ERR(new_bm);
+		goto out_unlock;
+	}
+
+	cmd->bm = new_bm;
+	r = __open_or_format_metadata(cmd, false);
+	if (r) {
+		cmd->bm = NULL;
+		goto out_unlock;
+	}
+	new_bm = NULL;
+
+out_unlock:
 	if (r)
 		cmd->fail_io = true;
 	WRITE_UNLOCK(cmd);
+	dm_block_manager_destroy(old_bm);
+
+out:
+	if (new_bm && !IS_ERR(new_bm))
+		dm_block_manager_destroy(new_bm);
 
 	return r;
 }

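The comment in that hunk states a lock-ordering rule worth restating: a resource whose create/destroy path takes some other lock (here the shrinker rwsem, via the bufio client) must be created and destroyed outside our own lock, and only swapped in under it. A compilable toy of the same pattern, with hypothetical names:

#include <pthread.h>
#include <stdlib.h>

struct mgr { int dummy; };
struct md { pthread_mutex_t root_lock; struct mgr *bm; };

static struct mgr *mgr_create(void) { return malloc(sizeof(struct mgr)); } /* may take lock L2 */
static void mgr_destroy(struct mgr *m) { free(m); }                        /* may take lock L2 */

int md_abort(struct md *md)
{
	struct mgr *new_bm = mgr_create();	/* outside root_lock: no L1->L2 order */
	struct mgr *old_bm;

	if (!new_bm)
		return -1;

	pthread_mutex_lock(&md->root_lock);
	old_bm = md->bm;			/* swap under the lock ... */
	md->bm = new_bm;
	pthread_mutex_unlock(&md->root_lock);

	mgr_destroy(old_bm);			/* ... destroy outside it */
	return 0;
}
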
@@ -907,16 +907,16 @@ static void abort_transaction(struct cache *cache)
 	if (get_cache_mode(cache) >= CM_READ_ONLY)
 		return;
 
-	if (dm_cache_metadata_set_needs_check(cache->cmd)) {
-		DMERR("%s: failed to set 'needs_check' flag in metadata", dev_name);
-		set_cache_mode(cache, CM_FAIL);
-	}
-
 	DMERR_LIMIT("%s: aborting current metadata transaction", dev_name);
 	if (dm_cache_metadata_abort(cache->cmd)) {
 		DMERR("%s: failed to abort metadata transaction", dev_name);
 		set_cache_mode(cache, CM_FAIL);
 	}
+
+	if (dm_cache_metadata_set_needs_check(cache->cmd)) {
+		DMERR("%s: failed to set 'needs_check' flag in metadata", dev_name);
+		set_cache_mode(cache, CM_FAIL);
+	}
 }
 
 static void metadata_operation_failed(struct cache *cache, const char *op, int r)

@@ -1887,6 +1887,7 @@ static void destroy(struct cache *cache)
 	if (cache->prison)
 		dm_bio_prison_destroy_v2(cache->prison);
 
+	cancel_delayed_work_sync(&cache->waker);
 	if (cache->wq)
 		destroy_workqueue(cache->wq);

@@ -1958,6 +1958,7 @@ static void clone_dtr(struct dm_target *ti)
 
 	mempool_exit(&clone->hydration_pool);
 	dm_kcopyd_client_destroy(clone->kcopyd_client);
+	cancel_delayed_work_sync(&clone->waker);
 	destroy_workqueue(clone->wq);
 	hash_table_exit(clone);
 	dm_clone_metadata_close(clone->cmd);

@@ -4558,6 +4558,8 @@ static void dm_integrity_dtr(struct dm_target *ti)
 	BUG_ON(!RB_EMPTY_ROOT(&ic->in_progress));
 	BUG_ON(!list_empty(&ic->wait_list));
 
+	if (ic->mode == 'B')
+		cancel_delayed_work_sync(&ic->bitmap_flush_work);
 	if (ic->metadata_wq)
 		destroy_workqueue(ic->metadata_wq);
 	if (ic->wait_wq)

|
||||
goto bad_cleanup_data_sm;
|
||||
}
|
||||
|
||||
/*
|
||||
* For pool metadata opening process, root setting is redundant
|
||||
* because it will be set again in __begin_transaction(). But dm
|
||||
* pool aborting process really needs to get last transaction's
|
||||
* root to avoid accessing broken btree.
|
||||
*/
|
||||
pmd->root = le64_to_cpu(disk_super->data_mapping_root);
|
||||
pmd->details_root = le64_to_cpu(disk_super->device_details_root);
|
||||
|
||||
__setup_btree_details(pmd);
|
||||
dm_bm_unlock(sblock);
|
||||
|
||||
@ -776,13 +785,15 @@ static int __create_persistent_data_objects(struct dm_pool_metadata *pmd, bool f
|
||||
return r;
|
||||
}
|
||||
|
||||
static void __destroy_persistent_data_objects(struct dm_pool_metadata *pmd)
|
||||
static void __destroy_persistent_data_objects(struct dm_pool_metadata *pmd,
|
||||
bool destroy_bm)
|
||||
{
|
||||
dm_sm_destroy(pmd->data_sm);
|
||||
dm_sm_destroy(pmd->metadata_sm);
|
||||
dm_tm_destroy(pmd->nb_tm);
|
||||
dm_tm_destroy(pmd->tm);
|
||||
dm_block_manager_destroy(pmd->bm);
|
||||
if (destroy_bm)
|
||||
dm_block_manager_destroy(pmd->bm);
|
||||
}
|
||||
|
||||
static int __begin_transaction(struct dm_pool_metadata *pmd)
|
||||
@ -989,7 +1000,7 @@ int dm_pool_metadata_close(struct dm_pool_metadata *pmd)
|
||||
}
|
||||
pmd_write_unlock(pmd);
|
||||
if (!pmd->fail_io)
|
||||
__destroy_persistent_data_objects(pmd);
|
||||
__destroy_persistent_data_objects(pmd, true);
|
||||
|
||||
kfree(pmd);
|
||||
return 0;
|
||||
@@ -1860,19 +1871,52 @@ static void __set_abort_with_changes_flags(struct dm_pool_metadata *pmd)
 
 int dm_pool_abort_metadata(struct dm_pool_metadata *pmd)
 {
-	int r;
+	int r = -EINVAL;
+	struct dm_block_manager *old_bm = NULL, *new_bm = NULL;
+
+	/* fail_io is double-checked with pmd->root_lock held below */
+	if (unlikely(pmd->fail_io))
+		return r;
+
+	/*
+	 * Replacement block manager (new_bm) is created and old_bm destroyed outside of
+	 * pmd root_lock to avoid ABBA deadlock that would result (due to life-cycle of
+	 * shrinker associated with the block manager's bufio client vs pmd root_lock).
+	 * - must take shrinker_rwsem without holding pmd->root_lock
+	 */
+	new_bm = dm_block_manager_create(pmd->bdev, THIN_METADATA_BLOCK_SIZE << SECTOR_SHIFT,
+					 THIN_MAX_CONCURRENT_LOCKS);
 
 	pmd_write_lock(pmd);
-	if (pmd->fail_io)
+	if (pmd->fail_io) {
+		pmd_write_unlock(pmd);
 		goto out;
+	}
 
 	__set_abort_with_changes_flags(pmd);
-	__destroy_persistent_data_objects(pmd);
-	r = __create_persistent_data_objects(pmd, false);
+	__destroy_persistent_data_objects(pmd, false);
+	old_bm = pmd->bm;
+	if (IS_ERR(new_bm)) {
+		DMERR("could not create block manager during abort");
+		pmd->bm = NULL;
+		r = PTR_ERR(new_bm);
+		goto out_unlock;
+	}
+
+	pmd->bm = new_bm;
+	r = __open_or_format_metadata(pmd, false);
+	if (r) {
+		pmd->bm = NULL;
+		goto out_unlock;
+	}
+	new_bm = NULL;
+
+out_unlock:
 	if (r)
 		pmd->fail_io = true;
-
-out:
 	pmd_write_unlock(pmd);
+	dm_block_manager_destroy(old_bm);
+
+out:
+	if (new_bm && !IS_ERR(new_bm))
+		dm_block_manager_destroy(new_bm);
 
 	return r;
 }

@@ -2889,6 +2889,8 @@ static void __pool_destroy(struct pool *pool)
 	dm_bio_prison_destroy(pool->prison);
 	dm_kcopyd_client_destroy(pool->copier);
 
+	cancel_delayed_work_sync(&pool->waker);
+	cancel_delayed_work_sync(&pool->no_space_timeout);
 	if (pool->wq)
 		destroy_workqueue(pool->wq);

|
||||
*/
|
||||
r = bind_control_target(pool, ti);
|
||||
if (r)
|
||||
return r;
|
||||
goto out;
|
||||
|
||||
r = maybe_resize_data_dev(ti, &need_commit1);
|
||||
if (r)
|
||||
return r;
|
||||
goto out;
|
||||
|
||||
r = maybe_resize_metadata_dev(ti, &need_commit2);
|
||||
if (r)
|
||||
return r;
|
||||
goto out;
|
||||
|
||||
if (need_commit1 || need_commit2)
|
||||
(void) commit(pool);
|
||||
out:
|
||||
/*
|
||||
* When a thin-pool is PM_FAIL, it cannot be rebuilt if
|
||||
* bio is in deferred list. Therefore need to return 0
|
||||
* to allow pool_resume() to flush IO.
|
||||
*/
|
||||
if (r && get_pool_mode(pool) == PM_FAIL)
|
||||
r = 0;
|
||||
|
||||
return 0;
|
||||
return r;
|
||||
}
|
||||
|
||||
static void pool_suspend_active_thins(struct pool *pool)
|
||||
|
@@ -486,7 +486,7 @@ void md_bitmap_print_sb(struct bitmap *bitmap)
 	sb = kmap_atomic(bitmap->storage.sb_page);
 	pr_debug("%s: bitmap file superblock:\n", bmname(bitmap));
 	pr_debug("         magic: %08x\n", le32_to_cpu(sb->magic));
-	pr_debug("       version: %d\n", le32_to_cpu(sb->version));
+	pr_debug("       version: %u\n", le32_to_cpu(sb->version));
 	pr_debug("          uuid: %08x.%08x.%08x.%08x\n",
 		 le32_to_cpu(*(__le32 *)(sb->uuid+0)),
 		 le32_to_cpu(*(__le32 *)(sb->uuid+4)),
@@ -497,11 +497,11 @@ void md_bitmap_print_sb(struct bitmap *bitmap)
 	pr_debug("events cleared: %llu\n",
 		 (unsigned long long) le64_to_cpu(sb->events_cleared));
 	pr_debug("         state: %08x\n", le32_to_cpu(sb->state));
-	pr_debug("     chunksize: %d B\n", le32_to_cpu(sb->chunksize));
-	pr_debug("  daemon sleep: %ds\n", le32_to_cpu(sb->daemon_sleep));
+	pr_debug("     chunksize: %u B\n", le32_to_cpu(sb->chunksize));
+	pr_debug("  daemon sleep: %us\n", le32_to_cpu(sb->daemon_sleep));
 	pr_debug("     sync size: %llu KB\n",
 		 (unsigned long long)le64_to_cpu(sb->sync_size)/2);
-	pr_debug("max write behind: %d\n", le32_to_cpu(sb->write_behind));
+	pr_debug("max write behind: %u\n", le32_to_cpu(sb->write_behind));
 	kunmap_atomic(sb);
 }

|
||||
bytes = DIV_ROUND_UP(chunks, 8);
|
||||
if (!bitmap->mddev->bitmap_info.external)
|
||||
bytes += sizeof(bitmap_super_t);
|
||||
} while (bytes > (space << 9));
|
||||
} while (bytes > (space << 9) && (chunkshift + BITMAP_BLOCK_SHIFT) <
|
||||
(BITS_PER_BYTE * sizeof(((bitmap_super_t *)0)->chunksize) - 1));
|
||||
} else
|
||||
chunkshift = ffz(~chunksize) - BITMAP_BLOCK_SHIFT;
|
||||
|
||||
@ -2150,7 +2151,7 @@ int md_bitmap_resize(struct bitmap *bitmap, sector_t blocks,
|
||||
bitmap->counts.missing_pages = pages;
|
||||
bitmap->counts.chunkshift = chunkshift;
|
||||
bitmap->counts.chunks = chunks;
|
||||
bitmap->mddev->bitmap_info.chunksize = 1 << (chunkshift +
|
||||
bitmap->mddev->bitmap_info.chunksize = 1UL << (chunkshift +
|
||||
BITMAP_BLOCK_SHIFT);
|
||||
|
||||
blocks = min(old_counts.chunks << old_counts.chunkshift,
|
||||
@ -2176,8 +2177,8 @@ int md_bitmap_resize(struct bitmap *bitmap, sector_t blocks,
|
||||
bitmap->counts.missing_pages = old_counts.pages;
|
||||
bitmap->counts.chunkshift = old_counts.chunkshift;
|
||||
bitmap->counts.chunks = old_counts.chunks;
|
||||
bitmap->mddev->bitmap_info.chunksize = 1 << (old_counts.chunkshift +
|
||||
BITMAP_BLOCK_SHIFT);
|
||||
bitmap->mddev->bitmap_info.chunksize =
|
||||
1UL << (old_counts.chunkshift + BITMAP_BLOCK_SHIFT);
|
||||
blocks = old_counts.chunks << old_counts.chunkshift;
|
||||
pr_warn("Could not pre-allocate in-memory bitmap for cluster raid\n");
|
||||
break;
|
||||
@ -2537,6 +2538,9 @@ chunksize_store(struct mddev *mddev, const char *buf, size_t len)
|
||||
if (csize < 512 ||
|
||||
!is_power_of_2(csize))
|
||||
return -EINVAL;
|
||||
if (BITS_PER_LONG > 32 && csize >= (1ULL << (BITS_PER_BYTE *
|
||||
sizeof(((bitmap_super_t *)0)->chunksize))))
|
||||
return -EOVERFLOW;
|
||||
mddev->bitmap_info.chunksize = csize;
|
||||
return len;
|
||||
}
|
||||
|
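The `1 <<` to `1UL <<` changes above matter because chunkshift + BITMAP_BLOCK_SHIFT can exceed 31; the constant 1 is a 32-bit int, so the shift overflows before the result is widened by the assignment. A minimal demonstration for an LP64 system (64-bit unsigned long):

#include <stdio.h>

int main(void)
{
	int shift = 40;		/* a plausible chunkshift + BITMAP_BLOCK_SHIFT */

	/* 1 << shift is evaluated in 32-bit int: undefined behaviour for
	 * shift >= 32, and on common ABIs it wraps to a small value. */
	unsigned long bad  = 1 << shift;
	unsigned long good = 1UL << shift;	/* performed in 64 bits, as intended */

	printf("bad=%lu good=%lu\n", bad, good);
	return 0;
}
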
[Diff truncated: some files are not shown above because too many files changed in this merge.]