Merge 6.1.46 into android14-6.1-lts
Changes in 6.1.46 gcc-plugins: Reorganize gimple includes for GCC 13 Revert "loongarch/cpu: Switch to arch_cpu_finalize_init()" tpm: Disable RNG for all AMD fTPMs tpm: Add a helper for checking hwrng enabled ksmbd: validate command request size ksmbd: fix wrong next length validation of ea buffer in smb2_set_ea() KVM: SEV: snapshot the GHCB before accessing it KVM: SEV: only access GHCB fields once wifi: nl80211: fix integer overflow in nl80211_parse_mbssid_elems() wifi: rtw89: fix 8852AE disconnection caused by RX full flags selftests: forwarding: Set default IPv6 traceroute utility wireguard: allowedips: expand maximum node depth mmc: moxart: read scr register without changing byte order ipv6: adjust ndisc_is_useropt() to also return true for PIO selftests: mptcp: join: fix 'delete and re-add' test selftests: mptcp: join: fix 'implicit EP' test mptcp: avoid bogus reset on fallback close mptcp: fix disconnect vs accept race dmaengine: pl330: Return DMA_PAUSED when transaction is paused net: mana: Fix MANA VF unload when hardware is unresponsive riscv/kexec: load initrd high in available memory riscv,mmio: Fix readX()-to-delay() ordering riscv/kexec: handle R_RISCV_CALL_PLT relocation type nvme-pci: add NVME_QUIRK_BOGUS_NID for Samsung PM9B1 256G and 512G drm/nouveau/gr: enable memory loads on helper invocation on all channels drm/nouveau/nvkm/dp: Add workaround to fix DP 1.3+ DPCD issues drm/shmem-helper: Reset vma->vm_ops before calling dma_buf_mmap() drm/amdgpu: fix possible UAF in amdgpu_cs_pass1() drm/amd/display: check attr flag before set cursor degamma on DCN3+ drm/amdgpu: add S/G display parameter drm/amd: Disable S/G for APUs when 64GB or more host memory drm/amd/display: limit DPIA link rate to HBR3 cpuidle: dt_idle_genpd: Add helper function to remove genpd topology hwmon: (pmbus/bel-pfe) Enable PMBUS_SKIP_STATUS_CHECK for pfe1100 radix tree test suite: fix incorrect allocation size for pthreads nilfs2: fix use-after-free of nilfs_root in dirtying inodes via iput drm/amd/pm: fulfill swsmu peak profiling mode shader/memory clock settings drm/amd/pm: expose swctf threshold setting for legacy powerplay drm/amd/pm: fulfill powerplay peak profiling mode shader/memory clock settings drm/amd/pm: avoid unintentional shutdown due to temperature momentary fluctuation drm/amd/display: Handle virtual hardware detect drm/amd/display: Add function for validate and update new stream drm/amd/display: Handle seamless boot stream drm/amd/display: Update OTG instance in the commit stream drm/amd/display: Avoid ABM when ODM combine is enabled for eDP drm/amd/display: Use update plane and stream routine for DCN32x drm/amd/display: Disable phantom OTG after enable for plane disable drm/amd/display: Retain phantom plane/stream if validation fails drm/amd/display: fix the build when DRM_AMD_DC_DCN is not set drm/amd/display: trigger timing sync only if TG is running io_uring: correct check for O_TMPFILE iio: cros_ec: Fix the allocation size for cros_ec_command iio: frequency: admv1013: propagate errors from regulator_get_voltage() iio: adc: ad7192: Fix ac excitation feature iio: adc: ina2xx: avoid NULL pointer dereference on OF device match binder: fix memory leak in binder_init() misc: rtsx: judge ASPM Mode to set PETXCFG Reg usb-storage: alauda: Fix uninit-value in alauda_check_media() usb: dwc3: Properly handle processing of pending events USB: Gadget: core: Help prevent panic during UVC unconfigure usb: common: usb-conn-gpio: Prevent bailing out if initial role is none usb: typec: tcpm: Fix 
response to vsafe0V event usb: typec: altmodes/displayport: Signal hpd when configuring pin assignment x86/srso: Fix build breakage with the LLVM linker x86/cpu/amd: Enable Zenbleed fix for AMD Custom APU 0405 x86/mm: Fix VDSO and VVAR placement on 5-level paging machines x86/sev: Do not try to parse for the CC blob on non-AMD hardware x86/speculation: Add cpu_show_gds() prototype x86: Move gds_ucode_mitigated() declaration to header drm/nouveau/disp: Revert a NULL check inside nouveau_connector_get_modes iio: core: Prevent invalid memory access when there is no parent interconnect: qcom: Add support for mask-based BCMs interconnect: qcom: sm8450: add enable_mask for bcm nodes selftests/rseq: Fix build with undefined __weak selftests: forwarding: Add a helper to skip test when using veth pairs selftests: forwarding: ethtool: Skip when using veth pairs selftests: forwarding: ethtool_extended_state: Skip when using veth pairs selftests: forwarding: hw_stats_l3_gre: Skip when using veth pairs selftests: forwarding: Skip test when no interfaces are specified selftests: forwarding: Switch off timeout selftests: forwarding: tc_flower: Relax success criterion net: core: remove unnecessary frame_sz check in bpf_xdp_adjust_tail() bpf, sockmap: Fix map type error in sock_map_del_link bpf, sockmap: Fix bug that strp_done cannot be called mISDN: Update parameter type of dsp_cmx_send() macsec: use DEV_STATS_INC() mptcp: fix the incorrect judgment for msk->cb_flags net/packet: annotate data-races around tp->status net/smc: Use correct buffer sizes when switching between TCP and SMC tcp: add missing family to tcp_set_ca_state() tracepoint tunnels: fix kasan splat when generating ipv4 pmtu error xsk: fix refcount underflow in error path bonding: Fix incorrect deletion of ETH_P_8021AD protocol vid from slaves dccp: fix data-race around dp->dccps_mss_cache drivers: net: prevent tun_build_skb() to exceed the packet size limit drivers: vxlan: vnifilter: free percpu vni stats on error path iavf: fix potential races for FDIR filters IB/hfi1: Fix possible panic during hotplug remove drm/rockchip: Don't spam logs in atomic check wifi: cfg80211: fix sband iftype data lookup for AP_VLAN RDMA/umem: Set iova in ODP flow net: tls: avoid discarding data on record close net: marvell: prestera: fix handling IPv4 routes with nhid net: phy: at803x: remove set/get wol callbacks for AR8032 net: dsa: ocelot: call dsa_tag_8021q_unregister() under rtnl_lock() on driver remove net: hns3: refactor hclge_mac_link_status_wait for interface reuse net: hns3: add wait until mac link down net: hns3: fix deadlock issue when externel_lb and reset are executed together nexthop: Fix infinite nexthop dump when using maximum nexthop ID nexthop: Make nexthop bucket dump more efficient nexthop: Fix infinite nexthop bucket dump when using maximum nexthop ID net: hns3: fix strscpy causing content truncation issue dmaengine: mcf-edma: Fix a potential un-allocated memory access dmaengine: owl-dma: Modify mismatched function name net/mlx5: Allow 0 for total host VFs net/mlx5: LAG, Check correct bucket when modifying LAG net/mlx5: Skip clock update work when device is in error state net/mlx5: Reload auxiliary devices in pci error handlers ibmvnic: Enforce stronger sanity checks on login response ibmvnic: Unmap DMA login rsp buffer on send login fail ibmvnic: Handle DMA unmapping of login buffs in release functions ibmvnic: Do partial reset on login failure ibmvnic: Ensure login failure recovery is safe from other resets gpio: ws16c48: Fix off-by-one 
error in WS16C48 resource region extent gpio: sim: mark the GPIO chip as a one that can sleep btrfs: wait for actual caching progress during allocation btrfs: don't stop integrity writeback too early btrfs: properly clear end of the unreserved range in cow_file_range btrfs: exit gracefully if reloc roots don't match btrfs: reject invalid reloc tree root keys with stack dump btrfs: set cache_block_group_error if we find an error nvme-tcp: fix potential unbalanced freeze & unfreeze nvme-rdma: fix potential unbalanced freeze & unfreeze netfilter: nf_tables: report use refcount overflow scsi: core: Fix legacy /proc parsing buffer overflow scsi: storvsc: Fix handling of virtual Fibre Channel timeouts scsi: ufs: renesas: Fix private allocation scsi: 53c700: Check that command slot is not NULL scsi: snic: Fix possible memory leak if device_add() fails scsi: core: Fix possible memory leak if device_add() fails scsi: fnic: Replace return codes in fnic_clean_pending_aborts() scsi: qedi: Fix firmware halt over suspend and resume scsi: qedf: Fix firmware halt over suspend and resume platform/x86: serial-multi-instantiate: Auto detect IRQ resource for CSC3551 ACPI: scan: Create platform device for CS35L56 alpha: remove __init annotation from exported page_is_ram() sch_netem: fix issues in netem_change() vs get_dist_table() drm/amd/pm/smu7: move variables to where they are used Linux 6.1.46 Change-Id: I679c85c2fa9609364ba40c4d6e665447a67a87fd Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
This commit is contained in:
commit
094c282d92
2
Makefile
2
Makefile
@ -1,7 +1,7 @@
|
||||
# SPDX-License-Identifier: GPL-2.0
|
||||
VERSION = 6
|
||||
PATCHLEVEL = 1
|
||||
SUBLEVEL = 45
|
||||
SUBLEVEL = 46
|
||||
EXTRAVERSION =
|
||||
NAME = Curry Ramen
|
||||
|
||||
|
@ -385,8 +385,7 @@ setup_memory(void *kernel_end)
|
||||
#endif /* CONFIG_BLK_DEV_INITRD */
|
||||
}
|
||||
|
||||
int __init
|
||||
page_is_ram(unsigned long pfn)
|
||||
int page_is_ram(unsigned long pfn)
|
||||
{
|
||||
struct memclust_struct * cluster;
|
||||
struct memdesc_struct * memdesc;
|
||||
|
@ -10,7 +10,6 @@ config LOONGARCH
|
||||
select ARCH_ENABLE_MEMORY_HOTPLUG
|
||||
select ARCH_ENABLE_MEMORY_HOTREMOVE
|
||||
select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI
|
||||
select ARCH_HAS_CPU_FINALIZE_INIT
|
||||
select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
|
||||
select ARCH_HAS_PTE_SPECIAL
|
||||
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
|
||||
|
@ -12,7 +12,6 @@
|
||||
*/
|
||||
#include <linux/init.h>
|
||||
#include <linux/acpi.h>
|
||||
#include <linux/cpu.h>
|
||||
#include <linux/dmi.h>
|
||||
#include <linux/efi.h>
|
||||
#include <linux/export.h>
|
||||
@ -81,11 +80,6 @@ const char *get_system_type(void)
|
||||
return "generic-loongson-machine";
|
||||
}
|
||||
|
||||
void __init arch_cpu_finalize_init(void)
|
||||
{
|
||||
alternative_instructions();
|
||||
}
|
||||
|
||||
static const char *dmi_string_parse(const struct dmi_header *dm, u8 s)
|
||||
{
|
||||
const u8 *bp = ((u8 *) dm) + dm->length;
|
||||
|
@ -101,9 +101,9 @@ static inline u64 __raw_readq(const volatile void __iomem *addr)
|
||||
* Relaxed I/O memory access primitives. These follow the Device memory
|
||||
* ordering rules but do not guarantee any ordering relative to Normal memory
|
||||
* accesses. These are defined to order the indicated access (either a read or
|
||||
* write) with all other I/O memory accesses. Since the platform specification
|
||||
* defines that all I/O regions are strongly ordered on channel 2, no explicit
|
||||
* fences are required to enforce this ordering.
|
||||
* write) with all other I/O memory accesses to the same peripheral. Since the
|
||||
* platform specification defines that all I/O regions are strongly ordered on
|
||||
* channel 0, no explicit fences are required to enforce this ordering.
|
||||
*/
|
||||
/* FIXME: These are now the same as asm-generic */
|
||||
#define __io_rbr() do {} while (0)
|
||||
@ -125,14 +125,14 @@ static inline u64 __raw_readq(const volatile void __iomem *addr)
|
||||
#endif
|
||||
|
||||
/*
|
||||
* I/O memory access primitives. Reads are ordered relative to any
|
||||
* following Normal memory access. Writes are ordered relative to any prior
|
||||
* Normal memory access. The memory barriers here are necessary as RISC-V
|
||||
* I/O memory access primitives. Reads are ordered relative to any following
|
||||
* Normal memory read and delay() loop. Writes are ordered relative to any
|
||||
* prior Normal memory write. The memory barriers here are necessary as RISC-V
|
||||
* doesn't define any ordering between the memory space and the I/O space.
|
||||
*/
|
||||
#define __io_br() do {} while (0)
|
||||
#define __io_ar(v) __asm__ __volatile__ ("fence i,r" : : : "memory")
|
||||
#define __io_bw() __asm__ __volatile__ ("fence w,o" : : : "memory")
|
||||
#define __io_ar(v) ({ __asm__ __volatile__ ("fence i,ir" : : : "memory"); })
|
||||
#define __io_bw() ({ __asm__ __volatile__ ("fence w,o" : : : "memory"); })
|
||||
#define __io_aw() mmiowb_set_pending()
|
||||
|
||||
#define readb(c) ({ u8 __v; __io_br(); __v = readb_cpu(c); __io_ar(__v); __v; })
|
||||
|
@ -281,7 +281,7 @@ static void *elf_kexec_load(struct kimage *image, char *kernel_buf,
|
||||
kbuf.buffer = initrd;
|
||||
kbuf.bufsz = kbuf.memsz = initrd_len;
|
||||
kbuf.buf_align = PAGE_SIZE;
|
||||
kbuf.top_down = false;
|
||||
kbuf.top_down = true;
|
||||
kbuf.mem = KEXEC_BUF_MEM_UNKNOWN;
|
||||
ret = kexec_add_buffer(&kbuf);
|
||||
if (ret)
|
||||
@ -425,6 +425,7 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
|
||||
* sym, instead of searching the whole relsec.
|
||||
*/
|
||||
case R_RISCV_PCREL_HI20:
|
||||
case R_RISCV_CALL_PLT:
|
||||
case R_RISCV_CALL:
|
||||
*(u64 *)loc = CLEAN_IMM(UITYPE, *(u64 *)loc) |
|
||||
ENCODE_UJTYPE_IMM(val - addr);
|
||||
|
@ -63,7 +63,14 @@ void load_stage2_idt(void)
|
||||
set_idt_entry(X86_TRAP_PF, boot_page_fault);
|
||||
|
||||
#ifdef CONFIG_AMD_MEM_ENCRYPT
|
||||
set_idt_entry(X86_TRAP_VC, boot_stage2_vc);
|
||||
/*
|
||||
* Clear the second stage #VC handler in case guest types
|
||||
* needing #VC have not been detected.
|
||||
*/
|
||||
if (sev_status & BIT(1))
|
||||
set_idt_entry(X86_TRAP_VC, boot_stage2_vc);
|
||||
else
|
||||
set_idt_entry(X86_TRAP_VC, NULL);
|
||||
#endif
|
||||
|
||||
load_boot_idt(&boot_idt_desc);
|
||||
|
@ -355,10 +355,13 @@ void sev_enable(struct boot_params *bp)
|
||||
bp->cc_blob_address = 0;
|
||||
|
||||
/*
|
||||
* Setup/preliminary detection of SNP. This will be sanity-checked
|
||||
* against CPUID/MSR values later.
|
||||
* Do an initial SEV capability check before snp_init() which
|
||||
* loads the CPUID page and the same checks afterwards are done
|
||||
* without the hypervisor and are trustworthy.
|
||||
*
|
||||
* If the HV fakes SEV support, the guest will crash'n'burn
|
||||
* which is good enough.
|
||||
*/
|
||||
snp = snp_init(bp);
|
||||
|
||||
/* Check for the SME/SEV support leaf */
|
||||
eax = 0x80000000;
|
||||
@ -379,6 +382,36 @@ void sev_enable(struct boot_params *bp)
|
||||
ecx = 0;
|
||||
native_cpuid(&eax, &ebx, &ecx, &edx);
|
||||
/* Check whether SEV is supported */
|
||||
if (!(eax & BIT(1)))
|
||||
return;
|
||||
|
||||
/*
|
||||
* Setup/preliminary detection of SNP. This will be sanity-checked
|
||||
* against CPUID/MSR values later.
|
||||
*/
|
||||
snp = snp_init(bp);
|
||||
|
||||
/* Now repeat the checks with the SNP CPUID table. */
|
||||
|
||||
/* Recheck the SME/SEV support leaf */
|
||||
eax = 0x80000000;
|
||||
ecx = 0;
|
||||
native_cpuid(&eax, &ebx, &ecx, &edx);
|
||||
if (eax < 0x8000001f)
|
||||
return;
|
||||
|
||||
/*
|
||||
* Recheck for the SME/SEV feature:
|
||||
* CPUID Fn8000_001F[EAX]
|
||||
* - Bit 0 - Secure Memory Encryption support
|
||||
* - Bit 1 - Secure Encrypted Virtualization support
|
||||
* CPUID Fn8000_001F[EBX]
|
||||
* - Bits 5:0 - Pagetable bit position used to indicate encryption
|
||||
*/
|
||||
eax = 0x8000001f;
|
||||
ecx = 0;
|
||||
native_cpuid(&eax, &ebx, &ecx, &edx);
|
||||
/* Check whether SEV is supported */
|
||||
if (!(eax & BIT(1))) {
|
||||
if (snp)
|
||||
error("SEV-SNP support indicated by CC blob, but not CPUID.");
|
||||
|
@ -322,8 +322,8 @@ static unsigned long vdso_addr(unsigned long start, unsigned len)
|
||||
|
||||
/* Round the lowest possible end address up to a PMD boundary. */
|
||||
end = (start + len + PMD_SIZE - 1) & PMD_MASK;
|
||||
if (end >= TASK_SIZE_MAX)
|
||||
end = TASK_SIZE_MAX;
|
||||
if (end >= DEFAULT_MAP_WINDOW)
|
||||
end = DEFAULT_MAP_WINDOW;
|
||||
end -= len;
|
||||
|
||||
if (end > start) {
|
||||
|
@ -867,4 +867,6 @@ bool arch_is_platform_page(u64 paddr);
|
||||
#define arch_is_platform_page arch_is_platform_page
|
||||
#endif
|
||||
|
||||
extern bool gds_ucode_mitigated(void);
|
||||
|
||||
#endif /* _ASM_X86_PROCESSOR_H */
|
||||
|
@ -73,6 +73,7 @@ static const int amd_erratum_1054[] =
|
||||
static const int amd_zenbleed[] =
|
||||
AMD_LEGACY_ERRATUM(AMD_MODEL_RANGE(0x17, 0x30, 0x0, 0x4f, 0xf),
|
||||
AMD_MODEL_RANGE(0x17, 0x60, 0x0, 0x7f, 0xf),
|
||||
AMD_MODEL_RANGE(0x17, 0x90, 0x0, 0x91, 0xf),
|
||||
AMD_MODEL_RANGE(0x17, 0xa0, 0x0, 0xaf, 0xf));
|
||||
|
||||
static const int amd_div0[] =
|
||||
|
@ -514,11 +514,17 @@ INIT_PER_CPU(irq_stack_backing_store);
|
||||
|
||||
#ifdef CONFIG_CPU_SRSO
|
||||
/*
|
||||
* GNU ld cannot do XOR so do: (A | B) - (A & B) in order to compute the XOR
|
||||
* GNU ld cannot do XOR until 2.41.
|
||||
* https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=f6f78318fca803c4907fb8d7f6ded8295f1947b1
|
||||
*
|
||||
* LLVM lld cannot do XOR until lld-17.
|
||||
* https://github.com/llvm/llvm-project/commit/fae96104d4378166cbe5c875ef8ed808a356f3fb
|
||||
*
|
||||
* Instead do: (A | B) - (A & B) in order to compute the XOR
|
||||
* of the two function addresses:
|
||||
*/
|
||||
. = ASSERT(((srso_untrain_ret_alias | srso_safe_ret_alias) -
|
||||
(srso_untrain_ret_alias & srso_safe_ret_alias)) == ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)),
|
||||
. = ASSERT(((ABSOLUTE(srso_untrain_ret_alias) | srso_safe_ret_alias) -
|
||||
(ABSOLUTE(srso_untrain_ret_alias) & srso_safe_ret_alias)) == ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)),
|
||||
"SRSO function pair won't alias");
|
||||
#endif
|
||||
|
||||
|
@ -2410,15 +2410,18 @@ static void sev_es_sync_from_ghcb(struct vcpu_svm *svm)
|
||||
*/
|
||||
memset(vcpu->arch.regs, 0, sizeof(vcpu->arch.regs));
|
||||
|
||||
vcpu->arch.regs[VCPU_REGS_RAX] = ghcb_get_rax_if_valid(ghcb);
|
||||
vcpu->arch.regs[VCPU_REGS_RBX] = ghcb_get_rbx_if_valid(ghcb);
|
||||
vcpu->arch.regs[VCPU_REGS_RCX] = ghcb_get_rcx_if_valid(ghcb);
|
||||
vcpu->arch.regs[VCPU_REGS_RDX] = ghcb_get_rdx_if_valid(ghcb);
|
||||
vcpu->arch.regs[VCPU_REGS_RSI] = ghcb_get_rsi_if_valid(ghcb);
|
||||
BUILD_BUG_ON(sizeof(svm->sev_es.valid_bitmap) != sizeof(ghcb->save.valid_bitmap));
|
||||
memcpy(&svm->sev_es.valid_bitmap, &ghcb->save.valid_bitmap, sizeof(ghcb->save.valid_bitmap));
|
||||
|
||||
svm->vmcb->save.cpl = ghcb_get_cpl_if_valid(ghcb);
|
||||
vcpu->arch.regs[VCPU_REGS_RAX] = kvm_ghcb_get_rax_if_valid(svm, ghcb);
|
||||
vcpu->arch.regs[VCPU_REGS_RBX] = kvm_ghcb_get_rbx_if_valid(svm, ghcb);
|
||||
vcpu->arch.regs[VCPU_REGS_RCX] = kvm_ghcb_get_rcx_if_valid(svm, ghcb);
|
||||
vcpu->arch.regs[VCPU_REGS_RDX] = kvm_ghcb_get_rdx_if_valid(svm, ghcb);
|
||||
vcpu->arch.regs[VCPU_REGS_RSI] = kvm_ghcb_get_rsi_if_valid(svm, ghcb);
|
||||
|
||||
if (ghcb_xcr0_is_valid(ghcb)) {
|
||||
svm->vmcb->save.cpl = kvm_ghcb_get_cpl_if_valid(svm, ghcb);
|
||||
|
||||
if (kvm_ghcb_xcr0_is_valid(svm)) {
|
||||
vcpu->arch.xcr0 = ghcb_get_xcr0(ghcb);
|
||||
kvm_update_cpuid_runtime(vcpu);
|
||||
}
|
||||
@ -2429,14 +2432,21 @@ static void sev_es_sync_from_ghcb(struct vcpu_svm *svm)
|
||||
control->exit_code_hi = upper_32_bits(exit_code);
|
||||
control->exit_info_1 = ghcb_get_sw_exit_info_1(ghcb);
|
||||
control->exit_info_2 = ghcb_get_sw_exit_info_2(ghcb);
|
||||
svm->sev_es.sw_scratch = kvm_ghcb_get_sw_scratch_if_valid(svm, ghcb);
|
||||
|
||||
/* Clear the valid entries fields */
|
||||
memset(ghcb->save.valid_bitmap, 0, sizeof(ghcb->save.valid_bitmap));
|
||||
}
|
||||
|
||||
static u64 kvm_ghcb_get_sw_exit_code(struct vmcb_control_area *control)
|
||||
{
|
||||
return (((u64)control->exit_code_hi) << 32) | control->exit_code;
|
||||
}
|
||||
|
||||
static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
|
||||
{
|
||||
struct kvm_vcpu *vcpu;
|
||||
struct vmcb_control_area *control = &svm->vmcb->control;
|
||||
struct kvm_vcpu *vcpu = &svm->vcpu;
|
||||
struct ghcb *ghcb;
|
||||
u64 exit_code;
|
||||
u64 reason;
|
||||
@ -2447,7 +2457,7 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
|
||||
* Retrieve the exit code now even though it may not be marked valid
|
||||
* as it could help with debugging.
|
||||
*/
|
||||
exit_code = ghcb_get_sw_exit_code(ghcb);
|
||||
exit_code = kvm_ghcb_get_sw_exit_code(control);
|
||||
|
||||
/* Only GHCB Usage code 0 is supported */
|
||||
if (ghcb->ghcb_usage) {
|
||||
@ -2457,56 +2467,56 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
|
||||
|
||||
reason = GHCB_ERR_MISSING_INPUT;
|
||||
|
||||
if (!ghcb_sw_exit_code_is_valid(ghcb) ||
|
||||
!ghcb_sw_exit_info_1_is_valid(ghcb) ||
|
||||
!ghcb_sw_exit_info_2_is_valid(ghcb))
|
||||
if (!kvm_ghcb_sw_exit_code_is_valid(svm) ||
|
||||
!kvm_ghcb_sw_exit_info_1_is_valid(svm) ||
|
||||
!kvm_ghcb_sw_exit_info_2_is_valid(svm))
|
||||
goto vmgexit_err;
|
||||
|
||||
switch (ghcb_get_sw_exit_code(ghcb)) {
|
||||
switch (exit_code) {
|
||||
case SVM_EXIT_READ_DR7:
|
||||
break;
|
||||
case SVM_EXIT_WRITE_DR7:
|
||||
if (!ghcb_rax_is_valid(ghcb))
|
||||
if (!kvm_ghcb_rax_is_valid(svm))
|
||||
goto vmgexit_err;
|
||||
break;
|
||||
case SVM_EXIT_RDTSC:
|
||||
break;
|
||||
case SVM_EXIT_RDPMC:
|
||||
if (!ghcb_rcx_is_valid(ghcb))
|
||||
if (!kvm_ghcb_rcx_is_valid(svm))
|
||||
goto vmgexit_err;
|
||||
break;
|
||||
case SVM_EXIT_CPUID:
|
||||
if (!ghcb_rax_is_valid(ghcb) ||
|
||||
!ghcb_rcx_is_valid(ghcb))
|
||||
if (!kvm_ghcb_rax_is_valid(svm) ||
|
||||
!kvm_ghcb_rcx_is_valid(svm))
|
||||
goto vmgexit_err;
|
||||
if (ghcb_get_rax(ghcb) == 0xd)
|
||||
if (!ghcb_xcr0_is_valid(ghcb))
|
||||
if (vcpu->arch.regs[VCPU_REGS_RAX] == 0xd)
|
||||
if (!kvm_ghcb_xcr0_is_valid(svm))
|
||||
goto vmgexit_err;
|
||||
break;
|
||||
case SVM_EXIT_INVD:
|
||||
break;
|
||||
case SVM_EXIT_IOIO:
|
||||
if (ghcb_get_sw_exit_info_1(ghcb) & SVM_IOIO_STR_MASK) {
|
||||
if (!ghcb_sw_scratch_is_valid(ghcb))
|
||||
if (control->exit_info_1 & SVM_IOIO_STR_MASK) {
|
||||
if (!kvm_ghcb_sw_scratch_is_valid(svm))
|
||||
goto vmgexit_err;
|
||||
} else {
|
||||
if (!(ghcb_get_sw_exit_info_1(ghcb) & SVM_IOIO_TYPE_MASK))
|
||||
if (!ghcb_rax_is_valid(ghcb))
|
||||
if (!(control->exit_info_1 & SVM_IOIO_TYPE_MASK))
|
||||
if (!kvm_ghcb_rax_is_valid(svm))
|
||||
goto vmgexit_err;
|
||||
}
|
||||
break;
|
||||
case SVM_EXIT_MSR:
|
||||
if (!ghcb_rcx_is_valid(ghcb))
|
||||
if (!kvm_ghcb_rcx_is_valid(svm))
|
||||
goto vmgexit_err;
|
||||
if (ghcb_get_sw_exit_info_1(ghcb)) {
|
||||
if (!ghcb_rax_is_valid(ghcb) ||
|
||||
!ghcb_rdx_is_valid(ghcb))
|
||||
if (control->exit_info_1) {
|
||||
if (!kvm_ghcb_rax_is_valid(svm) ||
|
||||
!kvm_ghcb_rdx_is_valid(svm))
|
||||
goto vmgexit_err;
|
||||
}
|
||||
break;
|
||||
case SVM_EXIT_VMMCALL:
|
||||
if (!ghcb_rax_is_valid(ghcb) ||
|
||||
!ghcb_cpl_is_valid(ghcb))
|
||||
if (!kvm_ghcb_rax_is_valid(svm) ||
|
||||
!kvm_ghcb_cpl_is_valid(svm))
|
||||
goto vmgexit_err;
|
||||
break;
|
||||
case SVM_EXIT_RDTSCP:
|
||||
@ -2514,19 +2524,19 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
|
||||
case SVM_EXIT_WBINVD:
|
||||
break;
|
||||
case SVM_EXIT_MONITOR:
|
||||
if (!ghcb_rax_is_valid(ghcb) ||
|
||||
!ghcb_rcx_is_valid(ghcb) ||
|
||||
!ghcb_rdx_is_valid(ghcb))
|
||||
if (!kvm_ghcb_rax_is_valid(svm) ||
|
||||
!kvm_ghcb_rcx_is_valid(svm) ||
|
||||
!kvm_ghcb_rdx_is_valid(svm))
|
||||
goto vmgexit_err;
|
||||
break;
|
||||
case SVM_EXIT_MWAIT:
|
||||
if (!ghcb_rax_is_valid(ghcb) ||
|
||||
!ghcb_rcx_is_valid(ghcb))
|
||||
if (!kvm_ghcb_rax_is_valid(svm) ||
|
||||
!kvm_ghcb_rcx_is_valid(svm))
|
||||
goto vmgexit_err;
|
||||
break;
|
||||
case SVM_VMGEXIT_MMIO_READ:
|
||||
case SVM_VMGEXIT_MMIO_WRITE:
|
||||
if (!ghcb_sw_scratch_is_valid(ghcb))
|
||||
if (!kvm_ghcb_sw_scratch_is_valid(svm))
|
||||
goto vmgexit_err;
|
||||
break;
|
||||
case SVM_VMGEXIT_NMI_COMPLETE:
|
||||
@ -2542,8 +2552,6 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
|
||||
return 0;
|
||||
|
||||
vmgexit_err:
|
||||
vcpu = &svm->vcpu;
|
||||
|
||||
if (reason == GHCB_ERR_INVALID_USAGE) {
|
||||
vcpu_unimpl(vcpu, "vmgexit: ghcb usage %#x is not valid\n",
|
||||
ghcb->ghcb_usage);
|
||||
@ -2556,9 +2564,6 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
|
||||
dump_ghcb(svm);
|
||||
}
|
||||
|
||||
/* Clear the valid entries fields */
|
||||
memset(ghcb->save.valid_bitmap, 0, sizeof(ghcb->save.valid_bitmap));
|
||||
|
||||
ghcb_set_sw_exit_info_1(ghcb, 2);
|
||||
ghcb_set_sw_exit_info_2(ghcb, reason);
|
||||
|
||||
@ -2579,7 +2584,7 @@ void sev_es_unmap_ghcb(struct vcpu_svm *svm)
|
||||
*/
|
||||
if (svm->sev_es.ghcb_sa_sync) {
|
||||
kvm_write_guest(svm->vcpu.kvm,
|
||||
ghcb_get_sw_scratch(svm->sev_es.ghcb),
|
||||
svm->sev_es.sw_scratch,
|
||||
svm->sev_es.ghcb_sa,
|
||||
svm->sev_es.ghcb_sa_len);
|
||||
svm->sev_es.ghcb_sa_sync = false;
|
||||
@ -2630,7 +2635,7 @@ static int setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len)
|
||||
u64 scratch_gpa_beg, scratch_gpa_end;
|
||||
void *scratch_va;
|
||||
|
||||
scratch_gpa_beg = ghcb_get_sw_scratch(ghcb);
|
||||
scratch_gpa_beg = svm->sev_es.sw_scratch;
|
||||
if (!scratch_gpa_beg) {
|
||||
pr_err("vmgexit: scratch gpa not provided\n");
|
||||
goto e_scratch;
|
||||
@ -2844,16 +2849,15 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
|
||||
|
||||
trace_kvm_vmgexit_enter(vcpu->vcpu_id, ghcb);
|
||||
|
||||
exit_code = ghcb_get_sw_exit_code(ghcb);
|
||||
|
||||
sev_es_sync_from_ghcb(svm);
|
||||
ret = sev_es_validate_vmgexit(svm);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
sev_es_sync_from_ghcb(svm);
|
||||
ghcb_set_sw_exit_info_1(ghcb, 0);
|
||||
ghcb_set_sw_exit_info_2(ghcb, 0);
|
||||
|
||||
exit_code = kvm_ghcb_get_sw_exit_code(control);
|
||||
switch (exit_code) {
|
||||
case SVM_VMGEXIT_MMIO_READ:
|
||||
ret = setup_vmgexit_scratch(svm, true, control->exit_info_2);
|
||||
|
@ -196,10 +196,12 @@ struct vcpu_sev_es_state {
|
||||
/* SEV-ES support */
|
||||
struct sev_es_save_area *vmsa;
|
||||
struct ghcb *ghcb;
|
||||
u8 valid_bitmap[16];
|
||||
struct kvm_host_map ghcb_map;
|
||||
bool received_first_sipi;
|
||||
|
||||
/* SEV-ES scratch area support */
|
||||
u64 sw_scratch;
|
||||
void *ghcb_sa;
|
||||
u32 ghcb_sa_len;
|
||||
bool ghcb_sa_sync;
|
||||
@ -688,4 +690,28 @@ void sev_es_unmap_ghcb(struct vcpu_svm *svm);
|
||||
void __svm_sev_es_vcpu_run(struct vcpu_svm *svm, bool spec_ctrl_intercepted);
|
||||
void __svm_vcpu_run(struct vcpu_svm *svm, bool spec_ctrl_intercepted);
|
||||
|
||||
#define DEFINE_KVM_GHCB_ACCESSORS(field) \
|
||||
static __always_inline bool kvm_ghcb_##field##_is_valid(const struct vcpu_svm *svm) \
|
||||
{ \
|
||||
return test_bit(GHCB_BITMAP_IDX(field), \
|
||||
(unsigned long *)&svm->sev_es.valid_bitmap); \
|
||||
} \
|
||||
\
|
||||
static __always_inline u64 kvm_ghcb_get_##field##_if_valid(struct vcpu_svm *svm, struct ghcb *ghcb) \
|
||||
{ \
|
||||
return kvm_ghcb_##field##_is_valid(svm) ? ghcb->save.field : 0; \
|
||||
} \
|
||||
|
||||
DEFINE_KVM_GHCB_ACCESSORS(cpl)
|
||||
DEFINE_KVM_GHCB_ACCESSORS(rax)
|
||||
DEFINE_KVM_GHCB_ACCESSORS(rcx)
|
||||
DEFINE_KVM_GHCB_ACCESSORS(rdx)
|
||||
DEFINE_KVM_GHCB_ACCESSORS(rbx)
|
||||
DEFINE_KVM_GHCB_ACCESSORS(rsi)
|
||||
DEFINE_KVM_GHCB_ACCESSORS(sw_exit_code)
|
||||
DEFINE_KVM_GHCB_ACCESSORS(sw_exit_info_1)
|
||||
DEFINE_KVM_GHCB_ACCESSORS(sw_exit_info_2)
|
||||
DEFINE_KVM_GHCB_ACCESSORS(sw_scratch)
|
||||
DEFINE_KVM_GHCB_ACCESSORS(xcr0)
|
||||
|
||||
#endif
|
||||
|
@ -311,8 +311,6 @@ u64 __read_mostly host_xcr0;
|
||||
|
||||
static struct kmem_cache *x86_emulator_cache;
|
||||
|
||||
extern bool gds_ucode_mitigated(void);
|
||||
|
||||
/*
|
||||
* When called, it means the previous get/set msr reached an invalid msr.
|
||||
* Return true if we want to ignore/silent this failed msr access.
|
||||
|
@ -1712,6 +1712,7 @@ static bool acpi_device_enumeration_by_parent(struct acpi_device *device)
|
||||
{"BSG1160", },
|
||||
{"BSG2150", },
|
||||
{"CSC3551", },
|
||||
{"CSC3556", },
|
||||
{"INT33FE", },
|
||||
{"INT3515", },
|
||||
/* Non-conforming _HID for Cirrus Logic already released */
|
||||
|
@ -6840,6 +6840,7 @@ static int __init binder_init(void)
|
||||
|
||||
err_alloc_device_names_failed:
|
||||
debugfs_remove_recursive(binder_debugfs_dir_entry_root);
|
||||
binder_alloc_shrinker_exit();
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
@ -1089,6 +1089,12 @@ int binder_alloc_shrinker_init(void)
|
||||
return ret;
|
||||
}
|
||||
|
||||
void binder_alloc_shrinker_exit(void)
|
||||
{
|
||||
unregister_shrinker(&binder_shrinker);
|
||||
list_lru_destroy(&binder_alloc_lru);
|
||||
}
|
||||
|
||||
/**
|
||||
* check_buffer() - verify that buffer/offset is safe to access
|
||||
* @alloc: binder_alloc for this proc
|
||||
|
@ -129,6 +129,7 @@ extern struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
|
||||
int pid);
|
||||
extern void binder_alloc_init(struct binder_alloc *alloc);
|
||||
extern int binder_alloc_shrinker_init(void);
|
||||
extern void binder_alloc_shrinker_exit(void);
|
||||
extern void binder_alloc_vma_close(struct binder_alloc *alloc);
|
||||
extern struct binder_buffer *
|
||||
binder_alloc_prepare_to_free(struct binder_alloc *alloc,
|
||||
|
@ -507,70 +507,6 @@ static int tpm_add_legacy_sysfs(struct tpm_chip *chip)
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* Some AMD fTPM versions may cause stutter
|
||||
* https://www.amd.com/en/support/kb/faq/pa-410
|
||||
*
|
||||
* Fixes are available in two series of fTPM firmware:
|
||||
* 6.x.y.z series: 6.0.18.6 +
|
||||
* 3.x.y.z series: 3.57.y.5 +
|
||||
*/
|
||||
#ifdef CONFIG_X86
|
||||
static bool tpm_amd_is_rng_defective(struct tpm_chip *chip)
|
||||
{
|
||||
u32 val1, val2;
|
||||
u64 version;
|
||||
int ret;
|
||||
|
||||
if (!(chip->flags & TPM_CHIP_FLAG_TPM2))
|
||||
return false;
|
||||
|
||||
ret = tpm_request_locality(chip);
|
||||
if (ret)
|
||||
return false;
|
||||
|
||||
ret = tpm2_get_tpm_pt(chip, TPM2_PT_MANUFACTURER, &val1, NULL);
|
||||
if (ret)
|
||||
goto release;
|
||||
if (val1 != 0x414D4400U /* AMD */) {
|
||||
ret = -ENODEV;
|
||||
goto release;
|
||||
}
|
||||
ret = tpm2_get_tpm_pt(chip, TPM2_PT_FIRMWARE_VERSION_1, &val1, NULL);
|
||||
if (ret)
|
||||
goto release;
|
||||
ret = tpm2_get_tpm_pt(chip, TPM2_PT_FIRMWARE_VERSION_2, &val2, NULL);
|
||||
|
||||
release:
|
||||
tpm_relinquish_locality(chip);
|
||||
|
||||
if (ret)
|
||||
return false;
|
||||
|
||||
version = ((u64)val1 << 32) | val2;
|
||||
if ((version >> 48) == 6) {
|
||||
if (version >= 0x0006000000180006ULL)
|
||||
return false;
|
||||
} else if ((version >> 48) == 3) {
|
||||
if (version >= 0x0003005700000005ULL)
|
||||
return false;
|
||||
} else {
|
||||
return false;
|
||||
}
|
||||
|
||||
dev_warn(&chip->dev,
|
||||
"AMD fTPM version 0x%llx causes system stutter; hwrng disabled\n",
|
||||
version);
|
||||
|
||||
return true;
|
||||
}
|
||||
#else
|
||||
static inline bool tpm_amd_is_rng_defective(struct tpm_chip *chip)
|
||||
{
|
||||
return false;
|
||||
}
|
||||
#endif /* CONFIG_X86 */
|
||||
|
||||
static int tpm_hwrng_read(struct hwrng *rng, void *data, size_t max, bool wait)
|
||||
{
|
||||
struct tpm_chip *chip = container_of(rng, struct tpm_chip, hwrng);
|
||||
@ -582,10 +518,20 @@ static int tpm_hwrng_read(struct hwrng *rng, void *data, size_t max, bool wait)
|
||||
return tpm_get_random(chip, data, max);
|
||||
}
|
||||
|
||||
static bool tpm_is_hwrng_enabled(struct tpm_chip *chip)
|
||||
{
|
||||
if (!IS_ENABLED(CONFIG_HW_RANDOM_TPM))
|
||||
return false;
|
||||
if (tpm_is_firmware_upgrade(chip))
|
||||
return false;
|
||||
if (chip->flags & TPM_CHIP_FLAG_HWRNG_DISABLED)
|
||||
return false;
|
||||
return true;
|
||||
}
|
||||
|
||||
static int tpm_add_hwrng(struct tpm_chip *chip)
|
||||
{
|
||||
if (!IS_ENABLED(CONFIG_HW_RANDOM_TPM) || tpm_is_firmware_upgrade(chip) ||
|
||||
tpm_amd_is_rng_defective(chip))
|
||||
if (!tpm_is_hwrng_enabled(chip))
|
||||
return 0;
|
||||
|
||||
snprintf(chip->hwrng_name, sizeof(chip->hwrng_name),
|
||||
@ -690,7 +636,7 @@ int tpm_chip_register(struct tpm_chip *chip)
|
||||
return 0;
|
||||
|
||||
out_hwrng:
|
||||
if (IS_ENABLED(CONFIG_HW_RANDOM_TPM) && !tpm_is_firmware_upgrade(chip))
|
||||
if (tpm_is_hwrng_enabled(chip))
|
||||
hwrng_unregister(&chip->hwrng);
|
||||
out_ppi:
|
||||
tpm_bios_log_teardown(chip);
|
||||
@ -715,8 +661,7 @@ EXPORT_SYMBOL_GPL(tpm_chip_register);
|
||||
void tpm_chip_unregister(struct tpm_chip *chip)
|
||||
{
|
||||
tpm_del_legacy_sysfs(chip);
|
||||
if (IS_ENABLED(CONFIG_HW_RANDOM_TPM) && !tpm_is_firmware_upgrade(chip) &&
|
||||
!tpm_amd_is_rng_defective(chip))
|
||||
if (tpm_is_hwrng_enabled(chip))
|
||||
hwrng_unregister(&chip->hwrng);
|
||||
tpm_bios_log_teardown(chip);
|
||||
if (chip->flags & TPM_CHIP_FLAG_TPM2 && !tpm_is_firmware_upgrade(chip))
|
||||
|
@ -463,6 +463,28 @@ static bool crb_req_canceled(struct tpm_chip *chip, u8 status)
|
||||
return (cancel & CRB_CANCEL_INVOKE) == CRB_CANCEL_INVOKE;
|
||||
}
|
||||
|
||||
static int crb_check_flags(struct tpm_chip *chip)
|
||||
{
|
||||
u32 val;
|
||||
int ret;
|
||||
|
||||
ret = crb_request_locality(chip, 0);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = tpm2_get_tpm_pt(chip, TPM2_PT_MANUFACTURER, &val, NULL);
|
||||
if (ret)
|
||||
goto release;
|
||||
|
||||
if (val == 0x414D4400U /* AMD */)
|
||||
chip->flags |= TPM_CHIP_FLAG_HWRNG_DISABLED;
|
||||
|
||||
release:
|
||||
crb_relinquish_locality(chip, 0);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static const struct tpm_class_ops tpm_crb = {
|
||||
.flags = TPM_OPS_AUTO_STARTUP,
|
||||
.status = crb_status,
|
||||
@ -800,6 +822,14 @@ static int crb_acpi_add(struct acpi_device *device)
|
||||
chip->acpi_dev_handle = device->handle;
|
||||
chip->flags = TPM_CHIP_FLAG_TPM2;
|
||||
|
||||
rc = tpm_chip_bootstrap(chip);
|
||||
if (rc)
|
||||
goto out;
|
||||
|
||||
rc = crb_check_flags(chip);
|
||||
if (rc)
|
||||
goto out;
|
||||
|
||||
rc = tpm_chip_register(chip);
|
||||
|
||||
out:
|
||||
|
@ -152,6 +152,30 @@ int dt_idle_pd_init_topology(struct device_node *np)
|
||||
return 0;
|
||||
}
|
||||
|
||||
int dt_idle_pd_remove_topology(struct device_node *np)
|
||||
{
|
||||
struct device_node *node;
|
||||
struct of_phandle_args child, parent;
|
||||
int ret;
|
||||
|
||||
for_each_child_of_node(np, node) {
|
||||
if (of_parse_phandle_with_args(node, "power-domains",
|
||||
"#power-domain-cells", 0, &parent))
|
||||
continue;
|
||||
|
||||
child.np = node;
|
||||
child.args_count = 0;
|
||||
ret = of_genpd_remove_subdomain(&parent, &child);
|
||||
of_node_put(parent.np);
|
||||
if (ret) {
|
||||
of_node_put(node);
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
struct device *dt_idle_attach_cpu(int cpu, const char *name)
|
||||
{
|
||||
struct device *dev;
|
||||
|
@ -14,6 +14,8 @@ struct generic_pm_domain *dt_idle_pd_alloc(struct device_node *np,
|
||||
|
||||
int dt_idle_pd_init_topology(struct device_node *np);
|
||||
|
||||
int dt_idle_pd_remove_topology(struct device_node *np);
|
||||
|
||||
struct device *dt_idle_attach_cpu(int cpu, const char *name);
|
||||
|
||||
void dt_idle_detach_cpu(struct device *dev);
|
||||
@ -36,6 +38,11 @@ static inline int dt_idle_pd_init_topology(struct device_node *np)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline int dt_idle_pd_remove_topology(struct device_node *np)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline struct device *dt_idle_attach_cpu(int cpu, const char *name)
|
||||
{
|
||||
return NULL;
|
||||
|
@ -191,7 +191,13 @@ static int mcf_edma_probe(struct platform_device *pdev)
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
chans = pdata->dma_channels;
|
||||
if (!pdata->dma_channels) {
|
||||
dev_info(&pdev->dev, "setting default channel number to 64");
|
||||
chans = 64;
|
||||
} else {
|
||||
chans = pdata->dma_channels;
|
||||
}
|
||||
|
||||
len = sizeof(*mcf_edma) + sizeof(*mcf_chan) * chans;
|
||||
mcf_edma = devm_kzalloc(&pdev->dev, len, GFP_KERNEL);
|
||||
if (!mcf_edma)
|
||||
@ -203,11 +209,6 @@ static int mcf_edma_probe(struct platform_device *pdev)
|
||||
mcf_edma->drvdata = &mcf_data;
|
||||
mcf_edma->big_endian = 1;
|
||||
|
||||
if (!mcf_edma->n_chans) {
|
||||
dev_info(&pdev->dev, "setting default channel number to 64");
|
||||
mcf_edma->n_chans = 64;
|
||||
}
|
||||
|
||||
mutex_init(&mcf_edma->fsl_edma_mutex);
|
||||
|
||||
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
|
||||
|
@ -192,7 +192,7 @@ struct owl_dma_pchan {
|
||||
};
|
||||
|
||||
/**
|
||||
* struct owl_dma_pchan - Wrapper for DMA ENGINE channel
|
||||
* struct owl_dma_vchan - Wrapper for DMA ENGINE channel
|
||||
* @vc: wrapped virtual channel
|
||||
* @pchan: the physical channel utilized by this channel
|
||||
* @txd: active transaction on this channel
|
||||
|
@ -403,6 +403,12 @@ enum desc_status {
|
||||
* of a channel can be BUSY at any time.
|
||||
*/
|
||||
BUSY,
|
||||
/*
|
||||
* Pause was called while descriptor was BUSY. Due to hardware
|
||||
* limitations, only termination is possible for descriptors
|
||||
* that have been paused.
|
||||
*/
|
||||
PAUSED,
|
||||
/*
|
||||
* Sitting on the channel work_list but xfer done
|
||||
* by PL330 core
|
||||
@ -2041,7 +2047,7 @@ static inline void fill_queue(struct dma_pl330_chan *pch)
|
||||
list_for_each_entry(desc, &pch->work_list, node) {
|
||||
|
||||
/* If already submitted */
|
||||
if (desc->status == BUSY)
|
||||
if (desc->status == BUSY || desc->status == PAUSED)
|
||||
continue;
|
||||
|
||||
ret = pl330_submit_req(pch->thread, desc);
|
||||
@ -2326,6 +2332,7 @@ static int pl330_pause(struct dma_chan *chan)
|
||||
{
|
||||
struct dma_pl330_chan *pch = to_pchan(chan);
|
||||
struct pl330_dmac *pl330 = pch->dmac;
|
||||
struct dma_pl330_desc *desc;
|
||||
unsigned long flags;
|
||||
|
||||
pm_runtime_get_sync(pl330->ddma.dev);
|
||||
@ -2335,6 +2342,10 @@ static int pl330_pause(struct dma_chan *chan)
|
||||
_stop(pch->thread);
|
||||
spin_unlock(&pl330->lock);
|
||||
|
||||
list_for_each_entry(desc, &pch->work_list, node) {
|
||||
if (desc->status == BUSY)
|
||||
desc->status = PAUSED;
|
||||
}
|
||||
spin_unlock_irqrestore(&pch->lock, flags);
|
||||
pm_runtime_mark_last_busy(pl330->ddma.dev);
|
||||
pm_runtime_put_autosuspend(pl330->ddma.dev);
|
||||
@ -2425,7 +2436,7 @@ pl330_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
|
||||
else if (running && desc == running)
|
||||
transferred =
|
||||
pl330_get_current_xferred_count(pch, desc);
|
||||
else if (desc->status == BUSY)
|
||||
else if (desc->status == BUSY || desc->status == PAUSED)
|
||||
/*
|
||||
* Busy but not running means either just enqueued,
|
||||
* or finished and not yet marked done
|
||||
@ -2442,6 +2453,9 @@ pl330_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
|
||||
case DONE:
|
||||
ret = DMA_COMPLETE;
|
||||
break;
|
||||
case PAUSED:
|
||||
ret = DMA_PAUSED;
|
||||
break;
|
||||
case PREP:
|
||||
case BUSY:
|
||||
ret = DMA_IN_PROGRESS;
|
||||
|
@ -425,6 +425,7 @@ static int gpio_sim_add_bank(struct fwnode_handle *swnode, struct device *dev)
|
||||
gc->set_config = gpio_sim_set_config;
|
||||
gc->to_irq = gpio_sim_to_irq;
|
||||
gc->free = gpio_sim_free;
|
||||
gc->can_sleep = true;
|
||||
|
||||
ret = devm_gpiochip_add_data(dev, gc, chip);
|
||||
if (ret)
|
||||
|
@ -18,7 +18,7 @@
|
||||
#include <linux/spinlock.h>
|
||||
#include <linux/types.h>
|
||||
|
||||
#define WS16C48_EXTENT 10
|
||||
#define WS16C48_EXTENT 11
|
||||
#define MAX_NUM_WS16C48 max_num_isa_dev(WS16C48_EXTENT)
|
||||
|
||||
static unsigned int base[MAX_NUM_WS16C48];
|
||||
|
@ -242,6 +242,7 @@ extern int amdgpu_num_kcq;
|
||||
|
||||
#define AMDGPU_VCNFW_LOG_SIZE (32 * 1024)
|
||||
extern int amdgpu_vcnfw_log;
|
||||
extern int amdgpu_sg_display;
|
||||
|
||||
#define AMDGPU_VM_MAX_NUM_CTX 4096
|
||||
#define AMDGPU_SG_THRESHOLD (256*1024*1024)
|
||||
@ -283,6 +284,9 @@ extern int amdgpu_vcnfw_log;
|
||||
#define AMDGPU_SMARTSHIFT_MAX_BIAS (100)
|
||||
#define AMDGPU_SMARTSHIFT_MIN_BIAS (-100)
|
||||
|
||||
/* Extra time delay(in ms) to eliminate the influence of temperature momentary fluctuation */
|
||||
#define AMDGPU_SWCTF_EXTRA_DELAY 50
|
||||
|
||||
struct amdgpu_device;
|
||||
struct amdgpu_irq_src;
|
||||
struct amdgpu_fpriv;
|
||||
@ -1262,6 +1266,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
|
||||
void amdgpu_device_pci_config_reset(struct amdgpu_device *adev);
|
||||
int amdgpu_device_pci_reset(struct amdgpu_device *adev);
|
||||
bool amdgpu_device_need_post(struct amdgpu_device *adev);
|
||||
bool amdgpu_sg_display_supported(struct amdgpu_device *adev);
|
||||
bool amdgpu_device_pcie_dynamic_switching_supported(void);
|
||||
bool amdgpu_device_should_use_aspm(struct amdgpu_device *adev);
|
||||
bool amdgpu_device_aspm_support_quirk(void);
|
||||
|
@ -287,7 +287,7 @@ static int amdgpu_cs_pass1(struct amdgpu_cs_parser *p,
|
||||
|
||||
if (!p->gang_size) {
|
||||
ret = -EINVAL;
|
||||
goto free_partial_kdata;
|
||||
goto free_all_kdata;
|
||||
}
|
||||
|
||||
for (i = 0; i < p->gang_size; ++i) {
|
||||
|
@ -1333,6 +1333,32 @@ bool amdgpu_device_need_post(struct amdgpu_device *adev)
|
||||
return true;
|
||||
}
|
||||
|
||||
/*
|
||||
* On APUs with >= 64GB white flickering has been observed w/ SG enabled.
|
||||
* Disable S/G on such systems until we have a proper fix.
|
||||
* https://gitlab.freedesktop.org/drm/amd/-/issues/2354
|
||||
* https://gitlab.freedesktop.org/drm/amd/-/issues/2735
|
||||
*/
|
||||
bool amdgpu_sg_display_supported(struct amdgpu_device *adev)
|
||||
{
|
||||
switch (amdgpu_sg_display) {
|
||||
case -1:
|
||||
break;
|
||||
case 0:
|
||||
return false;
|
||||
case 1:
|
||||
return true;
|
||||
default:
|
||||
return false;
|
||||
}
|
||||
if ((totalram_pages() << (PAGE_SHIFT - 10)) +
|
||||
(adev->gmc.real_vram_size / 1024) >= 64000000) {
|
||||
DRM_WARN("Disabling S/G due to >=64GB RAM\n");
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
/*
|
||||
* Intel hosts such as Raptor Lake and Sapphire Rapids don't support dynamic
|
||||
* speed switching. Until we have confirmation from Intel that a specific host
|
||||
|
@ -185,6 +185,7 @@ int amdgpu_num_kcq = -1;
|
||||
int amdgpu_smartshift_bias;
|
||||
int amdgpu_use_xgmi_p2p = 1;
|
||||
int amdgpu_vcnfw_log;
|
||||
int amdgpu_sg_display = -1; /* auto */
|
||||
|
||||
static void amdgpu_drv_delayed_reset_work_handler(struct work_struct *work);
|
||||
|
||||
@ -929,6 +930,16 @@ module_param_named(num_kcq, amdgpu_num_kcq, int, 0444);
|
||||
MODULE_PARM_DESC(vcnfw_log, "Enable vcnfw log(0 = disable (default value), 1 = enable)");
|
||||
module_param_named(vcnfw_log, amdgpu_vcnfw_log, int, 0444);
|
||||
|
||||
/**
|
||||
* DOC: sg_display (int)
|
||||
* Disable S/G (scatter/gather) display (i.e., display from system memory).
|
||||
* This option is only relevant on APUs. Set this option to 0 to disable
|
||||
* S/G display if you experience flickering or other issues under memory
|
||||
* pressure and report the issue.
|
||||
*/
|
||||
MODULE_PARM_DESC(sg_display, "S/G Display (-1 = auto (default), 0 = disable)");
|
||||
module_param_named(sg_display, amdgpu_sg_display, int, 0444);
|
||||
|
||||
/**
|
||||
* DOC: smu_pptable_id (int)
|
||||
* Used to override pptable id. id = 0 use VBIOS pptable.
|
||||
|
@ -1634,6 +1634,8 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
|
||||
}
|
||||
break;
|
||||
}
|
||||
if (init_data.flags.gpu_vm_support)
|
||||
init_data.flags.gpu_vm_support = amdgpu_sg_display_supported(adev);
|
||||
|
||||
if (init_data.flags.gpu_vm_support)
|
||||
adev->mode_info.gpu_vm_support = true;
|
||||
|
@ -1079,6 +1079,7 @@ static void disable_dangling_plane(struct dc *dc, struct dc_state *context)
|
||||
struct dc_state *dangling_context = dc_create_state(dc);
|
||||
struct dc_state *current_ctx;
|
||||
struct pipe_ctx *pipe;
|
||||
struct timing_generator *tg;
|
||||
|
||||
if (dangling_context == NULL)
|
||||
return;
|
||||
@ -1122,6 +1123,7 @@ static void disable_dangling_plane(struct dc *dc, struct dc_state *context)
|
||||
|
||||
if (should_disable && old_stream) {
|
||||
pipe = &dc->current_state->res_ctx.pipe_ctx[i];
|
||||
tg = pipe->stream_res.tg;
|
||||
/* When disabling plane for a phantom pipe, we must turn on the
|
||||
* phantom OTG so the disable programming gets the double buffer
|
||||
* update. Otherwise the pipe will be left in a partially disabled
|
||||
@ -1129,7 +1131,8 @@ static void disable_dangling_plane(struct dc *dc, struct dc_state *context)
|
||||
* again for different use.
|
||||
*/
|
||||
if (old_stream->mall_stream_config.type == SUBVP_PHANTOM) {
|
||||
pipe->stream_res.tg->funcs->enable_crtc(pipe->stream_res.tg);
|
||||
if (tg->funcs->enable_crtc)
|
||||
tg->funcs->enable_crtc(tg);
|
||||
}
|
||||
dc_rem_all_planes_for_stream(dc, old_stream, dangling_context);
|
||||
disable_all_writeback_pipes_for_stream(dc, old_stream, dangling_context);
|
||||
@ -1146,6 +1149,15 @@ static void disable_dangling_plane(struct dc *dc, struct dc_state *context)
|
||||
dc->hwss.interdependent_update_lock(dc, dc->current_state, false);
|
||||
dc->hwss.post_unlock_program_front_end(dc, dangling_context);
|
||||
}
|
||||
/* We need to put the phantom OTG back into it's default (disabled) state or we
|
||||
* can get corruption when transition from one SubVP config to a different one.
|
||||
* The OTG is set to disable on falling edge of VUPDATE so the plane disable
|
||||
* will still get it's double buffer update.
|
||||
*/
|
||||
if (old_stream->mall_stream_config.type == SUBVP_PHANTOM) {
|
||||
if (tg->funcs->disable_phantom_crtc)
|
||||
tg->funcs->disable_phantom_crtc(tg);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@ -1942,6 +1954,9 @@ enum dc_status dc_commit_streams(struct dc *dc,
|
||||
struct pipe_ctx *pipe;
|
||||
bool handle_exit_odm2to1 = false;
|
||||
|
||||
if (dc->ctx->dce_environment == DCE_ENV_VIRTUAL_HW)
|
||||
return res;
|
||||
|
||||
if (!streams_changed(dc, streams, stream_count))
|
||||
return res;
|
||||
|
||||
@ -1984,21 +1999,33 @@ enum dc_status dc_commit_streams(struct dc *dc,
|
||||
|
||||
dc_resource_state_copy_construct_current(dc, context);
|
||||
|
||||
/*
|
||||
* Previous validation was perfomred with fast_validation = true and
|
||||
* the full DML state required for hardware programming was skipped.
|
||||
*
|
||||
* Re-validate here to calculate these parameters / watermarks.
|
||||
*/
|
||||
res = dc_validate_global_state(dc, context, false);
|
||||
res = dc_validate_with_context(dc, set, stream_count, context, false);
|
||||
if (res != DC_OK) {
|
||||
DC_LOG_ERROR("DC commit global validation failure: %s (%d)",
|
||||
dc_status_to_str(res), res);
|
||||
return res;
|
||||
BREAK_TO_DEBUGGER();
|
||||
goto fail;
|
||||
}
|
||||
|
||||
res = dc_commit_state_no_check(dc, context);
|
||||
|
||||
for (i = 0; i < stream_count; i++) {
|
||||
for (j = 0; j < context->stream_count; j++) {
|
||||
if (streams[i]->stream_id == context->streams[j]->stream_id)
|
||||
streams[i]->out.otg_offset = context->stream_status[j].primary_otg_inst;
|
||||
|
||||
if (dc_is_embedded_signal(streams[i]->signal)) {
|
||||
struct dc_stream_status *status = dc_stream_get_status_from_state(context, streams[i]);
|
||||
|
||||
if (dc->hwss.is_abm_supported)
|
||||
status->is_abm_supported = dc->hwss.is_abm_supported(dc, context, streams[i]);
|
||||
else
|
||||
status->is_abm_supported = true;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
fail:
|
||||
dc_release_state(context);
|
||||
|
||||
context_alloc_fail:
|
||||
|
||||
DC_LOG_DC("%s Finished.\n", __func__);
|
||||
@ -3122,6 +3149,19 @@ static bool update_planes_and_stream_state(struct dc *dc,
|
||||
|
||||
if (update_type == UPDATE_TYPE_FULL) {
|
||||
if (!dc->res_pool->funcs->validate_bandwidth(dc, context, false)) {
|
||||
/* For phantom pipes we remove and create a new set of phantom pipes
|
||||
* for each full update (because we don't know if we'll need phantom
|
||||
* pipes until after the first round of validation). However, if validation
|
||||
* fails we need to keep the existing phantom pipes (because we don't update
|
||||
* the dc->current_state).
|
||||
*
|
||||
* The phantom stream/plane refcount is decremented for validation because
|
||||
* we assume it'll be removed (the free comes when the dc_state is freed),
|
||||
* but if validation fails we have to increment back the refcount so it's
|
||||
* consistent.
|
||||
*/
|
||||
if (dc->res_pool->funcs->retain_phantom_pipes)
|
||||
dc->res_pool->funcs->retain_phantom_pipes(dc, dc->current_state);
|
||||
BREAK_TO_DEBUGGER();
|
||||
goto fail;
|
||||
}
|
||||
@ -3987,6 +4027,18 @@ void dc_commit_updates_for_stream(struct dc *dc,
|
||||
struct dc_context *dc_ctx = dc->ctx;
|
||||
int i, j;
|
||||
|
||||
/* TODO: Since change commit sequence can have a huge impact,
|
||||
* we decided to only enable it for DCN3x. However, as soon as
|
||||
* we get more confident about this change we'll need to enable
|
||||
* the new sequence for all ASICs.
|
||||
*/
|
||||
if (dc->ctx->dce_version >= DCN_VERSION_3_2) {
|
||||
dc_update_planes_and_stream(dc, srf_updates,
|
||||
surface_count, stream,
|
||||
stream_update);
|
||||
return;
|
||||
}
|
||||
|
||||
stream_status = dc_stream_get_status(stream);
|
||||
context = dc->current_state;
|
||||
|
||||
|
@ -1141,6 +1141,11 @@ static bool detect_link_and_local_sink(struct dc_link *link,
|
||||
(link->dpcd_caps.dongle_type !=
|
||||
DISPLAY_DONGLE_DP_HDMI_CONVERTER))
|
||||
converter_disable_audio = true;
|
||||
|
||||
/* limited link rate to HBR3 for DPIA until we implement USB4 V2 */
|
||||
if (link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA &&
|
||||
link->reported_link_cap.link_rate > LINK_RATE_HIGH3)
|
||||
link->reported_link_cap.link_rate = LINK_RATE_HIGH3;
|
||||
break;
|
||||
}
|
||||
|
||||
|
@ -2616,15 +2616,241 @@ bool dc_resource_is_dsc_encoding_supported(const struct dc *dc)
|
||||
return dc->res_pool->res_cap->num_dsc > 0;
|
||||
}
|
||||
|
||||
static bool planes_changed_for_existing_stream(struct dc_state *context,
|
||||
struct dc_stream_state *stream,
|
||||
const struct dc_validation_set set[],
|
||||
int set_count)
|
||||
{
|
||||
int i, j;
|
||||
struct dc_stream_status *stream_status = NULL;
|
||||
|
||||
for (i = 0; i < context->stream_count; i++) {
|
||||
if (context->streams[i] == stream) {
|
||||
stream_status = &context->stream_status[i];
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (!stream_status)
|
||||
ASSERT(0);
|
||||
|
||||
for (i = 0; i < set_count; i++)
|
||||
if (set[i].stream == stream)
|
||||
break;
|
||||
|
||||
if (i == set_count)
|
||||
ASSERT(0);
|
||||
|
||||
if (set[i].plane_count != stream_status->plane_count)
|
||||
return true;
|
||||
|
||||
for (j = 0; j < set[i].plane_count; j++)
|
||||
if (set[i].plane_states[j] != stream_status->plane_states[j])
|
||||
return true;
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
/**
|
||||
* dc_validate_global_state() - Determine if HW can support a given state
|
||||
* Checks HW resource availability and bandwidth requirement.
|
||||
* dc_validate_with_context - Validate and update the potential new stream in the context object
|
||||
*
|
||||
* @dc: Used to get the current state status
|
||||
* @set: An array of dc_validation_set with all the current streams reference
|
||||
* @set_count: Total of streams
|
||||
* @context: New context
|
||||
* @fast_validate: Enable or disable fast validation
|
||||
*
|
||||
* This function updates the potential new stream in the context object. It
|
||||
* creates multiple lists for the add, remove, and unchanged streams. In
|
||||
* particular, if the unchanged streams have a plane that changed, it is
|
||||
* necessary to remove all planes from the unchanged streams. In summary, this
|
||||
* function is responsible for validating the new context.
|
||||
*
|
||||
* Return:
|
||||
* In case of success, return DC_OK (1), otherwise, return a DC error.
|
||||
*/
|
||||
enum dc_status dc_validate_with_context(struct dc *dc,
|
||||
const struct dc_validation_set set[],
|
||||
int set_count,
|
||||
struct dc_state *context,
|
||||
bool fast_validate)
|
||||
{
|
||||
struct dc_stream_state *unchanged_streams[MAX_PIPES] = { 0 };
|
||||
struct dc_stream_state *del_streams[MAX_PIPES] = { 0 };
|
||||
struct dc_stream_state *add_streams[MAX_PIPES] = { 0 };
|
||||
int old_stream_count = context->stream_count;
|
||||
enum dc_status res = DC_ERROR_UNEXPECTED;
|
||||
int unchanged_streams_count = 0;
|
||||
int del_streams_count = 0;
|
||||
int add_streams_count = 0;
|
||||
bool found = false;
|
||||
int i, j, k;
|
||||
|
||||
DC_LOGGER_INIT(dc->ctx->logger);
|
||||
|
||||
/* First build a list of streams to be remove from current context */
|
||||
for (i = 0; i < old_stream_count; i++) {
|
||||
struct dc_stream_state *stream = context->streams[i];
|
||||
|
||||
for (j = 0; j < set_count; j++) {
|
||||
if (stream == set[j].stream) {
|
||||
found = true;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (!found)
|
||||
del_streams[del_streams_count++] = stream;
|
||||
|
||||
found = false;
|
||||
}
|
||||
|
||||
/* Second, build a list of new streams */
|
||||
for (i = 0; i < set_count; i++) {
|
||||
struct dc_stream_state *stream = set[i].stream;
|
||||
|
||||
for (j = 0; j < old_stream_count; j++) {
|
||||
if (stream == context->streams[j]) {
|
||||
found = true;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (!found)
|
||||
add_streams[add_streams_count++] = stream;
|
||||
|
||||
found = false;
|
||||
}
|
||||
|
||||
/* Build a list of unchanged streams which is necessary for handling
|
||||
	 * planes change such as added, removed, and updated.
	 */
	for (i = 0; i < set_count; i++) {
		/* Check if stream is part of the delete list */
		for (j = 0; j < del_streams_count; j++) {
			if (set[i].stream == del_streams[j]) {
				found = true;
				break;
			}
		}

		if (!found) {
			/* Check if stream is part of the add list */
			for (j = 0; j < add_streams_count; j++) {
				if (set[i].stream == add_streams[j]) {
					found = true;
					break;
				}
			}
		}

		if (!found)
			unchanged_streams[unchanged_streams_count++] = set[i].stream;

		found = false;
	}

	/* Remove all planes for unchanged streams if planes changed */
	for (i = 0; i < unchanged_streams_count; i++) {
		if (planes_changed_for_existing_stream(context,
				unchanged_streams[i],
				set,
				set_count)) {
			if (!dc_rem_all_planes_for_stream(dc,
					unchanged_streams[i],
					context)) {
				res = DC_FAIL_DETACH_SURFACES;
				goto fail;
			}
		}
	}

	/* Remove all planes for removed streams and then remove the streams */
	for (i = 0; i < del_streams_count; i++) {
		/* Need to cpy the dwb data from the old stream in order to efc to work */
		if (del_streams[i]->num_wb_info > 0) {
			for (j = 0; j < add_streams_count; j++) {
				if (del_streams[i]->sink == add_streams[j]->sink) {
					add_streams[j]->num_wb_info = del_streams[i]->num_wb_info;
					for (k = 0; k < del_streams[i]->num_wb_info; k++)
						add_streams[j]->writeback_info[k] = del_streams[i]->writeback_info[k];
				}
			}
		}

		if (!dc_rem_all_planes_for_stream(dc, del_streams[i], context)) {
			res = DC_FAIL_DETACH_SURFACES;
			goto fail;
		}

		res = dc_remove_stream_from_ctx(dc, context, del_streams[i]);
		if (res != DC_OK)
			goto fail;
	}

	/* Swap seamless boot stream to pipe 0 (if needed) to ensure pipe_ctx
	 * matches. This may change in the future if seamless_boot_stream can be
	 * multiple.
	 */
	for (i = 0; i < add_streams_count; i++) {
		mark_seamless_boot_stream(dc, add_streams[i]);
		if (add_streams[i]->apply_seamless_boot_optimization && i != 0) {
			struct dc_stream_state *temp = add_streams[0];

			add_streams[0] = add_streams[i];
			add_streams[i] = temp;
			break;
		}
	}

	/* Add new streams and then add all planes for the new stream */
	for (i = 0; i < add_streams_count; i++) {
		calculate_phy_pix_clks(add_streams[i]);
		res = dc_add_stream_to_ctx(dc, context, add_streams[i]);
		if (res != DC_OK)
			goto fail;

		if (!add_all_planes_for_stream(dc, add_streams[i], set, set_count, context)) {
			res = DC_FAIL_ATTACH_SURFACES;
			goto fail;
		}
	}

	/* Add all planes for unchanged streams if planes changed */
	for (i = 0; i < unchanged_streams_count; i++) {
		if (planes_changed_for_existing_stream(context,
				unchanged_streams[i],
				set,
				set_count)) {
			if (!add_all_planes_for_stream(dc, unchanged_streams[i], set, set_count, context)) {
				res = DC_FAIL_ATTACH_SURFACES;
				goto fail;
			}
		}
	}

	res = dc_validate_global_state(dc, context, fast_validate);

fail:
	if (res != DC_OK)
		DC_LOG_WARNING("%s:resource validation failed, dc_status:%d\n",
				__func__,
				res);

	return res;
}

/**
 * dc_validate_global_state() - Determine if hardware can support a given state
 *
 * @dc: dc struct for this driver
 * @new_ctx: state to be validated
 * @fast_validate: set to true if only yes/no to support matters
 *
 * Checks hardware resource availability and bandwidth requirement.
 *
 * Return:
 * DC_OK if the result can be programmed. Otherwise, an error code.
 */
enum dc_status dc_validate_global_state(
		struct dc *dc,
@@ -3757,4 +3983,4 @@ bool dc_resource_acquire_secondary_pipe_for_mpc_odm(
	}

	return true;
}
}

@@ -1298,6 +1298,12 @@ enum dc_status dc_validate_plane(struct dc *dc, const struct dc_plane_state *pla

void get_clock_requirements_for_state(struct dc_state *state, struct AsicStateEx *info);

enum dc_status dc_validate_with_context(struct dc *dc,
		const struct dc_validation_set set[],
		int set_count,
		struct dc_state *context,
		bool fast_validate);

bool dc_set_generic_gpio_for_stereo(bool enable,
		struct gpio_service *gpio_service);

@@ -2284,6 +2284,12 @@ void dcn10_enable_timing_synchronization(
		opp = grouped_pipes[i]->stream_res.opp;
		tg = grouped_pipes[i]->stream_res.tg;
		tg->funcs->get_otg_active_size(tg, &width, &height);

		if (!tg->funcs->is_tg_enabled(tg)) {
			DC_SYNC_INFO("Skipping timing sync on disabled OTG\n");
			return;
		}

		if (opp->funcs->opp_program_dpg_dimensions)
			opp->funcs->opp_program_dpg_dimensions(opp, width, 2*(height) + 1);
	}

@@ -357,8 +357,11 @@ void dpp3_set_cursor_attributes(
	int cur_rom_en = 0;

	if (color_format == CURSOR_MODE_COLOR_PRE_MULTIPLIED_ALPHA ||
		color_format == CURSOR_MODE_COLOR_UN_PRE_MULTIPLIED_ALPHA)
		cur_rom_en = 1;
		color_format == CURSOR_MODE_COLOR_UN_PRE_MULTIPLIED_ALPHA) {
		if (cursor_attributes->attribute_flags.bits.ENABLE_CURSOR_DEGAMMA) {
			cur_rom_en = 1;
		}
	}

	REG_UPDATE_3(CURSOR0_CONTROL,
			CUR0_MODE, color_format,

@@ -167,6 +167,13 @@ static void optc32_phantom_crtc_post_enable(struct timing_generator *optc)
	REG_WAIT(OTG_CLOCK_CONTROL, OTG_BUSY, 0, 1, 100000);
}

static void optc32_disable_phantom_otg(struct timing_generator *optc)
{
	struct optc *optc1 = DCN10TG_FROM_TG(optc);

	REG_UPDATE(OTG_CONTROL, OTG_MASTER_EN, 0);
}

static void optc32_set_odm_bypass(struct timing_generator *optc,
		const struct dc_crtc_timing *dc_crtc_timing)
{
@@ -260,6 +267,7 @@ static struct timing_generator_funcs dcn32_tg_funcs = {
		.enable_crtc = optc32_enable_crtc,
		.disable_crtc = optc32_disable_crtc,
		.phantom_crtc_post_enable = optc32_phantom_crtc_post_enable,
		.disable_phantom_crtc = optc32_disable_phantom_otg,
		/* used by enable_timing_synchronization. Not need for FPGA */
		.is_counter_moving = optc1_is_counter_moving,
		.get_position = optc1_get_position,

@ -1719,6 +1719,27 @@ static struct dc_stream_state *dcn32_enable_phantom_stream(struct dc *dc,
|
||||
return phantom_stream;
|
||||
}
|
||||
|
||||
void dcn32_retain_phantom_pipes(struct dc *dc, struct dc_state *context)
|
||||
{
|
||||
int i;
|
||||
struct dc_plane_state *phantom_plane = NULL;
|
||||
struct dc_stream_state *phantom_stream = NULL;
|
||||
|
||||
for (i = 0; i < dc->res_pool->pipe_count; i++) {
|
||||
struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
|
||||
|
||||
if (!pipe->top_pipe && !pipe->prev_odm_pipe &&
|
||||
pipe->plane_state && pipe->stream &&
|
||||
pipe->stream->mall_stream_config.type == SUBVP_PHANTOM) {
|
||||
phantom_plane = pipe->plane_state;
|
||||
phantom_stream = pipe->stream;
|
||||
|
||||
dc_plane_state_retain(phantom_plane);
|
||||
dc_stream_retain(phantom_stream);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// return true if removed piped from ctx, false otherwise
|
||||
bool dcn32_remove_phantom_pipes(struct dc *dc, struct dc_state *context)
|
||||
{
|
||||
@ -2035,6 +2056,7 @@ static struct resource_funcs dcn32_res_pool_funcs = {
|
||||
.update_soc_for_wm_a = dcn30_update_soc_for_wm_a,
|
||||
.add_phantom_pipes = dcn32_add_phantom_pipes,
|
||||
.remove_phantom_pipes = dcn32_remove_phantom_pipes,
|
||||
.retain_phantom_pipes = dcn32_retain_phantom_pipes,
|
||||
};
|
||||
|
||||
static uint32_t read_pipe_fuses(struct dc_context *ctx)
|
||||
|
@ -83,6 +83,9 @@ bool dcn32_release_post_bldn_3dlut(
|
||||
bool dcn32_remove_phantom_pipes(struct dc *dc,
|
||||
struct dc_state *context);
|
||||
|
||||
void dcn32_retain_phantom_pipes(struct dc *dc,
|
||||
struct dc_state *context);
|
||||
|
||||
void dcn32_add_phantom_pipes(struct dc *dc,
|
||||
struct dc_state *context,
|
||||
display_e2e_pipe_params_st *pipes,
|
||||
|
@ -1619,6 +1619,7 @@ static struct resource_funcs dcn321_res_pool_funcs = {
|
||||
.update_soc_for_wm_a = dcn30_update_soc_for_wm_a,
|
||||
.add_phantom_pipes = dcn32_add_phantom_pipes,
|
||||
.remove_phantom_pipes = dcn32_remove_phantom_pipes,
|
||||
.retain_phantom_pipes = dcn32_retain_phantom_pipes,
|
||||
};
|
||||
|
||||
static uint32_t read_pipe_fuses(struct dc_context *ctx)
|
||||
|
@ -234,6 +234,7 @@ struct resource_funcs {
|
||||
unsigned int index);
|
||||
|
||||
bool (*remove_phantom_pipes)(struct dc *dc, struct dc_state *context);
|
||||
void (*retain_phantom_pipes)(struct dc *dc, struct dc_state *context);
|
||||
void (*get_panel_config_defaults)(struct dc_panel_config *panel_config);
|
||||
};
|
||||
|
||||
|
@ -185,6 +185,7 @@ struct timing_generator_funcs {
|
||||
#ifdef CONFIG_DRM_AMD_DC_DCN
|
||||
void (*phantom_crtc_post_enable)(struct timing_generator *tg);
|
||||
#endif
|
||||
void (*disable_phantom_crtc)(struct timing_generator *tg);
|
||||
bool (*immediate_disable_crtc)(struct timing_generator *tg);
|
||||
bool (*is_counter_moving)(struct timing_generator *tg);
|
||||
void (*get_position)(struct timing_generator *tg,
|
||||
|
@ -139,6 +139,8 @@ enum amd_pp_sensors {
|
||||
AMDGPU_PP_SENSOR_MIN_FAN_RPM,
|
||||
AMDGPU_PP_SENSOR_MAX_FAN_RPM,
|
||||
AMDGPU_PP_SENSOR_VCN_POWER_STATE,
|
||||
AMDGPU_PP_SENSOR_PEAK_PSTATE_SCLK,
|
||||
AMDGPU_PP_SENSOR_PEAK_PSTATE_MCLK,
|
||||
};
|
||||
|
||||
enum amd_pp_task {
|
||||
|
@ -89,6 +89,8 @@ struct amdgpu_dpm_thermal {
|
||||
int max_mem_crit_temp;
|
||||
/* memory max emergency(shutdown) temp */
|
||||
int max_mem_emergency_temp;
|
||||
/* SWCTF threshold */
|
||||
int sw_ctf_threshold;
|
||||
/* was last interrupt low to high or high to low */
|
||||
bool high_to_low;
|
||||
/* interrupt source */
|
||||
|
@ -26,6 +26,7 @@
|
||||
#include <linux/gfp.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/firmware.h>
|
||||
#include <linux/reboot.h>
|
||||
#include "amd_shared.h"
|
||||
#include "amd_powerplay.h"
|
||||
#include "power_state.h"
|
||||
@ -91,6 +92,45 @@ static int pp_early_init(void *handle)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void pp_swctf_delayed_work_handler(struct work_struct *work)
|
||||
{
|
||||
struct pp_hwmgr *hwmgr =
|
||||
container_of(work, struct pp_hwmgr, swctf_delayed_work.work);
|
||||
struct amdgpu_device *adev = hwmgr->adev;
|
||||
struct amdgpu_dpm_thermal *range =
|
||||
&adev->pm.dpm.thermal;
|
||||
uint32_t gpu_temperature, size;
|
||||
int ret;
|
||||
|
||||
/*
|
||||
* If the hotspot/edge temperature is confirmed as below SW CTF setting point
|
||||
* after the delay enforced, nothing will be done.
|
||||
* Otherwise, a graceful shutdown will be performed to prevent further damage.
|
||||
*/
|
||||
if (range->sw_ctf_threshold &&
|
||||
hwmgr->hwmgr_func->read_sensor) {
|
||||
ret = hwmgr->hwmgr_func->read_sensor(hwmgr,
|
||||
AMDGPU_PP_SENSOR_HOTSPOT_TEMP,
|
||||
&gpu_temperature,
|
||||
&size);
|
||||
/*
|
||||
* For some legacy ASICs, hotspot temperature retrieving might be not
|
||||
* supported. Check the edge temperature instead then.
|
||||
*/
|
||||
if (ret == -EOPNOTSUPP)
|
||||
ret = hwmgr->hwmgr_func->read_sensor(hwmgr,
|
||||
AMDGPU_PP_SENSOR_EDGE_TEMP,
|
||||
&gpu_temperature,
|
||||
&size);
|
||||
if (!ret && gpu_temperature / 1000 < range->sw_ctf_threshold)
|
||||
return;
|
||||
}
|
||||
|
||||
dev_emerg(adev->dev, "ERROR: GPU over temperature range(SW CTF) detected!\n");
|
||||
dev_emerg(adev->dev, "ERROR: System is going to shutdown due to GPU SW CTF!\n");
|
||||
orderly_poweroff(true);
|
||||
}
|
||||
|
||||
static int pp_sw_init(void *handle)
|
||||
{
|
||||
struct amdgpu_device *adev = handle;
|
||||
@ -101,6 +141,10 @@ static int pp_sw_init(void *handle)
|
||||
|
||||
pr_debug("powerplay sw init %s\n", ret ? "failed" : "successfully");
|
||||
|
||||
if (!ret)
|
||||
INIT_DELAYED_WORK(&hwmgr->swctf_delayed_work,
|
||||
pp_swctf_delayed_work_handler);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
@ -136,6 +180,8 @@ static int pp_hw_fini(void *handle)
|
||||
struct amdgpu_device *adev = handle;
|
||||
struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
|
||||
|
||||
cancel_delayed_work_sync(&hwmgr->swctf_delayed_work);
|
||||
|
||||
hwmgr_hw_fini(hwmgr);
|
||||
|
||||
return 0;
|
||||
@ -222,6 +268,8 @@ static int pp_suspend(void *handle)
|
||||
struct amdgpu_device *adev = handle;
|
||||
struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
|
||||
|
||||
cancel_delayed_work_sync(&hwmgr->swctf_delayed_work);
|
||||
|
||||
return hwmgr_suspend(hwmgr);
|
||||
}
|
||||
|
||||
@ -769,10 +817,16 @@ static int pp_dpm_read_sensor(void *handle, int idx,
|
||||
|
||||
switch (idx) {
|
||||
case AMDGPU_PP_SENSOR_STABLE_PSTATE_SCLK:
|
||||
*((uint32_t *)value) = hwmgr->pstate_sclk;
|
||||
*((uint32_t *)value) = hwmgr->pstate_sclk * 100;
|
||||
return 0;
|
||||
case AMDGPU_PP_SENSOR_STABLE_PSTATE_MCLK:
|
||||
*((uint32_t *)value) = hwmgr->pstate_mclk;
|
||||
*((uint32_t *)value) = hwmgr->pstate_mclk * 100;
|
||||
return 0;
|
||||
case AMDGPU_PP_SENSOR_PEAK_PSTATE_SCLK:
|
||||
*((uint32_t *)value) = hwmgr->pstate_sclk_peak * 100;
|
||||
return 0;
|
||||
case AMDGPU_PP_SENSOR_PEAK_PSTATE_MCLK:
|
||||
*((uint32_t *)value) = hwmgr->pstate_mclk_peak * 100;
|
||||
return 0;
|
||||
case AMDGPU_PP_SENSOR_MIN_FAN_RPM:
|
||||
*((uint32_t *)value) = hwmgr->thermal_controller.fanInfo.ulMinRPM;
|
||||
|
@ -241,7 +241,8 @@ int phm_start_thermal_controller(struct pp_hwmgr *hwmgr)
|
||||
TEMP_RANGE_MAX,
|
||||
TEMP_RANGE_MIN,
|
||||
TEMP_RANGE_MAX,
|
||||
TEMP_RANGE_MAX};
|
||||
TEMP_RANGE_MAX,
|
||||
0};
|
||||
struct amdgpu_device *adev = hwmgr->adev;
|
||||
|
||||
if (!hwmgr->not_vf)
|
||||
@ -265,6 +266,7 @@ int phm_start_thermal_controller(struct pp_hwmgr *hwmgr)
|
||||
adev->pm.dpm.thermal.min_mem_temp = range.mem_min;
|
||||
adev->pm.dpm.thermal.max_mem_crit_temp = range.mem_crit_max;
|
||||
adev->pm.dpm.thermal.max_mem_emergency_temp = range.mem_emergency_max;
|
||||
adev->pm.dpm.thermal.sw_ctf_threshold = range.sw_ctf_threshold;
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
@ -375,6 +375,17 @@ static int smu10_enable_gfx_off(struct pp_hwmgr *hwmgr)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void smu10_populate_umdpstate_clocks(struct pp_hwmgr *hwmgr)
|
||||
{
|
||||
hwmgr->pstate_sclk = SMU10_UMD_PSTATE_GFXCLK;
|
||||
hwmgr->pstate_mclk = SMU10_UMD_PSTATE_FCLK;
|
||||
|
||||
smum_send_msg_to_smc(hwmgr,
|
||||
PPSMC_MSG_GetMaxGfxclkFrequency,
|
||||
&hwmgr->pstate_sclk_peak);
|
||||
hwmgr->pstate_mclk_peak = SMU10_UMD_PSTATE_PEAK_FCLK;
|
||||
}
|
||||
|
||||
static int smu10_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
|
||||
{
|
||||
struct amdgpu_device *adev = hwmgr->adev;
|
||||
@ -398,6 +409,8 @@ static int smu10_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
|
||||
return ret;
|
||||
}
|
||||
|
||||
smu10_populate_umdpstate_clocks(hwmgr);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
@ -574,9 +587,6 @@ static int smu10_hwmgr_backend_init(struct pp_hwmgr *hwmgr)
|
||||
|
||||
hwmgr->platform_descriptor.minimumClocksReductionPercentage = 50;
|
||||
|
||||
hwmgr->pstate_sclk = SMU10_UMD_PSTATE_GFXCLK * 100;
|
||||
hwmgr->pstate_mclk = SMU10_UMD_PSTATE_FCLK * 100;
|
||||
|
||||
/* enable the pp_od_clk_voltage sysfs file */
|
||||
hwmgr->od_enabled = 1;
|
||||
/* disabled fine grain tuning function by default */
|
||||
|
@ -1501,6 +1501,67 @@ static int smu7_populate_edc_leakage_registers(struct pp_hwmgr *hwmgr)
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void smu7_populate_umdpstate_clocks(struct pp_hwmgr *hwmgr)
|
||||
{
|
||||
struct smu7_hwmgr *data = (struct smu7_hwmgr *)(hwmgr->backend);
|
||||
struct smu7_dpm_table *golden_dpm_table = &data->golden_dpm_table;
|
||||
int32_t tmp_sclk, count, percentage;
|
||||
|
||||
if (golden_dpm_table->mclk_table.count == 1) {
|
||||
percentage = 70;
|
||||
hwmgr->pstate_mclk = golden_dpm_table->mclk_table.dpm_levels[0].value;
|
||||
} else {
|
||||
percentage = 100 * golden_dpm_table->sclk_table.dpm_levels[golden_dpm_table->sclk_table.count - 1].value /
|
||||
golden_dpm_table->mclk_table.dpm_levels[golden_dpm_table->mclk_table.count - 1].value;
|
||||
hwmgr->pstate_mclk = golden_dpm_table->mclk_table.dpm_levels[golden_dpm_table->mclk_table.count - 2].value;
|
||||
}
|
||||
|
||||
tmp_sclk = hwmgr->pstate_mclk * percentage / 100;
|
||||
|
||||
if (hwmgr->pp_table_version == PP_TABLE_V0) {
|
||||
struct phm_clock_voltage_dependency_table *vddc_dependency_on_sclk =
|
||||
hwmgr->dyn_state.vddc_dependency_on_sclk;
|
||||
|
||||
for (count = vddc_dependency_on_sclk->count - 1; count >= 0; count--) {
|
||||
if (tmp_sclk >= vddc_dependency_on_sclk->entries[count].clk) {
|
||||
hwmgr->pstate_sclk = vddc_dependency_on_sclk->entries[count].clk;
|
||||
break;
|
||||
}
|
||||
}
|
||||
if (count < 0)
|
||||
hwmgr->pstate_sclk = vddc_dependency_on_sclk->entries[0].clk;
|
||||
|
||||
hwmgr->pstate_sclk_peak =
|
||||
vddc_dependency_on_sclk->entries[vddc_dependency_on_sclk->count - 1].clk;
|
||||
} else if (hwmgr->pp_table_version == PP_TABLE_V1) {
|
||||
struct phm_ppt_v1_information *table_info =
|
||||
(struct phm_ppt_v1_information *)(hwmgr->pptable);
|
||||
struct phm_ppt_v1_clock_voltage_dependency_table *vdd_dep_on_sclk =
|
||||
table_info->vdd_dep_on_sclk;
|
||||
|
||||
for (count = vdd_dep_on_sclk->count - 1; count >= 0; count--) {
|
||||
if (tmp_sclk >= vdd_dep_on_sclk->entries[count].clk) {
|
||||
hwmgr->pstate_sclk = vdd_dep_on_sclk->entries[count].clk;
|
||||
break;
|
||||
}
|
||||
}
|
||||
if (count < 0)
|
||||
hwmgr->pstate_sclk = vdd_dep_on_sclk->entries[0].clk;
|
||||
|
||||
hwmgr->pstate_sclk_peak =
|
||||
vdd_dep_on_sclk->entries[vdd_dep_on_sclk->count - 1].clk;
|
||||
}
|
||||
|
||||
hwmgr->pstate_mclk_peak =
|
||||
golden_dpm_table->mclk_table.dpm_levels[golden_dpm_table->mclk_table.count - 1].value;
|
||||
|
||||
/* make sure the output is in Mhz */
|
||||
hwmgr->pstate_sclk /= 100;
|
||||
hwmgr->pstate_mclk /= 100;
|
||||
hwmgr->pstate_sclk_peak /= 100;
|
||||
hwmgr->pstate_mclk_peak /= 100;
|
||||
}
|
||||
|
||||
static int smu7_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
|
||||
{
|
||||
int tmp_result = 0;
|
||||
@ -1625,6 +1686,8 @@ static int smu7_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
|
||||
PP_ASSERT_WITH_CODE((0 == tmp_result),
|
||||
"pcie performance request failed!", result = tmp_result);
|
||||
|
||||
smu7_populate_umdpstate_clocks(hwmgr);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
@ -3143,15 +3206,12 @@ static int smu7_get_profiling_clk(struct pp_hwmgr *hwmgr, enum amd_dpm_forced_le
|
||||
for (count = hwmgr->dyn_state.vddc_dependency_on_sclk->count-1;
|
||||
count >= 0; count--) {
|
||||
if (tmp_sclk >= hwmgr->dyn_state.vddc_dependency_on_sclk->entries[count].clk) {
|
||||
tmp_sclk = hwmgr->dyn_state.vddc_dependency_on_sclk->entries[count].clk;
|
||||
*sclk_mask = count;
|
||||
break;
|
||||
}
|
||||
}
|
||||
if (count < 0 || level == AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK) {
|
||||
if (count < 0 || level == AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK)
|
||||
*sclk_mask = 0;
|
||||
tmp_sclk = hwmgr->dyn_state.vddc_dependency_on_sclk->entries[0].clk;
|
||||
}
|
||||
|
||||
if (level == AMD_DPM_FORCED_LEVEL_PROFILE_PEAK)
|
||||
*sclk_mask = hwmgr->dyn_state.vddc_dependency_on_sclk->count-1;
|
||||
@ -3161,15 +3221,12 @@ static int smu7_get_profiling_clk(struct pp_hwmgr *hwmgr, enum amd_dpm_forced_le
|
||||
|
||||
for (count = table_info->vdd_dep_on_sclk->count-1; count >= 0; count--) {
|
||||
if (tmp_sclk >= table_info->vdd_dep_on_sclk->entries[count].clk) {
|
||||
tmp_sclk = table_info->vdd_dep_on_sclk->entries[count].clk;
|
||||
*sclk_mask = count;
|
||||
break;
|
||||
}
|
||||
}
|
||||
if (count < 0 || level == AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK) {
|
||||
if (count < 0 || level == AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK)
|
||||
*sclk_mask = 0;
|
||||
tmp_sclk = table_info->vdd_dep_on_sclk->entries[0].clk;
|
||||
}
|
||||
|
||||
if (level == AMD_DPM_FORCED_LEVEL_PROFILE_PEAK)
|
||||
*sclk_mask = table_info->vdd_dep_on_sclk->count - 1;
|
||||
@ -3181,8 +3238,6 @@ static int smu7_get_profiling_clk(struct pp_hwmgr *hwmgr, enum amd_dpm_forced_le
|
||||
*mclk_mask = golden_dpm_table->mclk_table.count - 1;
|
||||
|
||||
*pcie_mask = data->dpm_table.pcie_speed_table.count - 1;
|
||||
hwmgr->pstate_sclk = tmp_sclk;
|
||||
hwmgr->pstate_mclk = tmp_mclk;
|
||||
|
||||
return 0;
|
||||
}
|
||||
@ -3195,9 +3250,6 @@ static int smu7_force_dpm_level(struct pp_hwmgr *hwmgr,
|
||||
uint32_t mclk_mask = 0;
|
||||
uint32_t pcie_mask = 0;
|
||||
|
||||
if (hwmgr->pstate_sclk == 0)
|
||||
smu7_get_profiling_clk(hwmgr, level, &sclk_mask, &mclk_mask, &pcie_mask);
|
||||
|
||||
switch (level) {
|
||||
case AMD_DPM_FORCED_LEVEL_HIGH:
|
||||
ret = smu7_force_dpm_highest(hwmgr);
|
||||
@ -5381,6 +5433,8 @@ static int smu7_get_thermal_temperature_range(struct pp_hwmgr *hwmgr,
|
||||
thermal_data->max = data->thermal_temp_setting.temperature_shutdown *
|
||||
PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
|
||||
|
||||
thermal_data->sw_ctf_threshold = thermal_data->max;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -1016,6 +1016,18 @@ static void smu8_reset_acp_boot_level(struct pp_hwmgr *hwmgr)
|
||||
data->acp_boot_level = 0xff;
|
||||
}
|
||||
|
||||
static void smu8_populate_umdpstate_clocks(struct pp_hwmgr *hwmgr)
|
||||
{
|
||||
struct phm_clock_voltage_dependency_table *table =
|
||||
hwmgr->dyn_state.vddc_dependency_on_sclk;
|
||||
|
||||
hwmgr->pstate_sclk = table->entries[0].clk / 100;
|
||||
hwmgr->pstate_mclk = 0;
|
||||
|
||||
hwmgr->pstate_sclk_peak = table->entries[table->count - 1].clk / 100;
|
||||
hwmgr->pstate_mclk_peak = 0;
|
||||
}
|
||||
|
||||
static int smu8_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
|
||||
{
|
||||
smu8_program_voting_clients(hwmgr);
|
||||
@ -1024,6 +1036,8 @@ static int smu8_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
|
||||
smu8_program_bootup_state(hwmgr);
|
||||
smu8_reset_acp_boot_level(hwmgr);
|
||||
|
||||
smu8_populate_umdpstate_clocks(hwmgr);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
@ -1167,8 +1181,6 @@ static int smu8_phm_unforce_dpm_levels(struct pp_hwmgr *hwmgr)
|
||||
|
||||
data->sclk_dpm.soft_min_clk = table->entries[0].clk;
|
||||
data->sclk_dpm.hard_min_clk = table->entries[0].clk;
|
||||
hwmgr->pstate_sclk = table->entries[0].clk;
|
||||
hwmgr->pstate_mclk = 0;
|
||||
|
||||
level = smu8_get_max_sclk_level(hwmgr) - 1;
|
||||
|
||||
|
@ -603,21 +603,17 @@ int phm_irq_process(struct amdgpu_device *adev,
|
||||
struct amdgpu_irq_src *source,
|
||||
struct amdgpu_iv_entry *entry)
|
||||
{
|
||||
struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle;
|
||||
uint32_t client_id = entry->client_id;
|
||||
uint32_t src_id = entry->src_id;
|
||||
|
||||
if (client_id == AMDGPU_IRQ_CLIENTID_LEGACY) {
|
||||
if (src_id == VISLANDS30_IV_SRCID_CG_TSS_THERMAL_LOW_TO_HIGH) {
|
||||
dev_emerg(adev->dev, "ERROR: GPU over temperature range(SW CTF) detected!\n");
|
||||
/*
|
||||
* SW CTF just occurred.
|
||||
* Try to do a graceful shutdown to prevent further damage.
|
||||
*/
|
||||
dev_emerg(adev->dev, "ERROR: System is going to shutdown due to GPU SW CTF!\n");
|
||||
orderly_poweroff(true);
|
||||
} else if (src_id == VISLANDS30_IV_SRCID_CG_TSS_THERMAL_HIGH_TO_LOW)
|
||||
schedule_delayed_work(&hwmgr->swctf_delayed_work,
|
||||
msecs_to_jiffies(AMDGPU_SWCTF_EXTRA_DELAY));
|
||||
} else if (src_id == VISLANDS30_IV_SRCID_CG_TSS_THERMAL_HIGH_TO_LOW) {
|
||||
dev_emerg(adev->dev, "ERROR: GPU under temperature range detected!\n");
|
||||
else if (src_id == VISLANDS30_IV_SRCID_GPIO_19) {
|
||||
} else if (src_id == VISLANDS30_IV_SRCID_GPIO_19) {
|
||||
dev_emerg(adev->dev, "ERROR: GPU HW Critical Temperature Fault(aka CTF) detected!\n");
|
||||
/*
|
||||
* HW CTF just occurred. Shutdown to prevent further damage.
|
||||
@ -626,15 +622,10 @@ int phm_irq_process(struct amdgpu_device *adev,
|
||||
orderly_poweroff(true);
|
||||
}
|
||||
} else if (client_id == SOC15_IH_CLIENTID_THM) {
|
||||
if (src_id == 0) {
|
||||
dev_emerg(adev->dev, "ERROR: GPU over temperature range(SW CTF) detected!\n");
|
||||
/*
|
||||
* SW CTF just occurred.
|
||||
* Try to do a graceful shutdown to prevent further damage.
|
||||
*/
|
||||
dev_emerg(adev->dev, "ERROR: System is going to shutdown due to GPU SW CTF!\n");
|
||||
orderly_poweroff(true);
|
||||
} else
|
||||
if (src_id == 0)
|
||||
schedule_delayed_work(&hwmgr->swctf_delayed_work,
|
||||
msecs_to_jiffies(AMDGPU_SWCTF_EXTRA_DELAY));
|
||||
else
|
||||
dev_emerg(adev->dev, "ERROR: GPU under temperature range detected!\n");
|
||||
} else if (client_id == SOC15_IH_CLIENTID_ROM_SMUIO) {
|
||||
dev_emerg(adev->dev, "ERROR: GPU HW Critical Temperature Fault(aka CTF) detected!\n");
|
||||
|
@ -3008,6 +3008,30 @@ static int vega10_enable_disable_PCC_limit_feature(struct pp_hwmgr *hwmgr, bool
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void vega10_populate_umdpstate_clocks(struct pp_hwmgr *hwmgr)
|
||||
{
|
||||
struct phm_ppt_v2_information *table_info =
|
||||
(struct phm_ppt_v2_information *)(hwmgr->pptable);
|
||||
|
||||
if (table_info->vdd_dep_on_sclk->count > VEGA10_UMD_PSTATE_GFXCLK_LEVEL &&
|
||||
table_info->vdd_dep_on_mclk->count > VEGA10_UMD_PSTATE_MCLK_LEVEL) {
|
||||
hwmgr->pstate_sclk = table_info->vdd_dep_on_sclk->entries[VEGA10_UMD_PSTATE_GFXCLK_LEVEL].clk;
|
||||
hwmgr->pstate_mclk = table_info->vdd_dep_on_mclk->entries[VEGA10_UMD_PSTATE_MCLK_LEVEL].clk;
|
||||
} else {
|
||||
hwmgr->pstate_sclk = table_info->vdd_dep_on_sclk->entries[0].clk;
|
||||
hwmgr->pstate_mclk = table_info->vdd_dep_on_mclk->entries[0].clk;
|
||||
}
|
||||
|
||||
hwmgr->pstate_sclk_peak = table_info->vdd_dep_on_sclk->entries[table_info->vdd_dep_on_sclk->count - 1].clk;
|
||||
hwmgr->pstate_mclk_peak = table_info->vdd_dep_on_mclk->entries[table_info->vdd_dep_on_mclk->count - 1].clk;
|
||||
|
||||
/* make sure the output is in Mhz */
|
||||
hwmgr->pstate_sclk /= 100;
|
||||
hwmgr->pstate_mclk /= 100;
|
||||
hwmgr->pstate_sclk_peak /= 100;
|
||||
hwmgr->pstate_mclk_peak /= 100;
|
||||
}
|
||||
|
||||
static int vega10_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
|
||||
{
|
||||
struct vega10_hwmgr *data = hwmgr->backend;
|
||||
@ -3082,6 +3106,8 @@ static int vega10_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
|
||||
result = tmp_result);
|
||||
}
|
||||
|
||||
vega10_populate_umdpstate_clocks(hwmgr);
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
@ -4169,8 +4195,6 @@ static int vega10_get_profiling_clk_mask(struct pp_hwmgr *hwmgr, enum amd_dpm_fo
|
||||
*sclk_mask = VEGA10_UMD_PSTATE_GFXCLK_LEVEL;
|
||||
*soc_mask = VEGA10_UMD_PSTATE_SOCCLK_LEVEL;
|
||||
*mclk_mask = VEGA10_UMD_PSTATE_MCLK_LEVEL;
|
||||
hwmgr->pstate_sclk = table_info->vdd_dep_on_sclk->entries[VEGA10_UMD_PSTATE_GFXCLK_LEVEL].clk;
|
||||
hwmgr->pstate_mclk = table_info->vdd_dep_on_mclk->entries[VEGA10_UMD_PSTATE_MCLK_LEVEL].clk;
|
||||
}
|
||||
|
||||
if (level == AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK) {
|
||||
@ -4281,9 +4305,6 @@ static int vega10_dpm_force_dpm_level(struct pp_hwmgr *hwmgr,
|
||||
uint32_t mclk_mask = 0;
|
||||
uint32_t soc_mask = 0;
|
||||
|
||||
if (hwmgr->pstate_sclk == 0)
|
||||
vega10_get_profiling_clk_mask(hwmgr, level, &sclk_mask, &mclk_mask, &soc_mask);
|
||||
|
||||
switch (level) {
|
||||
case AMD_DPM_FORCED_LEVEL_HIGH:
|
||||
ret = vega10_force_dpm_highest(hwmgr);
|
||||
@ -5221,6 +5242,9 @@ static int vega10_get_thermal_temperature_range(struct pp_hwmgr *hwmgr,
|
||||
{
|
||||
struct vega10_hwmgr *data = hwmgr->backend;
|
||||
PPTable_t *pp_table = &(data->smc_state_table.pp_table);
|
||||
struct phm_ppt_v2_information *pp_table_info =
|
||||
(struct phm_ppt_v2_information *)(hwmgr->pptable);
|
||||
struct phm_tdp_table *tdp_table = pp_table_info->tdp_table;
|
||||
|
||||
memcpy(thermal_data, &SMU7ThermalWithDelayPolicy[0], sizeof(struct PP_TemperatureRange));
|
||||
|
||||
@ -5237,6 +5261,13 @@ static int vega10_get_thermal_temperature_range(struct pp_hwmgr *hwmgr,
|
||||
thermal_data->mem_emergency_max = (pp_table->ThbmLimit + CTF_OFFSET_HBM)*
|
||||
PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
|
||||
|
||||
if (tdp_table->usSoftwareShutdownTemp > pp_table->ThotspotLimit &&
|
||||
tdp_table->usSoftwareShutdownTemp < VEGA10_THERMAL_MAXIMUM_ALERT_TEMP)
|
||||
thermal_data->sw_ctf_threshold = tdp_table->usSoftwareShutdownTemp;
|
||||
else
|
||||
thermal_data->sw_ctf_threshold = VEGA10_THERMAL_MAXIMUM_ALERT_TEMP;
|
||||
thermal_data->sw_ctf_threshold *= PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -1026,6 +1026,25 @@ static int vega12_get_all_clock_ranges(struct pp_hwmgr *hwmgr)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void vega12_populate_umdpstate_clocks(struct pp_hwmgr *hwmgr)
|
||||
{
|
||||
struct vega12_hwmgr *data = (struct vega12_hwmgr *)(hwmgr->backend);
|
||||
struct vega12_single_dpm_table *gfx_dpm_table = &(data->dpm_table.gfx_table);
|
||||
struct vega12_single_dpm_table *mem_dpm_table = &(data->dpm_table.mem_table);
|
||||
|
||||
if (gfx_dpm_table->count > VEGA12_UMD_PSTATE_GFXCLK_LEVEL &&
|
||||
mem_dpm_table->count > VEGA12_UMD_PSTATE_MCLK_LEVEL) {
|
||||
hwmgr->pstate_sclk = gfx_dpm_table->dpm_levels[VEGA12_UMD_PSTATE_GFXCLK_LEVEL].value;
|
||||
hwmgr->pstate_mclk = mem_dpm_table->dpm_levels[VEGA12_UMD_PSTATE_MCLK_LEVEL].value;
|
||||
} else {
|
||||
hwmgr->pstate_sclk = gfx_dpm_table->dpm_levels[0].value;
|
||||
hwmgr->pstate_mclk = mem_dpm_table->dpm_levels[0].value;
|
||||
}
|
||||
|
||||
hwmgr->pstate_sclk_peak = gfx_dpm_table->dpm_levels[gfx_dpm_table->count].value;
|
||||
hwmgr->pstate_mclk_peak = mem_dpm_table->dpm_levels[mem_dpm_table->count].value;
|
||||
}
|
||||
|
||||
static int vega12_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
|
||||
{
|
||||
int tmp_result, result = 0;
|
||||
@ -1077,6 +1096,9 @@ static int vega12_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
|
||||
PP_ASSERT_WITH_CODE(!result,
|
||||
"Failed to setup default DPM tables!",
|
||||
return result);
|
||||
|
||||
vega12_populate_umdpstate_clocks(hwmgr);
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
@ -2742,6 +2764,8 @@ static int vega12_notify_cac_buffer_info(struct pp_hwmgr *hwmgr,
|
||||
static int vega12_get_thermal_temperature_range(struct pp_hwmgr *hwmgr,
|
||||
struct PP_TemperatureRange *thermal_data)
|
||||
{
|
||||
struct phm_ppt_v3_information *pptable_information =
|
||||
(struct phm_ppt_v3_information *)hwmgr->pptable;
|
||||
struct vega12_hwmgr *data =
|
||||
(struct vega12_hwmgr *)(hwmgr->backend);
|
||||
PPTable_t *pp_table = &(data->smc_state_table.pp_table);
|
||||
@ -2760,6 +2784,8 @@ static int vega12_get_thermal_temperature_range(struct pp_hwmgr *hwmgr,
|
||||
PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
|
||||
thermal_data->mem_emergency_max = (pp_table->ThbmLimit + CTF_OFFSET_HBM)*
|
||||
PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
|
||||
thermal_data->sw_ctf_threshold = pptable_information->us_software_shutdown_temp *
|
||||
PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -1555,26 +1555,23 @@ static int vega20_set_mclk_od(
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int vega20_populate_umdpstate_clocks(
|
||||
struct pp_hwmgr *hwmgr)
|
||||
static void vega20_populate_umdpstate_clocks(struct pp_hwmgr *hwmgr)
|
||||
{
|
||||
struct vega20_hwmgr *data = (struct vega20_hwmgr *)(hwmgr->backend);
|
||||
struct vega20_single_dpm_table *gfx_table = &(data->dpm_table.gfx_table);
|
||||
struct vega20_single_dpm_table *mem_table = &(data->dpm_table.mem_table);
|
||||
|
||||
hwmgr->pstate_sclk = gfx_table->dpm_levels[0].value;
|
||||
hwmgr->pstate_mclk = mem_table->dpm_levels[0].value;
|
||||
|
||||
if (gfx_table->count > VEGA20_UMD_PSTATE_GFXCLK_LEVEL &&
|
||||
mem_table->count > VEGA20_UMD_PSTATE_MCLK_LEVEL) {
|
||||
hwmgr->pstate_sclk = gfx_table->dpm_levels[VEGA20_UMD_PSTATE_GFXCLK_LEVEL].value;
|
||||
hwmgr->pstate_mclk = mem_table->dpm_levels[VEGA20_UMD_PSTATE_MCLK_LEVEL].value;
|
||||
} else {
|
||||
hwmgr->pstate_sclk = gfx_table->dpm_levels[0].value;
|
||||
hwmgr->pstate_mclk = mem_table->dpm_levels[0].value;
|
||||
}
|
||||
|
||||
hwmgr->pstate_sclk = hwmgr->pstate_sclk * 100;
|
||||
hwmgr->pstate_mclk = hwmgr->pstate_mclk * 100;
|
||||
|
||||
return 0;
|
||||
hwmgr->pstate_sclk_peak = gfx_table->dpm_levels[gfx_table->count - 1].value;
|
||||
hwmgr->pstate_mclk_peak = mem_table->dpm_levels[mem_table->count - 1].value;
|
||||
}
|
||||
|
||||
static int vega20_get_max_sustainable_clock(struct pp_hwmgr *hwmgr,
|
||||
@ -1753,10 +1750,7 @@ static int vega20_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
|
||||
"[EnableDPMTasks] Failed to initialize odn settings!",
|
||||
return result);
|
||||
|
||||
result = vega20_populate_umdpstate_clocks(hwmgr);
|
||||
PP_ASSERT_WITH_CODE(!result,
|
||||
"[EnableDPMTasks] Failed to populate umdpstate clocks!",
|
||||
return result);
|
||||
vega20_populate_umdpstate_clocks(hwmgr);
|
||||
|
||||
result = smum_send_msg_to_smc_with_parameter(hwmgr, PPSMC_MSG_GetPptLimit,
|
||||
POWER_SOURCE_AC << 16, &hwmgr->default_power_limit);
|
||||
@ -4213,6 +4207,8 @@ static int vega20_notify_cac_buffer_info(struct pp_hwmgr *hwmgr,
|
||||
static int vega20_get_thermal_temperature_range(struct pp_hwmgr *hwmgr,
|
||||
struct PP_TemperatureRange *thermal_data)
|
||||
{
|
||||
struct phm_ppt_v3_information *pptable_information =
|
||||
(struct phm_ppt_v3_information *)hwmgr->pptable;
|
||||
struct vega20_hwmgr *data =
|
||||
(struct vega20_hwmgr *)(hwmgr->backend);
|
||||
PPTable_t *pp_table = &(data->smc_state_table.pp_table);
|
||||
@ -4231,6 +4227,8 @@ static int vega20_get_thermal_temperature_range(struct pp_hwmgr *hwmgr,
|
||||
PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
|
||||
thermal_data->mem_emergency_max = (pp_table->ThbmLimit + CTF_OFFSET_HBM)*
|
||||
PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
|
||||
thermal_data->sw_ctf_threshold = pptable_information->us_software_shutdown_temp *
|
||||
PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -809,6 +809,10 @@ struct pp_hwmgr {
|
||||
uint32_t workload_prority[Workload_Policy_Max];
|
||||
uint32_t workload_setting[Workload_Policy_Max];
|
||||
bool gfxoff_state_changed_by_workload;
|
||||
uint32_t pstate_sclk_peak;
|
||||
uint32_t pstate_mclk_peak;
|
||||
|
||||
struct delayed_work swctf_delayed_work;
|
||||
};
|
||||
|
||||
int hwmgr_early_init(struct pp_hwmgr *hwmgr);
|
||||
|
@ -131,6 +131,7 @@ struct PP_TemperatureRange {
|
||||
int mem_min;
|
||||
int mem_crit_max;
|
||||
int mem_emergency_max;
|
||||
int sw_ctf_threshold;
|
||||
};
|
||||
|
||||
struct PP_StateValidationBlock {
|
||||
|
@ -24,6 +24,7 @@
|
||||
|
||||
#include <linux/firmware.h>
|
||||
#include <linux/pci.h>
|
||||
#include <linux/reboot.h>
|
||||
|
||||
#include "amdgpu.h"
|
||||
#include "amdgpu_smu.h"
|
||||
@ -1061,6 +1062,34 @@ static void smu_interrupt_work_fn(struct work_struct *work)
|
||||
smu->ppt_funcs->interrupt_work(smu);
|
||||
}
|
||||
|
||||
static void smu_swctf_delayed_work_handler(struct work_struct *work)
|
||||
{
|
||||
struct smu_context *smu =
|
||||
container_of(work, struct smu_context, swctf_delayed_work.work);
|
||||
struct smu_temperature_range *range =
|
||||
&smu->thermal_range;
|
||||
struct amdgpu_device *adev = smu->adev;
|
||||
uint32_t hotspot_tmp, size;
|
||||
|
||||
/*
|
||||
* If the hotspot temperature is confirmed as below SW CTF setting point
|
||||
* after the delay enforced, nothing will be done.
|
||||
* Otherwise, a graceful shutdown will be performed to prevent further damage.
|
||||
*/
|
||||
if (range->software_shutdown_temp &&
|
||||
smu->ppt_funcs->read_sensor &&
|
||||
!smu->ppt_funcs->read_sensor(smu,
|
||||
AMDGPU_PP_SENSOR_HOTSPOT_TEMP,
|
||||
&hotspot_tmp,
|
||||
&size) &&
|
||||
hotspot_tmp / 1000 < range->software_shutdown_temp)
|
||||
return;
|
||||
|
||||
dev_emerg(adev->dev, "ERROR: GPU over temperature range(SW CTF) detected!\n");
|
||||
dev_emerg(adev->dev, "ERROR: System is going to shutdown due to GPU SW CTF!\n");
|
||||
orderly_poweroff(true);
|
||||
}
|
||||
|
||||
static int smu_sw_init(void *handle)
|
||||
{
|
||||
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
|
||||
@ -1109,6 +1138,9 @@ static int smu_sw_init(void *handle)
|
||||
return ret;
|
||||
}
|
||||
|
||||
INIT_DELAYED_WORK(&smu->swctf_delayed_work,
|
||||
smu_swctf_delayed_work_handler);
|
||||
|
||||
ret = smu_smc_table_sw_init(smu);
|
||||
if (ret) {
|
||||
dev_err(adev->dev, "Failed to sw init smc table!\n");
|
||||
@ -1581,6 +1613,8 @@ static int smu_smc_hw_cleanup(struct smu_context *smu)
|
||||
return ret;
|
||||
}
|
||||
|
||||
cancel_delayed_work_sync(&smu->swctf_delayed_work);
|
||||
|
||||
ret = smu_disable_dpms(smu);
|
||||
if (ret) {
|
||||
dev_err(adev->dev, "Fail to disable dpm features!\n");
|
||||
@ -2520,6 +2554,14 @@ static int smu_read_sensor(void *handle,
|
||||
*((uint32_t *)data) = pstate_table->uclk_pstate.standard * 100;
|
||||
*size = 4;
|
||||
break;
|
||||
case AMDGPU_PP_SENSOR_PEAK_PSTATE_SCLK:
|
||||
*((uint32_t *)data) = pstate_table->gfxclk_pstate.peak * 100;
|
||||
*size = 4;
|
||||
break;
|
||||
case AMDGPU_PP_SENSOR_PEAK_PSTATE_MCLK:
|
||||
*((uint32_t *)data) = pstate_table->uclk_pstate.peak * 100;
|
||||
*size = 4;
|
||||
break;
|
||||
case AMDGPU_PP_SENSOR_ENABLED_SMC_FEATURES_MASK:
|
||||
ret = smu_feature_get_enabled_mask(smu, (uint64_t *)data);
|
||||
*size = 8;
|
||||
|
@ -573,6 +573,8 @@ struct smu_context
|
||||
u32 debug_param_reg;
|
||||
u32 debug_msg_reg;
|
||||
u32 debug_resp_reg;
|
||||
|
||||
struct delayed_work swctf_delayed_work;
|
||||
};
|
||||
|
||||
struct i2c_adapter;
|
||||
|
@ -1438,13 +1438,8 @@ static int smu_v11_0_irq_process(struct amdgpu_device *adev,
|
||||
if (client_id == SOC15_IH_CLIENTID_THM) {
|
||||
switch (src_id) {
|
||||
case THM_11_0__SRCID__THM_DIG_THERM_L2H:
|
||||
dev_emerg(adev->dev, "ERROR: GPU over temperature range(SW CTF) detected!\n");
|
||||
/*
|
||||
* SW CTF just occurred.
|
||||
* Try to do a graceful shutdown to prevent further damage.
|
||||
*/
|
||||
dev_emerg(adev->dev, "ERROR: System is going to shutdown due to GPU SW CTF!\n");
|
||||
orderly_poweroff(true);
|
||||
schedule_delayed_work(&smu->swctf_delayed_work,
|
||||
msecs_to_jiffies(AMDGPU_SWCTF_EXTRA_DELAY));
|
||||
break;
|
||||
case THM_11_0__SRCID__THM_DIG_THERM_H2L:
|
||||
dev_emerg(adev->dev, "ERROR: GPU under temperature range detected\n");
|
||||
|
@ -1386,13 +1386,8 @@ static int smu_v13_0_irq_process(struct amdgpu_device *adev,
|
||||
if (client_id == SOC15_IH_CLIENTID_THM) {
|
||||
switch (src_id) {
|
||||
case THM_11_0__SRCID__THM_DIG_THERM_L2H:
|
||||
dev_emerg(adev->dev, "ERROR: GPU over temperature range(SW CTF) detected!\n");
|
||||
/*
|
||||
* SW CTF just occurred.
|
||||
* Try to do a graceful shutdown to prevent further damage.
|
||||
*/
|
||||
dev_emerg(adev->dev, "ERROR: System is going to shutdown due to GPU SW CTF!\n");
|
||||
orderly_poweroff(true);
|
||||
schedule_delayed_work(&smu->swctf_delayed_work,
|
||||
msecs_to_jiffies(AMDGPU_SWCTF_EXTRA_DELAY));
|
||||
break;
|
||||
case THM_11_0__SRCID__THM_DIG_THERM_H2L:
|
||||
dev_emerg(adev->dev, "ERROR: GPU under temperature range detected\n");
|
||||
|
@ -622,7 +622,13 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
|
||||
int ret;
|
||||
|
||||
if (obj->import_attach) {
|
||||
/* Reset both vm_ops and vm_private_data, so we don't end up with
|
||||
* vm_ops pointing to our implementation if the dma-buf backend
|
||||
* doesn't set those fields.
|
||||
*/
|
||||
vma->vm_private_data = NULL;
|
||||
vma->vm_ops = NULL;
|
||||
|
||||
ret = dma_buf_mmap(obj->dma_buf, vma, 0);
|
||||
|
||||
/* Drop the reference drm_gem_mmap_obj() acquired.*/
|
||||
|
@ -967,7 +967,7 @@ nouveau_connector_get_modes(struct drm_connector *connector)
|
||||
/* Determine display colour depth for everything except LVDS now,
|
||||
* DP requires this before mode_valid() is called.
|
||||
*/
|
||||
if (connector->connector_type != DRM_MODE_CONNECTOR_LVDS && nv_connector->native_mode)
|
||||
if (connector->connector_type != DRM_MODE_CONNECTOR_LVDS)
|
||||
nouveau_connector_detect_depth(connector);
|
||||
|
||||
/* Find the native mode if this is a digital panel, if we didn't
|
||||
|
@ -26,6 +26,8 @@
|
||||
#include "head.h"
|
||||
#include "ior.h"
|
||||
|
||||
#include <drm/display/drm_dp.h>
|
||||
|
||||
#include <subdev/bios.h>
|
||||
#include <subdev/bios/init.h>
|
||||
#include <subdev/gpio.h>
|
||||
@ -474,6 +476,50 @@ nvkm_dp_train(struct nvkm_outp *outp, u32 dataKBps)
|
||||
return ret;
|
||||
}
|
||||
|
||||
/* XXX: This is a big fat hack, and this is just drm_dp_read_dpcd_caps()
|
||||
* converted to work inside nvkm. This is a temporary holdover until we start
|
||||
* passing the drm_dp_aux device through NVKM
|
||||
*/
|
||||
static int
|
||||
nvkm_dp_read_dpcd_caps(struct nvkm_outp *outp)
|
||||
{
|
||||
struct nvkm_i2c_aux *aux = outp->dp.aux;
|
||||
u8 dpcd_ext[DP_RECEIVER_CAP_SIZE];
|
||||
int ret;
|
||||
|
||||
ret = nvkm_rdaux(aux, DPCD_RC00_DPCD_REV, outp->dp.dpcd, DP_RECEIVER_CAP_SIZE);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
/*
|
||||
* Prior to DP1.3 the bit represented by
|
||||
* DP_EXTENDED_RECEIVER_CAP_FIELD_PRESENT was reserved.
|
||||
* If it is set DP_DPCD_REV at 0000h could be at a value less than
|
||||
* the true capability of the panel. The only way to check is to
|
||||
* then compare 0000h and 2200h.
|
||||
*/
|
||||
if (!(outp->dp.dpcd[DP_TRAINING_AUX_RD_INTERVAL] &
|
||||
DP_EXTENDED_RECEIVER_CAP_FIELD_PRESENT))
|
||||
return 0;
|
||||
|
||||
ret = nvkm_rdaux(aux, DP_DP13_DPCD_REV, dpcd_ext, sizeof(dpcd_ext));
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
if (outp->dp.dpcd[DP_DPCD_REV] > dpcd_ext[DP_DPCD_REV]) {
|
||||
OUTP_DBG(outp, "Extended DPCD rev less than base DPCD rev (%d > %d)\n",
|
||||
outp->dp.dpcd[DP_DPCD_REV], dpcd_ext[DP_DPCD_REV]);
|
||||
return 0;
|
||||
}
|
||||
|
||||
if (!memcmp(outp->dp.dpcd, dpcd_ext, sizeof(dpcd_ext)))
|
||||
return 0;
|
||||
|
||||
memcpy(outp->dp.dpcd, dpcd_ext, sizeof(dpcd_ext));
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
void
|
||||
nvkm_dp_disable(struct nvkm_outp *outp, struct nvkm_ior *ior)
|
||||
{
|
||||
@ -630,7 +676,7 @@ nvkm_dp_enable(struct nvkm_outp *outp, bool enable)
|
||||
memset(outp->dp.lttpr, 0x00, sizeof(outp->dp.lttpr));
|
||||
}
|
||||
|
||||
if (!nvkm_rdaux(aux, DPCD_RC00_DPCD_REV, outp->dp.dpcd, sizeof(outp->dp.dpcd))) {
|
||||
if (!nvkm_dp_read_dpcd_caps(outp)) {
|
||||
const u8 rates[] = { 0x1e, 0x14, 0x0a, 0x06, 0 };
|
||||
const u8 *rate;
|
||||
int rate_max;
|
||||
|
@ -123,6 +123,7 @@ void gk104_grctx_generate_r418800(struct gf100_gr *);
|
||||
|
||||
extern const struct gf100_grctx_func gk110_grctx;
|
||||
void gk110_grctx_generate_r419eb0(struct gf100_gr *);
|
||||
void gk110_grctx_generate_r419f78(struct gf100_gr *);
|
||||
|
||||
extern const struct gf100_grctx_func gk110b_grctx;
|
||||
extern const struct gf100_grctx_func gk208_grctx;
|
||||
|
@ -916,7 +916,9 @@ static void
|
||||
gk104_grctx_generate_r419f78(struct gf100_gr *gr)
|
||||
{
|
||||
struct nvkm_device *device = gr->base.engine.subdev.device;
|
||||
nvkm_mask(device, 0x419f78, 0x00000001, 0x00000000);
|
||||
|
||||
/* bit 3 set disables loads in fp helper invocations, we need it enabled */
|
||||
nvkm_mask(device, 0x419f78, 0x00000009, 0x00000000);
|
||||
}
|
||||
|
||||
void
|
||||
|
@ -820,6 +820,15 @@ gk110_grctx_generate_r419eb0(struct gf100_gr *gr)
|
||||
nvkm_mask(device, 0x419eb0, 0x00001000, 0x00001000);
|
||||
}
|
||||
|
||||
void
|
||||
gk110_grctx_generate_r419f78(struct gf100_gr *gr)
|
||||
{
|
||||
struct nvkm_device *device = gr->base.engine.subdev.device;
|
||||
|
||||
/* bit 3 set disables loads in fp helper invocations, we need it enabled */
|
||||
nvkm_mask(device, 0x419f78, 0x00000008, 0x00000000);
|
||||
}
|
||||
|
||||
const struct gf100_grctx_func
|
||||
gk110_grctx = {
|
||||
.main = gf100_grctx_generate_main,
|
||||
@ -852,4 +861,5 @@ gk110_grctx = {
|
||||
.gpc_tpc_nr = gk104_grctx_generate_gpc_tpc_nr,
|
||||
.r418800 = gk104_grctx_generate_r418800,
|
||||
.r419eb0 = gk110_grctx_generate_r419eb0,
|
||||
.r419f78 = gk110_grctx_generate_r419f78,
|
||||
};
|
||||
|
@ -101,4 +101,5 @@ gk110b_grctx = {
|
||||
.gpc_tpc_nr = gk104_grctx_generate_gpc_tpc_nr,
|
||||
.r418800 = gk104_grctx_generate_r418800,
|
||||
.r419eb0 = gk110_grctx_generate_r419eb0,
|
||||
.r419f78 = gk110_grctx_generate_r419f78,
|
||||
};
|
||||
|
@ -566,4 +566,5 @@ gk208_grctx = {
|
||||
.dist_skip_table = gf117_grctx_generate_dist_skip_table,
|
||||
.gpc_tpc_nr = gk104_grctx_generate_gpc_tpc_nr,
|
||||
.r418800 = gk104_grctx_generate_r418800,
|
||||
.r419f78 = gk110_grctx_generate_r419f78,
|
||||
};
|
||||
|
@ -991,4 +991,5 @@ gm107_grctx = {
|
||||
.r406500 = gm107_grctx_generate_r406500,
|
||||
.gpc_tpc_nr = gk104_grctx_generate_gpc_tpc_nr,
|
||||
.r419e00 = gm107_grctx_generate_r419e00,
|
||||
.r419f78 = gk110_grctx_generate_r419f78,
|
||||
};
|
||||
|
@ -836,12 +836,12 @@ static int vop_plane_atomic_check(struct drm_plane *plane,
|
||||
* need align with 2 pixel.
|
||||
*/
|
||||
if (fb->format->is_yuv && ((new_plane_state->src.x1 >> 16) % 2)) {
|
||||
DRM_ERROR("Invalid Source: Yuv format not support odd xpos\n");
|
||||
DRM_DEBUG_KMS("Invalid Source: Yuv format not support odd xpos\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
if (fb->format->is_yuv && new_plane_state->rotation & DRM_MODE_REFLECT_Y) {
|
||||
DRM_ERROR("Invalid Source: Yuv format does not support this rotation\n");
|
||||
DRM_DEBUG_KMS("Invalid Source: Yuv format does not support this rotation\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
@ -849,7 +849,7 @@ static int vop_plane_atomic_check(struct drm_plane *plane,
|
||||
struct vop *vop = to_vop(crtc);
|
||||
|
||||
if (!vop->data->afbc) {
|
||||
DRM_ERROR("vop does not support AFBC\n");
|
||||
DRM_DEBUG_KMS("vop does not support AFBC\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
@ -858,15 +858,16 @@ static int vop_plane_atomic_check(struct drm_plane *plane,
|
||||
return ret;
|
||||
|
||||
if (new_plane_state->src.x1 || new_plane_state->src.y1) {
|
||||
DRM_ERROR("AFBC does not support offset display, xpos=%d, ypos=%d, offset=%d\n",
|
||||
new_plane_state->src.x1,
|
||||
new_plane_state->src.y1, fb->offsets[0]);
|
||||
DRM_DEBUG_KMS("AFBC does not support offset display, " \
|
||||
"xpos=%d, ypos=%d, offset=%d\n",
|
||||
new_plane_state->src.x1, new_plane_state->src.y1,
|
||||
fb->offsets[0]);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
if (new_plane_state->rotation && new_plane_state->rotation != DRM_MODE_ROTATE_0) {
|
||||
DRM_ERROR("No rotation support in AFBC, rotation=%d\n",
|
||||
new_plane_state->rotation);
|
||||
DRM_DEBUG_KMS("No rotation support in AFBC, rotation=%d\n",
|
||||
new_plane_state->rotation);
|
||||
return -EINVAL;
|
||||
}
|
||||
}
|
||||
|
@ -17,12 +17,13 @@
|
||||
enum chips {pfe1100, pfe3000};
|
||||
|
||||
/*
|
||||
* Disable status check for pfe3000 devices, because some devices report
|
||||
* communication error (invalid command) for VOUT_MODE command (0x20)
|
||||
* although correct VOUT_MODE (0x16) is returned: it leads to incorrect
|
||||
* exponent in linear mode.
|
||||
* Disable status check because some devices report communication error
|
||||
* (invalid command) for VOUT_MODE command (0x20) although the correct
|
||||
* VOUT_MODE (0x16) is returned: it leads to incorrect exponent in linear
|
||||
* mode.
|
||||
* This affects both pfe3000 and pfe1100.
|
||||
*/
|
||||
static struct pmbus_platform_data pfe3000_plat_data = {
|
||||
static struct pmbus_platform_data pfe_plat_data = {
|
||||
.flags = PMBUS_SKIP_STATUS_CHECK,
|
||||
};
|
||||
|
||||
@ -94,16 +95,15 @@ static int pfe_pmbus_probe(struct i2c_client *client)
|
||||
int model;
|
||||
|
||||
model = (int)i2c_match_id(pfe_device_id, client)->driver_data;
|
||||
client->dev.platform_data = &pfe_plat_data;
|
||||
|
||||
/*
|
||||
* PFE3000-12-069RA devices may not stay in page 0 during device
|
||||
* probe which leads to probe failure (read status word failed).
|
||||
* So let's set the device to page 0 at the beginning.
|
||||
*/
|
||||
if (model == pfe3000) {
|
||||
client->dev.platform_data = &pfe3000_plat_data;
|
||||
if (model == pfe3000)
|
||||
i2c_smbus_write_byte_data(client, PMBUS_PAGE, 0);
|
||||
}
|
||||
|
||||
return pmbus_do_probe(client, &pfe_driver_info[model]);
|
||||
}
|
||||
|
@ -62,7 +62,6 @@
|
||||
#define AD7192_MODE_STA_MASK BIT(20) /* Status Register transmission Mask */
|
||||
#define AD7192_MODE_CLKSRC(x) (((x) & 0x3) << 18) /* Clock Source Select */
|
||||
#define AD7192_MODE_SINC3 BIT(15) /* SINC3 Filter Select */
|
||||
#define AD7192_MODE_ACX BIT(14) /* AC excitation enable(AD7195 only)*/
|
||||
#define AD7192_MODE_ENPAR BIT(13) /* Parity Enable */
|
||||
#define AD7192_MODE_CLKDIV BIT(12) /* Clock divide by 2 (AD7190/2 only)*/
|
||||
#define AD7192_MODE_SCYCLE BIT(11) /* Single cycle conversion */
|
||||
@ -91,6 +90,7 @@
|
||||
/* Configuration Register Bit Designations (AD7192_REG_CONF) */
|
||||
|
||||
#define AD7192_CONF_CHOP BIT(23) /* CHOP enable */
|
||||
#define AD7192_CONF_ACX BIT(22) /* AC excitation enable(AD7195 only) */
|
||||
#define AD7192_CONF_REFSEL BIT(20) /* REFIN1/REFIN2 Reference Select */
|
||||
#define AD7192_CONF_CHAN(x) ((x) << 8) /* Channel select */
|
||||
#define AD7192_CONF_CHAN_MASK (0x7FF << 8) /* Channel select mask */
|
||||
@ -473,7 +473,7 @@ static ssize_t ad7192_show_ac_excitation(struct device *dev,
|
||||
struct iio_dev *indio_dev = dev_to_iio_dev(dev);
|
||||
struct ad7192_state *st = iio_priv(indio_dev);
|
||||
|
||||
return sysfs_emit(buf, "%d\n", !!(st->mode & AD7192_MODE_ACX));
|
||||
return sysfs_emit(buf, "%d\n", !!(st->conf & AD7192_CONF_ACX));
|
||||
}
|
||||
|
||||
static ssize_t ad7192_show_bridge_switch(struct device *dev,
|
||||
@ -514,13 +514,13 @@ static ssize_t ad7192_set(struct device *dev,
|
||||
|
||||
ad_sd_write_reg(&st->sd, AD7192_REG_GPOCON, 1, st->gpocon);
|
||||
break;
|
||||
case AD7192_REG_MODE:
|
||||
case AD7192_REG_CONF:
|
||||
if (val)
|
||||
st->mode |= AD7192_MODE_ACX;
|
||||
st->conf |= AD7192_CONF_ACX;
|
||||
else
|
||||
st->mode &= ~AD7192_MODE_ACX;
|
||||
st->conf &= ~AD7192_CONF_ACX;
|
||||
|
||||
ad_sd_write_reg(&st->sd, AD7192_REG_MODE, 3, st->mode);
|
||||
ad_sd_write_reg(&st->sd, AD7192_REG_CONF, 3, st->conf);
|
||||
break;
|
||||
default:
|
||||
ret = -EINVAL;
|
||||
@ -580,12 +580,11 @@ static IIO_DEVICE_ATTR(bridge_switch_en, 0644,
|
||||
|
||||
static IIO_DEVICE_ATTR(ac_excitation_en, 0644,
|
||||
ad7192_show_ac_excitation, ad7192_set,
|
||||
AD7192_REG_MODE);
|
||||
AD7192_REG_CONF);
|
||||
|
||||
static struct attribute *ad7192_attributes[] = {
|
||||
&iio_dev_attr_filter_low_pass_3db_frequency_available.dev_attr.attr,
|
||||
&iio_dev_attr_bridge_switch_en.dev_attr.attr,
|
||||
&iio_dev_attr_ac_excitation_en.dev_attr.attr,
|
||||
NULL
|
||||
};
|
||||
|
||||
@ -596,6 +595,7 @@ static const struct attribute_group ad7192_attribute_group = {
|
||||
static struct attribute *ad7195_attributes[] = {
|
||||
&iio_dev_attr_filter_low_pass_3db_frequency_available.dev_attr.attr,
|
||||
&iio_dev_attr_bridge_switch_en.dev_attr.attr,
|
||||
&iio_dev_attr_ac_excitation_en.dev_attr.attr,
|
||||
NULL
|
||||
};
|
||||
|
||||
|
@ -124,6 +124,7 @@ static const struct regmap_config ina2xx_regmap_config = {
|
||||
enum ina2xx_ids { ina219, ina226 };
|
||||
|
||||
struct ina2xx_config {
|
||||
const char *name;
|
||||
u16 config_default;
|
||||
int calibration_value;
|
||||
int shunt_voltage_lsb; /* nV */
|
||||
@ -155,6 +156,7 @@ struct ina2xx_chip_info {
|
||||
|
||||
static const struct ina2xx_config ina2xx_config[] = {
|
||||
[ina219] = {
|
||||
.name = "ina219",
|
||||
.config_default = INA219_CONFIG_DEFAULT,
|
||||
.calibration_value = 4096,
|
||||
.shunt_voltage_lsb = 10000,
|
||||
@ -164,6 +166,7 @@ static const struct ina2xx_config ina2xx_config[] = {
|
||||
.chip_id = ina219,
|
||||
},
|
||||
[ina226] = {
|
||||
.name = "ina226",
|
||||
.config_default = INA226_CONFIG_DEFAULT,
|
||||
.calibration_value = 2048,
|
||||
.shunt_voltage_lsb = 2500,
|
||||
@ -996,7 +999,7 @@ static int ina2xx_probe(struct i2c_client *client,
|
||||
/* Patch the current config register with default. */
|
||||
val = chip->config->config_default;
|
||||
|
||||
if (id->driver_data == ina226) {
|
||||
if (type == ina226) {
|
||||
ina226_set_average(chip, INA226_DEFAULT_AVG, &val);
|
||||
ina226_set_int_time_vbus(chip, INA226_DEFAULT_IT, &val);
|
||||
ina226_set_int_time_vshunt(chip, INA226_DEFAULT_IT, &val);
|
||||
@ -1015,7 +1018,7 @@ static int ina2xx_probe(struct i2c_client *client,
|
||||
}
|
||||
|
||||
indio_dev->modes = INDIO_DIRECT_MODE;
|
||||
if (id->driver_data == ina226) {
|
||||
if (type == ina226) {
|
||||
indio_dev->channels = ina226_channels;
|
||||
indio_dev->num_channels = ARRAY_SIZE(ina226_channels);
|
||||
indio_dev->info = &ina226_info;
|
||||
@ -1024,7 +1027,7 @@ static int ina2xx_probe(struct i2c_client *client,
|
||||
indio_dev->num_channels = ARRAY_SIZE(ina219_channels);
|
||||
indio_dev->info = &ina219_info;
|
||||
}
|
||||
indio_dev->name = id->name;
|
||||
indio_dev->name = id ? id->name : chip->config->name;
|
||||
|
||||
ret = devm_iio_kfifo_buffer_setup(&client->dev, indio_dev,
|
||||
&ina2xx_setup_ops);
|
||||
|
@ -253,7 +253,7 @@ int cros_ec_sensors_core_init(struct platform_device *pdev,
|
||||
platform_set_drvdata(pdev, indio_dev);
|
||||
|
||||
state->ec = ec->ec_dev;
|
||||
state->msg = devm_kzalloc(&pdev->dev,
|
||||
state->msg = devm_kzalloc(&pdev->dev, sizeof(*state->msg) +
|
||||
max((u16)sizeof(struct ec_params_motion_sense),
|
||||
state->ec->max_response), GFP_KERNEL);
|
||||
if (!state->msg)
|
||||
|
@ -344,9 +344,12 @@ static int admv1013_update_quad_filters(struct admv1013_state *st)
|
||||
|
||||
static int admv1013_update_mixer_vgate(struct admv1013_state *st)
|
||||
{
|
||||
unsigned int vcm, mixer_vgate;
|
||||
unsigned int mixer_vgate;
|
||||
int vcm;
|
||||
|
||||
vcm = regulator_get_voltage(st->reg);
|
||||
if (vcm < 0)
|
||||
return vcm;
|
||||
|
||||
if (vcm < 1800000)
|
||||
mixer_vgate = (2389 * vcm / 1000000 + 8100) / 100;
|
||||
|
@ -1916,7 +1916,7 @@ static const struct iio_buffer_setup_ops noop_ring_setup_ops;
|
||||
int __iio_device_register(struct iio_dev *indio_dev, struct module *this_mod)
|
||||
{
|
||||
struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev);
|
||||
struct fwnode_handle *fwnode;
|
||||
struct fwnode_handle *fwnode = NULL;
|
||||
int ret;
|
||||
|
||||
if (!indio_dev->info)
|
||||
@ -1927,7 +1927,8 @@ int __iio_device_register(struct iio_dev *indio_dev, struct module *this_mod)
|
||||
/* If the calling driver did not initialize firmware node, do it here */
|
||||
if (dev_fwnode(&indio_dev->dev))
|
||||
fwnode = dev_fwnode(&indio_dev->dev);
|
||||
else
|
||||
/* The default dummy IIO device has no parent */
|
||||
else if (indio_dev->dev.parent)
|
||||
fwnode = dev_fwnode(indio_dev->dev.parent);
|
||||
device_set_node(&indio_dev->dev, fwnode);
|
||||
|
||||
|
@ -85,6 +85,8 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
|
||||
dma_addr_t mask;
|
||||
int i;
|
||||
|
||||
umem->iova = va = virt;
|
||||
|
||||
if (umem->is_odp) {
|
||||
unsigned int page_size = BIT(to_ib_umem_odp(umem)->page_shift);
|
||||
|
||||
@ -100,7 +102,6 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
|
||||
*/
|
||||
pgsz_bitmap &= GENMASK(BITS_PER_LONG - 1, PAGE_SHIFT);
|
||||
|
||||
umem->iova = va = virt;
|
||||
/* The best result is the smallest page size that results in the minimum
|
||||
* number of required pages. Compute the largest page size that could
|
||||
* work based on VA address bits that don't change.
|
||||
|
@ -12307,6 +12307,7 @@ static void free_cntrs(struct hfi1_devdata *dd)
|
||||
|
||||
if (dd->synth_stats_timer.function)
|
||||
del_timer_sync(&dd->synth_stats_timer);
|
||||
cancel_work_sync(&dd->update_cntr_work);
|
||||
ppd = (struct hfi1_pportdata *)(dd + 1);
|
||||
for (i = 0; i < dd->num_pports; i++, ppd++) {
|
||||
kfree(ppd->cntrs);
|
||||
|
@ -83,6 +83,11 @@ static void bcm_aggregate(struct qcom_icc_bcm *bcm)
|
||||
|
||||
temp = agg_peak[bucket] * bcm->vote_scale;
|
||||
bcm->vote_y[bucket] = bcm_div(temp, bcm->aux_data.unit);
|
||||
|
||||
if (bcm->enable_mask && (bcm->vote_x[bucket] || bcm->vote_y[bucket])) {
|
||||
bcm->vote_x[bucket] = 0;
|
||||
bcm->vote_y[bucket] = bcm->enable_mask;
|
||||
}
|
||||
}
|
||||
|
||||
if (bcm->keepalive && bcm->vote_x[QCOM_ICC_BUCKET_AMC] == 0 &&
|
||||
|
@ -81,6 +81,7 @@ struct qcom_icc_node {
|
||||
* @vote_x: aggregated threshold values, represents sum_bw when @type is bw bcm
|
||||
* @vote_y: aggregated threshold values, represents peak_bw when @type is bw bcm
|
||||
* @vote_scale: scaling factor for vote_x and vote_y
|
||||
* @enable_mask: optional mask to send as vote instead of vote_x/vote_y
|
||||
* @dirty: flag used to indicate whether the bcm needs to be committed
|
||||
* @keepalive: flag used to indicate whether a keepalive is required
|
||||
* @aux_data: auxiliary data used when calculating threshold values and
|
||||
@ -97,6 +98,7 @@ struct qcom_icc_bcm {
|
||||
u64 vote_x[QCOM_ICC_NUM_BUCKETS];
|
||||
u64 vote_y[QCOM_ICC_NUM_BUCKETS];
|
||||
u64 vote_scale;
|
||||
u32 enable_mask;
|
||||
bool dirty;
|
||||
bool keepalive;
|
||||
struct bcm_db aux_data;
|
||||
|
@ -1337,6 +1337,7 @@ static struct qcom_icc_node qns_mem_noc_sf_disp = {
|
||||
|
||||
static struct qcom_icc_bcm bcm_acv = {
|
||||
.name = "ACV",
|
||||
.enable_mask = 0x8,
|
||||
.num_nodes = 1,
|
||||
.nodes = { &ebi },
|
||||
};
|
||||
@ -1349,6 +1350,7 @@ static struct qcom_icc_bcm bcm_ce0 = {
|
||||
|
||||
static struct qcom_icc_bcm bcm_cn0 = {
|
||||
.name = "CN0",
|
||||
.enable_mask = 0x1,
|
||||
.keepalive = true,
|
||||
.num_nodes = 55,
|
||||
.nodes = { &qnm_gemnoc_cnoc, &qnm_gemnoc_pcie,
|
||||
@ -1383,6 +1385,7 @@ static struct qcom_icc_bcm bcm_cn0 = {
|
||||
|
||||
static struct qcom_icc_bcm bcm_co0 = {
|
||||
.name = "CO0",
|
||||
.enable_mask = 0x1,
|
||||
.num_nodes = 2,
|
||||
.nodes = { &qxm_nsp, &qns_nsp_gemnoc },
|
||||
};
|
||||
@ -1403,6 +1406,7 @@ static struct qcom_icc_bcm bcm_mm0 = {
|
||||
|
||||
static struct qcom_icc_bcm bcm_mm1 = {
|
||||
.name = "MM1",
|
||||
.enable_mask = 0x1,
|
||||
.num_nodes = 12,
|
||||
.nodes = { &qnm_camnoc_hf, &qnm_camnoc_icp,
|
||||
&qnm_camnoc_sf, &qnm_mdp,
|
||||
@ -1445,6 +1449,7 @@ static struct qcom_icc_bcm bcm_sh0 = {
|
||||
|
||||
static struct qcom_icc_bcm bcm_sh1 = {
|
||||
.name = "SH1",
|
||||
.enable_mask = 0x1,
|
||||
.num_nodes = 7,
|
||||
.nodes = { &alm_gpu_tcu, &alm_sys_tcu,
|
||||
&qnm_nsp_gemnoc, &qnm_pcie,
|
||||
@ -1461,6 +1466,7 @@ static struct qcom_icc_bcm bcm_sn0 = {
|
||||
|
||||
static struct qcom_icc_bcm bcm_sn1 = {
|
||||
.name = "SN1",
|
||||
.enable_mask = 0x1,
|
||||
.num_nodes = 4,
|
||||
.nodes = { &qhm_gic, &qxm_pimem,
|
||||
&xm_gic, &qns_gemnoc_gc },
|
||||
@ -1492,6 +1498,7 @@ static struct qcom_icc_bcm bcm_sn7 = {
|
||||
|
||||
static struct qcom_icc_bcm bcm_acv_disp = {
|
||||
.name = "ACV",
|
||||
.enable_mask = 0x1,
|
||||
.num_nodes = 1,
|
||||
.nodes = { &ebi_disp },
|
||||
};
|
||||
@ -1510,6 +1517,7 @@ static struct qcom_icc_bcm bcm_mm0_disp = {
|
||||
|
||||
static struct qcom_icc_bcm bcm_mm1_disp = {
|
||||
.name = "MM1",
|
||||
.enable_mask = 0x1,
|
||||
.num_nodes = 3,
|
||||
.nodes = { &qnm_mdp_disp, &qnm_rot_disp,
|
||||
&qns_mem_noc_sf_disp },
|
||||
@ -1523,6 +1531,7 @@ static struct qcom_icc_bcm bcm_sh0_disp = {
|
||||
|
||||
static struct qcom_icc_bcm bcm_sh1_disp = {
|
||||
.name = "SH1",
|
||||
.enable_mask = 0x1,
|
||||
.num_nodes = 1,
|
||||
.nodes = { &qnm_pcie_disp },
|
||||
};
|
||||
|
@@ -247,7 +247,7 @@ extern void dsp_cmx_hardware(struct dsp_conf *conf, struct dsp *dsp);
 extern int dsp_cmx_conf(struct dsp *dsp, u32 conf_id);
 extern void dsp_cmx_receive(struct dsp *dsp, struct sk_buff *skb);
 extern void dsp_cmx_hdlc(struct dsp *dsp, struct sk_buff *skb);
-extern void dsp_cmx_send(void *arg);
+extern void dsp_cmx_send(struct timer_list *arg);
 extern void dsp_cmx_transmit(struct dsp *dsp, struct sk_buff *skb);
 extern int dsp_cmx_del_conf_member(struct dsp *dsp);
 extern int dsp_cmx_del_conf(struct dsp_conf *conf);
@@ -1625,7 +1625,7 @@ static u16 dsp_count; /* last sample count */
 static int dsp_count_valid; /* if we have last sample count */

 void
-dsp_cmx_send(void *arg)
+dsp_cmx_send(struct timer_list *arg)
 {
 	struct dsp_conf *conf;
 	struct dsp_conf_member *member;
@@ -1195,7 +1195,7 @@ static int __init dsp_init(void)
 	}

 	/* set sample timer */
-	timer_setup(&dsp_spl_tl, (void *)dsp_cmx_send, 0);
+	timer_setup(&dsp_spl_tl, dsp_cmx_send, 0);
 	dsp_spl_tl.expires = jiffies + dsp_tics;
 	dsp_spl_jiffies = dsp_spl_tl.expires;
 	add_timer(&dsp_spl_tl);
@@ -195,7 +195,7 @@ static int rts5227_extra_init_hw(struct rtsx_pcr *pcr)
 		}
 	}

-	if (option->force_clkreq_0)
+	if (option->force_clkreq_0 && pcr->aspm_mode == ASPM_MODE_CFG)
 		rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, PETXCFG,
 				FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_LOW);
 	else
@@ -435,17 +435,10 @@ static void rts5228_init_from_cfg(struct rtsx_pcr *pcr)
 			option->ltr_enabled = false;
 		}
 	}
-
-	if (rtsx_check_dev_flag(pcr, ASPM_L1_1_EN | ASPM_L1_2_EN
-				| PM_L1_1_EN | PM_L1_2_EN))
-		option->force_clkreq_0 = false;
-	else
-		option->force_clkreq_0 = true;
 }

 static int rts5228_extra_init_hw(struct rtsx_pcr *pcr)
 {
 	struct rtsx_cr_option *option = &pcr->option;

 	rtsx_pci_write_register(pcr, RTS5228_AUTOLOAD_CFG1,
 			CD_RESUME_EN_MASK, CD_RESUME_EN_MASK);
@@ -476,17 +469,6 @@ static int rts5228_extra_init_hw(struct rtsx_pcr *pcr)
 	else
 		rtsx_pci_write_register(pcr, PETXCFG, 0x30, 0x00);

-	/*
-	 * If u_force_clkreq_0 is enabled, CLKREQ# PIN will be forced
-	 * to drive low, and we forcibly request clock.
-	 */
-	if (option->force_clkreq_0)
-		rtsx_pci_write_register(pcr, PETXCFG,
-			FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_LOW);
-	else
-		rtsx_pci_write_register(pcr, PETXCFG,
-			FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_HIGH);
-
 	rtsx_pci_write_register(pcr, PWD_SUSPEND_EN, 0xFF, 0xFB);

 	if (pcr->rtd3_en) {
@@ -327,12 +327,11 @@ static int rts5249_extra_init_hw(struct rtsx_pcr *pcr)
 		}
 	}

-
 	/*
 	 * If u_force_clkreq_0 is enabled, CLKREQ# PIN will be forced
 	 * to drive low, and we forcibly request clock.
 	 */
-	if (option->force_clkreq_0)
+	if (option->force_clkreq_0 && pcr->aspm_mode == ASPM_MODE_CFG)
 		rtsx_pci_write_register(pcr, PETXCFG,
 			FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_LOW);
 	else
@@ -517,17 +517,10 @@ static void rts5260_init_from_cfg(struct rtsx_pcr *pcr)
 			option->ltr_enabled = false;
 		}
 	}
-
-	if (rtsx_check_dev_flag(pcr, ASPM_L1_1_EN | ASPM_L1_2_EN
-				| PM_L1_1_EN | PM_L1_2_EN))
-		option->force_clkreq_0 = false;
-	else
-		option->force_clkreq_0 = true;
 }

 static int rts5260_extra_init_hw(struct rtsx_pcr *pcr)
 {
 	struct rtsx_cr_option *option = &pcr->option;

 	/* Set mcu_cnt to 7 to ensure data can be sampled properly */
 	rtsx_pci_write_register(pcr, 0xFC03, 0x7F, 0x07);
@@ -546,17 +539,6 @@ static int rts5260_extra_init_hw(struct rtsx_pcr *pcr)

 	rts5260_init_hw(pcr);

-	/*
-	 * If u_force_clkreq_0 is enabled, CLKREQ# PIN will be forced
-	 * to drive low, and we forcibly request clock.
-	 */
-	if (option->force_clkreq_0)
-		rtsx_pci_write_register(pcr, PETXCFG,
-			FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_LOW);
-	else
-		rtsx_pci_write_register(pcr, PETXCFG,
-			FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_HIGH);
-
 	rtsx_pci_write_register(pcr, pcr->reg_pm_ctrl3, 0x10, 0x00);

 	return 0;
@@ -498,17 +498,10 @@ static void rts5261_init_from_cfg(struct rtsx_pcr *pcr)
 			option->ltr_enabled = false;
 		}
 	}
-
-	if (rtsx_check_dev_flag(pcr, ASPM_L1_1_EN | ASPM_L1_2_EN
-				| PM_L1_1_EN | PM_L1_2_EN))
-		option->force_clkreq_0 = false;
-	else
-		option->force_clkreq_0 = true;
 }

 static int rts5261_extra_init_hw(struct rtsx_pcr *pcr)
 {
 	struct rtsx_cr_option *option = &pcr->option;
 	u32 val;

 	rtsx_pci_write_register(pcr, RTS5261_AUTOLOAD_CFG1,
@@ -554,17 +547,6 @@ static int rts5261_extra_init_hw(struct rtsx_pcr *pcr)
 	else
 		rtsx_pci_write_register(pcr, PETXCFG, 0x30, 0x00);

-	/*
-	 * If u_force_clkreq_0 is enabled, CLKREQ# PIN will be forced
-	 * to drive low, and we forcibly request clock.
-	 */
-	if (option->force_clkreq_0)
-		rtsx_pci_write_register(pcr, PETXCFG,
-			FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_LOW);
-	else
-		rtsx_pci_write_register(pcr, PETXCFG,
-			FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_HIGH);
-
 	rtsx_pci_write_register(pcr, PWD_SUSPEND_EN, 0xFF, 0xFB);

 	if (pcr->rtd3_en) {
@@ -1326,8 +1326,11 @@ static int rtsx_pci_init_hw(struct rtsx_pcr *pcr)
 			return err;
 	}

-	if (pcr->aspm_mode == ASPM_MODE_REG)
+	if (pcr->aspm_mode == ASPM_MODE_REG) {
 		rtsx_pci_write_register(pcr, ASPM_FORCE_CTL, 0x30, 0x30);
+		rtsx_pci_write_register(pcr, PETXCFG,
+				FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_HIGH);
+	}

 	/* No CD interrupt if probing driver with card inserted.
 	 * So we need to initialize pcr->card_exist here.
@@ -338,13 +338,7 @@ static void moxart_transfer_pio(struct moxart_host *host)
 				return;
 			}
 			for (len = 0; len < remain && len < host->fifo_width;) {
-				/* SCR data must be read in big endian. */
-				if (data->mrq->cmd->opcode == SD_APP_SEND_SCR)
-					*sgp = ioread32be(host->base +
-							  REG_DATA_WINDOW);
-				else
-					*sgp = ioread32(host->base +
-							REG_DATA_WINDOW);
+				*sgp = ioread32(host->base + REG_DATA_WINDOW);
 				sgp++;
 				len += 4;
 			}
@@ -5839,7 +5839,9 @@ void bond_setup(struct net_device *bond_dev)

 	bond_dev->hw_features = BOND_VLAN_FEATURES |
 				NETIF_F_HW_VLAN_CTAG_RX |
-				NETIF_F_HW_VLAN_CTAG_FILTER;
+				NETIF_F_HW_VLAN_CTAG_FILTER |
+				NETIF_F_HW_VLAN_STAG_RX |
+				NETIF_F_HW_VLAN_STAG_FILTER;

 	bond_dev->hw_features |= NETIF_F_GSO_ENCAP_ALL;
 	bond_dev->features |= bond_dev->hw_features;
@@ -1606,8 +1606,10 @@ static void felix_teardown(struct dsa_switch *ds)
 	struct felix *felix = ocelot_to_felix(ocelot);
 	struct dsa_port *dp;

+	rtnl_lock();
 	if (felix->tag_proto_ops)
 		felix->tag_proto_ops->teardown(ds);
+	rtnl_unlock();

 	dsa_switch_for_each_available_port(dp, ds)
 		ocelot_deinit_port(ocelot, dp->index);
@@ -458,9 +458,9 @@ static void hns3_dbg_fill_content(char *content, u16 len,
 		if (result) {
 			if (item_len < strlen(result[i]))
 				break;
-			strscpy(pos, result[i], strlen(result[i]));
+			memcpy(pos, result[i], strlen(result[i]));
 		} else {
-			strscpy(pos, items[i].name, strlen(items[i].name));
+			memcpy(pos, items[i].name, strlen(items[i].name));
 		}
 		pos += item_len;
 		len -= item_len;
@@ -5854,6 +5854,9 @@ void hns3_external_lb_prepare(struct net_device *ndev, bool if_running)
 	if (!if_running)
 		return;

+	if (test_and_set_bit(HNS3_NIC_STATE_DOWN, &priv->state))
+		return;
+
 	netif_carrier_off(ndev);
 	netif_tx_disable(ndev);

@@ -5882,7 +5885,16 @@ void hns3_external_lb_restore(struct net_device *ndev, bool if_running)
 	if (!if_running)
 		return;

-	hns3_nic_reset_all_ring(priv->ae_handle);
+	if (hns3_nic_resetting(ndev))
+		return;
+
+	if (!test_bit(HNS3_NIC_STATE_DOWN, &priv->state))
+		return;
+
+	if (hns3_nic_reset_all_ring(priv->ae_handle))
+		return;
+
+	clear_bit(HNS3_NIC_STATE_DOWN, &priv->state);

 	for (i = 0; i < priv->vector_num; i++)
 		hns3_vector_enable(&priv->tqp_vector[i]);
@@ -110,9 +110,9 @@ static void hclge_dbg_fill_content(char *content, u16 len,
 		if (result) {
 			if (item_len < strlen(result[i]))
 				break;
-			strscpy(pos, result[i], strlen(result[i]));
+			memcpy(pos, result[i], strlen(result[i]));
 		} else {
-			strscpy(pos, items[i].name, strlen(items[i].name));
+			memcpy(pos, items[i].name, strlen(items[i].name));
 		}
 		pos += item_len;
 		len -= item_len;
@@ -72,6 +72,8 @@ static void hclge_restore_hw_table(struct hclge_dev *hdev);
 static void hclge_sync_promisc_mode(struct hclge_dev *hdev);
 static void hclge_sync_fd_table(struct hclge_dev *hdev);
 static void hclge_update_fec_stats(struct hclge_dev *hdev);
+static int hclge_mac_link_status_wait(struct hclge_dev *hdev, int link_ret,
+				      int wait_cnt);

 static struct hnae3_ae_algo ae_algo;

@@ -7567,6 +7569,8 @@ static void hclge_enable_fd(struct hnae3_handle *handle, bool enable)

 static void hclge_cfg_mac_mode(struct hclge_dev *hdev, bool enable)
 {
+#define HCLGE_LINK_STATUS_WAIT_CNT	3
+
 	struct hclge_desc desc;
 	struct hclge_config_mac_mode_cmd *req =
 		(struct hclge_config_mac_mode_cmd *)desc.data;
@@ -7591,9 +7595,15 @@ static void hclge_cfg_mac_mode(struct hclge_dev *hdev, bool enable)
 	req->txrx_pad_fcs_loop_en = cpu_to_le32(loop_en);

 	ret = hclge_cmd_send(&hdev->hw, &desc, 1);
-	if (ret)
+	if (ret) {
 		dev_err(&hdev->pdev->dev,
 			"mac enable fail, ret =%d.\n", ret);
+		return;
+	}
+
+	if (!enable)
+		hclge_mac_link_status_wait(hdev, HCLGE_LINK_STATUS_DOWN,
+					   HCLGE_LINK_STATUS_WAIT_CNT);
 }

 static int hclge_config_switch_param(struct hclge_dev *hdev, int vfid,
@@ -7656,10 +7666,9 @@ static void hclge_phy_link_status_wait(struct hclge_dev *hdev,
 	} while (++i < HCLGE_PHY_LINK_STATUS_NUM);
 }

-static int hclge_mac_link_status_wait(struct hclge_dev *hdev, int link_ret)
+static int hclge_mac_link_status_wait(struct hclge_dev *hdev, int link_ret,
+				      int wait_cnt)
 {
-#define HCLGE_MAC_LINK_STATUS_NUM  100
-
 	int link_status;
 	int i = 0;
 	int ret;
@@ -7672,13 +7681,15 @@ static int hclge_mac_link_status_wait(struct hclge_dev *hdev, int link_ret)
 			return 0;

 		msleep(HCLGE_LINK_STATUS_MS);
-	} while (++i < HCLGE_MAC_LINK_STATUS_NUM);
+	} while (++i < wait_cnt);
 	return -EBUSY;
 }

 static int hclge_mac_phy_link_status_wait(struct hclge_dev *hdev, bool en,
 					  bool is_phy)
 {
+#define HCLGE_MAC_LINK_STATUS_NUM	100
+
 	int link_ret;

 	link_ret = en ? HCLGE_LINK_STATUS_UP : HCLGE_LINK_STATUS_DOWN;
@@ -7686,7 +7697,8 @@ static int hclge_mac_phy_link_status_wait(struct hclge_dev *hdev, bool en,
 	if (is_phy)
 		hclge_phy_link_status_wait(hdev, link_ret);

-	return hclge_mac_link_status_wait(hdev, link_ret);
+	return hclge_mac_link_status_wait(hdev, link_ret,
+					  HCLGE_MAC_LINK_STATUS_NUM);
 }

 static int hclge_set_app_loopback(struct hclge_dev *hdev, bool en)
@@ -96,6 +96,8 @@ static int pending_scrq(struct ibmvnic_adapter *,
 static union sub_crq *ibmvnic_next_scrq(struct ibmvnic_adapter *,
 					struct ibmvnic_sub_crq_queue *);
 static int ibmvnic_poll(struct napi_struct *napi, int data);
+static int reset_sub_crq_queues(struct ibmvnic_adapter *adapter);
+static inline void reinit_init_done(struct ibmvnic_adapter *adapter);
 static void send_query_map(struct ibmvnic_adapter *adapter);
 static int send_request_map(struct ibmvnic_adapter *, dma_addr_t, u32, u8);
 static int send_request_unmap(struct ibmvnic_adapter *, u8);
@@ -113,6 +115,7 @@ static void ibmvnic_tx_scrq_clean_buffer(struct ibmvnic_adapter *adapter,
 static void free_long_term_buff(struct ibmvnic_adapter *adapter,
 				struct ibmvnic_long_term_buff *ltb);
 static void ibmvnic_disable_irqs(struct ibmvnic_adapter *adapter);
+static void flush_reset_queue(struct ibmvnic_adapter *adapter);

 struct ibmvnic_stat {
 	char name[ETH_GSTRING_LEN];
@@ -1314,8 +1317,8 @@ static const char *adapter_state_to_string(enum vnic_state state)

 static int ibmvnic_login(struct net_device *netdev)
 {
+	unsigned long flags, timeout = msecs_to_jiffies(20000);
 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
-	unsigned long timeout = msecs_to_jiffies(20000);
 	int retry_count = 0;
 	int retries = 10;
 	bool retry;
@@ -1336,11 +1339,9 @@ static int ibmvnic_login(struct net_device *netdev)

 		if (!wait_for_completion_timeout(&adapter->init_done,
 						 timeout)) {
-			netdev_warn(netdev, "Login timed out, retrying...\n");
-			retry = true;
-			adapter->init_done_rc = 0;
-			retry_count++;
-			continue;
+			netdev_warn(netdev, "Login timed out\n");
+			adapter->login_pending = false;
+			goto partial_reset;
 		}

 		if (adapter->init_done_rc == ABORTED) {
@@ -1382,10 +1383,69 @@ static int ibmvnic_login(struct net_device *netdev)
 				    "SCRQ irq initialization failed\n");
 			return rc;
 		}
+		/* Default/timeout error handling, reset and start fresh */
 		} else if (adapter->init_done_rc) {
 			netdev_warn(netdev, "Adapter login failed, init_done_rc = %d\n",
 				    adapter->init_done_rc);
-			return -EIO;
+
+partial_reset:
+			/* adapter login failed, so free any CRQs or sub-CRQs
+			 * and register again before attempting to login again.
+			 * If we don't do this then the VIOS may think that
+			 * we are already logged in and reject any subsequent
+			 * attempts
+			 */
+			netdev_warn(netdev,
+				    "Freeing and re-registering CRQs before attempting to login again\n");
+			retry = true;
+			adapter->init_done_rc = 0;
+			release_sub_crqs(adapter, true);
+			/* Much of this is similar logic as ibmvnic_probe(),
+			 * we are essentially re-initializing communication
+			 * with the server. We really should not run any
+			 * resets/failovers here because this is already a form
+			 * of reset and we do not want parallel resets occurring
+			 */
+			do {
+				reinit_init_done(adapter);
+				/* Clear any failovers we got in the previous
+				 * pass since we are re-initializing the CRQ
+				 */
+				adapter->failover_pending = false;
+				release_crq_queue(adapter);
+				/* If we don't sleep here then we risk an
+				 * unnecessary failover event from the VIOS.
+				 * This is a known VIOS issue caused by a vnic
+				 * device freeing and registering a CRQ too
+				 * quickly.
+				 */
+				msleep(1500);
+				/* Avoid any resets, since we are currently
+				 * resetting.
+				 */
+				spin_lock_irqsave(&adapter->rwi_lock, flags);
+				flush_reset_queue(adapter);
+				spin_unlock_irqrestore(&adapter->rwi_lock,
+						       flags);
+
+				rc = init_crq_queue(adapter);
+				if (rc) {
+					netdev_err(netdev, "login recovery: init CRQ failed %d\n",
+						   rc);
+					return -EIO;
+				}
+
+				rc = ibmvnic_reset_init(adapter, false);
+				if (rc)
+					netdev_err(netdev, "login recovery: Reset init failed %d\n",
+						   rc);
+				/* IBMVNIC_CRQ_INIT will return EAGAIN if it
+				 * fails, since ibmvnic_reset_init will free
+				 * irq's in failure, we won't be able to receive
+				 * new CRQs so we need to keep trying. probe()
+				 * handles this similarly.
+				 */
+			} while (rc == -EAGAIN && retry_count++ < retries);
 		}
 	} while (retry);

@@ -1397,12 +1457,22 @@ static int ibmvnic_login(struct net_device *netdev)

 static void release_login_buffer(struct ibmvnic_adapter *adapter)
 {
+	if (!adapter->login_buf)
+		return;
+
+	dma_unmap_single(&adapter->vdev->dev, adapter->login_buf_token,
+			 adapter->login_buf_sz, DMA_TO_DEVICE);
 	kfree(adapter->login_buf);
 	adapter->login_buf = NULL;
 }

 static void release_login_rsp_buffer(struct ibmvnic_adapter *adapter)
 {
+	if (!adapter->login_rsp_buf)
+		return;
+
+	dma_unmap_single(&adapter->vdev->dev, adapter->login_rsp_buf_token,
+			 adapter->login_rsp_buf_sz, DMA_FROM_DEVICE);
 	kfree(adapter->login_rsp_buf);
 	adapter->login_rsp_buf = NULL;
 }
@@ -4626,11 +4696,14 @@ static int send_login(struct ibmvnic_adapter *adapter)
 	if (rc) {
 		adapter->login_pending = false;
 		netdev_err(adapter->netdev, "Failed to send login, rc=%d\n", rc);
-		goto buf_rsp_map_failed;
+		goto buf_send_failed;
 	}

 	return 0;

+buf_send_failed:
+	dma_unmap_single(dev, rsp_buffer_token, rsp_buffer_size,
+			 DMA_FROM_DEVICE);
 buf_rsp_map_failed:
 	kfree(login_rsp_buffer);
 	adapter->login_rsp_buf = NULL;
@@ -5192,6 +5265,7 @@ static int handle_login_rsp(union ibmvnic_crq *login_rsp_crq,
 	int num_tx_pools;
 	int num_rx_pools;
 	u64 *size_array;
+	u32 rsp_len;
 	int i;

 	/* CHECK: Test/set of login_pending does not need to be atomic
@@ -5203,11 +5277,6 @@ static int handle_login_rsp(union ibmvnic_crq *login_rsp_crq,
 	}
 	adapter->login_pending = false;

-	dma_unmap_single(dev, adapter->login_buf_token, adapter->login_buf_sz,
-			 DMA_TO_DEVICE);
-	dma_unmap_single(dev, adapter->login_rsp_buf_token,
-			 adapter->login_rsp_buf_sz, DMA_FROM_DEVICE);
-
 	/* If the number of queues requested can't be allocated by the
 	 * server, the login response will return with code 1. We will need
 	 * to resend the login buffer with fewer queues requested.
@@ -5243,6 +5312,23 @@ static int handle_login_rsp(union ibmvnic_crq *login_rsp_crq,
 		ibmvnic_reset(adapter, VNIC_RESET_FATAL);
 		return -EIO;
 	}
+
+	rsp_len = be32_to_cpu(login_rsp->len);
+	if (be32_to_cpu(login->login_rsp_len) < rsp_len ||
+	    rsp_len <= be32_to_cpu(login_rsp->off_txsubm_subcrqs) ||
+	    rsp_len <= be32_to_cpu(login_rsp->off_rxadd_subcrqs) ||
+	    rsp_len <= be32_to_cpu(login_rsp->off_rxadd_buff_size) ||
+	    rsp_len <= be32_to_cpu(login_rsp->off_supp_tx_desc)) {
+		/* This can happen if a login request times out and there are
+		 * 2 outstanding login requests sent, the LOGIN_RSP crq
+		 * could have been for the older login request. So we are
+		 * parsing the newer response buffer which may be incomplete
+		 */
+		dev_err(dev, "FATAL: Login rsp offsets/lengths invalid\n");
+		ibmvnic_reset(adapter, VNIC_RESET_FATAL);
+		return -EIO;
+	}
+
 	size_array = (u64 *)((u8 *)(adapter->login_rsp_buf) +
 		be32_to_cpu(adapter->login_rsp_buf->off_rxadd_buff_size));
 	/* variable buffer sizes are not supported, so just read the