Merge 6.1.20 into android14-6.1

Changes in 6.1.20
	fs: prevent out-of-bounds array speculation when closing a file descriptor
	btrfs: fix unnecessary increment of read error stat on write error
	btrfs: fix percent calculation for bg reclaim message
	io_uring/uring_cmd: ensure that device supports IOPOLL
	erofs: fix wrong kunmap when using LZMA on HIGHMEM platforms
	perf inject: Fix --buildid-all not to eat up MMAP2
	fork: allow CLONE_NEWTIME in clone3 flags
	RISC-V: Stop emitting attributes
	x86/CPU/AMD: Disable XSAVES on AMD family 0x17
	drm/amdgpu: fix error checking in amdgpu_read_mm_registers for soc15
	drm/amdgpu: fix error checking in amdgpu_read_mm_registers for soc21
	drm/amdgpu: fix error checking in amdgpu_read_mm_registers for nv
	drm/display: Don't block HDR_OUTPUT_METADATA on unknown EOTF
	drm/connector: print max_requested_bpc in state debugfs
	staging: rtl8723bs: Fix key-store index handling
	staging: rtl8723bs: Pass correct parameters to cfg80211_get_bss()
	ext4: fix cgroup writeback accounting with fs-layer encryption
	ext4: fix RENAME_WHITEOUT handling for inline directories
	ext4: fix another off-by-one fsmap error on 1k block filesystems
	ext4: move where set the MAY_INLINE_DATA flag is set
	ext4: fix WARNING in ext4_update_inline_data
	ext4: zero i_disksize when initializing the bootloader inode
	HID: core: Provide new max_buffer_size attribute to over-ride the default
	HID: uhid: Over-ride the default maximum data buffer value with our own
	nfc: change order inside nfc_se_io error path
	KVM: VMX: Reset eVMCS controls in VP assist page during hardware disabling
	KVM: VMX: Don't bother disabling eVMCS static key on module exit
	KVM: x86: Move guts of kvm_arch_init() to standalone helper
	KVM: VMX: Do _all_ initialization before exposing /dev/kvm to userspace
	fs: dlm: fix log of lowcomms vs midcomms
	fs: dlm: add midcomms init/start functions
	fs: dlm: start midcomms before scand
	fs: dlm: remove send repeat remove handling
	fs: dlm: use packet in dlm_mhandle
	fd: dlm: trace send/recv of dlm message and rcom
	fs: dlm: fix use after free in midcomms commit
	fs: dlm: use WARN_ON_ONCE() instead of WARN_ON()
	fs: dlm: be sure to call dlm_send_queue_flush()
	fs: dlm: fix race setting stop tx flag
	udf: Fix off-by-one error when discarding preallocation
	bus: mhi: ep: Power up/down MHI stack during MHI RESET
	bus: mhi: ep: Change state_lock to mutex
	Input: exc3000 - properly stop timer on shutdown
	ipmi:ssif: Remove rtc_us_timer
	ipmi:ssif: Increase the message retry time
	ipmi:ssif: Add a timer between request retries
	spi: intel: Check number of chip selects after reading the descriptor
	drm/i915: Introduce intel_panel_init_alloc()
	drm/i915: Do panel VBT init early if the VBT declares an explicit panel type
	drm/i915: Populate encoder->devdata for DSI on icl+
	block: Revert "block: Do not reread partition table on exclusively open device"
	block: fix scan partition for exclusively open device again
	riscv: Add header include guards to insn.h
	scsi: core: Remove the /proc/scsi/${proc_name} directory earlier
	ext4: Fix possible corruption when moving a directory
	cifs: improve checking of DFS links over STATUS_OBJECT_NAME_INVALID
	drm/nouveau/kms/nv50: fix nv50_wndw_new_ prototype
	drm/msm: Fix potential invalid ptr free
	drm/msm/a5xx: fix setting of the CP_PREEMPT_ENABLE_LOCAL register
	drm/msm/a5xx: fix highest bank bit for a530
	drm/msm/a5xx: fix the emptyness check in the preempt code
	drm/msm/a5xx: fix context faults during ring switch
	bgmac: fix *initial* chip reset to support BCM5358
	nfc: fdp: add null check of devm_kmalloc_array in fdp_nci_i2c_read_device_properties
	powerpc: dts: t1040rdb: fix compatible string for Rev A boards
	tls: rx: fix return value for async crypto
	drm/msm/dpu: disable features unsupported by QCM2290
	ila: do not generate empty messages in ila_xlat_nl_cmd_get_mapping()
	net: lan966x: Fix port police support using tc-matchall
	selftests: nft_nat: ensuring the listening side is up before starting the client
	netfilter: nft_last: copy content when cloning expression
	netfilter: nft_quota: copy content when cloning expression
	net: tls: fix possible race condition between do_tls_getsockopt_conf() and do_tls_setsockopt_conf()
	net: use indirect calls helpers for sk_exit_memory_pressure()
	perf stat: Fix counting when initial delay configured
	net: lan78xx: fix accessing the LAN7800's internal phy specific registers from the MAC driver
	net: caif: Fix use-after-free in cfusbl_device_notify()
	ice: copy last block omitted in ice_get_module_eeprom()
	bpf, sockmap: Fix an infinite loop error when len is 0 in tcp_bpf_recvmsg_parser()
	drm/msm/dpu: fix len of sc7180 ctl blocks
	drm/msm/dpu: drop DPU_DIM_LAYER from MIXER_MSM8998_MASK
	drm/msm/dpu: fix clocks settings for msm8998 SSPP blocks
	drm/msm/dpu: clear DSPP reservations in rm release
	net: stmmac: add to set device wake up flag when stmmac init phy
	net: phylib: get rid of unnecessary locking
	bnxt_en: Avoid order-5 memory allocation for TPA data
	netfilter: ctnetlink: revert to dumping mark regardless of event type
	netfilter: tproxy: fix deadlock due to missing BH disable
	m68k: mm: Move initrd phys_to_virt handling after paging_init()
	btrfs: fix extent map logging bit not cleared for split maps after dropping range
	bpf, test_run: fix &xdp_frame misplacement for LIVE_FRAMES
	btf: fix resolving BTF_KIND_VAR after ARRAY, STRUCT, UNION, PTR
	net: phy: smsc: fix link up detection in forced irq mode
	net: ethernet: mtk_eth_soc: fix RX data corruption issue
	net: tls: fix device-offloaded sendpage straddling records
	scsi: megaraid_sas: Update max supported LD IDs to 240
	scsi: sd: Fix wrong zone_write_granularity value during revalidate
	netfilter: conntrack: adopt safer max chain length
	platform: mellanox: select REGMAP instead of depending on it
	platform: x86: MLX_PLATFORM: select REGMAP instead of depending on it
	block: fix wrong mode for blkdev_put() from disk_scan_partitions()
	NFSD: Protect against filesystem freezing
	ice: Fix DSCP PFC TLV creation
	ethernet: ice: avoid gcc-9 integer overflow warning
	net/smc: fix fallback failed while sendmsg with fastopen
	octeontx2-af: Unlock contexts in the queue context cache in case of fault detection
	SUNRPC: Fix a server shutdown leak
	net: dsa: mt7530: permit port 5 to work without port 6 on MT7621 SoC
	af_unix: fix struct pid leaks in OOB support
	erofs: Revert "erofs: fix kvcalloc() misuse with __GFP_NOFAIL"
	riscv: Use READ_ONCE_NOCHECK in imprecise unwinding stack mode
	RISC-V: Don't check text_mutex during stop_machine
	drm/amdgpu: fix return value check in kfd
	ext4: Fix deadlock during directory rename
	drm/amdgpu/soc21: don't expose AV1 if VCN0 is harvested
	drm/amdgpu/soc21: Add video cap query support for VCN_4_0_4
	adreno: Shutdown the GPU properly
	drm/msm/adreno: fix runtime PM imbalance at unbind
	watch_queue: fix IOC_WATCH_QUEUE_SET_SIZE alloc error paths
	tpm/eventlog: Don't abort tpm_read_log on faulty ACPI address
	MIPS: Fix a compilation issue
	powerpc/64: Don't recurse irq replay
	powerpc/iommu: fix memory leak with using debugfs_lookup()
	clk: renesas: rcar-gen3: Disable R-Car H3 ES1.*
	powerpc/bpf/32: Only set a stack frame when necessary
	powerpc/64: Fix task_cpu in early boot when booting non-zero cpuid
	powerpc/64: Move paca allocation to early_setup()
	powerpc/kcsan: Exclude udelay to prevent recursive instrumentation
	alpha: fix R_ALPHA_LITERAL reloc for large modules
	macintosh: windfarm: Use unsigned type for 1-bit bitfields
	PCI: Add SolidRun vendor ID
	scripts: handle BrokenPipeError for python scripts
	media: ov5640: Fix analogue gain control
	media: rc: gpio-ir-recv: add remove function
	drm/amd/display: Allow subvp on vactive pipes that are 2560x1440@60
	drm/amd/display: adjust MALL size available for DCN32 and DCN321
	filelocks: use mount idmapping for setlease permission check
	Revert "bpf, test_run: fix &xdp_frame misplacement for LIVE_FRAMES"
	UML: define RUNTIME_DISCARD_EXIT
	Linux 6.1.20

Change-Id: I2f92629ce02bc07295fea17b16f9bb567916a285
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Committed by Greg Kroah-Hartman on 2023-03-22 19:57:05 +00:00 (commit a22c3a8790).
174 changed files with 1718 additions and 863 deletions.


@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 6
PATCHLEVEL = 1
SUBLEVEL = 19
SUBLEVEL = 20
EXTRAVERSION =
NAME = Hurr durr I'ma ninja sloth


@ -146,10 +146,8 @@ apply_relocate_add(Elf64_Shdr *sechdrs, const char *strtab,
base = (void *)sechdrs[sechdrs[relsec].sh_info].sh_addr;
symtab = (Elf64_Sym *)sechdrs[symindex].sh_addr;
/* The small sections were sorted to the end of the segment.
The following should definitely cover them. */
gp = (u64)me->core_layout.base + me->core_layout.size - 0x8000;
got = sechdrs[me->arch.gotsecindex].sh_addr;
gp = got + 0x8000;
for (i = 0; i < n; i++) {
unsigned long r_sym = ELF64_R_SYM (rela[i].r_info);


@ -326,16 +326,16 @@ void __init setup_arch(char **cmdline_p)
panic("No configuration setup");
}
#ifdef CONFIG_BLK_DEV_INITRD
if (m68k_ramdisk.size) {
if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && m68k_ramdisk.size)
memblock_reserve(m68k_ramdisk.addr, m68k_ramdisk.size);
paging_init();
if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && m68k_ramdisk.size) {
initrd_start = (unsigned long)phys_to_virt(m68k_ramdisk.addr);
initrd_end = initrd_start + m68k_ramdisk.size;
pr_info("initrd: %08lx - %08lx\n", initrd_start, initrd_end);
}
#endif
paging_init();
#ifdef CONFIG_NATFEAT
nf_init();

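The m68k fix is pure ordering: memblock_reserve() takes physical addresses and is safe early, but phys_to_virt() is only meaningful once paging_init() has established the kernel mapping, so the initrd pointer setup moves after it. An illustrative userspace sketch of that constraint, assuming a simple linear phys-to-virt offset (not the m68k code itself):

#include <assert.h>
#include <stdio.h>

#define PAGE_OFFSET 0x1000UL          /* assumed linear-mapping offset */

static int paging_ready;

static void paging_init_sketch(void)
{
	paging_ready = 1;             /* kernel mapping now exists */
}

static unsigned long phys_to_virt_sketch(unsigned long phys)
{
	assert(paging_ready);         /* converting earlier is the bug */
	return phys + PAGE_OFFSET;
}

int main(void)
{
	unsigned long initrd_phys = 0x200000UL;

	/* Reserving by physical address is fine before paging_init(). */
	paging_init_sketch();
	printf("initrd virt: %#lx\n", phys_to_virt_sketch(initrd_phys));
	return 0;
}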

@ -374,7 +374,7 @@ struct pci_msu {
PCI_CFG04_STAT_SSE | \
PCI_CFG04_STAT_PE)
#define KORINA_CNFG1 ((KORINA_STAT<<16)|KORINA_CMD)
#define KORINA_CNFG1 (KORINA_STAT | KORINA_CMD)
#define KORINA_REVID 0
#define KORINA_CLASS_CODE 0


@ -10,7 +10,6 @@
/ {
model = "fsl,T1040RDB-REV-A";
compatible = "fsl,T1040RDB-REV-A";
};
&seville_port0 {


@ -36,15 +36,17 @@
#define PACA_IRQ_DEC 0x08 /* Or FIT */
#define PACA_IRQ_HMI 0x10
#define PACA_IRQ_PMI 0x20
#define PACA_IRQ_REPLAYING 0x40
/*
* Some soft-masked interrupts must be hard masked until they are replayed
* (e.g., because the soft-masked handler does not clear the exception).
* Interrupt replay itself must remain hard masked too.
*/
#ifdef CONFIG_PPC_BOOK3S
#define PACA_IRQ_MUST_HARD_MASK (PACA_IRQ_EE|PACA_IRQ_PMI)
#define PACA_IRQ_MUST_HARD_MASK (PACA_IRQ_EE|PACA_IRQ_PMI|PACA_IRQ_REPLAYING)
#else
#define PACA_IRQ_MUST_HARD_MASK (PACA_IRQ_EE)
#define PACA_IRQ_MUST_HARD_MASK (PACA_IRQ_EE|PACA_IRQ_REPLAYING)
#endif
#endif /* CONFIG_PPC64 */


@ -295,7 +295,6 @@ extern void free_unused_pacas(void);
#else /* CONFIG_PPC64 */
static inline void allocate_paca_ptrs(void) { }
static inline void allocate_paca(int cpu) { }
static inline void free_unused_pacas(void) { }


@ -26,6 +26,7 @@
#include <asm/percpu.h>
extern int boot_cpuid;
extern int boot_cpu_hwid; /* PPC64 only */
extern int spinning_secondaries;
extern u32 *cpu_to_phys_id;
extern bool coregroup_enabled;


@ -67,11 +67,9 @@ static void iommu_debugfs_add(struct iommu_table *tbl)
static void iommu_debugfs_del(struct iommu_table *tbl)
{
char name[10];
struct dentry *liobn_entry;
sprintf(name, "%08lx", tbl->it_index);
liobn_entry = debugfs_lookup(name, iommu_debugfs_dir);
debugfs_remove(liobn_entry);
debugfs_lookup_and_remove(name, iommu_debugfs_dir);
}
#else
static void iommu_debugfs_add(struct iommu_table *tbl){}


@ -70,22 +70,19 @@ int distribute_irqs = 1;
static inline void next_interrupt(struct pt_regs *regs)
{
/*
* Softirq processing can enable/disable irqs, which will leave
* MSR[EE] enabled and the soft mask set to IRQS_DISABLED. Fix
* this up.
*/
if (!(local_paca->irq_happened & PACA_IRQ_HARD_DIS))
hard_irq_disable();
else
irq_soft_mask_set(IRQS_ALL_DISABLED);
if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) {
WARN_ON(!(local_paca->irq_happened & PACA_IRQ_HARD_DIS));
WARN_ON(irq_soft_mask_return() != IRQS_ALL_DISABLED);
}
/*
* We are responding to the next interrupt, so interrupt-off
* latencies should be reset here.
*/
lockdep_hardirq_exit();
trace_hardirqs_on();
trace_hardirqs_off();
lockdep_hardirq_enter();
}
static inline bool irq_happened_test_and_clear(u8 irq)
@ -97,22 +94,11 @@ static inline bool irq_happened_test_and_clear(u8 irq)
return false;
}
void replay_soft_interrupts(void)
static void __replay_soft_interrupts(void)
{
struct pt_regs regs;
/*
* Be careful here, calling these interrupt handlers can cause
* softirqs to be raised, which they may run when calling irq_exit,
* which will cause local_irq_enable() to be run, which can then
* recurse into this function. Don't keep any state across
* interrupt handler calls which may change underneath us.
*
* Softirqs can not be disabled over replay to stop this recursion
* because interrupts taken in idle code may require RCU softirq
* to run in the irq RCU tracking context. This is a hard problem
* to fix without changes to the softirq or idle layer.
*
* We use local_paca rather than get_paca() to avoid all the
* debug_smp_processor_id() business in this low level function.
*/
@ -120,13 +106,20 @@ void replay_soft_interrupts(void)
if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) {
WARN_ON_ONCE(mfmsr() & MSR_EE);
WARN_ON(!(local_paca->irq_happened & PACA_IRQ_HARD_DIS));
WARN_ON(local_paca->irq_happened & PACA_IRQ_REPLAYING);
}
/*
* PACA_IRQ_REPLAYING prevents interrupt handlers from enabling
* MSR[EE] to get PMIs, which can result in more IRQs becoming
* pending.
*/
local_paca->irq_happened |= PACA_IRQ_REPLAYING;
ppc_save_regs(&regs);
regs.softe = IRQS_ENABLED;
regs.msr |= MSR_EE;
again:
/*
* Force the delivery of pending soft-disabled interrupts on PS3.
* Any HV call will have this side effect.
@ -175,13 +168,14 @@ void replay_soft_interrupts(void)
next_interrupt(&regs);
}
/*
* Softirq processing can enable and disable interrupts, which can
* result in new irqs becoming pending. Must keep looping until we
* have cleared out all pending interrupts.
*/
if (local_paca->irq_happened & ~PACA_IRQ_HARD_DIS)
goto again;
local_paca->irq_happened &= ~PACA_IRQ_REPLAYING;
}
void replay_soft_interrupts(void)
{
irq_enter(); /* See comment in arch_local_irq_restore */
__replay_soft_interrupts();
irq_exit();
}
#if defined(CONFIG_PPC_BOOK3S_64) && defined(CONFIG_PPC_KUAP)
@ -200,13 +194,13 @@ static inline void replay_soft_interrupts_irqrestore(void)
if (kuap_state != AMR_KUAP_BLOCKED)
set_kuap(AMR_KUAP_BLOCKED);
replay_soft_interrupts();
__replay_soft_interrupts();
if (kuap_state != AMR_KUAP_BLOCKED)
set_kuap(kuap_state);
}
#else
#define replay_soft_interrupts_irqrestore() replay_soft_interrupts()
#define replay_soft_interrupts_irqrestore() __replay_soft_interrupts()
#endif
notrace void arch_local_irq_restore(unsigned long mask)
@ -219,9 +213,13 @@ notrace void arch_local_irq_restore(unsigned long mask)
return;
}
if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
WARN_ON_ONCE(in_nmi() || in_hardirq());
if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) {
WARN_ON_ONCE(in_nmi());
WARN_ON_ONCE(in_hardirq());
WARN_ON_ONCE(local_paca->irq_happened & PACA_IRQ_REPLAYING);
}
again:
/*
* After the stb, interrupts are unmasked and there are no interrupts
* pending replay. The restart sequence makes this atomic with
@ -248,6 +246,12 @@ notrace void arch_local_irq_restore(unsigned long mask)
if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
WARN_ON_ONCE(!(mfmsr() & MSR_EE));
/*
* If we came here from the replay below, we might have a preempt
* pending (due to preempt_enable_no_resched()). Have to check now.
*/
preempt_check_resched();
return;
happened:
@ -261,6 +265,7 @@ notrace void arch_local_irq_restore(unsigned long mask)
irq_soft_mask_set(IRQS_ENABLED);
local_paca->irq_happened = 0;
__hard_irq_enable();
preempt_check_resched();
return;
}
@ -296,12 +301,38 @@ notrace void arch_local_irq_restore(unsigned long mask)
irq_soft_mask_set(IRQS_ALL_DISABLED);
trace_hardirqs_off();
/*
* Now enter interrupt context. The interrupt handlers themselves
* also call irq_enter/exit (which is okay, they can nest). But call
* it here now to hold off softirqs until the below irq_exit(). If
* we allowed replayed handlers to run softirqs, that enables irqs,
* which must replay interrupts, which recurses in here and makes
* things more complicated. The recursion is limited to 2, and it can
* be made to work, but it's complicated.
*
* local_bh_disable can not be used here because interrupts taken in
* idle are not in the right context (RCU, tick, etc) to run softirqs
* so irq_enter must be called.
*/
irq_enter();
replay_soft_interrupts_irqrestore();
irq_exit();
if (unlikely(local_paca->irq_happened != PACA_IRQ_HARD_DIS)) {
/*
* The softirq processing in irq_exit() may enable interrupts
* temporarily, which can result in MSR[EE] being enabled and
* more irqs becoming pending. Go around again if that happens.
*/
trace_hardirqs_on();
preempt_enable_no_resched();
goto again;
}
trace_hardirqs_on();
irq_soft_mask_set(IRQS_ENABLED);
if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
WARN_ON(local_paca->irq_happened != PACA_IRQ_HARD_DIS);
local_paca->irq_happened = 0;
__hard_irq_enable();
preempt_enable();

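The control flow above boils down to "keep replaying until nothing new became pending while the handlers ran", with a preemption check on every exit path. A minimal userspace sketch of that loop-until-quiescent shape, using an atomic pending flag (illustrative only, not the powerpc code):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int pending;

static void handle_one(void)
{
	/* Handling may queue more work, just as softirq processing can
	 * leave new interrupts pending in the code above. */
}

static void replay_until_quiescent(void)
{
	/* atomic_exchange() consumes the flag and re-checks it on every
	 * pass, mirroring the 'goto again' in arch_local_irq_restore(). */
	while (atomic_exchange(&pending, 0))
		handle_one();
}

int main(void)
{
	atomic_store(&pending, 1);
	replay_until_quiescent();
	puts("quiescent");
	return 0;
}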

@ -366,8 +366,8 @@ static int __init early_init_dt_scan_cpus(unsigned long node,
be32_to_cpu(intserv[found_thread]));
boot_cpuid = found;
// Pass the boot CPU's hard CPU id back to our caller
*((u32 *)data) = be32_to_cpu(intserv[found_thread]);
if (IS_ENABLED(CONFIG_PPC64))
boot_cpu_hwid = be32_to_cpu(intserv[found_thread]);
/*
* PAPR defines "logical" PVR values for cpus that
@ -751,7 +751,6 @@ static inline void save_fscr_to_task(void) {}
void __init early_init_devtree(void *params)
{
u32 boot_cpu_hwid;
phys_addr_t limit;
DBG(" -> early_init_devtree(%px)\n", params);
@ -847,7 +846,7 @@ void __init early_init_devtree(void *params)
/* Retrieve CPU related informations from the flat tree
* (altivec support, boot CPU ID, ...)
*/
of_scan_flat_dt(early_init_dt_scan_cpus, &boot_cpu_hwid);
of_scan_flat_dt(early_init_dt_scan_cpus, NULL);
if (boot_cpuid < 0) {
printk("Failed to identify boot CPU !\n");
BUG();
@ -864,11 +863,6 @@ void __init early_init_devtree(void *params)
mmu_early_init_devtree();
// NB. paca is not installed until later in early_setup()
allocate_paca_ptrs();
allocate_paca(boot_cpuid);
set_hard_smp_processor_id(boot_cpuid, boot_cpu_hwid);
#ifdef CONFIG_PPC_POWERNV
/* Scan and build the list of machine check recoverable ranges */
of_scan_flat_dt(early_init_dt_scan_recoverable_ranges, NULL);


@ -86,6 +86,10 @@ EXPORT_SYMBOL(machine_id);
int boot_cpuid = -1;
EXPORT_SYMBOL_GPL(boot_cpuid);
#ifdef CONFIG_PPC64
int boot_cpu_hwid = -1;
#endif
/*
* These are used in binfmt_elf.c to put aux entries on the stack
* for each elf executable being started.


@ -385,17 +385,21 @@ void __init early_setup(unsigned long dt_ptr)
/*
* Do early initialization using the flattened device
* tree, such as retrieving the physical memory map or
* calculating/retrieving the hash table size.
* calculating/retrieving the hash table size, discover
* boot_cpuid and boot_cpu_hwid.
*/
early_init_devtree(__va(dt_ptr));
/* Now we know the logical id of our boot cpu, setup the paca. */
if (boot_cpuid != 0) {
/* Poison paca_ptrs[0] again if it's not the boot cpu */
memset(&paca_ptrs[0], 0x88, sizeof(paca_ptrs[0]));
}
allocate_paca_ptrs();
allocate_paca(boot_cpuid);
set_hard_smp_processor_id(boot_cpuid, boot_cpu_hwid);
fixup_boot_paca(paca_ptrs[boot_cpuid]);
setup_paca(paca_ptrs[boot_cpuid]); /* install the paca into registers */
// smp_processor_id() now reports boot_cpuid
#ifdef CONFIG_SMP
task_thread_info(current)->cpu = boot_cpuid; // fix task_cpu(current)
#endif
/*
* Configure exception handlers. This include setting up trampolines


@ -374,7 +374,7 @@ void vtime_flush(struct task_struct *tsk)
#define calc_cputime_factors()
#endif
void __delay(unsigned long loops)
void __no_kcsan __delay(unsigned long loops)
{
unsigned long start;
@ -395,7 +395,7 @@ void __delay(unsigned long loops)
}
EXPORT_SYMBOL(__delay);
void udelay(unsigned long usecs)
void __no_kcsan udelay(unsigned long usecs)
{
__delay(tb_ticks_per_usec * usecs);
}


@ -79,6 +79,20 @@ static int bpf_jit_stack_offsetof(struct codegen_context *ctx, int reg)
#define SEEN_NVREG_FULL_MASK 0x0003ffff /* Non volatile registers r14-r31 */
#define SEEN_NVREG_TEMP_MASK 0x00001e01 /* BPF_REG_5, BPF_REG_AX, TMP_REG */
static inline bool bpf_has_stack_frame(struct codegen_context *ctx)
{
/*
* We only need a stack frame if:
* - we call other functions (kernel helpers), or
* - we use non volatile registers, or
* - we use tail call counter
* - the bpf program uses its stack area
* The latter condition is deduced from the usage of BPF_REG_FP
*/
return ctx->seen & (SEEN_FUNC | SEEN_TAILCALL | SEEN_NVREG_FULL_MASK) ||
bpf_is_seen_register(ctx, bpf_to_ppc(BPF_REG_FP));
}
void bpf_jit_realloc_regs(struct codegen_context *ctx)
{
unsigned int nvreg_mask;
@ -118,7 +132,8 @@ void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx)
#define BPF_TAILCALL_PROLOGUE_SIZE 4
EMIT(PPC_RAW_STWU(_R1, _R1, -BPF_PPC_STACKFRAME(ctx)));
if (bpf_has_stack_frame(ctx))
EMIT(PPC_RAW_STWU(_R1, _R1, -BPF_PPC_STACKFRAME(ctx)));
if (ctx->seen & SEEN_TAILCALL)
EMIT(PPC_RAW_STW(_R4, _R1, bpf_jit_stack_offsetof(ctx, BPF_PPC_TC)));
@ -171,7 +186,8 @@ static void bpf_jit_emit_common_epilogue(u32 *image, struct codegen_context *ctx
EMIT(PPC_RAW_LWZ(_R0, _R1, BPF_PPC_STACKFRAME(ctx) + PPC_LR_STKOFF));
/* Tear down our stack frame */
EMIT(PPC_RAW_ADDI(_R1, _R1, BPF_PPC_STACKFRAME(ctx)));
if (bpf_has_stack_frame(ctx))
EMIT(PPC_RAW_ADDI(_R1, _R1, BPF_PPC_STACKFRAME(ctx)));
if (ctx->seen & SEEN_FUNC)
EMIT(PPC_RAW_MTLR(_R0));

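bpf_has_stack_frame() now gates both the prologue STWU and the epilogue ADDI, so leaf programs that touch no non-volatile state pay for no frame at all. A compact sketch of that "seen flags decide what gets emitted" idea, with invented flag names and text standing in for machine code:

#include <stdbool.h>
#include <stdio.h>

#define SEEN_FUNC     0x1             /* calls kernel helpers */
#define SEEN_TAILCALL 0x2             /* uses the tail-call counter */
#define SEEN_NVREG    0x4             /* touches non-volatile regs */

struct ctx { unsigned int seen; };

static bool has_stack_frame(const struct ctx *c)
{
	return c->seen & (SEEN_FUNC | SEEN_TAILCALL | SEEN_NVREG);
}

static void emit_program(const struct ctx *c)
{
	if (has_stack_frame(c))
		puts("stwu r1, -frame(r1)   ; allocate frame");
	puts("...body...");
	if (has_stack_frame(c))
		puts("addi r1, r1, frame    ; tear down frame");
}

int main(void)
{
	struct ctx leaf = { 0 }, caller = { SEEN_FUNC };

	emit_program(&leaf);          /* no frame emitted */
	emit_program(&caller);        /* frame emitted */
	return 0;
}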

@ -87,6 +87,13 @@ endif
# Avoid generating .eh_frame sections.
KBUILD_CFLAGS += -fno-asynchronous-unwind-tables -fno-unwind-tables
# The RISC-V attributes frequently cause compatibility issues and provide no
# information, so just turn them off.
KBUILD_CFLAGS += $(call cc-option,-mno-riscv-attribute)
KBUILD_AFLAGS += $(call cc-option,-mno-riscv-attribute)
KBUILD_CFLAGS += $(call as-option,-Wa$(comma)-mno-arch-attr)
KBUILD_AFLAGS += $(call as-option,-Wa$(comma)-mno-arch-attr)
KBUILD_CFLAGS_MODULE += $(call cc-option,-mno-relax)
KBUILD_AFLAGS_MODULE += $(call as-option,-Wa$(comma)-mno-relax)


@ -109,6 +109,6 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec);
#define ftrace_init_nop ftrace_init_nop
#endif
#endif
#endif /* CONFIG_DYNAMIC_FTRACE */
#endif /* _ASM_RISCV_FTRACE_H */


@ -3,6 +3,9 @@
* Copyright (C) 2020 SiFive
*/
#ifndef _ASM_RISCV_INSN_H
#define _ASM_RISCV_INSN_H
#include <linux/bits.h>
/* The bit field of immediate value in I-type instruction */
@ -217,3 +220,5 @@ static inline bool is_ ## INSN_NAME ## _insn(long insn) \
(RVC_X(x_, RVC_B_IMM_5_OPOFF, RVC_B_IMM_5_MASK) << RVC_B_IMM_5_OFF) | \
(RVC_X(x_, RVC_B_IMM_7_6_OPOFF, RVC_B_IMM_7_6_MASK) << RVC_B_IMM_7_6_OFF) | \
(RVC_IMM_SIGN(x_) << RVC_B_IMM_SIGN_OFF); })
#endif /* _ASM_RISCV_INSN_H */


@ -9,4 +9,6 @@
int patch_text_nosync(void *addr, const void *insns, size_t len);
int patch_text(void *addr, u32 insn);
extern int riscv_patch_in_stop_machine;
#endif /* _ASM_RISCV_PATCH_H */


@ -14,6 +14,10 @@ COMPAT_LD := $(LD)
COMPAT_CC_FLAGS := -march=rv32g -mabi=ilp32
COMPAT_LD_FLAGS := -melf32lriscv
# Disable attributes, as they're useless and break the build.
COMPAT_CC_FLAGS += $(call cc-option,-mno-riscv-attribute)
COMPAT_CC_FLAGS += $(call as-option,-Wa$(comma)-mno-arch-attr)
# Files to link into the compat_vdso
obj-compat_vdso = $(patsubst %, %.o, $(compat_vdso-syms)) note.o


@ -15,10 +15,19 @@
void ftrace_arch_code_modify_prepare(void) __acquires(&text_mutex)
{
mutex_lock(&text_mutex);
/*
* The code sequences we use for ftrace can't be patched while the
* kernel is running, so we need to use stop_machine() to modify them
* for now. This doesn't play nice with text_mutex, we use this flag
* to elide the check.
*/
riscv_patch_in_stop_machine = true;
}
void ftrace_arch_code_modify_post_process(void) __releases(&text_mutex)
{
riscv_patch_in_stop_machine = false;
mutex_unlock(&text_mutex);
}
@ -107,9 +116,9 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
{
int out;
ftrace_arch_code_modify_prepare();
mutex_lock(&text_mutex);
out = ftrace_make_nop(mod, rec, MCOUNT_ADDR);
ftrace_arch_code_modify_post_process();
mutex_unlock(&text_mutex);
return out;
}


@ -11,6 +11,7 @@
#include <asm/kprobes.h>
#include <asm/cacheflush.h>
#include <asm/fixmap.h>
#include <asm/ftrace.h>
#include <asm/patch.h>
struct patch_insn {
@ -19,6 +20,8 @@ struct patch_insn {
atomic_t cpu_count;
};
int riscv_patch_in_stop_machine = false;
#ifdef CONFIG_MMU
/*
* The fix_to_virt(, idx) needs a const value (not a dynamic variable of
@ -59,8 +62,15 @@ static int patch_insn_write(void *addr, const void *insn, size_t len)
* Before reaching here, it was expected to lock the text_mutex
* already, so we don't need to give another lock here and could
* ensure that it was safe between each cores.
*
* We're currently using stop_machine() for ftrace & kprobes, and while
* that ensures text_mutex is held before installing the mappings it
* does not ensure text_mutex is held by the calling thread. That's
* safe but triggers a lockdep failure, so just elide it for that
* specific case.
*/
lockdep_assert_held(&text_mutex);
if (!riscv_patch_in_stop_machine)
lockdep_assert_held(&text_mutex);
if (across_pages)
patch_map(addr + len, FIX_TEXT_POKE1);
@ -121,13 +131,25 @@ NOKPROBE_SYMBOL(patch_text_cb);
int patch_text(void *addr, u32 insn)
{
int ret;
struct patch_insn patch = {
.addr = addr,
.insn = insn,
.cpu_count = ATOMIC_INIT(0),
};
return stop_machine_cpuslocked(patch_text_cb,
&patch, cpu_online_mask);
/*
* kprobes takes text_mutex, before calling patch_text(), but as we
* call stop_machine(), the lockdep assertion in patch_insn_write()
* gets confused by the context in which the lock is taken.
* Instead, ensure the lock is held before calling stop_machine(), and
* set riscv_patch_in_stop_machine to skip the check in
* patch_insn_write().
*/
lockdep_assert_held(&text_mutex);
riscv_patch_in_stop_machine = true;
ret = stop_machine_cpuslocked(patch_text_cb, &patch, cpu_online_mask);
riscv_patch_in_stop_machine = false;
return ret;
}
NOKPROBE_SYMBOL(patch_text);

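The riscv_patch_in_stop_machine flag keeps the lock assertion for ordinary callers while skipping it when the write runs in the stop_machine() context, where the lock was taken by the initiating thread. A hedged userspace analogue of that shape (a plain bool standing in for lockdep state, nothing riscv-specific):

#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t text_mutex = PTHREAD_MUTEX_INITIALIZER;
static bool text_mutex_held;          /* what lockdep would track */
static bool in_stop_machine;          /* riscv_patch_in_stop_machine */

static void insn_write_sketch(void)
{
	/* lockdep_assert_held() analogue, elided under stop_machine(). */
	if (!in_stop_machine)
		assert(text_mutex_held);
	puts("patched");
}

static void patch_text_sketch(void)
{
	pthread_mutex_lock(&text_mutex);
	text_mutex_held = true;

	in_stop_machine = true;
	insn_write_sketch();          /* imagine this on a stopped CPU */
	in_stop_machine = false;

	text_mutex_held = false;
	pthread_mutex_unlock(&text_mutex);
}

int main(void)
{
	patch_text_sketch();
	return 0;
}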

@ -92,7 +92,7 @@ void notrace walk_stackframe(struct task_struct *task,
while (!kstack_end(ksp)) {
if (__kernel_text_address(pc) && unlikely(!fn(arg, pc)))
break;
pc = (*ksp++) - 0x4;
pc = READ_ONCE_NOCHECK(*ksp++) - 0x4;
}
}


@ -1,4 +1,4 @@
#define RUNTIME_DISCARD_EXIT
KERNEL_STACK_SIZE = 4096 * (1 << CONFIG_KERNEL_STACK_ORDER);
#ifdef CONFIG_LD_SCRIPT_STATIC


@ -1695,6 +1695,9 @@ extern struct kvm_x86_ops kvm_x86_ops;
#define KVM_X86_OP_OPTIONAL_RET0 KVM_X86_OP
#include <asm/kvm-x86-ops.h>
int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops);
void kvm_x86_vendor_exit(void);
#define __KVM_HAVE_ARCH_VM_ALLOC
static inline struct kvm *kvm_arch_alloc_vm(void)
{


@ -880,6 +880,15 @@ void init_spectral_chicken(struct cpuinfo_x86 *c)
}
}
#endif
/*
* Work around Erratum 1386. The XSAVES instruction malfunctions in
* certain circumstances on Zen1/2 uarch, and not all parts have had
* updated microcode at the time of writing (March 2023).
*
* Affected parts all have no supervisor XSAVE states, meaning that
* the XSAVEC instruction (which works fine) is equivalent.
*/
clear_cpu_cap(c, X86_FEATURE_XSAVES);
}
static void init_amd_zn(struct cpuinfo_x86 *c)

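The workaround leans on the fact that with no supervisor states, XSAVEC is functionally equivalent to XSAVES, so clearing the feature bit costs nothing. As a hedged aside (not part of the patch), the XSAVE variants a CPU advertises can be read from CPUID leaf 0xD sub-leaf 1; note the kernel's cleared capability affects /proc/cpuinfo, not raw CPUID:

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* CPUID.(EAX=0Dh, ECX=1): EAX enumerates XSAVE extensions. */
	if (!__get_cpuid_count(0x0d, 1, &eax, &ebx, &ecx, &edx))
		return 1;

	printf("XSAVEOPT: %s\n", (eax & (1u << 0)) ? "yes" : "no");
	printf("XSAVEC:   %s\n", (eax & (1u << 1)) ? "yes" : "no");
	printf("XSAVES:   %s\n", (eax & (1u << 3)) ? "yes" : "no");
	return 0;
}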

@ -5080,15 +5080,34 @@ static struct kvm_x86_init_ops svm_init_ops __initdata = {
static int __init svm_init(void)
{
int r;
__unused_size_checks();
return kvm_init(&svm_init_ops, sizeof(struct vcpu_svm),
__alignof__(struct vcpu_svm), THIS_MODULE);
r = kvm_x86_vendor_init(&svm_init_ops);
if (r)
return r;
/*
* Common KVM initialization _must_ come last, after this, /dev/kvm is
* exposed to userspace!
*/
r = kvm_init(&svm_init_ops, sizeof(struct vcpu_svm),
__alignof__(struct vcpu_svm), THIS_MODULE);
if (r)
goto err_kvm_init;
return 0;
err_kvm_init:
kvm_x86_vendor_exit();
return r;
}
static void __exit svm_exit(void)
{
kvm_exit();
kvm_x86_vendor_exit();
}
module_init(svm_init)


@ -551,6 +551,33 @@ static int hv_enable_direct_tlbflush(struct kvm_vcpu *vcpu)
return 0;
}
static void hv_reset_evmcs(void)
{
struct hv_vp_assist_page *vp_ap;
if (!static_branch_unlikely(&enable_evmcs))
return;
/*
* KVM should enable eVMCS if and only if all CPUs have a VP assist
* page, and should reject CPU onlining if eVMCS is enabled the CPU
* doesn't have a VP assist page allocated.
*/
vp_ap = hv_get_vp_assist_page(smp_processor_id());
if (WARN_ON_ONCE(!vp_ap))
return;
/*
* Reset everything to support using non-enlightened VMCS access later
* (e.g. when we reload the module with enlightened_vmcs=0)
*/
vp_ap->nested_control.features.directhypercall = 0;
vp_ap->current_nested_vmcs = 0;
vp_ap->enlighten_vmentry = 0;
}
#else /* IS_ENABLED(CONFIG_HYPERV) */
static void hv_reset_evmcs(void) {}
#endif /* IS_ENABLED(CONFIG_HYPERV) */
/*
@ -2501,6 +2528,8 @@ static void vmx_hardware_disable(void)
if (cpu_vmxoff())
kvm_spurious_fault();
hv_reset_evmcs();
intel_pt_handle_vmx(0);
}
@ -8427,41 +8456,23 @@ static void vmx_cleanup_l1d_flush(void)
l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
}
static void vmx_exit(void)
static void __vmx_exit(void)
{
allow_smaller_maxphyaddr = false;
#ifdef CONFIG_KEXEC_CORE
RCU_INIT_POINTER(crash_vmclear_loaded_vmcss, NULL);
synchronize_rcu();
#endif
kvm_exit();
#if IS_ENABLED(CONFIG_HYPERV)
if (static_branch_unlikely(&enable_evmcs)) {
int cpu;
struct hv_vp_assist_page *vp_ap;
/*
* Reset everything to support using non-enlightened VMCS
* access later (e.g. when we reload the module with
* enlightened_vmcs=0)
*/
for_each_online_cpu(cpu) {
vp_ap = hv_get_vp_assist_page(cpu);
if (!vp_ap)
continue;
vp_ap->nested_control.features.directhypercall = 0;
vp_ap->current_nested_vmcs = 0;
vp_ap->enlighten_vmentry = 0;
}
static_branch_disable(&enable_evmcs);
}
#endif
vmx_cleanup_l1d_flush();
}
allow_smaller_maxphyaddr = false;
static void vmx_exit(void)
{
kvm_exit();
kvm_x86_vendor_exit();
__vmx_exit();
}
module_exit(vmx_exit);
@ -8502,23 +8513,20 @@ static int __init vmx_init(void)
}
#endif
r = kvm_init(&vmx_init_ops, sizeof(struct vcpu_vmx),
__alignof__(struct vcpu_vmx), THIS_MODULE);
r = kvm_x86_vendor_init(&vmx_init_ops);
if (r)
return r;
/*
* Must be called after kvm_init() so enable_ept is properly set
* Must be called after common x86 init so enable_ept is properly set
* up. Hand the parameter mitigation value in which was stored in
* the pre module init parser. If no parameter was given, it will
* contain 'auto' which will be turned into the default 'cond'
* mitigation mode.
*/
r = vmx_setup_l1d_flush(vmentry_l1d_flush_param);
if (r) {
vmx_exit();
return r;
}
if (r)
goto err_l1d_flush;
vmx_setup_fb_clear_ctrl();
@ -8542,6 +8550,21 @@ static int __init vmx_init(void)
if (!enable_ept)
allow_smaller_maxphyaddr = true;
/*
* Common KVM initialization _must_ come last, after this, /dev/kvm is
* exposed to userspace!
*/
r = kvm_init(&vmx_init_ops, sizeof(struct vcpu_vmx),
__alignof__(struct vcpu_vmx), THIS_MODULE);
if (r)
goto err_kvm_init;
return 0;
err_kvm_init:
__vmx_exit();
err_l1d_flush:
kvm_x86_vendor_exit();
return r;
}
module_init(vmx_init);

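Both the SVM and VMX conversions enforce the same rule spelled out in the comments: register the userspace-visible interface only after every internal step has succeeded, and unwind in reverse order on failure. A standalone sketch of that pattern with hypothetical step names (not the KVM API):

#include <stdio.h>

static int  vendor_init(void)    { puts("vendor init");   return 0; }
static void vendor_exit(void)    { puts("vendor exit");   }
static int  l1d_setup(void)      { puts("l1d setup");     return 0; }
static void l1d_teardown(void)   { puts("l1d teardown");  }
static int  expose_to_user(void) { puts("device node registered"); return 0; }

static int module_init_sketch(void)
{
	int r;

	r = vendor_init();
	if (r)
		return r;

	r = l1d_setup();
	if (r)
		goto err_l1d;

	/* Userspace-visible registration comes last, as in the diff. */
	r = expose_to_user();
	if (r)
		goto err_expose;

	return 0;

err_expose:
	l1d_teardown();
err_l1d:
	vendor_exit();
	return r;
}

int main(void)
{
	return module_init_sketch();
}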

@ -9351,7 +9351,16 @@ static struct notifier_block pvclock_gtod_notifier = {
int kvm_arch_init(void *opaque)
{
struct kvm_x86_init_ops *ops = opaque;
return 0;
}
void kvm_arch_exit(void)
{
}
int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
{
u64 host_pat;
int r;
@ -9441,8 +9450,9 @@ int kvm_arch_init(void *opaque)
kmem_cache_destroy(x86_emulator_cache);
return r;
}
EXPORT_SYMBOL_GPL(kvm_x86_vendor_init);
void kvm_arch_exit(void)
void kvm_x86_vendor_exit(void)
{
#ifdef CONFIG_X86_64
if (hypervisor_is_type(X86_HYPER_MS_HYPERV))
@ -9468,6 +9478,7 @@ void kvm_arch_exit(void)
WARN_ON(static_branch_unlikely(&kvm_xen_enabled.key));
#endif
}
EXPORT_SYMBOL_GPL(kvm_x86_vendor_exit);
static int __kvm_emulate_halt(struct kvm_vcpu *vcpu, int state, int reason)
{


@ -436,7 +436,7 @@ static inline struct kmem_cache *blk_get_queue_kmem_cache(bool srcu)
}
struct request_queue *blk_alloc_queue(int node_id, bool alloc_srcu);
int disk_scan_partitions(struct gendisk *disk, fmode_t mode, void *owner);
int disk_scan_partitions(struct gendisk *disk, fmode_t mode);
int disk_alloc_events(struct gendisk *disk);
void disk_add_events(struct gendisk *disk);


@ -356,9 +356,10 @@ void disk_uevent(struct gendisk *disk, enum kobject_action action)
}
EXPORT_SYMBOL_GPL(disk_uevent);
int disk_scan_partitions(struct gendisk *disk, fmode_t mode, void *owner)
int disk_scan_partitions(struct gendisk *disk, fmode_t mode)
{
struct block_device *bdev;
int ret = 0;
if (disk->flags & (GENHD_FL_NO_PART | GENHD_FL_HIDDEN))
return -EINVAL;
@ -366,16 +367,29 @@ int disk_scan_partitions(struct gendisk *disk, fmode_t mode, void *owner)
return -EINVAL;
if (disk->open_partitions)
return -EBUSY;
/* Someone else has bdev exclusively open? */
if (disk->part0->bd_holder && disk->part0->bd_holder != owner)
return -EBUSY;
set_bit(GD_NEED_PART_SCAN, &disk->state);
bdev = blkdev_get_by_dev(disk_devt(disk), mode, NULL);
/*
* If the device is opened exclusively by current thread already, it's
* safe to scan partitons, otherwise, use bd_prepare_to_claim() to
* synchronize with other exclusive openers and other partition
* scanners.
*/
if (!(mode & FMODE_EXCL)) {
ret = bd_prepare_to_claim(disk->part0, disk_scan_partitions);
if (ret)
return ret;
}
bdev = blkdev_get_by_dev(disk_devt(disk), mode & ~FMODE_EXCL, NULL);
if (IS_ERR(bdev))
return PTR_ERR(bdev);
blkdev_put(bdev, mode);
return 0;
ret = PTR_ERR(bdev);
else
blkdev_put(bdev, mode & ~FMODE_EXCL);
if (!(mode & FMODE_EXCL))
bd_abort_claiming(disk->part0, disk_scan_partitions);
return ret;
}
/**
@ -501,9 +515,14 @@ int __must_check device_add_disk(struct device *parent, struct gendisk *disk,
if (ret)
goto out_unregister_bdi;
/* Make sure the first partition scan will proceed */
if (get_capacity(disk) && !(disk->flags & GENHD_FL_NO_PART) &&
!test_bit(GD_SUPPRESS_PART_SCAN, &disk->state))
set_bit(GD_NEED_PART_SCAN, &disk->state);
bdev_add(disk->part0, ddev->devt);
if (get_capacity(disk))
disk_scan_partitions(disk, FMODE_READ, NULL);
disk_scan_partitions(disk, FMODE_READ);
/*
* Announce the disk and partitions after all partitions are

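The scan no longer holds FMODE_EXCL across blkdev_get_by_dev(); instead bd_prepare_to_claim()/bd_abort_claiming() bracket it, so other exclusive openers and concurrent scanners are held off for exactly the duration of the scan. A rough userspace analogue of a two-phase claim using compare-and-swap (assumed semantics, heavily simplified):

#include <stdatomic.h>
#include <stdio.h>

/* NULL means unclaimed; otherwise points at the current claimer. */
static _Atomic(void *) holder;

static int prepare_to_claim(void *who)
{
	void *expected = NULL;

	/* -EBUSY-like failure if someone else already holds the claim. */
	if (!atomic_compare_exchange_strong(&holder, &expected, who))
		return -1;
	return 0;
}

static void abort_claiming(void *who)
{
	void *expected = who;

	atomic_compare_exchange_strong(&holder, &expected, NULL);
}

int main(void)
{
	int token;

	if (prepare_to_claim(&token))
		return 1;
	puts("scan partitions while the claim is held");
	abort_claiming(&token);
	return 0;
}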

@ -467,10 +467,10 @@ static int blkdev_bszset(struct block_device *bdev, fmode_t mode,
* user space. Note the separate arg/argp parameters that are needed
* to deal with the compat_ptr() conversion.
*/
static int blkdev_common_ioctl(struct file *file, fmode_t mode, unsigned cmd,
unsigned long arg, void __user *argp)
static int blkdev_common_ioctl(struct block_device *bdev, fmode_t mode,
unsigned int cmd, unsigned long arg,
void __user *argp)
{
struct block_device *bdev = I_BDEV(file->f_mapping->host);
unsigned int max_sectors;
switch (cmd) {
@ -528,8 +528,7 @@ static int blkdev_common_ioctl(struct file *file, fmode_t mode, unsigned cmd,
return -EACCES;
if (bdev_is_partition(bdev))
return -EINVAL;
return disk_scan_partitions(bdev->bd_disk, mode & ~FMODE_EXCL,
file);
return disk_scan_partitions(bdev->bd_disk, mode);
case BLKTRACESTART:
case BLKTRACESTOP:
case BLKTRACETEARDOWN:
@ -607,7 +606,7 @@ long blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg)
break;
}
ret = blkdev_common_ioctl(file, mode, cmd, arg, argp);
ret = blkdev_common_ioctl(bdev, mode, cmd, arg, argp);
if (ret != -ENOIOCTLCMD)
return ret;
@ -676,7 +675,7 @@ long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg)
break;
}
ret = blkdev_common_ioctl(file, mode, cmd, arg, argp);
ret = blkdev_common_ioctl(bdev, mode, cmd, arg, argp);
if (ret == -ENOIOCTLCMD && disk->fops->compat_ioctl)
ret = disk->fops->compat_ioctl(bdev, mode, cmd, arg);


@ -990,44 +990,25 @@ static void mhi_ep_abort_transfer(struct mhi_ep_cntrl *mhi_cntrl)
static void mhi_ep_reset_worker(struct work_struct *work)
{
struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, reset_work);
struct device *dev = &mhi_cntrl->mhi_dev->dev;
enum mhi_state cur_state;
int ret;
mhi_ep_abort_transfer(mhi_cntrl);
mhi_ep_power_down(mhi_cntrl);
mutex_lock(&mhi_cntrl->state_lock);
spin_lock_bh(&mhi_cntrl->state_lock);
/* Reset MMIO to signal host that the MHI_RESET is completed in endpoint */
mhi_ep_mmio_reset(mhi_cntrl);
cur_state = mhi_cntrl->mhi_state;
spin_unlock_bh(&mhi_cntrl->state_lock);
/*
* Only proceed further if the reset is due to SYS_ERR. The host will
* issue reset during shutdown also and we don't need to do re-init in
* that case.
*/
if (cur_state == MHI_STATE_SYS_ERR) {
mhi_ep_mmio_init(mhi_cntrl);
if (cur_state == MHI_STATE_SYS_ERR)
mhi_ep_power_up(mhi_cntrl);
/* Set AMSS EE before signaling ready state */
mhi_ep_mmio_set_env(mhi_cntrl, MHI_EE_AMSS);
/* All set, notify the host that we are ready */
ret = mhi_ep_set_ready_state(mhi_cntrl);
if (ret)
return;
dev_dbg(dev, "READY state notification sent to the host\n");
ret = mhi_ep_enable(mhi_cntrl);
if (ret) {
dev_err(dev, "Failed to enable MHI endpoint: %d\n", ret);
return;
}
enable_irq(mhi_cntrl->irq);
}
mutex_unlock(&mhi_cntrl->state_lock);
}
/*
@ -1106,11 +1087,11 @@ EXPORT_SYMBOL_GPL(mhi_ep_power_up);
void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl)
{
if (mhi_cntrl->enabled)
if (mhi_cntrl->enabled) {
mhi_ep_abort_transfer(mhi_cntrl);
kfree(mhi_cntrl->mhi_event);
disable_irq(mhi_cntrl->irq);
kfree(mhi_cntrl->mhi_event);
disable_irq(mhi_cntrl->irq);
}
}
EXPORT_SYMBOL_GPL(mhi_ep_power_down);
@ -1400,8 +1381,8 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
INIT_LIST_HEAD(&mhi_cntrl->st_transition_list);
INIT_LIST_HEAD(&mhi_cntrl->ch_db_list);
spin_lock_init(&mhi_cntrl->state_lock);
spin_lock_init(&mhi_cntrl->list_lock);
mutex_init(&mhi_cntrl->state_lock);
mutex_init(&mhi_cntrl->event_lock);
/* Set MHI version and AMSS EE before enumeration */


@ -63,24 +63,23 @@ int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl)
int ret;
/* If MHI is in M3, resume suspended channels */
spin_lock_bh(&mhi_cntrl->state_lock);
mutex_lock(&mhi_cntrl->state_lock);
old_state = mhi_cntrl->mhi_state;
if (old_state == MHI_STATE_M3)
mhi_ep_resume_channels(mhi_cntrl);
ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
spin_unlock_bh(&mhi_cntrl->state_lock);
if (ret) {
mhi_ep_handle_syserr(mhi_cntrl);
return ret;
goto err_unlock;
}
/* Signal host that the device moved to M0 */
ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_M0);
if (ret) {
dev_err(dev, "Failed sending M0 state change event\n");
return ret;
goto err_unlock;
}
if (old_state == MHI_STATE_READY) {
@ -88,11 +87,14 @@ int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl)
ret = mhi_ep_send_ee_event(mhi_cntrl, MHI_EE_AMSS);
if (ret) {
dev_err(dev, "Failed sending AMSS EE event\n");
return ret;
goto err_unlock;
}
}
return 0;
err_unlock:
mutex_unlock(&mhi_cntrl->state_lock);
return ret;
}
int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl)
@ -100,13 +102,12 @@ int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl)
struct device *dev = &mhi_cntrl->mhi_dev->dev;
int ret;
spin_lock_bh(&mhi_cntrl->state_lock);
ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M3);
spin_unlock_bh(&mhi_cntrl->state_lock);
mutex_lock(&mhi_cntrl->state_lock);
ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M3);
if (ret) {
mhi_ep_handle_syserr(mhi_cntrl);
return ret;
goto err_unlock;
}
mhi_ep_suspend_channels(mhi_cntrl);
@ -115,10 +116,13 @@ int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl)
ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_M3);
if (ret) {
dev_err(dev, "Failed sending M3 state change event\n");
return ret;
goto err_unlock;
}
return 0;
err_unlock:
mutex_unlock(&mhi_cntrl->state_lock);
return ret;
}
int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl)
@ -127,22 +131,24 @@ int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl)
enum mhi_state mhi_state;
int ret, is_ready;
spin_lock_bh(&mhi_cntrl->state_lock);
mutex_lock(&mhi_cntrl->state_lock);
/* Ensure that the MHISTATUS is set to RESET by host */
mhi_state = mhi_ep_mmio_masked_read(mhi_cntrl, EP_MHISTATUS, MHISTATUS_MHISTATE_MASK);
is_ready = mhi_ep_mmio_masked_read(mhi_cntrl, EP_MHISTATUS, MHISTATUS_READY_MASK);
if (mhi_state != MHI_STATE_RESET || is_ready) {
dev_err(dev, "READY state transition failed. MHI host not in RESET state\n");
spin_unlock_bh(&mhi_cntrl->state_lock);
return -EIO;
ret = -EIO;
goto err_unlock;
}
ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_READY);
spin_unlock_bh(&mhi_cntrl->state_lock);
if (ret)
mhi_ep_handle_syserr(mhi_cntrl);
err_unlock:
mutex_unlock(&mhi_cntrl->state_lock);
return ret;
}

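Converting state_lock to a mutex means it can now be held across the sleeping MMIO and event work, so every early return funnels through a single unlock site. A minimal pthread sketch of the goto-err_unlock shape used throughout this file (hypothetical state function):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;

static int set_state(int target)
{
	return target >= 0 ? 0 : -1; /* stand-in for the MMIO work */
}

static int set_m0_state_sketch(void)
{
	int ret;

	pthread_mutex_lock(&state_lock);

	ret = set_state(0);
	if (ret)
		goto err_unlock;

	puts("state change event sent");

err_unlock:
	pthread_mutex_unlock(&state_lock);
	return ret;
}

int main(void)
{
	return set_m0_state_sketch();
}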

@ -74,7 +74,8 @@
/*
* Timer values
*/
#define SSIF_MSG_USEC 20000 /* 20ms between message tries. */
#define SSIF_MSG_USEC 60000 /* 60ms between message tries (T3). */
#define SSIF_REQ_RETRY_USEC 60000 /* 60ms between send retries (T6). */
#define SSIF_MSG_PART_USEC 5000 /* 5ms for a message part */
/* How many times to we retry sending/receiving the message. */
@ -82,7 +83,9 @@
#define SSIF_RECV_RETRIES 250
#define SSIF_MSG_MSEC (SSIF_MSG_USEC / 1000)
#define SSIF_REQ_RETRY_MSEC (SSIF_REQ_RETRY_USEC / 1000)
#define SSIF_MSG_JIFFIES ((SSIF_MSG_USEC * 1000) / TICK_NSEC)
#define SSIF_REQ_RETRY_JIFFIES ((SSIF_REQ_RETRY_USEC * 1000) / TICK_NSEC)
#define SSIF_MSG_PART_JIFFIES ((SSIF_MSG_PART_USEC * 1000) / TICK_NSEC)
/*
@ -229,6 +232,9 @@ struct ssif_info {
bool got_alert;
bool waiting_alert;
/* Used to inform the timeout that it should do a resend. */
bool do_resend;
/*
* If set to true, this will request events the next time the
* state machine is idle.
@ -241,12 +247,6 @@ struct ssif_info {
*/
bool req_flags;
/*
* Used to perform timer operations when run-to-completion
* mode is on. This is a countdown timer.
*/
int rtc_us_timer;
/* Used for sending/receiving data. +1 for the length. */
unsigned char data[IPMI_MAX_MSG_LENGTH + 1];
unsigned int data_len;
@ -530,7 +530,6 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
static void start_get(struct ssif_info *ssif_info)
{
ssif_info->rtc_us_timer = 0;
ssif_info->multi_pos = 0;
ssif_i2c_send(ssif_info, msg_done_handler, I2C_SMBUS_READ,
@ -538,22 +537,28 @@ static void start_get(struct ssif_info *ssif_info)
ssif_info->recv, I2C_SMBUS_BLOCK_DATA);
}
static void start_resend(struct ssif_info *ssif_info);
static void retry_timeout(struct timer_list *t)
{
struct ssif_info *ssif_info = from_timer(ssif_info, t, retry_timer);
unsigned long oflags, *flags;
bool waiting;
bool waiting, resend;
if (ssif_info->stopping)
return;
flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
resend = ssif_info->do_resend;
ssif_info->do_resend = false;
waiting = ssif_info->waiting_alert;
ssif_info->waiting_alert = false;
ipmi_ssif_unlock_cond(ssif_info, flags);
if (waiting)
start_get(ssif_info);
if (resend)
start_resend(ssif_info);
}
static void watch_timeout(struct timer_list *t)
@ -602,8 +607,6 @@ static void ssif_alert(struct i2c_client *client, enum i2c_alert_protocol type,
start_get(ssif_info);
}
static void start_resend(struct ssif_info *ssif_info);
static void msg_done_handler(struct ssif_info *ssif_info, int result,
unsigned char *data, unsigned int len)
{
@ -622,7 +625,6 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
ssif_info->waiting_alert = true;
ssif_info->rtc_us_timer = SSIF_MSG_USEC;
if (!ssif_info->stopping)
mod_timer(&ssif_info->retry_timer,
jiffies + SSIF_MSG_JIFFIES);
@ -909,7 +911,13 @@ static void msg_written_handler(struct ssif_info *ssif_info, int result,
if (result < 0) {
ssif_info->retries_left--;
if (ssif_info->retries_left > 0) {
start_resend(ssif_info);
/*
* Wait the retry timeout time per the spec,
* then redo the send.
*/
ssif_info->do_resend = true;
mod_timer(&ssif_info->retry_timer,
jiffies + SSIF_REQ_RETRY_JIFFIES);
return;
}
@ -973,7 +981,6 @@ static void msg_written_handler(struct ssif_info *ssif_info, int result,
/* Wait a jiffie then request the next message */
ssif_info->waiting_alert = true;
ssif_info->retries_left = SSIF_RECV_RETRIES;
ssif_info->rtc_us_timer = SSIF_MSG_PART_USEC;
if (!ssif_info->stopping)
mod_timer(&ssif_info->retry_timer,
jiffies + SSIF_MSG_PART_JIFFIES);
@ -1320,8 +1327,10 @@ static int do_cmd(struct i2c_client *client, int len, unsigned char *msg,
ret = i2c_smbus_write_block_data(client, SSIF_IPMI_REQUEST, len, msg);
if (ret) {
retry_cnt--;
if (retry_cnt > 0)
if (retry_cnt > 0) {
msleep(SSIF_REQ_RETRY_MSEC);
goto retry1;
}
return -ENODEV;
}
@ -1462,8 +1471,10 @@ static int start_multipart_test(struct i2c_client *client,
32, msg);
if (ret) {
retry_cnt--;
if (retry_cnt > 0)
if (retry_cnt > 0) {
msleep(SSIF_REQ_RETRY_MSEC);
goto retry_write;
}
dev_err(&client->dev, "Could not write multi-part start, though the BMC said it could handle it. Just limit sends to one part.\n");
return ret;
}

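The synchronous probe paths get the same fix directly: msleep(SSIF_REQ_RETRY_MSEC) between attempts instead of hammering the BMC back-to-back. A hedged userspace sketch of a bounded retry loop with a fixed inter-try delay (the 60ms mirrors SSIF_REQ_RETRY_MSEC; try_send() is hypothetical):

#include <stdio.h>
#include <time.h>

#define REQ_RETRIES    5
#define REQ_RETRY_MSEC 60

static int try_send(void)
{
	return -1;                    /* pretend the BMC NAKed us */
}

static int send_with_retries(void)
{
	struct timespec delay = { 0, REQ_RETRY_MSEC * 1000000L };
	int retries = REQ_RETRIES;

	while (retries-- > 0) {
		if (try_send() == 0)
			return 0;
		/* Per the spec, wait the retry time before resending. */
		nanosleep(&delay, NULL);
	}
	return -1;
}

int main(void)
{
	return send_with_retries() ? 1 : 0;
}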

@ -143,8 +143,12 @@ int tpm_read_log_acpi(struct tpm_chip *chip)
ret = -EIO;
virt = acpi_os_map_iomem(start, len);
if (!virt)
if (!virt) {
dev_warn(&chip->dev, "%s: Failed to map ACPI memory\n", __func__);
/* try EFI log next */
ret = -ENODEV;
goto err;
}
memcpy_fromio(log->bios_event_log, virt, len);

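Returning -ENODEV rather than aborting matters because the caller treats it as "this log source is absent, try the next one", which is what the "/* try EFI log next */" comment refers to. A hedged sketch of that error-code-driven fallback (hypothetical reader functions):

#include <errno.h>
#include <stdio.h>

static int read_log_acpi(void)
{
	/* Mapping failed: report the source as absent, not broken. */
	return -ENODEV;
}

static int read_log_efi(void)
{
	return 0;                     /* pretend the EFI log worked */
}

static int read_log(void)
{
	int rc = read_log_acpi();

	if (rc == -ENODEV)
		rc = read_log_efi();  /* fall back only when absent */
	return rc;
}

int main(void)
{
	return read_log() ? 1 : 0;
}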

@ -22,7 +22,7 @@ config CLK_RENESAS
select CLK_R8A7791 if ARCH_R8A7791 || ARCH_R8A7793
select CLK_R8A7792 if ARCH_R8A7792
select CLK_R8A7794 if ARCH_R8A7794
select CLK_R8A7795 if ARCH_R8A77950 || ARCH_R8A77951
select CLK_R8A7795 if ARCH_R8A77951
select CLK_R8A77960 if ARCH_R8A77960
select CLK_R8A77961 if ARCH_R8A77961
select CLK_R8A77965 if ARCH_R8A77965


@ -128,7 +128,6 @@ static struct cpg_core_clk r8a7795_core_clks[] __initdata = {
};
static struct mssr_mod_clk r8a7795_mod_clks[] __initdata = {
DEF_MOD("fdp1-2", 117, R8A7795_CLK_S2D1), /* ES1.x */
DEF_MOD("fdp1-1", 118, R8A7795_CLK_S0D1),
DEF_MOD("fdp1-0", 119, R8A7795_CLK_S0D1),
DEF_MOD("tmu4", 121, R8A7795_CLK_S0D6),
@ -162,7 +161,6 @@ static struct mssr_mod_clk r8a7795_mod_clks[] __initdata = {
DEF_MOD("pcie1", 318, R8A7795_CLK_S3D1),
DEF_MOD("pcie0", 319, R8A7795_CLK_S3D1),
DEF_MOD("usb-dmac30", 326, R8A7795_CLK_S3D1),
DEF_MOD("usb3-if1", 327, R8A7795_CLK_S3D1), /* ES1.x */
DEF_MOD("usb3-if0", 328, R8A7795_CLK_S3D1),
DEF_MOD("usb-dmac31", 329, R8A7795_CLK_S3D1),
DEF_MOD("usb-dmac0", 330, R8A7795_CLK_S3D1),
@ -187,28 +185,21 @@ static struct mssr_mod_clk r8a7795_mod_clks[] __initdata = {
DEF_MOD("hscif0", 520, R8A7795_CLK_S3D1),
DEF_MOD("thermal", 522, R8A7795_CLK_CP),
DEF_MOD("pwm", 523, R8A7795_CLK_S0D12),
DEF_MOD("fcpvd3", 600, R8A7795_CLK_S2D1), /* ES1.x */
DEF_MOD("fcpvd2", 601, R8A7795_CLK_S0D2),
DEF_MOD("fcpvd1", 602, R8A7795_CLK_S0D2),
DEF_MOD("fcpvd0", 603, R8A7795_CLK_S0D2),
DEF_MOD("fcpvb1", 606, R8A7795_CLK_S0D1),
DEF_MOD("fcpvb0", 607, R8A7795_CLK_S0D1),
DEF_MOD("fcpvi2", 609, R8A7795_CLK_S2D1), /* ES1.x */
DEF_MOD("fcpvi1", 610, R8A7795_CLK_S0D1),
DEF_MOD("fcpvi0", 611, R8A7795_CLK_S0D1),
DEF_MOD("fcpf2", 613, R8A7795_CLK_S2D1), /* ES1.x */
DEF_MOD("fcpf1", 614, R8A7795_CLK_S0D1),
DEF_MOD("fcpf0", 615, R8A7795_CLK_S0D1),
DEF_MOD("fcpci1", 616, R8A7795_CLK_S2D1), /* ES1.x */
DEF_MOD("fcpci0", 617, R8A7795_CLK_S2D1), /* ES1.x */
DEF_MOD("fcpcs", 619, R8A7795_CLK_S0D1),
DEF_MOD("vspd3", 620, R8A7795_CLK_S2D1), /* ES1.x */
DEF_MOD("vspd2", 621, R8A7795_CLK_S0D2),
DEF_MOD("vspd1", 622, R8A7795_CLK_S0D2),
DEF_MOD("vspd0", 623, R8A7795_CLK_S0D2),
DEF_MOD("vspbc", 624, R8A7795_CLK_S0D1),
DEF_MOD("vspbd", 626, R8A7795_CLK_S0D1),
DEF_MOD("vspi2", 629, R8A7795_CLK_S2D1), /* ES1.x */
DEF_MOD("vspi1", 630, R8A7795_CLK_S0D1),
DEF_MOD("vspi0", 631, R8A7795_CLK_S0D1),
DEF_MOD("ehci3", 700, R8A7795_CLK_S3D2),
@ -221,7 +212,6 @@ static struct mssr_mod_clk r8a7795_mod_clks[] __initdata = {
DEF_MOD("cmm2", 709, R8A7795_CLK_S2D1),
DEF_MOD("cmm1", 710, R8A7795_CLK_S2D1),
DEF_MOD("cmm0", 711, R8A7795_CLK_S2D1),
DEF_MOD("csi21", 713, R8A7795_CLK_CSI0), /* ES1.x */
DEF_MOD("csi20", 714, R8A7795_CLK_CSI0),
DEF_MOD("csi41", 715, R8A7795_CLK_CSI0),
DEF_MOD("csi40", 716, R8A7795_CLK_CSI0),
@ -350,103 +340,26 @@ static const struct rcar_gen3_cpg_pll_config cpg_pll_configs[16] __initconst = {
{ 2, 192, 1, 192, 1, 32, },
};
static const struct soc_device_attribute r8a7795es1[] __initconst = {
static const struct soc_device_attribute r8a7795_denylist[] __initconst = {
{ .soc_id = "r8a7795", .revision = "ES1.*" },
{ /* sentinel */ }
};
/*
* Fixups for R-Car H3 ES1.x
*/
static const unsigned int r8a7795es1_mod_nullify[] __initconst = {
MOD_CLK_ID(326), /* USB-DMAC3-0 */
MOD_CLK_ID(329), /* USB-DMAC3-1 */
MOD_CLK_ID(700), /* EHCI/OHCI3 */
MOD_CLK_ID(705), /* HS-USB-IF3 */
};
static const struct mssr_mod_reparent r8a7795es1_mod_reparent[] __initconst = {
{ MOD_CLK_ID(118), R8A7795_CLK_S2D1 }, /* FDP1-1 */
{ MOD_CLK_ID(119), R8A7795_CLK_S2D1 }, /* FDP1-0 */
{ MOD_CLK_ID(121), R8A7795_CLK_S3D2 }, /* TMU4 */
{ MOD_CLK_ID(217), R8A7795_CLK_S3D1 }, /* SYS-DMAC2 */
{ MOD_CLK_ID(218), R8A7795_CLK_S3D1 }, /* SYS-DMAC1 */
{ MOD_CLK_ID(219), R8A7795_CLK_S3D1 }, /* SYS-DMAC0 */
{ MOD_CLK_ID(408), R8A7795_CLK_S3D1 }, /* INTC-AP */
{ MOD_CLK_ID(501), R8A7795_CLK_S3D1 }, /* AUDMAC1 */
{ MOD_CLK_ID(502), R8A7795_CLK_S3D1 }, /* AUDMAC0 */
{ MOD_CLK_ID(523), R8A7795_CLK_S3D4 }, /* PWM */
{ MOD_CLK_ID(601), R8A7795_CLK_S2D1 }, /* FCPVD2 */
{ MOD_CLK_ID(602), R8A7795_CLK_S2D1 }, /* FCPVD1 */
{ MOD_CLK_ID(603), R8A7795_CLK_S2D1 }, /* FCPVD0 */
{ MOD_CLK_ID(606), R8A7795_CLK_S2D1 }, /* FCPVB1 */
{ MOD_CLK_ID(607), R8A7795_CLK_S2D1 }, /* FCPVB0 */
{ MOD_CLK_ID(610), R8A7795_CLK_S2D1 }, /* FCPVI1 */
{ MOD_CLK_ID(611), R8A7795_CLK_S2D1 }, /* FCPVI0 */
{ MOD_CLK_ID(614), R8A7795_CLK_S2D1 }, /* FCPF1 */
{ MOD_CLK_ID(615), R8A7795_CLK_S2D1 }, /* FCPF0 */
{ MOD_CLK_ID(619), R8A7795_CLK_S2D1 }, /* FCPCS */
{ MOD_CLK_ID(621), R8A7795_CLK_S2D1 }, /* VSPD2 */
{ MOD_CLK_ID(622), R8A7795_CLK_S2D1 }, /* VSPD1 */
{ MOD_CLK_ID(623), R8A7795_CLK_S2D1 }, /* VSPD0 */
{ MOD_CLK_ID(624), R8A7795_CLK_S2D1 }, /* VSPBC */
{ MOD_CLK_ID(626), R8A7795_CLK_S2D1 }, /* VSPBD */
{ MOD_CLK_ID(630), R8A7795_CLK_S2D1 }, /* VSPI1 */
{ MOD_CLK_ID(631), R8A7795_CLK_S2D1 }, /* VSPI0 */
{ MOD_CLK_ID(804), R8A7795_CLK_S2D1 }, /* VIN7 */
{ MOD_CLK_ID(805), R8A7795_CLK_S2D1 }, /* VIN6 */
{ MOD_CLK_ID(806), R8A7795_CLK_S2D1 }, /* VIN5 */
{ MOD_CLK_ID(807), R8A7795_CLK_S2D1 }, /* VIN4 */
{ MOD_CLK_ID(808), R8A7795_CLK_S2D1 }, /* VIN3 */
{ MOD_CLK_ID(809), R8A7795_CLK_S2D1 }, /* VIN2 */
{ MOD_CLK_ID(810), R8A7795_CLK_S2D1 }, /* VIN1 */
{ MOD_CLK_ID(811), R8A7795_CLK_S2D1 }, /* VIN0 */
{ MOD_CLK_ID(812), R8A7795_CLK_S3D2 }, /* EAVB-IF */
{ MOD_CLK_ID(820), R8A7795_CLK_S2D1 }, /* IMR3 */
{ MOD_CLK_ID(821), R8A7795_CLK_S2D1 }, /* IMR2 */
{ MOD_CLK_ID(822), R8A7795_CLK_S2D1 }, /* IMR1 */
{ MOD_CLK_ID(823), R8A7795_CLK_S2D1 }, /* IMR0 */
{ MOD_CLK_ID(905), R8A7795_CLK_CP }, /* GPIO7 */
{ MOD_CLK_ID(906), R8A7795_CLK_CP }, /* GPIO6 */
{ MOD_CLK_ID(907), R8A7795_CLK_CP }, /* GPIO5 */
{ MOD_CLK_ID(908), R8A7795_CLK_CP }, /* GPIO4 */
{ MOD_CLK_ID(909), R8A7795_CLK_CP }, /* GPIO3 */
{ MOD_CLK_ID(910), R8A7795_CLK_CP }, /* GPIO2 */
{ MOD_CLK_ID(911), R8A7795_CLK_CP }, /* GPIO1 */
{ MOD_CLK_ID(912), R8A7795_CLK_CP }, /* GPIO0 */
{ MOD_CLK_ID(918), R8A7795_CLK_S3D2 }, /* I2C6 */
{ MOD_CLK_ID(919), R8A7795_CLK_S3D2 }, /* I2C5 */
{ MOD_CLK_ID(927), R8A7795_CLK_S3D2 }, /* I2C4 */
{ MOD_CLK_ID(928), R8A7795_CLK_S3D2 }, /* I2C3 */
};
/*
* Fixups for R-Car H3 ES2.x
*/
static const unsigned int r8a7795es2_mod_nullify[] __initconst = {
MOD_CLK_ID(117), /* FDP1-2 */
MOD_CLK_ID(327), /* USB3-IF1 */
MOD_CLK_ID(600), /* FCPVD3 */
MOD_CLK_ID(609), /* FCPVI2 */
MOD_CLK_ID(613), /* FCPF2 */
MOD_CLK_ID(616), /* FCPCI1 */
MOD_CLK_ID(617), /* FCPCI0 */
MOD_CLK_ID(620), /* VSPD3 */
MOD_CLK_ID(629), /* VSPI2 */
MOD_CLK_ID(713), /* CSI21 */
};
static int __init r8a7795_cpg_mssr_init(struct device *dev)
{
const struct rcar_gen3_cpg_pll_config *cpg_pll_config;
u32 cpg_mode;
int error;
/*
* We panic here to ensure removed SoCs and clk updates are always in
* sync to avoid overclocking damages. The panic can only be seen with
* commandline args 'earlycon keep_bootcon'. But these SoCs were for
* developers only anyhow.
*/
if (soc_device_match(r8a7795_denylist))
panic("SoC not supported anymore!\n");
error = rcar_rst_read_mode_pins(&cpg_mode);
if (error)
return error;
@ -457,25 +370,6 @@ static int __init r8a7795_cpg_mssr_init(struct device *dev)
return -EINVAL;
}
if (soc_device_match(r8a7795es1)) {
cpg_core_nullify_range(r8a7795_core_clks,
ARRAY_SIZE(r8a7795_core_clks),
R8A7795_CLK_S0D2, R8A7795_CLK_S0D12);
mssr_mod_nullify(r8a7795_mod_clks,
ARRAY_SIZE(r8a7795_mod_clks),
r8a7795es1_mod_nullify,
ARRAY_SIZE(r8a7795es1_mod_nullify));
mssr_mod_reparent(r8a7795_mod_clks,
ARRAY_SIZE(r8a7795_mod_clks),
r8a7795es1_mod_reparent,
ARRAY_SIZE(r8a7795es1_mod_reparent));
} else {
mssr_mod_nullify(r8a7795_mod_clks,
ARRAY_SIZE(r8a7795_mod_clks),
r8a7795es2_mod_nullify,
ARRAY_SIZE(r8a7795es2_mod_nullify));
}
return rcar_gen3_cpg_init(cpg_pll_config, CLK_EXTALR, cpg_mode);
}


@ -310,19 +310,10 @@ static unsigned int cpg_clk_extalr __initdata;
static u32 cpg_mode __initdata;
static u32 cpg_quirks __initdata;
#define PLL_ERRATA BIT(0) /* Missing PLL0/2/4 post-divider */
#define RCKCR_CKSEL BIT(1) /* Manual RCLK parent selection */
static const struct soc_device_attribute cpg_quirks_match[] __initconst = {
{
.soc_id = "r8a7795", .revision = "ES1.0",
.data = (void *)(PLL_ERRATA | RCKCR_CKSEL),
},
{
.soc_id = "r8a7795", .revision = "ES1.*",
.data = (void *)(RCKCR_CKSEL),
},
{
.soc_id = "r8a7796", .revision = "ES1.0",
.data = (void *)(RCKCR_CKSEL),
@ -355,9 +346,8 @@ struct clk * __init rcar_gen3_cpg_clk_register(struct device *dev,
* multiplier when cpufreq changes between normal and boost
* modes.
*/
mult = (cpg_quirks & PLL_ERRATA) ? 4 : 2;
return cpg_pll_clk_register(core->name, __clk_get_name(parent),
base, mult, CPG_PLL0CR, 0);
base, 2, CPG_PLL0CR, 0);
case CLK_TYPE_GEN3_PLL1:
mult = cpg_pll_config->pll1_mult;
@ -370,9 +360,8 @@ struct clk * __init rcar_gen3_cpg_clk_register(struct device *dev,
* multiplier when cpufreq changes between normal and boost
* modes.
*/
mult = (cpg_quirks & PLL_ERRATA) ? 4 : 2;
return cpg_pll_clk_register(core->name, __clk_get_name(parent),
base, mult, CPG_PLL2CR, 2);
base, 2, CPG_PLL2CR, 2);
case CLK_TYPE_GEN3_PLL3:
mult = cpg_pll_config->pll3_mult;
@ -388,8 +377,6 @@ struct clk * __init rcar_gen3_cpg_clk_register(struct device *dev,
*/
value = readl(base + CPG_PLL4CR);
mult = (((value >> 24) & 0x7f) + 1) * 2;
if (cpg_quirks & PLL_ERRATA)
mult *= 2;
break;
case CLK_TYPE_GEN3_SDH:


@ -1113,19 +1113,6 @@ static int __init cpg_mssr_init(void)
subsys_initcall(cpg_mssr_init);
void __init cpg_core_nullify_range(struct cpg_core_clk *core_clks,
unsigned int num_core_clks,
unsigned int first_clk,
unsigned int last_clk)
{
unsigned int i;
for (i = 0; i < num_core_clks; i++)
if (core_clks[i].id >= first_clk &&
core_clks[i].id <= last_clk)
core_clks[i].name = NULL;
}
void __init mssr_mod_nullify(struct mssr_mod_clk *mod_clks,
unsigned int num_mod_clks,
const unsigned int *clks, unsigned int n)
@ -1139,19 +1126,5 @@ void __init mssr_mod_nullify(struct mssr_mod_clk *mod_clks,
}
}
void __init mssr_mod_reparent(struct mssr_mod_clk *mod_clks,
unsigned int num_mod_clks,
const struct mssr_mod_reparent *clks,
unsigned int n)
{
unsigned int i, j;
for (i = 0, j = 0; i < num_mod_clks && j < n; i++)
if (mod_clks[i].id == clks[j].clk) {
mod_clks[i].parent = clks[j].parent;
j++;
}
}
MODULE_DESCRIPTION("Renesas CPG/MSSR Driver");
MODULE_LICENSE("GPL v2");


@ -187,21 +187,7 @@ void __init cpg_mssr_early_init(struct device_node *np,
/*
* Helpers for fixing up clock tables depending on SoC revision
*/
struct mssr_mod_reparent {
unsigned int clk, parent;
};
extern void cpg_core_nullify_range(struct cpg_core_clk *core_clks,
unsigned int num_core_clks,
unsigned int first_clk,
unsigned int last_clk);
extern void mssr_mod_nullify(struct mssr_mod_clk *mod_clks,
unsigned int num_mod_clks,
const unsigned int *clks, unsigned int n);
extern void mssr_mod_reparent(struct mssr_mod_clk *mod_clks,
unsigned int num_mod_clks,
const struct mssr_mod_reparent *clks,
unsigned int n);
#endif


@ -393,9 +393,10 @@ static int nv_read_register(struct amdgpu_device *adev, u32 se_num,
*value = 0;
for (i = 0; i < ARRAY_SIZE(nv_allowed_read_registers); i++) {
en = &nv_allowed_read_registers[i];
if (adev->reg_offset[en->hwip][en->inst] &&
reg_offset != (adev->reg_offset[en->hwip][en->inst][en->seg]
+ en->reg_offset))
if (!adev->reg_offset[en->hwip][en->inst])
continue;
else if (reg_offset != (adev->reg_offset[en->hwip][en->inst][en->seg]
+ en->reg_offset))
continue;
*value = nv_get_register_value(adev,

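In the original condition, a NULL adev->reg_offset[hwip][inst] row made the whole && expression false, so the loop skipped the continue and treated the entry as a match, dereferencing the NULL row further down; the fix (here and in the soc15 hunk below) tests the pointer on its own first. A generic sketch of that bug class (toy table, not the amdgpu structures):

#include <stdio.h>

static const int row0[] = { 10, 20 };
static const int *table[] = { row0, NULL };

static int find(int wanted)
{
	for (unsigned int i = 0; i < 2; i++) {
		/* Buggy form: if (table[i] && table[i][0] != wanted)
		 *                     continue;
		 * lets a NULL row fall through as a "match". */
		if (!table[i])
			continue;     /* fixed: skip absent rows */
		else if (table[i][0] != wanted)
			continue;
		return (int)i;
	}
	return -1;
}

int main(void)
{
	printf("matching row: %d\n", find(10));
	return 0;
}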
View File

@@ -439,8 +439,9 @@ static int soc15_read_register(struct amdgpu_device *adev, u32 se_num,
*value = 0;
for (i = 0; i < ARRAY_SIZE(soc15_allowed_read_registers); i++) {
en = &soc15_allowed_read_registers[i];
if (adev->reg_offset[en->hwip][en->inst] &&
reg_offset != (adev->reg_offset[en->hwip][en->inst][en->seg]
if (!adev->reg_offset[en->hwip][en->inst])
continue;
else if (reg_offset != (adev->reg_offset[en->hwip][en->inst][en->seg]
+ en->reg_offset))
continue;

View File

@@ -47,19 +47,31 @@
static const struct amd_ip_funcs soc21_common_ip_funcs;
/* SOC21 */
static const struct amdgpu_video_codec_info vcn_4_0_0_video_codecs_encode_array[] =
static const struct amdgpu_video_codec_info vcn_4_0_0_video_codecs_encode_array_vcn0[] =
{
{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 2304, 0)},
{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 4096, 2304, 0)},
};
static const struct amdgpu_video_codecs vcn_4_0_0_video_codecs_encode =
static const struct amdgpu_video_codec_info vcn_4_0_0_video_codecs_encode_array_vcn1[] =
{
.codec_count = ARRAY_SIZE(vcn_4_0_0_video_codecs_encode_array),
.codec_array = vcn_4_0_0_video_codecs_encode_array,
{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 2304, 0)},
{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 4096, 2304, 0)},
};
static const struct amdgpu_video_codec_info vcn_4_0_0_video_codecs_decode_array[] =
static const struct amdgpu_video_codecs vcn_4_0_0_video_codecs_encode_vcn0 =
{
.codec_count = ARRAY_SIZE(vcn_4_0_0_video_codecs_encode_array_vcn0),
.codec_array = vcn_4_0_0_video_codecs_encode_array_vcn0,
};
static const struct amdgpu_video_codecs vcn_4_0_0_video_codecs_encode_vcn1 =
{
.codec_count = ARRAY_SIZE(vcn_4_0_0_video_codecs_encode_array_vcn1),
.codec_array = vcn_4_0_0_video_codecs_encode_array_vcn1,
};
static const struct amdgpu_video_codec_info vcn_4_0_0_video_codecs_decode_array_vcn0[] =
{
{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 186)},
@@ -68,23 +80,47 @@ static const struct amdgpu_video_codec_info vcn_4_0_0_video_codecs_decode_array[
{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_AV1, 8192, 4352, 0)},
};
static const struct amdgpu_video_codecs vcn_4_0_0_video_codecs_decode =
static const struct amdgpu_video_codec_info vcn_4_0_0_video_codecs_decode_array_vcn1[] =
{
.codec_count = ARRAY_SIZE(vcn_4_0_0_video_codecs_decode_array),
.codec_array = vcn_4_0_0_video_codecs_decode_array,
{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 186)},
{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 0)},
{codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 0)},
};
static const struct amdgpu_video_codecs vcn_4_0_0_video_codecs_decode_vcn0 =
{
.codec_count = ARRAY_SIZE(vcn_4_0_0_video_codecs_decode_array_vcn0),
.codec_array = vcn_4_0_0_video_codecs_decode_array_vcn0,
};
static const struct amdgpu_video_codecs vcn_4_0_0_video_codecs_decode_vcn1 =
{
.codec_count = ARRAY_SIZE(vcn_4_0_0_video_codecs_decode_array_vcn1),
.codec_array = vcn_4_0_0_video_codecs_decode_array_vcn1,
};
static int soc21_query_video_codecs(struct amdgpu_device *adev, bool encode,
const struct amdgpu_video_codecs **codecs)
{
switch (adev->ip_versions[UVD_HWIP][0]) {
if (adev->vcn.num_vcn_inst == hweight8(adev->vcn.harvest_config))
return -EINVAL;
switch (adev->ip_versions[UVD_HWIP][0]) {
case IP_VERSION(4, 0, 0):
case IP_VERSION(4, 0, 2):
if (encode)
*codecs = &vcn_4_0_0_video_codecs_encode;
else
*codecs = &vcn_4_0_0_video_codecs_decode;
case IP_VERSION(4, 0, 4):
if (adev->vcn.harvest_config & AMDGPU_VCN_HARVEST_VCN0) {
if (encode)
*codecs = &vcn_4_0_0_video_codecs_encode_vcn1;
else
*codecs = &vcn_4_0_0_video_codecs_decode_vcn1;
} else {
if (encode)
*codecs = &vcn_4_0_0_video_codecs_encode_vcn0;
else
*codecs = &vcn_4_0_0_video_codecs_decode_vcn0;
}
return 0;
default:
return -EINVAL;
@@ -254,9 +290,10 @@ static int soc21_read_register(struct amdgpu_device *adev, u32 se_num,
*value = 0;
for (i = 0; i < ARRAY_SIZE(soc21_allowed_read_registers); i++) {
en = &soc21_allowed_read_registers[i];
if (adev->reg_offset[en->hwip][en->inst] &&
reg_offset != (adev->reg_offset[en->hwip][en->inst][en->seg]
+ en->reg_offset))
if (!adev->reg_offset[en->hwip][en->inst])
continue;
else if (reg_offset != (adev->reg_offset[en->hwip][en->inst][en->seg]
+ en->reg_offset))
continue;
*value = soc21_get_register_value(adev,

View File

@@ -280,7 +280,7 @@ phys_addr_t kfd_get_process_doorbells(struct kfd_process_device *pdd)
if (!pdd->doorbell_index) {
int r = kfd_alloc_process_doorbells(pdd->dev,
&pdd->doorbell_index);
if (r)
if (r < 0)
return 0;
}
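Hedged aside on the `r < 0` check above: this is the usual pattern when a helper returns a non-negative index on success and a negative errno on failure, where a plain `if (r)` would reject valid positive indices. A minimal userspace sketch with illustrative names (not the driver's API):

#include <errno.h>

/* Hypothetical helper: non-negative index on success, -errno on failure. */
static int alloc_index(int have_room)
{
	return have_room ? 41 : -ENOMEM;
}

static int caller(void)
{
	int r = alloc_index(1);

	if (r < 0)	/* `if (r)` would misread index 41 as an error */
		return r;
	return 0;
}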

View File

@@ -2109,13 +2109,19 @@ static bool dcn32_resource_construct(
dc->caps.max_cursor_size = 64;
dc->caps.min_horizontal_blanking_period = 80;
dc->caps.dmdata_alloc_size = 2048;
dc->caps.mall_size_per_mem_channel = 0;
dc->caps.mall_size_per_mem_channel = 4;
dc->caps.mall_size_total = 0;
dc->caps.cursor_cache_size = dc->caps.max_cursor_size * dc->caps.max_cursor_size * 8;
dc->caps.cache_line_size = 64;
dc->caps.cache_num_ways = 16;
dc->caps.max_cab_allocation_bytes = 67108864; // 64MB = 1024 * 1024 * 64
/* Calculate the available MALL space */
dc->caps.max_cab_allocation_bytes = dcn32_calc_num_avail_chans_for_mall(
dc, dc->ctx->dc_bios->vram_info.num_chans) *
dc->caps.mall_size_per_mem_channel * 1024 * 1024;
dc->caps.mall_size_total = dc->caps.max_cab_allocation_bytes;
dc->caps.subvp_fw_processing_delay_us = 15;
dc->caps.subvp_prefetch_end_to_mall_start_us = 15;
dc->caps.subvp_swath_height_margin_lines = 16;
@@ -2545,3 +2551,55 @@ struct pipe_ctx *dcn32_acquire_idle_pipe_for_head_pipe_in_layer(
return idle_pipe;
}
unsigned int dcn32_calc_num_avail_chans_for_mall(struct dc *dc, int num_chans)
{
/*
* DCN32 and DCN321 SKUs may have different sizes for MALL
* but we may not be able to access all the MALL space.
* If the num_chans is power of 2, then we can access all
* of the available MALL space. Otherwise, we can only
* access:
*
* max_cab_size_in_bytes = total_cache_size_in_bytes *
* ((2^floor(log2(num_chans)))/num_chans)
*
* Calculating the MALL sizes for all available SKUs, we
* have come up with the follow simplified check.
* - we have max_chans which provides the max MALL size.
* Each chans supports 4MB of MALL so:
*
* total_cache_size_in_bytes = max_chans * 4 MB
*
* - we have avail_chans which shows the number of channels
* we can use if we can't access the entire MALL space.
* It is generally half of max_chans
* - so we use the following checks:
*
* if (num_chans == max_chans), return max_chans
* if (num_chans < max_chans), return avail_chans
*
* - exception is GC_11_0_0 where we can't access max_chans,
* so we define max_avail_chans as the maximum available
* MALL space
*
*/
int gc_11_0_0_max_chans = 48;
int gc_11_0_0_max_avail_chans = 32;
int gc_11_0_0_avail_chans = 16;
int gc_11_0_3_max_chans = 16;
int gc_11_0_3_avail_chans = 8;
int gc_11_0_2_max_chans = 8;
int gc_11_0_2_avail_chans = 4;
if (ASICREV_IS_GC_11_0_0(dc->ctx->asic_id.hw_internal_rev)) {
return (num_chans == gc_11_0_0_max_chans) ?
gc_11_0_0_max_avail_chans : gc_11_0_0_avail_chans;
} else if (ASICREV_IS_GC_11_0_2(dc->ctx->asic_id.hw_internal_rev)) {
return (num_chans == gc_11_0_2_max_chans) ?
gc_11_0_2_max_chans : gc_11_0_2_avail_chans;
} else { // if (ASICREV_IS_GC_11_0_3(dc->ctx->asic_id.hw_internal_rev)) {
return (num_chans == gc_11_0_3_max_chans) ?
gc_11_0_3_max_chans : gc_11_0_3_avail_chans;
}
}
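Hedged aside: the fraction in the comment above can be checked numerically with plain userspace C (not part of the patch); plugging in the GC_11_0_0 constants reproduces the 32-channel figure the function returns:

#include <stdio.h>

/* 2^floor(log2(n)), computed without floating point */
static unsigned long long pow2_floor(unsigned long long n)
{
	unsigned long long p = 1;

	while (p * 2 <= n)
		p *= 2;
	return p;
}

int main(void)
{
	unsigned long long num_chans = 48;	/* gc_11_0_0_max_chans */
	unsigned long long total = num_chans * 4 * 1024 * 1024; /* 4 MB/chan */
	unsigned long long usable = total * pow2_floor(num_chans) / num_chans;

	/* 2^floor(log2(48)) = 32, so 32/48 of the 192 MB is reachable:
	 * 128 MB, i.e. the equivalent of gc_11_0_0_max_avail_chans = 32.
	 */
	printf("%llu MB usable\n", usable / (1024 * 1024));
	return 0;
}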

View File

@@ -142,6 +142,10 @@ void dcn32_restore_mall_state(struct dc *dc,
struct dc_state *context,
struct mall_temp_config *temp_config);
bool dcn32_allow_subvp_with_active_margin(struct pipe_ctx *pipe);
unsigned int dcn32_calc_num_avail_chans_for_mall(struct dc *dc, int num_chans);
/* definitions for run time init of reg offsets */
/* CLK SRC */

View File

@@ -1697,11 +1697,18 @@ static bool dcn321_resource_construct(
dc->caps.max_cursor_size = 64;
dc->caps.min_horizontal_blanking_period = 80;
dc->caps.dmdata_alloc_size = 2048;
dc->caps.mall_size_per_mem_channel = 0;
dc->caps.mall_size_per_mem_channel = 4;
dc->caps.mall_size_total = 0;
dc->caps.cursor_cache_size = dc->caps.max_cursor_size * dc->caps.max_cursor_size * 8;
dc->caps.cache_line_size = 64;
dc->caps.cache_num_ways = 16;
/* Calculate the available MALL space */
dc->caps.max_cab_allocation_bytes = dcn32_calc_num_avail_chans_for_mall(
dc, dc->ctx->dc_bios->vram_info.num_chans) *
dc->caps.mall_size_per_mem_channel * 1024 * 1024;
dc->caps.mall_size_total = dc->caps.max_cab_allocation_bytes;
dc->caps.max_cab_allocation_bytes = 33554432; // 32MB = 1024 * 1024 * 32
dc->caps.subvp_fw_processing_delay_us = 15;
dc->caps.subvp_prefetch_end_to_mall_start_us = 15;

View File

@@ -676,7 +676,9 @@ static bool dcn32_assign_subvp_pipe(struct dc *dc,
*/
if (pipe->plane_state && !pipe->top_pipe &&
pipe->stream->mall_stream_config.type == SUBVP_NONE && refresh_rate < 120 && !pipe->plane_state->address.tmz_surface &&
vba->ActiveDRAMClockChangeLatencyMarginPerState[vba->VoltageLevel][vba->maxMpcComb][vba->pipe_plane[pipe_idx]] <= 0) {
(vba->ActiveDRAMClockChangeLatencyMarginPerState[vba->VoltageLevel][vba->maxMpcComb][vba->pipe_plane[pipe_idx]] <= 0 ||
(vba->ActiveDRAMClockChangeLatencyMarginPerState[vba->VoltageLevel][vba->maxMpcComb][vba->pipe_plane[pipe_idx]] > 0 &&
dcn32_allow_subvp_with_active_margin(pipe)))) {
while (pipe) {
num_pipes++;
pipe = pipe->bottom_pipe;
@@ -2379,8 +2381,11 @@ void dcn32_update_bw_bounding_box_fpu(struct dc *dc, struct clk_bw_params *bw_pa
}
/* Override from VBIOS for num_chan */
if (dc->ctx->dc_bios->vram_info.num_chans)
if (dc->ctx->dc_bios->vram_info.num_chans) {
dcn3_2_soc.num_chans = dc->ctx->dc_bios->vram_info.num_chans;
dcn3_2_soc.mall_allocated_for_dcn_mbytes = (double)(dcn32_calc_num_avail_chans_for_mall(dc,
dc->ctx->dc_bios->vram_info.num_chans) * dc->caps.mall_size_per_mem_channel);
}
if (dc->ctx->dc_bios->vram_info.dram_channel_width_bytes)
dcn3_2_soc.dram_channel_width_bytes = dc->ctx->dc_bios->vram_info.dram_channel_width_bytes;
@@ -2558,3 +2563,30 @@ void dcn32_zero_pipe_dcc_fraction(display_e2e_pipe_params_st *pipes,
pipes[pipe_cnt].pipe.src.dcc_fraction_of_zs_req_luma = 0;
pipes[pipe_cnt].pipe.src.dcc_fraction_of_zs_req_chroma = 0;
}
bool dcn32_allow_subvp_with_active_margin(struct pipe_ctx *pipe)
{
bool allow = false;
uint32_t refresh_rate = 0;
/* Allow subvp on displays that have active margin for 2560x1440@60hz displays
* only for now. There must be no scaling as well.
*
* For now we only enable on 2560x1440@60hz displays to enable 4K60 + 1440p60 configs
* for p-state switching.
*/
if (pipe->stream && pipe->plane_state) {
refresh_rate = (pipe->stream->timing.pix_clk_100hz * 100 +
pipe->stream->timing.v_total * pipe->stream->timing.h_total - 1)
/ (double)(pipe->stream->timing.v_total * pipe->stream->timing.h_total);
if (pipe->stream->timing.v_addressable == 1440 &&
pipe->stream->timing.h_addressable == 2560 &&
refresh_rate >= 55 && refresh_rate <= 65 &&
pipe->plane_state->src_rect.height == 1440 &&
pipe->plane_state->src_rect.width == 2560 &&
pipe->plane_state->dst_rect.height == 1440 &&
pipe->plane_state->dst_rect.width == 2560)
allow = true;
}
return allow;
}
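Hedged aside on the refresh-rate expression above: adding `v_total * h_total - 1` to the numerator turns the division into a round-up, which only matters under integer rounding. The timings below are assumed (merely plausible for 2560x1440@60) and are not taken from the patch:

#include <stdio.h>

int main(void)
{
	unsigned long long pix_clk_100hz = 2415000;	/* 241.5 MHz, assumed */
	unsigned long long h_total = 2720, v_total = 1481; /* assumed */
	unsigned long long refresh =
		(pix_clk_100hz * 100 + v_total * h_total - 1) /
		(v_total * h_total);

	/* 241500000 / 4028320 = 59.95..., rounded up to 60 Hz, which lands
	 * inside the 55..65 window checked by the function above.
	 */
	printf("refresh = %llu Hz\n", refresh);
	return 0;
}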

View File

@@ -534,8 +534,11 @@ void dcn321_update_bw_bounding_box_fpu(struct dc *dc, struct clk_bw_params *bw_p
}
/* Override from VBIOS for num_chan */
if (dc->ctx->dc_bios->vram_info.num_chans)
if (dc->ctx->dc_bios->vram_info.num_chans) {
dcn3_21_soc.num_chans = dc->ctx->dc_bios->vram_info.num_chans;
dcn3_21_soc.mall_allocated_for_dcn_mbytes = (double)(dcn32_calc_num_avail_chans_for_mall(dc,
dc->ctx->dc_bios->vram_info.num_chans) * dc->caps.mall_size_per_mem_channel);
}
if (dc->ctx->dc_bios->vram_info.dram_channel_width_bytes)
dcn3_21_soc.dram_channel_width_bytes = dc->ctx->dc_bios->vram_info.dram_channel_width_bytes;

View File

@@ -44,10 +44,8 @@ int drm_hdmi_infoframe_set_hdr_metadata(struct hdmi_drm_infoframe *frame,
/* Sink EOTF is Bit map while infoframe is absolute values */
if (!is_eotf_supported(hdr_metadata->hdmi_metadata_type1.eotf,
connector->hdr_sink_metadata.hdmi_type1.eotf)) {
DRM_DEBUG_KMS("EOTF Not Supported\n");
return -EINVAL;
}
connector->hdr_sink_metadata.hdmi_type1.eotf))
DRM_DEBUG_KMS("Unknown EOTF %d\n", hdr_metadata->hdmi_metadata_type1.eotf);
err = hdmi_drm_infoframe_init(frame);
if (err < 0)

View File

@@ -1070,6 +1070,7 @@ static void drm_atomic_connector_print_state(struct drm_printer *p,
drm_printf(p, "connector[%u]: %s\n", connector->base.id, connector->name);
drm_printf(p, "\tcrtc=%s\n", state->crtc ? state->crtc->name : "(null)");
drm_printf(p, "\tself_refresh_aware=%d\n", state->self_refresh_aware);
drm_printf(p, "\tmax_requested_bpc=%d\n", state->max_requested_bpc);
if (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK)
if (state->writeback_job && state->writeback_job->fb)

View File

@@ -2053,7 +2053,8 @@ void icl_dsi_init(struct drm_i915_private *dev_priv)
/* attach connector to encoder */
intel_connector_attach_encoder(intel_connector, encoder);
intel_bios_init_panel(dev_priv, &intel_connector->panel, NULL, NULL);
encoder->devdata = intel_bios_encoder_data_lookup(dev_priv, port);
intel_bios_init_panel_late(dev_priv, &intel_connector->panel, encoder->devdata, NULL);
mutex_lock(&dev->mode_config.mutex);
intel_panel_add_vbt_lfp_fixed_mode(intel_connector);

View File

@@ -620,14 +620,14 @@ static void dump_pnp_id(struct drm_i915_private *i915,
static int opregion_get_panel_type(struct drm_i915_private *i915,
const struct intel_bios_encoder_data *devdata,
const struct edid *edid)
const struct edid *edid, bool use_fallback)
{
return intel_opregion_get_panel_type(i915);
}
static int vbt_get_panel_type(struct drm_i915_private *i915,
const struct intel_bios_encoder_data *devdata,
const struct edid *edid)
const struct edid *edid, bool use_fallback)
{
const struct bdb_lvds_options *lvds_options;
@@ -652,7 +652,7 @@ static int vbt_get_panel_type(struct drm_i915_private *i915,
static int pnpid_get_panel_type(struct drm_i915_private *i915,
const struct intel_bios_encoder_data *devdata,
const struct edid *edid)
const struct edid *edid, bool use_fallback)
{
const struct bdb_lvds_lfp_data *data;
const struct bdb_lvds_lfp_data_ptrs *ptrs;
@@ -701,9 +701,9 @@ static int pnpid_get_panel_type(struct drm_i915_private *i915,
static int fallback_get_panel_type(struct drm_i915_private *i915,
const struct intel_bios_encoder_data *devdata,
const struct edid *edid)
const struct edid *edid, bool use_fallback)
{
return 0;
return use_fallback ? 0 : -1;
}
enum panel_type {
@@ -715,13 +715,13 @@ enum panel_type {
static int get_panel_type(struct drm_i915_private *i915,
const struct intel_bios_encoder_data *devdata,
const struct edid *edid)
const struct edid *edid, bool use_fallback)
{
struct {
const char *name;
int (*get_panel_type)(struct drm_i915_private *i915,
const struct intel_bios_encoder_data *devdata,
const struct edid *edid);
const struct edid *edid, bool use_fallback);
int panel_type;
} panel_types[] = {
[PANEL_TYPE_OPREGION] = {
@@ -744,7 +744,8 @@ static int get_panel_type(struct drm_i915_private *i915,
int i;
for (i = 0; i < ARRAY_SIZE(panel_types); i++) {
panel_types[i].panel_type = panel_types[i].get_panel_type(i915, devdata, edid);
panel_types[i].panel_type = panel_types[i].get_panel_type(i915, devdata,
edid, use_fallback);
drm_WARN_ON(&i915->drm, panel_types[i].panel_type > 0xf &&
panel_types[i].panel_type != 0xff);
@@ -2592,6 +2593,12 @@ intel_bios_encoder_supports_edp(const struct intel_bios_encoder_data *devdata)
devdata->child.device_type & DEVICE_TYPE_INTERNAL_CONNECTOR;
}
static bool
intel_bios_encoder_supports_dsi(const struct intel_bios_encoder_data *devdata)
{
return devdata->child.device_type & DEVICE_TYPE_MIPI_OUTPUT;
}
static int _intel_bios_hdmi_level_shift(const struct intel_bios_encoder_data *devdata)
{
if (!devdata || devdata->i915->display.vbt.version < 158)
@@ -2642,7 +2649,7 @@ static void print_ddi_port(const struct intel_bios_encoder_data *devdata,
{
struct drm_i915_private *i915 = devdata->i915;
const struct child_device_config *child = &devdata->child;
bool is_dvi, is_hdmi, is_dp, is_edp, is_crt, supports_typec_usb, supports_tbt;
bool is_dvi, is_hdmi, is_dp, is_edp, is_dsi, is_crt, supports_typec_usb, supports_tbt;
int dp_boost_level, dp_max_link_rate, hdmi_boost_level, hdmi_level_shift, max_tmds_clock;
is_dvi = intel_bios_encoder_supports_dvi(devdata);
@@ -2650,13 +2657,14 @@ static void print_ddi_port(const struct intel_bios_encoder_data *devdata,
is_crt = intel_bios_encoder_supports_crt(devdata);
is_hdmi = intel_bios_encoder_supports_hdmi(devdata);
is_edp = intel_bios_encoder_supports_edp(devdata);
is_dsi = intel_bios_encoder_supports_dsi(devdata);
supports_typec_usb = intel_bios_encoder_supports_typec_usb(devdata);
supports_tbt = intel_bios_encoder_supports_tbt(devdata);
drm_dbg_kms(&i915->drm,
"Port %c VBT info: CRT:%d DVI:%d HDMI:%d DP:%d eDP:%d LSPCON:%d USB-Type-C:%d TBT:%d DSC:%d\n",
port_name(port), is_crt, is_dvi, is_hdmi, is_dp, is_edp,
"Port %c VBT info: CRT:%d DVI:%d HDMI:%d DP:%d eDP:%d DSI:%d LSPCON:%d USB-Type-C:%d TBT:%d DSC:%d\n",
port_name(port), is_crt, is_dvi, is_hdmi, is_dp, is_edp, is_dsi,
HAS_LSPCON(i915) && child->lspcon,
supports_typec_usb, supports_tbt,
devdata->dsc != NULL);
@@ -2701,6 +2709,8 @@ static void parse_ddi_port(struct intel_bios_encoder_data *devdata)
enum port port;
port = dvo_port_to_port(i915, child->dvo_port);
if (port == PORT_NONE && DISPLAY_VER(i915) >= 11)
port = dsi_dvo_port_to_port(i915, child->dvo_port);
if (port == PORT_NONE)
return;
@@ -3191,14 +3201,26 @@ void intel_bios_init(struct drm_i915_private *i915)
kfree(oprom_vbt);
}
void intel_bios_init_panel(struct drm_i915_private *i915,
struct intel_panel *panel,
const struct intel_bios_encoder_data *devdata,
const struct edid *edid)
static void intel_bios_init_panel(struct drm_i915_private *i915,
struct intel_panel *panel,
const struct intel_bios_encoder_data *devdata,
const struct edid *edid,
bool use_fallback)
{
init_vbt_panel_defaults(panel);
/* already have it? */
if (panel->vbt.panel_type >= 0) {
drm_WARN_ON(&i915->drm, !use_fallback);
return;
}
panel->vbt.panel_type = get_panel_type(i915, devdata, edid);
panel->vbt.panel_type = get_panel_type(i915, devdata,
edid, use_fallback);
if (panel->vbt.panel_type < 0) {
drm_WARN_ON(&i915->drm, use_fallback);
return;
}
init_vbt_panel_defaults(panel);
parse_panel_options(i915, panel);
parse_generic_dtd(i915, panel);
@@ -3213,6 +3235,21 @@ void intel_bios_init_panel(struct drm_i915_private *i915,
parse_mipi_sequence(i915, panel);
}
void intel_bios_init_panel_early(struct drm_i915_private *i915,
struct intel_panel *panel,
const struct intel_bios_encoder_data *devdata)
{
intel_bios_init_panel(i915, panel, devdata, NULL, false);
}
void intel_bios_init_panel_late(struct drm_i915_private *i915,
struct intel_panel *panel,
const struct intel_bios_encoder_data *devdata,
const struct edid *edid)
{
intel_bios_init_panel(i915, panel, devdata, edid, true);
}
/**
* intel_bios_driver_remove - Free any resources allocated by intel_bios_init()
* @i915: i915 device instance

View File

@@ -232,10 +232,13 @@ struct mipi_pps_data {
} __packed;
void intel_bios_init(struct drm_i915_private *dev_priv);
void intel_bios_init_panel(struct drm_i915_private *dev_priv,
struct intel_panel *panel,
const struct intel_bios_encoder_data *devdata,
const struct edid *edid);
void intel_bios_init_panel_early(struct drm_i915_private *dev_priv,
struct intel_panel *panel,
const struct intel_bios_encoder_data *devdata);
void intel_bios_init_panel_late(struct drm_i915_private *dev_priv,
struct intel_panel *panel,
const struct intel_bios_encoder_data *devdata,
const struct edid *edid);
void intel_bios_fini_panel(struct intel_panel *panel);
void intel_bios_driver_remove(struct drm_i915_private *dev_priv);
bool intel_bios_is_valid_vbt(const void *buf, size_t size);

View File

@@ -54,7 +54,7 @@ int intel_connector_init(struct intel_connector *connector)
__drm_atomic_helper_connector_reset(&connector->base,
&conn_state->base);
INIT_LIST_HEAD(&connector->panel.fixed_modes);
intel_panel_init_alloc(connector);
return 0;
}

View File

@@ -291,7 +291,7 @@ struct intel_vbt_panel_data {
struct drm_display_mode *sdvo_lvds_vbt_mode; /* if any */
/* Feature bits */
unsigned int panel_type:4;
int panel_type;
unsigned int lvds_dither:1;
unsigned int bios_lvds_val; /* initial [PCH_]LVDS reg val in VBIOS */

View File

@@ -5179,6 +5179,9 @@ static bool intel_edp_init_connector(struct intel_dp *intel_dp,
return false;
}
intel_bios_init_panel_early(dev_priv, &intel_connector->panel,
encoder->devdata);
intel_pps_init(intel_dp);
/* Cache DPCD and EDID for edp. */
@@ -5213,8 +5216,8 @@ static bool intel_edp_init_connector(struct intel_dp *intel_dp,
}
intel_connector->edid = edid;
intel_bios_init_panel(dev_priv, &intel_connector->panel,
encoder->devdata, IS_ERR(edid) ? NULL : edid);
intel_bios_init_panel_late(dev_priv, &intel_connector->panel,
encoder->devdata, IS_ERR(edid) ? NULL : edid);
intel_panel_add_edid_fixed_modes(intel_connector, true);

View File

@@ -967,8 +967,8 @@ void intel_lvds_init(struct drm_i915_private *dev_priv)
}
intel_connector->edid = edid;
intel_bios_init_panel(dev_priv, &intel_connector->panel, NULL,
IS_ERR(edid) ? NULL : edid);
intel_bios_init_panel_late(dev_priv, &intel_connector->panel, NULL,
IS_ERR(edid) ? NULL : edid);
/* Try EDID first */
intel_panel_add_edid_fixed_modes(intel_connector,

View File

@@ -648,6 +648,14 @@ intel_panel_mode_valid(struct intel_connector *connector,
return MODE_OK;
}
void intel_panel_init_alloc(struct intel_connector *connector)
{
struct intel_panel *panel = &connector->panel;
connector->panel.vbt.panel_type = -1;
INIT_LIST_HEAD(&panel->fixed_modes);
}
int intel_panel_init(struct intel_connector *connector)
{
struct intel_panel *panel = &connector->panel;

View File

@@ -18,6 +18,7 @@ struct intel_connector;
struct intel_crtc_state;
struct intel_encoder;
void intel_panel_init_alloc(struct intel_connector *connector);
int intel_panel_init(struct intel_connector *connector);
void intel_panel_fini(struct intel_connector *connector);
enum drm_connector_status

View File

@@ -2891,7 +2891,7 @@ intel_sdvo_lvds_init(struct intel_sdvo *intel_sdvo, int device)
if (!intel_sdvo_create_enhance_property(intel_sdvo, intel_sdvo_connector))
goto err;
intel_bios_init_panel(i915, &intel_connector->panel, NULL, NULL);
intel_bios_init_panel_late(i915, &intel_connector->panel, NULL, NULL);
/*
* Fetch modes from VBT. For SDVO prefer the VBT mode since some

View File

@@ -1925,7 +1925,7 @@ void vlv_dsi_init(struct drm_i915_private *dev_priv)
intel_dsi->panel_power_off_time = ktime_get_boottime();
intel_bios_init_panel(dev_priv, &intel_connector->panel, NULL, NULL);
intel_bios_init_panel_late(dev_priv, &intel_connector->panel, NULL, NULL);
if (intel_connector->panel.vbt.dsi.config->dual_link)
intel_dsi->ports = BIT(PORT_A) | BIT(PORT_C);

View File

@@ -151,8 +151,8 @@ static void a5xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
OUT_RING(ring, 1);
/* Enable local preemption for finegrain preemption */
OUT_PKT7(ring, CP_PREEMPT_ENABLE_GLOBAL, 1);
OUT_RING(ring, 0x02);
OUT_PKT7(ring, CP_PREEMPT_ENABLE_LOCAL, 1);
OUT_RING(ring, 0x1);
/* Allow CP_CONTEXT_SWITCH_YIELD packets in the IB2 */
OUT_PKT7(ring, CP_YIELD_ENABLE, 1);
@@ -808,7 +808,7 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
gpu_write(gpu, REG_A5XX_RBBM_AHB_CNTL2, 0x0000003F);
/* Set the highest bank bit */
if (adreno_is_a540(adreno_gpu))
if (adreno_is_a540(adreno_gpu) || adreno_is_a530(adreno_gpu))
regbit = 2;
else
regbit = 1;

View File

@@ -63,7 +63,7 @@ static struct msm_ringbuffer *get_next_ring(struct msm_gpu *gpu)
struct msm_ringbuffer *ring = gpu->rb[i];
spin_lock_irqsave(&ring->preempt_lock, flags);
empty = (get_wptr(ring) == ring->memptrs->rptr);
empty = (get_wptr(ring) == gpu->funcs->get_rptr(gpu, ring));
spin_unlock_irqrestore(&ring->preempt_lock, flags);
if (!empty)
@@ -208,6 +208,7 @@ void a5xx_preempt_hw_init(struct msm_gpu *gpu)
a5xx_gpu->preempt[i]->wptr = 0;
a5xx_gpu->preempt[i]->rptr = 0;
a5xx_gpu->preempt[i]->rbase = gpu->rb[i]->iova;
a5xx_gpu->preempt[i]->rptr_addr = shadowptr(a5xx_gpu, gpu->rb[i]);
}
/* Write a 0 to signal that we aren't switching pagetables */
@@ -259,7 +260,6 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu,
ptr->data = 0;
ptr->cntl = MSM_GPU_RB_CNTL_DEFAULT | AXXX_CP_RB_CNTL_NO_UPDATE;
ptr->rptr_addr = shadowptr(a5xx_gpu, ring);
ptr->counter = counters_iova;
return 0;

View File

@@ -551,13 +551,15 @@ static int adreno_bind(struct device *dev, struct device *master, void *data)
return 0;
}
static int adreno_system_suspend(struct device *dev);
static void adreno_unbind(struct device *dev, struct device *master,
void *data)
{
struct msm_drm_private *priv = dev_get_drvdata(master);
struct msm_gpu *gpu = dev_to_gpu(dev);
pm_runtime_force_suspend(dev);
if (pm_runtime_enabled(dev))
WARN_ON_ONCE(adreno_system_suspend(dev));
gpu->funcs->destroy(gpu);
priv->gpu_pdev = NULL;
@@ -609,7 +611,7 @@ static int adreno_remove(struct platform_device *pdev)
static void adreno_shutdown(struct platform_device *pdev)
{
pm_runtime_force_suspend(&pdev->dev);
WARN_ON_ONCE(adreno_system_suspend(&pdev->dev));
}
static const struct of_device_id dt_match[] = {

View File

@@ -12,11 +12,15 @@
#include "dpu_hw_catalog.h"
#include "dpu_kms.h"
#define VIG_MASK \
#define VIG_BASE_MASK \
(BIT(DPU_SSPP_SRC) | BIT(DPU_SSPP_QOS) |\
BIT(DPU_SSPP_CSC_10BIT) | BIT(DPU_SSPP_CDP) |\
BIT(DPU_SSPP_CDP) |\
BIT(DPU_SSPP_TS_PREFILL) | BIT(DPU_SSPP_EXCL_RECT))
#define VIG_MASK \
(VIG_BASE_MASK | \
BIT(DPU_SSPP_CSC_10BIT))
#define VIG_MSM8998_MASK \
(VIG_MASK | BIT(DPU_SSPP_SCALER_QSEED3))
@@ -29,7 +33,7 @@
#define VIG_SM8250_MASK \
(VIG_MASK | BIT(DPU_SSPP_QOS_8LVL) | BIT(DPU_SSPP_SCALER_QSEED3LITE))
#define VIG_QCM2290_MASK (VIG_MASK | BIT(DPU_SSPP_QOS_8LVL))
#define VIG_QCM2290_MASK (VIG_BASE_MASK | BIT(DPU_SSPP_QOS_8LVL))
#define DMA_MSM8998_MASK \
(BIT(DPU_SSPP_SRC) | BIT(DPU_SSPP_QOS) |\
@@ -51,7 +55,7 @@
(DMA_MSM8998_MASK | BIT(DPU_SSPP_CURSOR))
#define MIXER_MSM8998_MASK \
(BIT(DPU_MIXER_SOURCESPLIT) | BIT(DPU_DIM_LAYER))
(BIT(DPU_MIXER_SOURCESPLIT))
#define MIXER_SDM845_MASK \
(BIT(DPU_MIXER_SOURCESPLIT) | BIT(DPU_DIM_LAYER) | BIT(DPU_MIXER_COMBINED_ALPHA))
@@ -283,7 +287,6 @@ static const struct dpu_caps qcm2290_dpu_caps = {
.max_mixer_width = DEFAULT_DPU_OUTPUT_LINE_WIDTH,
.max_mixer_blendstages = 0x4,
.smart_dma_rev = DPU_SSPP_SMART_DMA_V2,
.ubwc_version = DPU_HW_UBWC_VER_20,
.has_dim_layer = true,
.has_idle_pc = true,
.max_linewidth = 2160,
@@ -604,19 +607,19 @@ static const struct dpu_ctl_cfg sdm845_ctl[] = {
static const struct dpu_ctl_cfg sc7180_ctl[] = {
{
.name = "ctl_0", .id = CTL_0,
.base = 0x1000, .len = 0xE4,
.base = 0x1000, .len = 0x1dc,
.features = BIT(DPU_CTL_ACTIVE_CFG),
.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 9),
},
{
.name = "ctl_1", .id = CTL_1,
.base = 0x1200, .len = 0xE4,
.base = 0x1200, .len = 0x1dc,
.features = BIT(DPU_CTL_ACTIVE_CFG),
.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 10),
},
{
.name = "ctl_2", .id = CTL_2,
.base = 0x1400, .len = 0xE4,
.base = 0x1400, .len = 0x1dc,
.features = BIT(DPU_CTL_ACTIVE_CFG),
.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 11),
},
@@ -810,9 +813,9 @@ static const struct dpu_sspp_cfg msm8998_sspp[] = {
SSPP_BLK("sspp_9", SSPP_DMA1, 0x26000, DMA_MSM8998_MASK,
sdm845_dma_sblk_1, 5, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA1),
SSPP_BLK("sspp_10", SSPP_DMA2, 0x28000, DMA_CURSOR_MSM8998_MASK,
sdm845_dma_sblk_2, 9, SSPP_TYPE_DMA, DPU_CLK_CTRL_CURSOR0),
sdm845_dma_sblk_2, 9, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA2),
SSPP_BLK("sspp_11", SSPP_DMA3, 0x2a000, DMA_CURSOR_MSM8998_MASK,
sdm845_dma_sblk_3, 13, SSPP_TYPE_DMA, DPU_CLK_CTRL_CURSOR1),
sdm845_dma_sblk_3, 13, SSPP_TYPE_DMA, DPU_CLK_CTRL_DMA3),
};
static const struct dpu_sspp_cfg sdm845_sspp[] = {
@@ -1918,8 +1921,6 @@ static const struct dpu_mdss_cfg qcm2290_dpu_cfg = {
.intf = qcm2290_intf,
.vbif_count = ARRAY_SIZE(sdm845_vbif),
.vbif = sdm845_vbif,
.reg_dma_count = 1,
.dma_cfg = &sdm845_regdma,
.perf = &qcm2290_perf_data,
.mdss_irqs = IRQ_SC7180_MASK,
};

View File

@@ -572,6 +572,8 @@ void dpu_rm_release(struct dpu_global_state *global_state,
ARRAY_SIZE(global_state->ctl_to_enc_id), enc->base.id);
_dpu_rm_clear_mapping(global_state->dsc_to_enc_id,
ARRAY_SIZE(global_state->dsc_to_enc_id), enc->base.id);
_dpu_rm_clear_mapping(global_state->dspp_to_enc_id,
ARRAY_SIZE(global_state->dspp_to_enc_id), enc->base.id);
}
int dpu_rm_reserve(

View File

@@ -627,8 +627,8 @@ static struct msm_submit_post_dep *msm_parse_post_deps(struct drm_device *dev,
int ret = 0;
uint32_t i, j;
post_deps = kmalloc_array(nr_syncobjs, sizeof(*post_deps),
GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
post_deps = kcalloc(nr_syncobjs, sizeof(*post_deps),
GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
if (!post_deps)
return ERR_PTR(-ENOMEM);
@@ -643,7 +643,6 @@ static struct msm_submit_post_dep *msm_parse_post_deps(struct drm_device *dev,
}
post_deps[i].point = syncobj_desc.point;
post_deps[i].chain = NULL;
if (syncobj_desc.flags) {
ret = -EINVAL;
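Hedged aside on the kcalloc() switch above: if parsing fails partway, the error path frees whatever each element points at, so pointer fields that were never assigned must already read as NULL, which only a zeroing allocator guarantees. A minimal userspace sketch of the pattern (illustrative names):

#include <stdlib.h>

struct dep { void *chain; };

static void free_deps(struct dep *deps, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		free(deps[i].chain);	/* free(NULL) is a safe no-op */
	free(deps);
}

static struct dep *alloc_deps(size_t n)
{
	/* calloc, like kcalloc, leaves every dep.chain NULL */
	return calloc(n, sizeof(struct dep));
}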

View File

@@ -35,8 +35,9 @@ struct nv50_wndw {
int nv50_wndw_new_(const struct nv50_wndw_func *, struct drm_device *,
enum drm_plane_type, const char *name, int index,
const u32 *format, enum nv50_disp_interlock_type,
u32 interlock_data, u32 heads, struct nv50_wndw **);
const u32 *format, u32 heads,
enum nv50_disp_interlock_type, u32 interlock_data,
struct nv50_wndw **);
void nv50_wndw_flush_set(struct nv50_wndw *, u32 *interlock,
struct nv50_wndw_atom *);
void nv50_wndw_flush_clr(struct nv50_wndw *, u32 *interlock, bool flush,

View File

@@ -261,6 +261,7 @@ static int hid_add_field(struct hid_parser *parser, unsigned report_type, unsign
{
struct hid_report *report;
struct hid_field *field;
unsigned int max_buffer_size = HID_MAX_BUFFER_SIZE;
unsigned int usages;
unsigned int offset;
unsigned int i;
@@ -291,8 +292,11 @@ static int hid_add_field(struct hid_parser *parser, unsigned report_type, unsign
offset = report->size;
report->size += parser->global.report_size * parser->global.report_count;
if (parser->device->ll_driver->max_buffer_size)
max_buffer_size = parser->device->ll_driver->max_buffer_size;
/* Total size check: Allow for possible report index byte */
if (report->size > (HID_MAX_BUFFER_SIZE - 1) << 3) {
if (report->size > (max_buffer_size - 1) << 3) {
hid_err(parser->device, "report is too long\n");
return -1;
}
@@ -1966,6 +1970,7 @@ int hid_report_raw_event(struct hid_device *hid, enum hid_report_type type, u8 *
struct hid_report_enum *report_enum = hid->report_enum + type;
struct hid_report *report;
struct hid_driver *hdrv;
int max_buffer_size = HID_MAX_BUFFER_SIZE;
u32 rsize, csize = size;
u8 *cdata = data;
int ret = 0;
@@ -1981,10 +1986,13 @@ int hid_report_raw_event(struct hid_device *hid, enum hid_report_type type, u8 *
rsize = hid_compute_report_size(report);
if (report_enum->numbered && rsize >= HID_MAX_BUFFER_SIZE)
rsize = HID_MAX_BUFFER_SIZE - 1;
else if (rsize > HID_MAX_BUFFER_SIZE)
rsize = HID_MAX_BUFFER_SIZE;
if (hid->ll_driver->max_buffer_size)
max_buffer_size = hid->ll_driver->max_buffer_size;
if (report_enum->numbered && rsize >= max_buffer_size)
rsize = max_buffer_size - 1;
else if (rsize > max_buffer_size)
rsize = max_buffer_size;
if (csize < rsize) {
dbg_hid("report %d is too short, (%d < %d)\n", report->id,
@@ -2387,7 +2395,12 @@ int hid_hw_raw_request(struct hid_device *hdev,
unsigned char reportnum, __u8 *buf,
size_t len, enum hid_report_type rtype, enum hid_class_request reqtype)
{
if (len < 1 || len > HID_MAX_BUFFER_SIZE || !buf)
unsigned int max_buffer_size = HID_MAX_BUFFER_SIZE;
if (hdev->ll_driver->max_buffer_size)
max_buffer_size = hdev->ll_driver->max_buffer_size;
if (len < 1 || len > max_buffer_size || !buf)
return -EINVAL;
return hdev->ll_driver->raw_request(hdev, reportnum, buf, len,
@@ -2406,7 +2419,12 @@ EXPORT_SYMBOL_GPL(hid_hw_raw_request);
*/
int hid_hw_output_report(struct hid_device *hdev, __u8 *buf, size_t len)
{
if (len < 1 || len > HID_MAX_BUFFER_SIZE || !buf)
unsigned int max_buffer_size = HID_MAX_BUFFER_SIZE;
if (hdev->ll_driver->max_buffer_size)
max_buffer_size = hdev->ll_driver->max_buffer_size;
if (len < 1 || len > max_buffer_size || !buf)
return -EINVAL;
if (hdev->ll_driver->output_report)
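Hedged summary of the clamping rule the hunks above introduce, as a standalone sketch rather than the kernel function: a low-level driver's cap, when set, replaces HID_MAX_BUFFER_SIZE (16384 bytes in mainline), and numbered reports keep one byte for the report ID:

#include <stddef.h>

#define HID_MAX_BUFFER_SIZE 16384	/* value from include/linux/hid.h */

static size_t clamp_report_size(size_t rsize, size_t ll_max, int numbered)
{
	size_t max = ll_max ? ll_max : HID_MAX_BUFFER_SIZE;

	if (numbered && rsize >= max)
		return max - 1;	/* leave room for the report ID byte */
	if (rsize > max)
		return max;
	return rsize;
}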

View File

@@ -395,6 +395,7 @@ struct hid_ll_driver uhid_hid_driver = {
.parse = uhid_hid_parse,
.raw_request = uhid_hid_raw_request,
.output_report = uhid_hid_output_report,
.max_buffer_size = UHID_DATA_MAX,
};
EXPORT_SYMBOL_GPL(uhid_hid_driver);

View File

@@ -109,6 +109,11 @@ static inline void exc3000_schedule_timer(struct exc3000_data *data)
mod_timer(&data->timer, jiffies + msecs_to_jiffies(EXC3000_TIMEOUT_MS));
}
static void exc3000_shutdown_timer(void *timer)
{
del_timer_sync(timer);
}
static int exc3000_read_frame(struct exc3000_data *data, u8 *buf)
{
struct i2c_client *client = data->client;
@@ -386,6 +391,11 @@ static int exc3000_probe(struct i2c_client *client)
if (error)
return error;
error = devm_add_action_or_reset(&client->dev, exc3000_shutdown_timer,
&data->timer);
if (error)
return error;
error = devm_request_threaded_irq(&client->dev, client->irq,
NULL, exc3000_interrupt, IRQF_ONESHOT,
client->name, data);

View File

@@ -33,8 +33,8 @@
#endif
struct wf_lm75_sensor {
int ds1775 : 1;
int inited : 1;
unsigned int ds1775 : 1;
unsigned int inited : 1;
struct i2c_client *i2c;
struct wf_sensor sens;
};
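The change above matters because a plain `int` one-bit bitfield is signed on the compilers the kernel supports, so it can only hold 0 and -1. A minimal userspace demonstration:

#include <stdio.h>

struct flags {
	int signed_bit : 1;
	unsigned int unsigned_bit : 1;
};

int main(void)
{
	struct flags f = { .signed_bit = 1, .unsigned_bit = 1 };

	printf("%d %d\n", f.signed_bit, f.unsigned_bit);	/* -1 1 */
	printf("%d\n", f.signed_bit == 1);			/* 0 */
	return 0;
}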

View File

@@ -274,8 +274,8 @@ struct smu_cpu_power_sensor {
struct list_head link;
struct wf_sensor *volts;
struct wf_sensor *amps;
int fake_volts : 1;
int quadratic : 1;
unsigned int fake_volts : 1;
unsigned int quadratic : 1;
struct wf_sensor sens;
};
#define to_smu_cpu_power(c) container_of(c, struct smu_cpu_power_sensor, sens)

View File

@@ -3482,7 +3482,7 @@ static int ov5640_init_controls(struct ov5640_dev *sensor)
/* Auto/manual gain */
ctrls->auto_gain = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_AUTOGAIN,
0, 1, 1, 1);
ctrls->gain = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_GAIN,
ctrls->gain = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_ANALOGUE_GAIN,
0, 1023, 1, 0);
ctrls->saturation = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_SATURATION,

View File

@@ -130,6 +130,23 @@ static int gpio_ir_recv_probe(struct platform_device *pdev)
"gpio-ir-recv-irq", gpio_dev);
}
static int gpio_ir_recv_remove(struct platform_device *pdev)
{
struct gpio_rc_dev *gpio_dev = platform_get_drvdata(pdev);
struct device *pmdev = gpio_dev->pmdev;
if (pmdev) {
pm_runtime_get_sync(pmdev);
cpu_latency_qos_remove_request(&gpio_dev->qos);
pm_runtime_disable(pmdev);
pm_runtime_put_noidle(pmdev);
pm_runtime_set_suspended(pmdev);
}
return 0;
}
#ifdef CONFIG_PM
static int gpio_ir_recv_suspend(struct device *dev)
{
@@ -189,6 +206,7 @@ MODULE_DEVICE_TABLE(of, gpio_ir_recv_of_match);
static struct platform_driver gpio_ir_recv_driver = {
.probe = gpio_ir_recv_probe,
.remove = gpio_ir_recv_remove,
.driver = {
.name = KBUILD_MODNAME,
.of_match_table = of_match_ptr(gpio_ir_recv_of_match),

View File

@@ -393,6 +393,24 @@ mt7530_fdb_write(struct mt7530_priv *priv, u16 vid,
mt7530_write(priv, MT7530_ATA1 + (i * 4), reg[i]);
}
/* Set up switch core clock for MT7530 */
static void mt7530_pll_setup(struct mt7530_priv *priv)
{
/* Disable PLL */
core_write(priv, CORE_GSWPLL_GRP1, 0);
/* Set core clock into 500Mhz */
core_write(priv, CORE_GSWPLL_GRP2,
RG_GSWPLL_POSDIV_500M(1) |
RG_GSWPLL_FBKDIV_500M(25));
/* Enable PLL */
core_write(priv, CORE_GSWPLL_GRP1,
RG_GSWPLL_EN_PRE |
RG_GSWPLL_POSDIV_200M(2) |
RG_GSWPLL_FBKDIV_200M(32));
}
/* Setup TX circuit including relevant PAD and driving */
static int
mt7530_pad_clk_setup(struct dsa_switch *ds, phy_interface_t interface)
@@ -453,21 +471,6 @@ mt7530_pad_clk_setup(struct dsa_switch *ds, phy_interface_t interface)
core_clear(priv, CORE_TRGMII_GSW_CLK_CG,
REG_GSWCK_EN | REG_TRGMIICK_EN);
/* Setup core clock for MT7530 */
/* Disable PLL */
core_write(priv, CORE_GSWPLL_GRP1, 0);
/* Set core clock into 500Mhz */
core_write(priv, CORE_GSWPLL_GRP2,
RG_GSWPLL_POSDIV_500M(1) |
RG_GSWPLL_FBKDIV_500M(25));
/* Enable PLL */
core_write(priv, CORE_GSWPLL_GRP1,
RG_GSWPLL_EN_PRE |
RG_GSWPLL_POSDIV_200M(2) |
RG_GSWPLL_FBKDIV_200M(32));
/* Setup the MT7530 TRGMII Tx Clock */
core_write(priv, CORE_PLL_GROUP5, RG_LCDDS_PCW_NCPO1(ncpo1));
core_write(priv, CORE_PLL_GROUP6, RG_LCDDS_PCW_NCPO0(0));
@@ -2201,6 +2204,8 @@ mt7530_setup(struct dsa_switch *ds)
SYS_CTRL_PHY_RST | SYS_CTRL_SW_RST |
SYS_CTRL_REG_RST);
mt7530_pll_setup(priv);
/* Enable Port 6 only; P5 as GMAC5 which currently is not supported */
val = mt7530_read(priv, MT7530_MHWTRAP);
val &= ~MHWTRAP_P6_DIS & ~MHWTRAP_PHY_ACCESS;

View File

@@ -890,13 +890,13 @@ static void bgmac_chip_reset_idm_config(struct bgmac *bgmac)
if (iost & BGMAC_BCMA_IOST_ATTACHED) {
flags = BGMAC_BCMA_IOCTL_SW_CLKEN;
if (!bgmac->has_robosw)
if (bgmac->in_init || !bgmac->has_robosw)
flags |= BGMAC_BCMA_IOCTL_SW_RESET;
}
bgmac_clk_enable(bgmac, flags);
}
if (iost & BGMAC_BCMA_IOST_ATTACHED && !bgmac->has_robosw)
if (iost & BGMAC_BCMA_IOST_ATTACHED && (bgmac->in_init || !bgmac->has_robosw))
bgmac_idm_write(bgmac, BCMA_IOCTL,
bgmac_idm_read(bgmac, BCMA_IOCTL) &
~BGMAC_BCMA_IOCTL_SW_RESET);
@@ -1490,6 +1490,8 @@ int bgmac_enet_probe(struct bgmac *bgmac)
struct net_device *net_dev = bgmac->net_dev;
int err;
bgmac->in_init = true;
bgmac_chip_intrs_off(bgmac);
net_dev->irq = bgmac->irq;
@@ -1542,6 +1544,8 @@ int bgmac_enet_probe(struct bgmac *bgmac)
/* Omit FCS from max MTU size */
net_dev->max_mtu = BGMAC_RX_MAX_FRAME_SIZE - ETH_FCS_LEN;
bgmac->in_init = false;
err = register_netdev(bgmac->net_dev);
if (err) {
dev_err(bgmac->dev, "Cannot register net device\n");

View File

@@ -472,6 +472,8 @@ struct bgmac {
int irq;
u32 int_mask;
bool in_init;
/* Current MAC state */
int mac_speed;
int mac_duplex;

View File

@@ -3143,7 +3143,7 @@ static int bnxt_alloc_ring(struct bnxt *bp, struct bnxt_ring_mem_info *rmem)
static void bnxt_free_tpa_info(struct bnxt *bp)
{
int i;
int i, j;
for (i = 0; i < bp->rx_nr_rings; i++) {
struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
@@ -3151,8 +3151,10 @@ static void bnxt_free_tpa_info(struct bnxt *bp)
kfree(rxr->rx_tpa_idx_map);
rxr->rx_tpa_idx_map = NULL;
if (rxr->rx_tpa) {
kfree(rxr->rx_tpa[0].agg_arr);
rxr->rx_tpa[0].agg_arr = NULL;
for (j = 0; j < bp->max_tpa; j++) {
kfree(rxr->rx_tpa[j].agg_arr);
rxr->rx_tpa[j].agg_arr = NULL;
}
}
kfree(rxr->rx_tpa);
rxr->rx_tpa = NULL;
@@ -3161,14 +3163,13 @@ static void bnxt_free_tpa_info(struct bnxt *bp)
static int bnxt_alloc_tpa_info(struct bnxt *bp)
{
int i, j, total_aggs = 0;
int i, j;
bp->max_tpa = MAX_TPA;
if (bp->flags & BNXT_FLAG_CHIP_P5) {
if (!bp->max_tpa_v2)
return 0;
bp->max_tpa = max_t(u16, bp->max_tpa_v2, MAX_TPA_P5);
total_aggs = bp->max_tpa * MAX_SKB_FRAGS;
}
for (i = 0; i < bp->rx_nr_rings; i++) {
@@ -3182,12 +3183,12 @@ static int bnxt_alloc_tpa_info(struct bnxt *bp)
if (!(bp->flags & BNXT_FLAG_CHIP_P5))
continue;
agg = kcalloc(total_aggs, sizeof(*agg), GFP_KERNEL);
rxr->rx_tpa[0].agg_arr = agg;
if (!agg)
return -ENOMEM;
for (j = 1; j < bp->max_tpa; j++)
rxr->rx_tpa[j].agg_arr = agg + j * MAX_SKB_FRAGS;
for (j = 0; j < bp->max_tpa; j++) {
agg = kcalloc(MAX_SKB_FRAGS, sizeof(*agg), GFP_KERNEL);
if (!agg)
return -ENOMEM;
rxr->rx_tpa[j].agg_arr = agg;
}
rxr->rx_tpa_idx_map = kzalloc(sizeof(*rxr->rx_tpa_idx_map),
GFP_KERNEL);
if (!rxr->rx_tpa_idx_map)

View File

@@ -1372,7 +1372,7 @@ ice_add_dscp_pfc_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg)
tlv->ouisubtype = htonl(ouisubtype);
buf[0] = dcbcfg->pfc.pfccap & 0xF;
buf[1] = dcbcfg->pfc.pfcena & 0xF;
buf[1] = dcbcfg->pfc.pfcena;
}
/**

View File

@@ -4145,6 +4145,8 @@ ice_get_module_eeprom(struct net_device *netdev,
* SFP modules only ever use page 0.
*/
if (page == 0 || !(data[0x2] & 0x4)) {
u32 copy_len;
/* If i2c bus is busy due to slow page change or
* link management access, call can fail. This is normal.
* So we retry this a few times.
@@ -4168,8 +4170,8 @@
}
/* Make sure we have enough room for the new block */
if ((i + SFF_READ_BLOCK_SIZE) < ee->len)
memcpy(data + i, value, SFF_READ_BLOCK_SIZE);
copy_len = min_t(u32, SFF_READ_BLOCK_SIZE, ee->len - i);
memcpy(data + i, value, copy_len);
}
}
return 0;
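Hedged restatement of the fix above: instead of skipping the copy when a full block no longer fits, the tail block is truncated to the space remaining, so the caller's buffer is filled to the end without being overrun. A standalone sketch (the block size of 8 matches the driver's SFF_READ_BLOCK_SIZE; the rest is illustrative):

#include <string.h>

#define SFF_READ_BLOCK_SIZE 8

static void copy_block(unsigned char *dst, size_t dst_len, size_t i,
		       const unsigned char *block)
{
	size_t copy_len;

	if (i >= dst_len)
		return;
	copy_len = dst_len - i;
	if (copy_len > SFF_READ_BLOCK_SIZE)
		copy_len = SFF_READ_BLOCK_SIZE;
	memcpy(dst + i, block, copy_len);	/* tail truncated, not dropped */
}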

View File

@@ -1322,8 +1322,8 @@ ice_parse_cls_flower(struct net_device *filter_dev, struct ice_vsi *vsi,
if (match.mask->vlan_priority) {
fltr->flags |= ICE_TC_FLWR_FIELD_VLAN_PRIO;
headers->vlan_hdr.vlan_prio =
cpu_to_be16((match.key->vlan_priority <<
VLAN_PRIO_SHIFT) & VLAN_PRIO_MASK);
be16_encode_bits(match.key->vlan_priority,
VLAN_PRIO_MASK);
}
if (match.mask->vlan_tpid)
@@ -1356,8 +1356,8 @@ ice_parse_cls_flower(struct net_device *filter_dev, struct ice_vsi *vsi,
if (match.mask->vlan_priority) {
fltr->flags |= ICE_TC_FLWR_FIELD_CVLAN_PRIO;
headers->cvlan_hdr.vlan_prio =
cpu_to_be16((match.key->vlan_priority <<
VLAN_PRIO_SHIFT) & VLAN_PRIO_MASK);
be16_encode_bits(match.key->vlan_priority,
VLAN_PRIO_MASK);
}
}
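Hedged aside on be16_encode_bits() used above: it shifts the value up to the mask's lowest set bit, applies the mask, and converts the result to big endian, replacing the open-coded shift, mask and cpu_to_be16(). A userspace sketch of the host-order half:

#include <stdio.h>
#include <stdint.h>

#define VLAN_PRIO_MASK 0xe000	/* bits 15:13 of the VLAN TCI */

static uint16_t encode_bits16(uint16_t val, uint16_t mask)
{
	unsigned int shift = __builtin_ctz(mask);	/* 13 for this mask */

	return (uint16_t)((val << shift) & mask);
}

int main(void)
{
	/* priority 5 -> 0xa000 before the byte swap to __be16 */
	printf("0x%04x\n", encode_bits16(5, VLAN_PRIO_MASK));
	return 0;
}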

View File

@@ -859,6 +859,9 @@ int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf,
int slot);
int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc);
#define NDC_AF_BANK_MASK GENMASK_ULL(7, 0)
#define NDC_AF_BANK_LINE_MASK GENMASK_ULL(31, 16)
/* CN10K RVU */
int rvu_set_channels_base(struct rvu *rvu);
void rvu_program_channels(struct rvu *rvu);
@@ -874,6 +877,8 @@ static inline void rvu_dbg_init(struct rvu *rvu) {}
static inline void rvu_dbg_exit(struct rvu *rvu) {}
#endif
int rvu_ndc_fix_locked_cacheline(struct rvu *rvu, int blkaddr);
/* RVU Switch */
void rvu_switch_enable(struct rvu *rvu);
void rvu_switch_disable(struct rvu *rvu);

View File

@@ -198,9 +198,6 @@ enum cpt_eng_type {
CPT_IE_TYPE = 3,
};
#define NDC_MAX_BANK(rvu, blk_addr) (rvu_read64(rvu, \
blk_addr, NDC_AF_CONST) & 0xFF)
#define rvu_dbg_NULL NULL
#define rvu_dbg_open_NULL NULL
@@ -1448,6 +1445,7 @@ static int ndc_blk_hits_miss_stats(struct seq_file *s, int idx, int blk_addr)
struct nix_hw *nix_hw;
struct rvu *rvu;
int bank, max_bank;
u64 ndc_af_const;
if (blk_addr == BLKADDR_NDC_NPA0) {
rvu = s->private;
@@ -1456,7 +1454,8 @@ static int ndc_blk_hits_miss_stats(struct seq_file *s, int idx, int blk_addr)
rvu = nix_hw->rvu;
}
max_bank = NDC_MAX_BANK(rvu, blk_addr);
ndc_af_const = rvu_read64(rvu, blk_addr, NDC_AF_CONST);
max_bank = FIELD_GET(NDC_AF_BANK_MASK, ndc_af_const);
for (bank = 0; bank < max_bank; bank++) {
seq_printf(s, "BANK:%d\n", bank);
seq_printf(s, "\tHits:\t%lld\n",

View File

@@ -790,6 +790,7 @@ static int nix_aq_enqueue_wait(struct rvu *rvu, struct rvu_block *block,
struct nix_aq_res_s *result;
int timeout = 1000;
u64 reg, head;
int ret;
result = (struct nix_aq_res_s *)aq->res->base;
@@ -813,9 +814,22 @@ static int nix_aq_enqueue_wait(struct rvu *rvu, struct rvu_block *block,
return -EBUSY;
}
if (result->compcode != NIX_AQ_COMP_GOOD)
if (result->compcode != NIX_AQ_COMP_GOOD) {
/* TODO: Replace this with some error code */
if (result->compcode == NIX_AQ_COMP_CTX_FAULT ||
result->compcode == NIX_AQ_COMP_LOCKERR ||
result->compcode == NIX_AQ_COMP_CTX_POISON) {
ret = rvu_ndc_fix_locked_cacheline(rvu, BLKADDR_NDC_NIX0_RX);
ret |= rvu_ndc_fix_locked_cacheline(rvu, BLKADDR_NDC_NIX0_TX);
ret |= rvu_ndc_fix_locked_cacheline(rvu, BLKADDR_NDC_NIX1_RX);
ret |= rvu_ndc_fix_locked_cacheline(rvu, BLKADDR_NDC_NIX1_TX);
if (ret)
dev_err(rvu->dev,
"%s: Not able to unlock cachelines\n", __func__);
}
return -EBUSY;
}
return 0;
}

View File

@@ -4,7 +4,7 @@
* Copyright (C) 2018 Marvell.
*
*/
#include <linux/bitfield.h>
#include <linux/module.h>
#include <linux/pci.h>
@@ -42,9 +42,18 @@ static int npa_aq_enqueue_wait(struct rvu *rvu, struct rvu_block *block,
return -EBUSY;
}
if (result->compcode != NPA_AQ_COMP_GOOD)
if (result->compcode != NPA_AQ_COMP_GOOD) {
/* TODO: Replace this with some error code */
if (result->compcode == NPA_AQ_COMP_CTX_FAULT ||
result->compcode == NPA_AQ_COMP_LOCKERR ||
result->compcode == NPA_AQ_COMP_CTX_POISON) {
if (rvu_ndc_fix_locked_cacheline(rvu, BLKADDR_NDC_NPA0))
dev_err(rvu->dev,
"%s: Not able to unlock cachelines\n", __func__);
}
return -EBUSY;
}
return 0;
}
@@ -545,3 +554,48 @@ void rvu_npa_lf_teardown(struct rvu *rvu, u16 pcifunc, int npalf)
npa_ctx_free(rvu, pfvf);
}
/* Due to an Hardware errata, in some corner cases, AQ context lock
* operations can result in a NDC way getting into an illegal state
* of not valid but locked.
*
* This API solves the problem by clearing the lock bit of the NDC block.
* The operation needs to be done for each line of all the NDC banks.
*/
int rvu_ndc_fix_locked_cacheline(struct rvu *rvu, int blkaddr)
{
int bank, max_bank, line, max_line, err;
u64 reg, ndc_af_const;
/* Set the ENABLE bit(63) to '0' */
reg = rvu_read64(rvu, blkaddr, NDC_AF_CAMS_RD_INTERVAL);
rvu_write64(rvu, blkaddr, NDC_AF_CAMS_RD_INTERVAL, reg & GENMASK_ULL(62, 0));
/* Poll until the BUSY bits(47:32) are set to '0' */
err = rvu_poll_reg(rvu, blkaddr, NDC_AF_CAMS_RD_INTERVAL, GENMASK_ULL(47, 32), true);
if (err) {
dev_err(rvu->dev, "Timed out while polling for NDC CAM busy bits.\n");
return err;
}
ndc_af_const = rvu_read64(rvu, blkaddr, NDC_AF_CONST);
max_bank = FIELD_GET(NDC_AF_BANK_MASK, ndc_af_const);
max_line = FIELD_GET(NDC_AF_BANK_LINE_MASK, ndc_af_const);
for (bank = 0; bank < max_bank; bank++) {
for (line = 0; line < max_line; line++) {
/* Check if 'cache line valid bit(63)' is not set
* but 'cache line lock bit(60)' is set and on
* success, reset the lock bit(60).
*/
reg = rvu_read64(rvu, blkaddr,
NDC_AF_BANKX_LINEX_METADATA(bank, line));
if (!(reg & BIT_ULL(63)) && (reg & BIT_ULL(60))) {
rvu_write64(rvu, blkaddr,
NDC_AF_BANKX_LINEX_METADATA(bank, line),
reg & ~BIT_ULL(60));
}
}
}
return 0;
}
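Hedged aside on the FIELD_GET() calls above: each mask is a GENMASK_ULL() bit range, and the helper shifts the masked bits down to bit 0. A standalone sketch with a made-up NDC_AF_CONST value:

#include <stdio.h>
#include <stdint.h>

#define BANK_MASK	0x00000000000000ffULL	/* GENMASK_ULL(7, 0)   */
#define BANK_LINE_MASK	0x00000000ffff0000ULL	/* GENMASK_ULL(31, 16) */

static uint64_t field_get(uint64_t mask, uint64_t reg)
{
	return (reg & mask) >> __builtin_ctzll(mask);	/* field down to bit 0 */
}

int main(void)
{
	uint64_t ndc_af_const = 0x0000000004000004ULL;	/* fabricated value */

	/* 4 banks, 1024 lines per bank for this fabricated value */
	printf("banks=%llu lines=%llu\n",
	       (unsigned long long)field_get(BANK_MASK, ndc_af_const),
	       (unsigned long long)field_get(BANK_LINE_MASK, ndc_af_const));
	return 0;
}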

View File

@@ -690,6 +690,7 @@
#define NDC_AF_INTR_ENA_W1S (0x00068)
#define NDC_AF_INTR_ENA_W1C (0x00070)
#define NDC_AF_ACTIVE_PC (0x00078)
#define NDC_AF_CAMS_RD_INTERVAL (0x00080)
#define NDC_AF_BP_TEST_ENABLE (0x001F8)
#define NDC_AF_BP_TEST(a) (0x00200 | (a) << 3)
#define NDC_AF_BLK_RST (0x002F0)
@@ -705,6 +706,8 @@
(0x00F00 | (a) << 5 | (b) << 4)
#define NDC_AF_BANKX_HIT_PC(a) (0x01000 | (a) << 3)
#define NDC_AF_BANKX_MISS_PC(a) (0x01100 | (a) << 3)
#define NDC_AF_BANKX_LINEX_METADATA(a, b) \
(0x10000 | (a) << 12 | (b) << 3)
/* LBK */
#define LBK_CONST (0x10ull)

View File

@@ -561,7 +561,8 @@ static int mtk_mac_finish(struct phylink_config *config, unsigned int mode,
mcr_cur = mtk_r32(mac->hw, MTK_MAC_MCR(mac->id));
mcr_new = mcr_cur;
mcr_new |= MAC_MCR_IPG_CFG | MAC_MCR_FORCE_MODE |
MAC_MCR_BACKOFF_EN | MAC_MCR_BACKPR_EN | MAC_MCR_FORCE_LINK;
MAC_MCR_BACKOFF_EN | MAC_MCR_BACKPR_EN | MAC_MCR_FORCE_LINK |
MAC_MCR_RX_FIFO_CLR_DIS;
/* Only update control register when needed! */
if (mcr_new != mcr_cur)

View File

@@ -357,6 +357,7 @@
#define MAC_MCR_FORCE_MODE BIT(15)
#define MAC_MCR_TX_EN BIT(14)
#define MAC_MCR_RX_EN BIT(13)
#define MAC_MCR_RX_FIFO_CLR_DIS BIT(12)
#define MAC_MCR_BACKOFF_EN BIT(9)
#define MAC_MCR_BACKPR_EN BIT(8)
#define MAC_MCR_FORCE_RX_FC BIT(5)

View File

@@ -194,7 +194,7 @@ int lan966x_police_port_del(struct lan966x_port *port,
return -EINVAL;
}
err = lan966x_police_del(port, port->tc.police_id);
err = lan966x_police_del(port, POL_IDX_PORT + port->chip_port);
if (err) {
NL_SET_ERR_MSG_MOD(extack,
"Failed to add policer to port");

View File

@@ -1168,6 +1168,7 @@ static int stmmac_init_phy(struct net_device *dev)
phylink_ethtool_get_wol(priv->phylink, &wol);
device_set_wakeup_capable(priv->device, !!wol.supported);
device_set_wakeup_enable(priv->device, !!wol.wolopts);
}
return ret;

View File

@@ -342,6 +342,37 @@ static int lan88xx_config_aneg(struct phy_device *phydev)
return genphy_config_aneg(phydev);
}
static void lan88xx_link_change_notify(struct phy_device *phydev)
{
int temp;
/* At forced 100 F/H mode, chip may fail to set mode correctly
* when cable is switched between long(~50+m) and short one.
* As workaround, set to 10 before setting to 100
* at forced 100 F/H mode.
*/
if (!phydev->autoneg && phydev->speed == 100) {
/* disable phy interrupt */
temp = phy_read(phydev, LAN88XX_INT_MASK);
temp &= ~LAN88XX_INT_MASK_MDINTPIN_EN_;
phy_write(phydev, LAN88XX_INT_MASK, temp);
temp = phy_read(phydev, MII_BMCR);
temp &= ~(BMCR_SPEED100 | BMCR_SPEED1000);
phy_write(phydev, MII_BMCR, temp); /* set to 10 first */
temp |= BMCR_SPEED100;
phy_write(phydev, MII_BMCR, temp); /* set to 100 later */
/* clear pending interrupt generated while workaround */
temp = phy_read(phydev, LAN88XX_INT_STS);
/* enable phy interrupt back */
temp = phy_read(phydev, LAN88XX_INT_MASK);
temp |= LAN88XX_INT_MASK_MDINTPIN_EN_;
phy_write(phydev, LAN88XX_INT_MASK, temp);
}
}
static struct phy_driver microchip_phy_driver[] = {
{
.phy_id = 0x0007c132,
@@ -359,6 +390,7 @@ static struct phy_driver microchip_phy_driver[] = {
.config_init = lan88xx_config_init,
.config_aneg = lan88xx_config_aneg,
.link_change_notify = lan88xx_link_change_notify,
.config_intr = lan88xx_phy_config_intr,
.handle_interrupt = lan88xx_handle_interrupt,

View File

@@ -3041,8 +3041,6 @@ static int phy_probe(struct device *dev)
if (phydrv->flags & PHY_IS_INTERNAL)
phydev->is_internal = true;
mutex_lock(&phydev->lock);
/* Deassert the reset signal */
phy_device_reset(phydev, 0);
@@ -3110,12 +3108,10 @@ static int phy_probe(struct device *dev)
phydev->state = PHY_READY;
out:
/* Assert the reset signal */
/* Re-assert the reset signal on error */
if (err)
phy_device_reset(phydev, 1);
mutex_unlock(&phydev->lock);
return err;
}
@@ -3125,9 +3121,7 @@ static int phy_remove(struct device *dev)
cancel_delayed_work_sync(&phydev->state_queue);
mutex_lock(&phydev->lock);
phydev->state = PHY_DOWN;
mutex_unlock(&phydev->lock);
sfp_bus_del_upstream(phydev->sfp_bus);
phydev->sfp_bus = NULL;

View File

@@ -44,7 +44,6 @@ static struct smsc_hw_stat smsc_hw_stats[] = {
};
struct smsc_phy_priv {
u16 intmask;
bool energy_enable;
};
@@ -57,7 +56,6 @@ static int smsc_phy_ack_interrupt(struct phy_device *phydev)
static int smsc_phy_config_intr(struct phy_device *phydev)
{
struct smsc_phy_priv *priv = phydev->priv;
int rc;
if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {
@@ -65,14 +63,9 @@ static int smsc_phy_config_intr(struct phy_device *phydev)
if (rc)
return rc;
priv->intmask = MII_LAN83C185_ISF_INT4 | MII_LAN83C185_ISF_INT6;
if (priv->energy_enable)
priv->intmask |= MII_LAN83C185_ISF_INT7;
rc = phy_write(phydev, MII_LAN83C185_IM, priv->intmask);
rc = phy_write(phydev, MII_LAN83C185_IM,
MII_LAN83C185_ISF_INT_PHYLIB_EVENTS);
} else {
priv->intmask = 0;
rc = phy_write(phydev, MII_LAN83C185_IM, 0);
if (rc)
return rc;
@@ -85,7 +78,6 @@ static int smsc_phy_config_intr(struct phy_device *phydev)
static irqreturn_t smsc_phy_handle_interrupt(struct phy_device *phydev)
{
struct smsc_phy_priv *priv = phydev->priv;
int irq_status;
irq_status = phy_read(phydev, MII_LAN83C185_ISF);
@@ -96,7 +88,7 @@ static irqreturn_t smsc_phy_handle_interrupt(struct phy_device *phydev)
return IRQ_NONE;
}
if (!(irq_status & priv->intmask))
if (!(irq_status & MII_LAN83C185_ISF_INT_PHYLIB_EVENTS))
return IRQ_NONE;
phy_trigger_machine(phydev);

View File

@@ -2115,33 +2115,8 @@ static void lan78xx_remove_mdio(struct lan78xx_net *dev)
static void lan78xx_link_status_change(struct net_device *net)
{
struct phy_device *phydev = net->phydev;
int temp;
/* At forced 100 F/H mode, chip may fail to set mode correctly
* when cable is switched between long(~50+m) and short one.
* As workaround, set to 10 before setting to 100
* at forced 100 F/H mode.
*/
if (!phydev->autoneg && (phydev->speed == 100)) {
/* disable phy interrupt */
temp = phy_read(phydev, LAN88XX_INT_MASK);
temp &= ~LAN88XX_INT_MASK_MDINTPIN_EN_;
phy_write(phydev, LAN88XX_INT_MASK, temp);
temp = phy_read(phydev, MII_BMCR);
temp &= ~(BMCR_SPEED100 | BMCR_SPEED1000);
phy_write(phydev, MII_BMCR, temp); /* set to 10 first */
temp |= BMCR_SPEED100;
phy_write(phydev, MII_BMCR, temp); /* set to 100 later */
/* clear pending interrupt generated while workaround */
temp = phy_read(phydev, LAN88XX_INT_STS);
/* enable phy interrupt back */
temp = phy_read(phydev, LAN88XX_INT_MASK);
temp |= LAN88XX_INT_MASK_MDINTPIN_EN_;
phy_write(phydev, LAN88XX_INT_MASK, temp);
}
phy_print_status(phydev);
}
static int irq_map(struct irq_domain *d, unsigned int irq,

View File

@@ -247,6 +247,9 @@ static void fdp_nci_i2c_read_device_properties(struct device *dev,
len, sizeof(**fw_vsc_cfg),
GFP_KERNEL);
if (!*fw_vsc_cfg)
goto alloc_err;
r = device_property_read_u8_array(dev, FDP_DP_FW_VSC_CFG_NAME,
*fw_vsc_cfg, len);
@ -260,6 +263,7 @@ static void fdp_nci_i2c_read_device_properties(struct device *dev,
*fw_vsc_cfg = NULL;
}
alloc_err:
dev_dbg(dev, "Clock type: %d, clock frequency: %d, VSC: %s",
*clock_type, *clock_freq, *fw_vsc_cfg != NULL ? "yes" : "no");
}

View File

@@ -16,17 +16,17 @@ if MELLANOX_PLATFORM
config MLXREG_HOTPLUG
tristate "Mellanox platform hotplug driver support"
depends on REGMAP
depends on HWMON
depends on I2C
select REGMAP
help
This driver handles hot-plug events for the power suppliers, power
cables and fans on the wide range Mellanox IB and Ethernet systems.
config MLXREG_IO
tristate "Mellanox platform register access driver support"
depends on REGMAP
depends on HWMON
select REGMAP
help
This driver allows access to Mellanox programmable device register
space through sysfs interface. The sets of registers for sysfs access
@@ -36,9 +36,9 @@ config MLXREG_IO
config MLXREG_LC
tristate "Mellanox line card platform driver support"
depends on REGMAP
depends on HWMON
depends on I2C
select REGMAP
help
This driver provides support for the Mellanox MSN4800-XX line cards,
which are the part of MSN4800 Ethernet modular switch systems
@@ -80,10 +80,9 @@ config MLXBF_PMC
config NVSW_SN2201
tristate "Nvidia SN2201 platform driver support"
depends on REGMAP
depends on HWMON
depends on I2C
depends on REGMAP_I2C
select REGMAP_I2C
help
This driver provides support for the Nvidia SN2201 platform.
The SN2201 is a highly integrated for one rack unit system with

View File

@@ -997,7 +997,8 @@ config SERIAL_MULTI_INSTANTIATE
config MLX_PLATFORM
tristate "Mellanox Technologies platform support"
depends on I2C && REGMAP
depends on I2C
select REGMAP
help
This option enables system support for the Mellanox Technologies
platform. The Mellanox systems provide data center networking

Some files were not shown because too many files have changed in this diff.