Merge 6.1.31 into android14-6.1-lts
Changes in 6.1.31
    usb: dwc3: fix gadget mode suspend interrupt handler issue
    tpm, tpm_tis: Avoid cache incoherency in test for interrupts
    tpm, tpm_tis: Only handle supported interrupts
    tpm_tis: Use tpm_chip_{start,stop} decoration inside tpm_tis_resume
    tpm, tpm_tis: startup chip before testing for interrupts
    tpm: Re-enable TPM chip bootstrapping non-tpm_tis TPM drivers
    tpm: Prevent hwrng from activating during resume
    watchdog: sp5100_tco: Immediately trigger upon starting.
    drm/amd/amdgpu: update mes11 api def
    drm/amdgpu/mes11: enable reg active poll
    skbuff: Proactively round up to kmalloc bucket size
    platform/x86: hp-wmi: Fix cast to smaller integer type warning
    net: dsa: mv88e6xxx: Add RGMII delay to 88E6320
    drm/amd/display: hpd rx irq not working with eDP interface
    ocfs2: Switch to security_inode_init_security()
    arm64: Also reset KASAN tag if page is not PG_mte_tagged
    x86/mm: Avoid incomplete Global INVLPG flushes
    platform/x86/intel/ifs: Annotate work queue on stack so object debug does not complain
    ALSA: hda/ca0132: add quirk for EVGA X299 DARK
    ALSA: hda: Fix unhandled register update during auto-suspend period
    ALSA: hda/realtek: Enable headset on Lenovo M70/M90
    SUNRPC: Don't change task->tk_status after the call to rpc_exit_task
    mmc: sdhci-esdhc-imx: make "no-mmc-hs400" work
    mmc: block: ensure error propagation for non-blk
    power: supply: axp288_fuel_gauge: Fix external_power_changed race
    power: supply: bq25890: Fix external_power_changed race
    ASoC: rt5682: Disable jack detection interrupt during suspend
    net: cdc_ncm: Deal with too low values of dwNtbOutMaxSize
    m68k: Move signal frame following exception on 68020/030
    xtensa: fix signal delivery to FDPIC process
    xtensa: add __bswap{si,di}2 helpers
    parisc: Use num_present_cpus() in alternative patching code
    parisc: Handle kgdb breakpoints only in kernel context
    parisc: Fix flush_dcache_page() for usage from irq context
    parisc: Allow to reboot machine after system halt
    parisc: Enable LOCKDEP support
    parisc: Handle kprobes breakpoints only in kernel context
    gpio: mockup: Fix mode of debugfs files
    btrfs: use nofs when cleaning up aborted transactions
    dt-binding: cdns,usb3: Fix cdns,on-chip-buff-size type
    drm/mgag200: Fix gamma lut not initialized.
    drm/radeon: reintroduce radeon_dp_work_func content
    drm/amd/pm: add missing NotifyPowerSource message mapping for SMU13.0.7
    drm/amd/pm: Fix output of pp_od_clk_voltage
    Revert "binder_alloc: add missing mmap_lock calls when using the VMA"
    Revert "android: binder: stop saving a pointer to the VMA"
    binder: add lockless binder_alloc_(set|get)_vma()
    binder: fix UAF caused by faulty buffer cleanup
    binder: fix UAF of alloc->vma in race with munmap()
    selftests/memfd: Fix unknown type name build failure
    drm/amd/amdgpu: limit one queue per gang
    perf/x86/uncore: Correct the number of CHAs on SPR
    x86/topology: Fix erroneous smp_num_siblings on Intel Hybrid platforms
    irqchip/mips-gic: Don't touch vl_map if a local interrupt is not routable
    irqchip/mips-gic: Use raw spinlock for gic_lock
    debugobjects: Don't wake up kswapd from fill_pool()
    fbdev: udlfb: Fix endpoint check
    net: fix stack overflow when LRO is disabled for virtual interfaces
    udplite: Fix NULL pointer dereference in __sk_mem_raise_allocated().
    USB: core: Add routines for endpoint checks in old drivers
    USB: sisusbvga: Add endpoint checks
    media: radio-shark: Add endpoint checks
    ASoC: lpass: Fix for KASAN use_after_free out of bounds
    net: fix skb leak in __skb_tstamp_tx()
    drm: fix drmm_mutex_init()
    selftests: fib_tests: mute cleanup error message
    octeontx2-pf: Fix TSOv6 offload
    bpf: Fix mask generation for 32-bit narrow loads of 64-bit fields
    bpf: fix a memory leak in the LRU and LRU_PERCPU hash maps
    lan966x: Fix unloading/loading of the driver
    ipv6: Fix out-of-bounds access in ipv6_find_tlv()
    cifs: mapchars mount option ignored
    power: supply: leds: Fix blink to LED on transition
    power: supply: mt6360: add a check of devm_work_autocancel in mt6360_charger_probe
    power: supply: bq27xxx: Fix bq27xxx_battery_update() race condition
    power: supply: bq27xxx: Fix I2C IRQ race on remove
    power: supply: bq27xxx: Fix poll_interval handling and races on remove
    power: supply: bq27xxx: Add cache parameter to bq27xxx_battery_current_and_status()
    power: supply: bq27xxx: Move bq27xxx_battery_update() down
    power: supply: bq27xxx: Ensure power_supply_changed() is called on current sign changes
    power: supply: bq27xxx: After charger plug in/out wait 0.5s for things to stabilize
    power: supply: bq25890: Call power_supply_changed() after updating input current or voltage
    power: supply: bq24190: Call power_supply_changed() after updating input current
    power: supply: sbs-charger: Fix INHIBITED bit for Status reg
    optee: fix uninited async notif value
    firmware: arm_ffa: Check if ffa_driver remove is present before executing
    firmware: arm_ffa: Fix FFA device names for logical partitions
    fs: fix undefined behavior in bit shift for SB_NOUSER
    regulator: pca9450: Fix BUCK2 enable_mask
    platform/x86: ISST: Remove 8 socket limit
    coresight: Fix signedness bug in tmc_etr_buf_insert_barrier_packet()
    ARM: dts: imx6qdl-mba6: Add missing pvcie-supply regulator
    x86/pci/xen: populate MSI sysfs entries
    xen/pvcalls-back: fix double frees with pvcalls_new_active_socket()
    x86/show_trace_log_lvl: Ensure stack pointer is aligned, again
    ASoC: Intel: Skylake: Fix declaration of enum skl_ch_cfg
    ASoC: Intel: avs: Fix declaration of enum avs_channel_config
    ASoC: Intel: avs: Access path components under lock
    cxl: Wait Memory_Info_Valid before access memory related info
    sctp: fix an issue that plpmtu can never go to complete state
    forcedeth: Fix an error handling path in nv_probe()
    platform/mellanox: mlxbf-pmc: fix sscanf() error checking
    net/mlx5e: Fix SQ wake logic in ptp napi_poll context
    net/mlx5e: Fix deadlock in tc route query code
    net/mlx5e: Use correct encap attribute during invalidation
    net/mlx5e: do as little as possible in napi poll when budget is 0
    net/mlx5: DR, Fix crc32 calculation to work on big-endian (BE) CPUs
    net/mlx5: Handle pairing of E-switch via uplink un/load APIs
    net/mlx5: DR, Check force-loopback RC QP capability independently from RoCE
    net/mlx5: Fix error message when failing to allocate device memory
    net/mlx5: Collect command failures data only for known commands
    net/mlx5: Devcom, fix error flow in mlx5_devcom_register_device
    net/mlx5: Devcom, serialize devcom registration
    arm64: dts: imx8mn-var-som: fix PHY detection bug by adding deassert delay
    firmware: arm_ffa: Set reserved/MBZ fields to zero in the memory descriptors
    regulator: mt6359: add read check for PMIC MT6359
    net/smc: Reset connection when trying to use SMCRv2 fails.
    3c589_cs: Fix an error handling path in tc589_probe()
    net: phy: mscc: add VSC8502 to MODULE_DEVICE_TABLE
    Linux 6.1.31

Change-Id: I1043b7dd190672829baaf093f690e70a07c7a6dd
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
@@ -64,7 +64,7 @@ properties:
     description:
       size of memory intended as internal memory for endpoints
       buffers expressed in KB
-    $ref: /schemas/types.yaml#/definitions/uint32
+    $ref: /schemas/types.yaml#/definitions/uint16

   cdns,phyrst-a-enable:
     description: Enable resetting of PHY if Rx fail is detected

 Makefile | 2 +-

--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 6
 PATCHLEVEL = 1
-SUBLEVEL = 30
+SUBLEVEL = 31
 EXTRAVERSION =
 NAME = Curry Ramen

@@ -209,6 +209,7 @@ &pcie {
     pinctrl-names = "default";
     pinctrl-0 = <&pinctrl_pcie>;
     reset-gpio = <&gpio6 7 GPIO_ACTIVE_LOW>;
+    vpcie-supply = <&reg_pcie>;
     status = "okay";
 };

@@ -98,11 +98,17 @@ mdio {
         #address-cells = <1>;
         #size-cells = <0>;

-        ethphy: ethernet-phy@4 {
+        ethphy: ethernet-phy@4 { /* AR8033 or ADIN1300 */
             compatible = "ethernet-phy-ieee802.3-c22";
             reg = <4>;
             reset-gpios = <&gpio1 9 GPIO_ACTIVE_LOW>;
             reset-assert-us = <10000>;
+            /*
+             * Deassert delay:
+             * ADIN1300 requires 5ms.
+             * AR8033 requires 1ms.
+             */
+            reset-deassert-us = <20000>;
         };
     };
 };

@@ -858,11 +858,17 @@ static inline int rt_setup_ucontext(struct ucontext __user *uc, struct pt_regs *
 }

 static inline void __user *
-get_sigframe(struct ksignal *ksig, size_t frame_size)
+get_sigframe(struct ksignal *ksig, struct pt_regs *tregs, size_t frame_size)
 {
     unsigned long usp = sigsp(rdusp(), ksig);
+    unsigned long gap = 0;

-    return (void __user *)((usp - frame_size) & -8UL);
+    if (CPU_IS_020_OR_030 && tregs->format == 0xb) {
+        /* USP is unreliable so use worst-case value */
+        gap = 256;
+    }
+
+    return (void __user *)((usp - gap - frame_size) & -8UL);
 }

 static int setup_frame(struct ksignal *ksig, sigset_t *set,
@@ -880,7 +886,7 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
         return -EFAULT;
     }

-    frame = get_sigframe(ksig, sizeof(*frame) + fsize);
+    frame = get_sigframe(ksig, tregs, sizeof(*frame) + fsize);

     if (fsize)
         err |= copy_to_user (frame + 1, regs + 1, fsize);
@@ -952,7 +958,7 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
         return -EFAULT;
     }

-    frame = get_sigframe(ksig, sizeof(*frame));
+    frame = get_sigframe(ksig, tregs, sizeof(*frame));

     if (fsize)
         err |= copy_to_user (&frame->uc.uc_extra, regs + 1, fsize);

@@ -129,6 +129,10 @@ config PM
 config STACKTRACE_SUPPORT
     def_bool y

+config LOCKDEP_SUPPORT
+    bool
+    default y
+
 config ISA_DMA_API
     bool

@@ -48,6 +48,10 @@ void flush_dcache_page(struct page *page);

 #define flush_dcache_mmap_lock(mapping)     xa_lock_irq(&mapping->i_pages)
 #define flush_dcache_mmap_unlock(mapping)   xa_unlock_irq(&mapping->i_pages)
+#define flush_dcache_mmap_lock_irqsave(mapping, flags)      \
+        xa_lock_irqsave(&mapping->i_pages, flags)
+#define flush_dcache_mmap_unlock_irqrestore(mapping, flags) \
+        xa_unlock_irqrestore(&mapping->i_pages, flags)

 #define flush_icache_page(vma,page) do {        \
     flush_kernel_dcache_page_addr(page_address(page)); \

@@ -25,7 +25,7 @@ void __init_or_module apply_alternatives(struct alt_instr *start,
 {
     struct alt_instr *entry;
     int index = 0, applied = 0;
-    int num_cpus = num_online_cpus();
+    int num_cpus = num_present_cpus();
     u16 cond_check;

     cond_check = ALT_COND_ALWAYS |

@@ -399,6 +399,7 @@ void flush_dcache_page(struct page *page)
     unsigned long offset;
     unsigned long addr, old_addr = 0;
     unsigned long count = 0;
+    unsigned long flags;
     pgoff_t pgoff;

     if (mapping && !mapping_mapped(mapping)) {
@@ -420,7 +421,7 @@ void flush_dcache_page(struct page *page)
      * to flush one address here for them all to become coherent
      * on machines that support equivalent aliasing
      */
-    flush_dcache_mmap_lock(mapping);
+    flush_dcache_mmap_lock_irqsave(mapping, flags);
     vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) {
         offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT;
         addr = mpnt->vm_start + offset;
@@ -460,7 +461,7 @@ void flush_dcache_page(struct page *page)
         }
         WARN_ON(++count == 4096);
     }
-    flush_dcache_mmap_unlock(mapping);
+    flush_dcache_mmap_unlock_irqrestore(mapping, flags);
 }
 EXPORT_SYMBOL(flush_dcache_page);

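Note on the locking change above: flush_dcache_page() can now be reached from irq context, and the plain flush_dcache_mmap_lock() maps to xa_lock_irq(), whose unlock unconditionally re-enables interrupts. A minimal sketch of the general rule, with illustrative names rather than the parisc code itself:

    static DEFINE_SPINLOCK(demo_lock);

    /* Safe from both process and irq context: the irqsave/irqrestore pair
     * preserves the caller's interrupt state instead of forcing interrupts
     * back on at unlock time. */
    static void demo_touch_shared_state(void)
    {
        unsigned long flags;

        spin_lock_irqsave(&demo_lock, flags);
        /* ... modify state shared with the irq path ... */
        spin_unlock_irqrestore(&demo_lock, flags);
    }
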
@@ -122,13 +122,18 @@ void machine_power_off(void)
     /* It seems we have no way to power the system off via
      * software. The user has to press the button himself. */

-    printk(KERN_EMERG "System shut down completed.\n"
-           "Please power this system off now.");
+    printk("Power off or press RETURN to reboot.\n");

     /* prevent soft lockup/stalled CPU messages for endless loop. */
     rcu_sysrq_start();
     lockup_detector_soft_poweroff();
-    for (;;);
+    while (1) {
+        /* reboot if user presses RETURN key */
+        if (pdc_iodc_getc() == 13) {
+            printk("Rebooting...\n");
+            machine_restart(NULL);
+        }
+    }
 }

 void (*pm_power_off)(void);

@@ -291,19 +291,19 @@ static void handle_break(struct pt_regs *regs)
     }

 #ifdef CONFIG_KPROBES
-    if (unlikely(iir == PARISC_KPROBES_BREAK_INSN)) {
+    if (unlikely(iir == PARISC_KPROBES_BREAK_INSN && !user_mode(regs))) {
         parisc_kprobe_break_handler(regs);
         return;
     }
-    if (unlikely(iir == PARISC_KPROBES_BREAK_INSN2)) {
+    if (unlikely(iir == PARISC_KPROBES_BREAK_INSN2 && !user_mode(regs))) {
         parisc_kprobe_ss_handler(regs);
         return;
     }
 #endif

 #ifdef CONFIG_KGDB
-    if (unlikely(iir == PARISC_KGDB_COMPILED_BREAK_INSN ||
-        iir == PARISC_KGDB_BREAK_INSN)) {
+    if (unlikely((iir == PARISC_KGDB_COMPILED_BREAK_INSN ||
+        iir == PARISC_KGDB_BREAK_INSN)) && !user_mode(regs)) {
         kgdb_handle_exception(9, SIGTRAP, 0, regs);
         return;
     }

@@ -5822,6 +5822,7 @@ static struct intel_uncore_type spr_uncore_mdf = {
 };

 #define UNCORE_SPR_NUM_UNCORE_TYPES     12
+#define UNCORE_SPR_CHA                  0
 #define UNCORE_SPR_IIO                  1
 #define UNCORE_SPR_IMC                  6

@@ -6064,12 +6065,22 @@ static int uncore_type_max_boxes(struct intel_uncore_type **types,
     return max + 1;
 }

+#define SPR_MSR_UNC_CBO_CONFIG      0x2FFE
+
 void spr_uncore_cpu_init(void)
 {
+    struct intel_uncore_type *type;
+    u64 num_cbo;
+
     uncore_msr_uncores = uncore_get_uncores(UNCORE_ACCESS_MSR,
                         UNCORE_SPR_MSR_EXTRA_UNCORES,
                         spr_msr_uncores);

+    type = uncore_find_type_by_id(uncore_msr_uncores, UNCORE_SPR_CHA);
+    if (type) {
+        rdmsrl(SPR_MSR_UNC_CBO_CONFIG, num_cbo);
+        type->num_boxes = num_cbo;
+    }
     spr_uncore_iio_free_running.num_boxes = uncore_type_max_boxes(uncore_msr_uncores, UNCORE_SPR_IIO);
 }

@@ -79,7 +79,7 @@ int detect_extended_topology_early(struct cpuinfo_x86 *c)
      * initial apic id, which also represents 32-bit extended x2apic id.
      */
     c->initial_apicid = edx;
-    smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
+    smp_num_siblings = max_t(int, smp_num_siblings, LEVEL_MAX_SIBLINGS(ebx));
 #endif
     return 0;
 }
@@ -109,7 +109,8 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
      */
     cpuid_count(leaf, SMT_LEVEL, &eax, &ebx, &ecx, &edx);
     c->initial_apicid = edx;
-    core_level_siblings = smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
+    core_level_siblings = LEVEL_MAX_SIBLINGS(ebx);
+    smp_num_siblings = max_t(int, smp_num_siblings, LEVEL_MAX_SIBLINGS(ebx));
     core_plus_mask_width = ht_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
     die_level_siblings = LEVEL_MAX_SIBLINGS(ebx);
     pkg_mask_width = die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);

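The topology fix above matters on hybrid parts: the CPUID topology leaf reports the SMT sibling count of the CPU executing the query, so a P-core reports 2 while an E-core reports 1, and a plain assignment lets whichever CPU ran the code last clobber the global value. A hypothetical sketch of the accumulate-the-maximum pattern the patch switches to:

    /* Illustrative only: keep the largest sibling count seen on any CPU. */
    static int demo_num_siblings = 1;

    static void demo_update_siblings(int level_max_siblings)
    {
        demo_num_siblings = max(demo_num_siblings, level_max_siblings);
    }
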
@@ -195,7 +195,6 @@ static void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
     printk("%sCall Trace:\n", log_lvl);

     unwind_start(&state, task, regs, stack);
-    stack = stack ? : get_stack_pointer(task, regs);
     regs = unwind_get_entry_regs(&state, &partial);

     /*
@@ -214,9 +213,13 @@ static void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
      * - hardirq stack
      * - entry stack
      */
-    for ( ; stack; stack = PTR_ALIGN(stack_info.next_sp, sizeof(long))) {
+    for (stack = stack ?: get_stack_pointer(task, regs);
+         stack;
+         stack = stack_info.next_sp) {
         const char *stack_name;

+        stack = PTR_ALIGN(stack, sizeof(long));
+
         if (get_stack_info(stack, task, &stack_info, &visit_mask)) {
             /*
              * We weren't on a valid stack. It's possible that

@@ -9,6 +9,7 @@
 #include <linux/sched/task.h>

 #include <asm/set_memory.h>
+#include <asm/cpu_device_id.h>
 #include <asm/e820/api.h>
 #include <asm/init.h>
 #include <asm/page.h>
@@ -266,6 +267,24 @@ static void __init probe_page_size_mask(void)
     }
 }

+#define INTEL_MATCH(_model) { .vendor = X86_VENDOR_INTEL,   \
+                  .family = 6,              \
+                  .model = _model,          \
+                }
+/*
+ * INVLPG may not properly flush Global entries
+ * on these CPUs when PCIDs are enabled.
+ */
+static const struct x86_cpu_id invlpg_miss_ids[] = {
+    INTEL_MATCH(INTEL_FAM6_ALDERLAKE   ),
+    INTEL_MATCH(INTEL_FAM6_ALDERLAKE_L ),
+    INTEL_MATCH(INTEL_FAM6_ALDERLAKE_N ),
+    INTEL_MATCH(INTEL_FAM6_RAPTORLAKE  ),
+    INTEL_MATCH(INTEL_FAM6_RAPTORLAKE_P),
+    INTEL_MATCH(INTEL_FAM6_RAPTORLAKE_S),
+    {}
+};
+
 static void setup_pcid(void)
 {
     if (!IS_ENABLED(CONFIG_X86_64))
@@ -274,6 +293,12 @@ static void setup_pcid(void)
     if (!boot_cpu_has(X86_FEATURE_PCID))
         return;

+    if (x86_match_cpu(invlpg_miss_ids)) {
+        pr_info("Incomplete global flushes, disabling PCID");
+        setup_clear_cpu_cap(X86_FEATURE_PCID);
+        return;
+    }
+
     if (boot_cpu_has(X86_FEATURE_PGE)) {
         /*
          * This can't be cr4_set_bits_and_update_boot() -- the

@@ -198,7 +198,7 @@ static int xen_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
         i++;
     }
     kfree(v);
-    return 0;
+    return msi_device_populate_sysfs(&dev->dev);

 error:
     if (ret == -ENOSYS)
@@ -254,7 +254,7 @@ static int xen_hvm_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
         dev_dbg(&dev->dev,
             "xen: msi --> pirq=%d --> irq=%d\n", pirq, irq);
     }
-    return 0;
+    return msi_device_populate_sysfs(&dev->dev);

 error:
     dev_err(&dev->dev, "Failed to create MSI%s! ret=%d!\n",
@@ -346,7 +346,7 @@ static int xen_initdom_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
         if (ret < 0)
             goto out;
     }
-    ret = 0;
+    ret = msi_device_populate_sysfs(&dev->dev);
 out:
     return ret;
 }
@@ -393,6 +393,8 @@ static void xen_teardown_msi_irqs(struct pci_dev *dev)
         for (i = 0; i < msidesc->nvec_used; i++)
             xen_destroy_irq(msidesc->irq + i);
     }
+
+    msi_device_destroy_sysfs(&dev->dev);
 }

 static void xen_pv_teardown_msi_irqs(struct pci_dev *dev)

@@ -343,7 +343,19 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
     struct rt_sigframe *frame;
     int err = 0, sig = ksig->sig;
     unsigned long sp, ra, tp, ps;
+    unsigned long handler = (unsigned long)ksig->ka.sa.sa_handler;
+    unsigned long handler_fdpic_GOT = 0;
     unsigned int base;
+    bool fdpic = IS_ENABLED(CONFIG_BINFMT_ELF_FDPIC) &&
+        (current->personality & FDPIC_FUNCPTRS);
+
+    if (fdpic) {
+        unsigned long __user *fdpic_func_desc =
+            (unsigned long __user *)handler;
+        if (__get_user(handler, &fdpic_func_desc[0]) ||
+            __get_user(handler_fdpic_GOT, &fdpic_func_desc[1]))
+            return -EFAULT;
+    }

     sp = regs->areg[1];

@@ -373,20 +385,26 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
     err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));

     if (ksig->ka.sa.sa_flags & SA_RESTORER) {
-        ra = (unsigned long)ksig->ka.sa.sa_restorer;
+        if (fdpic) {
+            unsigned long __user *fdpic_func_desc =
+                (unsigned long __user *)ksig->ka.sa.sa_restorer;
+
+            err |= __get_user(ra, fdpic_func_desc);
+        } else {
+            ra = (unsigned long)ksig->ka.sa.sa_restorer;
+        }
     } else {

         /* Create sys_rt_sigreturn syscall in stack frame */

         err |= gen_return_code(frame->retcode);
-
-        if (err) {
-            return -EFAULT;
-        }
         ra = (unsigned long) frame->retcode;
     }

+    if (err)
+        return -EFAULT;
+
     /*
      * Create signal handler execution context.
      * Return context not modified until this point.
      */
@@ -394,8 +412,7 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
     /* Set up registers for signal handler; preserve the threadptr */
     tp = regs->threadptr;
     ps = regs->ps;
-    start_thread(regs, (unsigned long) ksig->ka.sa.sa_handler,
-             (unsigned long) frame);
+    start_thread(regs, handler, (unsigned long)frame);

     /* Set up a stack frame for a call4 if userspace uses windowed ABI */
     if (ps & PS_WOE_MASK) {
@@ -413,6 +430,8 @@ static int setup_frame(struct ksignal *ksig, sigset_t *set,
     regs->areg[base + 4] = (unsigned long) &frame->uc;
     regs->threadptr = tp;
     regs->ps = ps;
+    if (fdpic)
+        regs->areg[base + 11] = handler_fdpic_GOT;

     pr_debug("SIG rt deliver (%s:%d): signal=%d sp=%p pc=%08lx\n",
          current->comm, current->pid, sig, frame, regs->pc);

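Background for the xtensa change above: under FDPIC ELF, a userspace "function pointer" such as sa_handler is the address of a two-word descriptor, not of code. A sketch of the layout the added __get_user() calls walk (illustrative struct; the kernel reads the two words directly):

    /* What an FDPIC function descriptor looks like in user memory. */
    struct demo_fdpic_func_desc {
        unsigned long entry;  /* address of the actual code */
        unsigned long got;    /* GOT pointer the callee expects in a
                                 designated register (areg[base + 11]
                                 in the windowed-ABI hunk above) */
    };
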
@@ -56,6 +56,8 @@ EXPORT_SYMBOL(empty_zero_page);
  */
 extern long long __ashrdi3(long long, int);
 extern long long __ashldi3(long long, int);
+extern long long __bswapdi2(long long);
+extern int __bswapsi2(int);
 extern long long __lshrdi3(long long, int);
 extern int __divsi3(int, int);
 extern int __modsi3(int, int);
@@ -66,6 +68,8 @@ extern unsigned long long __umulsidi3(unsigned int, unsigned int);

 EXPORT_SYMBOL(__ashldi3);
 EXPORT_SYMBOL(__ashrdi3);
+EXPORT_SYMBOL(__bswapdi2);
+EXPORT_SYMBOL(__bswapsi2);
 EXPORT_SYMBOL(__lshrdi3);
 EXPORT_SYMBOL(__divsi3);
 EXPORT_SYMBOL(__modsi3);

@@ -4,7 +4,7 @@
 #

 lib-y  += memcopy.o memset.o checksum.o \
-      ashldi3.o ashrdi3.o lshrdi3.o \
+      ashldi3.o ashrdi3.o bswapdi2.o bswapsi2.o lshrdi3.o \
       divsi3.o udivsi3.o modsi3.o umodsi3.o mulsi3.o umulsidi3.o \
       usercopy.o strncpy_user.o strnlen_user.o
 lib-$(CONFIG_PCI) += pci-auto.o

 arch/xtensa/lib/bswapdi2.S | 21 (new file)

--- /dev/null
+++ b/arch/xtensa/lib/bswapdi2.S
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later WITH GCC-exception-2.0 */
+#include <linux/linkage.h>
+#include <asm/asmmacro.h>
+#include <asm/core.h>
+
+ENTRY(__bswapdi2)
+
+    abi_entry_default
+    ssai    8
+    srli    a4, a2, 16
+    src     a4, a4, a2
+    src     a4, a4, a4
+    src     a4, a2, a4
+    srli    a2, a3, 16
+    src     a2, a2, a3
+    src     a2, a2, a2
+    src     a2, a3, a2
+    mov     a3, a4
+    abi_ret_default
+
+ENDPROC(__bswapdi2)

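For reference, a functional C model of the new helper (an illustration, not the kernel source): __bswapdi2 reverses the byte order of a 64-bit value, which the assembly above performs per 32-bit half using the SSAI/SRC funnel-shift pair before exchanging the two halves (the final mov a3, a4):

    /* C model of __bswapdi2: byte-reverse a 64-bit value. */
    unsigned long long demo_bswapdi2(unsigned long long x)
    {
        x = ((x & 0x00ff00ff00ff00ffULL) << 8) |
            ((x & 0xff00ff00ff00ff00ULL) >> 8);   /* swap adjacent bytes */
        x = ((x & 0x0000ffff0000ffffULL) << 16) |
            ((x & 0xffff0000ffff0000ULL) >> 16);  /* swap 16-bit pairs */
        return (x << 32) | (x >> 32);             /* swap 32-bit halves */
    }
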
 arch/xtensa/lib/bswapsi2.S | 16 (new file)

--- /dev/null
+++ b/arch/xtensa/lib/bswapsi2.S
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later WITH GCC-exception-2.0 */
+#include <linux/linkage.h>
+#include <asm/asmmacro.h>
+#include <asm/core.h>
+
+ENTRY(__bswapsi2)
+
+    abi_entry_default
+    ssai    8
+    srli    a3, a2, 16
+    src     a3, a3, a2
+    src     a3, a3, a3
+    src     a2, a2, a3
+    abi_ret_default
+
+ENDPROC(__bswapsi2)

@@ -213,8 +213,8 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
     mm = alloc->mm;

     if (mm) {
-        mmap_read_lock(mm);
-        vma = vma_lookup(mm, alloc->vma_addr);
+        mmap_write_lock(mm);
+        vma = alloc->vma;
     }

     if (!vma && need_mm) {
@@ -271,7 +271,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
         trace_binder_alloc_page_end(alloc, index);
     }
     if (mm) {
-        mmap_read_unlock(mm);
+        mmap_write_unlock(mm);
         mmput(mm);
     }
     return 0;
@@ -304,21 +304,24 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
     }
 err_no_vma:
     if (mm) {
-        mmap_read_unlock(mm);
+        mmap_write_unlock(mm);
         mmput(mm);
     }
     return vma ? -ENOMEM : -ESRCH;
 }

+static inline void binder_alloc_set_vma(struct binder_alloc *alloc,
+        struct vm_area_struct *vma)
+{
+    /* pairs with smp_load_acquire in binder_alloc_get_vma() */
+    smp_store_release(&alloc->vma, vma);
+}
+
 static inline struct vm_area_struct *binder_alloc_get_vma(
         struct binder_alloc *alloc)
 {
-    struct vm_area_struct *vma = NULL;
-
-    if (alloc->vma_addr)
-        vma = vma_lookup(alloc->mm, alloc->vma_addr);
-
-    return vma;
+    /* pairs with smp_store_release in binder_alloc_set_vma() */
+    return smp_load_acquire(&alloc->vma);
 }

 static bool debug_low_async_space_locked(struct binder_alloc *alloc, int pid)
@@ -381,15 +384,13 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
     size_t size, data_offsets_size;
     int ret;

-    mmap_read_lock(alloc->mm);
+    /* Check binder_alloc is fully initialized */
     if (!binder_alloc_get_vma(alloc)) {
-        mmap_read_unlock(alloc->mm);
         binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
                    "%d: binder_alloc_buf, no vma\n",
                    alloc->pid);
         return ERR_PTR(-ESRCH);
     }
-    mmap_read_unlock(alloc->mm);

     data_offsets_size = ALIGN(data_size, sizeof(void *)) +
         ALIGN(offsets_size, sizeof(void *));
@@ -780,7 +781,9 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
     buffer->free = 1;
     binder_insert_free_buffer(alloc, buffer);
     alloc->free_async_space = alloc->buffer_size / 2;
-    alloc->vma_addr = vma->vm_start;
+
+    /* Signal binder_alloc is fully initialized */
+    binder_alloc_set_vma(alloc, vma);

     return 0;

@@ -810,8 +813,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)

     buffers = 0;
     mutex_lock(&alloc->mutex);
-    BUG_ON(alloc->vma_addr &&
-           vma_lookup(alloc->mm, alloc->vma_addr));
+    BUG_ON(alloc->vma);

     while ((n = rb_first(&alloc->allocated_buffers))) {
         buffer = rb_entry(n, struct binder_buffer, rb_node);
@@ -918,25 +920,17 @@ void binder_alloc_print_pages(struct seq_file *m,
      * Make sure the binder_alloc is fully initialized, otherwise we might
      * read inconsistent state.
      */
-
-    mmap_read_lock(alloc->mm);
-    if (binder_alloc_get_vma(alloc) == NULL) {
-        mmap_read_unlock(alloc->mm);
-        goto uninitialized;
+    if (binder_alloc_get_vma(alloc) != NULL) {
+        for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
+            page = &alloc->pages[i];
+            if (!page->page_ptr)
+                free++;
+            else if (list_empty(&page->lru))
+                active++;
+            else
+                lru++;
+        }
     }
-
-    mmap_read_unlock(alloc->mm);
-    for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
-        page = &alloc->pages[i];
-        if (!page->page_ptr)
-            free++;
-        else if (list_empty(&page->lru))
-            active++;
-        else
-            lru++;
-    }
-
-uninitialized:
     mutex_unlock(&alloc->mutex);
     seq_printf(m, "  pages: %d:%d:%d\n", active, lru, free);
     seq_printf(m, "  pages high watermark: %zu\n", alloc->pages_high);
@@ -971,7 +965,7 @@ int binder_alloc_get_allocated_count(struct binder_alloc *alloc)
  */
 void binder_alloc_vma_close(struct binder_alloc *alloc)
 {
-    alloc->vma_addr = 0;
+    binder_alloc_set_vma(alloc, NULL);
 }

 /**

@@ -75,7 +75,7 @@ struct binder_lru_page {
 /**
  * struct binder_alloc - per-binder proc state for binder allocator
  * @mutex:              protects binder_alloc fields
- * @vma_addr:           vm_area_struct->vm_start passed to mmap_handler
+ * @vma:                vm_area_struct passed to mmap_handler
  *                      (invariant after mmap)
  * @mm:                 copy of task->mm (invariant after open)
  * @buffer:             base of per-proc address space mapped via mmap
@@ -99,7 +99,7 @@ struct binder_lru_page {
  */
 struct binder_alloc {
     struct mutex mutex;
-    unsigned long vma_addr;
+    struct vm_area_struct *vma;
     struct mm_struct *mm;
     void __user *buffer;
     struct list_head buffers;

@@ -287,7 +287,7 @@ void binder_selftest_alloc(struct binder_alloc *alloc)
     if (!binder_selftest_run)
         return;
     mutex_lock(&binder_selftest_lock);
-    if (!binder_selftest_run || !alloc->vma_addr)
+    if (!binder_selftest_run || !alloc->vma)
         goto done;
     pr_info("STARTED\n");
     binder_selftest_alloc_offset(alloc, end_offset, 0);

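The binder hunks above replace vma_lookup() under mmap_lock with a lockless pointer published via release/acquire. A generic sketch of that publish/consume pattern (names are illustrative, not the binder code):

    struct demo_obj { int data; };
    static struct demo_obj *demo_published;

    /* Writer: initialize fully, then publish with release semantics. */
    static void demo_publish(struct demo_obj *obj)
    {
        obj->data = 42;
        smp_store_release(&demo_published, obj);
    }

    /* Reader: observes either NULL or a fully initialized object,
     * never a partially written one. */
    static struct demo_obj *demo_consume(void)
    {
        return smp_load_acquire(&demo_published);
    }
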
@@ -568,6 +568,10 @@ static int tpm_hwrng_read(struct hwrng *rng, void *data, size_t max, bool wait)
 {
     struct tpm_chip *chip = container_of(rng, struct tpm_chip, hwrng);

+    /* Give back zero bytes, as TPM chip has not yet fully resumed: */
+    if (chip->flags & TPM_CHIP_FLAG_SUSPENDED)
+        return 0;
+
     return tpm_get_random(chip, data, max);
 }

@@ -601,6 +605,42 @@ static int tpm_get_pcr_allocation(struct tpm_chip *chip)
     return rc;
 }

+/*
+ * tpm_chip_bootstrap() - Bootstrap TPM chip after power on
+ * @chip: TPM chip to use.
+ *
+ * Initialize TPM chip after power on. This is a one-shot function: subsequent
+ * calls will have no effect.
+ */
+int tpm_chip_bootstrap(struct tpm_chip *chip)
+{
+    int rc;
+
+    if (chip->flags & TPM_CHIP_FLAG_BOOTSTRAPPED)
+        return 0;
+
+    rc = tpm_chip_start(chip);
+    if (rc)
+        return rc;
+
+    rc = tpm_auto_startup(chip);
+    if (rc)
+        goto stop;
+
+    rc = tpm_get_pcr_allocation(chip);
+stop:
+    tpm_chip_stop(chip);
+
+    /*
+     * Unconditionally set, as driver initialization should cease when the
+     * bootstrapping process fails.
+     */
+    chip->flags |= TPM_CHIP_FLAG_BOOTSTRAPPED;
+
+    return rc;
+}
+EXPORT_SYMBOL_GPL(tpm_chip_bootstrap);
+
 /*
  * tpm_chip_register() - create a character device for the TPM chip
  * @chip: TPM chip to use.
@@ -616,17 +656,7 @@ int tpm_chip_register(struct tpm_chip *chip)
 {
     int rc;

-    rc = tpm_chip_start(chip);
-    if (rc)
-        return rc;
-    rc = tpm_auto_startup(chip);
-    if (rc) {
-        tpm_chip_stop(chip);
-        return rc;
-    }
-
-    rc = tpm_get_pcr_allocation(chip);
-    tpm_chip_stop(chip);
+    rc = tpm_chip_bootstrap(chip);
     if (rc)
         return rc;

@@ -412,6 +412,8 @@ int tpm_pm_suspend(struct device *dev)
     }

 suspended:
+    chip->flags |= TPM_CHIP_FLAG_SUSPENDED;
+
     if (rc)
         dev_err(dev, "Ignoring error %d while suspending\n", rc);
     return 0;
@@ -429,6 +431,14 @@ int tpm_pm_resume(struct device *dev)
     if (chip == NULL)
         return -ENODEV;

+    chip->flags &= ~TPM_CHIP_FLAG_SUSPENDED;
+
+    /*
+     * Guarantee that SUSPENDED is written last, so that hwrng does not
+     * activate before the chip has been fully resumed.
+     */
+    wmb();
+
     return 0;
 }
 EXPORT_SYMBOL_GPL(tpm_pm_resume);

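The suspend/resume hunks above pair the flag write with an explicit barrier so that tpm_hwrng_read() cannot observe the chip as live before the resume path is done. A condensed sketch of the ordering requirement (illustrative names, simplified from the driver):

    static unsigned long demo_chip_flags;
    #define DEMO_FLAG_SUSPENDED BIT(0)

    static int demo_rng_read(void)
    {
        if (demo_chip_flags & DEMO_FLAG_SUSPENDED)
            return 0;  /* hand back nothing while suspended */
        /* ... issue the hardware command ... */
        return 1;
    }

    static void demo_resume(void)
    {
        demo_chip_flags &= ~DEMO_FLAG_SUSPENDED;
        wmb();  /* make the clear visible before later device activity */
    }
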
@@ -263,6 +263,7 @@ static inline void tpm_msleep(unsigned int delay_msec)
             delay_msec * 1000);
 };

+int tpm_chip_bootstrap(struct tpm_chip *chip);
 int tpm_chip_start(struct tpm_chip *chip);
 void tpm_chip_stop(struct tpm_chip *chip);
 struct tpm_chip *tpm_find_get_ops(struct tpm_chip *chip);

@@ -243,7 +243,7 @@ static int tpm_tis_init(struct device *dev, struct tpm_info *tpm_info)
     irq = tpm_info->irq;

     if (itpm || is_itpm(ACPI_COMPANION(dev)))
-        phy->priv.flags |= TPM_TIS_ITPM_WORKAROUND;
+        set_bit(TPM_TIS_ITPM_WORKAROUND, &phy->priv.flags);

     return tpm_tis_core_init(dev, &phy->priv, irq, &tpm_tcg,
                  ACPI_HANDLE(dev));

@@ -53,41 +53,63 @@ static int wait_for_tpm_stat(struct tpm_chip *chip, u8 mask,
     long rc;
     u8 status;
     bool canceled = false;
+    u8 sts_mask = 0;
+    int ret = 0;

     /* check current status */
     status = chip->ops->status(chip);
     if ((status & mask) == mask)
         return 0;

-    stop = jiffies + timeout;
+    /* check what status changes can be handled by irqs */
+    if (priv->int_mask & TPM_INTF_STS_VALID_INT)
+        sts_mask |= TPM_STS_VALID;

-    if (chip->flags & TPM_CHIP_FLAG_IRQ) {
+    if (priv->int_mask & TPM_INTF_DATA_AVAIL_INT)
+        sts_mask |= TPM_STS_DATA_AVAIL;
+
+    if (priv->int_mask & TPM_INTF_CMD_READY_INT)
+        sts_mask |= TPM_STS_COMMAND_READY;
+
+    sts_mask &= mask;
+
+    stop = jiffies + timeout;
+    /* process status changes with irq support */
+    if (sts_mask) {
+        ret = -ETIME;
 again:
         timeout = stop - jiffies;
         if ((long)timeout <= 0)
             return -ETIME;
         rc = wait_event_interruptible_timeout(*queue,
-            wait_for_tpm_stat_cond(chip, mask, check_cancel,
+            wait_for_tpm_stat_cond(chip, sts_mask, check_cancel,
                            &canceled),
             timeout);
         if (rc > 0) {
             if (canceled)
                 return -ECANCELED;
-            return 0;
+            ret = 0;
         }
         if (rc == -ERESTARTSYS && freezing(current)) {
             clear_thread_flag(TIF_SIGPENDING);
             goto again;
         }
-    } else {
-        do {
-            usleep_range(priv->timeout_min,
-                     priv->timeout_max);
-            status = chip->ops->status(chip);
-            if ((status & mask) == mask)
-                return 0;
-        } while (time_before(jiffies, stop));
     }
+
+    if (ret)
+        return ret;
+
+    mask &= ~sts_mask;
+    if (!mask) /* all done */
+        return 0;
+    /* process status changes without irq support */
+    do {
+        status = chip->ops->status(chip);
+        if ((status & mask) == mask)
+            return 0;
+        usleep_range(priv->timeout_min,
+                 priv->timeout_max);
+    } while (time_before(jiffies, stop));
     return -ETIME;
 }

@@ -376,7 +398,7 @@ static int tpm_tis_send_data(struct tpm_chip *chip, const u8 *buf, size_t len)
     struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
     int rc, status, burstcnt;
     size_t count = 0;
-    bool itpm = priv->flags & TPM_TIS_ITPM_WORKAROUND;
+    bool itpm = test_bit(TPM_TIS_ITPM_WORKAROUND, &priv->flags);

     status = tpm_tis_status(chip);
     if ((status & TPM_STS_COMMAND_READY) == 0) {
@@ -509,7 +531,8 @@ static int tpm_tis_send(struct tpm_chip *chip, u8 *buf, size_t len)
     int rc, irq;
     struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);

-    if (!(chip->flags & TPM_CHIP_FLAG_IRQ) || priv->irq_tested)
+    if (!(chip->flags & TPM_CHIP_FLAG_IRQ) ||
+         test_bit(TPM_TIS_IRQ_TESTED, &priv->flags))
         return tpm_tis_send_main(chip, buf, len);

     /* Verify receipt of the expected IRQ */
@@ -519,11 +542,11 @@ static int tpm_tis_send(struct tpm_chip *chip, u8 *buf, size_t len)
     rc = tpm_tis_send_main(chip, buf, len);
     priv->irq = irq;
     chip->flags |= TPM_CHIP_FLAG_IRQ;
-    if (!priv->irq_tested)
+    if (!test_bit(TPM_TIS_IRQ_TESTED, &priv->flags))
         tpm_msleep(1);
-    if (!priv->irq_tested)
+    if (!test_bit(TPM_TIS_IRQ_TESTED, &priv->flags))
         disable_interrupts(chip);
-    priv->irq_tested = true;
+    set_bit(TPM_TIS_IRQ_TESTED, &priv->flags);
     return rc;
 }

@@ -666,7 +689,7 @@ static int probe_itpm(struct tpm_chip *chip)
     size_t len = sizeof(cmd_getticks);
     u16 vendor;

-    if (priv->flags & TPM_TIS_ITPM_WORKAROUND)
+    if (test_bit(TPM_TIS_ITPM_WORKAROUND, &priv->flags))
         return 0;

     rc = tpm_tis_read16(priv, TPM_DID_VID(0), &vendor);
@@ -686,13 +709,13 @@ static int probe_itpm(struct tpm_chip *chip)

     tpm_tis_ready(chip);

-    priv->flags |= TPM_TIS_ITPM_WORKAROUND;
+    set_bit(TPM_TIS_ITPM_WORKAROUND, &priv->flags);

     rc = tpm_tis_send_data(chip, cmd_getticks, len);
     if (rc == 0)
         dev_info(&chip->dev, "Detected an iTPM.\n");
     else {
-        priv->flags &= ~TPM_TIS_ITPM_WORKAROUND;
+        clear_bit(TPM_TIS_ITPM_WORKAROUND, &priv->flags);
         rc = -EFAULT;
     }

@@ -736,7 +759,7 @@ static irqreturn_t tis_int_handler(int dummy, void *dev_id)
     if (interrupt == 0)
         return IRQ_NONE;

-    priv->irq_tested = true;
+    set_bit(TPM_TIS_IRQ_TESTED, &priv->flags);
     if (interrupt & TPM_INTF_DATA_AVAIL_INT)
         wake_up_interruptible(&priv->read_queue);
     if (interrupt & TPM_INTF_LOCALITY_CHANGE_INT)
@@ -819,7 +842,7 @@ static int tpm_tis_probe_irq_single(struct tpm_chip *chip, u32 intmask,
     if (rc < 0)
         goto restore_irqs;

-    priv->irq_tested = false;
+    clear_bit(TPM_TIS_IRQ_TESTED, &priv->flags);

     /* Generate an interrupt by having the core call through to
      * tpm_tis_send
@@ -1031,8 +1054,40 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
     if (rc < 0)
         goto out_err;

-    intmask |= TPM_INTF_CMD_READY_INT | TPM_INTF_LOCALITY_CHANGE_INT |
-           TPM_INTF_DATA_AVAIL_INT | TPM_INTF_STS_VALID_INT;
+    /* Figure out the capabilities */
+    rc = tpm_tis_read32(priv, TPM_INTF_CAPS(priv->locality), &intfcaps);
+    if (rc < 0)
+        goto out_err;
+
+    dev_dbg(dev, "TPM interface capabilities (0x%x):\n",
+        intfcaps);
+    if (intfcaps & TPM_INTF_BURST_COUNT_STATIC)
+        dev_dbg(dev, "\tBurst Count Static\n");
+    if (intfcaps & TPM_INTF_CMD_READY_INT) {
+        intmask |= TPM_INTF_CMD_READY_INT;
+        dev_dbg(dev, "\tCommand Ready Int Support\n");
+    }
+    if (intfcaps & TPM_INTF_INT_EDGE_FALLING)
+        dev_dbg(dev, "\tInterrupt Edge Falling\n");
+    if (intfcaps & TPM_INTF_INT_EDGE_RISING)
+        dev_dbg(dev, "\tInterrupt Edge Rising\n");
+    if (intfcaps & TPM_INTF_INT_LEVEL_LOW)
+        dev_dbg(dev, "\tInterrupt Level Low\n");
+    if (intfcaps & TPM_INTF_INT_LEVEL_HIGH)
+        dev_dbg(dev, "\tInterrupt Level High\n");
+    if (intfcaps & TPM_INTF_LOCALITY_CHANGE_INT) {
+        intmask |= TPM_INTF_LOCALITY_CHANGE_INT;
+        dev_dbg(dev, "\tLocality Change Int Support\n");
+    }
+    if (intfcaps & TPM_INTF_STS_VALID_INT) {
+        intmask |= TPM_INTF_STS_VALID_INT;
+        dev_dbg(dev, "\tSts Valid Int Support\n");
+    }
+    if (intfcaps & TPM_INTF_DATA_AVAIL_INT) {
+        intmask |= TPM_INTF_DATA_AVAIL_INT;
+        dev_dbg(dev, "\tData Avail Int Support\n");
+    }
+
     intmask &= ~TPM_GLOBAL_INT_ENABLE;

     rc = tpm_tis_request_locality(chip, 0);
@@ -1066,35 +1121,14 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
         goto out_err;
     }

-    /* Figure out the capabilities */
-    rc = tpm_tis_read32(priv, TPM_INTF_CAPS(priv->locality), &intfcaps);
-    if (rc < 0)
-        goto out_err;
-
-    dev_dbg(dev, "TPM interface capabilities (0x%x):\n",
-        intfcaps);
-    if (intfcaps & TPM_INTF_BURST_COUNT_STATIC)
-        dev_dbg(dev, "\tBurst Count Static\n");
-    if (intfcaps & TPM_INTF_CMD_READY_INT)
-        dev_dbg(dev, "\tCommand Ready Int Support\n");
-    if (intfcaps & TPM_INTF_INT_EDGE_FALLING)
-        dev_dbg(dev, "\tInterrupt Edge Falling\n");
-    if (intfcaps & TPM_INTF_INT_EDGE_RISING)
-        dev_dbg(dev, "\tInterrupt Edge Rising\n");
-    if (intfcaps & TPM_INTF_INT_LEVEL_LOW)
-        dev_dbg(dev, "\tInterrupt Level Low\n");
-    if (intfcaps & TPM_INTF_INT_LEVEL_HIGH)
-        dev_dbg(dev, "\tInterrupt Level High\n");
-    if (intfcaps & TPM_INTF_LOCALITY_CHANGE_INT)
-        dev_dbg(dev, "\tLocality Change Int Support\n");
-    if (intfcaps & TPM_INTF_STS_VALID_INT)
-        dev_dbg(dev, "\tSts Valid Int Support\n");
-    if (intfcaps & TPM_INTF_DATA_AVAIL_INT)
-        dev_dbg(dev, "\tData Avail Int Support\n");
-
     /* INTERRUPT Setup */
     init_waitqueue_head(&priv->read_queue);
     init_waitqueue_head(&priv->int_queue);
+
+    rc = tpm_chip_bootstrap(chip);
+    if (rc)
+        goto out_err;
+
     if (irq != -1) {
         /*
          * Before doing irq testing issue a command to the TPM in polling mode
@@ -1122,7 +1156,9 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
         else
             tpm_tis_probe_irq(chip, intmask);

-        if (!(chip->flags & TPM_CHIP_FLAG_IRQ)) {
+        if (chip->flags & TPM_CHIP_FLAG_IRQ) {
+            priv->int_mask = intmask;
+        } else {
             dev_err(&chip->dev, FW_BUG
                 "TPM interrupt not working, polling instead\n");

@@ -1159,31 +1195,20 @@ static void tpm_tis_reenable_interrupts(struct tpm_chip *chip)
     u32 intmask;
     int rc;

-    if (chip->ops->clk_enable != NULL)
-        chip->ops->clk_enable(chip, true);
-
-    /* reenable interrupts that device may have lost or
-     * BIOS/firmware may have disabled
+    /*
+     * Re-enable interrupts that device may have lost or BIOS/firmware may
+     * have disabled.
      */
     rc = tpm_tis_write8(priv, TPM_INT_VECTOR(priv->locality), priv->irq);
-    if (rc < 0)
-        goto out;
+    if (rc < 0) {
+        dev_err(&chip->dev, "Setting IRQ failed.\n");
+        return;
+    }

-    rc = tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &intmask);
-    if (rc < 0)
-        goto out;
-
-    intmask |= TPM_INTF_CMD_READY_INT
-        | TPM_INTF_LOCALITY_CHANGE_INT | TPM_INTF_DATA_AVAIL_INT
-        | TPM_INTF_STS_VALID_INT | TPM_GLOBAL_INT_ENABLE;
-
-    tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask);
-
-out:
-    if (chip->ops->clk_enable != NULL)
-        chip->ops->clk_enable(chip, false);
-
-    return;
+    intmask = priv->int_mask | TPM_GLOBAL_INT_ENABLE;
+    rc = tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask);
+    if (rc < 0)
+        dev_err(&chip->dev, "Enabling interrupts failed.\n");
 }

 int tpm_tis_resume(struct device *dev)
@@ -1191,27 +1216,27 @@ int tpm_tis_resume(struct device *dev)
     struct tpm_chip *chip = dev_get_drvdata(dev);
     int ret;

-    ret = tpm_tis_request_locality(chip, 0);
-    if (ret < 0)
+    ret = tpm_chip_start(chip);
+    if (ret)
         return ret;

     if (chip->flags & TPM_CHIP_FLAG_IRQ)
         tpm_tis_reenable_interrupts(chip);

-    ret = tpm_pm_resume(dev);
-    if (ret)
-        goto out;
-
     /*
      * TPM 1.2 requires self-test on resume. This function actually returns
      * an error code but for unknown reason it isn't handled.
      */
     if (!(chip->flags & TPM_CHIP_FLAG_TPM2))
         tpm1_do_selftest(chip);
-out:
-    tpm_tis_relinquish_locality(chip, 0);

-    return ret;
+    tpm_chip_stop(chip);
+
+    ret = tpm_pm_resume(dev);
+    if (ret)
+        return ret;
+
+    return 0;
 }
 EXPORT_SYMBOL_GPL(tpm_tis_resume);
 #endif

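A recurring pattern in the tpm_tis hunks above is the move from a plain bool plus `priv->flags |= ...` to set_bit()/test_bit()/clear_bit() on an unsigned long: the flags are now touched from both the interrupt handler and thread context, and a plain read-modify-write can lose updates. A minimal sketch with illustrative names:

    enum { DEMO_IRQ_TESTED };           /* bit number, not a mask */
    static unsigned long demo_flags;

    /* Atomic per-bit operations are safe against concurrent writers,
     * including ones running in hard-irq context. */
    static void demo_mark_irq_seen(void)
    {
        set_bit(DEMO_IRQ_TESTED, &demo_flags);
    }

    static bool demo_irq_seen(void)
    {
        return test_bit(DEMO_IRQ_TESTED, &demo_flags);
    }
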
@@ -87,6 +87,7 @@ enum tpm_tis_flags {
     TPM_TIS_ITPM_WORKAROUND     = BIT(0),
     TPM_TIS_INVALID_STATUS      = BIT(1),
     TPM_TIS_DEFAULT_CANCELLATION    = BIT(2),
+    TPM_TIS_IRQ_TESTED      = BIT(3),
 };

 struct tpm_tis_data {
@@ -95,7 +96,7 @@ struct tpm_tis_data {
     unsigned int locality_count;
     int locality;
     int irq;
-    bool irq_tested;
+    unsigned int int_mask;
     unsigned long flags;
     void __iomem *ilb_base_addr;
     u16 clkrun_enabled;

@@ -103,23 +103,57 @@ int devm_cxl_port_enumerate_dports(struct cxl_port *port)
 }
 EXPORT_SYMBOL_NS_GPL(devm_cxl_port_enumerate_dports, CXL);

-/*
- * Wait up to @media_ready_timeout for the device to report memory
- * active.
- */
-int cxl_await_media_ready(struct cxl_dev_state *cxlds)
+static int cxl_dvsec_mem_range_valid(struct cxl_dev_state *cxlds, int id)
 {
     struct pci_dev *pdev = to_pci_dev(cxlds->dev);
     int d = cxlds->cxl_dvsec;
+    bool valid = false;
+    int rc, i;
+    u32 temp;
+
+    if (id > CXL_DVSEC_RANGE_MAX)
+        return -EINVAL;
+
+    /* Check MEM INFO VALID bit first, give up after 1s */
+    i = 1;
+    do {
+        rc = pci_read_config_dword(pdev,
+                       d + CXL_DVSEC_RANGE_SIZE_LOW(id),
+                       &temp);
+        if (rc)
+            return rc;
+
+        valid = FIELD_GET(CXL_DVSEC_MEM_INFO_VALID, temp);
+        if (valid)
+            break;
+        msleep(1000);
+    } while (i--);
+
+    if (!valid) {
+        dev_err(&pdev->dev,
+            "Timeout awaiting memory range %d valid after 1s.\n",
+            id);
+        return -ETIMEDOUT;
+    }
+
+    return 0;
+}
+
+static int cxl_dvsec_mem_range_active(struct cxl_dev_state *cxlds, int id)
+{
+    struct pci_dev *pdev = to_pci_dev(cxlds->dev);
+    int d = cxlds->cxl_dvsec;
     bool active = false;
-    u64 md_status;
     int rc, i;
+    u32 temp;
+
+    if (id > CXL_DVSEC_RANGE_MAX)
+        return -EINVAL;

     /* Check MEM ACTIVE bit, up to 60s timeout by default */
     for (i = media_ready_timeout; i; i--) {
-        u32 temp;
-
         rc = pci_read_config_dword(
-            pdev, d + CXL_DVSEC_RANGE_SIZE_LOW(0), &temp);
+            pdev, d + CXL_DVSEC_RANGE_SIZE_LOW(id), &temp);
         if (rc)
             return rc;

@@ -136,6 +170,39 @@ int cxl_await_media_ready(struct cxl_dev_state *cxlds)
         return -ETIMEDOUT;
     }

+    return 0;
+}
+
+/*
+ * Wait up to @media_ready_timeout for the device to report memory
+ * active.
+ */
+int cxl_await_media_ready(struct cxl_dev_state *cxlds)
+{
+    struct pci_dev *pdev = to_pci_dev(cxlds->dev);
+    int d = cxlds->cxl_dvsec;
+    int rc, i, hdm_count;
+    u64 md_status;
+    u16 cap;
+
+    rc = pci_read_config_word(pdev,
+                  d + CXL_DVSEC_CAP_OFFSET, &cap);
+    if (rc)
+        return rc;
+
+    hdm_count = FIELD_GET(CXL_DVSEC_HDM_COUNT_MASK, cap);
+    for (i = 0; i < hdm_count; i++) {
+        rc = cxl_dvsec_mem_range_valid(cxlds, i);
+        if (rc)
+            return rc;
+    }
+
+    for (i = 0; i < hdm_count; i++) {
+        rc = cxl_dvsec_mem_range_active(cxlds, i);
+        if (rc)
+            return rc;
+    }
+
     md_status = readq(cxlds->regs.memdev + CXLMDEV_STATUS_OFFSET);
     if (!CXLMDEV_READY(md_status))
         return -EIO;

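The CXL change above splits the media-ready wait into two bounded polls: first wait for the Memory_Info_Valid latch (the code gives it 1s), and only then trust and poll the Memory_Active bit per HDM range. A generic sketch of the bounded-poll shape (illustrative callback, not the driver API):

    /* Poll a readiness predicate once per second, up to timeout_s. */
    static int demo_poll_ready(bool (*ready)(void), int timeout_s)
    {
        int i;

        for (i = 0; i < timeout_s; i++) {
            if (ready())
                return 0;
            msleep(1000);
        }
        return ready() ? 0 : -ETIMEDOUT;
    }
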
@@ -31,6 +31,8 @@
 #define   CXL_DVSEC_RANGE_BASE_LOW(i)   (0x24 + (i * 0x10))
 #define     CXL_DVSEC_MEM_BASE_LOW_MASK GENMASK(31, 28)

+#define CXL_DVSEC_RANGE_MAX 2
+
 /* CXL 2.0 8.1.4: Non-CXL Function Map DVSEC */
 #define CXL_DVSEC_FUNCTION_MAP          2

@@ -15,6 +15,8 @@

 #include "common.h"

+static DEFINE_IDA(ffa_bus_id);
+
 static int ffa_device_match(struct device *dev, struct device_driver *drv)
 {
     const struct ffa_device_id *id_table;
@@ -53,7 +55,8 @@ static void ffa_device_remove(struct device *dev)
 {
     struct ffa_driver *ffa_drv = to_ffa_driver(dev->driver);

-    ffa_drv->remove(to_ffa_dev(dev));
+    if (ffa_drv->remove)
+        ffa_drv->remove(to_ffa_dev(dev));
 }

 static int ffa_device_uevent(struct device *dev, struct kobj_uevent_env *env)
@@ -130,6 +133,7 @@ static void ffa_release_device(struct device *dev)
 {
     struct ffa_device *ffa_dev = to_ffa_dev(dev);

+    ida_free(&ffa_bus_id, ffa_dev->id);
     kfree(ffa_dev);
 }

@@ -170,18 +174,24 @@ bool ffa_device_is_valid(struct ffa_device *ffa_dev)
 struct ffa_device *ffa_device_register(const uuid_t *uuid, int vm_id,
                        const struct ffa_ops *ops)
 {
-    int ret;
+    int id, ret;
     struct device *dev;
     struct ffa_device *ffa_dev;

+    id = ida_alloc_min(&ffa_bus_id, 1, GFP_KERNEL);
+    if (id < 0)
+        return NULL;
+
     ffa_dev = kzalloc(sizeof(*ffa_dev), GFP_KERNEL);
-    if (!ffa_dev)
+    if (!ffa_dev) {
+        ida_free(&ffa_bus_id, id);
         return NULL;
+    }

     dev = &ffa_dev->dev;
     dev->bus = &ffa_bus_type;
     dev->release = ffa_release_device;
-    dev_set_name(&ffa_dev->dev, "arm-ffa-%04x", vm_id);
+    dev_set_name(&ffa_dev->dev, "arm-ffa-%d", id);

     ffa_dev->vm_id = vm_id;
     ffa_dev->ops = ops;
@@ -217,4 +227,5 @@ void arm_ffa_bus_exit(void)
 {
     ffa_devices_unregister();
     bus_unregister(&ffa_bus_type);
+    ida_destroy(&ffa_bus_id);
 }

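The bus.c change above fixes device-name collisions when multiple logical partitions share a vm_id by keying names to an IDA-allocated id instead. The lifetime rule that matters: allocate the id before the object, free it on the allocation failure path and in the release callback, and destroy the IDA only after all users are gone. A minimal sketch with illustrative names:

    static DEFINE_IDA(demo_ida);

    static int demo_make_name(struct device *dev)
    {
        int id = ida_alloc_min(&demo_ida, 1, GFP_KERNEL);

        if (id < 0)
            return id;
        dev_set_name(dev, "demo-%d", id);
        return id;  /* caller stores it and ida_free()s it on release */
    }
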
@@ -420,12 +420,17 @@ ffa_setup_and_transmit(u32 func_id, void *buffer, u32 max_fragsize,
         ep_mem_access->receiver = args->attrs[idx].receiver;
         ep_mem_access->attrs = args->attrs[idx].attrs;
         ep_mem_access->composite_off = COMPOSITE_OFFSET(args->nattrs);
+        ep_mem_access->flag = 0;
+        ep_mem_access->reserved = 0;
     }
+    mem_region->reserved_0 = 0;
+    mem_region->reserved_1 = 0;
     mem_region->ep_count = args->nattrs;

     composite = buffer + COMPOSITE_OFFSET(args->nattrs);
     composite->total_pg_cnt = ffa_get_num_pages_sg(args->sg);
     composite->addr_range_cnt = num_entries;
+    composite->reserved = 0;

     length = COMPOSITE_CONSTITUENTS_OFFSET(args->nattrs, num_entries);
     frag_len = COMPOSITE_CONSTITUENTS_OFFSET(args->nattrs, 0);
@@ -460,6 +465,7 @@ ffa_setup_and_transmit(u32 func_id, void *buffer, u32 max_fragsize,

         constituents->address = sg_phys(args->sg);
         constituents->pg_cnt = args->sg->length / FFA_PAGE_SIZE;
+        constituents->reserved = 0;
         constituents++;
         frag_len += sizeof(struct ffa_mem_region_addr_range);
     } while ((args->sg = sg_next(args->sg)));

@@ -368,7 +368,7 @@ static void gpio_mockup_debugfs_setup(struct device *dev,
         priv->offset = i;
         priv->desc = gpiochip_get_desc(gc, i);

-        debugfs_create_file(name, 0200, chip->dbg_dir, priv,
+        debugfs_create_file(name, 0600, chip->dbg_dir, priv,
                     &gpio_mockup_debugfs_ops);
     }
 }

@@ -1328,12 +1328,9 @@ int amdgpu_mes_self_test(struct amdgpu_device *adev)
     struct amdgpu_mes_ctx_data ctx_data = {0};
     struct amdgpu_ring *added_rings[AMDGPU_MES_CTX_MAX_RINGS] = { NULL };
     int gang_ids[3] = {0};
-    int queue_types[][2] = { { AMDGPU_RING_TYPE_GFX,
-                   AMDGPU_MES_CTX_MAX_GFX_RINGS},
-                 { AMDGPU_RING_TYPE_COMPUTE,
-                   AMDGPU_MES_CTX_MAX_COMPUTE_RINGS},
-                 { AMDGPU_RING_TYPE_SDMA,
-                   AMDGPU_MES_CTX_MAX_SDMA_RINGS } };
+    int queue_types[][2] = { { AMDGPU_RING_TYPE_GFX, 1 },
+                 { AMDGPU_RING_TYPE_COMPUTE, 1 },
+                 { AMDGPU_RING_TYPE_SDMA, 1} };
     int i, r, pasid, k = 0;

     pasid = amdgpu_pasid_alloc(16);

@@ -390,6 +390,7 @@ static int mes_v11_0_set_hw_resources(struct amdgpu_mes *mes)
     mes_set_hw_res_pkt.disable_reset = 1;
     mes_set_hw_res_pkt.disable_mes_log = 1;
     mes_set_hw_res_pkt.use_different_vmid_compute = 1;
+    mes_set_hw_res_pkt.enable_reg_active_poll = 1;
     mes_set_hw_res_pkt.oversubscription_timer = 50;

     return mes_v11_0_submit_pkt_and_poll_completion(mes,

@@ -1634,14 +1634,18 @@ static bool dc_link_construct_legacy(struct dc_link *link,
         link->irq_source_hpd = DC_IRQ_SOURCE_INVALID;

         switch (link->dc->config.allow_edp_hotplug_detection) {
-        case 1: // only the 1st eDP handles hotplug
+        case HPD_EN_FOR_ALL_EDP:
+            link->irq_source_hpd_rx =
+                    dal_irq_get_rx_source(link->hpd_gpio);
+            break;
+        case HPD_EN_FOR_PRIMARY_EDP_ONLY:
             if (link->link_index == 0)
                 link->irq_source_hpd_rx =
                         dal_irq_get_rx_source(link->hpd_gpio);
             else
                 link->irq_source_hpd = DC_IRQ_SOURCE_INVALID;
             break;
-        case 2: // only the 2nd eDP handles hotplug
+        case HPD_EN_FOR_SECONDARY_EDP_ONLY:
             if (link->link_index == 1)
                 link->irq_source_hpd_rx =
                         dal_irq_get_rx_source(link->hpd_gpio);
@@ -1649,6 +1653,7 @@ static bool dc_link_construct_legacy(struct dc_link *link,
                 link->irq_source_hpd = DC_IRQ_SOURCE_INVALID;
             break;
         default:
+            link->irq_source_hpd = DC_IRQ_SOURCE_INVALID;
             break;
         }
     }

@@ -993,4 +993,10 @@ struct display_endpoint_id {
     enum display_endpoint_type ep_type;
 };

+enum dc_hpd_enable_select {
+    HPD_EN_FOR_ALL_EDP = 0,
+    HPD_EN_FOR_PRIMARY_EDP_ONLY,
+    HPD_EN_FOR_SECONDARY_EDP_ONLY,
+};
+
 #endif /* DC_TYPES_H_ */

@@ -222,7 +222,11 @@ union MESAPI_SET_HW_RESOURCES {
             uint32_t apply_grbm_remote_register_dummy_read_wa : 1;
             uint32_t second_gfx_pipe_enabled : 1;
             uint32_t enable_level_process_quantum_check : 1;
-            uint32_t reserved : 25;
+            uint32_t legacy_sch_mode : 1;
+            uint32_t disable_add_queue_wptr_mc_addr : 1;
+            uint32_t enable_mes_event_int_logging : 1;
+            uint32_t enable_reg_active_poll : 1;
+            uint32_t reserved : 21;
         };
         uint32_t uint32_t_all;
     };

@@ -869,13 +869,11 @@ static ssize_t amdgpu_get_pp_od_clk_voltage(struct device *dev,
     }
     if (ret == -ENOENT) {
         size = amdgpu_dpm_print_clock_levels(adev, OD_SCLK, buf);
-        if (size > 0) {
-            size += amdgpu_dpm_print_clock_levels(adev, OD_MCLK, buf + size);
-            size += amdgpu_dpm_print_clock_levels(adev, OD_VDDC_CURVE, buf + size);
-            size += amdgpu_dpm_print_clock_levels(adev, OD_VDDGFX_OFFSET, buf + size);
-            size += amdgpu_dpm_print_clock_levels(adev, OD_RANGE, buf + size);
-            size += amdgpu_dpm_print_clock_levels(adev, OD_CCLK, buf + size);
-        }
+        size += amdgpu_dpm_print_clock_levels(adev, OD_MCLK, buf + size);
+        size += amdgpu_dpm_print_clock_levels(adev, OD_VDDC_CURVE, buf + size);
+        size += amdgpu_dpm_print_clock_levels(adev, OD_VDDGFX_OFFSET, buf + size);
+        size += amdgpu_dpm_print_clock_levels(adev, OD_RANGE, buf + size);
+        size += amdgpu_dpm_print_clock_levels(adev, OD_CCLK, buf + size);
     }

     if (size == 0)

@@ -125,6 +125,7 @@ static struct cmn2asic_msg_mapping smu_v13_0_7_message_map[SMU_MSG_MAX_COUNT] =
     MSG_MAP(ArmD3,          PPSMC_MSG_ArmD3,                0),
     MSG_MAP(AllowGpo,       PPSMC_MSG_SetGpoAllow,          0),
     MSG_MAP(GetPptLimit,        PPSMC_MSG_GetPptLimit,          0),
+    MSG_MAP(NotifyPowerSource,  PPSMC_MSG_NotifyPowerSource,    0),
 };

 static struct cmn2asic_mapping smu_v13_0_7_clk_map[SMU_CLK_COUNT] = {

@@ -264,28 +264,10 @@ void drmm_kfree(struct drm_device *dev, void *data)
 }
 EXPORT_SYMBOL(drmm_kfree);

-static void drmm_mutex_release(struct drm_device *dev, void *res)
+void __drmm_mutex_release(struct drm_device *dev, void *res)
 {
     struct mutex *lock = res;

     mutex_destroy(lock);
 }
-
-/**
- * drmm_mutex_init - &drm_device-managed mutex_init()
- * @dev: DRM device
- * @lock: lock to be initialized
- *
- * Returns:
- * 0 on success, or a negative errno code otherwise.
- *
- * This is a &drm_device-managed version of mutex_init(). The initialized
- * lock is automatically destroyed on the final drm_dev_put().
- */
-int drmm_mutex_init(struct drm_device *dev, struct mutex *lock)
-{
-    mutex_init(lock);
-
-    return drmm_add_action_or_reset(dev, drmm_mutex_release, lock);
-}
-EXPORT_SYMBOL(drmm_mutex_init);
+EXPORT_SYMBOL(__drmm_mutex_release);

@@ -640,6 +640,11 @@ void mgag200_crtc_helper_atomic_enable(struct drm_crtc *crtc, struct drm_atomic_
     if (funcs->pixpllc_atomic_update)
         funcs->pixpllc_atomic_update(crtc, old_state);

+    if (crtc_state->gamma_lut)
+        mgag200_crtc_set_gamma(mdev, format, crtc_state->gamma_lut->data);
+    else
+        mgag200_crtc_set_gamma_linear(mdev, format);
+
     mgag200_enable_display(mdev);

     if (funcs->enable_vidrst)

@@ -100,6 +100,16 @@ static void radeon_hotplug_work_func(struct work_struct *work)

static void radeon_dp_work_func(struct work_struct *work)
{
	struct radeon_device *rdev = container_of(work, struct radeon_device,
						  dp_work);
	struct drm_device *dev = rdev->ddev;
	struct drm_mode_config *mode_config = &dev->mode_config;
	struct drm_connector *connector;

	mutex_lock(&mode_config->mutex);
	list_for_each_entry(connector, &mode_config->connector_list, head)
		radeon_connector_hotplug(connector);
	mutex_unlock(&mode_config->mutex);
}

/**
@@ -942,7 +942,7 @@ tmc_etr_buf_insert_barrier_packet(struct etr_buf *etr_buf, u64 offset)

	len = tmc_etr_buf_get_data(etr_buf, offset,
				   CORESIGHT_BARRIER_PKT_SIZE, &bufp);
	if (WARN_ON(len < CORESIGHT_BARRIER_PKT_SIZE))
	if (WARN_ON(len < 0 || len < CORESIGHT_BARRIER_PKT_SIZE))
		return -EINVAL;
	coresight_insert_barrier_packet(bufp);
	return offset + CORESIGHT_BARRIER_PKT_SIZE;
@@ -50,7 +50,7 @@ void __iomem *mips_gic_base;

static DEFINE_PER_CPU_READ_MOSTLY(unsigned long[GIC_MAX_LONGS], pcpu_masks);

static DEFINE_SPINLOCK(gic_lock);
static DEFINE_RAW_SPINLOCK(gic_lock);
static struct irq_domain *gic_irq_domain;
static int gic_shared_intrs;
static unsigned int gic_cpu_pin;
@@ -211,7 +211,7 @@ static int gic_set_type(struct irq_data *d, unsigned int type)

	irq = GIC_HWIRQ_TO_SHARED(d->hwirq);

	spin_lock_irqsave(&gic_lock, flags);
	raw_spin_lock_irqsave(&gic_lock, flags);
	switch (type & IRQ_TYPE_SENSE_MASK) {
	case IRQ_TYPE_EDGE_FALLING:
		pol = GIC_POL_FALLING_EDGE;
@@ -251,7 +251,7 @@ static int gic_set_type(struct irq_data *d, unsigned int type)
	else
		irq_set_chip_handler_name_locked(d, &gic_level_irq_controller,
						 handle_level_irq, NULL);
	spin_unlock_irqrestore(&gic_lock, flags);
	raw_spin_unlock_irqrestore(&gic_lock, flags);

	return 0;
}
@@ -269,7 +269,7 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *cpumask,
		return -EINVAL;

	/* Assumption : cpumask refers to a single CPU */
	spin_lock_irqsave(&gic_lock, flags);
	raw_spin_lock_irqsave(&gic_lock, flags);

	/* Re-route this IRQ */
	write_gic_map_vp(irq, BIT(mips_cm_vp_id(cpu)));
@@ -280,7 +280,7 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *cpumask,
	set_bit(irq, per_cpu_ptr(pcpu_masks, cpu));

	irq_data_update_effective_affinity(d, cpumask_of(cpu));
	spin_unlock_irqrestore(&gic_lock, flags);
	raw_spin_unlock_irqrestore(&gic_lock, flags);

	return IRQ_SET_MASK_OK;
}
@@ -358,12 +358,12 @@ static void gic_mask_local_irq_all_vpes(struct irq_data *d)
	cd = irq_data_get_irq_chip_data(d);
	cd->mask = false;

	spin_lock_irqsave(&gic_lock, flags);
	raw_spin_lock_irqsave(&gic_lock, flags);
	for_each_online_cpu(cpu) {
		write_gic_vl_other(mips_cm_vp_id(cpu));
		write_gic_vo_rmask(BIT(intr));
	}
	spin_unlock_irqrestore(&gic_lock, flags);
	raw_spin_unlock_irqrestore(&gic_lock, flags);
}

static void gic_unmask_local_irq_all_vpes(struct irq_data *d)
@@ -376,12 +376,12 @@ static void gic_unmask_local_irq_all_vpes(struct irq_data *d)
	cd = irq_data_get_irq_chip_data(d);
	cd->mask = true;

	spin_lock_irqsave(&gic_lock, flags);
	raw_spin_lock_irqsave(&gic_lock, flags);
	for_each_online_cpu(cpu) {
		write_gic_vl_other(mips_cm_vp_id(cpu));
		write_gic_vo_smask(BIT(intr));
	}
	spin_unlock_irqrestore(&gic_lock, flags);
	raw_spin_unlock_irqrestore(&gic_lock, flags);
}

static void gic_all_vpes_irq_cpu_online(void)
@@ -394,19 +394,21 @@ static void gic_all_vpes_irq_cpu_online(void)
	unsigned long flags;
	int i;

	spin_lock_irqsave(&gic_lock, flags);
	raw_spin_lock_irqsave(&gic_lock, flags);

	for (i = 0; i < ARRAY_SIZE(local_intrs); i++) {
		unsigned int intr = local_intrs[i];
		struct gic_all_vpes_chip_data *cd;

		if (!gic_local_irq_is_routable(intr))
			continue;
		cd = &gic_all_vpes_chip_data[intr];
		write_gic_vl_map(mips_gic_vx_map_reg(intr), cd->map);
		if (cd->mask)
			write_gic_vl_smask(BIT(intr));
	}

	spin_unlock_irqrestore(&gic_lock, flags);
	raw_spin_unlock_irqrestore(&gic_lock, flags);
}

static struct irq_chip gic_all_vpes_local_irq_controller = {
@@ -436,11 +438,11 @@ static int gic_shared_irq_domain_map(struct irq_domain *d, unsigned int virq,

	data = irq_get_irq_data(virq);

	spin_lock_irqsave(&gic_lock, flags);
	raw_spin_lock_irqsave(&gic_lock, flags);
	write_gic_map_pin(intr, GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin);
	write_gic_map_vp(intr, BIT(mips_cm_vp_id(cpu)));
	irq_data_update_effective_affinity(data, cpumask_of(cpu));
	spin_unlock_irqrestore(&gic_lock, flags);
	raw_spin_unlock_irqrestore(&gic_lock, flags);

	return 0;
}
@@ -535,12 +537,12 @@ static int gic_irq_domain_map(struct irq_domain *d, unsigned int virq,
	if (!gic_local_irq_is_routable(intr))
		return -EPERM;

	spin_lock_irqsave(&gic_lock, flags);
	raw_spin_lock_irqsave(&gic_lock, flags);
	for_each_online_cpu(cpu) {
		write_gic_vl_other(mips_cm_vp_id(cpu));
		write_gic_vo_map(mips_gic_vx_map_reg(intr), map);
	}
	spin_unlock_irqrestore(&gic_lock, flags);
	raw_spin_unlock_irqrestore(&gic_lock, flags);

	return 0;
}
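[Note: every gic_lock hunk above is the same mechanical conversion. A hedged, driver-agnostic sketch of the rule being applied: on PREEMPT_RT a spinlock_t can sleep, so a lock that must be taken with interrupts hard-disabled has to be a raw_spinlock_t. All names below are hypothetical, not GIC code.]

static DEFINE_RAW_SPINLOCK(example_lock);

static void example_hardirq_path(void)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&example_lock, flags);	/* never sleeps, even on RT */
	/* serialized hardware register accesses go here */
	raw_spin_unlock_irqrestore(&example_lock, flags);
}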
@@ -316,6 +316,16 @@ static int usb_shark_probe(struct usb_interface *intf,
{
	struct shark_device *shark;
	int retval = -ENOMEM;
	static const u8 ep_addresses[] = {
		SHARK_IN_EP | USB_DIR_IN,
		SHARK_OUT_EP | USB_DIR_OUT,
		0};

	/* Are the expected endpoints present? */
	if (!usb_check_int_endpoints(intf, ep_addresses)) {
		dev_err(&intf->dev, "Invalid radioSHARK device\n");
		return -EINVAL;
	}

	shark = kzalloc(sizeof(struct shark_device), GFP_KERNEL);
	if (!shark)
@@ -282,6 +282,16 @@ static int usb_shark_probe(struct usb_interface *intf,
{
	struct shark_device *shark;
	int retval = -ENOMEM;
	static const u8 ep_addresses[] = {
		SHARK_IN_EP | USB_DIR_IN,
		SHARK_OUT_EP | USB_DIR_OUT,
		0};

	/* Are the expected endpoints present? */
	if (!usb_check_int_endpoints(intf, ep_addresses)) {
		dev_err(&intf->dev, "Invalid radioSHARK2 device\n");
		return -EINVAL;
	}

	shark = kzalloc(sizeof(struct shark_device), GFP_KERNEL);
	if (!shark)
@@ -266,6 +266,7 @@ static ssize_t power_ro_lock_store(struct device *dev,
		goto out_put;
	}
	req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_BOOT_WP;
	req_to_mmc_queue_req(req)->drv_op_result = -EIO;
	blk_execute_rq(req, false);
	ret = req_to_mmc_queue_req(req)->drv_op_result;
	blk_mq_free_request(req);
@@ -657,6 +658,7 @@ static int mmc_blk_ioctl_cmd(struct mmc_blk_data *md,
	idatas[0] = idata;
	req_to_mmc_queue_req(req)->drv_op =
		rpmb ? MMC_DRV_OP_IOCTL_RPMB : MMC_DRV_OP_IOCTL;
	req_to_mmc_queue_req(req)->drv_op_result = -EIO;
	req_to_mmc_queue_req(req)->drv_op_data = idatas;
	req_to_mmc_queue_req(req)->ioc_count = 1;
	blk_execute_rq(req, false);
@@ -728,6 +730,7 @@ static int mmc_blk_ioctl_multi_cmd(struct mmc_blk_data *md,
	}
	req_to_mmc_queue_req(req)->drv_op =
		rpmb ? MMC_DRV_OP_IOCTL_RPMB : MMC_DRV_OP_IOCTL;
	req_to_mmc_queue_req(req)->drv_op_result = -EIO;
	req_to_mmc_queue_req(req)->drv_op_data = idata;
	req_to_mmc_queue_req(req)->ioc_count = n;
	blk_execute_rq(req, false);
@@ -2812,6 +2815,7 @@ static int mmc_dbg_card_status_get(void *data, u64 *val)
	if (IS_ERR(req))
		return PTR_ERR(req);
	req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_GET_CARD_STATUS;
	req_to_mmc_queue_req(req)->drv_op_result = -EIO;
	blk_execute_rq(req, false);
	ret = req_to_mmc_queue_req(req)->drv_op_result;
	if (ret >= 0) {
@@ -2850,6 +2854,7 @@ static int mmc_ext_csd_open(struct inode *inode, struct file *filp)
		goto out_free;
	}
	req_to_mmc_queue_req(req)->drv_op = MMC_DRV_OP_GET_EXT_CSD;
	req_to_mmc_queue_req(req)->drv_op_result = -EIO;
	req_to_mmc_queue_req(req)->drv_op_data = &ext_csd;
	blk_execute_rq(req, false);
	err = req_to_mmc_queue_req(req)->drv_op_result;
@@ -1585,6 +1585,10 @@ sdhci_esdhc_imx_probe_dt(struct platform_device *pdev,
	if (ret)
		return ret;

	/* HS400/HS400ES require 8 bit bus */
	if (!(host->mmc->caps & MMC_CAP_8_BIT_DATA))
		host->mmc->caps2 &= ~(MMC_CAP2_HS400 | MMC_CAP2_HS400_ES);

	if (mmc_gpio_get_cd(host->mmc) >= 0)
		host->quirks &= ~SDHCI_QUIRK_BROKEN_CARD_DETECTION;
@@ -1669,10 +1673,6 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
		host->mmc_host_ops.execute_tuning = usdhc_execute_tuning;
	}

	err = sdhci_esdhc_imx_probe_dt(pdev, host, imx_data);
	if (err)
		goto disable_ahb_clk;

	if (imx_data->socdata->flags & ESDHC_FLAG_MAN_TUNING)
		sdhci_esdhc_ops.platform_execute_tuning =
			esdhc_executing_tuning;
@@ -1680,15 +1680,13 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
	if (imx_data->socdata->flags & ESDHC_FLAG_ERR004536)
		host->quirks |= SDHCI_QUIRK_BROKEN_ADMA;

	if (host->mmc->caps & MMC_CAP_8_BIT_DATA &&
	    imx_data->socdata->flags & ESDHC_FLAG_HS400)
	if (imx_data->socdata->flags & ESDHC_FLAG_HS400)
		host->mmc->caps2 |= MMC_CAP2_HS400;

	if (imx_data->socdata->flags & ESDHC_FLAG_BROKEN_AUTO_CMD23)
		host->quirks2 |= SDHCI_QUIRK2_ACMD23_BROKEN;

	if (host->mmc->caps & MMC_CAP_8_BIT_DATA &&
	    imx_data->socdata->flags & ESDHC_FLAG_HS400_ES) {
	if (imx_data->socdata->flags & ESDHC_FLAG_HS400_ES) {
		host->mmc->caps2 |= MMC_CAP2_HS400_ES;
		host->mmc_host_ops.hs400_enhanced_strobe =
			esdhc_hs400_enhanced_strobe;
@@ -1710,6 +1708,10 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
		goto disable_ahb_clk;
	}

	err = sdhci_esdhc_imx_probe_dt(pdev, host, imx_data);
	if (err)
		goto disable_ahb_clk;

	sdhci_esdhc_imx_hwinit(host);

	err = sdhci_add_host(host);
@@ -3921,7 +3921,11 @@ static int bond_slave_netdev_event(unsigned long event,
		unblock_netpoll_tx();
		break;
	case NETDEV_FEAT_CHANGE:
		bond_compute_features(bond);
		if (!bond->notifier_ctx) {
			bond->notifier_ctx = true;
			bond_compute_features(bond);
			bond->notifier_ctx = false;
		}
		break;
	case NETDEV_RESEND_IGMP:
		/* Propagate to master device */
@@ -6284,6 +6288,8 @@ static int bond_init(struct net_device *bond_dev)
	if (!bond->wq)
		return -ENOMEM;

	bond->notifier_ctx = false;

	spin_lock_init(&bond->stats_lock);
	netdev_lockdep_set_classes(bond_dev);
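[Note: the notifier_ctx flag above is a re-entrancy guard. bond_compute_features() can itself raise another NETDEV_FEAT_CHANGE, which would recurse back into this handler; the flag makes the nested invocation a no-op. A condensed sketch of the guard with explanatory comments, body simplified:]

	case NETDEV_FEAT_CHANGE:
		if (!bond->notifier_ctx) {		/* skip the event we triggered ourselves */
			bond->notifier_ctx = true;
			bond_compute_features(bond);	/* may re-raise NETDEV_FEAT_CHANGE */
			bond->notifier_ctx = false;
		}
		break;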
@@ -5044,6 +5044,7 @@ static const struct mv88e6xxx_ops mv88e6320_ops = {
	.phy_write = mv88e6xxx_g2_smi_phy_write,
	.port_set_link = mv88e6xxx_port_set_link,
	.port_sync_link = mv88e6xxx_port_sync_link,
	.port_set_rgmii_delay = mv88e6320_port_set_rgmii_delay,
	.port_set_speed_duplex = mv88e6185_port_set_speed_duplex,
	.port_tag_remap = mv88e6095_port_tag_remap,
	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
@@ -5088,6 +5089,7 @@ static const struct mv88e6xxx_ops mv88e6321_ops = {
	.phy_write = mv88e6xxx_g2_smi_phy_write,
	.port_set_link = mv88e6xxx_port_set_link,
	.port_sync_link = mv88e6xxx_port_sync_link,
	.port_set_rgmii_delay = mv88e6320_port_set_rgmii_delay,
	.port_set_speed_duplex = mv88e6185_port_set_speed_duplex,
	.port_tag_remap = mv88e6095_port_tag_remap,
	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
@@ -133,6 +133,15 @@ int mv88e6390_port_set_rgmii_delay(struct mv88e6xxx_chip *chip, int port,
	return mv88e6xxx_port_set_rgmii_delay(chip, port, mode);
}

int mv88e6320_port_set_rgmii_delay(struct mv88e6xxx_chip *chip, int port,
				   phy_interface_t mode)
{
	if (port != 2 && port != 5 && port != 6)
		return -EOPNOTSUPP;

	return mv88e6xxx_port_set_rgmii_delay(chip, port, mode);
}

int mv88e6xxx_port_set_link(struct mv88e6xxx_chip *chip, int port, int link)
{
	u16 reg;
@@ -332,6 +332,8 @@ int mv88e6xxx_port_wait_bit(struct mv88e6xxx_chip *chip, int port, int reg,

int mv88e6185_port_set_pause(struct mv88e6xxx_chip *chip, int port,
			     int pause);
int mv88e6320_port_set_rgmii_delay(struct mv88e6xxx_chip *chip, int port,
				   phy_interface_t mode);
int mv88e6352_port_set_rgmii_delay(struct mv88e6xxx_chip *chip, int port,
				   phy_interface_t mode);
int mv88e6390_port_set_rgmii_delay(struct mv88e6xxx_chip *chip, int port,
@@ -195,6 +195,7 @@ static int tc589_probe(struct pcmcia_device *link)
{
	struct el3_private *lp;
	struct net_device *dev;
	int ret;

	dev_dbg(&link->dev, "3c589_attach()\n");
@@ -218,7 +219,15 @@ static int tc589_probe(struct pcmcia_device *link)

	dev->ethtool_ops = &netdev_ethtool_ops;

	return tc589_config(link);
	ret = tc589_config(link);
	if (ret)
		goto err_free_netdev;

	return 0;

err_free_netdev:
	free_netdev(dev);
	return ret;
}

static void tc589_detach(struct pcmcia_device *link)
@@ -652,9 +652,7 @@ static void otx2_sqe_add_ext(struct otx2_nic *pfvf, struct otx2_snd_queue *sq,
			htons(ext->lso_sb - skb_network_offset(skb));
	} else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) {
		ext->lso_format = pfvf->hw.lso_tsov6_idx;

		ipv6_hdr(skb)->payload_len =
			htons(ext->lso_sb - skb_network_offset(skb));
		ipv6_hdr(skb)->payload_len = htons(tcp_hdrlen(skb));
	} else if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4) {
		__be16 l3_proto = vlan_get_protocol(skb);
		struct udphdr *udph = udp_hdr(skb);
@@ -1894,9 +1894,10 @@ static void mlx5_cmd_err_trace(struct mlx5_core_dev *dev, u16 opcode, u16 op_mod
static void cmd_status_log(struct mlx5_core_dev *dev, u16 opcode, u8 status,
			   u32 syndrome, int err)
{
	const char *namep = mlx5_command_str(opcode);
	struct mlx5_cmd_stats *stats;

	if (!err)
	if (!err || !(strcmp(namep, "unknown command opcode")))
		return;

	stats = &dev->cmd.stats[opcode];
@@ -175,6 +175,8 @@ static bool mlx5e_ptp_poll_ts_cq(struct mlx5e_cq *cq, int budget)
	/* ensure cq space is freed before enabling more cqes */
	wmb();

	mlx5e_txqsq_wake(&ptpsq->txqsq);

	return work_done == budget;
}

@@ -1338,11 +1338,13 @@ static void mlx5e_invalidate_encap(struct mlx5e_priv *priv,
	struct mlx5e_tc_flow *flow;

	list_for_each_entry(flow, encap_flows, tmp_list) {
		struct mlx5_flow_attr *attr = flow->attr;
		struct mlx5_esw_flow_attr *esw_attr;
		struct mlx5_flow_attr *attr;

		if (!mlx5e_is_offloaded_flow(flow))
			continue;

		attr = mlx5e_tc_get_encap_attr(flow);
		esw_attr = attr->esw_attr;

		if (flow_flag_test(flow, SLOW))
@@ -177,6 +177,8 @@ static inline u16 mlx5e_txqsq_get_next_pi(struct mlx5e_txqsq *sq, u16 size)
	return pi;
}

void mlx5e_txqsq_wake(struct mlx5e_txqsq *sq);

static inline u16 mlx5e_shampo_get_cqe_header_index(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
{
	return be16_to_cpu(cqe->shampo.header_entry_index) & (rq->mpwqe.shampo->hd_per_wq - 1);
@@ -1578,11 +1578,9 @@ bool mlx5e_tc_is_vf_tunnel(struct net_device *out_dev, struct net_device *route_
int mlx5e_tc_query_route_vport(struct net_device *out_dev, struct net_device *route_dev, u16 *vport)
{
	struct mlx5e_priv *out_priv, *route_priv;
	struct mlx5_devcom *devcom = NULL;
	struct mlx5_core_dev *route_mdev;
	struct mlx5_eswitch *esw;
	u16 vhca_id;
	int err;

	out_priv = netdev_priv(out_dev);
	esw = out_priv->mdev->priv.eswitch;
@@ -1591,6 +1589,9 @@ int mlx5e_tc_query_route_vport(struct net_device *out_dev, struct net_device *ro

	vhca_id = MLX5_CAP_GEN(route_mdev, vhca_id);
	if (mlx5_lag_is_active(out_priv->mdev)) {
		struct mlx5_devcom *devcom;
		int err;

		/* In lag case we may get devices from different eswitch instances.
		 * If we failed to get vport num, it means, mostly, that we on the wrong
		 * eswitch.
@@ -1599,16 +1600,16 @@ int mlx5e_tc_query_route_vport(struct net_device *out_dev, struct net_device *ro
		if (err != -ENOENT)
			return err;

		rcu_read_lock();
		devcom = out_priv->mdev->priv.devcom;
		esw = mlx5_devcom_get_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
		if (!esw)
			return -ENODEV;
		esw = mlx5_devcom_get_peer_data_rcu(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
		err = esw ? mlx5_eswitch_vhca_id_to_vport(esw, vhca_id, vport) : -ENODEV;
		rcu_read_unlock();

		return err;
	}

	err = mlx5_eswitch_vhca_id_to_vport(esw, vhca_id, vport);
	if (devcom)
		mlx5_devcom_release_peer_data(devcom, MLX5_DEVCOM_ESW_OFFLOADS);
	return err;
	return mlx5_eswitch_vhca_id_to_vport(esw, vhca_id, vport);
}

int mlx5e_tc_add_flow_mod_hdr(struct mlx5e_priv *priv,
@@ -5142,6 +5143,8 @@ int mlx5e_tc_esw_init(struct mlx5_rep_uplink_priv *uplink_priv)
		goto err_register_fib_notifier;
	}

	mlx5_esw_offloads_devcom_init(esw);

	return 0;

err_register_fib_notifier:
@@ -5168,7 +5171,7 @@ void mlx5e_tc_esw_cleanup(struct mlx5_rep_uplink_priv *uplink_priv)
	priv = netdev_priv(rpriv->netdev);
	esw = priv->mdev->priv.eswitch;

	mlx5e_tc_clean_fdb_peer_flows(esw);
	mlx5_esw_offloads_devcom_cleanup(esw);

	mlx5e_tc_tun_cleanup(uplink_priv->encap);

@@ -777,6 +777,17 @@ static void mlx5e_tx_wi_consume_fifo_skbs(struct mlx5e_txqsq *sq, struct mlx5e_t
	}
}

void mlx5e_txqsq_wake(struct mlx5e_txqsq *sq)
{
	if (netif_tx_queue_stopped(sq->txq) &&
	    mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, sq->stop_room) &&
	    mlx5e_ptpsq_fifo_has_room(sq) &&
	    !test_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state)) {
		netif_tx_wake_queue(sq->txq);
		sq->stats->wake++;
	}
}

bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget)
{
	struct mlx5e_sq_stats *stats;
@@ -876,13 +887,7 @@ bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget)

	netdev_tx_completed_queue(sq->txq, npkts, nbytes);

	if (netif_tx_queue_stopped(sq->txq) &&
	    mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, sq->stop_room) &&
	    mlx5e_ptpsq_fifo_has_room(sq) &&
	    !test_bit(MLX5E_SQ_STATE_RECOVERING, &sq->state)) {
		netif_tx_wake_queue(sq->txq);
		stats->wake++;
	}
	mlx5e_txqsq_wake(sq);

	return (i == MLX5E_TX_CQ_POLL_BUDGET);
}
@@ -161,20 +161,22 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget)
		}
	}

	/* budget=0 means we may be in IRQ context, do as little as possible */
	if (unlikely(!budget))
		goto out;

	busy |= mlx5e_poll_xdpsq_cq(&c->xdpsq.cq);

	if (c->xdp)
		busy |= mlx5e_poll_xdpsq_cq(&c->rq_xdpsq.cq);

	if (likely(budget)) { /* budget=0 means: don't poll rx rings */
		if (xsk_open)
			work_done = mlx5e_poll_rx_cq(&xskrq->cq, budget);
	if (xsk_open)
		work_done = mlx5e_poll_rx_cq(&xskrq->cq, budget);

		if (likely(budget - work_done))
			work_done += mlx5e_poll_rx_cq(&rq->cq, budget - work_done);
	if (likely(budget - work_done))
		work_done += mlx5e_poll_rx_cq(&rq->cq, budget - work_done);

		busy |= work_done == budget;
	}
	busy |= work_done == budget;

	mlx5e_poll_ico_cq(&c->icosq.cq);
	if (mlx5e_poll_ico_cq(&c->async_icosq.cq))
@@ -368,6 +368,8 @@ int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int num_vfs);
void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw, bool clear_vf);
void mlx5_eswitch_disable_locked(struct mlx5_eswitch *esw);
void mlx5_eswitch_disable(struct mlx5_eswitch *esw);
void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw);
void mlx5_esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw);
int mlx5_eswitch_set_vport_mac(struct mlx5_eswitch *esw,
			       u16 vport, const u8 *mac);
int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw,
@@ -757,6 +759,8 @@ static inline void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw) {}
static inline int mlx5_eswitch_enable(struct mlx5_eswitch *esw, int num_vfs) { return 0; }
static inline void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw, bool clear_vf) {}
static inline void mlx5_eswitch_disable(struct mlx5_eswitch *esw) {}
static inline void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw) {}
static inline void mlx5_esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw) {}
static inline bool mlx5_eswitch_is_funcs_handler(struct mlx5_core_dev *dev) { return false; }
static inline
int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw, u16 vport, int link_state) { return 0; }
@@ -2864,7 +2864,7 @@ static int mlx5_esw_offloads_devcom_event(int event,
	return err;
}

static void esw_offloads_devcom_init(struct mlx5_eswitch *esw)
void mlx5_esw_offloads_devcom_init(struct mlx5_eswitch *esw)
{
	struct mlx5_devcom *devcom = esw->dev->priv.devcom;
@@ -2887,7 +2887,7 @@ static void esw_offloads_devcom_init(struct mlx5_eswitch *esw)
				    ESW_OFFLOADS_DEVCOM_PAIR, esw);
}

static void esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw)
void mlx5_esw_offloads_devcom_cleanup(struct mlx5_eswitch *esw)
{
	struct mlx5_devcom *devcom = esw->dev->priv.devcom;
@@ -3357,8 +3357,6 @@ int esw_offloads_enable(struct mlx5_eswitch *esw)
	if (err)
		goto err_vports;

	esw_offloads_devcom_init(esw);

	return 0;

err_vports:
@@ -3399,7 +3397,6 @@ static int esw_offloads_stop(struct mlx5_eswitch *esw,

void esw_offloads_disable(struct mlx5_eswitch *esw)
{
	esw_offloads_devcom_cleanup(esw);
	mlx5_eswitch_disable_pf_vf_vports(esw);
	esw_offloads_unload_rep(esw, MLX5_VPORT_UPLINK);
	esw_set_passing_vport_metadata(esw, false);
@@ -3,6 +3,7 @@

#include <linux/mlx5/vport.h>
#include "lib/devcom.h"
#include "mlx5_core.h"

static LIST_HEAD(devcom_list);
@@ -13,7 +14,7 @@ static LIST_HEAD(devcom_list);

struct mlx5_devcom_component {
	struct {
		void *data;
		void __rcu *data;
	} device[MLX5_DEVCOM_PORTS_SUPPORTED];

	mlx5_devcom_event_handler_t handler;
@@ -77,6 +78,7 @@ struct mlx5_devcom *mlx5_devcom_register_device(struct mlx5_core_dev *dev)
	if (MLX5_CAP_GEN(dev, num_lag_ports) != MLX5_DEVCOM_PORTS_SUPPORTED)
		return NULL;

	mlx5_dev_list_lock();
	sguid0 = mlx5_query_nic_system_image_guid(dev);
	list_for_each_entry(iter, &devcom_list, list) {
		struct mlx5_core_dev *tmp_dev = NULL;
@@ -102,8 +104,10 @@ struct mlx5_devcom *mlx5_devcom_register_device(struct mlx5_core_dev *dev)

	if (!priv) {
		priv = mlx5_devcom_list_alloc();
		if (!priv)
			return ERR_PTR(-ENOMEM);
		if (!priv) {
			devcom = ERR_PTR(-ENOMEM);
			goto out;
		}

		idx = 0;
		new_priv = true;
@@ -112,13 +116,16 @@ struct mlx5_devcom *mlx5_devcom_register_device(struct mlx5_core_dev *dev)
	priv->devs[idx] = dev;
	devcom = mlx5_devcom_alloc(priv, idx);
	if (!devcom) {
		kfree(priv);
		return ERR_PTR(-ENOMEM);
		if (new_priv)
			kfree(priv);
		devcom = ERR_PTR(-ENOMEM);
		goto out;
	}

	if (new_priv)
		list_add(&priv->list, &devcom_list);

out:
	mlx5_dev_list_unlock();
	return devcom;
}
@@ -131,6 +138,7 @@ void mlx5_devcom_unregister_device(struct mlx5_devcom *devcom)
	if (IS_ERR_OR_NULL(devcom))
		return;

	mlx5_dev_list_lock();
	priv = devcom->priv;
	priv->devs[devcom->idx] = NULL;
@@ -141,10 +149,12 @@ void mlx5_devcom_unregister_device(struct mlx5_devcom *devcom)
			break;

	if (i != MLX5_DEVCOM_PORTS_SUPPORTED)
		return;
		goto out;

	list_del(&priv->list);
	kfree(priv);
out:
	mlx5_dev_list_unlock();
}

void mlx5_devcom_register_component(struct mlx5_devcom *devcom,
@@ -162,7 +172,7 @@ void mlx5_devcom_register_component(struct mlx5_devcom *devcom,
	comp = &devcom->priv->components[id];
	down_write(&comp->sem);
	comp->handler = handler;
	comp->device[devcom->idx].data = data;
	rcu_assign_pointer(comp->device[devcom->idx].data, data);
	up_write(&comp->sem);
}
@@ -176,8 +186,9 @@ void mlx5_devcom_unregister_component(struct mlx5_devcom *devcom,

	comp = &devcom->priv->components[id];
	down_write(&comp->sem);
	comp->device[devcom->idx].data = NULL;
	RCU_INIT_POINTER(comp->device[devcom->idx].data, NULL);
	up_write(&comp->sem);
	synchronize_rcu();
}

int mlx5_devcom_send_event(struct mlx5_devcom *devcom,
@@ -193,12 +204,15 @@ int mlx5_devcom_send_event(struct mlx5_devcom *devcom,

	comp = &devcom->priv->components[id];
	down_write(&comp->sem);
	for (i = 0; i < MLX5_DEVCOM_PORTS_SUPPORTED; i++)
		if (i != devcom->idx && comp->device[i].data) {
			err = comp->handler(event, comp->device[i].data,
					    event_data);
	for (i = 0; i < MLX5_DEVCOM_PORTS_SUPPORTED; i++) {
		void *data = rcu_dereference_protected(comp->device[i].data,
						       lockdep_is_held(&comp->sem));

		if (i != devcom->idx && data) {
			err = comp->handler(event, data, event_data);
			break;
		}
	}

	up_write(&comp->sem);
	return err;
@@ -213,7 +227,7 @@ void mlx5_devcom_set_paired(struct mlx5_devcom *devcom,
	comp = &devcom->priv->components[id];
	WARN_ON(!rwsem_is_locked(&comp->sem));

	comp->paired = paired;
	WRITE_ONCE(comp->paired, paired);
}

bool mlx5_devcom_is_paired(struct mlx5_devcom *devcom,
@@ -222,7 +236,7 @@ bool mlx5_devcom_is_paired(struct mlx5_devcom *devcom,
	if (IS_ERR_OR_NULL(devcom))
		return false;

	return devcom->priv->components[id].paired;
	return READ_ONCE(devcom->priv->components[id].paired);
}

void *mlx5_devcom_get_peer_data(struct mlx5_devcom *devcom,
@@ -236,7 +250,7 @@ void *mlx5_devcom_get_peer_data(struct mlx5_devcom *devcom,

	comp = &devcom->priv->components[id];
	down_read(&comp->sem);
	if (!comp->paired) {
	if (!READ_ONCE(comp->paired)) {
		up_read(&comp->sem);
		return NULL;
	}
@@ -245,7 +259,29 @@ void *mlx5_devcom_get_peer_data(struct mlx5_devcom *devcom,
		if (i != devcom->idx)
			break;

	return comp->device[i].data;
	return rcu_dereference_protected(comp->device[i].data, lockdep_is_held(&comp->sem));
}

void *mlx5_devcom_get_peer_data_rcu(struct mlx5_devcom *devcom, enum mlx5_devcom_components id)
{
	struct mlx5_devcom_component *comp;
	int i;

	if (IS_ERR_OR_NULL(devcom))
		return NULL;

	for (i = 0; i < MLX5_DEVCOM_PORTS_SUPPORTED; i++)
		if (i != devcom->idx)
			break;

	comp = &devcom->priv->components[id];
	/* This can change concurrently, however 'data' pointer will remain
	 * valid for the duration of RCU read section.
	 */
	if (!READ_ONCE(comp->paired))
		return NULL;

	return rcu_dereference(comp->device[i].data);
}

void mlx5_devcom_release_peer_data(struct mlx5_devcom *devcom,
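[Note: the devcom hunks above convert a plain pointer into an RCU-protected one so readers in softirq context need not take the semaphore. A generic sketch of the pattern, independent of mlx5 (all names hypothetical): writers publish or clear the pointer under the existing lock, lock-free readers dereference it inside an RCU read-side section.]

static void __rcu *peer_slot;	/* hypothetical RCU-protected slot */

static void publish(void *data)
{
	rcu_assign_pointer(peer_slot, data);	/* pairs with rcu_dereference() */
}

static void retract(void)
{
	RCU_INIT_POINTER(peer_slot, NULL);
	synchronize_rcu();	/* wait until no reader can still see the old data */
}

static void reader(void)
{
	void *data;

	rcu_read_lock();
	data = rcu_dereference(peer_slot);
	if (data)
		/* use data; it stays valid until rcu_read_unlock() */;
	rcu_read_unlock();
}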
@@ -41,6 +41,7 @@ bool mlx5_devcom_is_paired(struct mlx5_devcom *devcom,

void *mlx5_devcom_get_peer_data(struct mlx5_devcom *devcom,
				enum mlx5_devcom_components id);
void *mlx5_devcom_get_peer_data_rcu(struct mlx5_devcom *devcom, enum mlx5_devcom_components id);
void mlx5_devcom_release_peer_data(struct mlx5_devcom *devcom,
				   enum mlx5_devcom_components id);
@@ -1024,7 +1024,7 @@ static int mlx5_init_once(struct mlx5_core_dev *dev)

	dev->dm = mlx5_dm_create(dev);
	if (IS_ERR(dev->dm))
		mlx5_core_warn(dev, "Failed to init device memory%d\n", err);
		mlx5_core_warn(dev, "Failed to init device memory %ld\n", PTR_ERR(dev->dm));

	dev->tracer = mlx5_fw_tracer_create(dev);
	dev->hv_vhca = mlx5_hv_vhca_create(dev);
@@ -117,6 +117,8 @@ int mlx5dr_cmd_query_device(struct mlx5_core_dev *mdev,
	caps->gvmi = MLX5_CAP_GEN(mdev, vhca_id);
	caps->flex_protocols = MLX5_CAP_GEN(mdev, flex_parser_protocols);
	caps->sw_format_ver = MLX5_CAP_GEN(mdev, steering_format_version);
	caps->roce_caps.fl_rc_qp_when_roce_disabled =
		MLX5_CAP_GEN(mdev, fl_rc_qp_when_roce_disabled);

	if (MLX5_CAP_GEN(mdev, roce)) {
		err = dr_cmd_query_nic_vport_roce_en(mdev, 0, &roce_en);
@@ -124,7 +126,7 @@ int mlx5dr_cmd_query_device(struct mlx5_core_dev *mdev,
			return err;

		caps->roce_caps.roce_en = roce_en;
		caps->roce_caps.fl_rc_qp_when_roce_disabled =
		caps->roce_caps.fl_rc_qp_when_roce_disabled |=
			MLX5_CAP_ROCE(mdev, fl_rc_qp_when_roce_disabled);
		caps->roce_caps.fl_rc_qp_when_roce_enabled =
			MLX5_CAP_ROCE(mdev, fl_rc_qp_when_roce_enabled);
@@ -15,7 +15,8 @@ static u32 dr_ste_crc32_calc(const void *input_data, size_t length)
{
	u32 crc = crc32(0, input_data, length);

	return (__force u32)htonl(crc);
	return (__force u32)((crc >> 24) & 0xff) | ((crc << 8) & 0xff0000) |
			    ((crc >> 8) & 0xff00) | ((crc << 24) & 0xff000000);
}

bool mlx5dr_ste_supp_ttl_cs_recalc(struct mlx5dr_cmd_caps *caps)
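[Note: the one-line change above matters because htonl() is a no-op on big-endian hosts, so the old code produced different hash bytes on BE and LE CPUs. The replacement swap is unconditional, so, for example, 0x11223344 becomes 0x44332211 on every host. A hedged restatement of the fix as a standalone helper:]

/* Unconditional byte swap, identical result on LE and BE (sketch of the fix) */
static u32 crc32_swap_bytes(u32 crc)
{
	return ((crc >> 24) & 0xff) | ((crc << 8) & 0xff0000) |
	       ((crc >> 8) & 0xff00) | ((crc << 24) & 0xff000000);
}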
@@ -987,6 +987,16 @@ static int lan966x_reset_switch(struct lan966x *lan966x)

	reset_control_reset(switch_reset);

	/* Don't reinitialize the switch core, if it is already initialized. In
	 * case it is initialized twice, some pointers inside the queue system
	 * in HW will get corrupted and then after a while the queue system gets
	 * full and no traffic is passing through the switch. The issue is seen
	 * when loading and unloading the driver and sending traffic through the
	 * switch.
	 */
	if (lan_rd(lan966x, SYS_RESET_CFG) & SYS_RESET_CFG_CORE_ENA)
		return 0;

	lan_wr(SYS_RESET_CFG_CORE_ENA_SET(0), lan966x, SYS_RESET_CFG);
	lan_wr(SYS_RAM_INIT_RAM_INIT_SET(1), lan966x, SYS_RAM_INIT);
	ret = readx_poll_timeout(lan966x_ram_init, lan966x,
@@ -6138,6 +6138,7 @@ static int nv_probe(struct pci_dev *pci_dev, const struct pci_device_id *id)
	return 0;

out_error:
	nv_mgmt_release_sema(dev);
	if (phystate_orig)
		writel(phystate|NVREG_ADAPTCTL_RUNNING, base + NvRegAdapterControl);
out_freering:
@@ -2664,6 +2664,7 @@ static struct phy_driver vsc85xx_driver[] = {
module_phy_driver(vsc85xx_driver);

static struct mdio_device_id __maybe_unused vsc85xx_tbl[] = {
	{ PHY_ID_VSC8502, 0xfffffff0, },
	{ PHY_ID_VSC8504, 0xfffffff0, },
	{ PHY_ID_VSC8514, 0xfffffff0, },
	{ PHY_ID_VSC8530, 0xfffffff0, },
@@ -1629,6 +1629,7 @@ static int team_init(struct net_device *dev)

	team->dev = dev;
	team_set_no_mode(team);
	team->notifier_ctx = false;

	team->pcpu_stats = netdev_alloc_pcpu_stats(struct team_pcpu_stats);
	if (!team->pcpu_stats)
@@ -3022,7 +3023,11 @@ static int team_device_event(struct notifier_block *unused,
		team_del_slave(port->team->dev, dev);
		break;
	case NETDEV_FEAT_CHANGE:
		team_compute_features(port->team);
		if (!port->team->notifier_ctx) {
			port->team->notifier_ctx = true;
			team_compute_features(port->team);
			port->team->notifier_ctx = false;
		}
		break;
	case NETDEV_PRECHANGEMTU:
		/* Forbid to change mtu of underlaying device */
@@ -1348,9 +1348,8 @@ static int mlxbf_pmc_map_counters(struct device *dev)

	for (i = 0; i < pmc->total_blocks; ++i) {
		if (strstr(pmc->block_name[i], "tile")) {
			ret = sscanf(pmc->block_name[i], "tile%d", &tile_num);
			if (ret < 0)
				return ret;
			if (sscanf(pmc->block_name[i], "tile%d", &tile_num) != 1)
				return -EINVAL;

			if (tile_num >= pmc->tile_count)
				continue;
@@ -552,7 +552,7 @@ static int __init hp_wmi_enable_hotkeys(void)

static int hp_wmi_set_block(void *data, bool blocked)
{
	enum hp_wmi_radio r = (enum hp_wmi_radio) data;
	enum hp_wmi_radio r = (long)data;
	int query = BIT(r + 8) | ((!blocked) << r);
	int ret;

@@ -154,7 +154,7 @@ static int scan_chunks_sanity_check(struct device *dev)
			continue;
		reinit_completion(&ifs_done);
		local_work.dev = dev;
		INIT_WORK(&local_work.w, copy_hashes_authenticate_chunks);
		INIT_WORK_ONSTACK(&local_work.w, copy_hashes_authenticate_chunks);
		schedule_work_on(cpu, &local_work.w);
		wait_for_completion(&ifs_done);
		if (ifsd->loading_error)
@@ -294,14 +294,13 @@ struct isst_if_pkg_info {
static struct isst_if_cpu_info *isst_cpu_info;
static struct isst_if_pkg_info *isst_pkg_info;

#define ISST_MAX_PCI_DOMAINS 8

static struct pci_dev *_isst_if_get_pci_dev(int cpu, int bus_no, int dev, int fn)
{
	struct pci_dev *matched_pci_dev = NULL;
	struct pci_dev *pci_dev = NULL;
	struct pci_dev *_pci_dev = NULL;
	int no_matches = 0, pkg_id;
	int i, bus_number;
	int bus_number;

	if (bus_no < 0 || bus_no >= ISST_MAX_BUS_NUMBER || cpu < 0 ||
	    cpu >= nr_cpu_ids || cpu >= num_possible_cpus())
@@ -313,12 +312,11 @@ static struct pci_dev *_isst_if_get_pci_dev(int cpu, int bus_no, int dev, int fn
	if (bus_number < 0)
		return NULL;

	for (i = 0; i < ISST_MAX_PCI_DOMAINS; ++i) {
		struct pci_dev *_pci_dev;
	for_each_pci_dev(_pci_dev) {
		int node;

		_pci_dev = pci_get_domain_bus_and_slot(i, bus_number, PCI_DEVFN(dev, fn));
		if (!_pci_dev)
		if (_pci_dev->bus->number != bus_number ||
		    _pci_dev->devfn != PCI_DEVFN(dev, fn))
			continue;

		++no_matches;
@@ -507,7 +507,7 @@ static void fuel_gauge_external_power_changed(struct power_supply *psy)
	mutex_lock(&info->lock);
	info->valid = 0; /* Force updating of the cached registers */
	mutex_unlock(&info->lock);
	power_supply_changed(info->bat);
	power_supply_changed(psy);
}

static struct power_supply_desc fuel_gauge_desc = {
@@ -1262,6 +1262,7 @@ static void bq24190_input_current_limit_work(struct work_struct *work)
	bq24190_charger_set_property(bdi->charger,
				     POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT,
				     &val);
	power_supply_changed(bdi->charger);
}

/* Sync the input-current-limit with our parent supply (if we have one) */
@@ -650,7 +650,7 @@ static void bq25890_charger_external_power_changed(struct power_supply *psy)
	if (bq->chip_version != BQ25892)
		return;

	ret = power_supply_get_property_from_supplier(bq->charger,
	ret = power_supply_get_property_from_supplier(psy,
						      POWER_SUPPLY_PROP_USB_TYPE,
						      &val);
	if (ret)
@@ -675,6 +675,7 @@ static void bq25890_charger_external_power_changed(struct power_supply *psy)
	}

	bq25890_field_write(bq, F_IINLIM, input_current_limit);
	power_supply_changed(psy);
}

static int bq25890_get_chip_state(struct bq25890_device *bq,
@@ -973,6 +974,8 @@ static void bq25890_pump_express_work(struct work_struct *data)
	dev_info(bq->dev, "Hi-voltage charging requested, input voltage is %d mV\n",
		 voltage);

	power_supply_changed(bq->charger);

	return;
error_print:
	bq25890_field_write(bq, F_PUMPX_EN, 0);
@@ -1761,60 +1761,6 @@ static int bq27xxx_battery_read_health(struct bq27xxx_device_info *di)
	return POWER_SUPPLY_HEALTH_GOOD;
}

void bq27xxx_battery_update(struct bq27xxx_device_info *di)
{
	struct bq27xxx_reg_cache cache = {0, };
	bool has_singe_flag = di->opts & BQ27XXX_O_ZERO;

	cache.flags = bq27xxx_read(di, BQ27XXX_REG_FLAGS, has_singe_flag);
	if ((cache.flags & 0xff) == 0xff)
		cache.flags = -1; /* read error */
	if (cache.flags >= 0) {
		cache.temperature = bq27xxx_battery_read_temperature(di);
		if (di->regs[BQ27XXX_REG_TTE] != INVALID_REG_ADDR)
			cache.time_to_empty = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTE);
		if (di->regs[BQ27XXX_REG_TTECP] != INVALID_REG_ADDR)
			cache.time_to_empty_avg = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTECP);
		if (di->regs[BQ27XXX_REG_TTF] != INVALID_REG_ADDR)
			cache.time_to_full = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTF);

		cache.charge_full = bq27xxx_battery_read_fcc(di);
		cache.capacity = bq27xxx_battery_read_soc(di);
		if (di->regs[BQ27XXX_REG_AE] != INVALID_REG_ADDR)
			cache.energy = bq27xxx_battery_read_energy(di);
		di->cache.flags = cache.flags;
		cache.health = bq27xxx_battery_read_health(di);
		if (di->regs[BQ27XXX_REG_CYCT] != INVALID_REG_ADDR)
			cache.cycle_count = bq27xxx_battery_read_cyct(di);

		/* We only have to read charge design full once */
		if (di->charge_design_full <= 0)
			di->charge_design_full = bq27xxx_battery_read_dcap(di);
	}

	if ((di->cache.capacity != cache.capacity) ||
	    (di->cache.flags != cache.flags))
		power_supply_changed(di->bat);

	if (memcmp(&di->cache, &cache, sizeof(cache)) != 0)
		di->cache = cache;

	di->last_update = jiffies;
}
EXPORT_SYMBOL_GPL(bq27xxx_battery_update);

static void bq27xxx_battery_poll(struct work_struct *work)
{
	struct bq27xxx_device_info *di =
			container_of(work, struct bq27xxx_device_info,
				     work.work);

	bq27xxx_battery_update(di);

	if (poll_interval > 0)
		schedule_delayed_work(&di->work, poll_interval * HZ);
}

static bool bq27xxx_battery_is_full(struct bq27xxx_device_info *di, int flags)
{
	if (di->opts & BQ27XXX_O_ZERO)
@@ -1833,7 +1779,8 @@ static bool bq27xxx_battery_is_full(struct bq27xxx_device_info *di, int flags)
static int bq27xxx_battery_current_and_status(
	struct bq27xxx_device_info *di,
	union power_supply_propval *val_curr,
	union power_supply_propval *val_status)
	union power_supply_propval *val_status,
	struct bq27xxx_reg_cache *cache)
{
	bool single_flags = (di->opts & BQ27XXX_O_ZERO);
	int curr;
@@ -1845,10 +1792,14 @@ static int bq27xxx_battery_current_and_status(
		return curr;
	}

	flags = bq27xxx_read(di, BQ27XXX_REG_FLAGS, single_flags);
	if (flags < 0) {
		dev_err(di->dev, "error reading flags\n");
		return flags;
	if (cache) {
		flags = cache->flags;
	} else {
		flags = bq27xxx_read(di, BQ27XXX_REG_FLAGS, single_flags);
		if (flags < 0) {
			dev_err(di->dev, "error reading flags\n");
			return flags;
		}
	}

	if (di->opts & BQ27XXX_O_ZERO) {
@@ -1883,6 +1834,78 @@ static int bq27xxx_battery_current_and_status(
	return 0;
}

static void bq27xxx_battery_update_unlocked(struct bq27xxx_device_info *di)
{
	union power_supply_propval status = di->last_status;
	struct bq27xxx_reg_cache cache = {0, };
	bool has_singe_flag = di->opts & BQ27XXX_O_ZERO;

	cache.flags = bq27xxx_read(di, BQ27XXX_REG_FLAGS, has_singe_flag);
	if ((cache.flags & 0xff) == 0xff)
		cache.flags = -1; /* read error */
	if (cache.flags >= 0) {
		cache.temperature = bq27xxx_battery_read_temperature(di);
		if (di->regs[BQ27XXX_REG_TTE] != INVALID_REG_ADDR)
			cache.time_to_empty = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTE);
		if (di->regs[BQ27XXX_REG_TTECP] != INVALID_REG_ADDR)
			cache.time_to_empty_avg = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTECP);
		if (di->regs[BQ27XXX_REG_TTF] != INVALID_REG_ADDR)
			cache.time_to_full = bq27xxx_battery_read_time(di, BQ27XXX_REG_TTF);

		cache.charge_full = bq27xxx_battery_read_fcc(di);
		cache.capacity = bq27xxx_battery_read_soc(di);
		if (di->regs[BQ27XXX_REG_AE] != INVALID_REG_ADDR)
			cache.energy = bq27xxx_battery_read_energy(di);
		di->cache.flags = cache.flags;
		cache.health = bq27xxx_battery_read_health(di);
		if (di->regs[BQ27XXX_REG_CYCT] != INVALID_REG_ADDR)
			cache.cycle_count = bq27xxx_battery_read_cyct(di);

		/*
		 * On gauges with signed current reporting the current must be
		 * checked to detect charging <-> discharging status changes.
		 */
		if (!(di->opts & BQ27XXX_O_ZERO))
			bq27xxx_battery_current_and_status(di, NULL, &status, &cache);

		/* We only have to read charge design full once */
		if (di->charge_design_full <= 0)
			di->charge_design_full = bq27xxx_battery_read_dcap(di);
	}

	if ((di->cache.capacity != cache.capacity) ||
	    (di->cache.flags != cache.flags) ||
	    (di->last_status.intval != status.intval)) {
		di->last_status.intval = status.intval;
		power_supply_changed(di->bat);
	}

	if (memcmp(&di->cache, &cache, sizeof(cache)) != 0)
		di->cache = cache;

	di->last_update = jiffies;

	if (!di->removed && poll_interval > 0)
		mod_delayed_work(system_wq, &di->work, poll_interval * HZ);
}

void bq27xxx_battery_update(struct bq27xxx_device_info *di)
{
	mutex_lock(&di->lock);
	bq27xxx_battery_update_unlocked(di);
	mutex_unlock(&di->lock);
}
EXPORT_SYMBOL_GPL(bq27xxx_battery_update);

static void bq27xxx_battery_poll(struct work_struct *work)
{
	struct bq27xxx_device_info *di =
			container_of(work, struct bq27xxx_device_info,
				     work.work);

	bq27xxx_battery_update(di);
}

/*
 * Get the average power in µW
 * Return < 0 if something fails.
@@ -1985,10 +2008,8 @@ static int bq27xxx_battery_get_property(struct power_supply *psy,
	struct bq27xxx_device_info *di = power_supply_get_drvdata(psy);

	mutex_lock(&di->lock);
	if (time_is_before_jiffies(di->last_update + 5 * HZ)) {
		cancel_delayed_work_sync(&di->work);
		bq27xxx_battery_poll(&di->work.work);
	}
	if (time_is_before_jiffies(di->last_update + 5 * HZ))
		bq27xxx_battery_update_unlocked(di);
	mutex_unlock(&di->lock);

	if (psp != POWER_SUPPLY_PROP_PRESENT && di->cache.flags < 0)
@@ -1996,7 +2017,7 @@

	switch (psp) {
	case POWER_SUPPLY_PROP_STATUS:
		ret = bq27xxx_battery_current_and_status(di, NULL, val);
		ret = bq27xxx_battery_current_and_status(di, NULL, val, NULL);
		break;
	case POWER_SUPPLY_PROP_VOLTAGE_NOW:
		ret = bq27xxx_battery_voltage(di, val);
@@ -2005,7 +2026,7 @@
		val->intval = di->cache.flags < 0 ? 0 : 1;
		break;
	case POWER_SUPPLY_PROP_CURRENT_NOW:
		ret = bq27xxx_battery_current_and_status(di, val, NULL);
		ret = bq27xxx_battery_current_and_status(di, val, NULL, NULL);
		break;
	case POWER_SUPPLY_PROP_CAPACITY:
		ret = bq27xxx_simple_value(di->cache.capacity, val);
@@ -2078,8 +2099,8 @@ static void bq27xxx_external_power_changed(struct power_supply *psy)
{
	struct bq27xxx_device_info *di = power_supply_get_drvdata(psy);

	cancel_delayed_work_sync(&di->work);
	schedule_delayed_work(&di->work, 0);
	/* After charger plug in/out wait 0.5s for things to stabilize */
	mod_delayed_work(system_wq, &di->work, HZ / 2);
}

int bq27xxx_battery_setup(struct bq27xxx_device_info *di)
@@ -2127,22 +2148,18 @@ EXPORT_SYMBOL_GPL(bq27xxx_battery_setup);

void bq27xxx_battery_teardown(struct bq27xxx_device_info *di)
{
	/*
	 * power_supply_unregister call bq27xxx_battery_get_property which
	 * call bq27xxx_battery_poll.
	 * Make sure that bq27xxx_battery_poll will not call
	 * schedule_delayed_work again after unregister (which cause OOPS).
	 */
	poll_interval = 0;

	cancel_delayed_work_sync(&di->work);

	power_supply_unregister(di->bat);

	mutex_lock(&bq27xxx_list_lock);
	list_del(&di->list);
	mutex_unlock(&bq27xxx_list_lock);

	/* Set removed to avoid bq27xxx_battery_update() re-queuing the work */
	mutex_lock(&di->lock);
	di->removed = true;
	mutex_unlock(&di->lock);

	cancel_delayed_work_sync(&di->work);

	power_supply_unregister(di->bat);
	mutex_destroy(&di->lock);
}
EXPORT_SYMBOL_GPL(bq27xxx_battery_teardown);
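[Note: the teardown reordering above is the standard fix for a self-rearming delayed work racing with device removal. A condensed sketch of the required ordering, with generic comments:]

	/* 1. make the work's re-queue path a no-op, under the same lock it checks */
	mutex_lock(&di->lock);
	di->removed = true;
	mutex_unlock(&di->lock);

	/* 2. only then flush: a run that began before step 1 completes but cannot rearm */
	cancel_delayed_work_sync(&di->work);

	/* 3. now it is safe to unregister and destroy the lock */
	power_supply_unregister(di->bat);
	mutex_destroy(&di->lock);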
@@ -179,7 +179,7 @@ static int bq27xxx_battery_i2c_probe(struct i2c_client *client,
	i2c_set_clientdata(client, di);

	if (client->irq) {
		ret = devm_request_threaded_irq(&client->dev, client->irq,
		ret = request_threaded_irq(client->irq,
				NULL, bq27xxx_battery_irq_handler_thread,
				IRQF_ONESHOT,
				di->name, di);
@@ -209,6 +209,7 @@ static void bq27xxx_battery_i2c_remove(struct i2c_client *client)
{
	struct bq27xxx_device_info *di = i2c_get_clientdata(client);

	free_irq(client->irq, di);
	bq27xxx_battery_teardown(di);

	mutex_lock(&battery_mutex);
@@ -799,7 +799,9 @@ static int mt6360_charger_probe(struct platform_device *pdev)
	mci->vinovp = 6500000;
	mutex_init(&mci->chgdet_lock);
	platform_set_drvdata(pdev, mci);
	devm_work_autocancel(&pdev->dev, &mci->chrdet_work, mt6360_chrdet_work);
	ret = devm_work_autocancel(&pdev->dev, &mci->chrdet_work, mt6360_chrdet_work);
	if (ret)
		return dev_err_probe(&pdev->dev, ret, "Failed to set delayed work\n");

	ret = device_property_read_u32(&pdev->dev, "richtek,vinovp-microvolt", &mci->vinovp);
	if (ret)
@@ -34,8 +34,9 @@ static void power_supply_update_bat_leds(struct power_supply *psy)
		led_trigger_event(psy->charging_full_trig, LED_FULL);
		led_trigger_event(psy->charging_trig, LED_OFF);
		led_trigger_event(psy->full_trig, LED_FULL);
		led_trigger_event(psy->charging_blink_full_solid_trig,
				  LED_FULL);
		/* Going from blink to LED on requires a LED_OFF event to stop blink */
		led_trigger_event(psy->charging_blink_full_solid_trig, LED_OFF);
		led_trigger_event(psy->charging_blink_full_solid_trig, LED_FULL);
		break;
	case POWER_SUPPLY_STATUS_CHARGING:
		led_trigger_event(psy->charging_full_trig, LED_FULL);
@@ -24,7 +24,7 @@
#define SBS_CHARGER_REG_STATUS			0x13
#define SBS_CHARGER_REG_ALARM_WARNING		0x16

#define SBS_CHARGER_STATUS_CHARGE_INHIBITED	BIT(1)
#define SBS_CHARGER_STATUS_CHARGE_INHIBITED	BIT(0)
#define SBS_CHARGER_STATUS_RES_COLD		BIT(9)
#define SBS_CHARGER_STATUS_RES_HOT		BIT(10)
#define SBS_CHARGER_STATUS_BATTERY_PRESENT	BIT(14)
@@ -951,9 +951,12 @@ static int mt6359_regulator_probe(struct platform_device *pdev)
	struct regulator_config config = {};
	struct regulator_dev *rdev;
	struct mt6359_regulator_info *mt6359_info;
	int i, hw_ver;
	int i, hw_ver, ret;

	ret = regmap_read(mt6397->regmap, MT6359P_HWCID, &hw_ver);
	if (ret)
		return ret;

	regmap_read(mt6397->regmap, MT6359P_HWCID, &hw_ver);
	if (hw_ver >= MT6359P_CHIP_VER)
		mt6359_info = mt6359p_regulators;
	else
@@ -264,7 +264,7 @@ static const struct pca9450_regulator_desc pca9450a_regulators[] = {
	.vsel_reg = PCA9450_REG_BUCK2OUT_DVS0,
	.vsel_mask = BUCK2OUT_DVS0_MASK,
	.enable_reg = PCA9450_REG_BUCK2CTRL,
	.enable_mask = BUCK1_ENMODE_MASK,
	.enable_mask = BUCK2_ENMODE_MASK,
	.ramp_reg = PCA9450_REG_BUCK2CTRL,
	.ramp_mask = BUCK2_RAMP_MASK,
	.ramp_delay_table = pca9450_dvs_buck_ramp_table,
@@ -502,7 +502,7 @@ static const struct pca9450_regulator_desc pca9450bc_regulators[] = {
	.vsel_reg = PCA9450_REG_BUCK2OUT_DVS0,
	.vsel_mask = BUCK2OUT_DVS0_MASK,
	.enable_reg = PCA9450_REG_BUCK2CTRL,
	.enable_mask = BUCK1_ENMODE_MASK,
	.enable_mask = BUCK2_ENMODE_MASK,
	.ramp_reg = PCA9450_REG_BUCK2CTRL,
	.ramp_mask = BUCK2_RAMP_MASK,
	.ramp_delay_table = pca9450_dvs_buck_ramp_table,
@@ -984,8 +984,10 @@ static u32 get_async_notif_value(optee_invoke_fn *invoke_fn, bool *value_valid,

	invoke_fn(OPTEE_SMC_GET_ASYNC_NOTIF_VALUE, 0, 0, 0, 0, 0, 0, 0, &res);

	if (res.a0)
	if (res.a0) {
		*value_valid = false;
		return 0;
	}
	*value_valid = (res.a2 & OPTEE_SMC_ASYNC_NOTIF_VALUE_VALID);
	*value_pending = (res.a2 & OPTEE_SMC_ASYNC_NOTIF_VALUE_PENDING);
	return res.a1;
@@ -206,6 +206,82 @@ int usb_find_common_endpoints_reverse(struct usb_host_interface *alt,
}
EXPORT_SYMBOL_GPL(usb_find_common_endpoints_reverse);

/**
 * usb_find_endpoint() - Given an endpoint address, search for the endpoint's
 * usb_host_endpoint structure in an interface's current altsetting.
 * @intf: the interface whose current altsetting should be searched
 * @ep_addr: the endpoint address (number and direction) to find
 *
 * Search the altsetting's list of endpoints for one with the specified address.
 *
 * Return: Pointer to the usb_host_endpoint if found, %NULL otherwise.
 */
static const struct usb_host_endpoint *usb_find_endpoint(
		const struct usb_interface *intf, unsigned int ep_addr)
{
	int n;
	const struct usb_host_endpoint *ep;

	n = intf->cur_altsetting->desc.bNumEndpoints;
	ep = intf->cur_altsetting->endpoint;
	for (; n > 0; (--n, ++ep)) {
		if (ep->desc.bEndpointAddress == ep_addr)
			return ep;
	}
	return NULL;
}

/**
 * usb_check_bulk_endpoints - Check whether an interface's current altsetting
 * contains a set of bulk endpoints with the given addresses.
 * @intf: the interface whose current altsetting should be searched
 * @ep_addrs: 0-terminated array of the endpoint addresses (number and
 * direction) to look for
 *
 * Search for endpoints with the specified addresses and check their types.
 *
 * Return: %true if all the endpoints are found and are bulk, %false otherwise.
 */
bool usb_check_bulk_endpoints(
		const struct usb_interface *intf, const u8 *ep_addrs)
{
	const struct usb_host_endpoint *ep;

	for (; *ep_addrs; ++ep_addrs) {
		ep = usb_find_endpoint(intf, *ep_addrs);
		if (!ep || !usb_endpoint_xfer_bulk(&ep->desc))
			return false;
	}
	return true;
}
EXPORT_SYMBOL_GPL(usb_check_bulk_endpoints);

/**
 * usb_check_int_endpoints - Check whether an interface's current altsetting
 * contains a set of interrupt endpoints with the given addresses.
 * @intf: the interface whose current altsetting should be searched
 * @ep_addrs: 0-terminated array of the endpoint addresses (number and
 * direction) to look for
 *
 * Search for endpoints with the specified addresses and check their types.
 *
 * Return: %true if all the endpoints are found and are interrupt,
 * %false otherwise.
 */
bool usb_check_int_endpoints(
		const struct usb_interface *intf, const u8 *ep_addrs)
{
	const struct usb_host_endpoint *ep;

	for (; *ep_addrs; ++ep_addrs) {
		ep = usb_find_endpoint(intf, *ep_addrs);
		if (!ep || !usb_endpoint_xfer_int(&ep->desc))
			return false;
	}
	return true;
}
EXPORT_SYMBOL_GPL(usb_check_int_endpoints);

/**
 * usb_find_alt_setting() - Given a configuration, find the alternate setting
 * for the given interface.
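[Note: a hedged usage sketch for the new helpers, with a hypothetical driver and endpoint layout; the radioSHARK, sisusbvga and udlfb hunks in this merge are the real in-tree users. Each array entry combines the endpoint number with USB_DIR_IN/USB_DIR_OUT, and the array is 0-terminated.]

static int example_probe(struct usb_interface *intf,
			 const struct usb_device_id *id)
{
	static const u8 ep_addrs[] = {
		0x01 | USB_DIR_OUT,	/* bulk OUT expected on endpoint 1 */
		0x02 | USB_DIR_IN,	/* bulk IN expected on endpoint 2 */
		0			/* terminator */
	};

	/* reject a malformed or malicious device before any URB is built */
	if (!usb_check_bulk_endpoints(intf, ep_addrs))
		return -ENODEV;

	return 0;	/* normal initialization would continue here */
}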
@@ -4081,7 +4081,6 @@ static void dwc3_gadget_conndone_interrupt(struct dwc3 *dwc)

static void dwc3_gadget_wakeup_interrupt(struct dwc3 *dwc)
{

	dwc->suspended = false;

	/*
@@ -3014,6 +3014,20 @@ static int sisusb_probe(struct usb_interface *intf,
	struct usb_device *dev = interface_to_usbdev(intf);
	struct sisusb_usb_data *sisusb;
	int retval = 0, i;
	static const u8 ep_addresses[] = {
		SISUSB_EP_GFX_IN | USB_DIR_IN,
		SISUSB_EP_GFX_OUT | USB_DIR_OUT,
		SISUSB_EP_GFX_BULK_OUT | USB_DIR_OUT,
		SISUSB_EP_GFX_LBULK_OUT | USB_DIR_OUT,
		SISUSB_EP_BRIDGE_IN | USB_DIR_IN,
		SISUSB_EP_BRIDGE_OUT | USB_DIR_OUT,
		0};

	/* Are the expected endpoints present? */
	if (!usb_check_bulk_endpoints(intf, ep_addresses)) {
		dev_err(&intf->dev, "Invalid USB2VGA device\n");
		return -EINVAL;
	}

	dev_info(&dev->dev, "USB2VGA dongle found at address %d\n",
		 dev->devnum);
@@ -27,6 +27,8 @@
 #include <video/udlfb.h>
 #include "edid.h"

+#define OUT_EP_NUM	1	/* The endpoint number we will use */
+
 static const struct fb_fix_screeninfo dlfb_fix = {
 	.id =           "udlfb",
 	.type =         FB_TYPE_PACKED_PIXELS,
@@ -1652,7 +1654,7 @@ static int dlfb_usb_probe(struct usb_interface *intf,
 	struct fb_info *info;
 	int retval;
 	struct usb_device *usbdev = interface_to_usbdev(intf);
-	struct usb_endpoint_descriptor *out;
+	static u8 out_ep[] = {OUT_EP_NUM + USB_DIR_OUT, 0};

 	/* usb initialization */
 	dlfb = kzalloc(sizeof(*dlfb), GFP_KERNEL);
@@ -1666,9 +1668,9 @@ static int dlfb_usb_probe(struct usb_interface *intf,
 	dlfb->udev = usb_get_dev(usbdev);
 	usb_set_intfdata(intf, dlfb);

-	retval = usb_find_common_endpoints(intf->cur_altsetting, NULL, &out, NULL, NULL);
-	if (retval) {
-		dev_err(&intf->dev, "Device should have at lease 1 bulk endpoint!\n");
+	if (!usb_check_bulk_endpoints(intf, out_ep)) {
+		dev_err(&intf->dev, "Invalid DisplayLink device!\n");
+		retval = -EINVAL;
 		goto error;
 	}

@@ -1927,7 +1929,8 @@ static int dlfb_alloc_urb_list(struct dlfb_data *dlfb, int count, size_t size)
 		}

 		/* urb->transfer_buffer_length set to actual before submit */
-		usb_fill_bulk_urb(urb, dlfb->udev, usb_sndbulkpipe(dlfb->udev, 1),
+		usb_fill_bulk_urb(urb, dlfb->udev,
+				  usb_sndbulkpipe(dlfb->udev, OUT_EP_NUM),
 				  buf, size, dlfb_urb_completion, unode);
 		urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
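Note that OUT_EP_NUM + USB_DIR_OUT only works because USB_DIR_OUT is 0; the conventional spelling is a bitwise OR, as in the sisusbvga table above. The address placed in out_ep is the same endpoint later handed to usb_sndbulkpipe(), so the probe-time check covers the one pipe the driver opens. A standalone illustration of how an endpoint address splits into number and direction — the function below is an example, not code from the patch:

	#include <linux/usb.h>

	/* Example: decompose an endpoint address and build the matching pipe. */
	static void show_ep_addr(struct usb_device *udev)
	{
		u8 addr = 1 | USB_DIR_OUT;	/* == 0x01, udlfb's OUT endpoint */
		unsigned int pipe = usb_sndbulkpipe(udev,
					addr & USB_ENDPOINT_NUMBER_MASK);

		dev_dbg(&udev->dev, "ep %u %s, pipe %#x\n",
			addr & USB_ENDPOINT_NUMBER_MASK,
			(addr & USB_DIR_IN) ? "IN" : "OUT", pipe);
	}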
@@ -115,6 +115,10 @@ static int tco_timer_start(struct watchdog_device *wdd)
 	val |= SP5100_WDT_START_STOP_BIT;
 	writel(val, SP5100_WDT_CONTROL(tco->tcobase));

+	/* This must be a distinct write. */
+	val |= SP5100_WDT_TRIGGER_BIT;
+	writel(val, SP5100_WDT_CONTROL(tco->tcobase));
+
 	return 0;
 }

@@ -321,8 +321,10 @@ static struct sock_mapping *pvcalls_new_active_socket(
 	void *page;

 	map = kzalloc(sizeof(*map), GFP_KERNEL);
-	if (map == NULL)
+	if (map == NULL) {
+		sock_release(sock);
 		return NULL;
+	}

 	map->fedata = fedata;
 	map->sock = sock;
@@ -414,10 +416,8 @@ static int pvcalls_back_connect(struct xenbus_device *dev,
 					req->u.connect.ref,
 					req->u.connect.evtchn,
 					sock);
-	if (!map) {
+	if (!map)
 		ret = -EFAULT;
-		sock_release(sock);
-	}

 out:
 	rsp = RING_GET_RESPONSE(&fedata->ring, fedata->ring.rsp_prod_pvt++);
@@ -557,7 +557,6 @@ static void __pvcalls_back_accept(struct work_struct *work)
 					sock);
 	if (!map) {
 		ret = -EFAULT;
-		sock_release(sock);
 		goto out_error;
 	}

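The double free arose because pvcalls_new_active_socket() could fail without releasing the socket while its callers also released it on their own error paths. The fix settles on a single ownership rule: on any failure the callee releases the socket, and callers only record an error. A generic sketch of that convention — all names here are hypothetical, not from the patch:

	#include <linux/net.h>
	#include <linux/slab.h>

	struct conn { struct socket *sock; };

	/* On failure the callee consumes (releases) the socket, so no caller
	 * may release it again: the socket has exactly one owner at a time. */
	static struct conn *conn_create(struct socket *sock)
	{
		struct conn *c = kzalloc(sizeof(*c), GFP_KERNEL);

		if (!c) {
			sock_release(sock);
			return NULL;
		}
		c->sock = sock;
		return c;
	}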
@@ -5035,7 +5035,11 @@ static void btrfs_destroy_delalloc_inodes(struct btrfs_root *root)
 		 */
 		inode = igrab(&btrfs_inode->vfs_inode);
 		if (inode) {
+			unsigned int nofs_flag;
+
+			nofs_flag = memalloc_nofs_save();
 			invalidate_inode_pages2(inode->i_mapping);
+			memalloc_nofs_restore(nofs_flag);
 			iput(inode);
 		}
 		spin_lock(&root->delalloc_lock);
@@ -5140,7 +5144,12 @@ static void btrfs_cleanup_bg_io(struct btrfs_block_group *cache)

 	inode = cache->io_ctl.inode;
 	if (inode) {
+		unsigned int nofs_flag;
+
+		nofs_flag = memalloc_nofs_save();
 		invalidate_inode_pages2(inode->i_mapping);
+		memalloc_nofs_restore(nofs_flag);
+
 		BTRFS_I(inode)->generation = 0;
 		cache->io_ctl.inode = NULL;
 		iput(inode);
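Both hunks wrap invalidate_inode_pages2() in a scoped NOFS section: every allocation inside the window implicitly loses __GFP_FS, so direct reclaim cannot re-enter the filesystem while it is tearing down an aborted transaction. The pattern in isolation — a generic sketch, not code from this patch:

	#include <linux/sched/mm.h>

	static void do_work_under_nofs(void)
	{
		unsigned int nofs_flag;

		nofs_flag = memalloc_nofs_save();
		/* Allocations here behave as if GFP_NOFS were passed, which
		 * prevents reclaim from recursing into the filesystem. */
		memalloc_nofs_restore(nofs_flag);
	}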
@@ -904,6 +904,14 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
 			ctx->sfu_remap = false; /* disable SFU mapping */
 		}
 		break;
+	case Opt_mapchars:
+		if (result.negated)
+			ctx->sfu_remap = false;
+		else {
+			ctx->sfu_remap = true;
+			ctx->remap = false; /* disable SFM (mapposix) mapping */
+		}
+		break;
 	case Opt_user_xattr:
 		if (result.negated)
 			ctx->no_xattr = 1;
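With this hunk the mapchars and nomapchars mount options are parsed again instead of being silently accepted and ignored. For example, a mount such as mount -t cifs //server/share /mnt -o username=user,mapchars now actually enables SFU-style character remapping and, as the code above shows, turns off the SFM (mapposix) remapping that would otherwise apply.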
@@ -242,6 +242,7 @@ static int ocfs2_mknod(struct user_namespace *mnt_userns,
 	int want_meta = 0;
 	int xattr_credits = 0;
 	struct ocfs2_security_xattr_info si = {
+		.name = NULL,
 		.enable = 1,
 	};
 	int did_quota_inode = 0;
@@ -1805,6 +1806,7 @@ static int ocfs2_symlink(struct user_namespace *mnt_userns,
 	int want_clusters = 0;
 	int xattr_credits = 0;
 	struct ocfs2_security_xattr_info si = {
+		.name = NULL,
 		.enable = 1,
 	};
 	int did_quota = 0, did_quota_inode = 0;
@@ -7259,9 +7259,21 @@ static int ocfs2_xattr_security_set(const struct xattr_handler *handler,
 static int ocfs2_initxattrs(struct inode *inode, const struct xattr *xattr_array,
 			    void *fs_info)
 {
+	struct ocfs2_security_xattr_info *si = fs_info;
 	const struct xattr *xattr;
 	int err = 0;

+	if (si) {
+		si->value = kmemdup(xattr_array->value, xattr_array->value_len,
+				    GFP_KERNEL);
+		if (!si->value)
+			return -ENOMEM;
+
+		si->name = xattr_array->name;
+		si->value_len = xattr_array->value_len;
+		return 0;
+	}
+
 	for (xattr = xattr_array; xattr->name != NULL; xattr++) {
 		err = ocfs2_xattr_set(inode, OCFS2_XATTR_INDEX_SECURITY,
 				      xattr->name, xattr->value,
@@ -7277,13 +7289,23 @@ int ocfs2_init_security_get(struct inode *inode,
 			    const struct qstr *qstr,
 			    struct ocfs2_security_xattr_info *si)
 {
+	int ret;
+
 	/* check whether ocfs2 support feature xattr */
 	if (!ocfs2_supports_xattr(OCFS2_SB(dir->i_sb)))
 		return -EOPNOTSUPP;
-	if (si)
-		return security_old_inode_init_security(inode, dir, qstr,
-							&si->name, &si->value,
-							&si->value_len);
+	if (si) {
+		ret = security_inode_init_security(inode, dir, qstr,
+						   &ocfs2_initxattrs, si);
+		/*
+		 * security_inode_init_security() does not return -EOPNOTSUPP,
+		 * we have to check the xattr ourselves.
+		 */
+		if (!ret && !si->name)
+			si->enable = 0;
+
+		return ret;
+	}

 	return security_inode_init_security(inode, dir, qstr,
 					    &ocfs2_initxattrs, NULL);
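The switch works because security_inode_init_security() hands the filesystem a NULL-name-terminated array of xattrs through an initxattrs callback; ocfs2 uses that callback both to write the xattrs and, when si is passed, to capture the first entry for its credit reservation. The callback shape in isolation — a minimal sketch in which myfs_initxattrs and myfs_set_xattr are hypothetical names, not part of this merge:

	#include <linux/fs.h>
	#include <linux/xattr.h>

	/* Hypothetical per-filesystem xattr writer. */
	int myfs_set_xattr(struct inode *inode, const char *name,
			   const void *value, size_t len);

	/* Minimal initxattrs callback for security_inode_init_security(). */
	static int myfs_initxattrs(struct inode *inode,
				   const struct xattr *xattr_array,
				   void *fs_info)
	{
		const struct xattr *xattr;
		int err;

		/* The array is terminated by an entry with a NULL name. */
		for (xattr = xattr_array; xattr->name != NULL; xattr++) {
			err = myfs_set_xattr(inode, xattr->name,
					     xattr->value, xattr->value_len);
			if (err)
				return err;
		}
		return 0;
	}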
Some files were not shown because too many files have changed in this diff.