Merge 5.4.246 into android11-5.4-lts
Changes in 5.4.246
    RDMA/efa: Fix unsupported page sizes in device
    RDMA/bnxt_re: Enable SRIOV VF support on Broadcom's 57500 adapter series
    RDMA/bnxt_re: Refactor queue pair creation code
    RDMA/bnxt_re: Fix return value of bnxt_re_process_raw_qp_pkt_rx
    iommu/rockchip: Fix unwind goto issue
    iommu/amd: Don't block updates to GATag if guest mode is on
    dmaengine: pl330: rename _start to prevent build error
    net/mlx5: fw_tracer, Fix event handling
    netrom: fix info-leak in nr_write_internal()
    af_packet: Fix data-races of pkt_sk(sk)->num.
    amd-xgbe: fix the false linkup in xgbe_phy_status
    mtd: rawnand: ingenic: fix empty stub helper definitions
    af_packet: do not use READ_ONCE() in packet_bind()
    tcp: deny tcp_disconnect() when threads are waiting
    tcp: Return user_mss for TCP_MAXSEG in CLOSE/LISTEN state if user_mss set
    net/sched: sch_ingress: Only create under TC_H_INGRESS
    net/sched: sch_clsact: Only create under TC_H_CLSACT
    net/sched: Reserve TC_H_INGRESS (TC_H_CLSACT) for ingress (clsact) Qdiscs
    net/sched: Prohibit regrafting ingress or clsact Qdiscs
    net: sched: fix NULL pointer dereference in mq_attach
    ocfs2/dlm: move BITS_TO_BYTES() to bitops.h for wider use
    net/netlink: fix NETLINK_LIST_MEMBERSHIPS length report
    udp6: Fix race condition in udp6_sendmsg & connect
    net/sched: flower: fix possible OOB write in fl_set_geneve_opt()
    net: dsa: mv88e6xxx: Increase wait after reset deactivation
    mtd: rawnand: marvell: ensure timing values are written
    mtd: rawnand: marvell: don't set the NAND frequency select
    watchdog: menz069_wdt: fix watchdog initialisation
    mailbox: mailbox-test: Fix potential double-free in mbox_test_message_write()
    ARM: 9295/1: unwind:fix unwind abort for uleb128 case
    media: rcar-vin: Select correct interrupt mode for V4L2_FIELD_ALTERNATE
    fbdev: modedb: Add 1920x1080 at 60 Hz video mode
    fbdev: stifb: Fix info entry in sti_struct on error path
    nbd: Fix debugfs_create_dir error checking
    ASoC: dwc: limit the number of overrun messages
    xfrm: Check if_id in inbound policy/secpath match
    ASoC: ssm2602: Add workaround for playback distortions
    media: dvb_demux: fix a bug for the continuity counter
    media: dvb-usb: az6027: fix three null-ptr-deref in az6027_i2c_xfer()
    media: dvb-usb-v2: ec168: fix null-ptr-deref in ec168_i2c_xfer()
    media: dvb-usb-v2: ce6230: fix null-ptr-deref in ce6230_i2c_master_xfer()
    media: dvb-usb-v2: rtl28xxu: fix null-ptr-deref in rtl28xxu_i2c_xfer
    media: dvb-usb: digitv: fix null-ptr-deref in digitv_i2c_xfer()
    media: dvb-usb: dw2102: fix uninit-value in su3000_read_mac_address
    media: netup_unidvb: fix irq init by register it at the end of probe
    media: dvb_ca_en50221: fix a size write bug
    media: ttusb-dec: fix memory leak in ttusb_dec_exit_dvb()
    media: mn88443x: fix !CONFIG_OF error by drop of_match_ptr from ID table
    media: dvb-core: Fix use-after-free due on race condition at dvb_net
    media: dvb-core: Fix kernel WARNING for blocking operation in wait_event*()
    media: dvb-core: Fix use-after-free due to race condition at dvb_ca_en50221
    wifi: rtl8xxxu: fix authentication timeout due to incorrect RCR value
    ARM: dts: stm32: add pin map for CAN controller on stm32f7
    arm64/mm: mark private VM_FAULT_X defines as vm_fault_t
    scsi: core: Decrease scsi_device's iorequest_cnt if dispatch failed
    wifi: b43: fix incorrect __packed annotation
    netfilter: conntrack: define variables exp_nat_nla_policy and any_addr with CONFIG_NF_NAT
    ALSA: oss: avoid missing-prototype warnings
    atm: hide unused procfs functions
    mailbox: mailbox-test: fix a locking issue in mbox_test_message_write()
    iio: adc: mxs-lradc: fix the order of two cleanup operations
    HID: google: add jewel USB id
    HID: wacom: avoid integer overflow in wacom_intuos_inout()
    iio: light: vcnl4035: fixed chip ID check
    iio: dac: mcp4725: Fix i2c_master_send() return value handling
    iio: dac: build ad5758 driver when AD5758 is selected
    net: usb: qmi_wwan: Set DTR quirk for BroadMobi BM818
    usb: gadget: f_fs: Add unbind event before functionfs_unbind
    misc: fastrpc: return -EPIPE to invocations on device removal
    misc: fastrpc: reject new invocations during device removal
    scsi: stex: Fix gcc 13 warnings
    ata: libata-scsi: Use correct device no in ata_find_dev()
    flow_dissector: work around stack frame size warning
    x86/boot: Wrap literal addresses in absolute_pointer()
    ACPI: thermal: drop an always true check
    gcc-12: disable '-Wdangling-pointer' warning for now
    eth: sun: cassini: remove dead code
    kernel/extable.c: use address-of operator on section symbols
    treewide: Remove uninitialized_var() usage
    lib/dynamic_debug.c: use address-of operator on section symbols
    wifi: rtlwifi: remove always-true condition pointed out by GCC 12
    mmc: vub300: fix invalid response handling
    tty: serial: fsl_lpuart: use UARTCTRL_TXINV to send break instead of UARTCTRL_SBK
    selinux: don't use make's grouped targets feature yet
    tracing/probe: trace_probe_primary_from_call(): checked list_first_entry
    ext4: add EA_INODE checking to ext4_iget()
    ext4: set lockdep subclass for the ea_inode in ext4_xattr_inode_cache_find()
    ext4: disallow ea_inodes with extended attributes
    ext4: add lockdep annotations for i_data_sem for ea_inode's
    fbcon: Fix null-ptr-deref in soft_cursor
    test_firmware: fix the memory leak of the allocated firmware buffer
    regmap: Account for register length when chunking
    scsi: dpt_i2o: Remove broken pass-through ioctl (I2OUSERCMD)
    scsi: dpt_i2o: Do not process completions with invalid addresses
    RDMA/bnxt_re: Remove set but not used variable 'dev_attr'
    RDMA/bnxt_re: Remove the qp from list only if the qp destroy succeeds
    drm/edid: Fix uninitialized variable in drm_cvt_modes()
    wifi: rtlwifi: 8192de: correct checking of IQK reload
    drm/edid: fix objtool warning in drm_cvt_modes()
    Linux 5.4.246

Change-Id: I8721e40543af31c56dbbd47910dd3b474e3a79ab
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 4
-SUBLEVEL = 245
+SUBLEVEL = 246
 EXTRAVERSION =
 NAME = Kleptomaniac Octopus
@@ -797,6 +797,10 @@ endif
 KBUILD_CFLAGS += $(call cc-disable-warning, unused-but-set-variable)
 
 KBUILD_CFLAGS += $(call cc-disable-warning, unused-const-variable)
+
+# These result in bogus false positives
+KBUILD_CFLAGS += $(call cc-disable-warning, dangling-pointer)
+
 ifdef CONFIG_FRAME_POINTER
 KBUILD_CFLAGS += -fno-omit-frame-pointer -fno-optimize-sibling-calls
 else
@ -284,6 +284,88 @@
|
||||
slew-rate = <2>;
|
||||
};
|
||||
};
|
||||
|
||||
can1_pins_a: can1-0 {
|
||||
pins1 {
|
||||
pinmux = <STM32_PINMUX('A', 12, AF9)>; /* CAN1_TX */
|
||||
};
|
||||
pins2 {
|
||||
pinmux = <STM32_PINMUX('A', 11, AF9)>; /* CAN1_RX */
|
||||
bias-pull-up;
|
||||
};
|
||||
};
|
||||
|
||||
can1_pins_b: can1-1 {
|
||||
pins1 {
|
||||
pinmux = <STM32_PINMUX('B', 9, AF9)>; /* CAN1_TX */
|
||||
};
|
||||
pins2 {
|
||||
pinmux = <STM32_PINMUX('B', 8, AF9)>; /* CAN1_RX */
|
||||
bias-pull-up;
|
||||
};
|
||||
};
|
||||
|
||||
can1_pins_c: can1-2 {
|
||||
pins1 {
|
||||
pinmux = <STM32_PINMUX('D', 1, AF9)>; /* CAN1_TX */
|
||||
};
|
||||
pins2 {
|
||||
pinmux = <STM32_PINMUX('D', 0, AF9)>; /* CAN1_RX */
|
||||
bias-pull-up;
|
||||
|
||||
};
|
||||
};
|
||||
|
||||
can1_pins_d: can1-3 {
|
||||
pins1 {
|
||||
pinmux = <STM32_PINMUX('H', 13, AF9)>; /* CAN1_TX */
|
||||
};
|
||||
pins2 {
|
||||
pinmux = <STM32_PINMUX('H', 14, AF9)>; /* CAN1_RX */
|
||||
bias-pull-up;
|
||||
|
||||
};
|
||||
};
|
||||
|
||||
can2_pins_a: can2-0 {
|
||||
pins1 {
|
||||
pinmux = <STM32_PINMUX('B', 6, AF9)>; /* CAN2_TX */
|
||||
};
|
||||
pins2 {
|
||||
pinmux = <STM32_PINMUX('B', 5, AF9)>; /* CAN2_RX */
|
||||
bias-pull-up;
|
||||
};
|
||||
};
|
||||
|
||||
can2_pins_b: can2-1 {
|
||||
pins1 {
|
||||
pinmux = <STM32_PINMUX('B', 13, AF9)>; /* CAN2_TX */
|
||||
};
|
||||
pins2 {
|
||||
pinmux = <STM32_PINMUX('B', 12, AF9)>; /* CAN2_RX */
|
||||
bias-pull-up;
|
||||
};
|
||||
};
|
||||
|
||||
can3_pins_a: can3-0 {
|
||||
pins1 {
|
||||
pinmux = <STM32_PINMUX('A', 15, AF11)>; /* CAN3_TX */
|
||||
};
|
||||
pins2 {
|
||||
pinmux = <STM32_PINMUX('A', 8, AF11)>; /* CAN3_RX */
|
||||
bias-pull-up;
|
||||
};
|
||||
};
|
||||
|
||||
can3_pins_b: can3-1 {
|
||||
pins1 {
|
||||
pinmux = <STM32_PINMUX('B', 4, AF11)>; /* CAN3_TX */
|
||||
};
|
||||
pins2 {
|
||||
pinmux = <STM32_PINMUX('B', 3, AF11)>; /* CAN3_RX */
|
||||
bias-pull-up;
|
||||
};
|
||||
};
|
||||
};
|
||||
};
|
||||
};
|
||||
|
@@ -300,6 +300,29 @@ static int unwind_exec_pop_subset_r0_to_r3(struct unwind_ctrl_block *ctrl,
 	return URC_OK;
 }
 
+static unsigned long unwind_decode_uleb128(struct unwind_ctrl_block *ctrl)
+{
+	unsigned long bytes = 0;
+	unsigned long insn;
+	unsigned long result = 0;
+
+	/*
+	 * unwind_get_byte() will advance `ctrl` one instruction at a time, so
+	 * loop until we get an instruction byte where bit 7 is not set.
+	 *
+	 * Note: This decodes a maximum of 4 bytes to output 28 bits data where
+	 * max is 0xfffffff: that will cover a vsp increment of 1073742336, hence
+	 * it is sufficient for unwinding the stack.
+	 */
+	do {
+		insn = unwind_get_byte(ctrl);
+		result |= (insn & 0x7f) << (bytes * 7);
+		bytes++;
+	} while (!!(insn & 0x80) && (bytes != sizeof(result)));
+
+	return result;
+}
+
 /*
  * Execute the current unwind instruction.
  */
@@ -353,7 +376,7 @@ static int unwind_exec_insn(struct unwind_ctrl_block *ctrl)
 		if (ret)
 			goto error;
 	} else if (insn == 0xb2) {
-		unsigned long uleb128 = unwind_get_byte(ctrl);
+		unsigned long uleb128 = unwind_decode_uleb128(ctrl);
 
 		ctrl->vrs[SP] += 0x204 + (uleb128 << 2);
 	} else {
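The new helper above matters because ULEB128 stores seven payload bits per byte, least-significant group first, with bit 7 acting as a continuation flag; reading a single byte, as the old 0xb2 handler did, truncates any operand of 0x80 or more. A minimal standalone sketch of the same decode (illustrative only, not the kernel helper):

	/* Decode one ULEB128 value from a byte buffer; e.g. 0x81 0x01 -> 129. */
	static unsigned long uleb128_decode(const unsigned char *p)
	{
		unsigned long result = 0, shift = 0;
		unsigned char byte;

		do {
			byte = *p++;
			result |= (unsigned long)(byte & 0x7f) << shift;
			shift += 7;
		} while (byte & 0x80);

		return result;
	}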
@ -653,7 +653,7 @@ static void __init map_sa1100_gpio_regs( void )
|
||||
*/
|
||||
static void __init get_assabet_scr(void)
|
||||
{
|
||||
unsigned long uninitialized_var(scr), i;
|
||||
unsigned long scr, i;
|
||||
|
||||
GPDR |= 0x3fc; /* Configure GPIO 9:2 as outputs */
|
||||
GPSR = 0x3fc; /* Write 0xFF to GPIO 9:2 */
|
||||
|
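Many of the hunks that follow simply drop uninitialized_var(). That macro expanded to roughly a self-assignment ("x = x") purely to silence -Wmaybe-uninitialized, which could mask real uninitialized reads; the treewide removal replaces it with a plain declaration. A rough before/after sketch:

	/* Before (roughly): the macro only suppressed the compiler warning. */
	unsigned long scr = scr, i;

	/* After: plain declaration; add an explicit initializer only where a
	 * code path can genuinely read the variable before assigning it. */
	unsigned long scr, i;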
@ -799,7 +799,7 @@ static int alignment_get_thumb(struct pt_regs *regs, u16 *ip, u16 *inst)
|
||||
static int
|
||||
do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
|
||||
{
|
||||
union offset_union uninitialized_var(offset);
|
||||
union offset_union offset;
|
||||
unsigned long instrptr;
|
||||
int (*handler)(unsigned long addr, u32 instr, struct pt_regs *regs);
|
||||
unsigned int type;
|
||||
|
@ -403,8 +403,8 @@ static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *re
|
||||
}
|
||||
}
|
||||
|
||||
#define VM_FAULT_BADMAP 0x010000
|
||||
#define VM_FAULT_BADACCESS 0x020000
|
||||
#define VM_FAULT_BADMAP ((__force vm_fault_t)0x010000)
|
||||
#define VM_FAULT_BADACCESS ((__force vm_fault_t)0x020000)
|
||||
|
||||
static vm_fault_t __do_page_fault(struct mm_struct *mm, unsigned long addr,
|
||||
unsigned int mm_flags, unsigned long vm_flags)
|
||||
|
@ -444,7 +444,7 @@ static void
|
||||
do_copy_task_regs (struct task_struct *task, struct unw_frame_info *info, void *arg)
|
||||
{
|
||||
unsigned long mask, sp, nat_bits = 0, ar_rnat, urbs_end, cfm;
|
||||
unsigned long uninitialized_var(ip); /* GCC be quiet */
|
||||
unsigned long ip;
|
||||
elf_greg_t *dst = arg;
|
||||
struct pt_regs *pt;
|
||||
char nat;
|
||||
|
@ -180,7 +180,7 @@ static void *per_cpu_node_setup(void *cpu_data, int node)
|
||||
void __init setup_per_cpu_areas(void)
|
||||
{
|
||||
struct pcpu_alloc_info *ai;
|
||||
struct pcpu_group_info *uninitialized_var(gi);
|
||||
struct pcpu_group_info *gi;
|
||||
unsigned int *cpu_map;
|
||||
void *base;
|
||||
unsigned long base_offset;
|
||||
|
@ -369,7 +369,7 @@ EXPORT_SYMBOL(flush_tlb_range);
|
||||
|
||||
void ia64_tlb_init(void)
|
||||
{
|
||||
ia64_ptce_info_t uninitialized_var(ptce_info); /* GCC be quiet */
|
||||
ia64_ptce_info_t ptce_info;
|
||||
u64 tr_pgbits;
|
||||
long status;
|
||||
pal_vm_info_1_u_t vm_info_1;
|
||||
|
@ -80,7 +80,7 @@ static void dump_tlb(int first, int last)
|
||||
unsigned int pagemask, guestctl1 = 0, c0, c1, i;
|
||||
unsigned long asidmask = cpu_asid_mask(&current_cpu_data);
|
||||
int asidwidth = DIV_ROUND_UP(ilog2(asidmask) + 1, 4);
|
||||
unsigned long uninitialized_var(s_mmid);
|
||||
unsigned long s_mmid;
|
||||
#ifdef CONFIG_32BIT
|
||||
bool xpa = cpu_has_xpa && (read_c0_pagegrain() & PG_ELPA);
|
||||
int pwidth = xpa ? 11 : 8;
|
||||
|
@ -84,7 +84,7 @@ void setup_zero_pages(void)
|
||||
static void *__kmap_pgprot(struct page *page, unsigned long addr, pgprot_t prot)
|
||||
{
|
||||
enum fixed_addresses idx;
|
||||
unsigned int uninitialized_var(old_mmid);
|
||||
unsigned int old_mmid;
|
||||
unsigned long vaddr, flags, entrylo;
|
||||
unsigned long old_ctx;
|
||||
pte_t pte;
|
||||
|
@ -120,7 +120,7 @@ void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
|
||||
if (size <= (current_cpu_data.tlbsizeftlbsets ?
|
||||
current_cpu_data.tlbsize / 8 :
|
||||
current_cpu_data.tlbsize / 2)) {
|
||||
unsigned long old_entryhi, uninitialized_var(old_mmid);
|
||||
unsigned long old_entryhi, old_mmid;
|
||||
int newpid = cpu_asid(cpu, mm);
|
||||
|
||||
old_entryhi = read_c0_entryhi();
|
||||
@ -214,7 +214,7 @@ void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
|
||||
int cpu = smp_processor_id();
|
||||
|
||||
if (cpu_context(cpu, vma->vm_mm) != 0) {
|
||||
unsigned long uninitialized_var(old_mmid);
|
||||
unsigned long old_mmid;
|
||||
unsigned long flags, old_entryhi;
|
||||
int idx;
|
||||
|
||||
@ -381,7 +381,7 @@ void add_wired_entry(unsigned long entrylo0, unsigned long entrylo1,
|
||||
#ifdef CONFIG_XPA
|
||||
panic("Broken for XPA kernels");
|
||||
#else
|
||||
unsigned int uninitialized_var(old_mmid);
|
||||
unsigned int old_mmid;
|
||||
unsigned long flags;
|
||||
unsigned long wired;
|
||||
unsigned long old_pagemask;
|
||||
|
@ -31,7 +31,7 @@ unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid,
|
||||
gva_t eaddr, void *to, void *from,
|
||||
unsigned long n)
|
||||
{
|
||||
int uninitialized_var(old_pid), old_lpid;
|
||||
int old_pid, old_lpid;
|
||||
unsigned long quadrant, ret = n;
|
||||
bool is_load = !!to;
|
||||
|
||||
|
@ -340,7 +340,7 @@ static int mpc52xx_irqhost_map(struct irq_domain *h, unsigned int virq,
|
||||
{
|
||||
int l1irq;
|
||||
int l2irq;
|
||||
struct irq_chip *uninitialized_var(irqchip);
|
||||
struct irq_chip *irqchip;
|
||||
void *hndlr;
|
||||
int type;
|
||||
u32 reg;
|
||||
|
@ -145,7 +145,7 @@ static int pcpu_sigp_retry(struct pcpu *pcpu, u8 order, u32 parm)
|
||||
|
||||
static inline int pcpu_stopped(struct pcpu *pcpu)
|
||||
{
|
||||
u32 uninitialized_var(status);
|
||||
u32 status;
|
||||
|
||||
if (__pcpu_sigp(pcpu->address, SIGP_SENSE,
|
||||
0, &status) != SIGP_CC_STATUS_STORED)
|
||||
|
@ -110,66 +110,78 @@ typedef unsigned int addr_t;
|
||||
|
||||
static inline u8 rdfs8(addr_t addr)
|
||||
{
|
||||
u8 *ptr = (u8 *)absolute_pointer(addr);
|
||||
u8 v;
|
||||
asm volatile("movb %%fs:%1,%0" : "=q" (v) : "m" (*(u8 *)addr));
|
||||
asm volatile("movb %%fs:%1,%0" : "=q" (v) : "m" (*ptr));
|
||||
return v;
|
||||
}
|
||||
static inline u16 rdfs16(addr_t addr)
|
||||
{
|
||||
u16 *ptr = (u16 *)absolute_pointer(addr);
|
||||
u16 v;
|
||||
asm volatile("movw %%fs:%1,%0" : "=r" (v) : "m" (*(u16 *)addr));
|
||||
asm volatile("movw %%fs:%1,%0" : "=r" (v) : "m" (*ptr));
|
||||
return v;
|
||||
}
|
||||
static inline u32 rdfs32(addr_t addr)
|
||||
{
|
||||
u32 *ptr = (u32 *)absolute_pointer(addr);
|
||||
u32 v;
|
||||
asm volatile("movl %%fs:%1,%0" : "=r" (v) : "m" (*(u32 *)addr));
|
||||
asm volatile("movl %%fs:%1,%0" : "=r" (v) : "m" (*ptr));
|
||||
return v;
|
||||
}
|
||||
|
||||
static inline void wrfs8(u8 v, addr_t addr)
|
||||
{
|
||||
asm volatile("movb %1,%%fs:%0" : "+m" (*(u8 *)addr) : "qi" (v));
|
||||
u8 *ptr = (u8 *)absolute_pointer(addr);
|
||||
asm volatile("movb %1,%%fs:%0" : "+m" (*ptr) : "qi" (v));
|
||||
}
|
||||
static inline void wrfs16(u16 v, addr_t addr)
|
||||
{
|
||||
asm volatile("movw %1,%%fs:%0" : "+m" (*(u16 *)addr) : "ri" (v));
|
||||
u16 *ptr = (u16 *)absolute_pointer(addr);
|
||||
asm volatile("movw %1,%%fs:%0" : "+m" (*ptr) : "ri" (v));
|
||||
}
|
||||
static inline void wrfs32(u32 v, addr_t addr)
|
||||
{
|
||||
asm volatile("movl %1,%%fs:%0" : "+m" (*(u32 *)addr) : "ri" (v));
|
||||
u32 *ptr = (u32 *)absolute_pointer(addr);
|
||||
asm volatile("movl %1,%%fs:%0" : "+m" (*ptr) : "ri" (v));
|
||||
}
|
||||
|
||||
static inline u8 rdgs8(addr_t addr)
|
||||
{
|
||||
u8 *ptr = (u8 *)absolute_pointer(addr);
|
||||
u8 v;
|
||||
asm volatile("movb %%gs:%1,%0" : "=q" (v) : "m" (*(u8 *)addr));
|
||||
asm volatile("movb %%gs:%1,%0" : "=q" (v) : "m" (*ptr));
|
||||
return v;
|
||||
}
|
||||
static inline u16 rdgs16(addr_t addr)
|
||||
{
|
||||
u16 *ptr = (u16 *)absolute_pointer(addr);
|
||||
u16 v;
|
||||
asm volatile("movw %%gs:%1,%0" : "=r" (v) : "m" (*(u16 *)addr));
|
||||
asm volatile("movw %%gs:%1,%0" : "=r" (v) : "m" (*ptr));
|
||||
return v;
|
||||
}
|
||||
static inline u32 rdgs32(addr_t addr)
|
||||
{
|
||||
u32 *ptr = (u32 *)absolute_pointer(addr);
|
||||
u32 v;
|
||||
asm volatile("movl %%gs:%1,%0" : "=r" (v) : "m" (*(u32 *)addr));
|
||||
asm volatile("movl %%gs:%1,%0" : "=r" (v) : "m" (*ptr));
|
||||
return v;
|
||||
}
|
||||
|
||||
static inline void wrgs8(u8 v, addr_t addr)
|
||||
{
|
||||
asm volatile("movb %1,%%gs:%0" : "+m" (*(u8 *)addr) : "qi" (v));
|
||||
u8 *ptr = (u8 *)absolute_pointer(addr);
|
||||
asm volatile("movb %1,%%gs:%0" : "+m" (*ptr) : "qi" (v));
|
||||
}
|
||||
static inline void wrgs16(u16 v, addr_t addr)
|
||||
{
|
||||
asm volatile("movw %1,%%gs:%0" : "+m" (*(u16 *)addr) : "ri" (v));
|
||||
u16 *ptr = (u16 *)absolute_pointer(addr);
|
||||
asm volatile("movw %1,%%gs:%0" : "+m" (*ptr) : "ri" (v));
|
||||
}
|
||||
static inline void wrgs32(u32 v, addr_t addr)
|
||||
{
|
||||
asm volatile("movl %1,%%gs:%0" : "+m" (*(u32 *)addr) : "ri" (v));
|
||||
u32 *ptr = (u32 *)absolute_pointer(addr);
|
||||
asm volatile("movl %1,%%gs:%0" : "+m" (*ptr) : "ri" (v));
|
||||
}
|
||||
|
||||
/* Note: these only return true/false, not a signed return value! */
|
||||
|
@ -33,7 +33,7 @@ static void copy_boot_params(void)
|
||||
u16 cl_offset;
|
||||
};
|
||||
const struct old_cmdline * const oldcmd =
|
||||
(const struct old_cmdline *)OLD_CL_ADDRESS;
|
||||
absolute_pointer(OLD_CL_ADDRESS);
|
||||
|
||||
BUILD_BUG_ON(sizeof(boot_params) != 4096);
|
||||
memcpy(&boot_params.hdr, &hdr, sizeof(hdr));
|
||||
|
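The x86 boot hunks above route fixed addresses through absolute_pointer() so the compiler no longer sees a literal constant and cannot warn about (or reorder) accesses to objects at those addresses. A rough sketch of the underlying trick, with a hypothetical macro name and an illustrative address:

	/* Launder a literal address through an asm no-op so GCC loses track of
	 * the constant value (sketch only, not the kernel's definition). */
	#define hide_const_ptr(val) ({ void *__p = (void *)(unsigned long)(val); \
	                               __asm__ ("" : "+r"(__p)); __p; })

	unsigned short *oldcmd = hide_const_ptr(0x90020); /* address is illustrative */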
@ -95,7 +95,7 @@ static void ich_force_hpet_resume(void)
|
||||
static void ich_force_enable_hpet(struct pci_dev *dev)
|
||||
{
|
||||
u32 val;
|
||||
u32 uninitialized_var(rcba);
|
||||
u32 rcba;
|
||||
int err = 0;
|
||||
|
||||
if (hpet_address || force_hpet_address)
|
||||
@ -185,7 +185,7 @@ static void hpet_print_force_info(void)
|
||||
static void old_ich_force_hpet_resume(void)
|
||||
{
|
||||
u32 val;
|
||||
u32 uninitialized_var(gen_cntl);
|
||||
u32 gen_cntl;
|
||||
|
||||
if (!force_hpet_address || !cached_dev)
|
||||
return;
|
||||
@ -207,7 +207,7 @@ static void old_ich_force_hpet_resume(void)
|
||||
static void old_ich_force_enable_hpet(struct pci_dev *dev)
|
||||
{
|
||||
u32 val;
|
||||
u32 uninitialized_var(gen_cntl);
|
||||
u32 gen_cntl;
|
||||
|
||||
if (hpet_address || force_hpet_address)
|
||||
return;
|
||||
@ -298,7 +298,7 @@ static void vt8237_force_hpet_resume(void)
|
||||
|
||||
static void vt8237_force_enable_hpet(struct pci_dev *dev)
|
||||
{
|
||||
u32 uninitialized_var(val);
|
||||
u32 val;
|
||||
|
||||
if (hpet_address || force_hpet_address)
|
||||
return;
|
||||
@ -429,7 +429,7 @@ static void nvidia_force_hpet_resume(void)
|
||||
|
||||
static void nvidia_force_enable_hpet(struct pci_dev *dev)
|
||||
{
|
||||
u32 uninitialized_var(val);
|
||||
u32 val;
|
||||
|
||||
if (hpet_address || force_hpet_address)
|
||||
return;
|
||||
|
@ -479,7 +479,7 @@ static int __blk_bios_map_sg(struct request_queue *q, struct bio *bio,
|
||||
struct scatterlist *sglist,
|
||||
struct scatterlist **sg)
|
||||
{
|
||||
struct bio_vec uninitialized_var(bvec), bvprv = { NULL };
|
||||
struct bio_vec bvec, bvprv = { NULL };
|
||||
struct bvec_iter iter;
|
||||
int nsegs = 0;
|
||||
bool new_bio = false;
|
||||
|
@ -88,7 +88,7 @@ static void round_robin_cpu(unsigned int tsk_index)
|
||||
cpumask_var_t tmp;
|
||||
int cpu;
|
||||
unsigned long min_weight = -1;
|
||||
unsigned long uninitialized_var(preferred_cpu);
|
||||
unsigned long preferred_cpu;
|
||||
|
||||
if (!alloc_cpumask_var(&tmp, GFP_KERNEL))
|
||||
return;
|
||||
|
@ -1153,8 +1153,6 @@ static int acpi_thermal_resume(struct device *dev)
|
||||
return -EINVAL;
|
||||
|
||||
for (i = 0; i < ACPI_THERMAL_MAX_ACTIVE; i++) {
|
||||
if (!(&tz->trips.active[i]))
|
||||
break;
|
||||
if (!tz->trips.active[i].flags.valid)
|
||||
break;
|
||||
tz->trips.active[i].flags.enabled = 1;
|
||||
|
@ -162,7 +162,7 @@ static ssize_t ata_scsi_park_show(struct device *device,
|
||||
struct ata_link *link;
|
||||
struct ata_device *dev;
|
||||
unsigned long now;
|
||||
unsigned int uninitialized_var(msecs);
|
||||
unsigned int msecs;
|
||||
int rc = 0;
|
||||
|
||||
ap = ata_shost_to_port(sdev->host);
|
||||
@ -3036,18 +3036,36 @@ static unsigned int atapi_xlat(struct ata_queued_cmd *qc)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static struct ata_device *ata_find_dev(struct ata_port *ap, int devno)
|
||||
static struct ata_device *ata_find_dev(struct ata_port *ap, unsigned int devno)
|
||||
{
|
||||
if (!sata_pmp_attached(ap)) {
|
||||
if (likely(devno >= 0 &&
|
||||
devno < ata_link_max_devices(&ap->link)))
|
||||
/*
|
||||
* For the non-PMP case, ata_link_max_devices() returns 1 (SATA case),
|
||||
* or 2 (IDE master + slave case). However, the former case includes
|
||||
* libsas hosted devices which are numbered per scsi host, leading
|
||||
* to devno potentially being larger than 0 but with each struct
|
||||
* ata_device having its own struct ata_port and struct ata_link.
|
||||
* To accommodate these, ignore devno and always use device number 0.
|
||||
*/
|
||||
if (likely(!sata_pmp_attached(ap))) {
|
||||
int link_max_devices = ata_link_max_devices(&ap->link);
|
||||
|
||||
if (link_max_devices == 1)
|
||||
return &ap->link.device[0];
|
||||
|
||||
if (devno < link_max_devices)
|
||||
return &ap->link.device[devno];
|
||||
} else {
|
||||
if (likely(devno >= 0 &&
|
||||
devno < ap->nr_pmp_links))
|
||||
return &ap->pmp_link[devno].device[0];
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/*
|
||||
* For PMP-attached devices, the device number corresponds to C
|
||||
* (channel) of SCSI [H:C:I:L], indicating the port pmp link
|
||||
* for the device.
|
||||
*/
|
||||
if (devno < ap->nr_pmp_links)
|
||||
return &ap->pmp_link[devno].device[0];
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
|
@ -940,7 +940,7 @@ static int open_tx_first(struct atm_vcc *vcc)
|
||||
vcc->qos.txtp.max_pcr >= ATM_OC3_PCR);
|
||||
if (unlimited && zatm_dev->ubr != -1) zatm_vcc->shaper = zatm_dev->ubr;
|
||||
else {
|
||||
int uninitialized_var(pcr);
|
||||
int pcr;
|
||||
|
||||
if (unlimited) vcc->qos.txtp.max_sdu = ATM_MAX_AAL5_PDU;
|
||||
if ((zatm_vcc->shaper = alloc_shaper(vcc->dev,&pcr,
|
||||
|
@ -1850,6 +1850,8 @@ int _regmap_raw_write(struct regmap *map, unsigned int reg,
|
||||
size_t val_count = val_len / val_bytes;
|
||||
size_t chunk_count, chunk_bytes;
|
||||
size_t chunk_regs = val_count;
|
||||
size_t max_data = map->max_raw_write - map->format.reg_bytes -
|
||||
map->format.pad_bytes;
|
||||
int ret, i;
|
||||
|
||||
if (!val_count)
|
||||
@ -1857,8 +1859,8 @@ int _regmap_raw_write(struct regmap *map, unsigned int reg,
|
||||
|
||||
if (map->use_single_write)
|
||||
chunk_regs = 1;
|
||||
else if (map->max_raw_write && val_len > map->max_raw_write)
|
||||
chunk_regs = map->max_raw_write / val_bytes;
|
||||
else if (map->max_raw_write && val_len > max_data)
|
||||
chunk_regs = max_data / val_bytes;
|
||||
|
||||
chunk_count = val_count / chunk_regs;
|
||||
chunk_bytes = chunk_regs * val_bytes;
|
||||
|
@ -3426,7 +3426,7 @@ int drbd_adm_dump_devices(struct sk_buff *skb, struct netlink_callback *cb)
|
||||
{
|
||||
struct nlattr *resource_filter;
|
||||
struct drbd_resource *resource;
|
||||
struct drbd_device *uninitialized_var(device);
|
||||
struct drbd_device *device;
|
||||
int minor, err, retcode;
|
||||
struct drbd_genlmsghdr *dh;
|
||||
struct device_info device_info;
|
||||
@ -3515,7 +3515,7 @@ int drbd_adm_dump_connections(struct sk_buff *skb, struct netlink_callback *cb)
|
||||
{
|
||||
struct nlattr *resource_filter;
|
||||
struct drbd_resource *resource = NULL, *next_resource;
|
||||
struct drbd_connection *uninitialized_var(connection);
|
||||
struct drbd_connection *connection;
|
||||
int err = 0, retcode;
|
||||
struct drbd_genlmsghdr *dh;
|
||||
struct connection_info connection_info;
|
||||
@ -3677,7 +3677,7 @@ int drbd_adm_dump_peer_devices(struct sk_buff *skb, struct netlink_callback *cb)
|
||||
{
|
||||
struct nlattr *resource_filter;
|
||||
struct drbd_resource *resource;
|
||||
struct drbd_device *uninitialized_var(device);
|
||||
struct drbd_device *device;
|
||||
struct drbd_peer_device *peer_device = NULL;
|
||||
int minor, err, retcode;
|
||||
struct drbd_genlmsghdr *dh;
|
||||
|
@ -1609,7 +1609,7 @@ static int nbd_dev_dbg_init(struct nbd_device *nbd)
|
||||
return -EIO;
|
||||
|
||||
dir = debugfs_create_dir(nbd_name(nbd), nbd_dbg_dir);
|
||||
if (!dir) {
|
||||
if (IS_ERR(dir)) {
|
||||
dev_err(nbd_to_dev(nbd), "Failed to create debugfs dir for '%s'\n",
|
||||
nbd_name(nbd));
|
||||
return -EIO;
|
||||
@ -1635,7 +1635,7 @@ static int nbd_dbg_init(void)
|
||||
struct dentry *dbg_dir;
|
||||
|
||||
dbg_dir = debugfs_create_dir("nbd", NULL);
|
||||
if (!dbg_dir)
|
||||
if (IS_ERR(dbg_dir))
|
||||
return -EIO;
|
||||
|
||||
nbd_dbg_dir = dbg_dir;
|
||||
|
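The nbd hunks above fix the error check because debugfs_create_dir() reports failure with an ERR_PTR-encoded pointer rather than NULL, so a NULL test never triggers. Typical caller pattern (sketch):

	struct dentry *dir = debugfs_create_dir("example", NULL);
	if (IS_ERR(dir))
		return -EIO;	/* matches the handling added above */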
@ -2087,7 +2087,7 @@ static int rbd_object_map_update_finish(struct rbd_obj_request *obj_req,
|
||||
struct rbd_device *rbd_dev = obj_req->img_request->rbd_dev;
|
||||
struct ceph_osd_data *osd_data;
|
||||
u64 objno;
|
||||
u8 state, new_state, uninitialized_var(current_state);
|
||||
u8 state, new_state, current_state;
|
||||
bool has_current_state;
|
||||
void *p;
|
||||
|
||||
|
@ -56,7 +56,7 @@ static void clk_gate_endisable(struct clk_hw *hw, int enable)
|
||||
{
|
||||
struct clk_gate *gate = to_clk_gate(hw);
|
||||
int set = gate->flags & CLK_GATE_SET_TO_DISABLE ? 1 : 0;
|
||||
unsigned long uninitialized_var(flags);
|
||||
unsigned long flags;
|
||||
u32 reg;
|
||||
|
||||
set ^= enable;
|
||||
|
@ -1048,7 +1048,7 @@ static bool _trigger(struct pl330_thread *thrd)
|
||||
return true;
|
||||
}
|
||||
|
||||
static bool _start(struct pl330_thread *thrd)
|
||||
static bool pl330_start_thread(struct pl330_thread *thrd)
|
||||
{
|
||||
switch (_state(thrd)) {
|
||||
case PL330_STATE_FAULT_COMPLETING:
|
||||
@ -1696,7 +1696,7 @@ static int pl330_update(struct pl330_dmac *pl330)
|
||||
thrd->req_running = -1;
|
||||
|
||||
/* Get going again ASAP */
|
||||
_start(thrd);
|
||||
pl330_start_thread(thrd);
|
||||
|
||||
/* For now, just make a list of callbacks to be done */
|
||||
list_add_tail(&descdone->rqd, &pl330->req_done);
|
||||
@ -2083,7 +2083,7 @@ static void pl330_tasklet(unsigned long data)
|
||||
} else {
|
||||
/* Make sure the PL330 Channel thread is active */
|
||||
spin_lock(&pch->thread->dmac->lock);
|
||||
_start(pch->thread);
|
||||
pl330_start_thread(pch->thread);
|
||||
spin_unlock(&pch->thread->dmac->lock);
|
||||
}
|
||||
|
||||
@ -2101,7 +2101,7 @@ static void pl330_tasklet(unsigned long data)
|
||||
if (power_down) {
|
||||
pch->active = true;
|
||||
spin_lock(&pch->thread->dmac->lock);
|
||||
_start(pch->thread);
|
||||
pl330_start_thread(pch->thread);
|
||||
spin_unlock(&pch->thread->dmac->lock);
|
||||
power_down = false;
|
||||
}
|
||||
|
@ -1099,7 +1099,7 @@ static void context_tasklet(unsigned long data)
|
||||
static int context_add_buffer(struct context *ctx)
|
||||
{
|
||||
struct descriptor_buffer *desc;
|
||||
dma_addr_t uninitialized_var(bus_addr);
|
||||
dma_addr_t bus_addr;
|
||||
int offset;
|
||||
|
||||
/*
|
||||
@ -1289,7 +1289,7 @@ static int at_context_queue_packet(struct context *ctx,
|
||||
struct fw_packet *packet)
|
||||
{
|
||||
struct fw_ohci *ohci = ctx->ohci;
|
||||
dma_addr_t d_bus, uninitialized_var(payload_bus);
|
||||
dma_addr_t d_bus, payload_bus;
|
||||
struct driver_data *driver_data;
|
||||
struct descriptor *d, *last;
|
||||
__le32 *header;
|
||||
@ -2445,7 +2445,7 @@ static int ohci_set_config_rom(struct fw_card *card,
|
||||
{
|
||||
struct fw_ohci *ohci;
|
||||
__be32 *next_config_rom;
|
||||
dma_addr_t uninitialized_var(next_config_rom_bus);
|
||||
dma_addr_t next_config_rom_bus;
|
||||
|
||||
ohci = fw_ohci(card);
|
||||
|
||||
@ -2933,10 +2933,10 @@ static struct fw_iso_context *ohci_allocate_iso_context(struct fw_card *card,
|
||||
int type, int channel, size_t header_size)
|
||||
{
|
||||
struct fw_ohci *ohci = fw_ohci(card);
|
||||
struct iso_context *uninitialized_var(ctx);
|
||||
descriptor_callback_t uninitialized_var(callback);
|
||||
u64 *uninitialized_var(channels);
|
||||
u32 *uninitialized_var(mask), uninitialized_var(regs);
|
||||
struct iso_context *ctx;
|
||||
descriptor_callback_t callback;
|
||||
u64 *channels;
|
||||
u32 *mask, regs;
|
||||
int index, ret = -EBUSY;
|
||||
|
||||
spin_lock_irq(&ohci->lock);
|
||||
|
@ -985,7 +985,7 @@ static void sii8620_set_auto_zone(struct sii8620 *ctx)
|
||||
|
||||
static void sii8620_stop_video(struct sii8620 *ctx)
|
||||
{
|
||||
u8 uninitialized_var(val);
|
||||
u8 val;
|
||||
|
||||
sii8620_write_seq_static(ctx,
|
||||
REG_TPI_INTR_EN, 0,
|
||||
|
@ -2787,7 +2787,7 @@ static int drm_cvt_modes(struct drm_connector *connector,
|
||||
const u8 empty[3] = { 0, 0, 0 };
|
||||
|
||||
for (i = 0; i < 4; i++) {
|
||||
int uninitialized_var(width), height;
|
||||
int width, height;
|
||||
cvt = &(timing->data.other_data.data.cvt[i]);
|
||||
|
||||
if (!memcmp(cvt->code, empty, 3))
|
||||
@ -2795,6 +2795,8 @@ static int drm_cvt_modes(struct drm_connector *connector,
|
||||
|
||||
height = (cvt->code[0] + ((cvt->code[1] & 0xf0) << 4) + 1) * 2;
|
||||
switch (cvt->code[1] & 0x0c) {
|
||||
/* default - because compiler doesn't see that we've enumerated all cases */
|
||||
default:
|
||||
case 0x00:
|
||||
width = height * 4 / 3;
|
||||
break;
|
||||
|
@ -544,9 +544,9 @@ static unsigned long exynos_dsi_pll_find_pms(struct exynos_dsi *dsi,
|
||||
unsigned long best_freq = 0;
|
||||
u32 min_delta = 0xffffffff;
|
||||
u8 p_min, p_max;
|
||||
u8 _p, uninitialized_var(best_p);
|
||||
u16 _m, uninitialized_var(best_m);
|
||||
u8 _s, uninitialized_var(best_s);
|
||||
u8 _p, best_p;
|
||||
u16 _m, best_m;
|
||||
u8 _s, best_s;
|
||||
|
||||
p_min = DIV_ROUND_UP(fin, (12 * MHZ));
|
||||
p_max = fin / (6 * MHZ);
|
||||
|
@ -475,7 +475,7 @@ static struct i915_request *
|
||||
__unwind_incomplete_requests(struct intel_engine_cs *engine)
|
||||
{
|
||||
struct i915_request *rq, *rn, *active = NULL;
|
||||
struct list_head *uninitialized_var(pl);
|
||||
struct list_head *pl;
|
||||
int prio = I915_PRIORITY_INVALID;
|
||||
|
||||
lockdep_assert_held(&engine->active.lock);
|
||||
|
@ -1926,7 +1926,7 @@ int __intel_wait_for_register_fw(struct intel_uncore *uncore,
|
||||
unsigned int slow_timeout_ms,
|
||||
u32 *out_value)
|
||||
{
|
||||
u32 uninitialized_var(reg_value);
|
||||
u32 reg_value;
|
||||
#define done (((reg_value = intel_uncore_read_fw(uncore, reg)) & mask) == value)
|
||||
int ret;
|
||||
|
||||
|
@ -481,8 +481,8 @@ dw_mipi_dsi_get_lane_mbps(void *priv_data, const struct drm_display_mode *mode,
|
||||
unsigned long best_freq = 0;
|
||||
unsigned long fvco_min, fvco_max, fin, fout;
|
||||
unsigned int min_prediv, max_prediv;
|
||||
unsigned int _prediv, uninitialized_var(best_prediv);
|
||||
unsigned long _fbdiv, uninitialized_var(best_fbdiv);
|
||||
unsigned int _prediv, best_prediv;
|
||||
unsigned long _fbdiv, best_fbdiv;
|
||||
unsigned long min_delta = ULONG_MAX;
|
||||
|
||||
dsi->format = format;
|
||||
|
@ -473,6 +473,8 @@ static const struct hid_device_id hammer_devices[] = {
|
||||
USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_EEL) },
|
||||
{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
|
||||
USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_HAMMER) },
|
||||
{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
|
||||
USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_JEWEL) },
|
||||
{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
|
||||
USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_MAGNEMITE) },
|
||||
{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
|
||||
|
@ -490,6 +490,7 @@
|
||||
#define USB_DEVICE_ID_GOOGLE_MOONBALL 0x5044
|
||||
#define USB_DEVICE_ID_GOOGLE_DON 0x5050
|
||||
#define USB_DEVICE_ID_GOOGLE_EEL 0x5057
|
||||
#define USB_DEVICE_ID_GOOGLE_JEWEL 0x5061
|
||||
|
||||
#define USB_VENDOR_ID_GOTOP 0x08f2
|
||||
#define USB_DEVICE_ID_SUPER_Q2 0x007f
|
||||
|
@ -831,7 +831,7 @@ static int wacom_intuos_inout(struct wacom_wac *wacom)
|
||||
/* Enter report */
|
||||
if ((data[1] & 0xfc) == 0xc0) {
|
||||
/* serial number of the tool */
|
||||
wacom->serial[idx] = ((data[3] & 0x0f) << 28) +
|
||||
wacom->serial[idx] = ((__u64)(data[3] & 0x0f) << 28) +
|
||||
(data[4] << 20) + (data[5] << 12) +
|
||||
(data[6] << 4) + (data[7] >> 4);
|
||||
|
||||
|
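In the wacom hunk above, data[3] & 0x0f promotes to a 32-bit signed int, so shifting it left by 28 can land in the sign bit, which is undefined behaviour; casting to __u64 first keeps the whole serial-number assembly well defined. A reduced example with an illustrative byte value:

	unsigned char b = 0x0f;                         /* example nibble only */
	unsigned long long serial =
		((unsigned long long)(b & 0x0f) << 28); /* 0xF0000000, no overflow */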
@ -418,7 +418,7 @@ static void rk3x_i2c_handle_read(struct rk3x_i2c *i2c, unsigned int ipd)
|
||||
{
|
||||
unsigned int i;
|
||||
unsigned int len = i2c->msg->len - i2c->processed;
|
||||
u32 uninitialized_var(val);
|
||||
u32 val;
|
||||
u8 byte;
|
||||
|
||||
/* we only care for MBRF here. */
|
||||
|
@ -180,7 +180,7 @@ static int ide_get_dev_handle(struct device *dev, acpi_handle *handle,
|
||||
static acpi_handle ide_acpi_hwif_get_handle(ide_hwif_t *hwif)
|
||||
{
|
||||
struct device *dev = hwif->gendev.parent;
|
||||
acpi_handle uninitialized_var(dev_handle);
|
||||
acpi_handle dev_handle;
|
||||
u64 pcidevfn;
|
||||
acpi_handle chan_handle;
|
||||
int err;
|
||||
|
@ -608,7 +608,7 @@ static int ide_delayed_transfer_pc(ide_drive_t *drive)
|
||||
|
||||
static ide_startstop_t ide_transfer_pc(ide_drive_t *drive)
|
||||
{
|
||||
struct ide_atapi_pc *uninitialized_var(pc);
|
||||
struct ide_atapi_pc *pc;
|
||||
ide_hwif_t *hwif = drive->hwif;
|
||||
struct request *rq = hwif->rq;
|
||||
ide_expiry_t *expiry;
|
||||
|
@ -173,7 +173,7 @@ void ide_input_data(ide_drive_t *drive, struct ide_cmd *cmd, void *buf,
|
||||
u8 mmio = (hwif->host_flags & IDE_HFLAG_MMIO) ? 1 : 0;
|
||||
|
||||
if (io_32bit) {
|
||||
unsigned long uninitialized_var(flags);
|
||||
unsigned long flags;
|
||||
|
||||
if ((io_32bit & 2) && !mmio) {
|
||||
local_irq_save(flags);
|
||||
@ -217,7 +217,7 @@ void ide_output_data(ide_drive_t *drive, struct ide_cmd *cmd, void *buf,
|
||||
u8 mmio = (hwif->host_flags & IDE_HFLAG_MMIO) ? 1 : 0;
|
||||
|
||||
if (io_32bit) {
|
||||
unsigned long uninitialized_var(flags);
|
||||
unsigned long flags;
|
||||
|
||||
if ((io_32bit & 2) && !mmio) {
|
||||
local_irq_save(flags);
|
||||
|
@ -614,12 +614,12 @@ static int drive_is_ready(ide_drive_t *drive)
|
||||
void ide_timer_expiry (struct timer_list *t)
|
||||
{
|
||||
ide_hwif_t *hwif = from_timer(hwif, t, timer);
|
||||
ide_drive_t *uninitialized_var(drive);
|
||||
ide_drive_t *drive;
|
||||
ide_handler_t *handler;
|
||||
unsigned long flags;
|
||||
int wait = -1;
|
||||
int plug_device = 0;
|
||||
struct request *uninitialized_var(rq_in_flight);
|
||||
struct request *rq_in_flight;
|
||||
|
||||
spin_lock_irqsave(&hwif->lock, flags);
|
||||
|
||||
@ -772,13 +772,13 @@ irqreturn_t ide_intr (int irq, void *dev_id)
|
||||
{
|
||||
ide_hwif_t *hwif = (ide_hwif_t *)dev_id;
|
||||
struct ide_host *host = hwif->host;
|
||||
ide_drive_t *uninitialized_var(drive);
|
||||
ide_drive_t *drive;
|
||||
ide_handler_t *handler;
|
||||
unsigned long flags;
|
||||
ide_startstop_t startstop;
|
||||
irqreturn_t irq_ret = IRQ_NONE;
|
||||
int plug_device = 0;
|
||||
struct request *uninitialized_var(rq_in_flight);
|
||||
struct request *rq_in_flight;
|
||||
|
||||
if (host->host_flags & IDE_HFLAG_SERIALIZE) {
|
||||
if (hwif != host->cur_port)
|
||||
|
@ -131,7 +131,7 @@ static struct device_attribute *ide_port_attrs[] = {
|
||||
|
||||
int ide_sysfs_register_port(ide_hwif_t *hwif)
|
||||
{
|
||||
int i, uninitialized_var(rc);
|
||||
int i, rc;
|
||||
|
||||
for (i = 0; ide_port_attrs[i]; i++) {
|
||||
rc = device_create_file(hwif->portdev, ide_port_attrs[i]);
|
||||
|
@ -108,7 +108,7 @@ static void umc_set_speeds(u8 speeds[])
|
||||
static void umc_set_pio_mode(ide_hwif_t *hwif, ide_drive_t *drive)
|
||||
{
|
||||
ide_hwif_t *mate = hwif->mate;
|
||||
unsigned long uninitialized_var(flags);
|
||||
unsigned long flags;
|
||||
const u8 pio = drive->pio_mode - XFER_PIO_0;
|
||||
|
||||
printk("%s: setting umc8672 to PIO mode%d (speed %d)\n",
|
||||
|
@ -760,13 +760,13 @@ static int mxs_lradc_adc_probe(struct platform_device *pdev)
|
||||
|
||||
ret = mxs_lradc_adc_trigger_init(iio);
|
||||
if (ret)
|
||||
goto err_trig;
|
||||
return ret;
|
||||
|
||||
ret = iio_triggered_buffer_setup(iio, &iio_pollfunc_store_time,
|
||||
&mxs_lradc_adc_trigger_handler,
|
||||
&mxs_lradc_adc_buffer_ops);
|
||||
if (ret)
|
||||
return ret;
|
||||
goto err_trig;
|
||||
|
||||
adc->vref_mv = mxs_lradc_adc_vref_mv[lradc->soc];
|
||||
|
||||
@ -804,9 +804,9 @@ static int mxs_lradc_adc_probe(struct platform_device *pdev)
|
||||
|
||||
err_dev:
|
||||
mxs_lradc_adc_hw_stop(adc);
|
||||
mxs_lradc_adc_trigger_remove(iio);
|
||||
err_trig:
|
||||
iio_triggered_buffer_cleanup(iio);
|
||||
err_trig:
|
||||
mxs_lradc_adc_trigger_remove(iio);
|
||||
return ret;
|
||||
}
|
||||
|
||||
@ -817,8 +817,8 @@ static int mxs_lradc_adc_remove(struct platform_device *pdev)
|
||||
|
||||
iio_device_unregister(iio);
|
||||
mxs_lradc_adc_hw_stop(adc);
|
||||
mxs_lradc_adc_trigger_remove(iio);
|
||||
iio_triggered_buffer_cleanup(iio);
|
||||
mxs_lradc_adc_trigger_remove(iio);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -16,7 +16,7 @@ obj-$(CONFIG_AD5592R_BASE) += ad5592r-base.o
|
||||
obj-$(CONFIG_AD5592R) += ad5592r.o
|
||||
obj-$(CONFIG_AD5593R) += ad5593r.o
|
||||
obj-$(CONFIG_AD5755) += ad5755.o
|
||||
obj-$(CONFIG_AD5755) += ad5758.o
|
||||
obj-$(CONFIG_AD5758) += ad5758.o
|
||||
obj-$(CONFIG_AD5761) += ad5761.o
|
||||
obj-$(CONFIG_AD5764) += ad5764.o
|
||||
obj-$(CONFIG_AD5791) += ad5791.o
|
||||
|
@ -47,12 +47,18 @@ static int __maybe_unused mcp4725_suspend(struct device *dev)
|
||||
struct mcp4725_data *data = iio_priv(i2c_get_clientdata(
|
||||
to_i2c_client(dev)));
|
||||
u8 outbuf[2];
|
||||
int ret;
|
||||
|
||||
outbuf[0] = (data->powerdown_mode + 1) << 4;
|
||||
outbuf[1] = 0;
|
||||
data->powerdown = true;
|
||||
|
||||
return i2c_master_send(data->client, outbuf, 2);
|
||||
ret = i2c_master_send(data->client, outbuf, 2);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
else if (ret != 2)
|
||||
return -EIO;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int __maybe_unused mcp4725_resume(struct device *dev)
|
||||
@ -60,13 +66,19 @@ static int __maybe_unused mcp4725_resume(struct device *dev)
|
||||
struct mcp4725_data *data = iio_priv(i2c_get_clientdata(
|
||||
to_i2c_client(dev)));
|
||||
u8 outbuf[2];
|
||||
int ret;
|
||||
|
||||
/* restore previous DAC value */
|
||||
outbuf[0] = (data->dac_value >> 8) & 0xf;
|
||||
outbuf[1] = data->dac_value & 0xff;
|
||||
data->powerdown = false;
|
||||
|
||||
return i2c_master_send(data->client, outbuf, 2);
|
||||
ret = i2c_master_send(data->client, outbuf, 2);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
else if (ret != 2)
|
||||
return -EIO;
|
||||
return 0;
|
||||
}
|
||||
static SIMPLE_DEV_PM_OPS(mcp4725_pm_ops, mcp4725_suspend, mcp4725_resume);
|
||||
|
||||
|
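The mcp4725 hunks above follow the usual i2c_master_send() convention: the call returns a negative errno on bus errors or the number of bytes actually written, so a short write has to be turned into an error explicitly. Caller-side sketch (client and buf assumed from context):

	ret = i2c_master_send(client, buf, sizeof(buf));
	if (ret < 0)
		return ret;	/* bus error */
	if (ret != sizeof(buf))
		return -EIO;	/* short write */
	return 0;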
@ -8,6 +8,7 @@
|
||||
* TODO: Proximity
|
||||
*/
|
||||
#include <linux/bitops.h>
|
||||
#include <linux/bitfield.h>
|
||||
#include <linux/i2c.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/pm_runtime.h>
|
||||
@ -42,6 +43,7 @@
|
||||
#define VCNL4035_ALS_PERS_MASK GENMASK(3, 2)
|
||||
#define VCNL4035_INT_ALS_IF_H_MASK BIT(12)
|
||||
#define VCNL4035_INT_ALS_IF_L_MASK BIT(13)
|
||||
#define VCNL4035_DEV_ID_MASK GENMASK(7, 0)
|
||||
|
||||
/* Default values */
|
||||
#define VCNL4035_MODE_ALS_ENABLE BIT(0)
|
||||
@ -415,6 +417,7 @@ static int vcnl4035_init(struct vcnl4035_data *data)
|
||||
return ret;
|
||||
}
|
||||
|
||||
id = FIELD_GET(VCNL4035_DEV_ID_MASK, id);
|
||||
if (id != VCNL4035_DEV_ID_VAL) {
|
||||
dev_err(&data->client->dev, "Wrong id, got %x, expected %x\n",
|
||||
id, VCNL4035_DEV_ID_VAL);
|
||||
|
@ -1557,7 +1557,7 @@ static int ib_uverbs_open_qp(struct uverbs_attr_bundle *attrs)
|
||||
struct ib_uverbs_create_qp_resp resp;
|
||||
struct ib_uqp_object *obj;
|
||||
struct ib_xrcd *xrcd;
|
||||
struct ib_uobject *uninitialized_var(xrcd_uobj);
|
||||
struct ib_uobject *xrcd_uobj;
|
||||
struct ib_qp *qp;
|
||||
struct ib_qp_open_attr attr;
|
||||
int ret;
|
||||
@ -3378,7 +3378,7 @@ static int __uverbs_create_xsrq(struct uverbs_attr_bundle *attrs,
|
||||
struct ib_usrq_object *obj;
|
||||
struct ib_pd *pd;
|
||||
struct ib_srq *srq;
|
||||
struct ib_uobject *uninitialized_var(xrcd_uobj);
|
||||
struct ib_uobject *xrcd_uobj;
|
||||
struct ib_srq_init_attr attr;
|
||||
int ret;
|
||||
struct ib_device *ib_dev;
|
||||
|
@ -104,10 +104,19 @@ struct bnxt_re_sqp_entries {
|
||||
struct bnxt_re_qp *qp1_qp;
|
||||
};
|
||||
|
||||
#define BNXT_RE_MAX_GSI_SQP_ENTRIES 1024
|
||||
struct bnxt_re_gsi_context {
|
||||
struct bnxt_re_qp *gsi_qp;
|
||||
struct bnxt_re_qp *gsi_sqp;
|
||||
struct bnxt_re_ah *gsi_sah;
|
||||
struct bnxt_re_sqp_entries *sqp_tbl;
|
||||
};
|
||||
|
||||
#define BNXT_RE_MIN_MSIX 2
|
||||
#define BNXT_RE_MAX_MSIX 9
|
||||
#define BNXT_RE_AEQ_IDX 0
|
||||
#define BNXT_RE_NQ_IDX 1
|
||||
#define BNXT_RE_GEN_P5_MAX_VF 64
|
||||
|
||||
struct bnxt_re_dev {
|
||||
struct ib_device ibdev;
|
||||
@ -164,10 +173,7 @@ struct bnxt_re_dev {
|
||||
u16 cosq[2];
|
||||
|
||||
/* QP for for handling QP1 packets */
|
||||
u32 sqp_id;
|
||||
struct bnxt_re_qp *qp1_sqp;
|
||||
struct bnxt_re_ah *sqp_ah;
|
||||
struct bnxt_re_sqp_entries sqp_tbl[1024];
|
||||
struct bnxt_re_gsi_context gsi_ctx;
|
||||
atomic_t nq_alloc_cnt;
|
||||
u32 is_virtfn;
|
||||
u32 num_vfs;
|
||||
|
@ -330,7 +330,7 @@ int bnxt_re_del_gid(const struct ib_gid_attr *attr, void **context)
|
||||
*/
|
||||
if (ctx->idx == 0 &&
|
||||
rdma_link_local_addr((struct in6_addr *)gid_to_del) &&
|
||||
ctx->refcnt == 1 && rdev->qp1_sqp) {
|
||||
ctx->refcnt == 1 && rdev->gsi_ctx.gsi_sqp) {
|
||||
dev_dbg(rdev_to_dev(rdev),
|
||||
"Trying to delete GID0 while QP1 is alive\n");
|
||||
return -EFAULT;
|
||||
@ -760,6 +760,49 @@ void bnxt_re_unlock_cqs(struct bnxt_re_qp *qp,
|
||||
spin_unlock_irqrestore(&qp->scq->cq_lock, flags);
|
||||
}
|
||||
|
||||
static int bnxt_re_destroy_gsi_sqp(struct bnxt_re_qp *qp)
|
||||
{
|
||||
struct bnxt_re_qp *gsi_sqp;
|
||||
struct bnxt_re_ah *gsi_sah;
|
||||
struct bnxt_re_dev *rdev;
|
||||
int rc = 0;
|
||||
|
||||
rdev = qp->rdev;
|
||||
gsi_sqp = rdev->gsi_ctx.gsi_sqp;
|
||||
gsi_sah = rdev->gsi_ctx.gsi_sah;
|
||||
|
||||
dev_dbg(rdev_to_dev(rdev), "Destroy the shadow AH\n");
|
||||
bnxt_qplib_destroy_ah(&rdev->qplib_res,
|
||||
&gsi_sah->qplib_ah,
|
||||
true);
|
||||
bnxt_qplib_clean_qp(&qp->qplib_qp);
|
||||
|
||||
dev_dbg(rdev_to_dev(rdev), "Destroy the shadow QP\n");
|
||||
rc = bnxt_qplib_destroy_qp(&rdev->qplib_res, &gsi_sqp->qplib_qp);
|
||||
if (rc) {
|
||||
dev_err(rdev_to_dev(rdev), "Destroy Shadow QP failed");
|
||||
goto fail;
|
||||
}
|
||||
bnxt_qplib_free_qp_res(&rdev->qplib_res, &gsi_sqp->qplib_qp);
|
||||
|
||||
/* remove from active qp list */
|
||||
mutex_lock(&rdev->qp_lock);
|
||||
list_del(&gsi_sqp->list);
|
||||
mutex_unlock(&rdev->qp_lock);
|
||||
atomic_dec(&rdev->qp_count);
|
||||
|
||||
kfree(rdev->gsi_ctx.sqp_tbl);
|
||||
kfree(gsi_sah);
|
||||
kfree(gsi_sqp);
|
||||
rdev->gsi_ctx.gsi_sqp = NULL;
|
||||
rdev->gsi_ctx.gsi_sah = NULL;
|
||||
rdev->gsi_ctx.sqp_tbl = NULL;
|
||||
|
||||
return 0;
|
||||
fail:
|
||||
return rc;
|
||||
}
|
||||
|
||||
/* Queue Pairs */
|
||||
int bnxt_re_destroy_qp(struct ib_qp *ib_qp, struct ib_udata *udata)
|
||||
{
|
||||
@ -769,6 +812,7 @@ int bnxt_re_destroy_qp(struct ib_qp *ib_qp, struct ib_udata *udata)
|
||||
int rc;
|
||||
|
||||
bnxt_qplib_flush_cqn_wq(&qp->qplib_qp);
|
||||
|
||||
rc = bnxt_qplib_destroy_qp(&rdev->qplib_res, &qp->qplib_qp);
|
||||
if (rc) {
|
||||
dev_err(rdev_to_dev(rdev), "Failed to destroy HW QP");
|
||||
@ -783,40 +827,24 @@ int bnxt_re_destroy_qp(struct ib_qp *ib_qp, struct ib_udata *udata)
|
||||
|
||||
bnxt_qplib_free_qp_res(&rdev->qplib_res, &qp->qplib_qp);
|
||||
|
||||
if (ib_qp->qp_type == IB_QPT_GSI && rdev->qp1_sqp) {
|
||||
bnxt_qplib_destroy_ah(&rdev->qplib_res, &rdev->sqp_ah->qplib_ah,
|
||||
false);
|
||||
|
||||
bnxt_qplib_clean_qp(&qp->qplib_qp);
|
||||
rc = bnxt_qplib_destroy_qp(&rdev->qplib_res,
|
||||
&rdev->qp1_sqp->qplib_qp);
|
||||
if (rc) {
|
||||
dev_err(rdev_to_dev(rdev),
|
||||
"Failed to destroy Shadow QP");
|
||||
return rc;
|
||||
}
|
||||
bnxt_qplib_free_qp_res(&rdev->qplib_res,
|
||||
&rdev->qp1_sqp->qplib_qp);
|
||||
mutex_lock(&rdev->qp_lock);
|
||||
list_del(&rdev->qp1_sqp->list);
|
||||
atomic_dec(&rdev->qp_count);
|
||||
mutex_unlock(&rdev->qp_lock);
|
||||
|
||||
kfree(rdev->sqp_ah);
|
||||
kfree(rdev->qp1_sqp);
|
||||
rdev->qp1_sqp = NULL;
|
||||
rdev->sqp_ah = NULL;
|
||||
if (ib_qp->qp_type == IB_QPT_GSI && rdev->gsi_ctx.gsi_sqp) {
|
||||
rc = bnxt_re_destroy_gsi_sqp(qp);
|
||||
if (rc)
|
||||
goto sh_fail;
|
||||
}
|
||||
|
||||
mutex_lock(&rdev->qp_lock);
|
||||
list_del(&qp->list);
|
||||
mutex_unlock(&rdev->qp_lock);
|
||||
atomic_dec(&rdev->qp_count);
|
||||
|
||||
ib_umem_release(qp->rumem);
|
||||
ib_umem_release(qp->sumem);
|
||||
|
||||
mutex_lock(&rdev->qp_lock);
|
||||
list_del(&qp->list);
|
||||
atomic_dec(&rdev->qp_count);
|
||||
mutex_unlock(&rdev->qp_lock);
|
||||
kfree(qp);
|
||||
return 0;
|
||||
sh_fail:
|
||||
return rc;
|
||||
}
|
||||
|
||||
static u8 __from_ib_qp_type(enum ib_qp_type type)
|
||||
@ -984,8 +1012,6 @@ static struct bnxt_re_qp *bnxt_re_create_shadow_qp
|
||||
if (rc)
|
||||
goto fail;
|
||||
|
||||
rdev->sqp_id = qp->qplib_qp.id;
|
||||
|
||||
spin_lock_init(&qp->sq_lock);
|
||||
INIT_LIST_HEAD(&qp->list);
|
||||
mutex_lock(&rdev->qp_lock);
|
||||
@ -998,6 +1024,313 @@ static struct bnxt_re_qp *bnxt_re_create_shadow_qp
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static int bnxt_re_init_rq_attr(struct bnxt_re_qp *qp,
|
||||
struct ib_qp_init_attr *init_attr)
|
||||
{
|
||||
struct bnxt_qplib_dev_attr *dev_attr;
|
||||
struct bnxt_qplib_qp *qplqp;
|
||||
struct bnxt_re_dev *rdev;
|
||||
int entries;
|
||||
|
||||
rdev = qp->rdev;
|
||||
qplqp = &qp->qplib_qp;
|
||||
dev_attr = &rdev->dev_attr;
|
||||
|
||||
if (init_attr->srq) {
|
||||
struct bnxt_re_srq *srq;
|
||||
|
||||
srq = container_of(init_attr->srq, struct bnxt_re_srq, ib_srq);
|
||||
if (!srq) {
|
||||
dev_err(rdev_to_dev(rdev), "SRQ not found");
|
||||
return -EINVAL;
|
||||
}
|
||||
qplqp->srq = &srq->qplib_srq;
|
||||
qplqp->rq.max_wqe = 0;
|
||||
} else {
|
||||
/* Allocate 1 more than what's provided so posting max doesn't
|
||||
* mean empty.
|
||||
*/
|
||||
entries = roundup_pow_of_two(init_attr->cap.max_recv_wr + 1);
|
||||
qplqp->rq.max_wqe = min_t(u32, entries,
|
||||
dev_attr->max_qp_wqes + 1);
|
||||
|
||||
qplqp->rq.q_full_delta = qplqp->rq.max_wqe -
|
||||
init_attr->cap.max_recv_wr;
|
||||
qplqp->rq.max_sge = init_attr->cap.max_recv_sge;
|
||||
if (qplqp->rq.max_sge > dev_attr->max_qp_sges)
|
||||
qplqp->rq.max_sge = dev_attr->max_qp_sges;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void bnxt_re_adjust_gsi_rq_attr(struct bnxt_re_qp *qp)
|
||||
{
|
||||
struct bnxt_qplib_dev_attr *dev_attr;
|
||||
struct bnxt_qplib_qp *qplqp;
|
||||
struct bnxt_re_dev *rdev;
|
||||
|
||||
rdev = qp->rdev;
|
||||
qplqp = &qp->qplib_qp;
|
||||
dev_attr = &rdev->dev_attr;
|
||||
|
||||
qplqp->rq.max_sge = dev_attr->max_qp_sges;
|
||||
if (qplqp->rq.max_sge > dev_attr->max_qp_sges)
|
||||
qplqp->rq.max_sge = dev_attr->max_qp_sges;
|
||||
}
|
||||
|
||||
static void bnxt_re_init_sq_attr(struct bnxt_re_qp *qp,
|
||||
struct ib_qp_init_attr *init_attr,
|
||||
struct ib_udata *udata)
|
||||
{
|
||||
struct bnxt_qplib_dev_attr *dev_attr;
|
||||
struct bnxt_qplib_qp *qplqp;
|
||||
struct bnxt_re_dev *rdev;
|
||||
int entries;
|
||||
|
||||
rdev = qp->rdev;
|
||||
qplqp = &qp->qplib_qp;
|
||||
dev_attr = &rdev->dev_attr;
|
||||
|
||||
qplqp->sq.max_sge = init_attr->cap.max_send_sge;
|
||||
if (qplqp->sq.max_sge > dev_attr->max_qp_sges)
|
||||
qplqp->sq.max_sge = dev_attr->max_qp_sges;
|
||||
/*
|
||||
* Change the SQ depth if user has requested minimum using
|
||||
* configfs. Only supported for kernel consumers
|
||||
*/
|
||||
entries = init_attr->cap.max_send_wr;
|
||||
/* Allocate 128 + 1 more than what's provided */
|
||||
entries = roundup_pow_of_two(entries + BNXT_QPLIB_RESERVED_QP_WRS + 1);
|
||||
qplqp->sq.max_wqe = min_t(u32, entries, dev_attr->max_qp_wqes +
|
||||
BNXT_QPLIB_RESERVED_QP_WRS + 1);
|
||||
qplqp->sq.q_full_delta = BNXT_QPLIB_RESERVED_QP_WRS + 1;
|
||||
/*
|
||||
* Reserving one slot for Phantom WQE. Application can
|
||||
* post one extra entry in this case. But allowing this to avoid
|
||||
* unexpected Queue full condition
|
||||
*/
|
||||
qplqp->sq.q_full_delta -= 1;
|
||||
}
|
||||
|
||||
static void bnxt_re_adjust_gsi_sq_attr(struct bnxt_re_qp *qp,
|
||||
struct ib_qp_init_attr *init_attr)
|
||||
{
|
||||
struct bnxt_qplib_dev_attr *dev_attr;
|
||||
struct bnxt_qplib_qp *qplqp;
|
||||
struct bnxt_re_dev *rdev;
|
||||
int entries;
|
||||
|
||||
rdev = qp->rdev;
|
||||
qplqp = &qp->qplib_qp;
|
||||
dev_attr = &rdev->dev_attr;
|
||||
|
||||
entries = roundup_pow_of_two(init_attr->cap.max_send_wr + 1);
|
||||
qplqp->sq.max_wqe = min_t(u32, entries, dev_attr->max_qp_wqes + 1);
|
||||
qplqp->sq.q_full_delta = qplqp->sq.max_wqe -
|
||||
init_attr->cap.max_send_wr;
|
||||
qplqp->sq.max_sge++; /* Need one extra sge to put UD header */
|
||||
if (qplqp->sq.max_sge > dev_attr->max_qp_sges)
|
||||
qplqp->sq.max_sge = dev_attr->max_qp_sges;
|
||||
}
|
||||
|
||||
static int bnxt_re_init_qp_type(struct bnxt_re_dev *rdev,
|
||||
struct ib_qp_init_attr *init_attr)
|
||||
{
|
||||
struct bnxt_qplib_chip_ctx *chip_ctx;
|
||||
int qptype;
|
||||
|
||||
chip_ctx = &rdev->chip_ctx;
|
||||
|
||||
qptype = __from_ib_qp_type(init_attr->qp_type);
|
||||
if (qptype == IB_QPT_MAX) {
|
||||
dev_err(rdev_to_dev(rdev), "QP type 0x%x not supported",
|
||||
qptype);
|
||||
qptype = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (bnxt_qplib_is_chip_gen_p5(chip_ctx) &&
|
||||
init_attr->qp_type == IB_QPT_GSI)
|
||||
qptype = CMDQ_CREATE_QP_TYPE_GSI;
|
||||
out:
|
||||
return qptype;
|
||||
}
|
||||
|
||||
static int bnxt_re_init_qp_attr(struct bnxt_re_qp *qp, struct bnxt_re_pd *pd,
|
||||
struct ib_qp_init_attr *init_attr,
|
||||
struct ib_udata *udata)
|
||||
{
|
||||
struct bnxt_qplib_dev_attr *dev_attr;
|
||||
struct bnxt_qplib_qp *qplqp;
|
||||
struct bnxt_re_dev *rdev;
|
||||
struct bnxt_re_cq *cq;
|
||||
int rc = 0, qptype;
|
||||
|
||||
rdev = qp->rdev;
|
||||
qplqp = &qp->qplib_qp;
|
||||
dev_attr = &rdev->dev_attr;
|
||||
|
||||
/* Setup misc params */
|
||||
ether_addr_copy(qplqp->smac, rdev->netdev->dev_addr);
|
||||
qplqp->pd = &pd->qplib_pd;
|
||||
qplqp->qp_handle = (u64)qplqp;
|
||||
qplqp->max_inline_data = init_attr->cap.max_inline_data;
|
||||
qplqp->sig_type = ((init_attr->sq_sig_type == IB_SIGNAL_ALL_WR) ?
|
||||
true : false);
|
||||
qptype = bnxt_re_init_qp_type(rdev, init_attr);
|
||||
if (qptype < 0) {
|
||||
rc = qptype;
|
||||
goto out;
|
||||
}
|
||||
qplqp->type = (u8)qptype;
|
||||
|
||||
if (init_attr->qp_type == IB_QPT_RC) {
|
||||
qplqp->max_rd_atomic = dev_attr->max_qp_rd_atom;
|
||||
qplqp->max_dest_rd_atomic = dev_attr->max_qp_init_rd_atom;
|
||||
}
|
||||
qplqp->mtu = ib_mtu_enum_to_int(iboe_get_mtu(rdev->netdev->mtu));
|
||||
qplqp->dpi = &rdev->dpi_privileged; /* Doorbell page */
|
||||
if (init_attr->create_flags)
|
||||
dev_dbg(rdev_to_dev(rdev),
|
||||
"QP create flags 0x%x not supported",
|
||||
init_attr->create_flags);
|
||||
|
||||
/* Setup CQs */
|
||||
if (init_attr->send_cq) {
|
||||
cq = container_of(init_attr->send_cq, struct bnxt_re_cq, ib_cq);
|
||||
if (!cq) {
|
||||
dev_err(rdev_to_dev(rdev), "Send CQ not found");
|
||||
rc = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
qplqp->scq = &cq->qplib_cq;
|
||||
qp->scq = cq;
|
||||
}
|
||||
|
||||
if (init_attr->recv_cq) {
|
||||
cq = container_of(init_attr->recv_cq, struct bnxt_re_cq, ib_cq);
|
||||
if (!cq) {
|
||||
dev_err(rdev_to_dev(rdev), "Receive CQ not found");
|
||||
rc = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
qplqp->rcq = &cq->qplib_cq;
|
||||
qp->rcq = cq;
|
||||
}
|
||||
|
||||
/* Setup RQ/SRQ */
|
||||
rc = bnxt_re_init_rq_attr(qp, init_attr);
|
||||
if (rc)
|
||||
goto out;
|
||||
if (init_attr->qp_type == IB_QPT_GSI)
|
||||
bnxt_re_adjust_gsi_rq_attr(qp);
|
||||
|
||||
/* Setup SQ */
|
||||
bnxt_re_init_sq_attr(qp, init_attr, udata);
|
||||
if (init_attr->qp_type == IB_QPT_GSI)
|
||||
bnxt_re_adjust_gsi_sq_attr(qp, init_attr);
|
||||
|
||||
if (udata) /* This will update DPI and qp_handle */
|
||||
rc = bnxt_re_init_user_qp(rdev, pd, qp, udata);
|
||||
out:
|
||||
return rc;
|
||||
}
|
||||
|
||||
static int bnxt_re_create_shadow_gsi(struct bnxt_re_qp *qp,
|
||||
struct bnxt_re_pd *pd)
|
||||
{
|
||||
struct bnxt_re_sqp_entries *sqp_tbl = NULL;
|
||||
struct bnxt_re_dev *rdev;
|
||||
struct bnxt_re_qp *sqp;
|
||||
struct bnxt_re_ah *sah;
|
||||
int rc = 0;
|
||||
|
||||
rdev = qp->rdev;
|
||||
/* Create a shadow QP to handle the QP1 traffic */
|
||||
sqp_tbl = kzalloc(sizeof(*sqp_tbl) * BNXT_RE_MAX_GSI_SQP_ENTRIES,
|
||||
GFP_KERNEL);
|
||||
if (!sqp_tbl)
|
||||
return -ENOMEM;
|
||||
rdev->gsi_ctx.sqp_tbl = sqp_tbl;
|
||||
|
||||
sqp = bnxt_re_create_shadow_qp(pd, &rdev->qplib_res, &qp->qplib_qp);
|
||||
if (!sqp) {
|
||||
rc = -ENODEV;
|
||||
dev_err(rdev_to_dev(rdev),
|
||||
"Failed to create Shadow QP for QP1");
|
||||
goto out;
|
||||
}
|
||||
rdev->gsi_ctx.gsi_sqp = sqp;
|
||||
|
||||
sqp->rcq = qp->rcq;
|
||||
sqp->scq = qp->scq;
|
||||
sah = bnxt_re_create_shadow_qp_ah(pd, &rdev->qplib_res,
|
||||
&qp->qplib_qp);
|
||||
if (!sah) {
|
||||
bnxt_qplib_destroy_qp(&rdev->qplib_res,
|
||||
&sqp->qplib_qp);
|
||||
rc = -ENODEV;
|
||||
dev_err(rdev_to_dev(rdev),
|
||||
"Failed to create AH entry for ShadowQP");
|
||||
goto out;
|
||||
}
|
||||
rdev->gsi_ctx.gsi_sah = sah;
|
||||
|
||||
return 0;
|
||||
out:
|
||||
kfree(sqp_tbl);
|
||||
return rc;
|
||||
}
|
||||
|
||||
static int bnxt_re_create_gsi_qp(struct bnxt_re_qp *qp, struct bnxt_re_pd *pd,
|
||||
struct ib_qp_init_attr *init_attr)
|
||||
{
|
||||
struct bnxt_re_dev *rdev;
|
||||
struct bnxt_qplib_qp *qplqp;
|
||||
int rc = 0;
|
||||
|
||||
rdev = qp->rdev;
|
||||
qplqp = &qp->qplib_qp;
|
||||
|
||||
qplqp->rq_hdr_buf_size = BNXT_QPLIB_MAX_QP1_RQ_HDR_SIZE_V2;
|
||||
qplqp->sq_hdr_buf_size = BNXT_QPLIB_MAX_QP1_SQ_HDR_SIZE_V2;
|
||||
|
||||
rc = bnxt_qplib_create_qp1(&rdev->qplib_res, qplqp);
|
||||
if (rc) {
|
||||
dev_err(rdev_to_dev(rdev), "create HW QP1 failed!");
|
||||
goto out;
|
||||
}
|
||||
|
||||
rc = bnxt_re_create_shadow_gsi(qp, pd);
|
||||
out:
|
||||
return rc;
|
||||
}
|
||||
|
||||
static bool bnxt_re_test_qp_limits(struct bnxt_re_dev *rdev,
|
||||
struct ib_qp_init_attr *init_attr,
|
||||
struct bnxt_qplib_dev_attr *dev_attr)
|
||||
{
|
||||
bool rc = true;
|
||||
|
||||
if (init_attr->cap.max_send_wr > dev_attr->max_qp_wqes ||
|
||||
init_attr->cap.max_recv_wr > dev_attr->max_qp_wqes ||
|
||||
init_attr->cap.max_send_sge > dev_attr->max_qp_sges ||
|
||||
init_attr->cap.max_recv_sge > dev_attr->max_qp_sges ||
|
||||
init_attr->cap.max_inline_data > dev_attr->max_inline_data) {
|
||||
dev_err(rdev_to_dev(rdev),
|
||||
"Create QP failed - max exceeded! 0x%x/0x%x 0x%x/0x%x 0x%x/0x%x 0x%x/0x%x 0x%x/0x%x",
|
||||
init_attr->cap.max_send_wr, dev_attr->max_qp_wqes,
|
||||
init_attr->cap.max_recv_wr, dev_attr->max_qp_wqes,
|
||||
init_attr->cap.max_send_sge, dev_attr->max_qp_sges,
|
||||
init_attr->cap.max_recv_sge, dev_attr->max_qp_sges,
|
||||
init_attr->cap.max_inline_data,
|
||||
dev_attr->max_inline_data);
|
||||
rc = false;
|
||||
}
|
||||
return rc;
|
||||
}

struct ib_qp *bnxt_re_create_qp(struct ib_pd *ib_pd,
struct ib_qp_init_attr *qp_init_attr,
struct ib_udata *udata)
@@ -1006,197 +1339,60 @@ struct ib_qp *bnxt_re_create_qp(struct ib_pd *ib_pd,
struct bnxt_re_dev *rdev = pd->rdev;
struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
struct bnxt_re_qp *qp;
struct bnxt_re_cq *cq;
struct bnxt_re_srq *srq;
int rc, entries;
int rc;

if ((qp_init_attr->cap.max_send_wr > dev_attr->max_qp_wqes) ||
(qp_init_attr->cap.max_recv_wr > dev_attr->max_qp_wqes) ||
(qp_init_attr->cap.max_send_sge > dev_attr->max_qp_sges) ||
(qp_init_attr->cap.max_recv_sge > dev_attr->max_qp_sges) ||
(qp_init_attr->cap.max_inline_data > dev_attr->max_inline_data))
return ERR_PTR(-EINVAL);
rc = bnxt_re_test_qp_limits(rdev, qp_init_attr, dev_attr);
if (!rc) {
rc = -EINVAL;
goto exit;
}

qp = kzalloc(sizeof(*qp), GFP_KERNEL);
if (!qp)
return ERR_PTR(-ENOMEM);

if (!qp) {
rc = -ENOMEM;
goto exit;
}
qp->rdev = rdev;
ether_addr_copy(qp->qplib_qp.smac, rdev->netdev->dev_addr);
qp->qplib_qp.pd = &pd->qplib_pd;
qp->qplib_qp.qp_handle = (u64)(unsigned long)(&qp->qplib_qp);
qp->qplib_qp.type = __from_ib_qp_type(qp_init_attr->qp_type);

if (qp_init_attr->qp_type == IB_QPT_GSI &&
bnxt_qplib_is_chip_gen_p5(&rdev->chip_ctx))
qp->qplib_qp.type = CMDQ_CREATE_QP_TYPE_GSI;
if (qp->qplib_qp.type == IB_QPT_MAX) {
dev_err(rdev_to_dev(rdev), "QP type 0x%x not supported",
qp->qplib_qp.type);
rc = -EINVAL;
rc = bnxt_re_init_qp_attr(qp, pd, qp_init_attr, udata);
if (rc)
goto fail;
}

qp->qplib_qp.max_inline_data = qp_init_attr->cap.max_inline_data;
qp->qplib_qp.sig_type = ((qp_init_attr->sq_sig_type ==
IB_SIGNAL_ALL_WR) ? true : false);

qp->qplib_qp.sq.max_sge = qp_init_attr->cap.max_send_sge;
if (qp->qplib_qp.sq.max_sge > dev_attr->max_qp_sges)
qp->qplib_qp.sq.max_sge = dev_attr->max_qp_sges;

if (qp_init_attr->send_cq) {
cq = container_of(qp_init_attr->send_cq, struct bnxt_re_cq,
ib_cq);
if (!cq) {
dev_err(rdev_to_dev(rdev), "Send CQ not found");
rc = -EINVAL;
goto fail;
}
qp->qplib_qp.scq = &cq->qplib_cq;
qp->scq = cq;
}

if (qp_init_attr->recv_cq) {
cq = container_of(qp_init_attr->recv_cq, struct bnxt_re_cq,
ib_cq);
if (!cq) {
dev_err(rdev_to_dev(rdev), "Receive CQ not found");
rc = -EINVAL;
goto fail;
}
qp->qplib_qp.rcq = &cq->qplib_cq;
qp->rcq = cq;
}

if (qp_init_attr->srq) {
srq = container_of(qp_init_attr->srq, struct bnxt_re_srq,
ib_srq);
if (!srq) {
dev_err(rdev_to_dev(rdev), "SRQ not found");
rc = -EINVAL;
goto fail;
}
qp->qplib_qp.srq = &srq->qplib_srq;
qp->qplib_qp.rq.max_wqe = 0;
} else {
/* Allocate 1 more than what's provided so posting max doesn't
* mean empty
*/
entries = roundup_pow_of_two(qp_init_attr->cap.max_recv_wr + 1);
qp->qplib_qp.rq.max_wqe = min_t(u32, entries,
dev_attr->max_qp_wqes + 1);

qp->qplib_qp.rq.q_full_delta = qp->qplib_qp.rq.max_wqe -
qp_init_attr->cap.max_recv_wr;

qp->qplib_qp.rq.max_sge = qp_init_attr->cap.max_recv_sge;
if (qp->qplib_qp.rq.max_sge > dev_attr->max_qp_sges)
qp->qplib_qp.rq.max_sge = dev_attr->max_qp_sges;
}

qp->qplib_qp.mtu = ib_mtu_enum_to_int(iboe_get_mtu(rdev->netdev->mtu));

if (qp_init_attr->qp_type == IB_QPT_GSI &&
!(bnxt_qplib_is_chip_gen_p5(&rdev->chip_ctx))) {
/* Allocate 1 more than what's provided */
entries = roundup_pow_of_two(qp_init_attr->cap.max_send_wr + 1);
qp->qplib_qp.sq.max_wqe = min_t(u32, entries,
dev_attr->max_qp_wqes + 1);
qp->qplib_qp.sq.q_full_delta = qp->qplib_qp.sq.max_wqe -
qp_init_attr->cap.max_send_wr;
qp->qplib_qp.rq.max_sge = dev_attr->max_qp_sges;
if (qp->qplib_qp.rq.max_sge > dev_attr->max_qp_sges)
qp->qplib_qp.rq.max_sge = dev_attr->max_qp_sges;
qp->qplib_qp.sq.max_sge++;
if (qp->qplib_qp.sq.max_sge > dev_attr->max_qp_sges)
qp->qplib_qp.sq.max_sge = dev_attr->max_qp_sges;

qp->qplib_qp.rq_hdr_buf_size =
BNXT_QPLIB_MAX_QP1_RQ_HDR_SIZE_V2;

qp->qplib_qp.sq_hdr_buf_size =
BNXT_QPLIB_MAX_QP1_SQ_HDR_SIZE_V2;
qp->qplib_qp.dpi = &rdev->dpi_privileged;
rc = bnxt_qplib_create_qp1(&rdev->qplib_res, &qp->qplib_qp);
if (rc) {
dev_err(rdev_to_dev(rdev), "Failed to create HW QP1");
rc = bnxt_re_create_gsi_qp(qp, pd, qp_init_attr);
if (rc == -ENODEV)
goto qp_destroy;
if (rc)
goto fail;
}
/* Create a shadow QP to handle the QP1 traffic */
rdev->qp1_sqp = bnxt_re_create_shadow_qp(pd, &rdev->qplib_res,
&qp->qplib_qp);
if (!rdev->qp1_sqp) {
rc = -EINVAL;
dev_err(rdev_to_dev(rdev),
"Failed to create Shadow QP for QP1");
goto qp_destroy;
}
rdev->sqp_ah = bnxt_re_create_shadow_qp_ah(pd, &rdev->qplib_res,
&qp->qplib_qp);
if (!rdev->sqp_ah) {
bnxt_qplib_destroy_qp(&rdev->qplib_res,
&rdev->qp1_sqp->qplib_qp);
rc = -EINVAL;
dev_err(rdev_to_dev(rdev),
"Failed to create AH entry for ShadowQP");
goto qp_destroy;
}

} else {
/* Allocate 128 + 1 more than what's provided */
entries = roundup_pow_of_two(qp_init_attr->cap.max_send_wr +
BNXT_QPLIB_RESERVED_QP_WRS + 1);
qp->qplib_qp.sq.max_wqe = min_t(u32, entries,
dev_attr->max_qp_wqes +
BNXT_QPLIB_RESERVED_QP_WRS + 1);
qp->qplib_qp.sq.q_full_delta = BNXT_QPLIB_RESERVED_QP_WRS + 1;

/*
* Reserving one slot for Phantom WQE. Application can
* post one extra entry in this case. But allowing this to avoid
* unexpected Queue full condition
*/

qp->qplib_qp.sq.q_full_delta -= 1;

qp->qplib_qp.max_rd_atomic = dev_attr->max_qp_rd_atom;
qp->qplib_qp.max_dest_rd_atomic = dev_attr->max_qp_init_rd_atom;
if (udata) {
rc = bnxt_re_init_user_qp(rdev, pd, qp, udata);
if (rc)
goto fail;
} else {
qp->qplib_qp.dpi = &rdev->dpi_privileged;
}

rc = bnxt_qplib_create_qp(&rdev->qplib_res, &qp->qplib_qp);
if (rc) {
dev_err(rdev_to_dev(rdev), "Failed to create HW QP");
goto free_umem;
}
if (udata) {
struct bnxt_re_qp_resp resp;

resp.qpid = qp->qplib_qp.id;
resp.rsvd = 0;
rc = ib_copy_to_udata(udata, &resp, sizeof(resp));
if (rc) {
dev_err(rdev_to_dev(rdev), "Failed to copy QP udata");
goto qp_destroy;
}
}
}

qp->ib_qp.qp_num = qp->qplib_qp.id;
if (qp_init_attr->qp_type == IB_QPT_GSI)
rdev->gsi_ctx.gsi_qp = qp;
spin_lock_init(&qp->sq_lock);
spin_lock_init(&qp->rq_lock);

if (udata) {
struct bnxt_re_qp_resp resp;

resp.qpid = qp->ib_qp.qp_num;
resp.rsvd = 0;
rc = ib_copy_to_udata(udata, &resp, sizeof(resp));
if (rc) {
dev_err(rdev_to_dev(rdev), "Failed to copy QP udata");
goto qp_destroy;
}
}
INIT_LIST_HEAD(&qp->list);
mutex_lock(&rdev->qp_lock);
list_add_tail(&qp->list, &rdev->qp_list);
atomic_inc(&rdev->qp_count);
mutex_unlock(&rdev->qp_lock);
atomic_inc(&rdev->qp_count);

return &qp->ib_qp;
qp_destroy:
@@ -1206,6 +1402,7 @@ struct ib_qp *bnxt_re_create_qp(struct ib_pd *ib_pd,
ib_umem_release(qp->sumem);
fail:
kfree(qp);
exit:
return ERR_PTR(rc);
}

||||
@@ -1504,7 +1701,7 @@ static int bnxt_re_modify_shadow_qp(struct bnxt_re_dev *rdev,
|
||||
struct bnxt_re_qp *qp1_qp,
|
||||
int qp_attr_mask)
|
||||
{
|
||||
struct bnxt_re_qp *qp = rdev->qp1_sqp;
|
||||
struct bnxt_re_qp *qp = rdev->gsi_ctx.gsi_sqp;
|
||||
int rc = 0;
|
||||
|
||||
if (qp_attr_mask & IB_QP_STATE) {
|
||||
@ -1768,7 +1965,7 @@ int bnxt_re_modify_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
|
||||
dev_err(rdev_to_dev(rdev), "Failed to modify HW QP");
|
||||
return rc;
|
||||
}
|
||||
if (ib_qp->qp_type == IB_QPT_GSI && rdev->qp1_sqp)
|
||||
if (ib_qp->qp_type == IB_QPT_GSI && rdev->gsi_ctx.gsi_sqp)
|
||||
rc = bnxt_re_modify_shadow_qp(rdev, qp, qp_attr_mask);
|
||||
return rc;
|
||||
}
|
||||
@ -2013,9 +2210,12 @@ static int bnxt_re_build_qp1_shadow_qp_recv(struct bnxt_re_qp *qp,
|
||||
struct bnxt_qplib_swqe *wqe,
|
||||
int payload_size)
|
||||
{
|
||||
struct bnxt_qplib_sge ref, sge;
|
||||
u32 rq_prod_index;
|
||||
struct bnxt_re_sqp_entries *sqp_entry;
|
||||
struct bnxt_qplib_sge ref, sge;
|
||||
struct bnxt_re_dev *rdev;
|
||||
u32 rq_prod_index;
|
||||
|
||||
rdev = qp->rdev;
|
||||
|
||||
rq_prod_index = bnxt_qplib_get_rq_prod_index(&qp->qplib_qp);
|
||||
|
||||
@ -2030,7 +2230,7 @@ static int bnxt_re_build_qp1_shadow_qp_recv(struct bnxt_re_qp *qp,
|
||||
ref.lkey = wqe->sg_list[0].lkey;
|
||||
ref.size = wqe->sg_list[0].size;
|
||||
|
||||
sqp_entry = &qp->rdev->sqp_tbl[rq_prod_index];
|
||||
sqp_entry = &rdev->gsi_ctx.sqp_tbl[rq_prod_index];
|
||||
|
||||
/* SGE 1 */
|
||||
wqe->sg_list[0].addr = sge.addr;
|
||||
@ -2850,12 +3050,13 @@ static bool bnxt_re_is_loopback_packet(struct bnxt_re_dev *rdev,
|
||||
return rc;
|
||||
}
|
||||
|
||||
static int bnxt_re_process_raw_qp_pkt_rx(struct bnxt_re_qp *qp1_qp,
|
||||
static int bnxt_re_process_raw_qp_pkt_rx(struct bnxt_re_qp *gsi_qp,
|
||||
struct bnxt_qplib_cqe *cqe)
|
||||
{
|
||||
struct bnxt_re_dev *rdev = qp1_qp->rdev;
|
||||
struct bnxt_re_dev *rdev = gsi_qp->rdev;
|
||||
struct bnxt_re_sqp_entries *sqp_entry = NULL;
|
||||
struct bnxt_re_qp *qp = rdev->qp1_sqp;
|
||||
struct bnxt_re_qp *gsi_sqp = rdev->gsi_ctx.gsi_sqp;
|
||||
struct bnxt_re_ah *gsi_sah;
|
||||
struct ib_send_wr *swr;
|
||||
struct ib_ud_wr udwr;
|
||||
struct ib_recv_wr rwr;
|
||||
@ -2878,19 +3079,19 @@ static int bnxt_re_process_raw_qp_pkt_rx(struct bnxt_re_qp *qp1_qp,
|
||||
swr = &udwr.wr;
|
||||
tbl_idx = cqe->wr_id;
|
||||
|
||||
rq_hdr_buf = qp1_qp->qplib_qp.rq_hdr_buf +
|
||||
(tbl_idx * qp1_qp->qplib_qp.rq_hdr_buf_size);
|
||||
rq_hdr_buf_map = bnxt_qplib_get_qp_buf_from_index(&qp1_qp->qplib_qp,
|
||||
rq_hdr_buf = gsi_qp->qplib_qp.rq_hdr_buf +
|
||||
(tbl_idx * gsi_qp->qplib_qp.rq_hdr_buf_size);
|
||||
rq_hdr_buf_map = bnxt_qplib_get_qp_buf_from_index(&gsi_qp->qplib_qp,
|
||||
tbl_idx);
|
||||
|
||||
/* Shadow QP header buffer */
|
||||
shrq_hdr_buf_map = bnxt_qplib_get_qp_buf_from_index(&qp->qplib_qp,
|
||||
shrq_hdr_buf_map = bnxt_qplib_get_qp_buf_from_index(&gsi_qp->qplib_qp,
|
||||
tbl_idx);
|
||||
sqp_entry = &rdev->sqp_tbl[tbl_idx];
|
||||
sqp_entry = &rdev->gsi_ctx.sqp_tbl[tbl_idx];
|
||||
|
||||
/* Store this cqe */
|
||||
memcpy(&sqp_entry->cqe, cqe, sizeof(struct bnxt_qplib_cqe));
|
||||
sqp_entry->qp1_qp = qp1_qp;
|
||||
sqp_entry->qp1_qp = gsi_qp;
|
||||
|
||||
/* Find packet type from the cqe */
|
||||
|
||||
@ -2944,7 +3145,7 @@ static int bnxt_re_process_raw_qp_pkt_rx(struct bnxt_re_qp *qp1_qp,
|
||||
rwr.wr_id = tbl_idx;
|
||||
rwr.next = NULL;
|
||||
|
||||
rc = bnxt_re_post_recv_shadow_qp(rdev, qp, &rwr);
|
||||
rc = bnxt_re_post_recv_shadow_qp(rdev, gsi_sqp, &rwr);
|
||||
if (rc) {
|
||||
dev_err(rdev_to_dev(rdev),
|
||||
"Failed to post Rx buffers to shadow QP");
|
||||
@ -2956,15 +3157,13 @@ static int bnxt_re_process_raw_qp_pkt_rx(struct bnxt_re_qp *qp1_qp,
|
||||
swr->wr_id = tbl_idx;
|
||||
swr->opcode = IB_WR_SEND;
|
||||
swr->next = NULL;
|
||||
|
||||
udwr.ah = &rdev->sqp_ah->ib_ah;
|
||||
udwr.remote_qpn = rdev->qp1_sqp->qplib_qp.id;
|
||||
udwr.remote_qkey = rdev->qp1_sqp->qplib_qp.qkey;
|
||||
gsi_sah = rdev->gsi_ctx.gsi_sah;
|
||||
udwr.ah = &gsi_sah->ib_ah;
|
||||
udwr.remote_qpn = gsi_sqp->qplib_qp.id;
|
||||
udwr.remote_qkey = gsi_sqp->qplib_qp.qkey;
|
||||
|
||||
/* post data received in the send queue */
|
||||
rc = bnxt_re_post_send_shadow_qp(rdev, qp, swr);
|
||||
|
||||
return 0;
|
||||
return bnxt_re_post_send_shadow_qp(rdev, gsi_sqp, swr);
|
||||
}
|
||||
|
||||
static void bnxt_re_process_res_rawqp1_wc(struct ib_wc *wc,
|
||||
@ -3029,12 +3228,12 @@ static void bnxt_re_process_res_rc_wc(struct ib_wc *wc,
|
||||
wc->opcode = IB_WC_RECV_RDMA_WITH_IMM;
|
||||
}
|
||||
|
||||
static void bnxt_re_process_res_shadow_qp_wc(struct bnxt_re_qp *qp,
|
||||
static void bnxt_re_process_res_shadow_qp_wc(struct bnxt_re_qp *gsi_sqp,
|
||||
struct ib_wc *wc,
|
||||
struct bnxt_qplib_cqe *cqe)
|
||||
{
|
||||
struct bnxt_re_dev *rdev = qp->rdev;
|
||||
struct bnxt_re_qp *qp1_qp = NULL;
|
||||
struct bnxt_re_dev *rdev = gsi_sqp->rdev;
|
||||
struct bnxt_re_qp *gsi_qp = NULL;
|
||||
struct bnxt_qplib_cqe *orig_cqe = NULL;
|
||||
struct bnxt_re_sqp_entries *sqp_entry = NULL;
|
||||
int nw_type;
|
||||
@ -3044,13 +3243,13 @@ static void bnxt_re_process_res_shadow_qp_wc(struct bnxt_re_qp *qp,
|
||||
|
||||
tbl_idx = cqe->wr_id;
|
||||
|
||||
sqp_entry = &rdev->sqp_tbl[tbl_idx];
|
||||
qp1_qp = sqp_entry->qp1_qp;
|
||||
sqp_entry = &rdev->gsi_ctx.sqp_tbl[tbl_idx];
|
||||
gsi_qp = sqp_entry->qp1_qp;
|
||||
orig_cqe = &sqp_entry->cqe;
|
||||
|
||||
wc->wr_id = sqp_entry->wrid;
|
||||
wc->byte_len = orig_cqe->length;
|
||||
wc->qp = &qp1_qp->ib_qp;
|
||||
wc->qp = &gsi_qp->ib_qp;
|
||||
|
||||
wc->ex.imm_data = orig_cqe->immdata;
|
||||
wc->src_qp = orig_cqe->src_qp;
|
||||
@ -3137,7 +3336,7 @@ static int send_phantom_wqe(struct bnxt_re_qp *qp)
|
||||
int bnxt_re_poll_cq(struct ib_cq *ib_cq, int num_entries, struct ib_wc *wc)
|
||||
{
|
||||
struct bnxt_re_cq *cq = container_of(ib_cq, struct bnxt_re_cq, ib_cq);
|
||||
struct bnxt_re_qp *qp;
|
||||
struct bnxt_re_qp *qp, *sh_qp;
|
||||
struct bnxt_qplib_cqe *cqe;
|
||||
int i, ncqe, budget;
|
||||
struct bnxt_qplib_q *sq;
|
||||
@ -3201,8 +3400,9 @@ int bnxt_re_poll_cq(struct ib_cq *ib_cq, int num_entries, struct ib_wc *wc)
|
||||
|
||||
switch (cqe->opcode) {
|
||||
case CQ_BASE_CQE_TYPE_REQ:
|
||||
if (qp->rdev->qp1_sqp && qp->qplib_qp.id ==
|
||||
qp->rdev->qp1_sqp->qplib_qp.id) {
|
||||
sh_qp = qp->rdev->gsi_ctx.gsi_sqp;
|
||||
if (sh_qp &&
|
||||
qp->qplib_qp.id == sh_qp->qplib_qp.id) {
|
||||
/* Handle this completion with
|
||||
* the stored completion
|
||||
*/
|
||||
@ -3228,7 +3428,7 @@ int bnxt_re_poll_cq(struct ib_cq *ib_cq, int num_entries, struct ib_wc *wc)
|
||||
* stored in the table
|
||||
*/
|
||||
tbl_idx = cqe->wr_id;
|
||||
sqp_entry = &cq->rdev->sqp_tbl[tbl_idx];
|
||||
sqp_entry = &cq->rdev->gsi_ctx.sqp_tbl[tbl_idx];
|
||||
wc->wr_id = sqp_entry->wrid;
|
||||
bnxt_re_process_res_rawqp1_wc(wc, cqe);
|
||||
break;
|
||||
@ -3236,8 +3436,9 @@ int bnxt_re_poll_cq(struct ib_cq *ib_cq, int num_entries, struct ib_wc *wc)
|
||||
bnxt_re_process_res_rc_wc(wc, cqe);
|
||||
break;
|
||||
case CQ_BASE_CQE_TYPE_RES_UD:
|
||||
if (qp->rdev->qp1_sqp && qp->qplib_qp.id ==
|
||||
qp->rdev->qp1_sqp->qplib_qp.id) {
|
||||
sh_qp = qp->rdev->gsi_ctx.gsi_sqp;
|
||||
if (sh_qp &&
|
||||
qp->qplib_qp.id == sh_qp->qplib_qp.id) {
|
||||
/* Handle this completion with
|
||||
* the stored completion
|
||||
*/
|
||||
|
@ -119,61 +119,76 @@ static void bnxt_re_get_sriov_func_type(struct bnxt_re_dev *rdev)
|
||||
* reserved for the function. The driver may choose to allocate fewer
|
||||
* resources than the firmware maximum.
|
||||
*/
|
||||
static void bnxt_re_limit_pf_res(struct bnxt_re_dev *rdev)
|
||||
{
|
||||
struct bnxt_qplib_dev_attr *attr;
|
||||
struct bnxt_qplib_ctx *ctx;
|
||||
int i;
|
||||
|
||||
attr = &rdev->dev_attr;
|
||||
ctx = &rdev->qplib_ctx;
|
||||
|
||||
ctx->qpc_count = min_t(u32, BNXT_RE_MAX_QPC_COUNT,
|
||||
attr->max_qp);
|
||||
ctx->mrw_count = BNXT_RE_MAX_MRW_COUNT_256K;
|
||||
/* Use max_mr from fw since max_mrw does not get set */
|
||||
ctx->mrw_count = min_t(u32, ctx->mrw_count, attr->max_mr);
|
||||
ctx->srqc_count = min_t(u32, BNXT_RE_MAX_SRQC_COUNT,
|
||||
attr->max_srq);
|
||||
ctx->cq_count = min_t(u32, BNXT_RE_MAX_CQ_COUNT, attr->max_cq);
|
||||
if (!bnxt_qplib_is_chip_gen_p5(&rdev->chip_ctx))
|
||||
for (i = 0; i < MAX_TQM_ALLOC_REQ; i++)
|
||||
rdev->qplib_ctx.tqm_count[i] =
|
||||
rdev->dev_attr.tqm_alloc_reqs[i];
|
||||
}
|
||||
|
||||
static void bnxt_re_limit_vf_res(struct bnxt_qplib_ctx *qplib_ctx, u32 num_vf)
|
||||
{
|
||||
struct bnxt_qplib_vf_res *vf_res;
|
||||
u32 mrws = 0;
|
||||
u32 vf_pct;
|
||||
u32 nvfs;
|
||||
|
||||
vf_res = &qplib_ctx->vf_res;
|
||||
/*
|
||||
* Reserve a set of resources for the PF. Divide the remaining
|
||||
* resources among the VFs
|
||||
*/
|
||||
vf_pct = 100 - BNXT_RE_PCT_RSVD_FOR_PF;
|
||||
nvfs = num_vf;
|
||||
num_vf = 100 * num_vf;
|
||||
vf_res->max_qp_per_vf = (qplib_ctx->qpc_count * vf_pct) / num_vf;
|
||||
vf_res->max_srq_per_vf = (qplib_ctx->srqc_count * vf_pct) / num_vf;
|
||||
vf_res->max_cq_per_vf = (qplib_ctx->cq_count * vf_pct) / num_vf;
|
||||
/*
|
||||
* The driver allows many more MRs than other resources. If the
|
||||
* firmware does also, then reserve a fixed amount for the PF and
|
||||
* divide the rest among VFs. VFs may use many MRs for NFS
|
||||
* mounts, ISER, NVME applications, etc. If the firmware severely
|
||||
* restricts the number of MRs, then let PF have half and divide
|
||||
* the rest among VFs, as for the other resource types.
|
||||
*/
|
||||
if (qplib_ctx->mrw_count < BNXT_RE_MAX_MRW_COUNT_64K) {
|
||||
mrws = qplib_ctx->mrw_count * vf_pct;
|
||||
nvfs = num_vf;
|
||||
} else {
|
||||
mrws = qplib_ctx->mrw_count - BNXT_RE_RESVD_MR_FOR_PF;
|
||||
}
|
||||
vf_res->max_mrw_per_vf = (mrws / nvfs);
|
||||
vf_res->max_gid_per_vf = BNXT_RE_MAX_GID_PER_VF;
|
||||
}
|
||||
|
||||
static void bnxt_re_set_resource_limits(struct bnxt_re_dev *rdev)
|
||||
{
|
||||
u32 vf_qps = 0, vf_srqs = 0, vf_cqs = 0, vf_mrws = 0, vf_gids = 0;
|
||||
u32 i;
|
||||
u32 vf_pct;
|
||||
u32 num_vfs;
|
||||
struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
|
||||
|
||||
rdev->qplib_ctx.qpc_count = min_t(u32, BNXT_RE_MAX_QPC_COUNT,
|
||||
dev_attr->max_qp);
|
||||
memset(&rdev->qplib_ctx.vf_res, 0, sizeof(struct bnxt_qplib_vf_res));
|
||||
bnxt_re_limit_pf_res(rdev);
|
||||
|
||||
rdev->qplib_ctx.mrw_count = BNXT_RE_MAX_MRW_COUNT_256K;
|
||||
/* Use max_mr from fw since max_mrw does not get set */
|
||||
rdev->qplib_ctx.mrw_count = min_t(u32, rdev->qplib_ctx.mrw_count,
|
||||
dev_attr->max_mr);
|
||||
rdev->qplib_ctx.srqc_count = min_t(u32, BNXT_RE_MAX_SRQC_COUNT,
|
||||
dev_attr->max_srq);
|
||||
rdev->qplib_ctx.cq_count = min_t(u32, BNXT_RE_MAX_CQ_COUNT,
|
||||
dev_attr->max_cq);
|
||||
|
||||
for (i = 0; i < MAX_TQM_ALLOC_REQ; i++)
|
||||
rdev->qplib_ctx.tqm_count[i] =
|
||||
rdev->dev_attr.tqm_alloc_reqs[i];
|
||||
|
||||
if (rdev->num_vfs) {
|
||||
/*
|
||||
* Reserve a set of resources for the PF. Divide the remaining
|
||||
* resources among the VFs
|
||||
*/
|
||||
vf_pct = 100 - BNXT_RE_PCT_RSVD_FOR_PF;
|
||||
num_vfs = 100 * rdev->num_vfs;
|
||||
vf_qps = (rdev->qplib_ctx.qpc_count * vf_pct) / num_vfs;
|
||||
vf_srqs = (rdev->qplib_ctx.srqc_count * vf_pct) / num_vfs;
|
||||
vf_cqs = (rdev->qplib_ctx.cq_count * vf_pct) / num_vfs;
|
||||
/*
|
||||
* The driver allows many more MRs than other resources. If the
|
||||
* firmware does also, then reserve a fixed amount for the PF
|
||||
* and divide the rest among VFs. VFs may use many MRs for NFS
|
||||
* mounts, ISER, NVME applications, etc. If the firmware
|
||||
* severely restricts the number of MRs, then let PF have
|
||||
* half and divide the rest among VFs, as for the other
|
||||
* resource types.
|
||||
*/
|
||||
if (rdev->qplib_ctx.mrw_count < BNXT_RE_MAX_MRW_COUNT_64K)
|
||||
vf_mrws = rdev->qplib_ctx.mrw_count * vf_pct / num_vfs;
|
||||
else
|
||||
vf_mrws = (rdev->qplib_ctx.mrw_count -
|
||||
BNXT_RE_RESVD_MR_FOR_PF) / rdev->num_vfs;
|
||||
vf_gids = BNXT_RE_MAX_GID_PER_VF;
|
||||
}
|
||||
rdev->qplib_ctx.vf_res.max_mrw_per_vf = vf_mrws;
|
||||
rdev->qplib_ctx.vf_res.max_gid_per_vf = vf_gids;
|
||||
rdev->qplib_ctx.vf_res.max_qp_per_vf = vf_qps;
|
||||
rdev->qplib_ctx.vf_res.max_srq_per_vf = vf_srqs;
|
||||
rdev->qplib_ctx.vf_res.max_cq_per_vf = vf_cqs;
|
||||
num_vfs = bnxt_qplib_is_chip_gen_p5(&rdev->chip_ctx) ?
|
||||
BNXT_RE_GEN_P5_MAX_VF : rdev->num_vfs;
|
||||
if (num_vfs)
|
||||
bnxt_re_limit_vf_res(&rdev->qplib_ctx, num_vfs);
|
||||
}
|
||||
|
||||
/* for handling bnxt_en callbacks later */
|
||||
@ -193,9 +208,11 @@ static void bnxt_re_sriov_config(void *p, int num_vfs)
|
||||
return;
|
||||
|
||||
rdev->num_vfs = num_vfs;
|
||||
bnxt_re_set_resource_limits(rdev);
|
||||
bnxt_qplib_set_func_resources(&rdev->qplib_res, &rdev->rcfw,
|
||||
&rdev->qplib_ctx);
|
||||
if (!bnxt_qplib_is_chip_gen_p5(&rdev->chip_ctx)) {
|
||||
bnxt_re_set_resource_limits(rdev);
|
||||
bnxt_qplib_set_func_resources(&rdev->qplib_res, &rdev->rcfw,
|
||||
&rdev->qplib_ctx);
|
||||
}
|
||||
}
|
||||
|
||||
static void bnxt_re_shutdown(void *p)
|
||||
@ -897,10 +914,14 @@ static int bnxt_re_cqn_handler(struct bnxt_qplib_nq *nq,
|
||||
return 0;
|
||||
}
|
||||
|
||||
#define BNXT_RE_GEN_P5_PF_NQ_DB 0x10000
|
||||
#define BNXT_RE_GEN_P5_VF_NQ_DB 0x4000
|
||||
static u32 bnxt_re_get_nqdb_offset(struct bnxt_re_dev *rdev, u16 indx)
|
||||
{
|
||||
return bnxt_qplib_is_chip_gen_p5(&rdev->chip_ctx) ?
|
||||
0x10000 : rdev->msix_entries[indx].db_offset;
|
||||
(rdev->is_virtfn ? BNXT_RE_GEN_P5_VF_NQ_DB :
|
||||
BNXT_RE_GEN_P5_PF_NQ_DB) :
|
||||
rdev->msix_entries[indx].db_offset;
|
||||
}
|
||||
|
||||
static void bnxt_re_cleanup_res(struct bnxt_re_dev *rdev)
|
||||
@ -1106,7 +1127,8 @@ static int bnxt_re_query_hwrm_pri2cos(struct bnxt_re_dev *rdev, u8 dir,
|
||||
static bool bnxt_re_is_qp1_or_shadow_qp(struct bnxt_re_dev *rdev,
|
||||
struct bnxt_re_qp *qp)
|
||||
{
|
||||
return (qp->ib_qp.qp_type == IB_QPT_GSI) || (qp == rdev->qp1_sqp);
|
||||
return (qp->ib_qp.qp_type == IB_QPT_GSI) ||
|
||||
(qp == rdev->gsi_ctx.gsi_sqp);
|
||||
}
|
||||
|
||||
static void bnxt_re_dev_stop(struct bnxt_re_dev *rdev)
|
||||
@ -1410,8 +1432,8 @@ static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev)
|
||||
rdev->is_virtfn);
|
||||
if (rc)
|
||||
goto disable_rcfw;
|
||||
if (!rdev->is_virtfn)
|
||||
bnxt_re_set_resource_limits(rdev);
|
||||
|
||||
bnxt_re_set_resource_limits(rdev);
|
||||
|
||||
rc = bnxt_qplib_alloc_ctx(rdev->en_dev->pdev, &rdev->qplib_ctx, 0,
|
||||
bnxt_qplib_is_chip_gen_p5(&rdev->chip_ctx));
|
||||
|
@ -494,8 +494,10 @@ int bnxt_qplib_init_rcfw(struct bnxt_qplib_rcfw *rcfw,
|
||||
* shall setup this area for VF. Skipping the
|
||||
* HW programming
|
||||
*/
|
||||
if (is_virtfn || bnxt_qplib_is_chip_gen_p5(rcfw->res->cctx))
|
||||
if (is_virtfn)
|
||||
goto skip_ctx_setup;
|
||||
if (bnxt_qplib_is_chip_gen_p5(rcfw->res->cctx))
|
||||
goto config_vf_res;
|
||||
|
||||
level = ctx->qpc_tbl.level;
|
||||
req.qpc_pg_size_qpc_lvl = (level << CMDQ_INITIALIZE_FW_QPC_LVL_SFT) |
|
||||
@ -540,6 +542,7 @@ int bnxt_qplib_init_rcfw(struct bnxt_qplib_rcfw *rcfw,
|
||||
req.number_of_srq = cpu_to_le32(ctx->srqc_tbl.max_elements);
|
||||
req.number_of_cq = cpu_to_le32(ctx->cq_tbl.max_elements);
|
||||
|
||||
config_vf_res:
|
||||
req.max_qp_per_vf = cpu_to_le32(ctx->vf_res.max_qp_per_vf);
|
||||
req.max_mrw_per_vf = cpu_to_le32(ctx->vf_res.max_mrw_per_vf);
|
||||
req.max_srq_per_vf = cpu_to_le32(ctx->vf_res.max_srq_per_vf);
|
||||
|
@ -3282,7 +3282,7 @@ static int get_lladdr(struct net_device *dev, struct in6_addr *addr,
|
||||
|
||||
static int pick_local_ip6addrs(struct c4iw_dev *dev, struct iw_cm_id *cm_id)
|
||||
{
|
||||
struct in6_addr uninitialized_var(addr);
|
||||
struct in6_addr addr;
|
||||
struct sockaddr_in6 *la6 = (struct sockaddr_in6 *)&cm_id->m_local_addr;
|
||||
struct sockaddr_in6 *ra6 = (struct sockaddr_in6 *)&cm_id->m_remote_addr;
|
||||
|
||||
|
@ -754,7 +754,7 @@ static int poll_cq(struct t4_wq *wq, struct t4_cq *cq, struct t4_cqe *cqe,
|
||||
static int __c4iw_poll_cq_one(struct c4iw_cq *chp, struct c4iw_qp *qhp,
|
||||
struct ib_wc *wc, struct c4iw_srq *srq)
|
||||
{
|
||||
struct t4_cqe uninitialized_var(cqe);
|
||||
struct t4_cqe cqe;
|
||||
struct t4_wq *wq = qhp ? &qhp->wq : NULL;
|
||||
u32 credit = 0;
|
||||
u8 cqe_flushed;
|
||||
|
@ -1231,7 +1231,7 @@ static int pbl_continuous_initialize(struct efa_dev *dev,
|
||||
*/
|
||||
static int pbl_indirect_initialize(struct efa_dev *dev, struct pbl_context *pbl)
|
||||
{
|
||||
u32 size_in_pages = DIV_ROUND_UP(pbl->pbl_buf_size_in_bytes, PAGE_SIZE);
|
||||
u32 size_in_pages = DIV_ROUND_UP(pbl->pbl_buf_size_in_bytes, EFA_CHUNK_PAYLOAD_SIZE);
|
||||
struct scatterlist *sgl;
|
||||
int sg_dma_cnt, err;
|
||||
|
||||
|
@ -3547,11 +3547,11 @@ static int _mlx4_ib_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr,
|
||||
int nreq;
|
||||
int err = 0;
|
||||
unsigned ind;
|
||||
int uninitialized_var(size);
|
||||
unsigned uninitialized_var(seglen);
|
||||
int size;
|
||||
unsigned seglen;
|
||||
__be32 dummy;
|
||||
__be32 *lso_wqe;
|
||||
__be32 uninitialized_var(lso_hdr_sz);
|
||||
__be32 lso_hdr_sz;
|
||||
__be32 blh;
|
||||
int i;
|
||||
struct mlx4_ib_dev *mdev = to_mdev(ibqp->device);
|
||||
|
@ -916,8 +916,8 @@ int mlx5_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
|
||||
struct mlx5_ib_dev *dev = to_mdev(ibdev);
|
||||
struct mlx5_ib_cq *cq = to_mcq(ibcq);
|
||||
u32 out[MLX5_ST_SZ_DW(create_cq_out)];
|
||||
int uninitialized_var(index);
|
||||
int uninitialized_var(inlen);
|
||||
int index;
|
||||
int inlen;
|
||||
u32 *cqb = NULL;
|
||||
void *cqc;
|
||||
int cqe_size;
|
||||
@ -1237,7 +1237,7 @@ int mlx5_ib_resize_cq(struct ib_cq *ibcq, int entries, struct ib_udata *udata)
|
||||
__be64 *pas;
|
||||
int page_shift;
|
||||
int inlen;
|
||||
int uninitialized_var(cqe_size);
|
||||
int cqe_size;
|
||||
unsigned long flags;
|
||||
|
||||
if (!MLX5_CAP_GEN(dev->mdev, cq_resize)) {
|
||||
|
@ -2556,7 +2556,7 @@ static ssize_t devx_async_event_read(struct file *filp, char __user *buf,
|
||||
{
|
||||
struct devx_async_event_file *ev_file = filp->private_data;
|
||||
struct devx_event_subscription *event_sub;
|
||||
struct devx_async_event_data *uninitialized_var(event);
|
||||
struct devx_async_event_data *event;
|
||||
int ret = 0;
|
||||
size_t eventsz;
|
||||
bool omit_data;
|
||||
|
@ -1639,8 +1639,8 @@ int mthca_tavor_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr,
|
||||
* without initializing f0 and size0, and they are in fact
|
||||
* never used uninitialized.
|
||||
*/
|
||||
int uninitialized_var(size0);
|
||||
u32 uninitialized_var(f0);
|
||||
int size0;
|
||||
u32 f0;
|
||||
int ind;
|
||||
u8 op0 = 0;
|
||||
|
||||
@ -1835,7 +1835,7 @@ int mthca_tavor_post_receive(struct ib_qp *ibqp, const struct ib_recv_wr *wr,
|
||||
* without initializing size0, and it is in fact never used
|
||||
* uninitialized.
|
||||
*/
|
||||
int uninitialized_var(size0);
|
||||
int size0;
|
||||
int ind;
|
||||
void *wqe;
|
||||
void *prev_wqe;
|
||||
@ -1943,8 +1943,8 @@ int mthca_arbel_post_send(struct ib_qp *ibqp, const struct ib_send_wr *wr,
|
||||
* without initializing f0 and size0, and they are in fact
|
||||
* never used uninitialized.
|
||||
*/
|
||||
int uninitialized_var(size0);
|
||||
u32 uninitialized_var(f0);
|
||||
int size0;
|
||||
u32 f0;
|
||||
int ind;
|
||||
u8 op0 = 0;
|
||||
|
||||
|
@ -159,7 +159,7 @@ static ssize_t serio_raw_read(struct file *file, char __user *buffer,
|
||||
{
|
||||
struct serio_raw_client *client = file->private_data;
|
||||
struct serio_raw *serio_raw = client->serio_raw;
|
||||
char uninitialized_var(c);
|
||||
char c;
|
||||
ssize_t read = 0;
|
||||
int error;
|
||||
|
||||
|
@ -4419,8 +4419,7 @@ int amd_iommu_activate_guest_mode(void *data)
|
||||
struct amd_ir_data *ir_data = (struct amd_ir_data *)data;
|
||||
struct irte_ga *entry = (struct irte_ga *) ir_data->entry;
|
||||
|
||||
if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir) ||
|
||||
!entry || entry->lo.fields_vapic.guest_mode)
|
||||
if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir) || !entry)
|
||||
return 0;
|
||||
|
||||
entry->lo.val = 0;
|
||||
|
@ -1230,18 +1230,20 @@ static int rk_iommu_probe(struct platform_device *pdev)
|
||||
for (i = 0; i < iommu->num_irq; i++) {
|
||||
int irq = platform_get_irq(pdev, i);
|
||||
|
||||
if (irq < 0)
|
||||
return irq;
|
||||
if (irq < 0) {
|
||||
err = irq;
|
||||
goto err_pm_disable;
|
||||
}
|
||||
|
||||
err = devm_request_irq(iommu->dev, irq, rk_iommu_irq,
|
||||
IRQF_SHARED, dev_name(dev), iommu);
|
||||
if (err) {
|
||||
pm_runtime_disable(dev);
|
||||
goto err_remove_sysfs;
|
||||
}
|
||||
if (err)
|
||||
goto err_pm_disable;
|
||||
}
|
||||
|
||||
return 0;
|
||||
err_pm_disable:
|
||||
pm_runtime_disable(dev);
|
||||
err_remove_sysfs:
|
||||
iommu_device_sysfs_remove(&iommu->iommu);
|
||||
err_put_group:
|
||||
|
@ -12,6 +12,7 @@
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/mailbox_client.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/mutex.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/poll.h>
|
||||
@ -38,6 +39,7 @@ struct mbox_test_device {
|
||||
char *signal;
|
||||
char *message;
|
||||
spinlock_t lock;
|
||||
struct mutex mutex;
|
||||
wait_queue_head_t waitq;
|
||||
struct fasync_struct *async_queue;
|
||||
struct dentry *root_debugfs_dir;
|
||||
@ -95,6 +97,7 @@ static ssize_t mbox_test_message_write(struct file *filp,
|
||||
size_t count, loff_t *ppos)
|
||||
{
|
||||
struct mbox_test_device *tdev = filp->private_data;
|
||||
char *message;
|
||||
void *data;
|
||||
int ret;
|
||||
|
||||
@ -110,10 +113,13 @@ static ssize_t mbox_test_message_write(struct file *filp,
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
tdev->message = kzalloc(MBOX_MAX_MSG_LEN, GFP_KERNEL);
|
||||
if (!tdev->message)
|
||||
message = kzalloc(MBOX_MAX_MSG_LEN, GFP_KERNEL);
|
||||
if (!message)
|
||||
return -ENOMEM;
|
||||
|
||||
mutex_lock(&tdev->mutex);
|
||||
|
||||
tdev->message = message;
|
||||
ret = copy_from_user(tdev->message, userbuf, count);
|
||||
if (ret) {
|
||||
ret = -EFAULT;
|
||||
@ -144,6 +150,8 @@ static ssize_t mbox_test_message_write(struct file *filp,
|
||||
kfree(tdev->message);
|
||||
tdev->signal = NULL;
|
||||
|
||||
mutex_unlock(&tdev->mutex);
|
||||
|
||||
return ret < 0 ? ret : count;
|
||||
}
|
||||
|
||||
@ -392,6 +400,7 @@ static int mbox_test_probe(struct platform_device *pdev)
|
||||
platform_set_drvdata(pdev, tdev);
|
||||
|
||||
spin_lock_init(&tdev->lock);
|
||||
mutex_init(&tdev->mutex);
|
||||
|
||||
if (tdev->rx_channel) {
|
||||
tdev->rx_buffer = devm_kzalloc(&pdev->dev,
|
||||
|
@ -306,7 +306,7 @@ static void do_region(int op, int op_flags, unsigned region,
|
||||
struct request_queue *q = bdev_get_queue(where->bdev);
|
||||
unsigned short logical_block_size = queue_logical_block_size(q);
|
||||
sector_t num_sectors;
|
||||
unsigned int uninitialized_var(special_cmd_max_sectors);
|
||||
unsigned int special_cmd_max_sectors;
|
||||
|
||||
/*
|
||||
* Reject unsupported discard and write same requests.
|
||||
|
@ -1848,7 +1848,7 @@ static int ctl_ioctl(struct file *file, uint command, struct dm_ioctl __user *us
|
||||
int ioctl_flags;
|
||||
int param_flags;
|
||||
unsigned int cmd;
|
||||
struct dm_ioctl *uninitialized_var(param);
|
||||
struct dm_ioctl *param;
|
||||
ioctl_fn fn = NULL;
|
||||
size_t input_param_size;
|
||||
struct dm_ioctl param_kernel;
|
||||
|
@ -613,7 +613,7 @@ static int persistent_read_metadata(struct dm_exception_store *store,
|
||||
chunk_t old, chunk_t new),
|
||||
void *callback_context)
|
||||
{
|
||||
int r, uninitialized_var(new_snapshot);
|
||||
int r, new_snapshot;
|
||||
struct pstore *ps = get_info(store);
|
||||
|
||||
/*
|
||||
|
@ -670,7 +670,7 @@ static int validate_hardware_logical_block_alignment(struct dm_table *table,
|
||||
*/
|
||||
unsigned short remaining = 0;
|
||||
|
||||
struct dm_target *uninitialized_var(ti);
|
||||
struct dm_target *ti;
|
||||
struct queue_limits ti_limits;
|
||||
unsigned i;
|
||||
|
||||
|
@ -2599,7 +2599,7 @@ static void raid5_end_write_request(struct bio *bi)
|
||||
struct stripe_head *sh = bi->bi_private;
|
||||
struct r5conf *conf = sh->raid_conf;
|
||||
int disks = sh->disks, i;
|
||||
struct md_rdev *uninitialized_var(rdev);
|
||||
struct md_rdev *rdev;
|
||||
sector_t first_bad;
|
||||
int bad_sectors;
|
||||
int replacement = 0;
|
||||
|
@ -151,6 +151,12 @@ struct dvb_ca_private {
|
||||
|
||||
/* mutex serializing ioctls */
|
||||
struct mutex ioctl_mutex;
|
||||
|
||||
/* A mutex used when a device is disconnected */
|
||||
struct mutex remove_mutex;
|
||||
|
||||
/* Whether the device is disconnected */
|
||||
int exit;
|
||||
};
|
||||
|
||||
static void dvb_ca_private_free(struct dvb_ca_private *ca)
|
||||
@ -187,7 +193,7 @@ static void dvb_ca_en50221_thread_wakeup(struct dvb_ca_private *ca);
|
||||
static int dvb_ca_en50221_read_data(struct dvb_ca_private *ca, int slot,
|
||||
u8 *ebuf, int ecount);
|
||||
static int dvb_ca_en50221_write_data(struct dvb_ca_private *ca, int slot,
|
||||
u8 *ebuf, int ecount);
|
||||
u8 *ebuf, int ecount, int size_write_flag);
|
||||
|
||||
/**
|
||||
* Safely find needle in haystack.
|
||||
@ -370,7 +376,7 @@ static int dvb_ca_en50221_link_init(struct dvb_ca_private *ca, int slot)
|
||||
ret = dvb_ca_en50221_wait_if_status(ca, slot, STATUSREG_FR, HZ / 10);
|
||||
if (ret)
|
||||
return ret;
|
||||
ret = dvb_ca_en50221_write_data(ca, slot, buf, 2);
|
||||
ret = dvb_ca_en50221_write_data(ca, slot, buf, 2, CMDREG_SW);
|
||||
if (ret != 2)
|
||||
return -EIO;
|
||||
ret = ca->pub->write_cam_control(ca->pub, slot, CTRLIF_COMMAND, IRQEN);
|
||||
@ -778,11 +784,13 @@ static int dvb_ca_en50221_read_data(struct dvb_ca_private *ca, int slot,
|
||||
* @buf: The data in this buffer is treated as a complete link-level packet to
|
||||
* be written.
|
||||
* @bytes_write: Size of ebuf.
|
||||
* @size_write_flag: A flag on Command Register which says whether the link size
|
||||
* information will be writen or not.
|
||||
*
|
||||
* return: Number of bytes written, or < 0 on error.
|
||||
*/
|
||||
static int dvb_ca_en50221_write_data(struct dvb_ca_private *ca, int slot,
|
||||
u8 *buf, int bytes_write)
|
||||
u8 *buf, int bytes_write, int size_write_flag)
|
||||
{
|
||||
struct dvb_ca_slot *sl = &ca->slot_info[slot];
|
||||
int status;
|
||||
@ -817,7 +825,7 @@ static int dvb_ca_en50221_write_data(struct dvb_ca_private *ca, int slot,
|
||||
|
||||
/* OK, set HC bit */
|
||||
status = ca->pub->write_cam_control(ca->pub, slot, CTRLIF_COMMAND,
|
||||
IRQEN | CMDREG_HC);
|
||||
IRQEN | CMDREG_HC | size_write_flag);
|
||||
if (status)
|
||||
goto exit;
|
||||
|
||||
@ -1505,7 +1513,7 @@ static ssize_t dvb_ca_en50221_io_write(struct file *file,
|
||||
|
||||
mutex_lock(&sl->slot_lock);
|
||||
status = dvb_ca_en50221_write_data(ca, slot, fragbuf,
|
||||
fraglen + 2);
|
||||
fraglen + 2, 0);
|
||||
mutex_unlock(&sl->slot_lock);
|
||||
if (status == (fraglen + 2)) {
|
||||
written = 1;
|
||||
@ -1706,12 +1714,22 @@ static int dvb_ca_en50221_io_open(struct inode *inode, struct file *file)
|
||||
|
||||
dprintk("%s\n", __func__);
|
||||
|
||||
if (!try_module_get(ca->pub->owner))
|
||||
mutex_lock(&ca->remove_mutex);
|
||||
|
||||
if (ca->exit) {
|
||||
mutex_unlock(&ca->remove_mutex);
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
if (!try_module_get(ca->pub->owner)) {
|
||||
mutex_unlock(&ca->remove_mutex);
|
||||
return -EIO;
|
||||
}
|
||||
|
||||
err = dvb_generic_open(inode, file);
|
||||
if (err < 0) {
|
||||
module_put(ca->pub->owner);
|
||||
mutex_unlock(&ca->remove_mutex);
|
||||
return err;
|
||||
}
|
||||
|
||||
@ -1736,6 +1754,7 @@ static int dvb_ca_en50221_io_open(struct inode *inode, struct file *file)
|
||||
|
||||
dvb_ca_private_get(ca);
|
||||
|
||||
mutex_unlock(&ca->remove_mutex);
|
||||
return 0;
|
||||
}
|
||||
|
||||
@ -1755,6 +1774,8 @@ static int dvb_ca_en50221_io_release(struct inode *inode, struct file *file)
|
||||
|
||||
dprintk("%s\n", __func__);
|
||||
|
||||
mutex_lock(&ca->remove_mutex);
|
||||
|
||||
/* mark the CA device as closed */
|
||||
ca->open = 0;
|
||||
dvb_ca_en50221_thread_update_delay(ca);
|
||||
@ -1765,6 +1786,13 @@ static int dvb_ca_en50221_io_release(struct inode *inode, struct file *file)
|
||||
|
||||
dvb_ca_private_put(ca);
|
||||
|
||||
if (dvbdev->users == 1 && ca->exit == 1) {
|
||||
mutex_unlock(&ca->remove_mutex);
|
||||
wake_up(&dvbdev->wait_queue);
|
||||
} else {
|
||||
mutex_unlock(&ca->remove_mutex);
|
||||
}
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
@ -1888,6 +1916,7 @@ int dvb_ca_en50221_init(struct dvb_adapter *dvb_adapter,
|
||||
}
|
||||
|
||||
mutex_init(&ca->ioctl_mutex);
|
||||
mutex_init(&ca->remove_mutex);
|
||||
|
||||
if (signal_pending(current)) {
|
||||
ret = -EINTR;
|
||||
@ -1930,6 +1959,14 @@ void dvb_ca_en50221_release(struct dvb_ca_en50221 *pubca)
|
||||
|
||||
dprintk("%s\n", __func__);
|
||||
|
||||
mutex_lock(&ca->remove_mutex);
|
||||
ca->exit = 1;
|
||||
mutex_unlock(&ca->remove_mutex);
|
||||
|
||||
if (ca->dvbdev->users < 1)
|
||||
wait_event(ca->dvbdev->wait_queue,
|
||||
ca->dvbdev->users == 1);
|
||||
|
||||
/* shutdown the thread if there was one */
|
||||
kthread_stop(ca->thread);
|
||||
|
||||
|
@ -125,12 +125,12 @@ static inline int dvb_dmx_swfilter_payload(struct dvb_demux_feed *feed,
|
||||
|
||||
cc = buf[3] & 0x0f;
|
||||
ccok = ((feed->cc + 1) & 0x0f) == cc;
|
||||
feed->cc = cc;
|
||||
if (!ccok) {
|
||||
set_buf_flags(feed, DMX_BUFFER_FLAG_DISCONTINUITY_DETECTED);
|
||||
dprintk_sect_loss("missed packet: %d instead of %d!\n",
|
||||
cc, (feed->cc + 1) & 0x0f);
|
||||
}
|
||||
feed->cc = cc;
|
||||
|
||||
if (buf[1] & 0x40) // PUSI ?
|
||||
feed->peslen = 0xfffa;
|
||||
@ -310,7 +310,6 @@ static int dvb_dmx_swfilter_section_packet(struct dvb_demux_feed *feed,
|
||||
|
||||
cc = buf[3] & 0x0f;
|
||||
ccok = ((feed->cc + 1) & 0x0f) == cc;
|
||||
feed->cc = cc;
|
||||
|
||||
if (buf[3] & 0x20) {
|
||||
/* adaption field present, check for discontinuity_indicator */
|
||||
@ -346,6 +345,7 @@ static int dvb_dmx_swfilter_section_packet(struct dvb_demux_feed *feed,
|
||||
feed->pusi_seen = false;
|
||||
dvb_dmx_swfilter_section_new(feed);
|
||||
}
|
||||
feed->cc = cc;
|
||||
|
||||
if (buf[1] & 0x40) {
|
||||
/* PUSI=1 (is set), section boundary is here */
|
||||
|
@ -292,14 +292,22 @@ static int dvb_frontend_get_event(struct dvb_frontend *fe,
|
||||
}
|
||||
|
||||
if (events->eventw == events->eventr) {
|
||||
int ret;
|
||||
struct wait_queue_entry wait;
|
||||
int ret = 0;
|
||||
|
||||
if (flags & O_NONBLOCK)
|
||||
return -EWOULDBLOCK;
|
||||
|
||||
ret = wait_event_interruptible(events->wait_queue,
|
||||
dvb_frontend_test_event(fepriv, events));
|
||||
|
||||
init_waitqueue_entry(&wait, current);
|
||||
add_wait_queue(&events->wait_queue, &wait);
|
||||
while (!dvb_frontend_test_event(fepriv, events)) {
|
||||
wait_woken(&wait, TASK_INTERRUPTIBLE, 0);
|
||||
if (signal_pending(current)) {
|
||||
ret = -ERESTARTSYS;
|
||||
break;
|
||||
}
|
||||
}
|
||||
remove_wait_queue(&events->wait_queue, &wait);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
}
|
||||
|
@ -1564,15 +1564,43 @@ static long dvb_net_ioctl(struct file *file,
|
||||
return dvb_usercopy(file, cmd, arg, dvb_net_do_ioctl);
|
||||
}
|
||||
|
||||
static int locked_dvb_net_open(struct inode *inode, struct file *file)
|
||||
{
|
||||
struct dvb_device *dvbdev = file->private_data;
|
||||
struct dvb_net *dvbnet = dvbdev->priv;
|
||||
int ret;
|
||||
|
||||
if (mutex_lock_interruptible(&dvbnet->remove_mutex))
|
||||
return -ERESTARTSYS;
|
||||
|
||||
if (dvbnet->exit) {
|
||||
mutex_unlock(&dvbnet->remove_mutex);
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
ret = dvb_generic_open(inode, file);
|
||||
|
||||
mutex_unlock(&dvbnet->remove_mutex);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int dvb_net_close(struct inode *inode, struct file *file)
|
||||
{
|
||||
struct dvb_device *dvbdev = file->private_data;
|
||||
struct dvb_net *dvbnet = dvbdev->priv;
|
||||
|
||||
mutex_lock(&dvbnet->remove_mutex);
|
||||
|
||||
dvb_generic_release(inode, file);
|
||||
|
||||
if(dvbdev->users == 1 && dvbnet->exit == 1)
|
||||
if (dvbdev->users == 1 && dvbnet->exit == 1) {
|
||||
mutex_unlock(&dvbnet->remove_mutex);
|
||||
wake_up(&dvbdev->wait_queue);
|
||||
} else {
|
||||
mutex_unlock(&dvbnet->remove_mutex);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
@ -1580,7 +1608,7 @@ static int dvb_net_close(struct inode *inode, struct file *file)
|
||||
static const struct file_operations dvb_net_fops = {
|
||||
.owner = THIS_MODULE,
|
||||
.unlocked_ioctl = dvb_net_ioctl,
|
||||
.open = dvb_generic_open,
|
||||
.open = locked_dvb_net_open,
|
||||
.release = dvb_net_close,
|
||||
.llseek = noop_llseek,
|
||||
};
|
||||
@ -1599,10 +1627,13 @@ void dvb_net_release (struct dvb_net *dvbnet)
|
||||
{
|
||||
int i;
|
||||
|
||||
mutex_lock(&dvbnet->remove_mutex);
|
||||
dvbnet->exit = 1;
|
||||
mutex_unlock(&dvbnet->remove_mutex);
|
||||
|
||||
if (dvbnet->dvbdev->users < 1)
|
||||
wait_event(dvbnet->dvbdev->wait_queue,
|
||||
dvbnet->dvbdev->users==1);
|
||||
dvbnet->dvbdev->users == 1);
|
||||
|
||||
dvb_unregister_device(dvbnet->dvbdev);
|
||||
|
||||
@ -1621,6 +1652,7 @@ int dvb_net_init (struct dvb_adapter *adap, struct dvb_net *dvbnet,
|
||||
int i;
|
||||
|
||||
mutex_init(&dvbnet->ioctl_mutex);
|
||||
mutex_init(&dvbnet->remove_mutex);
|
||||
dvbnet->demux = dmx;
|
||||
|
||||
for (i=0; i<DVB_NET_DEVICES_MAX; i++)
|
||||
|
@ -800,7 +800,7 @@ MODULE_DEVICE_TABLE(i2c, mn88443x_i2c_id);
|
||||
static struct i2c_driver mn88443x_driver = {
|
||||
.driver = {
|
||||
.name = "mn88443x",
|
||||
.of_match_table = of_match_ptr(mn88443x_of_match),
|
||||
.of_match_table = mn88443x_of_match,
|
||||
},
|
||||
.probe = mn88443x_probe,
|
||||
.remove = mn88443x_remove,
|
||||
|
@ -640,7 +640,7 @@ static int rtl2832_read_status(struct dvb_frontend *fe, enum fe_status *status)
|
||||
struct i2c_client *client = dev->client;
|
||||
struct dtv_frontend_properties *c = &fe->dtv_property_cache;
|
||||
int ret;
|
||||
u32 uninitialized_var(tmp);
|
||||
u32 tmp;
|
||||
u8 u8tmp, buf[2];
|
||||
u16 u16tmp;
|
||||
|
||||
|
@ -887,12 +887,7 @@ static int netup_unidvb_initdev(struct pci_dev *pci_dev,
|
||||
ndev->lmmio0, (u32)pci_resource_len(pci_dev, 0),
|
||||
ndev->lmmio1, (u32)pci_resource_len(pci_dev, 1),
|
||||
pci_dev->irq);
|
||||
if (request_irq(pci_dev->irq, netup_unidvb_isr, IRQF_SHARED,
|
||||
"netup_unidvb", pci_dev) < 0) {
|
||||
dev_err(&pci_dev->dev,
|
||||
"%s(): can't get IRQ %d\n", __func__, pci_dev->irq);
|
||||
goto irq_request_err;
|
||||
}
|
||||
|
||||
ndev->dma_size = 2 * 188 *
|
||||
NETUP_DMA_BLOCKS_COUNT * NETUP_DMA_PACKETS_COUNT;
|
||||
ndev->dma_virt = dma_alloc_coherent(&pci_dev->dev,
|
||||
@ -933,6 +928,14 @@ static int netup_unidvb_initdev(struct pci_dev *pci_dev,
|
||||
dev_err(&pci_dev->dev, "netup_unidvb: DMA setup failed\n");
|
||||
goto dma_setup_err;
|
||||
}
|
||||
|
||||
if (request_irq(pci_dev->irq, netup_unidvb_isr, IRQF_SHARED,
|
||||
"netup_unidvb", pci_dev) < 0) {
|
||||
dev_err(&pci_dev->dev,
|
||||
"%s(): can't get IRQ %d\n", __func__, pci_dev->irq);
|
||||
goto dma_setup_err;
|
||||
}
|
||||
|
||||
dev_info(&pci_dev->dev,
|
||||
"netup_unidvb: device has been initialized\n");
|
||||
return 0;
|
||||
@ -951,8 +954,6 @@ static int netup_unidvb_initdev(struct pci_dev *pci_dev,
|
||||
dma_free_coherent(&pci_dev->dev, ndev->dma_size,
|
||||
ndev->dma_virt, ndev->dma_phys);
|
||||
dma_alloc_err:
|
||||
free_irq(pci_dev->irq, pci_dev);
|
||||
irq_request_err:
|
||||
iounmap(ndev->lmmio1);
|
||||
pci_bar1_error:
|
||||
iounmap(ndev->lmmio0);
|
||||
|
@ -638,6 +638,7 @@ static int rvin_setup(struct rvin_dev *vin)
|
||||
vnmc = VNMC_IM_FULL | VNMC_FOC;
|
||||
break;
|
||||
case V4L2_FIELD_NONE:
|
||||
case V4L2_FIELD_ALTERNATE:
|
||||
vnmc = VNMC_IM_ODD_EVEN;
|
||||
progressive = true;
|
||||
break;
|
||||
|
@ -215,7 +215,7 @@ static int qt1010_set_params(struct dvb_frontend *fe)
|
||||
static int qt1010_init_meas1(struct qt1010_priv *priv,
|
||||
u8 oper, u8 reg, u8 reg_init_val, u8 *retval)
|
||||
{
|
||||
u8 i, val1, uninitialized_var(val2);
|
||||
u8 i, val1, val2;
|
||||
int err;
|
||||
|
||||
qt1010_i2c_oper_t i2c_data[] = {
|
||||
@ -250,7 +250,7 @@ static int qt1010_init_meas1(struct qt1010_priv *priv,
|
||||
static int qt1010_init_meas2(struct qt1010_priv *priv,
|
||||
u8 reg_init_val, u8 *retval)
|
||||
{
|
||||
u8 i, uninitialized_var(val);
|
||||
u8 i, val;
|
||||
int err;
|
||||
qt1010_i2c_oper_t i2c_data[] = {
|
||||
{ QT1010_WR, 0x07, reg_init_val },
|
||||
|
@ -101,6 +101,10 @@ static int ce6230_i2c_master_xfer(struct i2c_adapter *adap,
|
||||
if (num > i + 1 && (msg[i+1].flags & I2C_M_RD)) {
|
||||
if (msg[i].addr ==
|
||||
ce6230_zl10353_config.demod_address) {
|
||||
if (msg[i].len < 1) {
|
||||
i = -EOPNOTSUPP;
|
||||
break;
|
||||
}
|
||||
req.cmd = DEMOD_READ;
|
||||
req.value = msg[i].addr >> 1;
|
||||
req.index = msg[i].buf[0];
|
||||
@ -117,6 +121,10 @@ static int ce6230_i2c_master_xfer(struct i2c_adapter *adap,
|
||||
} else {
|
||||
if (msg[i].addr ==
|
||||
ce6230_zl10353_config.demod_address) {
|
||||
if (msg[i].len < 1) {
|
||||
i = -EOPNOTSUPP;
|
||||
break;
|
||||
}
|
||||
req.cmd = DEMOD_WRITE;
|
||||
req.value = msg[i].addr >> 1;
|
||||
req.index = msg[i].buf[0];
|
||||
|
@ -115,6 +115,10 @@ static int ec168_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
|
||||
while (i < num) {
|
||||
if (num > i + 1 && (msg[i+1].flags & I2C_M_RD)) {
|
||||
if (msg[i].addr == ec168_ec100_config.demod_address) {
|
||||
if (msg[i].len < 1) {
|
||||
i = -EOPNOTSUPP;
|
||||
break;
|
||||
}
|
||||
req.cmd = READ_DEMOD;
|
||||
req.value = 0;
|
||||
req.index = 0xff00 + msg[i].buf[0]; /* reg */
|
||||
@ -131,6 +135,10 @@ static int ec168_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
|
||||
}
|
||||
} else {
|
||||
if (msg[i].addr == ec168_ec100_config.demod_address) {
|
||||
if (msg[i].len < 1) {
|
||||
i = -EOPNOTSUPP;
|
||||
break;
|
||||
}
|
||||
req.cmd = WRITE_DEMOD;
|
||||
req.value = msg[i].buf[1]; /* val */
|
||||
req.index = 0xff00 + msg[i].buf[0]; /* reg */
|
||||
@ -139,6 +147,10 @@ static int ec168_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
|
||||
ret = ec168_ctrl_msg(d, &req);
|
||||
i += 1;
|
||||
} else {
|
||||
if (msg[i].len < 1) {
|
||||
i = -EOPNOTSUPP;
|
||||
break;
|
||||
}
|
||||
req.cmd = WRITE_I2C;
|
||||
req.value = msg[i].buf[0]; /* val */
|
||||
req.index = 0x0100 + msg[i].addr; /* I2C addr */
|
||||
|
@ -176,6 +176,10 @@ static int rtl28xxu_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
|
||||
ret = -EOPNOTSUPP;
|
||||
goto err_mutex_unlock;
|
||||
} else if (msg[0].addr == 0x10) {
|
||||
if (msg[0].len < 1 || msg[1].len < 1) {
|
||||
ret = -EOPNOTSUPP;
|
||||
goto err_mutex_unlock;
|
||||
}
|
||||
/* method 1 - integrated demod */
|
||||
if (msg[0].buf[0] == 0x00) {
|
||||
/* return demod page from driver cache */
|
||||
@ -189,6 +193,10 @@ static int rtl28xxu_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
|
||||
ret = rtl28xxu_ctrl_msg(d, &req);
|
||||
}
|
||||
} else if (msg[0].len < 2) {
|
||||
if (msg[0].len < 1) {
|
||||
ret = -EOPNOTSUPP;
|
||||
goto err_mutex_unlock;
|
||||
}
|
||||
/* method 2 - old I2C */
|
||||
req.value = (msg[0].buf[0] << 8) | (msg[0].addr << 1);
|
||||
req.index = CMD_I2C_RD;
|
||||
@ -217,8 +225,16 @@ static int rtl28xxu_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
|
||||
ret = -EOPNOTSUPP;
|
||||
goto err_mutex_unlock;
|
||||
} else if (msg[0].addr == 0x10) {
|
||||
if (msg[0].len < 1) {
|
||||
ret = -EOPNOTSUPP;
|
||||
goto err_mutex_unlock;
|
||||
}
|
||||
/* method 1 - integrated demod */
|
||||
if (msg[0].buf[0] == 0x00) {
|
||||
if (msg[0].len < 2) {
|
||||
ret = -EOPNOTSUPP;
|
||||
goto err_mutex_unlock;
|
||||
}
|
||||
/* save demod page for later demod access */
|
||||
dev->page = msg[0].buf[1];
|
||||
ret = 0;
|
||||
@ -231,6 +247,10 @@ static int rtl28xxu_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
|
||||
ret = rtl28xxu_ctrl_msg(d, &req);
|
||||
}
|
||||
} else if ((msg[0].len < 23) && (!dev->new_i2c_write)) {
|
||||
if (msg[0].len < 1) {
|
||||
ret = -EOPNOTSUPP;
|
||||
goto err_mutex_unlock;
|
||||
}
|
||||
/* method 2 - old I2C */
|
||||
req.value = (msg[0].buf[0] << 8) | (msg[0].addr << 1);
|
||||
req.index = CMD_I2C_WR;
|
||||
|
@ -988,6 +988,10 @@ static int az6027_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int n
|
||||
/* write/read request */
|
||||
if (i + 1 < num && (msg[i + 1].flags & I2C_M_RD)) {
|
||||
req = 0xB9;
|
||||
if (msg[i].len < 1) {
|
||||
i = -EOPNOTSUPP;
|
||||
break;
|
||||
}
|
||||
index = (((msg[i].buf[0] << 8) & 0xff00) | (msg[i].buf[1] & 0x00ff));
|
||||
value = msg[i].addr + (msg[i].len << 8);
|
||||
length = msg[i + 1].len + 6;
|
||||
@ -1001,6 +1005,10 @@ static int az6027_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int n
|
||||
|
||||
/* demod 16bit addr */
|
||||
req = 0xBD;
|
||||
if (msg[i].len < 1) {
|
||||
i = -EOPNOTSUPP;
|
||||
break;
|
||||
}
|
||||
index = (((msg[i].buf[0] << 8) & 0xff00) | (msg[i].buf[1] & 0x00ff));
|
||||
value = msg[i].addr + (2 << 8);
|
||||
length = msg[i].len - 2;
|
||||
@ -1026,6 +1034,10 @@ static int az6027_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[], int n
|
||||
} else {
|
||||
|
||||
req = 0xBD;
|
||||
if (msg[i].len < 1) {
|
||||
i = -EOPNOTSUPP;
|
||||
break;
|
||||
}
|
||||
index = msg[i].buf[0] & 0x00FF;
|
||||
value = msg[i].addr + (1 << 8);
|
||||
length = msg[i].len - 1;
|
||||
|
@ -63,6 +63,10 @@ static int digitv_i2c_xfer(struct i2c_adapter *adap,struct i2c_msg msg[],int num
|
||||
warn("more than 2 i2c messages at a time is not handled yet. TODO.");
|
||||
|
||||
for (i = 0; i < num; i++) {
|
||||
if (msg[i].len < 1) {
|
||||
i = -EOPNOTSUPP;
|
||||
break;
|
||||
}
|
||||
/* write/read request */
|
||||
if (i+1 < num && (msg[i+1].flags & I2C_M_RD)) {
|
||||
if (digitv_ctrl_msg(d, USB_READ_COFDM, msg[i].buf[0], NULL, 0,
|
||||
|
@ -946,7 +946,7 @@ static int su3000_read_mac_address(struct dvb_usb_device *d, u8 mac[6])
|
||||
for (i = 0; i < 6; i++) {
|
||||
obuf[1] = 0xf0 + i;
|
||||
if (i2c_transfer(&d->i2c_adap, msg, 2) != 2)
|
||||
break;
|
||||
return -1;
|
||||
else
|
||||
mac[i] = ibuf[0];
|
||||
}
|
||||
|
@ -225,7 +225,7 @@ static int sd_init(struct gspca_dev *gspca_dev)
|
||||
{
|
||||
int ret;
|
||||
const struct ihex_binrec *rec;
|
||||
const struct firmware *uninitialized_var(fw);
|
||||
const struct firmware *fw;
|
||||
u8 *firmware_buf;
|
||||
|
||||
ret = request_ihex_firmware(&fw, VICAM_FIRMWARE,
|
||||
|
@ -1551,8 +1551,7 @@ static void ttusb_dec_exit_dvb(struct ttusb_dec *dec)
|
||||
dvb_dmx_release(&dec->demux);
|
||||
if (dec->fe) {
|
||||
dvb_unregister_frontend(dec->fe);
|
||||
if (dec->fe->ops.release)
|
||||
dec->fe->ops.release(dec->fe);
|
||||
dvb_frontend_detach(dec->fe);
|
||||
}
|
||||
dvb_unregister_adapter(&dec->adapter);
|
||||
}
|
||||
|
@ -797,9 +797,9 @@ static void uvc_video_stats_decode(struct uvc_streaming *stream,
|
||||
unsigned int header_size;
|
||||
bool has_pts = false;
|
||||
bool has_scr = false;
|
||||
u16 uninitialized_var(scr_sof);
|
||||
u32 uninitialized_var(scr_stc);
|
||||
u32 uninitialized_var(pts);
|
||||
u16 scr_sof;
|
||||
u32 scr_stc;
|
||||
u32 pts;
|
||||
|
||||
if (stream->stats.stream.nb_frames == 0 &&
|
||||
stream->stats.frame.nb_packets == 0)
|
||||
@ -1862,7 +1862,7 @@ static int uvc_video_start_transfer(struct uvc_streaming *stream,
|
||||
struct usb_host_endpoint *best_ep = NULL;
|
||||
unsigned int best_psize = UINT_MAX;
|
||||
unsigned int bandwidth;
|
||||
unsigned int uninitialized_var(altsetting);
|
||||
unsigned int altsetting;
|
||||
int intfnum = stream->intfnum;
|
||||
|
||||
/* Isochronous endpoint, select the alternate setting. */
|
||||
|
@ -314,7 +314,7 @@ static int jmb38x_ms_transfer_data(struct jmb38x_ms_host *host)
|
||||
}
|
||||
|
||||
while (length) {
|
||||
unsigned int uninitialized_var(p_off);
|
||||
unsigned int p_off;
|
||||
|
||||
if (host->req->long_data) {
|
||||
pg = nth_page(sg_page(&host->req->sg),
|
||||
|
@ -198,7 +198,7 @@ static unsigned int tifm_ms_transfer_data(struct tifm_ms *host)
|
||||
host->block_pos);
|
||||
|
||||
while (length) {
|
||||
unsigned int uninitialized_var(p_off);
|
||||
unsigned int p_off;
|
||||
|
||||
if (host->req->long_data) {
|
||||
pg = nth_page(sg_page(&host->req->sg),
|
||||
|
@ -1484,8 +1484,10 @@ static void fastrpc_notify_users(struct fastrpc_user *user)
|
||||
struct fastrpc_invoke_ctx *ctx;
|
||||
|
||||
spin_lock(&user->lock);
|
||||
list_for_each_entry(ctx, &user->pending, node)
|
||||
list_for_each_entry(ctx, &user->pending, node) {
|
||||
ctx->retval = -EPIPE;
|
||||
complete(&ctx->work);
|
||||
}
|
||||
spin_unlock(&user->lock);
|
||||
}
|
||||
|
||||
@ -1495,7 +1497,9 @@ static void fastrpc_rpmsg_remove(struct rpmsg_device *rpdev)
|
||||
struct fastrpc_user *user;
|
||||
unsigned long flags;
|
||||
|
||||
/* No invocations past this point */
|
||||
spin_lock_irqsave(&cctx->lock, flags);
|
||||
cctx->rpdev = NULL;
|
||||
list_for_each_entry(user, &cctx->users, user)
|
||||
fastrpc_notify_users(user);
|
||||
spin_unlock_irqrestore(&cctx->lock, flags);
|
||||
@ -1503,7 +1507,6 @@ static void fastrpc_rpmsg_remove(struct rpmsg_device *rpdev)
|
||||
misc_deregister(&cctx->miscdev);
|
||||
of_platform_depopulate(&rpdev->dev);
|
||||
|
||||
cctx->rpdev = NULL;
|
||||
fastrpc_channel_ctx_put(cctx);
|
||||
}
|
||||
|
||||
|
@ -475,7 +475,7 @@ static void sdhci_read_block_pio(struct sdhci_host *host)
|
||||
{
|
||||
unsigned long flags;
|
||||
size_t blksize, len, chunk;
|
||||
u32 uninitialized_var(scratch);
|
||||
u32 scratch;
|
||||
u8 *buf;
|
||||
|
||||
DBG("PIO reading\n");
|
||||
|
@ -1715,6 +1715,9 @@ static void construct_request_response(struct vub300_mmc_host *vub300,
|
||||
int bytes = 3 & less_cmd;
|
||||
int words = less_cmd >> 2;
|
||||
u8 *r = vub300->resp.response.command_response;
|
||||
|
||||
if (!resp_len)
|
||||
return;
|
||||
if (bytes == 3) {
|
||||
cmd->resp[words] = (r[1 + (words << 2)] << 24)
|
||||
| (r[2 + (words << 2)] << 16)
|
||||
|
@ -36,25 +36,25 @@ int ingenic_ecc_correct(struct ingenic_ecc *ecc,
|
||||
void ingenic_ecc_release(struct ingenic_ecc *ecc);
|
||||
struct ingenic_ecc *of_ingenic_ecc_get(struct device_node *np);
|
||||
#else /* CONFIG_MTD_NAND_INGENIC_ECC */
|
||||
int ingenic_ecc_calculate(struct ingenic_ecc *ecc,
|
||||
static inline int ingenic_ecc_calculate(struct ingenic_ecc *ecc,
|
||||
struct ingenic_ecc_params *params,
|
||||
const u8 *buf, u8 *ecc_code)
|
||||
{
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
int ingenic_ecc_correct(struct ingenic_ecc *ecc,
|
||||
static inline int ingenic_ecc_correct(struct ingenic_ecc *ecc,
|
||||
struct ingenic_ecc_params *params, u8 *buf,
|
||||
u8 *ecc_code)
|
||||
{
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
void ingenic_ecc_release(struct ingenic_ecc *ecc)
|
||||
static inline void ingenic_ecc_release(struct ingenic_ecc *ecc)
|
||||
{
|
||||
}
|
||||
|
||||
struct ingenic_ecc *of_ingenic_ecc_get(struct device_node *np)
|
||||
static inline struct ingenic_ecc *of_ingenic_ecc_get(struct device_node *np)
|
||||
{
|
||||
return ERR_PTR(-ENODEV);
|
||||
}
|
||||
|
@ -2401,6 +2401,12 @@ static int marvell_nfc_setup_data_interface(struct nand_chip *chip, int chipnr,
|
||||
NDTR1_WAIT_MODE;
|
||||
}
|
||||
|
||||
/*
|
||||
* Reset nfc->selected_chip so the next command will cause the timing
|
||||
* registers to be updated in marvell_nfc_select_target().
|
||||
*/
|
||||
nfc->selected_chip = NULL;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
@ -2828,10 +2834,6 @@ static int marvell_nfc_init(struct marvell_nfc *nfc)
|
||||
regmap_update_bits(sysctrl_base, GENCONF_CLK_GATING_CTRL,
|
||||
GENCONF_CLK_GATING_CTRL_ND_GATE,
|
||||
GENCONF_CLK_GATING_CTRL_ND_GATE);
|
||||
|
||||
regmap_update_bits(sysctrl_base, GENCONF_ND_CLK_CTRL,
|
||||
GENCONF_ND_CLK_CTRL_EN,
|
||||
GENCONF_ND_CLK_CTRL_EN);
|
||||
}
|
||||
|
||||
/* Configure the DMA if appropriate */
|
||||
|
@ -131,7 +131,7 @@ void __nand_calculate_ecc(const unsigned char *buf, unsigned int eccsize,
|
||||
/* rp0..rp15..rp17 are the various accumulated parities (per byte) */
|
||||
uint32_t rp0, rp1, rp2, rp3, rp4, rp5, rp6, rp7;
|
||||
uint32_t rp8, rp9, rp10, rp11, rp12, rp13, rp14, rp15, rp16;
|
||||
uint32_t uninitialized_var(rp17); /* to make compiler happy */
|
||||
uint32_t rp17;
|
||||
uint32_t par; /* the cumulative parity for all data */
|
||||
uint32_t tmppar; /* the cumulative parity for this iteration;
|
||||
for rp12, rp14 and rp16 at the end of the
|
||||
|
@ -291,7 +291,7 @@ static int s3c2410_nand_setrate(struct s3c2410_nand_info *info)
|
||||
int tacls_max = (info->cpu_type == TYPE_S3C2412) ? 8 : 4;
|
||||
int tacls, twrph0, twrph1;
|
||||
unsigned long clkrate = clk_get_rate(info->clk);
|
||||
unsigned long uninitialized_var(set), cfg, uninitialized_var(mask);
|
||||
unsigned long set, cfg, mask;
|
||||
unsigned long flags;
|
||||
|
||||
/* calculate the timing information for the controller */
|
||||
|
@ -126,8 +126,8 @@ static int afs_parse_v1_partition(struct mtd_info *mtd,
|
||||
* Static checks cannot see that we bail out if we have an error
|
||||
* reading the footer.
|
||||
*/
|
||||
u_int uninitialized_var(iis_ptr);
|
||||
u_int uninitialized_var(img_ptr);
|
||||
u_int iis_ptr;
|
||||
u_int img_ptr;
|
||||
u_int ptr;
|
||||
size_t sz;
|
||||
int ret;
|
||||
|
@ -599,7 +599,7 @@ int ubi_eba_read_leb(struct ubi_device *ubi, struct ubi_volume *vol, int lnum,
|
||||
int err, pnum, scrub = 0, vol_id = vol->vol_id;
|
||||
struct ubi_vid_io_buf *vidb;
|
||||
struct ubi_vid_hdr *vid_hdr;
|
||||
uint32_t uninitialized_var(crc);
|
||||
uint32_t crc;
|
||||
|
||||
err = leb_read_lock(ubi, vol_id, lnum);
|
||||
if (err)
|
||||
|