Merge 5.10.26 into android12-5.10-lts
Changes in 5.10.26
	ASoC: ak4458: Add MODULE_DEVICE_TABLE
	ASoC: ak5558: Add MODULE_DEVICE_TABLE
	spi: cadence: set cqspi to the driver_data field of struct device
	ALSA: dice: fix null pointer dereference when node is disconnected
	ALSA: hda/realtek: apply pin quirk for XiaomiNotebook Pro
	ALSA: hda: generic: Fix the micmute led init state
	ALSA: hda/realtek: Apply headset-mic quirks for Xiaomi Redmibook Air
	ALSA: hda/realtek: fix mute/micmute LEDs for HP 840 G8
	ALSA: hda/realtek: fix mute/micmute LEDs for HP 440 G8
	ALSA: hda/realtek: fix mute/micmute LEDs for HP 850 G8
	Revert "PM: runtime: Update device status before letting suppliers suspend"
	s390/vtime: fix increased steal time accounting
	s390/pci: refactor zpci_create_device()
	s390/pci: remove superfluous zdev->zbus check
	s390/pci: fix leak of PCI device structure
	zonefs: Fix O_APPEND async write handling
	zonefs: prevent use of seq files as swap file
	zonefs: fix to update .i_wr_refcnt correctly in zonefs_open_zone()
	btrfs: fix race when cloning extent buffer during rewind of an old root
	btrfs: fix slab cache flags for free space tree bitmap
	vhost-vdpa: fix use-after-free of v->config_ctx
	vhost-vdpa: set v->config_ctx to NULL if eventfd_ctx_fdget() fails
	drm/amd/display: Correct algorithm for reversed gamma
	ASoC: fsl_ssi: Fix TDM slot setup for I2S mode
	ASoC: Intel: bytcr_rt5640: Fix HP Pavilion x2 10-p0XX OVCD current threshold
	ASoC: SOF: Intel: unregister DMIC device on probe error
	ASoC: SOF: intel: fix wrong poll bits in dsp power down
	ASoC: qcom: sdm845: Fix array out of bounds access
	ASoC: qcom: sdm845: Fix array out of range on rx slim channels
	ASoC: codecs: wcd934x: add a sanity check in set channel map
	ASoC: qcom: lpass-cpu: Fix lpass dai ids parse
	ASoC: simple-card-utils: Do not handle device clock
	afs: Fix accessing YFS xattrs on a non-YFS server
	afs: Stop listxattr() from listing "afs.*" attributes
	ALSA: usb-audio: Fix unintentional sign extension issue
	nvme: fix Write Zeroes limitations
	nvme-tcp: fix misuse of __smp_processor_id with preemption enabled
	nvme-tcp: fix possible hang when failing to set io queues
	nvme-tcp: fix a NULL deref when receiving a 0-length r2t PDU
	nvmet: don't check iosqes,iocqes for discovery controllers
	nfsd: Don't keep looking up unhashed files in the nfsd file cache
	nfsd: don't abort copies early
	NFSD: Repair misuse of sv_lock in 5.10.16-rt30.
	NFSD: fix dest to src mount in inter-server COPY
	svcrdma: disable timeouts on rdma backchannel
	vfio: IOMMU_API should be selected
	vhost_vdpa: fix the missing irq_bypass_unregister_producer() invocation
	sunrpc: fix refcount leak for rpc auth modules
	i915/perf: Start hrtimer only if sampling the OA buffer
	pstore: Fix warning in pstore_kill_sb()
	io_uring: ensure that SQPOLL thread is started for exit
	net/qrtr: fix __netdev_alloc_skb call
	kbuild: Fix <linux/version.h> for empty SUBLEVEL or PATCHLEVEL again
	cifs: fix allocation size on newly created files
	riscv: Correct SPARSEMEM configuration
	scsi: lpfc: Fix some error codes in debugfs
	scsi: myrs: Fix a double free in myrs_cleanup()
	scsi: ufs: ufs-mediatek: Correct operator & -> &&
	RISC-V: correct enum sbi_ext_rfence_fid
	counter: stm32-timer-cnt: Report count function when SLAVE_MODE_DISABLED
	gpiolib: Assign fwnode to parent's if no primary one provided
	nvme-rdma: fix possible hang when failing to set io queues
	ibmvnic: add some debugs
	ibmvnic: serialize access to work queue on remove
	tty: serial: stm32-usart: Remove set but unused 'cookie' variables
	serial: stm32: fix DMA initialization error handling
	bpf: Declare __bpf_free_used_maps() unconditionally
	RDMA/rtrs: Remove unnecessary argument dir of rtrs_iu_free
	RDMA/rtrs-srv: Jump to dereg_mr label if allocate iu fails
	RDMA/rtrs: Introduce rtrs_post_send
	RDMA/rtrs: Fix KASAN: stack-out-of-bounds bug
	module: merge repetitive strings in module_sig_check()
	module: avoid *goto*s in module_sig_check()
	module: harden ELF info handling
	scsi: pm80xx: Make mpi_build_cmd locking consistent
	scsi: pm80xx: Make running_req atomic
	scsi: pm80xx: Fix pm8001_mpi_get_nvmd_resp() race condition
	scsi: pm8001: Neaten debug logging macros and uses
	scsi: libsas: Remove notifier indirection
	scsi: libsas: Introduce a _gfp() variant of event notifiers
	scsi: mvsas: Pass gfp_t flags to libsas event notifiers
	scsi: isci: Pass gfp_t flags in isci_port_link_down()
	scsi: isci: Pass gfp_t flags in isci_port_link_up()
	scsi: isci: Pass gfp_t flags in isci_port_bc_change_received()
	RDMA/mlx5: Allow creating all QPs even when non RDMA profile is used
	powerpc/sstep: Fix load-store and update emulation
	powerpc/sstep: Fix darn emulation
	i40e: Fix endianness conversions
	net: phy: micrel: set soft_reset callback to genphy_soft_reset for KSZ8081
	MIPS: compressed: fix build with enabled UBSAN
	drm/amd/display: turn DPMS off on connector unplug
	iwlwifi: Add a new card for MA family
	io_uring: fix inconsistent lock state
	media: cedrus: h264: Support profile controls
	ibmvnic: remove excessive irqsave
	s390/qeth: schedule TX NAPI on QAOB completion
	drm/amd/pm: fulfill the Polaris implementation for get_clock_by_type_with_latency()
	io_uring: don't attempt IO reissue from the ring exit path
	io_uring: clear IOCB_WAITQ for non -EIOCBQUEUED return
	net: bonding: fix error return code of bond_neigh_init()
	regulator: pca9450: Add SD_VSEL GPIO for LDO5
	regulator: pca9450: Enable system reset on WDOG_B assertion
	regulator: pca9450: Clear PRESET_EN bit to fix BUCK1/2/3 voltage setting
	gfs2: Add common helper for holding and releasing the freeze glock
	gfs2: move freeze glock outside the make_fs_rw and _ro functions
	gfs2: bypass signal_our_withdraw if no journal
	powerpc: Force inlining of cpu_has_feature() to avoid build failure
	usb-storage: Add quirk to defeat Kindle's automatic unload
	usbip: Fix incorrect double assignment to udc->ud.tcp_rx
	usb: gadget: configfs: Fix KASAN use-after-free
	usb: typec: Remove vdo[3] part of tps6598x_rx_identity_reg struct
	usb: typec: tcpm: Invoke power_supply_changed for tcpm-source-psy-
	usb: dwc3: gadget: Allow runtime suspend if UDC unbinded
	usb: dwc3: gadget: Prevent EP queuing while stopping transfers
	thunderbolt: Initialize HopID IDAs in tb_switch_alloc()
	thunderbolt: Increase runtime PM reference count on DP tunnel discovery
	iio:adc:stm32-adc: Add HAS_IOMEM dependency
	iio:adc:qcom-spmi-vadc: add default scale to LR_MUX2_BAT_ID channel
	iio: adis16400: Fix an error code in adis16400_initial_setup()
	iio: gyro: mpu3050: Fix error handling in mpu3050_trigger_handler
	iio: adc: ab8500-gpadc: Fix off by 10 to 3
	iio: adc: ad7949: fix wrong ADC result due to incorrect bit mask
	iio: adc: adi-axi-adc: add proper Kconfig dependencies
	iio: hid-sensor-humidity: Fix alignment issue of timestamp channel
	iio: hid-sensor-prox: Fix scale not correct issue
	iio: hid-sensor-temperature: Fix issues of timestamp channel
	counter: stm32-timer-cnt: fix ceiling write max value
	counter: stm32-timer-cnt: fix ceiling miss-alignment with reload register
	PCI: rpadlpar: Fix potential drc_name corruption in store functions
	perf/x86/intel: Fix a crash caused by zero PEBS status
	perf/x86/intel: Fix unchecked MSR access error caused by VLBR_EVENT
	x86/ioapic: Ignore IRQ2 again
	kernel, fs: Introduce and use set_restart_fn() and arch_set_restart_data()
	x86: Move TS_COMPAT back to asm/thread_info.h
	x86: Introduce TS_COMPAT_RESTART to fix get_nr_restart_syscall()
	efivars: respect EFI_UNSUPPORTED return from firmware
	ext4: fix error handling in ext4_end_enable_verity()
	ext4: find old entry again if failed to rename whiteout
	ext4: stop inode update before return
	ext4: do not try to set xattr into ea_inode if value is empty
	ext4: fix potential error in ext4_do_update_inode
	ext4: fix rename whiteout with fast commit
	MAINTAINERS: move some real subsystems off of the staging mailing list
	MAINTAINERS: move the staging subsystem to lists.linux.dev
	static_call: Fix static_call_update() sanity check
	efi: use 32-bit alignment for efi_guid_t literals
	firmware/efi: Fix a use after bug in efi_mem_reserve_persistent
	genirq: Disable interrupts for force threaded handlers
	x86/apic/of: Fix CPU devicetree-node lookups
	cifs: Fix preauth hash corruption
	Linux 5.10.26

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I6f6bdd1dc46dc744c848e778f9edd0be558b46ac
commit 57b60a3a15
@@ -189,12 +189,10 @@ num_phys
 The event interface::
 
 	/* LLDD calls these to notify the class of an event. */
-	void (*notify_port_event)(struct sas_phy *, enum port_event);
-	void (*notify_phy_event)(struct sas_phy *, enum phy_event);
-
-When sas_register_ha() returns, those are set and can be
-called by the LLDD to notify the SAS layer of such events
-the SAS layer.
+	void sas_notify_port_event(struct sas_phy *, enum port_event);
+	void sas_notify_phy_event(struct sas_phy *, enum phy_event);
+	void sas_notify_port_event_gfp(struct sas_phy *, enum port_event, gfp_t);
+	void sas_notify_phy_event_gfp(struct sas_phy *, enum phy_event, gfp_t);
 
 The port notification::
 
@@ -1155,7 +1155,7 @@ M: Joel Fernandes <joel@joelfernandes.org>
 M: Christian Brauner <christian@brauner.io>
 M: Hridya Valsaraju <hridya@google.com>
 M: Suren Baghdasaryan <surenb@google.com>
-L: devel@driverdev.osuosl.org
+L: linux-kernel@vger.kernel.org
 S: Supported
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
 F: drivers/android/
@@ -8007,7 +8007,6 @@ F: drivers/crypto/hisilicon/sec2/sec_main.c
 
 HISILICON STAGING DRIVERS FOR HIKEY 960/970
 M: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
-L: devel@driverdev.osuosl.org
 S: Maintained
 F: drivers/staging/hikey9xx/
 
@@ -16696,7 +16695,7 @@ F: drivers/staging/vt665?/
 
 STAGING SUBSYSTEM
 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-L: devel@driverdev.osuosl.org
+L: linux-staging@lists.linux.dev
 S: Supported
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
 F: drivers/staging/
@@ -18738,7 +18737,7 @@ VME SUBSYSTEM
 M: Martyn Welch <martyn@welchs.me.uk>
 M: Manohar Vanga <manohar.vanga@gmail.com>
 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-L: devel@driverdev.osuosl.org
+L: linux-kernel@vger.kernel.org
 S: Maintained
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
 F: Documentation/driver-api/vme.rst
Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 10
-SUBLEVEL = 25
+SUBLEVEL = 26
 EXTRAVERSION =
 NAME = Dare mighty things
 
@@ -1340,15 +1340,17 @@ endef
 define filechk_version.h
 	if [ $(SUBLEVEL) -gt 255 ]; then \
 		echo \#define LINUX_VERSION_CODE $(shell \
-		expr $(VERSION) \* 65536 + 0$(PATCHLEVEL) \* 256 + 255); \
+		expr $(VERSION) \* 65536 + $(PATCHLEVEL) \* 256 + 255); \
 	else \
 		echo \#define LINUX_VERSION_CODE $(shell \
-		expr $(VERSION) \* 65536 + 0$(PATCHLEVEL) \* 256 + $(SUBLEVEL)); \
+		expr $(VERSION) \* 65536 + $(PATCHLEVEL) \* 256 + $(SUBLEVEL)); \
 	fi; \
 	echo '#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + \
 	((c) > 255 ? 255 : (c)))'
 endef
 
+$(version_h): PATCHLEVEL := $(if $(PATCHLEVEL), $(PATCHLEVEL), 0)
+$(version_h): SUBLEVEL := $(if $(SUBLEVEL), $(SUBLEVEL), 0)
 $(version_h): FORCE
 	$(call filechk,version.h)
 	$(Q)rm -f $(old_version_h)
@@ -36,6 +36,7 @@ KBUILD_AFLAGS := $(KBUILD_AFLAGS) -D__ASSEMBLY__ \
 
 # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
 KCOV_INSTRUMENT := n
+UBSAN_SANITIZE := n
 
 # decompressor objects (linked with vmlinuz)
 vmlinuzobjs-y := $(obj)/head.o $(obj)/decompress.o $(obj)/string.o
@@ -7,7 +7,7 @@
 #include <linux/bug.h>
 #include <asm/cputable.h>
 
-static inline bool early_cpu_has_feature(unsigned long feature)
+static __always_inline bool early_cpu_has_feature(unsigned long feature)
 {
 	return !!((CPU_FTRS_ALWAYS & feature) ||
 		  (CPU_FTRS_POSSIBLE & cur_cpu_spec->cpu_features & feature));
@@ -46,7 +46,7 @@ static __always_inline bool cpu_has_feature(unsigned long feature)
 	return static_branch_likely(&cpu_feature_keys[i]);
 }
 #else
-static inline bool cpu_has_feature(unsigned long feature)
+static __always_inline bool cpu_has_feature(unsigned long feature)
 {
 	return early_cpu_has_feature(feature);
 }
@@ -1853,7 +1853,7 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
 			goto compute_done;
 		}
 
-		return -1;
+		goto unknown_opcode;
 #ifdef __powerpc64__
 	case 777:	/* modsd */
 		if (!cpu_has_feature(CPU_FTR_ARCH_300))
@@ -2909,6 +2909,20 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
 
 	}
 
+	if (OP_IS_LOAD_STORE(op->type) && (op->type & UPDATE)) {
+		switch (GETTYPE(op->type)) {
+		case LOAD:
+			if (ra == rd)
+				goto unknown_opcode;
+			fallthrough;
+		case STORE:
+		case LOAD_FP:
+		case STORE_FP:
+			if (ra == 0)
+				goto unknown_opcode;
+		}
+	}
+
 #ifdef CONFIG_VSX
 	if ((GETTYPE(op->type) == LOAD_VSX ||
 	     GETTYPE(op->type) == STORE_VSX) &&
@@ -84,7 +84,6 @@ config RISCV
 	select PCI_MSI if PCI
 	select RISCV_INTC
 	select RISCV_TIMER if RISCV_SBI
-	select SPARSEMEM_STATIC if 32BIT
 	select SPARSE_IRQ
 	select SYSCTL_EXCEPTION_TRACE
 	select THREAD_INFO_IN_TASK
@@ -145,7 +144,8 @@ config ARCH_FLATMEM_ENABLE
 config ARCH_SPARSEMEM_ENABLE
 	def_bool y
 	depends on MMU
-	select SPARSEMEM_VMEMMAP_ENABLE
+	select SPARSEMEM_STATIC if 32BIT && SPARSMEM
+	select SPARSEMEM_VMEMMAP_ENABLE if 64BIT
 
 config ARCH_SELECT_MEMORY_MODEL
 	def_bool ARCH_SPARSEMEM_ENABLE
@@ -51,10 +51,10 @@ enum sbi_ext_rfence_fid {
 	SBI_EXT_RFENCE_REMOTE_FENCE_I = 0,
 	SBI_EXT_RFENCE_REMOTE_SFENCE_VMA,
 	SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID,
-	SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA,
 	SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA_VMID,
-	SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA,
+	SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA,
 	SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA_ASID,
+	SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA,
 };
 
 enum sbi_ext_hsm_fid {
@@ -201,8 +201,8 @@ extern unsigned int s390_pci_no_rid;
   Prototypes
 ----------------------------------------------------------------------------- */
 /* Base stuff */
-int zpci_create_device(struct zpci_dev *);
-void zpci_remove_device(struct zpci_dev *zdev);
+int zpci_create_device(u32 fid, u32 fh, enum zpci_state state);
+void zpci_remove_device(struct zpci_dev *zdev, bool set_error);
 int zpci_enable_device(struct zpci_dev *);
 int zpci_disable_device(struct zpci_dev *);
 int zpci_register_ioat(struct zpci_dev *, u8, u64, u64, u64);
@@ -212,7 +212,7 @@ void zpci_remove_reserved_devices(void);
 /* CLP */
 int clp_setup_writeback_mio(void);
 int clp_scan_pci_devices(void);
-int clp_add_pci_device(u32, u32, int);
+int clp_query_pci_fn(struct zpci_dev *zdev);
 int clp_enable_fh(struct zpci_dev *, u8);
 int clp_disable_fh(struct zpci_dev *);
 int clp_get_state(u32 fid, enum zpci_state *state);
@@ -217,7 +217,7 @@ void vtime_flush(struct task_struct *tsk)
 	avg_steal = S390_lowcore.avg_steal_timer / 2;
 	if ((s64) steal > 0) {
 		S390_lowcore.steal_timer = 0;
-		account_steal_time(steal);
+		account_steal_time(cputime_to_nsecs(steal));
 		avg_steal += steal;
 	}
 	S390_lowcore.avg_steal_timer = avg_steal;
@@ -682,56 +682,101 @@ int zpci_disable_device(struct zpci_dev *zdev)
 }
 EXPORT_SYMBOL_GPL(zpci_disable_device);
 
-void zpci_remove_device(struct zpci_dev *zdev)
+/* zpci_remove_device - Removes the given zdev from the PCI core
+ * @zdev: the zdev to be removed from the PCI core
+ * @set_error: if true the device's error state is set to permanent failure
+ *
+ * Sets a zPCI device to a configured but offline state; the zPCI
+ * device is still accessible through its hotplug slot and the zPCI
+ * API but is removed from the common code PCI bus, making it
+ * no longer available to drivers.
+ */
+void zpci_remove_device(struct zpci_dev *zdev, bool set_error)
 {
 	struct zpci_bus *zbus = zdev->zbus;
 	struct pci_dev *pdev;
 
+	if (!zdev->zbus->bus)
+		return;
+
 	pdev = pci_get_slot(zbus->bus, zdev->devfn);
 	if (pdev) {
-		if (pdev->is_virtfn)
-			return zpci_iov_remove_virtfn(pdev, zdev->vfn);
+		if (set_error)
+			pdev->error_state = pci_channel_io_perm_failure;
+		if (pdev->is_virtfn) {
+			zpci_iov_remove_virtfn(pdev, zdev->vfn);
+			/* balance pci_get_slot */
+			pci_dev_put(pdev);
+			return;
+		}
 		pci_stop_and_remove_bus_device_locked(pdev);
+		/* balance pci_get_slot */
+		pci_dev_put(pdev);
 	}
 }
 
-int zpci_create_device(struct zpci_dev *zdev)
+/**
+ * zpci_create_device() - Create a new zpci_dev and add it to the zbus
+ * @fid: Function ID of the device to be created
+ * @fh: Current Function Handle of the device to be created
+ * @state: Initial state after creation either Standby or Configured
+ *
+ * Creates a new zpci device and adds it to its, possibly newly created, zbus
+ * as well as zpci_list.
+ *
+ * Returns: 0 on success, an error value otherwise
+ */
+int zpci_create_device(u32 fid, u32 fh, enum zpci_state state)
 {
+	struct zpci_dev *zdev;
 	int rc;
 
+	zpci_dbg(3, "add fid:%x, fh:%x, c:%d\n", fid, fh, state);
+	zdev = kzalloc(sizeof(*zdev), GFP_KERNEL);
+	if (!zdev)
+		return -ENOMEM;
+
+	/* FID and Function Handle are the static/dynamic identifiers */
+	zdev->fid = fid;
+	zdev->fh = fh;
+
+	/* Query function properties and update zdev */
+	rc = clp_query_pci_fn(zdev);
+	if (rc)
+		goto error;
+	zdev->state = state;
+
 	kref_init(&zdev->kref);
+	mutex_init(&zdev->lock);
+
+	rc = zpci_init_iommu(zdev);
+	if (rc)
+		goto error;
+
+	if (zdev->state == ZPCI_FN_STATE_CONFIGURED) {
+		rc = zpci_enable_device(zdev);
+		if (rc)
+			goto error_destroy_iommu;
+	}
+
+	rc = zpci_bus_device_register(zdev, &pci_root_ops);
+	if (rc)
+		goto error_disable;
 
 	spin_lock(&zpci_list_lock);
 	list_add_tail(&zdev->entry, &zpci_list);
 	spin_unlock(&zpci_list_lock);
 
-	rc = zpci_init_iommu(zdev);
-	if (rc)
-		goto out;
-
-	mutex_init(&zdev->lock);
-	if (zdev->state == ZPCI_FN_STATE_CONFIGURED) {
-		rc = zpci_enable_device(zdev);
-		if (rc)
-			goto out_destroy_iommu;
-	}
-
-	rc = zpci_bus_device_register(zdev, &pci_root_ops);
-	if (rc)
-		goto out_disable;
-
 	return 0;
 
-out_disable:
+error_disable:
 	if (zdev->state == ZPCI_FN_STATE_ONLINE)
 		zpci_disable_device(zdev);
-
-out_destroy_iommu:
+error_destroy_iommu:
 	zpci_destroy_iommu(zdev);
-out:
-	spin_lock(&zpci_list_lock);
-	list_del(&zdev->entry);
-	spin_unlock(&zpci_list_lock);
+error:
+	zpci_dbg(0, "add fid:%x, rc:%d\n", fid, rc);
+	kfree(zdev);
 	return rc;
 }
 
@@ -740,7 +785,7 @@ void zpci_release_device(struct kref *kref)
 	struct zpci_dev *zdev = container_of(kref, struct zpci_dev, kref);
 
-	if (zdev->zbus->bus)
-		zpci_remove_device(zdev);
+	zpci_remove_device(zdev, false);
 
 	switch (zdev->state) {
 	case ZPCI_FN_STATE_ONLINE:
@@ -181,7 +181,7 @@ static int clp_store_query_pci_fn(struct zpci_dev *zdev,
 	return 0;
 }
 
-static int clp_query_pci_fn(struct zpci_dev *zdev, u32 fh)
+int clp_query_pci_fn(struct zpci_dev *zdev)
 {
 	struct clp_req_rsp_query_pci *rrb;
 	int rc;
@@ -194,7 +194,7 @@ static int clp_query_pci_fn(struct zpci_dev *zdev, u32 fh)
 	rrb->request.hdr.len = sizeof(rrb->request);
 	rrb->request.hdr.cmd = CLP_QUERY_PCI_FN;
 	rrb->response.hdr.len = sizeof(rrb->response);
-	rrb->request.fh = fh;
+	rrb->request.fh = zdev->fh;
 
 	rc = clp_req(rrb, CLP_LPS_PCI);
 	if (!rc && rrb->response.hdr.rsp == CLP_RC_OK) {
@@ -212,40 +212,6 @@ static int clp_query_pci_fn(struct zpci_dev *zdev, u32 fh)
 	return rc;
 }
 
-int clp_add_pci_device(u32 fid, u32 fh, int configured)
-{
-	struct zpci_dev *zdev;
-	int rc = -ENOMEM;
-
-	zpci_dbg(3, "add fid:%x, fh:%x, c:%d\n", fid, fh, configured);
-	zdev = kzalloc(sizeof(*zdev), GFP_KERNEL);
-	if (!zdev)
-		goto error;
-
-	zdev->fh = fh;
-	zdev->fid = fid;
-
-	/* Query function properties and update zdev */
-	rc = clp_query_pci_fn(zdev, fh);
-	if (rc)
-		goto error;
-
-	if (configured)
-		zdev->state = ZPCI_FN_STATE_CONFIGURED;
-	else
-		zdev->state = ZPCI_FN_STATE_STANDBY;
-
-	rc = zpci_create_device(zdev);
-	if (rc)
-		goto error;
-	return 0;
-
-error:
-	zpci_dbg(0, "add fid:%x, rc:%d\n", fid, rc);
-	kfree(zdev);
-	return rc;
-}
-
 static int clp_refresh_fh(u32 fid);
 /*
  * Enable/Disable a given PCI function and update its function handle if
@@ -408,7 +374,7 @@ static void __clp_add(struct clp_fh_list_entry *entry, void *data)
 
 	zdev = get_zdev_by_fid(entry->fid);
 	if (!zdev)
-		clp_add_pci_device(entry->fid, entry->fh, entry->config_state);
+		zpci_create_device(entry->fid, entry->fh, entry->config_state);
 }
 
 int clp_scan_pci_devices(void)
@@ -76,20 +76,17 @@ void zpci_event_error(void *data)
 static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
 {
 	struct zpci_dev *zdev = get_zdev_by_fid(ccdf->fid);
-	struct pci_dev *pdev = NULL;
 	enum zpci_state state;
+	struct pci_dev *pdev;
 	int ret;
 
-	if (zdev && zdev->zbus && zdev->zbus->bus)
-		pdev = pci_get_slot(zdev->zbus->bus, zdev->devfn);
-
 	zpci_err("avail CCDF:\n");
 	zpci_err_hex(ccdf, sizeof(*ccdf));
 
 	switch (ccdf->pec) {
 	case 0x0301: /* Reserved|Standby -> Configured */
 		if (!zdev) {
-			ret = clp_add_pci_device(ccdf->fid, ccdf->fh, 1);
+			zpci_create_device(ccdf->fid, ccdf->fh, ZPCI_FN_STATE_CONFIGURED);
 			break;
 		}
 		/* the configuration request may be stale */
@@ -116,7 +113,7 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
 			break;
 	case 0x0302: /* Reserved -> Standby */
 		if (!zdev) {
-			clp_add_pci_device(ccdf->fid, ccdf->fh, 0);
+			zpci_create_device(ccdf->fid, ccdf->fh, ZPCI_FN_STATE_STANDBY);
 			break;
 		}
 		zdev->fh = ccdf->fh;
@@ -124,8 +121,7 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
 	case 0x0303: /* Deconfiguration requested */
 		if (!zdev)
 			break;
-		if (pdev)
-			zpci_remove_device(zdev);
+		zpci_remove_device(zdev, false);
 
 		ret = zpci_disable_device(zdev);
 		if (ret)
@@ -140,12 +136,10 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
 	case 0x0304: /* Configured -> Standby|Reserved */
 		if (!zdev)
 			break;
-		if (pdev) {
-			/* Give the driver a hint that the function is
-			 * already unusable. */
-			pdev->error_state = pci_channel_io_perm_failure;
-			zpci_remove_device(zdev);
-		}
+		/* Give the driver a hint that the function is
+		 * already unusable.
+		 */
+		zpci_remove_device(zdev, true);
 
 		zdev->fh = ccdf->fh;
 		zpci_disable_device(zdev);
@@ -3562,6 +3562,9 @@ static int intel_pmu_hw_config(struct perf_event *event)
 		return ret;
 
 	if (event->attr.precise_ip) {
+		if ((event->attr.config & INTEL_ARCH_EVENT_MASK) == INTEL_FIXED_VLBR_EVENT)
+			return -EINVAL;
+
 		if (!(event->attr.freq || (event->attr.wakeup_events && !event->attr.watermark))) {
 			event->hw.flags |= PERF_X86_EVENT_AUTO_RELOAD;
 			if (!(event->attr.sample_type &
@@ -1894,7 +1894,7 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs, struct perf_sample_d
 		 */
 		if (!pebs_status && cpuc->pebs_enabled &&
 			!(cpuc->pebs_enabled & (cpuc->pebs_enabled-1)))
-			pebs_status = cpuc->pebs_enabled;
+			pebs_status = p->status = cpuc->pebs_enabled;
 
 		bit = find_first_bit((unsigned long *)&pebs_status,
 					x86_pmu.max_pebs_events);
@@ -552,15 +552,6 @@ static inline void arch_thread_struct_whitelist(unsigned long *offset,
 	*size = fpu_kernel_xstate_size;
 }
 
-/*
- * Thread-synchronous status.
- *
- * This is different from the flags in that nobody else
- * ever touches our thread-synchronous status, so we don't
- * have to worry about atomic accesses.
- */
-#define TS_COMPAT		0x0002	/* 32bit syscall active (64BIT)*/
-
 static inline void
 native_load_sp0(unsigned long sp0)
 {
@@ -216,10 +216,31 @@ static inline int arch_within_stack_frames(const void * const stack,
 
 #endif
 
+/*
+ * Thread-synchronous status.
+ *
+ * This is different from the flags in that nobody else
+ * ever touches our thread-synchronous status, so we don't
+ * have to worry about atomic accesses.
+ */
+#define TS_COMPAT		0x0002	/* 32bit syscall active (64BIT)*/
+
+#ifndef __ASSEMBLY__
 #ifdef CONFIG_COMPAT
 #define TS_I386_REGS_POKED	0x0004	/* regs poked by 32-bit ptracer */
+#define TS_COMPAT_RESTART	0x0008
+
+#define arch_set_restart_data arch_set_restart_data
+
+static inline void arch_set_restart_data(struct restart_block *restart)
+{
+	struct thread_info *ti = current_thread_info();
+
+	if (ti->status & TS_COMPAT)
+		ti->status |= TS_COMPAT_RESTART;
+	else
+		ti->status &= ~TS_COMPAT_RESTART;
+}
 #endif
 
-#ifndef __ASSEMBLY__
-
 #ifdef CONFIG_X86_32
 #define in_ia32_syscall() true
@@ -2317,6 +2317,11 @@ static int cpuid_to_apicid[] = {
 	[0 ... NR_CPUS - 1] = -1,
 };
 
+bool arch_match_cpu_phys_id(int cpu, u64 phys_id)
+{
+	return phys_id == cpuid_to_apicid[cpu];
+}
+
 #ifdef CONFIG_SMP
 /**
  * apic_id_is_primary_thread - Check whether APIC ID belongs to a primary thread
@@ -1033,6 +1033,16 @@ static int mp_map_pin_to_irq(u32 gsi, int idx, int ioapic, int pin,
 	if (idx >= 0 && test_bit(mp_irqs[idx].srcbus, mp_bus_not_pci)) {
 		irq = mp_irqs[idx].srcbusirq;
 		legacy = mp_is_legacy_irq(irq);
+		/*
+		 * IRQ2 is unusable for historical reasons on systems which
+		 * have a legacy PIC. See the comment vs. IRQ2 further down.
+		 *
+		 * If this gets removed at some point then the related code
+		 * in lapic_assign_system_vectors() needs to be adjusted as
+		 * well.
+		 */
+		if (legacy && irq == PIC_CASCADE_IR)
+			return -EINVAL;
 	}
 
 	mutex_lock(&ioapic_mutex);
@@ -766,30 +766,8 @@ handle_signal(struct ksignal *ksig, struct pt_regs *regs)
 
 static inline unsigned long get_nr_restart_syscall(const struct pt_regs *regs)
 {
-	/*
-	 * This function is fundamentally broken as currently
-	 * implemented.
-	 *
-	 * The idea is that we want to trigger a call to the
-	 * restart_block() syscall and that we want in_ia32_syscall(),
-	 * in_x32_syscall(), etc. to match whatever they were in the
-	 * syscall being restarted. We assume that the syscall
-	 * instruction at (regs->ip - 2) matches whatever syscall
-	 * instruction we used to enter in the first place.
-	 *
-	 * The problem is that we can get here when ptrace pokes
-	 * syscall-like values into regs even if we're not in a syscall
-	 * at all.
-	 *
-	 * For now, we maintain historical behavior and guess based on
-	 * stored state. We could do better by saving the actual
-	 * syscall arch in restart_block or (with caveats on x32) by
-	 * checking if regs->ip points to 'int $0x80'. The current
-	 * behavior is incorrect if a tracer has a different bitness
-	 * than the tracee.
-	 */
 #ifdef CONFIG_IA32_EMULATION
-	if (current_thread_info()->status & (TS_COMPAT|TS_I386_REGS_POKED))
+	if (current_thread_info()->status & TS_COMPAT_RESTART)
 		return __NR_ia32_restart_syscall;
 #endif
 #ifdef CONFIG_X86_X32_ABI
@@ -325,22 +325,22 @@ static void rpm_put_suppliers(struct device *dev)
 static int __rpm_callback(int (*cb)(struct device *), struct device *dev)
 	__releases(&dev->power.lock) __acquires(&dev->power.lock)
 {
-	bool use_links = dev->power.links_count > 0;
-	bool get = false;
 	int retval, idx;
-	bool put;
+	bool use_links = dev->power.links_count > 0;
 
 	if (dev->power.irq_safe) {
 		spin_unlock(&dev->power.lock);
-	} else if (!use_links) {
-		spin_unlock_irq(&dev->power.lock);
 	} else {
-		get = dev->power.runtime_status == RPM_RESUMING;
-
 		spin_unlock_irq(&dev->power.lock);
 
-		/* Resume suppliers if necessary. */
-		if (get) {
+		/*
+		 * Resume suppliers if necessary.
+		 *
+		 * The device's runtime PM status cannot change until this
+		 * routine returns, so it is safe to read the status outside of
+		 * the lock.
+		 */
+		if (use_links && dev->power.runtime_status == RPM_RESUMING) {
 			idx = device_links_read_lock();
 
 			retval = rpm_get_suppliers(dev);
@@ -355,36 +355,24 @@ static int __rpm_callback(int (*cb)(struct device *), struct device *dev)
 
 	if (dev->power.irq_safe) {
 		spin_lock(&dev->power.lock);
-		return retval;
-	}
-
-	spin_lock_irq(&dev->power.lock);
-
-	if (!use_links)
-		return retval;
-
-	/*
-	 * If the device is suspending and the callback has returned success,
-	 * drop the usage counters of the suppliers that have been reference
-	 * counted on its resume.
-	 *
-	 * Do that if the resume fails too.
-	 */
-	put = dev->power.runtime_status == RPM_SUSPENDING && !retval;
-	if (put)
-		__update_runtime_status(dev, RPM_SUSPENDED);
-	else
-		put = get && retval;
-
-	if (put) {
-		spin_unlock_irq(&dev->power.lock);
-
-		idx = device_links_read_lock();
-
-fail:
-		rpm_put_suppliers(dev);
-
-		device_links_read_unlock(idx);
-
-		spin_lock_irq(&dev->power.lock);
-	}
+	} else {
+		/*
+		 * If the device is suspending and the callback has returned
+		 * success, drop the usage counters of the suppliers that have
+		 * been reference counted on its resume.
+		 *
+		 * Do that if resume fails too.
+		 */
+		if (use_links
+		    && ((dev->power.runtime_status == RPM_SUSPENDING && !retval)
+		    || (dev->power.runtime_status == RPM_RESUMING && retval))) {
+			idx = device_links_read_lock();
+
+ fail:
+			rpm_put_suppliers(dev);
+
+			device_links_read_unlock(idx);
+		}
+
+		spin_lock_irq(&dev->power.lock);
+	}
@@ -31,7 +31,7 @@ struct stm32_timer_cnt {
struct counter_device counter;
struct regmap *regmap;
struct clk *clk;
u32 ceiling;
u32 max_arr;
bool enabled;
struct stm32_timer_regs bak;
};
@@ -44,13 +44,14 @@ struct stm32_timer_cnt {
* @STM32_COUNT_ENCODER_MODE_3: counts on both TI1FP1 and TI2FP2 edges
*/
enum stm32_count_function {
STM32_COUNT_SLAVE_MODE_DISABLED = -1,
STM32_COUNT_SLAVE_MODE_DISABLED,
STM32_COUNT_ENCODER_MODE_1,
STM32_COUNT_ENCODER_MODE_2,
STM32_COUNT_ENCODER_MODE_3,
};

static enum counter_count_function stm32_count_functions[] = {
[STM32_COUNT_SLAVE_MODE_DISABLED] = COUNTER_COUNT_FUNCTION_INCREASE,
[STM32_COUNT_ENCODER_MODE_1] = COUNTER_COUNT_FUNCTION_QUADRATURE_X2_A,
[STM32_COUNT_ENCODER_MODE_2] = COUNTER_COUNT_FUNCTION_QUADRATURE_X2_B,
[STM32_COUNT_ENCODER_MODE_3] = COUNTER_COUNT_FUNCTION_QUADRATURE_X4,
@@ -73,8 +74,10 @@ static int stm32_count_write(struct counter_device *counter,
const unsigned long val)
{
struct stm32_timer_cnt *const priv = counter->priv;
u32 ceiling;

if (val > priv->ceiling)
regmap_read(priv->regmap, TIM_ARR, &ceiling);
if (val > ceiling)
return -EINVAL;

return regmap_write(priv->regmap, TIM_CNT, val);
@@ -90,6 +93,9 @@ static int stm32_count_function_get(struct counter_device *counter,
regmap_read(priv->regmap, TIM_SMCR, &smcr);

switch (smcr & TIM_SMCR_SMS) {
case 0:
*function = STM32_COUNT_SLAVE_MODE_DISABLED;
return 0;
case 1:
*function = STM32_COUNT_ENCODER_MODE_1;
return 0;
@@ -99,9 +105,9 @@ static int stm32_count_function_get(struct counter_device *counter,
case 3:
*function = STM32_COUNT_ENCODER_MODE_3;
return 0;
default:
return -EINVAL;
}

return -EINVAL;
}

static int stm32_count_function_set(struct counter_device *counter,
@@ -112,6 +118,9 @@ static int stm32_count_function_set(struct counter_device *counter,
u32 cr1, sms;

switch (function) {
case STM32_COUNT_SLAVE_MODE_DISABLED:
sms = 0;
break;
case STM32_COUNT_ENCODER_MODE_1:
sms = 1;
break;
@@ -122,8 +131,7 @@ static int stm32_count_function_set(struct counter_device *counter,
sms = 3;
break;
default:
sms = 0;
break;
return -EINVAL;
}

/* Store enable status */
@@ -131,10 +139,6 @@ static int stm32_count_function_set(struct counter_device *counter,

regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN, 0);

/* TIMx_ARR register shouldn't be buffered (ARPE=0) */
regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_ARPE, 0);
regmap_write(priv->regmap, TIM_ARR, priv->ceiling);

regmap_update_bits(priv->regmap, TIM_SMCR, TIM_SMCR_SMS, sms);

/* Make sure that registers are updated */
@@ -185,11 +189,13 @@ static ssize_t stm32_count_ceiling_write(struct counter_device *counter,
if (ret)
return ret;

if (ceiling > priv->max_arr)
return -ERANGE;

/* TIMx_ARR register shouldn't be buffered (ARPE=0) */
regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_ARPE, 0);
regmap_write(priv->regmap, TIM_ARR, ceiling);

priv->ceiling = ceiling;
return len;
}

@@ -274,31 +280,36 @@ static int stm32_action_get(struct counter_device *counter,
size_t function;
int err;

/* Default action mode (e.g. STM32_COUNT_SLAVE_MODE_DISABLED) */
*action = STM32_SYNAPSE_ACTION_NONE;

err = stm32_count_function_get(counter, count, &function);
if (err)
return 0;
return err;

switch (function) {
case STM32_COUNT_SLAVE_MODE_DISABLED:
/* counts on internal clock when CEN=1 */
*action = STM32_SYNAPSE_ACTION_NONE;
return 0;
case STM32_COUNT_ENCODER_MODE_1:
/* counts up/down on TI1FP1 edge depending on TI2FP2 level */
if (synapse->signal->id == count->synapses[0].signal->id)
*action = STM32_SYNAPSE_ACTION_BOTH_EDGES;
break;
else
*action = STM32_SYNAPSE_ACTION_NONE;
return 0;
case STM32_COUNT_ENCODER_MODE_2:
/* counts up/down on TI2FP2 edge depending on TI1FP1 level */
if (synapse->signal->id == count->synapses[1].signal->id)
*action = STM32_SYNAPSE_ACTION_BOTH_EDGES;
break;
else
*action = STM32_SYNAPSE_ACTION_NONE;
return 0;
case STM32_COUNT_ENCODER_MODE_3:
/* counts up/down on both TI1FP1 and TI2FP2 edges */
*action = STM32_SYNAPSE_ACTION_BOTH_EDGES;
break;
return 0;
default:
return -EINVAL;
}

return 0;
}

static const struct counter_ops stm32_timer_cnt_ops = {
@@ -359,7 +370,7 @@ static int stm32_timer_cnt_probe(struct platform_device *pdev)

priv->regmap = ddata->regmap;
priv->clk = ddata->clk;
priv->ceiling = ddata->max_arr;
priv->max_arr = ddata->max_arr;

priv->counter.name = dev_name(dev);
priv->counter.parent = dev;

@@ -927,7 +927,7 @@ int __ref efi_mem_reserve_persistent(phys_addr_t addr, u64 size)
}

/* first try to find a slot in an existing linked list entry */
for (prsv = efi_memreserve_root->next; prsv; prsv = rsv->next) {
for (prsv = efi_memreserve_root->next; prsv; ) {
rsv = memremap(prsv, sizeof(*rsv), MEMREMAP_WB);
index = atomic_fetch_add_unless(&rsv->count, 1, rsv->size);
if (index < rsv->size) {
@@ -937,6 +937,7 @@ int __ref efi_mem_reserve_persistent(phys_addr_t addr, u64 size)
memunmap(rsv);
return efi_mem_reserve_iomem(addr, size);
}
prsv = rsv->next;
memunmap(rsv);
}

@@ -484,6 +484,10 @@ int efivar_init(int (*func)(efi_char16_t *, efi_guid_t, unsigned long, void *),
}
}

break;
case EFI_UNSUPPORTED:
err = -EOPNOTSUPP;
status = EFI_NOT_FOUND;
break;
case EFI_NOT_FOUND:
break;

@@ -574,6 +574,7 @@ int gpiochip_add_data_with_key(struct gpio_chip *gc, void *data,
struct lock_class_key *lock_key,
struct lock_class_key *request_key)
{
struct fwnode_handle *fwnode = gc->parent ? dev_fwnode(gc->parent) : NULL;
unsigned long flags;
int ret = 0;
unsigned i;
@@ -597,6 +598,12 @@ int gpiochip_add_data_with_key(struct gpio_chip *gc, void *data,

of_gpio_dev_init(gc, gdev);

/*
* Assign fwnode depending on the result of the previous calls,
* if none of them succeed, assign it to the parent's one.
*/
gdev->dev.fwnode = dev_fwnode(&gdev->dev) ?: fwnode;

gdev->id = ida_alloc(&gpio_ida, GFP_KERNEL);
if (gdev->id < 0) {
ret = gdev->id;

@@ -1902,6 +1902,33 @@ static void dm_gpureset_commit_state(struct dc_state *dc_state,
return;
}

static void dm_set_dpms_off(struct dc_link *link)
{
struct dc_stream_state *stream_state;
struct amdgpu_dm_connector *aconnector = link->priv;
struct amdgpu_device *adev = drm_to_adev(aconnector->base.dev);
struct dc_stream_update stream_update;
bool dpms_off = true;

memset(&stream_update, 0, sizeof(stream_update));
stream_update.dpms_off = &dpms_off;

mutex_lock(&adev->dm.dc_lock);
stream_state = dc_stream_find_from_link(link);

if (stream_state == NULL) {
DRM_DEBUG_DRIVER("Error finding stream state associated with link!\n");
mutex_unlock(&adev->dm.dc_lock);
return;
}

stream_update.stream = stream_state;
dc_commit_updates_for_stream(stream_state->ctx->dc, NULL, 0,
stream_state, &stream_update,
stream_state->ctx->dc->current_state);
mutex_unlock(&adev->dm.dc_lock);
}

static int dm_resume(void *handle)
{
struct amdgpu_device *adev = handle;
@@ -2353,8 +2380,11 @@ static void handle_hpd_irq(void *param)
drm_kms_helper_hotplug_event(dev);

} else if (dc_link_detect(aconnector->dc_link, DETECT_REASON_HPD)) {
amdgpu_dm_update_connector_after_detect(aconnector);
if (new_connection_type == dc_connection_none &&
aconnector->dc_link->type == dc_connection_none)
dm_set_dpms_off(aconnector->dc_link);

amdgpu_dm_update_connector_after_detect(aconnector);

drm_modeset_lock_all(dev);
dm_restore_drm_connector_state(dev, connector);

@@ -2767,6 +2767,19 @@ struct dc_stream_state *dc_get_stream_at_index(struct dc *dc, uint8_t i)
return NULL;
}

struct dc_stream_state *dc_stream_find_from_link(const struct dc_link *link)
{
uint8_t i;
struct dc_context *ctx = link->ctx;

for (i = 0; i < ctx->dc->current_state->stream_count; i++) {
if (ctx->dc->current_state->streams[i]->link == link)
return ctx->dc->current_state->streams[i];
}

return NULL;
}

enum dc_irq_source dc_interrupt_to_irq_source(
struct dc *dc,
uint32_t src_id,

@@ -297,6 +297,7 @@ void dc_stream_log(const struct dc *dc, const struct dc_stream_state *stream);

uint8_t dc_get_current_stream_count(struct dc *dc);
struct dc_stream_state *dc_get_stream_at_index(struct dc *dc, uint8_t i);
struct dc_stream_state *dc_stream_find_from_link(const struct dc_link *link);

/*
* Return the current frame counter.

@@ -113,6 +113,7 @@ bool cm3_helper_translate_curve_to_hw_format(
struct pwl_result_data *rgb_resulted;
struct pwl_result_data *rgb;
struct pwl_result_data *rgb_plus_1;
struct pwl_result_data *rgb_minus_1;
struct fixed31_32 end_value;

int32_t region_start, region_end;
@@ -140,7 +141,7 @@ bool cm3_helper_translate_curve_to_hw_format(
region_start = -MAX_LOW_POINT;
region_end = NUMBER_REGIONS - MAX_LOW_POINT;
} else {
/* 10 segments
/* 11 segments
* segment is from 2^-10 to 2^0
* There are less than 256 points, for optimization
*/
@@ -154,9 +155,10 @@ bool cm3_helper_translate_curve_to_hw_format(
seg_distr[7] = 4;
seg_distr[8] = 4;
seg_distr[9] = 4;
seg_distr[10] = 1;

region_start = -10;
region_end = 0;
region_end = 1;
}

for (i = region_end - region_start; i < MAX_REGIONS_NUMBER ; i++)
@@ -189,6 +191,10 @@ bool cm3_helper_translate_curve_to_hw_format(
rgb_resulted[hw_points - 1].green = output_tf->tf_pts.green[start_index];
rgb_resulted[hw_points - 1].blue = output_tf->tf_pts.blue[start_index];

rgb_resulted[hw_points].red = rgb_resulted[hw_points - 1].red;
rgb_resulted[hw_points].green = rgb_resulted[hw_points - 1].green;
rgb_resulted[hw_points].blue = rgb_resulted[hw_points - 1].blue;

// All 3 color channels have same x
corner_points[0].red.x = dc_fixpt_pow(dc_fixpt_from_int(2),
dc_fixpt_from_int(region_start));
@@ -259,15 +265,18 @@ bool cm3_helper_translate_curve_to_hw_format(

rgb = rgb_resulted;
rgb_plus_1 = rgb_resulted + 1;
rgb_minus_1 = rgb;

i = 1;
while (i != hw_points + 1) {
if (dc_fixpt_lt(rgb_plus_1->red, rgb->red))
rgb_plus_1->red = rgb->red;
if (dc_fixpt_lt(rgb_plus_1->green, rgb->green))
rgb_plus_1->green = rgb->green;
if (dc_fixpt_lt(rgb_plus_1->blue, rgb->blue))
rgb_plus_1->blue = rgb->blue;
if (i >= hw_points - 1) {
if (dc_fixpt_lt(rgb_plus_1->red, rgb->red))
rgb_plus_1->red = dc_fixpt_add(rgb->red, rgb_minus_1->delta_red);
if (dc_fixpt_lt(rgb_plus_1->green, rgb->green))
rgb_plus_1->green = dc_fixpt_add(rgb->green, rgb_minus_1->delta_green);
if (dc_fixpt_lt(rgb_plus_1->blue, rgb->blue))
rgb_plus_1->blue = dc_fixpt_add(rgb->blue, rgb_minus_1->delta_blue);
}

rgb->delta_red = dc_fixpt_sub(rgb_plus_1->red, rgb->red);
rgb->delta_green = dc_fixpt_sub(rgb_plus_1->green, rgb->green);
@@ -283,6 +292,7 @@ bool cm3_helper_translate_curve_to_hw_format(
}

++rgb_plus_1;
rgb_minus_1 = rgb;
++rgb;
++i;
}

@@ -4771,6 +4771,72 @@ static int smu7_get_clock_by_type(struct pp_hwmgr *hwmgr, enum amd_pp_clock_type
return 0;
}

static int smu7_get_sclks_with_latency(struct pp_hwmgr *hwmgr,
struct pp_clock_levels_with_latency *clocks)
{
struct phm_ppt_v1_information *table_info =
(struct phm_ppt_v1_information *)hwmgr->pptable;
struct phm_ppt_v1_clock_voltage_dependency_table *dep_sclk_table =
table_info->vdd_dep_on_sclk;
int i;

clocks->num_levels = 0;
for (i = 0; i < dep_sclk_table->count; i++) {
if (dep_sclk_table->entries[i].clk) {
clocks->data[clocks->num_levels].clocks_in_khz =
dep_sclk_table->entries[i].clk * 10;
clocks->num_levels++;
}
}

return 0;
}

static int smu7_get_mclks_with_latency(struct pp_hwmgr *hwmgr,
struct pp_clock_levels_with_latency *clocks)
{
struct phm_ppt_v1_information *table_info =
(struct phm_ppt_v1_information *)hwmgr->pptable;
struct phm_ppt_v1_clock_voltage_dependency_table *dep_mclk_table =
table_info->vdd_dep_on_mclk;
int i;

clocks->num_levels = 0;
for (i = 0; i < dep_mclk_table->count; i++) {
if (dep_mclk_table->entries[i].clk) {
clocks->data[clocks->num_levels].clocks_in_khz =
dep_mclk_table->entries[i].clk * 10;
clocks->data[clocks->num_levels].latency_in_us =
smu7_get_mem_latency(hwmgr, dep_mclk_table->entries[i].clk);
clocks->num_levels++;
}
}

return 0;
}

static int smu7_get_clock_by_type_with_latency(struct pp_hwmgr *hwmgr,
enum amd_pp_clock_type type,
struct pp_clock_levels_with_latency *clocks)
{
if (!(hwmgr->chip_id >= CHIP_POLARIS10 &&
hwmgr->chip_id <= CHIP_VEGAM))
return -EINVAL;

switch (type) {
case amd_pp_sys_clock:
smu7_get_sclks_with_latency(hwmgr, clocks);
break;
case amd_pp_mem_clock:
smu7_get_mclks_with_latency(hwmgr, clocks);
break;
default:
return -EINVAL;
}

return 0;
}

static int smu7_notify_cac_buffer_info(struct pp_hwmgr *hwmgr,
uint32_t virtual_addr_low,
uint32_t virtual_addr_hi,
@@ -5188,6 +5254,7 @@ static const struct pp_hwmgr_func smu7_hwmgr_funcs = {
.get_mclk_od = smu7_get_mclk_od,
.set_mclk_od = smu7_set_mclk_od,
.get_clock_by_type = smu7_get_clock_by_type,
.get_clock_by_type_with_latency = smu7_get_clock_by_type_with_latency,
.read_sensor = smu7_read_sensor,
.dynamic_state_management_disable = smu7_disable_dpm_tasks,
.avfs_control = smu7_avfs_control,

@@ -600,7 +600,6 @@ static int append_oa_sample(struct i915_perf_stream *stream,
{
int report_size = stream->oa_buffer.format_size;
struct drm_i915_perf_record_header header;
u32 sample_flags = stream->sample_flags;

header.type = DRM_I915_PERF_RECORD_SAMPLE;
header.pad = 0;
@@ -614,10 +613,8 @@ static int append_oa_sample(struct i915_perf_stream *stream,
return -EFAULT;
buf += sizeof(header);

if (sample_flags & SAMPLE_OA_REPORT) {
if (copy_to_user(buf, report, report_size))
return -EFAULT;
}
if (copy_to_user(buf, report, report_size))
return -EFAULT;

(*offset) += header.size;

@@ -2676,7 +2673,7 @@ static void i915_oa_stream_enable(struct i915_perf_stream *stream)

stream->perf->ops.oa_enable(stream);

if (stream->periodic)
if (stream->sample_flags & SAMPLE_OA_REPORT)
hrtimer_start(&stream->poll_check_timer,
ns_to_ktime(stream->poll_oa_period),
HRTIMER_MODE_REL_PINNED);
@@ -2739,7 +2736,7 @@ static void i915_oa_stream_disable(struct i915_perf_stream *stream)
{
stream->perf->ops.oa_disable(stream);

if (stream->periodic)
if (stream->sample_flags & SAMPLE_OA_REPORT)
hrtimer_cancel(&stream->poll_check_timer);
}

@@ -3022,7 +3019,7 @@ static ssize_t i915_perf_read(struct file *file,
* disabled stream as an error. In particular it might otherwise lead
* to a deadlock for blocking file descriptors...
*/
if (!stream->enabled)
if (!stream->enabled || !(stream->sample_flags & SAMPLE_OA_REPORT))
return -EIO;

if (!(file->f_flags & O_NONBLOCK)) {

@@ -266,6 +266,8 @@ config ADI_AXI_ADC
select IIO_BUFFER
select IIO_BUFFER_HW_CONSUMER
select IIO_BUFFER_DMAENGINE
depends on HAS_IOMEM
depends on OF
help
Say yes here to build support for Analog Devices Generic
AXI ADC IP core. The IP core is used for interfacing with
@@ -912,6 +914,7 @@ config STM32_ADC_CORE
depends on ARCH_STM32 || COMPILE_TEST
depends on OF
depends on REGULATOR
depends on HAS_IOMEM
select IIO_BUFFER
select MFD_STM32_TIMERS
select IIO_STM32_TIMER_TRIGGER

@@ -918,7 +918,7 @@ static int ab8500_gpadc_read_raw(struct iio_dev *indio_dev,
return processed;

/* Return millivolt or milliamps or millicentigrades */
*val = processed * 1000;
*val = processed;
return IIO_VAL_INT;
}

@@ -91,7 +91,7 @@ static int ad7949_spi_read_channel(struct ad7949_adc_chip *ad7949_adc, int *val,
int ret;
int i;
int bits_per_word = ad7949_adc->resolution;
int mask = GENMASK(ad7949_adc->resolution, 0);
int mask = GENMASK(ad7949_adc->resolution - 1, 0);
struct spi_message msg;
struct spi_transfer tx[] = {
{

@@ -598,7 +598,7 @@ static const struct vadc_channels vadc_chans[] = {
VADC_CHAN_NO_SCALE(P_MUX16_1_3, 1)

VADC_CHAN_NO_SCALE(LR_MUX1_BAT_THERM, 0)
VADC_CHAN_NO_SCALE(LR_MUX2_BAT_ID, 0)
VADC_CHAN_VOLT(LR_MUX2_BAT_ID, 0, SCALE_DEFAULT)
VADC_CHAN_NO_SCALE(LR_MUX3_XO_THERM, 0)
VADC_CHAN_NO_SCALE(LR_MUX4_AMUX_THM1, 0)
VADC_CHAN_NO_SCALE(LR_MUX5_AMUX_THM2, 0)

@@ -550,6 +550,8 @@ static irqreturn_t mpu3050_trigger_handler(int irq, void *p)
MPU3050_FIFO_R,
&fifo_values[offset],
toread);
if (ret)
goto out_trigger_unlock;

dev_dbg(mpu3050->dev,
"%04x %04x %04x %04x %04x\n",

@@ -15,7 +15,10 @@
struct hid_humidity_state {
struct hid_sensor_common common_attributes;
struct hid_sensor_hub_attribute_info humidity_attr;
s32 humidity_data;
struct {
s32 humidity_data;
u64 timestamp __aligned(8);
} scan;
int scale_pre_decml;
int scale_post_decml;
int scale_precision;
@@ -125,9 +128,8 @@ static int humidity_proc_event(struct hid_sensor_hub_device *hsdev,
struct hid_humidity_state *humid_st = iio_priv(indio_dev);

if (atomic_read(&humid_st->common_attributes.data_ready))
iio_push_to_buffers_with_timestamp(indio_dev,
&humid_st->humidity_data,
iio_get_time_ns(indio_dev));
iio_push_to_buffers_with_timestamp(indio_dev, &humid_st->scan,
iio_get_time_ns(indio_dev));

return 0;
}
@@ -142,7 +144,7 @@ static int humidity_capture_sample(struct hid_sensor_hub_device *hsdev,

switch (usage_id) {
case HID_USAGE_SENSOR_ATMOSPHERIC_HUMIDITY:
humid_st->humidity_data = *(s32 *)raw_data;
humid_st->scan.humidity_data = *(s32 *)raw_data;

return 0;
default:

@@ -462,8 +462,7 @@ static int adis16400_initial_setup(struct iio_dev *indio_dev)
if (ret)
goto err_ret;

ret = sscanf(indio_dev->name, "adis%u\n", &device_id);
if (ret != 1) {
if (sscanf(indio_dev->name, "adis%u\n", &device_id) != 1) {
ret = -EINVAL;
goto err_ret;
}

@@ -23,6 +23,9 @@ struct prox_state {
struct hid_sensor_common common_attributes;
struct hid_sensor_hub_attribute_info prox_attr;
u32 human_presence;
int scale_pre_decml;
int scale_post_decml;
int scale_precision;
};

/* Channel definitions */
@@ -93,8 +96,9 @@ static int prox_read_raw(struct iio_dev *indio_dev,
ret_type = IIO_VAL_INT;
break;
case IIO_CHAN_INFO_SCALE:
*val = prox_state->prox_attr.units;
ret_type = IIO_VAL_INT;
*val = prox_state->scale_pre_decml;
*val2 = prox_state->scale_post_decml;
ret_type = prox_state->scale_precision;
break;
case IIO_CHAN_INFO_OFFSET:
*val = hid_sensor_convert_exponent(
@@ -234,6 +238,11 @@ static int prox_parse_report(struct platform_device *pdev,
HID_USAGE_SENSOR_HUMAN_PRESENCE,
&st->common_attributes.sensitivity);

st->scale_precision = hid_sensor_format_scale(
hsdev->usage,
&st->prox_attr,
&st->scale_pre_decml, &st->scale_post_decml);

return ret;
}

@@ -15,7 +15,10 @@
struct temperature_state {
struct hid_sensor_common common_attributes;
struct hid_sensor_hub_attribute_info temperature_attr;
s32 temperature_data;
struct {
s32 temperature_data;
u64 timestamp __aligned(8);
} scan;
int scale_pre_decml;
int scale_post_decml;
int scale_precision;
@@ -32,7 +35,7 @@ static const struct iio_chan_spec temperature_channels[] = {
BIT(IIO_CHAN_INFO_SAMP_FREQ) |
BIT(IIO_CHAN_INFO_HYSTERESIS),
},
IIO_CHAN_SOFT_TIMESTAMP(3),
IIO_CHAN_SOFT_TIMESTAMP(1),
};

/* Adjust channel real bits based on report descriptor */
@@ -123,9 +126,8 @@ static int temperature_proc_event(struct hid_sensor_hub_device *hsdev,
struct temperature_state *temp_st = iio_priv(indio_dev);

if (atomic_read(&temp_st->common_attributes.data_ready))
iio_push_to_buffers_with_timestamp(indio_dev,
&temp_st->temperature_data,
iio_get_time_ns(indio_dev));
iio_push_to_buffers_with_timestamp(indio_dev, &temp_st->scan,
iio_get_time_ns(indio_dev));

return 0;
}
@@ -140,7 +142,7 @@ static int temperature_capture_sample(struct hid_sensor_hub_device *hsdev,

switch (usage_id) {
case HID_USAGE_SENSOR_DATA_ENVIRONMENTAL_TEMPERATURE:
temp_st->temperature_data = *(s32 *)raw_data;
temp_st->scan.temperature_data = *(s32 *)raw_data;
return 0;
default:
return -EINVAL;

@@ -2458,8 +2458,6 @@ static int check_qp_type(struct mlx5_ib_dev *dev, struct ib_qp_init_attr *attr,
case MLX5_IB_QPT_HW_GSI:
case IB_QPT_DRIVER:
case IB_QPT_GSI:
if (dev->profile == &raw_eth_profile)
goto out;
case IB_QPT_RAW_PACKET:
case IB_QPT_UD:
case MLX5_IB_QPT_REG_UMR:
@@ -2654,10 +2652,6 @@ static int process_create_flags(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
int create_flags = attr->create_flags;
bool cond;

if (qp->type == IB_QPT_UD && dev->profile == &raw_eth_profile)
if (create_flags & ~MLX5_IB_QP_CREATE_WC_TEST)
return -EINVAL;

if (qp_type == MLX5_IB_QPT_DCT)
return (create_flags) ? -EINVAL : 0;

@@ -4235,6 +4229,23 @@ static int mlx5_ib_modify_dct(struct ib_qp *ibqp, struct ib_qp_attr *attr,
return 0;
}

static bool mlx5_ib_modify_qp_allowed(struct mlx5_ib_dev *dev,
struct mlx5_ib_qp *qp,
enum ib_qp_type qp_type)
{
if (dev->profile != &raw_eth_profile)
return true;

if (qp_type == IB_QPT_RAW_PACKET || qp_type == MLX5_IB_QPT_REG_UMR)
return true;

/* Internal QP used for wc testing, with NOPs in wq */
if (qp->flags & MLX5_IB_QP_CREATE_WC_TEST)
return true;

return false;
}

int mlx5_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
int attr_mask, struct ib_udata *udata)
{
@@ -4247,6 +4258,9 @@ int mlx5_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
int err = -EINVAL;
int port;

if (!mlx5_ib_modify_qp_allowed(dev, qp, ibqp->qp_type))
return -EOPNOTSUPP;

if (ibqp->rwq_ind_tbl)
return -ENOSYS;

@@ -1237,8 +1237,7 @@ static void free_sess_reqs(struct rtrs_clt_sess *sess)
if (req->mr)
ib_dereg_mr(req->mr);
kfree(req->sge);
rtrs_iu_free(req->iu, DMA_TO_DEVICE,
sess->s.dev->ib_dev, 1);
rtrs_iu_free(req->iu, sess->s.dev->ib_dev, 1);
}
kfree(sess->reqs);
sess->reqs = NULL;
@@ -1611,8 +1610,7 @@ static void destroy_con_cq_qp(struct rtrs_clt_con *con)

rtrs_cq_qp_destroy(&con->c);
if (con->rsp_ius) {
rtrs_iu_free(con->rsp_ius, DMA_FROM_DEVICE,
sess->s.dev->ib_dev, con->queue_size);
rtrs_iu_free(con->rsp_ius, sess->s.dev->ib_dev, con->queue_size);
con->rsp_ius = NULL;
con->queue_size = 0;
}
@@ -2252,7 +2250,7 @@ static void rtrs_clt_info_req_done(struct ib_cq *cq, struct ib_wc *wc)
struct rtrs_iu *iu;

iu = container_of(wc->wr_cqe, struct rtrs_iu, cqe);
rtrs_iu_free(iu, DMA_TO_DEVICE, sess->s.dev->ib_dev, 1);
rtrs_iu_free(iu, sess->s.dev->ib_dev, 1);

if (unlikely(wc->status != IB_WC_SUCCESS)) {
rtrs_err(sess->clt, "Sess info request send failed: %s\n",
@@ -2381,7 +2379,7 @@ static void rtrs_clt_info_rsp_done(struct ib_cq *cq, struct ib_wc *wc)

out:
rtrs_clt_update_wc_stats(con);
rtrs_iu_free(iu, DMA_FROM_DEVICE, sess->s.dev->ib_dev, 1);
rtrs_iu_free(iu, sess->s.dev->ib_dev, 1);
rtrs_clt_change_state(sess, state);
}

@@ -2443,9 +2441,9 @@ static int rtrs_send_sess_info(struct rtrs_clt_sess *sess)

out:
if (tx_iu)
rtrs_iu_free(tx_iu, DMA_TO_DEVICE, sess->s.dev->ib_dev, 1);
rtrs_iu_free(tx_iu, sess->s.dev->ib_dev, 1);
if (rx_iu)
rtrs_iu_free(rx_iu, DMA_FROM_DEVICE, sess->s.dev->ib_dev, 1);
rtrs_iu_free(rx_iu, sess->s.dev->ib_dev, 1);
if (unlikely(err))
/* If we've never taken async path because of malloc problems */
rtrs_clt_change_state(sess, RTRS_CLT_CONNECTING_ERR);

@@ -289,8 +289,7 @@ struct rtrs_msg_rdma_hdr {
struct rtrs_iu *rtrs_iu_alloc(u32 queue_size, size_t size, gfp_t t,
struct ib_device *dev, enum dma_data_direction,
void (*done)(struct ib_cq *cq, struct ib_wc *wc));
void rtrs_iu_free(struct rtrs_iu *iu, enum dma_data_direction dir,
struct ib_device *dev, u32 queue_size);
void rtrs_iu_free(struct rtrs_iu *iu, struct ib_device *dev, u32 queue_size);
int rtrs_iu_post_recv(struct rtrs_con *con, struct rtrs_iu *iu);
int rtrs_iu_post_send(struct rtrs_con *con, struct rtrs_iu *iu, size_t size,
struct ib_send_wr *head);

@@ -584,8 +584,7 @@ static void unmap_cont_bufs(struct rtrs_srv_sess *sess)
struct rtrs_srv_mr *srv_mr;

srv_mr = &sess->mrs[i];
rtrs_iu_free(srv_mr->iu, DMA_TO_DEVICE,
sess->s.dev->ib_dev, 1);
rtrs_iu_free(srv_mr->iu, sess->s.dev->ib_dev, 1);
ib_dereg_mr(srv_mr->mr);
ib_dma_unmap_sg(sess->s.dev->ib_dev, srv_mr->sgt.sgl,
srv_mr->sgt.nents, DMA_BIDIRECTIONAL);
@@ -672,7 +671,7 @@ static int map_cont_bufs(struct rtrs_srv_sess *sess)
if (!srv_mr->iu) {
err = -ENOMEM;
rtrs_err(ss, "rtrs_iu_alloc(), err: %d\n", err);
goto free_iu;
goto dereg_mr;
}
}
/* Eventually dma addr for each chunk can be cached */
@@ -688,9 +687,7 @@ static int map_cont_bufs(struct rtrs_srv_sess *sess)
srv_mr = &sess->mrs[mri];
sgt = &srv_mr->sgt;
mr = srv_mr->mr;
free_iu:
rtrs_iu_free(srv_mr->iu, DMA_TO_DEVICE,
sess->s.dev->ib_dev, 1);
rtrs_iu_free(srv_mr->iu, sess->s.dev->ib_dev, 1);
dereg_mr:
ib_dereg_mr(mr);
unmap_sg:
@@ -742,7 +739,7 @@ static void rtrs_srv_info_rsp_done(struct ib_cq *cq, struct ib_wc *wc)
struct rtrs_iu *iu;

iu = container_of(wc->wr_cqe, struct rtrs_iu, cqe);
rtrs_iu_free(iu, DMA_TO_DEVICE, sess->s.dev->ib_dev, 1);
rtrs_iu_free(iu, sess->s.dev->ib_dev, 1);

if (unlikely(wc->status != IB_WC_SUCCESS)) {
rtrs_err(s, "Sess info response send failed: %s\n",
@@ -868,7 +865,7 @@ static int process_info_req(struct rtrs_srv_con *con,
if (unlikely(err)) {
rtrs_err(s, "rtrs_iu_post_send(), err: %d\n", err);
iu_free:
rtrs_iu_free(tx_iu, DMA_TO_DEVICE, sess->s.dev->ib_dev, 1);
rtrs_iu_free(tx_iu, sess->s.dev->ib_dev, 1);
}
rwr_free:
kfree(rwr);
@@ -913,7 +910,7 @@ static void rtrs_srv_info_req_done(struct ib_cq *cq, struct ib_wc *wc)
goto close;

out:
rtrs_iu_free(iu, DMA_FROM_DEVICE, sess->s.dev->ib_dev, 1);
rtrs_iu_free(iu, sess->s.dev->ib_dev, 1);
return;
close:
close_sess(sess);
@@ -936,7 +933,7 @@ static int post_recv_info_req(struct rtrs_srv_con *con)
err = rtrs_iu_post_recv(&con->c, rx_iu);
if (unlikely(err)) {
rtrs_err(s, "rtrs_iu_post_recv(), err: %d\n", err);
rtrs_iu_free(rx_iu, DMA_FROM_DEVICE, sess->s.dev->ib_dev, 1);
rtrs_iu_free(rx_iu, sess->s.dev->ib_dev, 1);
return err;
}

@@ -31,6 +31,7 @@ struct rtrs_iu *rtrs_iu_alloc(u32 queue_size, size_t size, gfp_t gfp_mask,
 		return NULL;
 	for (i = 0; i < queue_size; i++) {
 		iu = &ius[i];
+		iu->direction = dir;
 		iu->buf = kzalloc(size, gfp_mask);
 		if (!iu->buf)
 			goto err;
@@ -41,17 +42,15 @@ struct rtrs_iu *rtrs_iu_alloc(u32 queue_size, size_t size, gfp_t gfp_mask,
 
 		iu->cqe.done = done;
 		iu->size = size;
-		iu->direction = dir;
 	}
 	return ius;
 err:
-	rtrs_iu_free(ius, dir, dma_dev, i);
+	rtrs_iu_free(ius, dma_dev, i);
 	return NULL;
 }
 EXPORT_SYMBOL_GPL(rtrs_iu_alloc);
 
-void rtrs_iu_free(struct rtrs_iu *ius, enum dma_data_direction dir,
-		  struct ib_device *ibdev, u32 queue_size)
+void rtrs_iu_free(struct rtrs_iu *ius, struct ib_device *ibdev, u32 queue_size)
 {
 	struct rtrs_iu *iu;
 	int i;
@@ -61,7 +60,7 @@ void rtrs_iu_free(struct rtrs_iu *ius, enum dma_data_direction dir,
 
 	for (i = 0; i < queue_size; i++) {
 		iu = &ius[i];
-		ib_dma_unmap_single(ibdev, iu->dma_addr, iu->size, dir);
+		ib_dma_unmap_single(ibdev, iu->dma_addr, iu->size, iu->direction);
 		kfree(iu->buf);
 	}
 	kfree(ius);
@@ -105,6 +104,22 @@ int rtrs_post_recv_empty(struct rtrs_con *con, struct ib_cqe *cqe)
 }
 EXPORT_SYMBOL_GPL(rtrs_post_recv_empty);
 
+static int rtrs_post_send(struct ib_qp *qp, struct ib_send_wr *head,
+			  struct ib_send_wr *wr)
+{
+	if (head) {
+		struct ib_send_wr *tail = head;
+
+		while (tail->next)
+			tail = tail->next;
+		tail->next = wr;
+	} else {
+		head = wr;
+	}
+
+	return ib_post_send(qp, head, NULL);
+}
+
 int rtrs_iu_post_send(struct rtrs_con *con, struct rtrs_iu *iu, size_t size,
 		      struct ib_send_wr *head)
 {
@@ -127,17 +142,7 @@ int rtrs_iu_post_send(struct rtrs_con *con, struct rtrs_iu *iu, size_t size,
 		.send_flags = IB_SEND_SIGNALED,
 	};
 
-	if (head) {
-		struct ib_send_wr *tail = head;
-
-		while (tail->next)
-			tail = tail->next;
-		tail->next = &wr;
-	} else {
-		head = &wr;
-	}
-
-	return ib_post_send(con->qp, head, NULL);
+	return rtrs_post_send(con->qp, head, &wr);
 }
 EXPORT_SYMBOL_GPL(rtrs_iu_post_send);
 
@@ -169,17 +174,7 @@ int rtrs_iu_post_rdma_write_imm(struct rtrs_con *con, struct rtrs_iu *iu,
 		if (WARN_ON(sge[i].length == 0))
 			return -EINVAL;
 
-	if (head) {
-		struct ib_send_wr *tail = head;
-
-		while (tail->next)
-			tail = tail->next;
-		tail->next = &wr.wr;
-	} else {
-		head = &wr.wr;
-	}
-
-	return ib_post_send(con->qp, head, NULL);
+	return rtrs_post_send(con->qp, head, &wr.wr);
 }
 EXPORT_SYMBOL_GPL(rtrs_iu_post_rdma_write_imm);
 
@@ -187,26 +182,16 @@ int rtrs_post_rdma_write_imm_empty(struct rtrs_con *con, struct ib_cqe *cqe,
 				   u32 imm_data, enum ib_send_flags flags,
 				   struct ib_send_wr *head)
 {
-	struct ib_send_wr wr;
+	struct ib_rdma_wr wr;
 
-	wr = (struct ib_send_wr) {
-		.wr_cqe = cqe,
-		.send_flags = flags,
-		.opcode = IB_WR_RDMA_WRITE_WITH_IMM,
-		.ex.imm_data = cpu_to_be32(imm_data),
+	wr = (struct ib_rdma_wr) {
+		.wr.wr_cqe = cqe,
+		.wr.send_flags = flags,
+		.wr.opcode = IB_WR_RDMA_WRITE_WITH_IMM,
+		.wr.ex.imm_data = cpu_to_be32(imm_data),
 	};
 
-	if (head) {
-		struct ib_send_wr *tail = head;
-
-		while (tail->next)
-			tail = tail->next;
-		tail->next = &wr;
-	} else {
-		head = &wr;
-	}
-
-	return ib_post_send(con->qp, head, NULL);
+	return rtrs_post_send(con->qp, head, &wr.wr);
 }
 EXPORT_SYMBOL_GPL(rtrs_post_rdma_write_imm_empty);
 
@@ -3918,11 +3918,15 @@ static int bond_neigh_init(struct neighbour *n)
 
 	rcu_read_lock();
 	slave = bond_first_slave_rcu(bond);
-	if (!slave)
+	if (!slave) {
+		ret = -EINVAL;
 		goto out;
+	}
 	slave_ops = slave->dev->netdev_ops;
-	if (!slave_ops->ndo_neigh_setup)
+	if (!slave_ops->ndo_neigh_setup) {
+		ret = -EINVAL;
 		goto out;
+	}
 
 	/* TODO: find another way [1] to implement this.
 	 * Passing a zeroed structure is fragile,
@@ -409,6 +409,8 @@ static void replenish_pools(struct ibmvnic_adapter *adapter)
 		if (adapter->rx_pool[i].active)
 			replenish_rx_pool(adapter, &adapter->rx_pool[i]);
 	}
+
+	netdev_dbg(adapter->netdev, "Replenished %d pools\n", i);
 }
 
 static void release_stats_buffers(struct ibmvnic_adapter *adapter)
@@ -914,6 +916,7 @@ static int ibmvnic_login(struct net_device *netdev)
 
 	__ibmvnic_set_mac(netdev, adapter->mac_addr);
 
+	netdev_dbg(netdev, "[S:%d] Login succeeded\n", adapter->state);
 	return 0;
 }
 
@@ -1343,6 +1346,10 @@ static int ibmvnic_close(struct net_device *netdev)
 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
 	int rc;
 
+	netdev_dbg(netdev, "[S:%d FOP:%d FRR:%d] Closing\n",
+		   adapter->state, adapter->failover_pending,
+		   adapter->force_reset_recovery);
+
 	/* If device failover is pending, just set device state and return.
 	 * Device operation will be handled by reset routine.
 	 */
@@ -1937,8 +1944,10 @@ static int do_reset(struct ibmvnic_adapter *adapter,
 	struct net_device *netdev = adapter->netdev;
 	int i, rc;
 
-	netdev_dbg(adapter->netdev, "Re-setting driver (%d)\n",
-		   rwi->reset_reason);
+	netdev_dbg(adapter->netdev,
+		   "[S:%d FOP:%d] Reset reason %d, reset_state %d\n",
+		   adapter->state, adapter->failover_pending,
+		   rwi->reset_reason, reset_state);
 
 	rtnl_lock();
 	/*
@@ -2097,6 +2106,8 @@ static int do_reset(struct ibmvnic_adapter *adapter,
 	adapter->state = reset_state;
 	rtnl_unlock();
 
+	netdev_dbg(adapter->netdev, "[S:%d FOP:%d] Reset done, rc %d\n",
+		   adapter->state, adapter->failover_pending, rc);
 	return rc;
 }
 
@@ -2166,6 +2177,8 @@ static int do_hard_reset(struct ibmvnic_adapter *adapter,
 	/* restore adapter state if reset failed */
 	if (rc)
 		adapter->state = reset_state;
+	netdev_dbg(adapter->netdev, "[S:%d FOP:%d] Hard reset done, rc %d\n",
+		   adapter->state, adapter->failover_pending, rc);
 	return rc;
 }
 
@@ -2275,6 +2288,11 @@ static void __ibmvnic_reset(struct work_struct *work)
 	}
 
 	clear_bit_unlock(0, &adapter->resetting);
+
+	netdev_dbg(adapter->netdev,
+		   "[S:%d FRR:%d WFR:%d] Done processing resets\n",
+		   adapter->state, adapter->force_reset_recovery,
+		   adapter->wait_for_reset);
 }
 
 static void __ibmvnic_delayed_reset(struct work_struct *work)
@@ -2295,6 +2313,8 @@ static int ibmvnic_reset(struct ibmvnic_adapter *adapter,
 	unsigned long flags;
 	int ret;
 
+	spin_lock_irqsave(&adapter->rwi_lock, flags);
+
 	/*
 	 * If failover is pending don't schedule any other reset.
 	 * Instead let the failover complete. If there is already a
@@ -2315,13 +2335,11 @@ static int ibmvnic_reset(struct ibmvnic_adapter *adapter,
 		goto err;
 	}
 
-	spin_lock_irqsave(&adapter->rwi_lock, flags);
-
 	list_for_each(entry, &adapter->rwi_list) {
 		tmp = list_entry(entry, struct ibmvnic_rwi, list);
 		if (tmp->reset_reason == reason) {
-			netdev_dbg(netdev, "Skipping matching reset\n");
-			spin_unlock_irqrestore(&adapter->rwi_lock, flags);
+			netdev_dbg(netdev, "Skipping matching reset, reason=%d\n",
+				   reason);
 			ret = EBUSY;
 			goto err;
 		}
@@ -2329,8 +2347,6 @@ static int ibmvnic_reset(struct ibmvnic_adapter *adapter,
 
 	rwi = kzalloc(sizeof(*rwi), GFP_ATOMIC);
 	if (!rwi) {
-		spin_unlock_irqrestore(&adapter->rwi_lock, flags);
-		ibmvnic_close(netdev);
 		ret = ENOMEM;
 		goto err;
 	}
@@ -2343,12 +2359,17 @@ static int ibmvnic_reset(struct ibmvnic_adapter *adapter,
 	}
 	rwi->reset_reason = reason;
 	list_add_tail(&rwi->list, &adapter->rwi_list);
-	spin_unlock_irqrestore(&adapter->rwi_lock, flags);
 	netdev_dbg(adapter->netdev, "Scheduling reset (reason %d)\n", reason);
 	schedule_work(&adapter->ibmvnic_reset);
 
-	return 0;
+	ret = 0;
 err:
+	/* ibmvnic_close() below can block, so drop the lock first */
+	spin_unlock_irqrestore(&adapter->rwi_lock, flags);
+
+	if (ret == ENOMEM)
+		ibmvnic_close(netdev);
+
 	return -ret;
 }
 
@@ -5359,7 +5380,18 @@ static int ibmvnic_remove(struct vio_dev *dev)
 	unsigned long flags;
 
 	spin_lock_irqsave(&adapter->state_lock, flags);
+
+	/* If ibmvnic_reset() is scheduling a reset, wait for it to
+	 * finish. Then, set the state to REMOVING to prevent it from
+	 * scheduling any more work and to have reset functions ignore
+	 * any resets that have already been scheduled. Drop the lock
+	 * after setting state, so __ibmvnic_reset() which is called
+	 * from the flush_work() below, can make progress.
+	 */
+	spin_lock(&adapter->rwi_lock);
 	adapter->state = VNIC_REMOVING;
+	spin_unlock(&adapter->rwi_lock);
+
 	spin_unlock_irqrestore(&adapter->state_lock, flags);
 
 	flush_work(&adapter->ibmvnic_reset);
@@ -1080,6 +1080,7 @@ struct ibmvnic_adapter {
 	struct tasklet_struct tasklet;
 	enum vnic_state state;
 	enum ibmvnic_reset_reason reset_reason;
+	/* when taking both state and rwi locks, take state lock first */
 	spinlock_t rwi_lock;
 	struct list_head rwi_list;
 	struct work_struct ibmvnic_reset;
@@ -1096,6 +1097,8 @@ struct ibmvnic_adapter {
 	struct ibmvnic_tunables desired;
 	struct ibmvnic_tunables fallback;
 
-	/* Used for serializatin of state field */
+	/* Used for serialization of state field. When taking both state
+	 * and rwi locks, take state lock first.
+	 */
 	spinlock_t state_lock;
 };
 
@@ -5920,7 +5920,7 @@ static int i40e_add_channel(struct i40e_pf *pf, u16 uplink_seid,
 	ch->enabled_tc = !i40e_is_channel_macvlan(ch) && enabled_tc;
 	ch->seid = ctxt.seid;
 	ch->vsi_number = ctxt.vsi_number;
-	ch->stat_counter_idx = cpu_to_le16(ctxt.info.stat_counter_idx);
+	ch->stat_counter_idx = le16_to_cpu(ctxt.info.stat_counter_idx);
 
 	/* copy just the sections touched not the entire info
 	 * since not all sections are valid as returned by
@@ -7599,8 +7599,8 @@ static inline void
 i40e_set_cld_element(struct i40e_cloud_filter *filter,
 		     struct i40e_aqc_cloud_filters_element_data *cld)
 {
-	int i, j;
 	u32 ipa;
+	int i;
 
 	memset(cld, 0, sizeof(*cld));
 	ether_addr_copy(cld->outer_mac, filter->dst_mac);
@@ -7611,14 +7611,14 @@ i40e_set_cld_element(struct i40e_cloud_filter *filter,
 
 	if (filter->n_proto == ETH_P_IPV6) {
 #define IPV6_MAX_INDEX	(ARRAY_SIZE(filter->dst_ipv6) - 1)
-		for (i = 0, j = 0; i < ARRAY_SIZE(filter->dst_ipv6);
-		     i++, j += 2) {
+		for (i = 0; i < ARRAY_SIZE(filter->dst_ipv6); i++) {
 			ipa = be32_to_cpu(filter->dst_ipv6[IPV6_MAX_INDEX - i]);
-			ipa = cpu_to_le32(ipa);
-			memcpy(&cld->ipaddr.raw_v6.data[j], &ipa, sizeof(ipa));
+
+			*(__le32 *)&cld->ipaddr.raw_v6.data[i * 2] = cpu_to_le32(ipa);
 		}
 	} else {
 		ipa = be32_to_cpu(filter->dst_ipv4);
+
 		memcpy(&cld->ipaddr.v4.data, &ipa, sizeof(ipa));
 	}
 
@@ -1782,7 +1782,7 @@ void i40e_process_skb_fields(struct i40e_ring *rx_ring,
 	skb_record_rx_queue(skb, rx_ring->queue_index);
 
 	if (qword & BIT(I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) {
-		u16 vlan_tag = rx_desc->wb.qword0.lo_dword.l2tag1;
+		__le16 vlan_tag = rx_desc->wb.qword0.lo_dword.l2tag1;
 
 		__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
 				       le16_to_cpu(vlan_tag));
@@ -1263,6 +1263,7 @@ static struct phy_driver ksphy_driver[] = {
 	.probe		= kszphy_probe,
 	.config_init	= ksz8081_config_init,
 	.ack_interrupt	= kszphy_ack_interrupt,
+	.soft_reset	= genphy_soft_reset,
 	.config_intr	= kszphy_config_intr,
 	.get_sset_count = kszphy_get_sset_count,
 	.get_strings	= kszphy_get_strings,
@@ -92,6 +92,7 @@
 #define IWL_SNJ_A_HR_B_FW_PRE		"iwlwifi-SoSnj-a0-hr-b0-"
 #define IWL_MA_A_GF_A_FW_PRE		"iwlwifi-ma-a0-gf-a0-"
 #define IWL_MA_A_MR_A_FW_PRE		"iwlwifi-ma-a0-mr-a0-"
+#define IWL_SNJ_A_MR_A_FW_PRE		"iwlwifi-SoSnj-a0-mr-a0-"
 
 #define IWL_QU_B_HR_B_MODULE_FIRMWARE(api) \
 	IWL_QU_B_HR_B_FW_PRE __stringify(api) ".ucode"
@@ -127,6 +128,8 @@
 	IWL_MA_A_GF_A_FW_PRE __stringify(api) ".ucode"
 #define IWL_MA_A_MR_A_FW_MODULE_FIRMWARE(api) \
 	IWL_MA_A_MR_A_FW_PRE __stringify(api) ".ucode"
+#define IWL_SNJ_A_MR_A_MODULE_FIRMWARE(api) \
+	IWL_SNJ_A_MR_A_FW_PRE __stringify(api) ".ucode"
 
 static const struct iwl_base_params iwl_22000_base_params = {
 	.eeprom_size = OTP_LOW_IMAGE_SIZE_32K,
@@ -672,6 +675,13 @@ const struct iwl_cfg iwl_cfg_ma_a0_mr_a0 = {
 	.num_rbds = IWL_NUM_RBDS_AX210_HE,
 };
 
+const struct iwl_cfg iwl_cfg_snj_a0_mr_a0 = {
+	.fw_name_pre = IWL_SNJ_A_MR_A_FW_PRE,
+	.uhb_supported = true,
+	IWL_DEVICE_AX210,
+	.num_rbds = IWL_NUM_RBDS_AX210_HE,
+};
+
 MODULE_FIRMWARE(IWL_QU_B_HR_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
 MODULE_FIRMWARE(IWL_QNJ_B_HR_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
 MODULE_FIRMWARE(IWL_QU_C_HR_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
@@ -689,3 +699,4 @@ MODULE_FIRMWARE(IWL_SNJ_A_GF_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
 MODULE_FIRMWARE(IWL_SNJ_A_HR_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
 MODULE_FIRMWARE(IWL_MA_A_GF_A_FW_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
 MODULE_FIRMWARE(IWL_MA_A_MR_A_FW_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+MODULE_FIRMWARE(IWL_SNJ_A_MR_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
@@ -472,6 +472,7 @@ struct iwl_cfg {
 #define IWL_CFG_MAC_TYPE_QU		0x33
 #define IWL_CFG_MAC_TYPE_QUZ		0x35
 #define IWL_CFG_MAC_TYPE_QNJ		0x36
+#define IWL_CFG_MAC_TYPE_SNJ		0x42
 #define IWL_CFG_MAC_TYPE_MA		0x44
 
 #define IWL_CFG_RF_TYPE_TH		0x105
@@ -656,6 +657,7 @@ extern const struct iwl_cfg iwlax211_cfg_snj_gf_a0;
 extern const struct iwl_cfg iwlax201_cfg_snj_hr_b0;
 extern const struct iwl_cfg iwl_cfg_ma_a0_gf_a0;
 extern const struct iwl_cfg iwl_cfg_ma_a0_mr_a0;
+extern const struct iwl_cfg iwl_cfg_snj_a0_mr_a0;
 #endif /* CONFIG_IWLMVM */
 
 #endif /* __IWL_CONFIG_H__ */
@@ -1002,6 +1002,12 @@ static const struct iwl_dev_info iwl_dev_info_table[] = {
 		      IWL_CFG_RF_TYPE_MR, IWL_CFG_ANY,
 		      IWL_CFG_ANY, IWL_CFG_ANY,
 		      iwl_cfg_ma_a0_mr_a0, iwl_ma_name),
+	_IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+		      IWL_CFG_MAC_TYPE_SNJ, IWL_CFG_ANY,
+		      IWL_CFG_RF_TYPE_MR, IWL_CFG_ANY,
+		      IWL_CFG_ANY, IWL_CFG_ANY,
+		      iwl_cfg_snj_a0_mr_a0, iwl_ma_name),
+
 
 #endif /* CONFIG_IWLMVM */
 };
@@ -1894,30 +1894,18 @@ static void nvme_config_discard(struct gendisk *disk, struct nvme_ns *ns)
 	blk_queue_max_write_zeroes_sectors(queue, UINT_MAX);
 }
 
-static void nvme_config_write_zeroes(struct gendisk *disk, struct nvme_ns *ns)
+/*
+ * Even though NVMe spec explicitly states that MDTS is not applicable to the
+ * write-zeroes, we are cautious and limit the size to the controllers
+ * max_hw_sectors value, which is based on the MDTS field and possibly other
+ * limiting factors.
+ */
+static void nvme_config_write_zeroes(struct request_queue *q,
+				     struct nvme_ctrl *ctrl)
 {
-	u64 max_blocks;
-
-	if (!(ns->ctrl->oncs & NVME_CTRL_ONCS_WRITE_ZEROES) ||
-	    (ns->ctrl->quirks & NVME_QUIRK_DISABLE_WRITE_ZEROES))
-		return;
-	/*
-	 * Even though NVMe spec explicitly states that MDTS is not
-	 * applicable to the write-zeroes:- "The restriction does not apply to
-	 * commands that do not transfer data between the host and the
-	 * controller (e.g., Write Uncorrectable ro Write Zeroes command).".
-	 * In order to be more cautious use controller's max_hw_sectors value
-	 * to configure the maximum sectors for the write-zeroes which is
-	 * configured based on the controller's MDTS field in the
-	 * nvme_init_identify() if available.
-	 */
-	if (ns->ctrl->max_hw_sectors == UINT_MAX)
-		max_blocks = (u64)USHRT_MAX + 1;
-	else
-		max_blocks = ns->ctrl->max_hw_sectors + 1;
-
-	blk_queue_max_write_zeroes_sectors(disk->queue,
-					   nvme_lba_to_sect(ns, max_blocks));
+	if ((ctrl->oncs & NVME_CTRL_ONCS_WRITE_ZEROES) &&
+	    !(ctrl->quirks & NVME_QUIRK_DISABLE_WRITE_ZEROES))
+		blk_queue_max_write_zeroes_sectors(q, ctrl->max_hw_sectors);
 }
 
 static bool nvme_ns_ids_valid(struct nvme_ns_ids *ids)
@@ -2089,7 +2077,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
 	set_capacity_revalidate_and_notify(disk, capacity, false);
 
 	nvme_config_discard(disk, ns);
-	nvme_config_write_zeroes(disk, ns);
+	nvme_config_write_zeroes(disk->queue, ns->ctrl);
 
 	if (id->nsattr & NVME_NS_ATTR_RO)
 		set_disk_ro(disk, true);
@@ -736,8 +736,11 @@ static int nvme_rdma_alloc_io_queues(struct nvme_rdma_ctrl *ctrl)
 		return ret;
 
 	ctrl->ctrl.queue_count = nr_io_queues + 1;
-	if (ctrl->ctrl.queue_count < 2)
-		return 0;
+	if (ctrl->ctrl.queue_count < 2) {
+		dev_err(ctrl->ctrl.device,
+			"unable to set any I/O queues\n");
+		return -ENOMEM;
+	}
 
 	dev_info(ctrl->ctrl.device,
 		"creating %d I/O queues.\n", nr_io_queues);
@@ -287,7 +287,7 @@ static inline void nvme_tcp_queue_request(struct nvme_tcp_request *req,
 	 * directly, otherwise queue io_work. Also, only do that if we
 	 * are on the same cpu, so we don't introduce contention.
 	 */
-	if (queue->io_cpu == __smp_processor_id() &&
+	if (queue->io_cpu == raw_smp_processor_id() &&
 	    sync && empty && mutex_trylock(&queue->send_mutex)) {
 		queue->more_requests = !last;
 		nvme_tcp_send_all(queue);
@@ -568,6 +568,13 @@ static int nvme_tcp_setup_h2c_data_pdu(struct nvme_tcp_request *req,
 	req->pdu_len = le32_to_cpu(pdu->r2t_length);
 	req->pdu_sent = 0;
 
+	if (unlikely(!req->pdu_len)) {
+		dev_err(queue->ctrl->ctrl.device,
+			"req %d r2t len is %u, probably a bug...\n",
+			rq->tag, req->pdu_len);
+		return -EPROTO;
+	}
+
 	if (unlikely(req->data_sent + req->pdu_len > req->data_len)) {
 		dev_err(queue->ctrl->ctrl.device,
 			"req %d r2t len %u exceeded data len %u (%zu sent)\n",
@@ -1748,8 +1755,11 @@ static int nvme_tcp_alloc_io_queues(struct nvme_ctrl *ctrl)
 		return ret;
 
 	ctrl->queue_count = nr_io_queues + 1;
-	if (ctrl->queue_count < 2)
-		return 0;
+	if (ctrl->queue_count < 2) {
+		dev_err(ctrl->device,
+			"unable to set any I/O queues\n");
+		return -ENOMEM;
+	}
 
 	dev_info(ctrl->device,
 		"creating %d I/O queues.\n", nr_io_queues);
@@ -1109,9 +1109,20 @@ static void nvmet_start_ctrl(struct nvmet_ctrl *ctrl)
 {
 	lockdep_assert_held(&ctrl->lock);
 
-	if (nvmet_cc_iosqes(ctrl->cc) != NVME_NVM_IOSQES ||
-	    nvmet_cc_iocqes(ctrl->cc) != NVME_NVM_IOCQES ||
-	    nvmet_cc_mps(ctrl->cc) != 0 ||
+	/*
+	 * Only I/O controllers should verify iosqes,iocqes.
+	 * Strictly speaking, the spec says a discovery controller
+	 * should verify iosqes,iocqes are zeroed, however that
+	 * would break backwards compatibility, so don't enforce it.
+	 */
+	if (ctrl->subsys->type != NVME_NQN_DISC &&
+	    (nvmet_cc_iosqes(ctrl->cc) != NVME_NVM_IOSQES ||
+	     nvmet_cc_iocqes(ctrl->cc) != NVME_NVM_IOCQES)) {
+		ctrl->csts = NVME_CSTS_CFS;
+		return;
+	}
+
+	if (nvmet_cc_mps(ctrl->cc) != 0 ||
 	    nvmet_cc_ams(ctrl->cc) != 0 ||
 	    nvmet_cc_css(ctrl->cc) != 0) {
 		ctrl->csts = NVME_CSTS_CFS;
@@ -34,12 +34,11 @@ static ssize_t add_slot_store(struct kobject *kobj, struct kobj_attribute *attr,
 	if (nbytes >= MAX_DRC_NAME_LEN)
 		return 0;
 
-	memcpy(drc_name, buf, nbytes);
+	strscpy(drc_name, buf, nbytes + 1);
 
 	end = strchr(drc_name, '\n');
-	if (!end)
-		end = &drc_name[nbytes];
-	*end = '\0';
+	if (end)
+		*end = '\0';
 
 	rc = dlpar_add_slot(drc_name);
 	if (rc)
@@ -65,12 +64,11 @@ static ssize_t remove_slot_store(struct kobject *kobj,
 	if (nbytes >= MAX_DRC_NAME_LEN)
 		return 0;
 
-	memcpy(drc_name, buf, nbytes);
+	strscpy(drc_name, buf, nbytes + 1);
 
 	end = strchr(drc_name, '\n');
-	if (!end)
-		end = &drc_name[nbytes];
-	*end = '\0';
+	if (end)
+		*end = '\0';
 
 	rc = dlpar_remove_slot(drc_name);
 	if (rc)
@@ -93,8 +93,9 @@ static int disable_slot(struct hotplug_slot *hotplug_slot)
 		pci_dev_put(pdev);
 		return -EBUSY;
 	}
+	pci_dev_put(pdev);
 
-	zpci_remove_device(zdev);
+	zpci_remove_device(zdev, false);
 
 	rc = zpci_disable_device(zdev);
 	if (rc)
@@ -5,6 +5,7 @@
  */
 
 #include <linux/err.h>
+#include <linux/gpio/consumer.h>
 #include <linux/i2c.h>
 #include <linux/interrupt.h>
 #include <linux/kernel.h>
@@ -32,6 +33,7 @@ struct pca9450_regulator_desc {
 struct pca9450 {
 	struct device *dev;
 	struct regmap *regmap;
+	struct gpio_desc *sd_vsel_gpio;
 	enum pca9450_chip_type type;
 	unsigned int rcnt;
 	int irq;
@@ -795,6 +797,34 @@ static int pca9450_i2c_probe(struct i2c_client *i2c,
 		return ret;
 	}
 
+	/* Clear PRESET_EN bit in BUCK123_DVS to use DVS registers */
+	ret = regmap_clear_bits(pca9450->regmap, PCA9450_REG_BUCK123_DVS,
+				BUCK123_PRESET_EN);
+	if (ret) {
+		dev_err(&i2c->dev, "Failed to clear PRESET_EN bit: %d\n", ret);
+		return ret;
+	}
+
+	/* Set reset behavior on assertion of WDOG_B signal */
+	ret = regmap_update_bits(pca9450->regmap, PCA9450_REG_RESET_CTRL,
+				 WDOG_B_CFG_MASK, WDOG_B_CFG_COLD_LDO12);
+	if (ret) {
+		dev_err(&i2c->dev, "Failed to set WDOG_B reset behavior\n");
+		return ret;
+	}
+
+	/*
+	 * The driver uses the LDO5CTRL_H register to control the LDO5 regulator.
+	 * This is only valid if the SD_VSEL input of the PMIC is high. Let's
+	 * check if the pin is available as GPIO and set it to high.
+	 */
+	pca9450->sd_vsel_gpio = gpiod_get_optional(pca9450->dev, "sd-vsel", GPIOD_OUT_HIGH);
+
+	if (IS_ERR(pca9450->sd_vsel_gpio)) {
+		dev_err(&i2c->dev, "Failed to get SD_VSEL GPIO\n");
+		return ret;
+	}
+
 	dev_info(&i2c->dev, "%s probed.\n",
 		type == PCA9450_TYPE_PCA9450A ? "pca9450a" : "pca9450bc");
 
@@ -470,6 +470,7 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
 	struct qaob *aob;
 	struct qeth_qdio_out_buffer *buffer;
 	enum iucv_tx_notify notification;
+	struct qeth_qdio_out_q *queue;
 	unsigned int i;
 
 	aob = (struct qaob *) phys_to_virt(phys_aob_addr);
@@ -511,7 +512,9 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
 			kmem_cache_free(qeth_core_header_cache, data);
 		}
 
+		queue = buffer->q;
 		atomic_set(&buffer->state, QETH_QDIO_BUF_EMPTY);
+		napi_schedule(&queue->napi);
 		break;
 	default:
 		WARN_ON_ONCE(1);
@@ -7013,9 +7016,7 @@ int qeth_open(struct net_device *dev)
 	card->data.state = CH_STATE_UP;
 	netif_tx_start_all_queues(dev);
 
-	napi_enable(&card->napi);
 	local_bh_disable();
-	napi_schedule(&card->napi);
 	if (IS_IQD(card)) {
 		struct qeth_qdio_out_q *queue;
 		unsigned int i;
@@ -7027,8 +7028,12 @@ int qeth_open(struct net_device *dev)
 			napi_schedule(&queue->napi);
 		}
 	}
+
+	napi_enable(&card->napi);
+	napi_schedule(&card->napi);
+	/* kick-start the NAPI softirq: */
 	local_bh_enable();
 
 	return 0;
 }
 EXPORT_SYMBOL_GPL(qeth_open);
@@ -7038,6 +7043,11 @@ int qeth_stop(struct net_device *dev)
 	struct qeth_card *card = dev->ml_priv;
 
 	QETH_CARD_TEXT(card, 4, "qethstop");
+
+	napi_disable(&card->napi);
+	cancel_delayed_work_sync(&card->buffer_reclaim_work);
+	qdio_stop_irq(CARD_DDEV(card));
+
 	if (IS_IQD(card)) {
 		struct qeth_qdio_out_q *queue;
 		unsigned int i;
@@ -7058,10 +7068,6 @@ int qeth_stop(struct net_device *dev)
 		netif_tx_disable(dev);
 	}
 
-	napi_disable(&card->napi);
-	cancel_delayed_work_sync(&card->buffer_reclaim_work);
-	qdio_stop_irq(CARD_DDEV(card));
-
 	return 0;
 }
 EXPORT_SYMBOL_GPL(qeth_stop);
@@ -68,7 +68,6 @@ static void asd_phy_event_tasklet(struct asd_ascb *ascb,
 			       struct done_list_struct *dl)
 {
 	struct asd_ha_struct *asd_ha = ascb->ha;
-	struct sas_ha_struct *sas_ha = &asd_ha->sas_ha;
 	int phy_id = dl->status_block[0] & DL_PHY_MASK;
 	struct asd_phy *phy = &asd_ha->phys[phy_id];
 
@@ -81,7 +80,7 @@ static void asd_phy_event_tasklet(struct asd_ascb *ascb,
 		ASD_DPRINTK("phy%d: device unplugged\n", phy_id);
 		asd_turn_led(asd_ha, phy_id, 0);
 		sas_phy_disconnected(&phy->sas_phy);
-		sas_ha->notify_phy_event(&phy->sas_phy, PHYE_LOSS_OF_SIGNAL);
+		sas_notify_phy_event(&phy->sas_phy, PHYE_LOSS_OF_SIGNAL);
 		break;
 	case CURRENT_OOB_DONE:
 		/* hot plugged device */
@@ -89,12 +88,12 @@ static void asd_phy_event_tasklet(struct asd_ascb *ascb,
 		get_lrate_mode(phy, oob_mode);
 		ASD_DPRINTK("phy%d device plugged: lrate:0x%x, proto:0x%x\n",
 			    phy_id, phy->sas_phy.linkrate, phy->sas_phy.iproto);
-		sas_ha->notify_phy_event(&phy->sas_phy, PHYE_OOB_DONE);
+		sas_notify_phy_event(&phy->sas_phy, PHYE_OOB_DONE);
 		break;
 	case CURRENT_SPINUP_HOLD:
 		/* hot plug SATA, no COMWAKE sent */
 		asd_turn_led(asd_ha, phy_id, 1);
-		sas_ha->notify_phy_event(&phy->sas_phy, PHYE_SPINUP_HOLD);
+		sas_notify_phy_event(&phy->sas_phy, PHYE_SPINUP_HOLD);
 		break;
 	case CURRENT_GTO_TIMEOUT:
 	case CURRENT_OOB_ERROR:
@@ -102,7 +101,7 @@ static void asd_phy_event_tasklet(struct asd_ascb *ascb,
 			    dl->status_block[1]);
 		asd_turn_led(asd_ha, phy_id, 0);
 		sas_phy_disconnected(&phy->sas_phy);
-		sas_ha->notify_phy_event(&phy->sas_phy, PHYE_OOB_ERROR);
+		sas_notify_phy_event(&phy->sas_phy, PHYE_OOB_ERROR);
 		break;
 	}
 }
@@ -222,7 +221,6 @@ static void asd_bytes_dmaed_tasklet(struct asd_ascb *ascb,
 	int edb_el = edb_id + ascb->edb_index;
 	struct asd_dma_tok *edb = ascb->ha->seq.edb_arr[edb_el];
 	struct asd_phy *phy = &ascb->ha->phys[phy_id];
-	struct sas_ha_struct *sas_ha = phy->sas_phy.ha;
 	u16 size = ((dl->status_block[3] & 7) << 8) | dl->status_block[2];
 
 	size = min(size, (u16) sizeof(phy->frame_rcvd));
@@ -234,7 +232,7 @@ static void asd_bytes_dmaed_tasklet(struct asd_ascb *ascb,
 	spin_unlock_irqrestore(&phy->sas_phy.frame_rcvd_lock, flags);
 	asd_dump_frame_rcvd(phy, dl);
 	asd_form_port(ascb->ha, phy);
-	sas_ha->notify_port_event(&phy->sas_phy, PORTE_BYTES_DMAED);
+	sas_notify_port_event(&phy->sas_phy, PORTE_BYTES_DMAED);
 }
 
 static void asd_link_reset_err_tasklet(struct asd_ascb *ascb,
@@ -270,7 +268,7 @@ static void asd_link_reset_err_tasklet(struct asd_ascb *ascb,
 	asd_turn_led(asd_ha, phy_id, 0);
 	sas_phy_disconnected(sas_phy);
 	asd_deform_port(asd_ha, phy);
-	sas_ha->notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
+	sas_notify_port_event(sas_phy, PORTE_LINK_RESET_ERR);
 
 	if (retries_left == 0) {
 		int num = 1;
@@ -315,7 +313,7 @@ static void asd_primitive_rcvd_tasklet(struct asd_ascb *ascb,
 		spin_lock_irqsave(&sas_phy->sas_prim_lock, flags);
 		sas_phy->sas_prim = ffs(cont);
 		spin_unlock_irqrestore(&sas_phy->sas_prim_lock, flags);
-		sas_ha->notify_port_event(sas_phy,PORTE_BROADCAST_RCVD);
+		sas_notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
 		break;
 
 	case LmUNKNOWNP:
@@ -336,7 +334,7 @@ static void asd_primitive_rcvd_tasklet(struct asd_ascb *ascb,
 		/* The sequencer disables all phys on that port.
 		 * We have to re-enable the phys ourselves. */
 		asd_deform_port(asd_ha, phy);
-		sas_ha->notify_port_event(sas_phy, PORTE_HARD_RESET);
+		sas_notify_port_event(sas_phy, PORTE_HARD_RESET);
 		break;
 
 	default:
@@ -567,7 +565,7 @@ static void escb_tasklet_complete(struct asd_ascb *ascb,
 		/* the device is gone */
 		sas_phy_disconnected(sas_phy);
 		asd_deform_port(asd_ha, phy);
-		sas_ha->notify_port_event(sas_phy, PORTE_TIMER_EVENT);
+		sas_notify_port_event(sas_phy, PORTE_TIMER_EVENT);
 		break;
 	default:
 		ASD_DPRINTK("%s: phy%d: unknown event:0x%x\n", __func__,
@@ -622,7 +622,6 @@ static void hisi_sas_bytes_dmaed(struct hisi_hba *hisi_hba, int phy_no)
 {
 	struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
 	struct asd_sas_phy *sas_phy = &phy->sas_phy;
-	struct sas_ha_struct *sas_ha;
 
 	if (!phy->phy_attached)
 		return;
@@ -633,8 +632,7 @@ static void hisi_sas_bytes_dmaed(struct hisi_hba *hisi_hba, int phy_no)
 		return;
 	}
 
-	sas_ha = &hisi_hba->sha;
-	sas_ha->notify_phy_event(sas_phy, PHYE_OOB_DONE);
+	sas_notify_phy_event(sas_phy, PHYE_OOB_DONE);
 
 	if (sas_phy->phy) {
 		struct sas_phy *sphy = sas_phy->phy;
@@ -662,7 +660,7 @@ static void hisi_sas_bytes_dmaed(struct hisi_hba *hisi_hba, int phy_no)
 	}
 
 	sas_phy->frame_rcvd_size = phy->frame_rcvd_size;
-	sas_ha->notify_port_event(sas_phy, PORTE_BYTES_DMAED);
+	sas_notify_port_event(sas_phy, PORTE_BYTES_DMAED);
 }
 
 static struct hisi_sas_device *hisi_sas_alloc_dev(struct domain_device *device)
@@ -1417,7 +1415,6 @@ static void hisi_sas_refresh_port_id(struct hisi_hba *hisi_hba)
 
 static void hisi_sas_rescan_topology(struct hisi_hba *hisi_hba, u32 state)
 {
-	struct sas_ha_struct *sas_ha = &hisi_hba->sha;
 	struct asd_sas_port *_sas_port = NULL;
 	int phy_no;
 
@@ -1438,7 +1435,7 @@ static void hisi_sas_rescan_topology(struct hisi_hba *hisi_hba, u32 state)
 				_sas_port = sas_port;
 
 				if (dev_is_expander(dev->dev_type))
-					sas_ha->notify_port_event(sas_phy,
+					sas_notify_port_event(sas_phy,
 							PORTE_BROADCAST_RCVD);
 			}
 		} else {
@@ -2200,7 +2197,6 @@ void hisi_sas_phy_down(struct hisi_hba *hisi_hba, int phy_no, int rdy)
 {
 	struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
 	struct asd_sas_phy *sas_phy = &phy->sas_phy;
-	struct sas_ha_struct *sas_ha = &hisi_hba->sha;
 	struct device *dev = hisi_hba->dev;
 
 	if (rdy) {
@@ -2216,7 +2212,7 @@ void hisi_sas_phy_down(struct hisi_hba *hisi_hba, int phy_no, int rdy)
 		return;
 	}
 	/* Phy down and not ready */
-	sas_ha->notify_phy_event(sas_phy, PHYE_LOSS_OF_SIGNAL);
+	sas_notify_phy_event(sas_phy, PHYE_LOSS_OF_SIGNAL);
 	sas_phy_disconnected(sas_phy);
 
 	if (port) {
@@ -1408,7 +1408,6 @@ static irqreturn_t int_bcast_v1_hw(int irq, void *p)
 	struct hisi_sas_phy *phy = p;
 	struct hisi_hba *hisi_hba = phy->hisi_hba;
 	struct asd_sas_phy *sas_phy = &phy->sas_phy;
-	struct sas_ha_struct *sha = &hisi_hba->sha;
 	struct device *dev = hisi_hba->dev;
 	int phy_no = sas_phy->id;
 	u32 irq_value;
@@ -1424,7 +1423,7 @@ static irqreturn_t int_bcast_v1_hw(int irq, void *p)
 	}
 
 	if (!test_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags))
-		sha->notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
+		sas_notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
 
 end:
 	hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT2,
@@ -2818,14 +2818,13 @@ static void phy_bcast_v2_hw(int phy_no, struct hisi_hba *hisi_hba)
 {
 	struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
 	struct asd_sas_phy *sas_phy = &phy->sas_phy;
-	struct sas_ha_struct *sas_ha = &hisi_hba->sha;
 	u32 bcast_status;
 
 	hisi_sas_phy_write32(hisi_hba, phy_no, SL_RX_BCAST_CHK_MSK, 1);
 	bcast_status = hisi_sas_phy_read32(hisi_hba, phy_no, RX_PRIMS_STATUS);
 	if ((bcast_status & RX_BCAST_CHG_MSK) &&
 	    !test_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags))
-		sas_ha->notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
+		sas_notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
 	hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT0,
 			     CHL_INT0_SL_RX_BCST_ACK_MSK);
 	hisi_sas_phy_write32(hisi_hba, phy_no, SL_RX_BCAST_CHK_MSK, 0);
@@ -1598,14 +1598,13 @@ static irqreturn_t phy_bcast_v3_hw(int phy_no, struct hisi_hba *hisi_hba)
 {
 	struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
 	struct asd_sas_phy *sas_phy = &phy->sas_phy;
-	struct sas_ha_struct *sas_ha = &hisi_hba->sha;
 	u32 bcast_status;
 
 	hisi_sas_phy_write32(hisi_hba, phy_no, SL_RX_BCAST_CHK_MSK, 1);
 	bcast_status = hisi_sas_phy_read32(hisi_hba, phy_no, RX_PRIMS_STATUS);
 	if ((bcast_status & RX_BCAST_CHG_MSK) &&
 	    !test_bit(HISI_SAS_RESET_BIT, &hisi_hba->flags))
-		sas_ha->notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
+		sas_notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
 	hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT0,
 			     CHL_INT0_SL_RX_BCST_ACK_MSK);
 	hisi_sas_phy_write32(hisi_hba, phy_no, SL_RX_BCAST_CHK_MSK, 0);
@@ -164,7 +164,8 @@ static void isci_port_bc_change_received(struct isci_host *ihost,
 		"%s: isci_phy = %p, sas_phy = %p\n",
 		__func__, iphy, &iphy->sas_phy);
 
-	ihost->sas_ha.notify_port_event(&iphy->sas_phy, PORTE_BROADCAST_RCVD);
+	sas_notify_port_event_gfp(&iphy->sas_phy,
+				  PORTE_BROADCAST_RCVD, GFP_ATOMIC);
 	sci_port_bcn_enable(iport);
 }
 
@@ -223,8 +224,8 @@ static void isci_port_link_up(struct isci_host *isci_host,
 	/* Notify libsas that we have an address frame, if indeed
 	 * we've found an SSP, SMP, or STP target */
 	if (success)
-		isci_host->sas_ha.notify_port_event(&iphy->sas_phy,
-						    PORTE_BYTES_DMAED);
+		sas_notify_port_event_gfp(&iphy->sas_phy,
+					  PORTE_BYTES_DMAED, GFP_ATOMIC);
 }
 
 
@@ -270,8 +271,8 @@ static void isci_port_link_down(struct isci_host *isci_host,
 	 * isci_port_deformed and isci_dev_gone functions.
 	 */
 	sas_phy_disconnected(&isci_phy->sas_phy);
-	isci_host->sas_ha.notify_phy_event(&isci_phy->sas_phy,
-					   PHYE_LOSS_OF_SIGNAL);
+	sas_notify_phy_event_gfp(&isci_phy->sas_phy,
+				 PHYE_LOSS_OF_SIGNAL, GFP_ATOMIC);
 
 	dev_dbg(&isci_host->pdev->dev,
 		"%s: isci_port = %p - Done\n", __func__, isci_port);
@@ -109,7 +109,7 @@ void sas_enable_revalidation(struct sas_ha_struct *ha)
 
 		sas_phy = container_of(port->phy_list.next, struct asd_sas_phy,
 				port_phy_el);
-		ha->notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
+		sas_notify_port_event(sas_phy, PORTE_BROADCAST_RCVD);
 	}
 	mutex_unlock(&ha->disco_mutex);
 }
@@ -131,18 +131,15 @@ static void sas_phy_event_worker(struct work_struct *work)
 	sas_free_event(ev);
 }
 
-static int sas_notify_port_event(struct asd_sas_phy *phy, enum port_event event)
+static int __sas_notify_port_event(struct asd_sas_phy *phy,
+				   enum port_event event,
+				   struct asd_sas_event *ev)
 {
-	struct asd_sas_event *ev;
 	struct sas_ha_struct *ha = phy->ha;
 	int ret;
 
 	BUG_ON(event >= PORT_NUM_EVENTS);
 
-	ev = sas_alloc_event(phy);
-	if (!ev)
-		return -ENOMEM;
-
 	INIT_SAS_EVENT(ev, sas_port_event_worker, phy, event);
 
 	ret = sas_queue_event(event, &ev->work, ha);
@@ -152,18 +149,40 @@ static int sas_notify_port_event(struct asd_sas_phy *phy, enum port_event event)
 	return ret;
 }
 
-int sas_notify_phy_event(struct asd_sas_phy *phy, enum phy_event event)
+int sas_notify_port_event_gfp(struct asd_sas_phy *phy, enum port_event event,
+			      gfp_t gfp_flags)
 {
 	struct asd_sas_event *ev;
-	struct sas_ha_struct *ha = phy->ha;
-	int ret;
 
-	BUG_ON(event >= PHY_NUM_EVENTS);
-
-	ev = sas_alloc_event(phy);
+	ev = sas_alloc_event_gfp(phy, gfp_flags);
 	if (!ev)
 		return -ENOMEM;
 
-	INIT_SAS_EVENT(ev, sas_phy_event_worker, phy, event);
+	return __sas_notify_port_event(phy, event, ev);
+}
+EXPORT_SYMBOL_GPL(sas_notify_port_event_gfp);
 
-	ret = sas_queue_event(event, &ev->work, ha);
+int sas_notify_port_event(struct asd_sas_phy *phy, enum port_event event)
+{
+	struct asd_sas_event *ev;
+
+	ev = sas_alloc_event(phy);
+	if (!ev)
+		return -ENOMEM;
+
+	return __sas_notify_port_event(phy, event, ev);
+}
+EXPORT_SYMBOL_GPL(sas_notify_port_event);
+
+static inline int __sas_notify_phy_event(struct asd_sas_phy *phy,
+					 enum phy_event event,
+					 struct asd_sas_event *ev)
+{
+	struct sas_ha_struct *ha = phy->ha;
+	int ret;
+
+	BUG_ON(event >= PHY_NUM_EVENTS);
+
+	INIT_SAS_EVENT(ev, sas_phy_event_worker, phy, event);
+
+	ret = sas_queue_event(event, &ev->work, ha);
@@ -173,10 +192,27 @@ int sas_notify_phy_event(struct asd_sas_phy *phy, enum phy_event event)
 	return ret;
 }
 
-int sas_init_events(struct sas_ha_struct *sas_ha)
+int sas_notify_phy_event_gfp(struct asd_sas_phy *phy, enum phy_event event,
+			     gfp_t gfp_flags)
 {
-	sas_ha->notify_port_event = sas_notify_port_event;
-	sas_ha->notify_phy_event = sas_notify_phy_event;
+	struct asd_sas_event *ev;
 
-	return 0;
+	ev = sas_alloc_event_gfp(phy, gfp_flags);
+	if (!ev)
+		return -ENOMEM;
+
+	return __sas_notify_phy_event(phy, event, ev);
 }
+EXPORT_SYMBOL_GPL(sas_notify_phy_event_gfp);
+
+int sas_notify_phy_event(struct asd_sas_phy *phy, enum phy_event event)
+{
+	struct asd_sas_event *ev;
+
+	ev = sas_alloc_event(phy);
+	if (!ev)
+		return -ENOMEM;
+
+	return __sas_notify_phy_event(phy, event, ev);
+}
+EXPORT_SYMBOL_GPL(sas_notify_phy_event);
@@ -123,12 +123,6 @@ int sas_register_ha(struct sas_ha_struct *sas_ha)
 		goto Undo_phys;
 	}
 
-	error = sas_init_events(sas_ha);
-	if (error) {
-		pr_notice("couldn't start event thread:%d\n", error);
-		goto Undo_ports;
-	}
-
 	error = -ENOMEM;
 	snprintf(name, sizeof(name), "%s_event_q", dev_name(sas_ha->dev));
 	sas_ha->event_q = create_singlethread_workqueue(name);
@@ -590,16 +584,15 @@ sas_domain_attach_transport(struct sas_domain_function_template *dft)
 }
 EXPORT_SYMBOL_GPL(sas_domain_attach_transport);
 
-
-struct asd_sas_event *sas_alloc_event(struct asd_sas_phy *phy)
+static struct asd_sas_event *__sas_alloc_event(struct asd_sas_phy *phy,
+					       gfp_t gfp_flags)
 {
 	struct asd_sas_event *event;
-	gfp_t flags = in_interrupt() ? GFP_ATOMIC : GFP_KERNEL;
 	struct sas_ha_struct *sas_ha = phy->ha;
 	struct sas_internal *i =
 		to_sas_internal(sas_ha->core.shost->transportt);
 
-	event = kmem_cache_zalloc(sas_event_cache, flags);
+	event = kmem_cache_zalloc(sas_event_cache, gfp_flags);
 	if (!event)
 		return NULL;
 
@@ -610,7 +603,8 @@ struct asd_sas_event *sas_alloc_event(struct asd_sas_phy *phy)
 		if (cmpxchg(&phy->in_shutdown, 0, 1) == 0) {
 			pr_notice("The phy%d bursting events, shut it down.\n",
 				  phy->id);
-			sas_notify_phy_event(phy, PHYE_SHUTDOWN);
+			sas_notify_phy_event_gfp(phy, PHYE_SHUTDOWN,
+						 gfp_flags);
 		}
 	} else {
 		/* Do not support PHY control, stop allocating events */
@@ -624,6 +618,17 @@ struct asd_sas_event *sas_alloc_event(struct asd_sas_phy *phy)
 	return event;
 }
 
+struct asd_sas_event *sas_alloc_event(struct asd_sas_phy *phy)
+{
+	return __sas_alloc_event(phy, in_interrupt() ? GFP_ATOMIC : GFP_KERNEL);
+}
+
+struct asd_sas_event *sas_alloc_event_gfp(struct asd_sas_phy *phy,
+					  gfp_t gfp_flags)
+{
+	return __sas_alloc_event(phy, gfp_flags);
+}
+
 void sas_free_event(struct asd_sas_event *event)
 {
 	struct asd_sas_phy *phy = event->phy;
@@ -49,12 +49,13 @@ int sas_register_phys(struct sas_ha_struct *sas_ha);
 void sas_unregister_phys(struct sas_ha_struct *sas_ha);
 
 struct asd_sas_event *sas_alloc_event(struct asd_sas_phy *phy);
+struct asd_sas_event *sas_alloc_event_gfp(struct asd_sas_phy *phy,
+					  gfp_t gfp_flags);
 void sas_free_event(struct asd_sas_event *event);
 
 int sas_register_ports(struct sas_ha_struct *sas_ha);
 void sas_unregister_ports(struct sas_ha_struct *sas_ha);
 
-int sas_init_events(struct sas_ha_struct *sas_ha);
 void sas_disable_revalidation(struct sas_ha_struct *ha);
 void sas_enable_revalidation(struct sas_ha_struct *ha);
 void __sas_drain_work(struct sas_ha_struct *ha);
@@ -78,6 +79,8 @@ int sas_smp_phy_control(struct domain_device *dev, int phy_id,
 int sas_smp_get_phy_events(struct sas_phy *phy);
 
 int sas_notify_phy_event(struct asd_sas_phy *phy, enum phy_event event);
+int sas_notify_phy_event_gfp(struct asd_sas_phy *phy, enum phy_event event,
+			     gfp_t flags);
 void sas_device_set_phy(struct domain_device *dev, struct sas_port *port);
 struct domain_device *sas_find_dev_by_rphy(struct sas_rphy *rphy);
 struct domain_device *sas_ex_to_ata(struct domain_device *ex_dev, int phy_id);
@@ -2423,7 +2423,7 @@ lpfc_debugfs_dif_err_write(struct file *file, const char __user *buf,
 	memset(dstbuf, 0, 33);
 	size = (nbytes < 32) ? nbytes : 32;
 	if (copy_from_user(dstbuf, buf, size))
-		return 0;
+		return -EFAULT;
 
 	if (dent == phba->debug_InjErrLBA) {
 		if ((dstbuf[0] == 'o') && (dstbuf[1] == 'f') &&
@@ -2432,7 +2432,7 @@ lpfc_debugfs_dif_err_write(struct file *file, const char __user *buf,
 	}
 
 	if ((tmp == 0) && (kstrtoull(dstbuf, 0, &tmp)))
-		return 0;
+		return -EINVAL;
 
 	if (dent == phba->debug_writeGuard)
 		phba->lpfc_injerr_wgrd_cnt = (uint32_t)tmp;
@@ -216,11 +216,11 @@ void mvs_set_sas_addr(struct mvs_info *mvi, int port_id, u32 off_lo,
 	MVS_CHIP_DISP->write_port_cfg_data(mvi, port_id, hi);
 }
 
-static void mvs_bytes_dmaed(struct mvs_info *mvi, int i)
+static void mvs_bytes_dmaed(struct mvs_info *mvi, int i, gfp_t gfp_flags)
 {
 	struct mvs_phy *phy = &mvi->phy[i];
 	struct asd_sas_phy *sas_phy = &phy->sas_phy;
-	struct sas_ha_struct *sas_ha;
+
 	if (!phy->phy_attached)
 		return;
 
@@ -229,8 +229,7 @@ static void mvs_bytes_dmaed(struct mvs_info *mvi, int i)
 		return;
 	}
 
-	sas_ha = mvi->sas;
-	sas_ha->notify_phy_event(sas_phy, PHYE_OOB_DONE);
+	sas_notify_phy_event_gfp(sas_phy, PHYE_OOB_DONE, gfp_flags);
 
 	if (sas_phy->phy) {
 		struct sas_phy *sphy = sas_phy->phy;
@@ -262,8 +261,7 @@ static void mvs_bytes_dmaed(struct mvs_info *mvi, int i)
 
 	sas_phy->frame_rcvd_size = phy->frame_rcvd_size;
 
-	mvi->sas->notify_port_event(sas_phy,
-				PORTE_BYTES_DMAED);
+	sas_notify_port_event_gfp(sas_phy, PORTE_BYTES_DMAED, gfp_flags);
 }
 
 void mvs_scan_start(struct Scsi_Host *shost)
@@ -279,7 +277,7 @@ void mvs_scan_start(struct Scsi_Host *shost)
 	for (j = 0; j < core_nr; j++) {
 		mvi = ((struct mvs_prv_info *)sha->lldd_ha)->mvi[j];
 		for (i = 0; i < mvi->chip->n_phy; ++i)
-			mvs_bytes_dmaed(mvi, i);
+			mvs_bytes_dmaed(mvi, i, GFP_KERNEL);
 	}
 	mvs_prv->scan_finished = 1;
 }
@@ -1880,7 +1878,6 @@ static void mvs_work_queue(struct work_struct *work)
 	struct mvs_info *mvi = mwq->mvi;
 	unsigned long flags;
 	u32 phy_no = (unsigned long) mwq->data;
-	struct sas_ha_struct *sas_ha = mvi->sas;
 	struct mvs_phy *phy = &mvi->phy[phy_no];
 	struct asd_sas_phy *sas_phy = &phy->sas_phy;
 
@@ -1895,21 +1892,21 @@ static void mvs_work_queue(struct work_struct *work)
 			if (!(tmp & PHY_READY_MASK)) {
 				sas_phy_disconnected(sas_phy);
 				mvs_phy_disconnected(phy);
-				sas_ha->notify_phy_event(sas_phy,
-					PHYE_LOSS_OF_SIGNAL);
+				sas_notify_phy_event_gfp(sas_phy,
+					PHYE_LOSS_OF_SIGNAL, GFP_ATOMIC);
 				mv_dprintk("phy%d Removed Device\n", phy_no);
 			} else {
 				MVS_CHIP_DISP->detect_porttype(mvi, phy_no);
 				mvs_update_phyinfo(mvi, phy_no, 1);
-				mvs_bytes_dmaed(mvi, phy_no);
+				mvs_bytes_dmaed(mvi, phy_no, GFP_ATOMIC);
 				mvs_port_notify_formed(sas_phy, 0);
 				mv_dprintk("phy%d Attached Device\n", phy_no);
 			}
 		}
 	} else if (mwq->handler & EXP_BRCT_CHG) {
 		phy->phy_event &= ~EXP_BRCT_CHG;
-		sas_ha->notify_port_event(sas_phy,
-				PORTE_BROADCAST_RCVD);
+		sas_notify_port_event_gfp(sas_phy,
+				PORTE_BROADCAST_RCVD, GFP_ATOMIC);
 		mv_dprintk("phy%d Got Broadcast Change\n", phy_no);
 	}
 	list_del(&mwq->entry);
@@ -2026,7 +2023,7 @@ void mvs_int_port(struct mvs_info *mvi, int phy_no, u32 events)
 			mdelay(10);
 		}
 
-	mvs_bytes_dmaed(mvi, phy_no);
+	mvs_bytes_dmaed(mvi, phy_no, GFP_ATOMIC);
 	/* whether driver is going to handle hot plug */
 	if (phy->phy_event & PHY_PLUG_OUT) {
 		mvs_port_notify_formed(&phy->sas_phy, 0);
@@ -2274,12 +2274,12 @@ static void myrs_cleanup(struct myrs_hba *cs)
 	if (cs->mmio_base) {
 		cs->disable_intr(cs);
 		iounmap(cs->mmio_base);
+		cs->mmio_base = NULL;
 	}
 	if (cs->irq)
 		free_irq(cs->irq, cs);
 	if (cs->io_addr)
 		release_region(cs->io_addr, 0x80);
-	iounmap(cs->mmio_base);
 	pci_set_drvdata(pdev, NULL);
 	pci_disable_device(pdev);
 	scsi_host_put(cs->host);
@@ -841,10 +841,9 @@ static ssize_t pm8001_store_update_fw(struct device *cdev,
 			pm8001_ha->dev);
 
 	if (ret) {
-		PM8001_FAIL_DBG(pm8001_ha,
-			pm8001_printk(
-			"Failed to load firmware image file %s, error %d\n",
-			filename_ptr, ret));
+		pm8001_dbg(pm8001_ha, FAIL,
+			   "Failed to load firmware image file %s, error %d\n",
+			   filename_ptr, ret);
 		pm8001_ha->fw_status = FAIL_OPEN_BIOS_FILE;
 		goto out;
 	}
File diff suppressed because it is too large
@@ -271,15 +271,14 @@ static int pm8001_alloc(struct pm8001_hba_info *pm8001_ha,
 
 	spin_lock_init(&pm8001_ha->lock);
 	spin_lock_init(&pm8001_ha->bitmap_lock);
-	PM8001_INIT_DBG(pm8001_ha,
-		pm8001_printk("pm8001_alloc: PHY:%x\n",
-				pm8001_ha->chip->n_phy));
+	pm8001_dbg(pm8001_ha, INIT, "pm8001_alloc: PHY:%x\n",
+		   pm8001_ha->chip->n_phy);
 
 	/* Setup Interrupt */
 	rc = pm8001_setup_irq(pm8001_ha);
 	if (rc) {
-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk(
-			"pm8001_setup_irq failed [ret: %d]\n", rc));
+		pm8001_dbg(pm8001_ha, FAIL,
+			   "pm8001_setup_irq failed [ret: %d]\n", rc);
 		goto err_out_shost;
 	}
 	/* Request Interrupt */
@@ -394,9 +393,9 @@ static int pm8001_alloc(struct pm8001_hba_info *pm8001_ha,
 			&pm8001_ha->memoryMap.region[i].phys_addr_lo,
 			pm8001_ha->memoryMap.region[i].total_len,
 			pm8001_ha->memoryMap.region[i].alignment) != 0) {
-			PM8001_FAIL_DBG(pm8001_ha,
-				pm8001_printk("Mem%d alloc failed\n",
-				i));
+			pm8001_dbg(pm8001_ha, FAIL,
+				   "Mem%d alloc failed\n",
+				   i);
 			goto err_out;
 		}
 	}
@@ -412,7 +411,7 @@ static int pm8001_alloc(struct pm8001_hba_info *pm8001_ha,
 		pm8001_ha->devices[i].dev_type = SAS_PHY_UNUSED;
 		pm8001_ha->devices[i].id = i;
 		pm8001_ha->devices[i].device_id = PM8001_MAX_DEVICES;
-		pm8001_ha->devices[i].running_req = 0;
+		atomic_set(&pm8001_ha->devices[i].running_req, 0);
 	}
 	pm8001_ha->flags = PM8001F_INIT_TIME;
 	/* Initialize tags */
@@ -467,15 +466,15 @@ static int pm8001_ioremap(struct pm8001_hba_info *pm8001_ha)
 			pm8001_ha->io_mem[logicalBar].memvirtaddr =
 				ioremap(pm8001_ha->io_mem[logicalBar].membase,
 					pm8001_ha->io_mem[logicalBar].memsize);
-			PM8001_INIT_DBG(pm8001_ha,
-				pm8001_printk("PCI: bar %d, logicalBar %d ",
-				bar, logicalBar));
-			PM8001_INIT_DBG(pm8001_ha, pm8001_printk(
-				"base addr %llx virt_addr=%llx len=%d\n",
-				(u64)pm8001_ha->io_mem[logicalBar].membase,
-				(u64)(unsigned long)
-				pm8001_ha->io_mem[logicalBar].memvirtaddr,
-				pm8001_ha->io_mem[logicalBar].memsize));
+			pm8001_dbg(pm8001_ha, INIT,
+				   "PCI: bar %d, logicalBar %d\n",
+				   bar, logicalBar);
+			pm8001_dbg(pm8001_ha, INIT,
+				   "base addr %llx virt_addr=%llx len=%d\n",
+				   (u64)pm8001_ha->io_mem[logicalBar].membase,
+				   (u64)(unsigned long)
+				   pm8001_ha->io_mem[logicalBar].memvirtaddr,
+				   pm8001_ha->io_mem[logicalBar].memsize);
 		} else {
 			pm8001_ha->io_mem[logicalBar].membase = 0;
 			pm8001_ha->io_mem[logicalBar].memsize = 0;
@@ -520,8 +519,8 @@ static struct pm8001_hba_info *pm8001_pci_alloc(struct pci_dev *pdev,
 	else {
 		pm8001_ha->link_rate = LINKRATE_15 | LINKRATE_30 |
 			LINKRATE_60 | LINKRATE_120;
-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk(
-			"Setting link rate to default value\n"));
+		pm8001_dbg(pm8001_ha, FAIL,
+			   "Setting link rate to default value\n");
 	}
 	sprintf(pm8001_ha->name, "%s%d", DRV_NAME, pm8001_ha->id);
 	/* IOMB size is 128 for 8088/89 controllers */
@@ -684,13 +683,13 @@ static void pm8001_init_sas_add(struct pm8001_hba_info *pm8001_ha)
 	payload.offset = 0;
 	payload.func_specific = kzalloc(payload.rd_length, GFP_KERNEL);
 	if (!payload.func_specific) {
-		PM8001_INIT_DBG(pm8001_ha, pm8001_printk("mem alloc fail\n"));
+		pm8001_dbg(pm8001_ha, INIT, "mem alloc fail\n");
 		return;
 	}
 	rc = PM8001_CHIP_DISP->get_nvmd_req(pm8001_ha, &payload);
 	if (rc) {
 		kfree(payload.func_specific);
-		PM8001_INIT_DBG(pm8001_ha, pm8001_printk("nvmd failed\n"));
+		pm8001_dbg(pm8001_ha, INIT, "nvmd failed\n");
 		return;
 	}
 	wait_for_completion(&completion);
@@ -718,9 +717,8 @@ static void pm8001_init_sas_add(struct pm8001_hba_info *pm8001_ha)
 			sas_add[7] = sas_add[7] + 4;
 		memcpy(&pm8001_ha->phy[i].dev_sas_addr,
 			sas_add, SAS_ADDR_SIZE);
-		PM8001_INIT_DBG(pm8001_ha,
-			pm8001_printk("phy %d sas_addr = %016llx\n", i,
-			pm8001_ha->phy[i].dev_sas_addr));
+		pm8001_dbg(pm8001_ha, INIT, "phy %d sas_addr = %016llx\n", i,
+			   pm8001_ha->phy[i].dev_sas_addr);
 	}
 	kfree(payload.func_specific);
 #else
@@ -760,7 +758,7 @@ static int pm8001_get_phy_settings_info(struct pm8001_hba_info *pm8001_ha)
 	rc = PM8001_CHIP_DISP->get_nvmd_req(pm8001_ha, &payload);
 	if (rc) {
 		kfree(payload.func_specific);
-		PM8001_INIT_DBG(pm8001_ha, pm8001_printk("nvmd failed\n"));
+		pm8001_dbg(pm8001_ha, INIT, "nvmd failed\n");
 		return -ENOMEM;
 	}
 	wait_for_completion(&completion);
@@ -854,9 +852,9 @@ void pm8001_get_phy_mask(struct pm8001_hba_info *pm8001_ha, int *phymask)
 		break;
 
 	default:
-		PM8001_INIT_DBG(pm8001_ha,
-			pm8001_printk("Unknown subsystem device=0x%.04x",
-				pm8001_ha->pdev->subsystem_device));
+		pm8001_dbg(pm8001_ha, INIT,
+			   "Unknown subsystem device=0x%.04x\n",
+			   pm8001_ha->pdev->subsystem_device);
 	}
 }
 
@@ -950,9 +948,9 @@ static u32 pm8001_setup_msix(struct pm8001_hba_info *pm8001_ha)
 	/* Maximum queue number updating in HBA structure */
 	pm8001_ha->max_q_num = number_of_intr;
 
-	PM8001_INIT_DBG(pm8001_ha, pm8001_printk(
-		"pci_alloc_irq_vectors request ret:%d no of intr %d\n",
-				rc, pm8001_ha->number_of_intr));
+	pm8001_dbg(pm8001_ha, INIT,
+		   "pci_alloc_irq_vectors request ret:%d no of intr %d\n",
+		   rc, pm8001_ha->number_of_intr);
 	return 0;
 }
 
@@ -964,9 +962,9 @@ static u32 pm8001_request_msix(struct pm8001_hba_info *pm8001_ha)
 	if (pm8001_ha->chip_id != chip_8001)
 		flag &= ~IRQF_SHARED;
 
-	PM8001_INIT_DBG(pm8001_ha,
-		pm8001_printk("pci_enable_msix request number of intr %d\n",
-		pm8001_ha->number_of_intr));
+	pm8001_dbg(pm8001_ha, INIT,
+		   "pci_enable_msix request number of intr %d\n",
+		   pm8001_ha->number_of_intr);
 
 	for (i = 0; i < pm8001_ha->number_of_intr; i++) {
 		snprintf(pm8001_ha->intr_drvname[i],
@@ -1002,8 +1000,7 @@ static u32 pm8001_setup_irq(struct pm8001_hba_info *pm8001_ha)
 #ifdef PM8001_USE_MSIX
 	if (pci_find_capability(pdev, PCI_CAP_ID_MSIX))
 		return pm8001_setup_msix(pm8001_ha);
-	PM8001_INIT_DBG(pm8001_ha,
-		pm8001_printk("MSIX not supported!!!\n"));
+	pm8001_dbg(pm8001_ha, INIT, "MSIX not supported!!!\n");
 #endif
 	return 0;
 }
@@ -1023,8 +1020,7 @@ static u32 pm8001_request_irq(struct pm8001_hba_info *pm8001_ha)
 	if (pdev->msix_cap && pci_msi_enabled())
 		return pm8001_request_msix(pm8001_ha);
 	else {
-		PM8001_INIT_DBG(pm8001_ha,
-			pm8001_printk("MSIX not supported!!!\n"));
+		pm8001_dbg(pm8001_ha, INIT, "MSIX not supported!!!\n");
 		goto intx;
 	}
 #endif
@@ -1108,8 +1104,8 @@ static int pm8001_pci_probe(struct pci_dev *pdev,
 	PM8001_CHIP_DISP->chip_soft_rst(pm8001_ha);
 	rc = PM8001_CHIP_DISP->chip_init(pm8001_ha);
 	if (rc) {
-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk(
-			"chip_init failed [ret: %d]\n", rc));
+		pm8001_dbg(pm8001_ha, FAIL,
+			   "chip_init failed [ret: %d]\n", rc);
 		goto err_out_ha_free;
 	}
 
@@ -1138,8 +1134,8 @@ static int pm8001_pci_probe(struct pci_dev *pdev,
 	pm8001_post_sas_ha_init(shost, chip);
 	rc = sas_register_ha(SHOST_TO_SAS_HA(shost));
 	if (rc) {
-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk(
-			"sas_register_ha failed [ret: %d]\n", rc));
+		pm8001_dbg(pm8001_ha, FAIL,
+			   "sas_register_ha failed [ret: %d]\n", rc);
 		goto err_out_shost;
 	}
 	list_add_tail(&pm8001_ha->list, &hba_list);
@@ -1191,8 +1187,8 @@ pm8001_init_ccb_tag(struct pm8001_hba_info *pm8001_ha, struct Scsi_Host *shost,
 	pm8001_ha->ccb_info = (struct pm8001_ccb_info *)
 		kcalloc(ccb_count, sizeof(struct pm8001_ccb_info), GFP_KERNEL);
 	if (!pm8001_ha->ccb_info) {
-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk
-			("Unable to allocate memory for ccb\n"));
+		pm8001_dbg(pm8001_ha, FAIL,
+			   "Unable to allocate memory for ccb\n");
 		goto err_out_noccb;
 	}
 	for (i = 0; i < ccb_count; i++) {
@@ -1200,8 +1196,8 @@ pm8001_init_ccb_tag(struct pm8001_hba_info *pm8001_ha, struct Scsi_Host *shost,
 			sizeof(struct pm8001_prd) * PM8001_MAX_DMA_SG,
 			&pm8001_ha->ccb_info[i].ccb_dma_handle);
 		if (!pm8001_ha->ccb_info[i].buf_prd) {
-			PM8001_FAIL_DBG(pm8001_ha, pm8001_printk
-				("pm80xx: ccb prd memory allocation error\n"));
+			pm8001_dbg(pm8001_ha, FAIL,
+				   "pm80xx: ccb prd memory allocation error\n");
			goto err_out;
 		}
 		pm8001_ha->ccb_info[i].task = NULL;
@@ -1345,8 +1341,7 @@ static int pm8001_pci_resume(struct pci_dev *pdev)
 	/* chip soft rst only for spc */
 	if (pm8001_ha->chip_id == chip_8001) {
 		PM8001_CHIP_DISP->chip_soft_rst(pm8001_ha);
-		PM8001_INIT_DBG(pm8001_ha,
-			pm8001_printk("chip soft reset successful\n"));
+		pm8001_dbg(pm8001_ha, INIT, "chip soft reset successful\n");
 	}
 	rc = PM8001_CHIP_DISP->chip_init(pm8001_ha);
 	if (rc)
@@ -158,7 +158,6 @@ int pm8001_phy_control(struct asd_sas_phy *sas_phy, enum phy_func func,
 	int rc = 0, phy_id = sas_phy->id;
 	struct pm8001_hba_info *pm8001_ha = NULL;
 	struct sas_phy_linkrates *rates;
-	struct sas_ha_struct *sas_ha;
 	struct pm8001_phy *phy;
 	DECLARE_COMPLETION_ONSTACK(completion);
 	unsigned long flags;
@@ -207,18 +206,16 @@ int pm8001_phy_control(struct asd_sas_phy *sas_phy, enum phy_func func,
 		if (pm8001_ha->chip_id != chip_8001) {
 			if (pm8001_ha->phy[phy_id].phy_state ==
 				PHY_STATE_LINK_UP_SPCV) {
-				sas_ha = pm8001_ha->sas;
 				sas_phy_disconnected(&phy->sas_phy);
-				sas_ha->notify_phy_event(&phy->sas_phy,
+				sas_notify_phy_event(&phy->sas_phy,
 					PHYE_LOSS_OF_SIGNAL);
 				phy->phy_attached = 0;
 			}
 		} else {
 			if (pm8001_ha->phy[phy_id].phy_state ==
 				PHY_STATE_LINK_UP_SPC) {
-				sas_ha = pm8001_ha->sas;
 				sas_phy_disconnected(&phy->sas_phy);
-				sas_ha->notify_phy_event(&phy->sas_phy,
+				sas_notify_phy_event(&phy->sas_phy,
 					PHYE_LOSS_OF_SIGNAL);
 				phy->phy_attached = 0;
 			}
@@ -250,8 +247,7 @@ int pm8001_phy_control(struct asd_sas_phy *sas_phy, enum phy_func func,
 		spin_unlock_irqrestore(&pm8001_ha->lock, flags);
 		return 0;
 	default:
-		PM8001_DEVIO_DBG(pm8001_ha,
-			pm8001_printk("func 0x%x\n", func));
+		pm8001_dbg(pm8001_ha, DEVIO, "func 0x%x\n", func);
 		rc = -EOPNOTSUPP;
 	}
 	msleep(300);
@@ -405,7 +401,7 @@ static int pm8001_task_exec(struct sas_task *task,
 		t->task_done(t);
 		return 0;
 	}
-	PM8001_IO_DBG(pm8001_ha, pm8001_printk("pm8001_task_exec device \n "));
+	pm8001_dbg(pm8001_ha, IO, "pm8001_task_exec device\n");
 	spin_lock_irqsave(&pm8001_ha->lock, flags);
 	do {
 		dev = t->dev;
@@ -456,9 +452,11 @@ static int pm8001_task_exec(struct sas_task *task,
 		ccb->device = pm8001_dev;
 		switch (task_proto) {
 		case SAS_PROTOCOL_SMP:
+			atomic_inc(&pm8001_dev->running_req);
 			rc = pm8001_task_prep_smp(pm8001_ha, ccb);
 			break;
 		case SAS_PROTOCOL_SSP:
+			atomic_inc(&pm8001_dev->running_req);
 			if (is_tmf)
 				rc = pm8001_task_prep_ssp_tm(pm8001_ha,
 					ccb, tmf);
@@ -467,6 +465,7 @@ static int pm8001_task_exec(struct sas_task *task,
 			break;
 		case SAS_PROTOCOL_SATA:
 		case SAS_PROTOCOL_STP:
+			atomic_inc(&pm8001_dev->running_req);
 			rc = pm8001_task_prep_ata(pm8001_ha, ccb);
 			break;
 		default:
@@ -477,15 +476,14 @@ static int pm8001_task_exec(struct sas_task *task,
 		}
 
 		if (rc) {
-			PM8001_IO_DBG(pm8001_ha,
-				pm8001_printk("rc is %x\n", rc));
+			pm8001_dbg(pm8001_ha, IO, "rc is %x\n", rc);
+			atomic_dec(&pm8001_dev->running_req);
 			goto err_out_tag;
 		}
 		/* TODO: select normal or high priority */
 		spin_lock(&t->task_state_lock);
 		t->task_state_flags |= SAS_TASK_AT_INITIATOR;
 		spin_unlock(&t->task_state_lock);
-		pm8001_dev->running_req++;
 	} while (0);
 	rc = 0;
 	goto out_done;
@@ -567,9 +565,9 @@ static struct pm8001_device *pm8001_alloc_dev(struct pm8001_hba_info *pm8001_ha)
 		}
 	}
 	if (dev == PM8001_MAX_DEVICES) {
-		PM8001_FAIL_DBG(pm8001_ha,
-			pm8001_printk("max support %d devices, ignore ..\n",
-			PM8001_MAX_DEVICES));
+		pm8001_dbg(pm8001_ha, FAIL,
+			   "max support %d devices, ignore ..\n",
+			   PM8001_MAX_DEVICES);
 	}
 	return NULL;
 }
@@ -587,8 +585,7 @@ struct pm8001_device *pm8001_find_dev(struct pm8001_hba_info *pm8001_ha,
 			return &pm8001_ha->devices[dev];
 	}
 	if (dev == PM8001_MAX_DEVICES) {
-		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk("NO MATCHING "
-				"DEVICE FOUND !!!\n"));
+		pm8001_dbg(pm8001_ha, FAIL, "NO MATCHING DEVICE FOUND !!!\n");
 	}
 	return NULL;
 }
@@ -649,10 +646,10 @@ static int pm8001_dev_found_notify(struct domain_device *dev)
 			}
 		}
 		if (phy_id == parent_dev->ex_dev.num_phys) {
-			PM8001_FAIL_DBG(pm8001_ha,
-			pm8001_printk("Error: no attached dev:%016llx"
-			" at ex:%016llx.\n", SAS_ADDR(dev->sas_addr),
-				SAS_ADDR(parent_dev->sas_addr)));
+			pm8001_dbg(pm8001_ha, FAIL,
+				   "Error: no attached dev:%016llx at ex:%016llx.\n",
+				   SAS_ADDR(dev->sas_addr),
+				   SAS_ADDR(parent_dev->sas_addr));
 			res = -1;
 		}
 	} else {
@@ -662,7 +659,7 @@ static int pm8001_dev_found_notify(struct domain_device *dev)
 			flag = 1; /* directly sata */
 		}
 	} /*register this device to HBA*/
-	PM8001_DISC_DBG(pm8001_ha, pm8001_printk("Found device\n"));
+	pm8001_dbg(pm8001_ha, DISC, "Found device\n");
 	PM8001_CHIP_DISP->reg_dev_req(pm8001_ha, pm8001_device, flag);
 	spin_unlock_irqrestore(&pm8001_ha->lock, flags);
 	wait_for_completion(&completion);
@@ -734,9 +731,7 @@ static int pm8001_exec_internal_tmf_task(struct domain_device *dev,
 
 		if (res) {
 			del_timer(&task->slow_task->timer);
-			PM8001_FAIL_DBG(pm8001_ha,
-				pm8001_printk("Executing internal task "
-					"failed\n"));
+			pm8001_dbg(pm8001_ha, FAIL, "Executing internal task failed\n");
 			goto ex_err;
 		}
 		wait_for_completion(&task->slow_task->completion);
@@ -750,9 +745,9 @@ static int pm8001_exec_internal_tmf_task(struct domain_device *dev,
 		/* Even TMF timed out, return direct. */
 		if ((task->task_state_flags & SAS_TASK_STATE_ABORTED)) {
 			if (!(task->task_state_flags & SAS_TASK_STATE_DONE)) {
-				PM8001_FAIL_DBG(pm8001_ha,
-					pm8001_printk("TMF task[%x]timeout.\n",
-					tmf->tmf));
+				pm8001_dbg(pm8001_ha, FAIL,
+					   "TMF task[%x]timeout.\n",
+					   tmf->tmf);
 				goto ex_err;
 			}
 		}
@@ -773,17 +768,15 @@ static int pm8001_exec_internal_tmf_task(struct domain_device *dev,
 
 		if (task->task_status.resp == SAS_TASK_COMPLETE &&
 			task->task_status.stat == SAS_DATA_OVERRUN) {
-			PM8001_FAIL_DBG(pm8001_ha,
-				pm8001_printk("Blocked task error.\n"));
+			pm8001_dbg(pm8001_ha, FAIL, "Blocked task error.\n");
 			res = -EMSGSIZE;
 			break;
 		} else {
-			PM8001_EH_DBG(pm8001_ha,
-				pm8001_printk(" Task to dev %016llx response:"
-				"0x%x status 0x%x\n",
-				SAS_ADDR(dev->sas_addr),
-				task->task_status.resp,
-				task->task_status.stat));
+			pm8001_dbg(pm8001_ha, EH,
+				   " Task to dev %016llx response:0x%x status 0x%x\n",
+				   SAS_ADDR(dev->sas_addr),
+				   task->task_status.resp,
+				   task->task_status.stat);
 			sas_free_task(task);
 			task = NULL;
 		}
@@ -830,9 +823,7 @@ pm8001_exec_internal_task_abort(struct pm8001_hba_info *pm8001_ha,
 
 	if (res) {
 		del_timer(&task->slow_task->timer);
-		PM8001_FAIL_DBG(pm8001_ha,
-			pm8001_printk("Executing internal task "
-				"failed\n"));
+		pm8001_dbg(pm8001_ha, FAIL, "Executing internal task failed\n");
 		goto ex_err;
 	}
 	wait_for_completion(&task->slow_task->completion);
@@ -840,8 +831,8 @@ pm8001_exec_internal_task_abort(struct pm8001_hba_info *pm8001_ha,
 	/* Even TMF timed out, return direct. */
 	if ((task->task_state_flags & SAS_TASK_STATE_ABORTED)) {
 		if (!(task->task_state_flags & SAS_TASK_STATE_DONE)) {
-			PM8001_FAIL_DBG(pm8001_ha,
-				pm8001_printk("TMF task timeout.\n"));
+			pm8001_dbg(pm8001_ha, FAIL,
+				   "TMF task timeout.\n");
 			goto ex_err;
 		}
 	}
@@ -852,12 +843,11 @@ pm8001_exec_internal_task_abort(struct pm8001_hba_info *pm8001_ha,
 		break;
 
 	} else {
-		PM8001_EH_DBG(pm8001_ha,
-			pm8001_printk(" Task to dev %016llx response: "
-				"0x%x status 0x%x\n",
-				SAS_ADDR(dev->sas_addr),
-				task->task_status.resp,
-				task->task_status.stat));
+		pm8001_dbg(pm8001_ha, EH,
+			   " Task to dev %016llx response: 0x%x status 0x%x\n",
+			   SAS_ADDR(dev->sas_addr),
+			   task->task_status.resp,
+			   task->task_status.stat);
 		sas_free_task(task);
 		task = NULL;
 	}
@@ -883,22 +873,20 @@ static void pm8001_dev_gone_notify(struct domain_device *dev)
 	if (pm8001_dev) {
 		u32 device_id = pm8001_dev->device_id;
 
-		PM8001_DISC_DBG(pm8001_ha,
-			pm8001_printk("found dev[%d:%x] is gone.\n",
-				pm8001_dev->device_id, pm8001_dev->dev_type));
-		if (pm8001_dev->running_req) {
+		pm8001_dbg(pm8001_ha, DISC, "found dev[%d:%x] is gone.\n",
+			   pm8001_dev->device_id, pm8001_dev->dev_type);
+		if (atomic_read(&pm8001_dev->running_req)) {
 			spin_unlock_irqrestore(&pm8001_ha->lock, flags);
 			pm8001_exec_internal_task_abort(pm8001_ha, pm8001_dev ,
 				dev, 1, 0);
-			while (pm8001_dev->running_req)
+			while (atomic_read(&pm8001_dev->running_req))
 				msleep(20);
 			spin_lock_irqsave(&pm8001_ha->lock, flags);
 		}
 		PM8001_CHIP_DISP->dereg_dev_req(pm8001_ha, device_id);
 		pm8001_free_dev(pm8001_dev);
 	} else {
-		PM8001_DISC_DBG(pm8001_ha,
-			pm8001_printk("Found dev has gone.\n"));
+		pm8001_dbg(pm8001_ha, DISC, "Found dev has gone.\n");
 	}
 	dev->lldd_dev = NULL;
 	spin_unlock_irqrestore(&pm8001_ha->lock, flags);
@@ -968,7 +956,7 @@ void pm8001_open_reject_retry(
 			ts->stat = SAS_OPEN_REJECT;
 			ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
 			if (pm8001_dev)
-				pm8001_dev->running_req--;
+				atomic_dec(&pm8001_dev->running_req);
 			spin_lock_irqsave(&task->task_state_lock, flags1);
 			task->task_state_flags &= ~SAS_TASK_STATE_PENDING;
 			task->task_state_flags &= ~SAS_TASK_AT_INITIATOR;
@@ -1018,9 +1006,9 @@ int pm8001_I_T_nexus_reset(struct domain_device *dev)
 		}
 		rc = sas_phy_reset(phy, 1);
 		if (rc) {
-			PM8001_EH_DBG(pm8001_ha,
-				pm8001_printk("phy reset failed for device %x\n"
-					"with rc %d\n", pm8001_dev->device_id, rc));
+			pm8001_dbg(pm8001_ha, EH,
+				   "phy reset failed for device %x\n"
+				   "with rc %d\n", pm8001_dev->device_id, rc);
 			rc = TMF_RESP_FUNC_FAILED;
 			goto out;
 		}
@@ -1028,17 +1016,16 @@ int pm8001_I_T_nexus_reset(struct domain_device *dev)
 		rc = pm8001_exec_internal_task_abort(pm8001_ha, pm8001_dev ,
 			dev, 1, 0);
 		if (rc) {
-			PM8001_EH_DBG(pm8001_ha,
-				pm8001_printk("task abort failed %x\n"
-					"with rc %d\n", pm8001_dev->device_id, rc));
+			pm8001_dbg(pm8001_ha, EH, "task abort failed %x\n"
+				   "with rc %d\n", pm8001_dev->device_id, rc);
 			rc = TMF_RESP_FUNC_FAILED;
 		}
 	} else {
 		rc = sas_phy_reset(phy, 1);
 		msleep(2000);
 	}
-	PM8001_EH_DBG(pm8001_ha, pm8001_printk(" for device[%x]:rc=%d\n",
-		pm8001_dev->device_id, rc));
+	pm8001_dbg(pm8001_ha, EH, " for device[%x]:rc=%d\n",
+		   pm8001_dev->device_id, rc);
 out:
 	sas_put_local_phy(phy);
 	return rc;
@@ -1061,8 +1048,7 @@ int pm8001_I_T_nexus_event_handler(struct domain_device *dev)
 	pm8001_dev = dev->lldd_dev;
 	pm8001_ha = pm8001_find_ha_by_dev(dev);
 
-	PM8001_EH_DBG(pm8001_ha,
-		pm8001_printk("I_T_Nexus handler invoked !!"));
+	pm8001_dbg(pm8001_ha, EH, "I_T_Nexus handler invoked !!\n");
 
 	phy = sas_get_local_phy(dev);
 
@@ -1101,8 +1087,8 @@ int pm8001_I_T_nexus_event_handler(struct domain_device *dev)
 		rc = sas_phy_reset(phy, 1);
 		msleep(2000);
 	}
-	PM8001_EH_DBG(pm8001_ha, pm8001_printk(" for device[%x]:rc=%d\n",
-		pm8001_dev->device_id, rc));
+	pm8001_dbg(pm8001_ha, EH, " for device[%x]:rc=%d\n",
+		   pm8001_dev->device_id, rc);
 out:
 	sas_put_local_phy(phy);
 
@@ -1131,8 +1117,8 @@ int pm8001_lu_reset(struct domain_device *dev, u8 *lun)
 		rc = pm8001_issue_ssp_tmf(dev, lun, &tmf_task);
 	}
 	/* If failed, fall-through I_T_Nexus reset */
-	PM8001_EH_DBG(pm8001_ha, pm8001_printk("for device[%x]:rc=%d\n",
-		pm8001_dev->device_id, rc));
+	pm8001_dbg(pm8001_ha, EH, "for device[%x]:rc=%d\n",
+		   pm8001_dev->device_id, rc);
 	return rc;
 }
 
@@ -1140,7 +1126,6 @@ int pm8001_lu_reset(struct domain_device *dev, u8 *lun)
 int pm8001_query_task(struct sas_task *task)
 {
 	u32 tag = 0xdeadbeef;
-	int i = 0;
 	struct scsi_lun lun;
 	struct pm8001_tmf_task tmf_task;
 	int rc = TMF_RESP_FUNC_FAILED;
@@ -1159,10 +1144,7 @@ int pm8001_query_task(struct sas_task *task)
 			rc = TMF_RESP_FUNC_FAILED;
 			return rc;
 		}
-		PM8001_EH_DBG(pm8001_ha, pm8001_printk("Query:["));
-		for (i = 0; i < 16; i++)
-			printk(KERN_INFO "%02x ", cmnd->cmnd[i]);
-		printk(KERN_INFO "]\n");
+		pm8001_dbg(pm8001_ha, EH, "Query:[%16ph]\n", cmnd->cmnd);
 		tmf_task.tmf = TMF_QUERY_TASK;
 		tmf_task.tag_of_task_to_be_managed = tag;
 
@@ -1170,15 +1152,14 @@ int pm8001_query_task(struct sas_task *task)
 		switch (rc) {
 		/* The task is still in Lun, release it then */
 		case TMF_RESP_FUNC_SUCC:
-			PM8001_EH_DBG(pm8001_ha,
-				pm8001_printk("The task is still in Lun\n"));
+			pm8001_dbg(pm8001_ha, EH,
+				   "The task is still in Lun\n");
 			break;
 		/* The task is not in Lun or failed, reset the phy */
 		case TMF_RESP_FUNC_FAILED:
 		case TMF_RESP_FUNC_COMPLETE:
-			PM8001_EH_DBG(pm8001_ha,
-				pm8001_printk("The task is not in Lun or failed,"
-					" reset the phy\n"));
+			pm8001_dbg(pm8001_ha, EH,
+				   "The task is not in Lun or failed, reset the phy\n");
 			break;
 		}
 	}
@@ -1264,8 +1245,8 @@ int pm8001_abort_task(struct sas_task *task)
 			 * leaking the task in libsas or losing the race and
 			 * getting a double free.
 			 */
-			PM8001_MSG_DBG(pm8001_ha,
-				pm8001_printk("Waiting for local phy ctl\n"));
+			pm8001_dbg(pm8001_ha, MSG,
+				   "Waiting for local phy ctl\n");
 			ret = wait_for_completion_timeout(&completion,
 					PM8001_TASK_TIMEOUT * HZ);
 			if (!ret || !phy->reset_success) {
@@ -1275,8 +1256,8 @@ int pm8001_abort_task(struct sas_task *task)
 				/* 3. Wait for Port Reset complete or
 				 * Port reset TMO
 				 */
-				PM8001_MSG_DBG(pm8001_ha,
-					pm8001_printk("Waiting for Port reset\n"));
+				pm8001_dbg(pm8001_ha, MSG,
+					   "Waiting for Port reset\n");
 				ret = wait_for_completion_timeout(
 					&completion_reset,
 					PM8001_TASK_TIMEOUT * HZ);
@@ -1355,9 +1336,8 @@ int pm8001_clear_task_set(struct domain_device *dev, u8 *lun)
 	struct pm8001_device *pm8001_dev = dev->lldd_dev;
 	struct pm8001_hba_info *pm8001_ha = pm8001_find_ha_by_dev(dev);
 
-	PM8001_EH_DBG(pm8001_ha,
-		pm8001_printk("I_T_L_Q clear task set[%x]\n",
-			pm8001_dev->device_id));
+	pm8001_dbg(pm8001_ha, EH, "I_T_L_Q clear task set[%x]\n",
+		   pm8001_dev->device_id);
 	tmf_task.tmf = TMF_CLEAR_TASK_SET;
 	return pm8001_issue_ssp_tmf(dev, lun, &tmf_task);
 }
@@ -69,45 +69,16 @@
 #define PM8001_DEV_LOGGING	0x80 /* development message logging */
 #define PM8001_DEVIO_LOGGING	0x100 /* development io message logging */
 #define PM8001_IOERR_LOGGING	0x200 /* development io err message logging */
-#define pm8001_printk(format, arg...)	pr_info("%s:: %s %d:" \
-			format, pm8001_ha->name, __func__, __LINE__, ## arg)
-#define PM8001_CHECK_LOGGING(HBA, LEVEL, CMD)	\
-do {						\
-	if (unlikely(HBA->logging_level & LEVEL))	\
-		do {					\
-			CMD;				\
-		} while (0);				\
-} while (0);
 
-#define PM8001_EH_DBG(HBA, CMD)		\
-	PM8001_CHECK_LOGGING(HBA, PM8001_EH_LOGGING, CMD)
+#define pm8001_printk(fmt, ...)						\
+	pr_info("%s:: %s %d:" fmt,					\
+		pm8001_ha->name, __func__, __LINE__, ##__VA_ARGS__)
 
-#define PM8001_INIT_DBG(HBA, CMD)	\
-	PM8001_CHECK_LOGGING(HBA, PM8001_INIT_LOGGING, CMD)
-
-#define PM8001_DISC_DBG(HBA, CMD)	\
-	PM8001_CHECK_LOGGING(HBA, PM8001_DISC_LOGGING, CMD)
-
-#define PM8001_IO_DBG(HBA, CMD)		\
-	PM8001_CHECK_LOGGING(HBA, PM8001_IO_LOGGING, CMD)
-
-#define PM8001_FAIL_DBG(HBA, CMD)	\
-	PM8001_CHECK_LOGGING(HBA, PM8001_FAIL_LOGGING, CMD)
-
-#define PM8001_IOCTL_DBG(HBA, CMD)	\
-	PM8001_CHECK_LOGGING(HBA, PM8001_IOCTL_LOGGING, CMD)
-
-#define PM8001_MSG_DBG(HBA, CMD)	\
-	PM8001_CHECK_LOGGING(HBA, PM8001_MSG_LOGGING, CMD)
-
-#define PM8001_DEV_DBG(HBA, CMD)	\
-	PM8001_CHECK_LOGGING(HBA, PM8001_DEV_LOGGING, CMD)
-
-#define PM8001_DEVIO_DBG(HBA, CMD)	\
-	PM8001_CHECK_LOGGING(HBA, PM8001_DEVIO_LOGGING, CMD)
-
-#define PM8001_IOERR_DBG(HBA, CMD)	\
-	PM8001_CHECK_LOGGING(HBA, PM8001_IOERR_LOGGING, CMD)
+#define pm8001_dbg(HBA, level, fmt, ...)				\
+do {									\
+	if (unlikely((HBA)->logging_level & PM8001_##level##_LOGGING))	\
+		pm8001_printk(fmt, ##__VA_ARGS__);			\
+} while (0)
 
 #define PM8001_USE_TASKLET
 #define PM8001_USE_MSIX
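The `pm8001_dbg()` macro above folds ten per-level wrappers into one by token-pasting the level name into the flag macro. Here is a minimal userspace sketch of that pattern (names like `hba_dbg` and `messages_emitted` are illustrative, not the driver's):

```c
#include <stdio.h>

/* Simplified flag macros, mirroring PM8001_EH_LOGGING etc. */
#define EH_LOGGING   0x01
#define FAIL_LOGGING 0x04

struct hba { unsigned int logging_level; };

static int messages_emitted;	/* counts messages for demonstration */

/*
 * The level argument (EH, FAIL, ...) is pasted onto "_LOGGING" to
 * select the flag, so call sites no longer wrap whole statements.
 */
#define hba_dbg(hba, level, fmt, ...)					\
	do {								\
		if ((hba)->logging_level & level##_LOGGING) {		\
			printf("%s: " fmt, #level, ##__VA_ARGS__);	\
			messages_emitted++;				\
		}							\
	} while (0)
```

Note that `##__VA_ARGS__` is a GNU extension (accepted by GCC and Clang), which is also what the kernel macro relies on.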
@@ -293,7 +264,7 @@ struct pm8001_device {
 	struct completion *dcompletion;
 	struct completion *setds_completion;
 	u32 device_id;
-	u32 running_req;
+	atomic_t running_req;
 };
 
 struct pm8001_prd_imt {
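The `running_req` field becomes an `atomic_t` so concurrent increments and decrements from completion paths cannot lose updates. A userspace sketch of the same shape using C11 atomics (the struct and function names here are illustrative stand-ins for the kernel's `atomic_inc`/`atomic_dec`/`atomic_read`):

```c
#include <stdatomic.h>

struct dev_state {
	atomic_int running_req;	/* was a plain u32 counter */
};

static void req_start(struct dev_state *d)
{
	atomic_fetch_add(&d->running_req, 1);	/* cf. atomic_inc() */
}

static void req_done(struct dev_state *d)
{
	atomic_fetch_sub(&d->running_req, 1);	/* cf. atomic_dec() */
}

static int req_count(struct dev_state *d)
{
	return atomic_load(&d->running_req);	/* cf. atomic_read() */
}
```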
[File diff suppressed because it is too large]
@@ -813,7 +813,7 @@ static void ufs_mtk_vreg_set_lpm(struct ufs_hba *hba, bool lpm)
 	if (!hba->vreg_info.vccq2 || !hba->vreg_info.vcc)
 		return;
 
-	if (lpm & !hba->vreg_info.vcc->enabled)
+	if (lpm && !hba->vreg_info.vcc->enabled)
 		regulator_set_mode(hba->vreg_info.vccq2->reg,
 				   REGULATOR_MODE_IDLE);
 	else if (!lpm)
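The ufs-mediatek fix is a single character, `&` to `&&`, but the semantics differ: bitwise AND tests bit overlap, not truthiness. A hedged stand-alone sketch (the function names are illustrative; the real driver operates on regulator state, and its `lpm` is a `bool`, but with any plain integer flag the bitwise form misfires):

```c
#include <stdbool.h>

/* Buggy form: with lpm == 2, 2 & 1 == 0, so the branch is skipped. */
static bool lpm_branch_bitwise(int lpm, bool vcc_enabled)
{
	return lpm & !vcc_enabled;
}

/* Fixed form: any nonzero lpm counts as true. */
static bool lpm_branch_logical(int lpm, bool vcc_enabled)
{
	return lpm && !vcc_enabled;
}
```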
@@ -1198,6 +1198,7 @@ static int cqspi_probe(struct platform_device *pdev)
 	cqspi = spi_master_get_devdata(master);
 
 	cqspi->pdev = pdev;
+	platform_set_drvdata(pdev, cqspi);
 
 	/* Obtain configuration from OF. */
 	ret = cqspi_of_get_pdata(cqspi);
@@ -103,6 +103,25 @@ static const struct cedrus_control cedrus_controls[] = {
 		.codec		= CEDRUS_CODEC_H264,
 		.required	= false,
 	},
+	/*
+	 * We only expose supported profiles information,
+	 * and not levels as it's not clear what is supported
+	 * for each hardware/core version.
+	 * In any case, TRY/S_FMT will clamp the format resolution
+	 * to the maximum supported.
+	 */
+	{
+		.cfg = {
+			.id	= V4L2_CID_MPEG_VIDEO_H264_PROFILE,
+			.min	= V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE,
+			.def	= V4L2_MPEG_VIDEO_H264_PROFILE_MAIN,
+			.max	= V4L2_MPEG_VIDEO_H264_PROFILE_HIGH,
+			.menu_skip_mask =
+				BIT(V4L2_MPEG_VIDEO_H264_PROFILE_EXTENDED),
+		},
+		.codec		= CEDRUS_CODEC_H264,
+		.required	= false,
+	},
 	{
 		.cfg = {
 			.id	= V4L2_CID_MPEG_VIDEO_HEVC_SPS,
@@ -761,12 +761,6 @@ static int tb_init_port(struct tb_port *port)
 
 	tb_dump_port(port->sw->tb, &port->config);
 
-	/* Control port does not need HopID allocation */
-	if (port->port) {
-		ida_init(&port->in_hopids);
-		ida_init(&port->out_hopids);
-	}
-
 	INIT_LIST_HEAD(&port->list);
 	return 0;
 
@@ -1764,10 +1758,8 @@ static void tb_switch_release(struct device *dev)
 	dma_port_free(sw->dma_port);
 
 	tb_switch_for_each_port(sw, port) {
-		if (!port->disabled) {
-			ida_destroy(&port->in_hopids);
-			ida_destroy(&port->out_hopids);
-		}
+		ida_destroy(&port->in_hopids);
+		ida_destroy(&port->out_hopids);
 	}
 
 	kfree(sw->uuid);
@@ -1947,6 +1939,12 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
 		/* minimum setup for tb_find_cap and tb_drom_read to work */
 		sw->ports[i].sw = sw;
 		sw->ports[i].port = i;
+
+		/* Control port does not need HopID allocation */
+		if (i) {
+			ida_init(&sw->ports[i].in_hopids);
+			ida_init(&sw->ports[i].out_hopids);
+		}
 	}
 
 	ret = tb_switch_find_vse_cap(sw, TB_VSE_CAP_PLUG_EVENTS);
@@ -138,6 +138,10 @@ static void tb_discover_tunnels(struct tb_switch *sw)
 				parent->boot = true;
 				parent = tb_switch_parent(parent);
 			}
+		} else if (tb_tunnel_is_dp(tunnel)) {
+			/* Keep the domain from powering down */
+			pm_runtime_get_sync(&tunnel->src_port->sw->dev);
+			pm_runtime_get_sync(&tunnel->dst_port->sw->dev);
 		}
 
 		list_add_tail(&tunnel->list, &tcm->tunnel_list);
@@ -350,7 +350,6 @@ static void stm32_transmit_chars_dma(struct uart_port *port)
 	struct stm32_usart_offsets *ofs = &stm32port->info->ofs;
 	struct circ_buf *xmit = &port->state->xmit;
 	struct dma_async_tx_descriptor *desc = NULL;
-	dma_cookie_t cookie;
 	unsigned int count, i;
 
 	if (stm32port->tx_dma_busy)
@@ -384,17 +383,18 @@ static void stm32_transmit_chars_dma(struct uart_port *port)
 					  DMA_MEM_TO_DEV,
 					  DMA_PREP_INTERRUPT);
 
-	if (!desc) {
-		for (i = count; i > 0; i--)
-			stm32_transmit_chars_pio(port);
-		return;
-	}
+	if (!desc)
+		goto fallback_err;
 
 	desc->callback = stm32_tx_dma_complete;
 	desc->callback_param = port;
 
 	/* Push current DMA TX transaction in the pending queue */
-	cookie = dmaengine_submit(desc);
+	if (dma_submit_error(dmaengine_submit(desc))) {
+		/* dma no yet started, safe to free resources */
+		dmaengine_terminate_async(stm32port->tx_ch);
+		goto fallback_err;
+	}
 
 	/* Issue pending DMA TX requests */
 	dma_async_issue_pending(stm32port->tx_ch);
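The stm32 change adopts the dmaengine convention that `dmaengine_submit()` returns a cookie and `dma_submit_error()` flags a negative cookie as failure, at which point the channel is torn down and the driver falls back to PIO. A userspace model of that control flow (everything `fake_`-prefixed is an illustrative stand-in, not a real dmaengine call):

```c
typedef int dma_cookie_t;

static int terminated;	/* counts teardowns, cf. dmaengine_terminate_async() */

/* Stand-in for dmaengine_submit(): negative cookie signals failure. */
static dma_cookie_t fake_submit(int engine_ok)
{
	return engine_ok ? 1 : -5;
}

/* Stand-in for dma_submit_error(): nonzero iff the cookie is an error. */
static int fake_submit_error(dma_cookie_t cookie)
{
	return cookie < 0;
}

static const char *start_tx(int engine_ok)
{
	if (fake_submit_error(fake_submit(engine_ok))) {
		terminated++;		/* tear the channel down ... */
		return "pio-fallback";	/* ... and fall back to PIO */
	}
	return "dma";
}
```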
@@ -403,6 +403,11 @@ static void stm32_transmit_chars_dma(struct uart_port *port)
 
 	xmit->tail = (xmit->tail + count) & (UART_XMIT_SIZE - 1);
 	port->icount.tx += count;
+	return;
+
+fallback_err:
+	for (i = count; i > 0; i--)
+		stm32_transmit_chars_pio(port);
 }
 
 static void stm32_transmit_chars(struct uart_port *port)
@@ -1087,7 +1092,6 @@ static int stm32_of_dma_rx_probe(struct stm32_port *stm32port,
 	struct device *dev = &pdev->dev;
 	struct dma_slave_config config;
 	struct dma_async_tx_descriptor *desc = NULL;
-	dma_cookie_t cookie;
 	int ret;
 
 	/* Request DMA RX channel */
@@ -1132,7 +1136,11 @@ static int stm32_of_dma_rx_probe(struct stm32_port *stm32port,
 	desc->callback_param = NULL;
 
 	/* Push current DMA transaction in the pending queue */
-	cookie = dmaengine_submit(desc);
+	ret = dma_submit_error(dmaengine_submit(desc));
+	if (ret) {
+		dmaengine_terminate_sync(stm32port->rx_ch);
+		goto config_err;
+	}
 
 	/* Issue pending DMA requests */
 	dma_async_issue_pending(stm32port->rx_ch);
@@ -651,6 +651,13 @@ void usb_stor_invoke_transport(struct scsi_cmnd *srb, struct us_data *us)
 		need_auto_sense = 1;
 	}
 
+	/* Some devices (Kindle) require another command after SYNC CACHE */
+	if ((us->fflags & US_FL_SENSE_AFTER_SYNC) &&
+			srb->cmnd[0] == SYNCHRONIZE_CACHE) {
+		usb_stor_dbg(us, "-- sense after SYNC CACHE\n");
+		need_auto_sense = 1;
+	}
+
 	/*
 	 * If we have a failure, we're going to do a REQUEST_SENSE
 	 * automatically.  Note that we differentiate between a command
@@ -2211,6 +2211,18 @@ UNUSUAL_DEV( 0x1908, 0x3335, 0x0200, 0x0200,
 		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
 		US_FL_NO_READ_DISC_INFO ),
 
+/*
+ * Reported by Matthias Schwarzott <zzam@gentoo.org>
+ * The Amazon Kindle treats SYNCHRONIZE CACHE as an indication that
+ * the host may be finished with it, and automatically ejects its
+ * emulated media unless it receives another command within one second.
+ */
+UNUSUAL_DEV( 0x1949, 0x0004, 0x0000, 0x9999,
+		"Amazon",
+		"Kindle",
+		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+		US_FL_SENSE_AFTER_SYNC ),
+
 /*
  * Reported by Oliver Neukum <oneukum@suse.com>
  * This device morphes spontaneously into another device if the access
@@ -6061,7 +6061,6 @@ static int tcpm_psy_set_prop(struct power_supply *psy,
 		break;
 	}
-	power_supply_changed(port->psy);
 
 	return ret;
 }
 
@@ -174,7 +174,7 @@ static ssize_t usbip_sockfd_store(struct device *dev,
 
 		udc->ud.tcp_socket = socket;
 		udc->ud.tcp_rx = tcp_rx;
-		udc->ud.tcp_rx = tcp_tx;
+		udc->ud.tcp_tx = tcp_tx;
 		udc->ud.status = SDEV_ST_USED;
 
 		spin_unlock_irq(&udc->ud.lock);
@@ -21,7 +21,7 @@ config VFIO_VIRQFD
 
 menuconfig VFIO
 	tristate "VFIO Non-Privileged userspace driver framework"
-	depends on IOMMU_API
+	select IOMMU_API
 	select VFIO_IOMMU_TYPE1 if (X86 || S390 || ARM || ARM64)
 	help
 	  VFIO provides a framework for secure userspace device drivers.
@@ -312,8 +312,10 @@ static long vhost_vdpa_get_vring_num(struct vhost_vdpa *v, u16 __user *argp)
 
 static void vhost_vdpa_config_put(struct vhost_vdpa *v)
 {
-	if (v->config_ctx)
+	if (v->config_ctx) {
 		eventfd_ctx_put(v->config_ctx);
+		v->config_ctx = NULL;
+	}
 }
 
 static long vhost_vdpa_set_config_call(struct vhost_vdpa *v, u32 __user *argp)
@@ -333,8 +335,12 @@ static long vhost_vdpa_set_config_call(struct vhost_vdpa *v, u32 __user *argp)
 	if (!IS_ERR_OR_NULL(ctx))
 		eventfd_ctx_put(ctx);
 
-	if (IS_ERR(v->config_ctx))
-		return PTR_ERR(v->config_ctx);
+	if (IS_ERR(v->config_ctx)) {
+		long ret = PTR_ERR(v->config_ctx);
+
+		v->config_ctx = NULL;
+		return ret;
+	}
 
 	v->vdpa->config->set_config_cb(v->vdpa, &cb);
 
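The shape of the vhost-vdpa fix above: when `eventfd_ctx_fdget()` fails, the cached pointer holds an ERR_PTR-encoded errno, and it must be reset to NULL before returning so a later `vhost_vdpa_config_put()` never hands that sentinel to `eventfd_ctx_put()`. A minimal userspace model (everything `fake_`-prefixed, plus `struct holder`, is an illustrative stand-in; `is_err()` mimics the kernel's IS_ERR_VALUE encoding):

```c
#include <stddef.h>

#define FAKE_EBADF 9

/* Stand-in for eventfd_ctx_fdget(): ERR_PTR-style encoding on failure. */
static void *fake_fdget(int fd)
{
	return fd < 0 ? (void *)(long)-FAKE_EBADF : (void *)0x1;
}

/* Mimics IS_ERR(): errnos live in the top 4095 values of the address space. */
static int is_err(const void *p)
{
	return (unsigned long)p >= (unsigned long)-4095L;
}

struct holder { void *config_ctx; };

static int set_config_call(struct holder *h, int fd)
{
	h->config_ctx = fake_fdget(fd);
	if (is_err(h->config_ctx)) {
		long ret = (long)h->config_ctx;

		h->config_ctx = NULL;	/* the fix: never cache an error */
		return (int)ret;
	}
	return 0;
}
```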
@@ -904,14 +910,10 @@ static int vhost_vdpa_open(struct inode *inode, struct file *filep)
 
 static void vhost_vdpa_clean_irq(struct vhost_vdpa *v)
 {
-	struct vhost_virtqueue *vq;
 	int i;
 
-	for (i = 0; i < v->nvqs; i++) {
-		vq = &v->vqs[i];
-		if (vq->call_ctx.producer.irq)
-			irq_bypass_unregister_producer(&vq->call_ctx.producer);
-	}
+	for (i = 0; i < v->nvqs; i++)
+		vhost_vdpa_unsetup_vq_irq(v, i);
 }
 
 static int vhost_vdpa_release(struct inode *inode, struct file *filep)
@@ -69,7 +69,6 @@ const struct inode_operations afs_dir_inode_operations = {
 	.permission	= afs_permission,
 	.getattr	= afs_getattr,
 	.setattr	= afs_setattr,
-	.listxattr	= afs_listxattr,
 };
 
 const struct address_space_operations afs_dir_aops = {
@@ -43,7 +43,6 @@ const struct inode_operations afs_file_inode_operations = {
 	.getattr	= afs_getattr,
 	.setattr	= afs_setattr,
 	.permission	= afs_permission,
-	.listxattr	= afs_listxattr,
 };
 
 const struct address_space_operations afs_fs_aops = {
@@ -181,10 +181,13 @@ void afs_wait_for_operation(struct afs_operation *op)
 		if (test_bit(AFS_SERVER_FL_IS_YFS, &op->server->flags) &&
 		    op->ops->issue_yfs_rpc)
 			op->ops->issue_yfs_rpc(op);
-		else
+		else if (op->ops->issue_afs_rpc)
 			op->ops->issue_afs_rpc(op);
+		else
+			op->ac.error = -ENOTSUPP;
 
-		op->error = afs_wait_for_call_to_complete(op->call, &op->ac);
+		if (op->call)
+			op->error = afs_wait_for_call_to_complete(op->call, &op->ac);
 	}
 
 	switch (op->error) {
@@ -27,7 +27,6 @@
 
 static const struct inode_operations afs_symlink_inode_operations = {
 	.get_link	= page_get_link,
-	.listxattr	= afs_listxattr,
 };
 
 static noinline void dump_vnode(struct afs_vnode *vnode, struct afs_vnode *parent_vnode)
@@ -1508,7 +1508,6 @@ extern int afs_launder_page(struct page *);
  * xattr.c
  */
 extern const struct xattr_handler *afs_xattr_handlers[];
-extern ssize_t afs_listxattr(struct dentry *, char *, size_t);
 
 /*
  * yfsclient.c
@@ -32,7 +32,6 @@ const struct inode_operations afs_mntpt_inode_operations = {
 	.lookup		= afs_mntpt_lookup,
 	.readlink	= page_readlink,
 	.getattr	= afs_getattr,
-	.listxattr	= afs_listxattr,
 };
 
 const struct inode_operations afs_autocell_inode_operations = {
@@ -11,29 +11,6 @@
 #include <linux/xattr.h>
 #include "internal.h"
 
-static const char afs_xattr_list[] =
-	"afs.acl\0"
-	"afs.cell\0"
-	"afs.fid\0"
-	"afs.volume\0"
-	"afs.yfs.acl\0"
-	"afs.yfs.acl_inherited\0"
-	"afs.yfs.acl_num_cleaned\0"
-	"afs.yfs.vol_acl";
-
-/*
- * Retrieve a list of the supported xattrs.
- */
-ssize_t afs_listxattr(struct dentry *dentry, char *buffer, size_t size)
-{
-	if (size == 0)
-		return sizeof(afs_xattr_list);
-	if (size < sizeof(afs_xattr_list))
-		return -ERANGE;
-	memcpy(buffer, afs_xattr_list, sizeof(afs_xattr_list));
-	return sizeof(afs_xattr_list);
-}
-
 /*
  * Deal with the result of a successful fetch ACL operation.
  */
@@ -230,6 +207,8 @@ static int afs_xattr_get_yfs(const struct xattr_handler *handler,
 			else
 				ret = -ERANGE;
 		}
+	} else if (ret == -ENOTSUPP) {
+		ret = -ENODATA;
 	}
 
 error_yacl:
@@ -254,6 +233,7 @@ static int afs_xattr_set_yfs(const struct xattr_handler *handler,
 {
 	struct afs_operation *op;
 	struct afs_vnode *vnode = AFS_FS_I(inode);
+	int ret;
 
 	if (flags == XATTR_CREATE ||
 	    strcmp(name, "acl") != 0)
@@ -268,7 +248,10 @@ static int afs_xattr_set_yfs(const struct xattr_handler *handler,
 		return afs_put_operation(op);
 
 	op->ops = &yfs_store_opaque_acl2_operation;
-	return afs_do_sync_operation(op);
+	ret = afs_do_sync_operation(op);
+	if (ret == -ENOTSUPP)
+		ret = -ENODATA;
+	return ret;
 }
 
 static const struct xattr_handler afs_xattr_yfs_handler = {
@@ -1367,7 +1367,9 @@ get_old_root(struct btrfs_root *root, u64 time_seq)
 "failed to read tree block %llu from get_old_root",
 				   logical);
 		} else {
+			btrfs_tree_read_lock(old);
 			eb = btrfs_clone_extent_buffer(old);
+			btrfs_tree_read_unlock(old);
 			free_extent_buffer(old);
 		}
 	} else if (old_root) {
[Some files were not shown because too many files have changed in this diff]