Merge 6.1.55 into android14-6.1-lts
Changes in 6.1.55
	autofs: fix memory leak of waitqueues in autofs_catatonic_mode
	btrfs: output extra debug info if we failed to find an inline backref
	locks: fix KASAN: use-after-free in trace_event_raw_event_filelock_lock
	ACPICA: Add AML_NO_OPERAND_RESOLVE flag to Timer
	kernel/fork: beware of __put_task_struct() calling context
	rcuscale: Move rcu_scale_writer() schedule_timeout_uninterruptible() to _idle()
	scftorture: Forgive memory-allocation failure if KASAN
	ACPI: video: Add backlight=native DMI quirk for Lenovo Ideapad Z470
	perf/smmuv3: Enable HiSilicon Erratum 162001900 quirk for HIP08/09
	perf/imx_ddr: speed up overflow frequency of cycle
	hw_breakpoint: fix single-stepping when using bpf_overflow_handler
	ACPI: x86: s2idle: Catch multiple ACPI_TYPE_PACKAGE objects
	selftests/nolibc: fix up kernel parameters support
	devlink: remove reload failed checks in params get/set callbacks
	crypto: lrw,xts - Replace strlcpy with strscpy
	ice: Don't tx before switchdev is fully configured
	wifi: ath9k: fix fortify warnings
	wifi: ath9k: fix printk specifier
	wifi: mwifiex: fix fortify warning
	mt76: mt7921: don't assume adequate headroom for SDIO headers
	wifi: wil6210: fix fortify warnings
	can: sun4i_can: Add acceptance register quirk
	can: sun4i_can: Add support for the Allwinner D1
	net: Use sockaddr_storage for getsockopt(SO_PEERNAME).
	net/ipv4: return the real errno instead of -EINVAL
	crypto: lib/mpi - avoid null pointer deref in mpi_cmp_ui()
	Bluetooth: Fix hci_suspend_sync crash
	netlink: convert nlk->flags to atomic flags
	tpm_tis: Resend command to recover from data transfer errors
	mmc: sdhci-esdhc-imx: improve ESDHC_FLAG_ERR010450
	alx: fix OOB-read compiler warning
	wifi: mac80211: check S1G action frame size
	netfilter: ebtables: fix fortify warnings in size_entry_mwt()
	wifi: cfg80211: reject auth/assoc to AP with our address
	wifi: cfg80211: ocb: don't leave if not joined
	wifi: mac80211: check for station first in client probe
	wifi: mac80211_hwsim: drop short frames
	libbpf: Free btf_vmlinux when closing bpf_object
	drm/bridge: tc358762: Instruct DSI host to generate HSE packets
	drm/edid: Add quirk for OSVR HDK 2.0
	arm64: dts: qcom: sm6125-pdx201: correct ramoops pmsg-size
	arm64: dts: qcom: sm6350: correct ramoops pmsg-size
	arm64: dts: qcom: sm8150-kumano: correct ramoops pmsg-size
	arm64: dts: qcom: sm8250-edo: correct ramoops pmsg-size
	samples/hw_breakpoint: Fix kernel BUG 'invalid opcode: 0000'
	drm/amd/display: Fix underflow issue on 175hz timing
	ASoC: SOF: topology: simplify code to prevent static analysis warnings
	ASoC: Intel: sof_sdw: Update BT offload config for soundwire config
	ALSA: hda: intel-dsp-cfg: add LunarLake support
	drm/amd/display: Use DTBCLK as refclk instead of DPREFCLK
	drm/amd/display: Blocking invalid 420 modes on HDMI TMDS for DCN31
	drm/amd/display: Blocking invalid 420 modes on HDMI TMDS for DCN314
	drm/exynos: fix a possible null-pointer dereference due to data race in exynos_drm_crtc_atomic_disable()
	drm/mediatek: dp: Change logging to dev for mtk_dp_aux_transfer()
	bus: ti-sysc: Configure uart quirks for k3 SoC
	md: raid1: fix potential OOB in raid1_remove_disk()
	ext2: fix datatype of block number in ext2_xattr_set2()
	fs/jfs: prevent double-free in dbUnmount() after failed jfs_remount()
	jfs: fix invalid free of JFS_IP(ipimap)->i_imap in diUnmount
	PCI: dwc: Provide deinit callback for i.MX
	ARM: 9317/1: kexec: Make smp stop calls asynchronous
	powerpc/pseries: fix possible memory leak in ibmebus_bus_init()
	PCI: vmd: Disable bridge window for domain reset
	PCI: fu740: Set the number of MSI vectors
	media: mdp3: Fix resource leaks in of_find_device_by_node
	media: dvb-usb-v2: af9035: Fix null-ptr-deref in af9035_i2c_master_xfer
	media: dw2102: Fix null-ptr-deref in dw2102_i2c_transfer()
	media: af9005: Fix null-ptr-deref in af9005_i2c_xfer
	media: anysee: fix null-ptr-deref in anysee_master_xfer
	media: az6007: Fix null-ptr-deref in az6007_i2c_xfer()
	media: dvb-usb-v2: gl861: Fix null-ptr-deref in gl861_i2c_master_xfer
	scsi: lpfc: Abort outstanding ELS cmds when mailbox timeout error is detected
	media: tuners: qt1010: replace BUG_ON with a regular error
	media: pci: cx23885: replace BUG with error return
	usb: cdns3: Put the cdns set active part outside the spin lock
	usb: gadget: fsl_qe_udc: validate endpoint index for ch9 udc
	tools: iio: iio_generic_buffer: Fix some integer type and calculation
	scsi: target: iscsi: Fix buffer overflow in lio_target_nacl_info_show()
	serial: cpm_uart: Avoid suspicious locking
	misc: open-dice: make OPEN_DICE depend on HAS_IOMEM
	usb: ehci: add workaround for chipidea PORTSC.PEC bug
	usb: chipidea: add workaround for chipidea PEC bug
	media: pci: ipu3-cio2: Initialise timing struct to avoid a compiler warning
	kobject: Add sanity check for kset->kobj.ktype in kset_register()
	interconnect: Fix locking for runpm vs reclaim
	printk: Keep non-panic-CPUs out of console lock
	printk: Consolidate console deferred printing
	dma-buf: Add unlocked variant of attachment-mapping functions
	misc: fastrpc: Prepare to dynamic dma-buf locking specification
	misc: fastrpc: Fix incorrect DMA mapping unmap request
	MIPS: Use "grep -E" instead of "egrep"
	btrfs: add a helper to read the superblock metadata_uuid
	btrfs: compare the correct fsid/metadata_uuid in btrfs_validate_super
	block: factor out a bvec_set_page helper
	nvmet: use bvec_set_page to initialize bvecs
	nvmet-tcp: pass iov_len instead of sg->length to bvec_set_page()
	drm: gm12u320: Fix the timeout usage for usb_bulk_msg()
	scsi: qla2xxx: Fix NULL vs IS_ERR() bug for debugfs_create_dir()
	selftests: tracing: Fix to unmount tracefs for recovering environment
	x86/ibt: Suppress spurious ENDBR
	riscv: kexec: Align the kexeced kernel entry
	scsi: target: core: Fix target_cmd_counter leak
	scsi: lpfc: Fix the NULL vs IS_ERR() bug for debugfs_create_file()
	panic: Reenable preemption in WARN slowpath
	x86/boot/compressed: Reserve more memory for page tables
	x86/purgatory: Remove LTO flags
	samples/hw_breakpoint: fix building without module unloading
	md/raid1: fix error: ISO C90 forbids mixed declarations
	Revert "SUNRPC: Fail faster on bad verifier"
	attr: block mode changes of symlinks
	ovl: fix failed copyup of fileattr on a symlink
	ovl: fix incorrect fdput() on aio completion
	io_uring/net: fix iter retargeting for selected buf
	nvme: avoid bogus CRTO values
	md: Put the right device in md_seq_next
	Revert "drm/amd: Disable S/G for APUs when 64GB or more host memory"
	dm: don't attempt to queue IO under RCU protection
	btrfs: fix lockdep splat and potential deadlock after failure running delayed items
	btrfs: fix a compilation error if DEBUG is defined in btree_dirty_folio
	btrfs: release path before inode lookup during the ino lookup ioctl
	btrfs: check for BTRFS_FS_ERROR in pending ordered assert
	tracing: Have tracing_max_latency inc the trace array ref count
	tracing: Have event inject files inc the trace array ref count
	tracing: Increase trace array ref count on enable and filter files
	tracing: Have current_trace inc the trace array ref count
	tracing: Have option files inc the trace array ref count
	selinux: fix handling of empty opts in selinux_fs_context_submount()
	nfsd: fix change_info in NFSv4 RENAME replies
	tracefs: Add missing lockdown check to tracefs_create_dir()
	i2c: aspeed: Reset the i2c controller when timeout occurs
	ata: libata: disallow dev-initiated LPM transitions to unsupported states
	ata: libahci: clear pending interrupt status
	scsi: megaraid_sas: Fix deadlock on firmware crashdump
	scsi: pm8001: Setup IRQs on resume
	ext4: fix rec_len verify error
	drm/amd/display: fix the white screen issue when >= 64GB DRAM
	Revert "memcg: drop kmem.limit_in_bytes"
	drm/amdgpu: fix amdgpu_cs_p1_user_fence
	net/sched: Retire rsvp classifier
	interconnect: Teach lockdep about icc_bw_lock order
	Linux 6.1.55

Change-Id: I95193a57879a13b04b5ac8647a24e6d8304fcb0e
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
commit 1c5ec1e54d
@@ -91,6 +91,8 @@ Brief summary of control files.
  memory.oom_control		     set/show oom controls.
  memory.numa_stat		     show the number of memory usage per numa
				     node
+ memory.kmem.limit_in_bytes	     This knob is deprecated and writing to
+				     it will return -ENOTSUPP.
  memory.kmem.usage_in_bytes	     show current kernel memory allocation
  memory.kmem.failcnt		     show the number of kernel memory usage
				     hits limits
@@ -191,6 +191,9 @@ stable kernels.
 +----------------+-----------------+-----------------+-----------------------------+
 | Hisilicon      | Hip08 SMMU PMCG | #162001800      | N/A                         |
 +----------------+-----------------+-----------------+-----------------------------+
+| Hisilicon      | Hip08 SMMU PMCG | #162001900      | N/A                         |
+|                | Hip09 SMMU PMCG |                 |                             |
++----------------+-----------------+-----------------+-----------------------------+
 +----------------+-----------------+-----------------+-----------------------------+
 | Qualcomm Tech. | Kryo/Falkor v1  | E1003           | QCOM_FALKOR_ERRATUM_1003    |
 +----------------+-----------------+-----------------+-----------------------------+
Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 6
 PATCHLEVEL = 1
-SUBLEVEL = 54
+SUBLEVEL = 55
 EXTRAVERSION =
 NAME = Curry Ramen
@@ -626,7 +626,7 @@ int hw_breakpoint_arch_parse(struct perf_event *bp,
 	hw->address &= ~alignment_mask;
 	hw->ctrl.len <<= offset;
 
-	if (is_default_overflow_handler(bp)) {
+	if (uses_default_overflow_handler(bp)) {
 		/*
 		 * Mismatch breakpoints are required for single-stepping
 		 * breakpoints.
@@ -798,7 +798,7 @@ static void watchpoint_handler(unsigned long addr, unsigned int fsr,
 		 * Otherwise, insert a temporary mismatch breakpoint so that
 		 * we can single-step over the watchpoint trigger.
 		 */
-		if (!is_default_overflow_handler(wp))
+		if (!uses_default_overflow_handler(wp))
 			continue;
 step:
 		enable_single_step(wp, instruction_pointer(regs));
@@ -811,7 +811,7 @@ static void watchpoint_handler(unsigned long addr, unsigned int fsr,
 		info->trigger = addr;
 		pr_debug("watchpoint fired: address = 0x%x\n", info->trigger);
 		perf_bp_event(wp, regs);
-		if (is_default_overflow_handler(wp))
+		if (uses_default_overflow_handler(wp))
 			enable_single_step(wp, instruction_pointer(regs));
 	}
 
@@ -886,7 +886,7 @@ static void breakpoint_handler(unsigned long unknown, struct pt_regs *regs)
 			info->trigger = addr;
 			pr_debug("breakpoint fired: address = 0x%x\n", addr);
 			perf_bp_event(bp, regs);
-			if (is_default_overflow_handler(bp))
+			if (uses_default_overflow_handler(bp))
 				enable_single_step(bp, addr);
 			goto unlock;
 		}
@@ -92,16 +92,28 @@ void machine_crash_nonpanic_core(void *unused)
 	}
 }
 
+static DEFINE_PER_CPU(call_single_data_t, cpu_stop_csd) =
+	CSD_INIT(machine_crash_nonpanic_core, NULL);
+
 void crash_smp_send_stop(void)
 {
 	static int cpus_stopped;
 	unsigned long msecs;
+	call_single_data_t *csd;
+	int cpu, this_cpu = raw_smp_processor_id();
 
 	if (cpus_stopped)
 		return;
 
 	atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
-	smp_call_function(machine_crash_nonpanic_core, NULL, false);
+	for_each_online_cpu(cpu) {
+		if (cpu == this_cpu)
+			continue;
+
+		csd = &per_cpu(cpu_stop_csd, cpu);
+		smp_call_function_single_async(cpu, csd);
+	}
+
 	msecs = 1000; /* Wait at most a second for the other cpus to stop */
 	while ((atomic_read(&waiting_for_crash_ipi) > 0) && msecs) {
 		mdelay(1);
@@ -73,7 +73,7 @@ pstore_mem: ramoops@ffc00000 {
 			reg = <0x0 0xffc40000 0x0 0xc0000>;
 			record-size = <0x1000>;
 			console-size = <0x40000>;
-			msg-size = <0x20000 0x20000>;
+			pmsg-size = <0x20000>;
 		};
 
 		cmdline_mem: memory@ffd00000 {
@@ -346,7 +346,7 @@ ramoops: ramoops@ffc00000 {
 			reg = <0 0xffc00000 0 0x100000>;
 			record-size = <0x1000>;
 			console-size = <0x40000>;
-			msg-size = <0x20000 0x20000>;
+			pmsg-size = <0x20000>;
 			ecc-size = <16>;
 			no-map;
 		};
@@ -127,7 +127,7 @@ ramoops@ffc00000 {
 			reg = <0x0 0xffc00000 0x0 0x100000>;
 			record-size = <0x1000>;
 			console-size = <0x40000>;
-			msg-size = <0x20000 0x20000>;
+			pmsg-size = <0x20000>;
 			ecc-size = <16>;
 			no-map;
 		};
@@ -126,7 +126,7 @@ ramoops@ffc00000 {
 			reg = <0x0 0xffc00000 0x0 0x100000>;
 			record-size = <0x1000>;
 			console-size = <0x40000>;
-			msg-size = <0x20000 0x20000>;
+			pmsg-size = <0x20000>;
 			ecc-size = <16>;
 			no-map;
 		};
@@ -654,7 +654,7 @@ static int breakpoint_handler(unsigned long unused, unsigned long esr,
 		perf_bp_event(bp, regs);
 
 		/* Do we need to handle the stepping? */
-		if (is_default_overflow_handler(bp))
+		if (uses_default_overflow_handler(bp))
 			step = 1;
 unlock:
 		rcu_read_unlock();
@@ -733,7 +733,7 @@ static u64 get_distance_from_watchpoint(unsigned long addr, u64 val,
 static int watchpoint_report(struct perf_event *wp, unsigned long addr,
 			     struct pt_regs *regs)
 {
-	int step = is_default_overflow_handler(wp);
+	int step = uses_default_overflow_handler(wp);
 	struct arch_hw_breakpoint *info = counter_arch_bp(wp);
 
 	info->trigger = addr;
@@ -352,7 +352,7 @@ KBUILD_LDFLAGS += -m $(ld-emul)
 
 ifdef need-compiler
 CHECKFLAGS += $(shell $(CC) $(KBUILD_CFLAGS) -dM -E -x c /dev/null | \
-	egrep -vw '__GNUC_(MINOR_|PATCHLEVEL_)?_' | \
+	grep -E -vw '__GNUC_(MINOR_|PATCHLEVEL_)?_' | \
	sed -e "s/^\#define /-D'/" -e "s/ /'='/" -e "s/$$/'/" -e 's/\$$/&&/g')
endif
@@ -71,7 +71,7 @@ KCOV_INSTRUMENT := n
 
 # Check that we don't have PIC 'jalr t9' calls left
 quiet_cmd_vdso_mips_check = VDSOCHK $@
-      cmd_vdso_mips_check = if $(OBJDUMP) --disassemble $@ | egrep -h "jalr.*t9" > /dev/null; \
+      cmd_vdso_mips_check = if $(OBJDUMP) --disassemble $@ | grep -E -h "jalr.*t9" > /dev/null; \
		       then (echo >&2 "$@: PIC 'jalr t9' calls are not supported"; \
			     rm -f $@; /bin/false); fi
@@ -455,6 +455,7 @@ static int __init ibmebus_bus_init(void)
 	if (err) {
 		printk(KERN_WARNING "%s: device_register returned %i\n",
 		       __func__, err);
+		put_device(&ibmebus_bus_device);
 		bus_unregister(&ibmebus_bus_type);
 
 		return err;
@@ -98,7 +98,13 @@ static int elf_find_pbase(struct kimage *image, unsigned long kernel_len,
 	kbuf.image = image;
 	kbuf.buf_min = lowest_paddr;
 	kbuf.buf_max = ULONG_MAX;
-	kbuf.buf_align = PAGE_SIZE;
+
+	/*
+	 * Current riscv boot protocol requires 2MB alignment for
+	 * RV64 and 4MB alignment for RV32
+	 *
+	 */
+	kbuf.buf_align = PMD_SIZE;
 	kbuf.mem = KEXEC_BUF_MEM_UNKNOWN;
 	kbuf.memsz = ALIGN(kernel_len, PAGE_SIZE);
 	kbuf.top_down = false;
@@ -67,6 +67,14 @@ static void *alloc_pgt_page(void *context)
 		return NULL;
 	}
 
+	/* Consumed more tables than expected? */
+	if (pages->pgt_buf_offset == BOOT_PGT_SIZE_WARN) {
+		debug_putstr("pgt_buf running low in " __FILE__ "\n");
+		debug_putstr("Need to raise BOOT_PGT_SIZE?\n");
+		debug_putaddr(pages->pgt_buf_offset);
+		debug_putaddr(pages->pgt_buf_size);
+	}
+
 	entry = pages->pgt_buf + pages->pgt_buf_offset;
 	pages->pgt_buf_offset += PAGE_SIZE;
 
@@ -40,23 +40,40 @@
 #ifdef CONFIG_X86_64
 # define BOOT_STACK_SIZE	0x4000
 
-# define BOOT_INIT_PGT_SIZE	(6*4096)
-# ifdef CONFIG_RANDOMIZE_BASE
 /*
- * Assuming all cross the 512GB boundary:
- * 1 page for level4
- * (2+2)*4 pages for kernel, param, cmd_line, and randomized kernel
- * 2 pages for first 2M (video RAM: CONFIG_X86_VERBOSE_BOOTUP).
- * Total is 19 pages.
+ * Used by decompressor's startup_32() to allocate page tables for identity
+ * mapping of the 4G of RAM in 4-level paging mode:
+ * - 1 level4 table;
+ * - 1 level3 table;
+ * - 4 level2 table that maps everything with 2M pages;
+ *
+ * The additional level5 table needed for 5-level paging is allocated from
+ * trampoline_32bit memory.
  */
-#  ifdef CONFIG_X86_VERBOSE_BOOTUP
-#   define BOOT_PGT_SIZE	(19*4096)
-#  else /* !CONFIG_X86_VERBOSE_BOOTUP */
-#   define BOOT_PGT_SIZE	(17*4096)
-#  endif
-# else /* !CONFIG_RANDOMIZE_BASE */
-#  define BOOT_PGT_SIZE	BOOT_INIT_PGT_SIZE
-# endif
+# define BOOT_INIT_PGT_SIZE	(6*4096)
+
+/*
+ * Total number of page tables kernel_add_identity_map() can allocate,
+ * including page tables consumed by startup_32().
+ *
+ * Worst-case scenario:
+ *  - 5-level paging needs 1 level5 table;
+ *  - KASLR needs to map kernel, boot_params, cmdline and randomized kernel,
+ *    assuming all of them cross 256T boundary:
+ *    + 4*2 level4 table;
+ *    + 4*2 level3 table;
+ *    + 4*2 level2 table;
+ *  - X86_VERBOSE_BOOTUP needs to map the first 2M (video RAM):
+ *    + 1 level4 table;
+ *    + 1 level3 table;
+ *    + 1 level2 table;
+ * Total: 28 tables
+ *
+ * Add 4 spare table in case decompressor touches anything beyond what is
+ * accounted above. Warn if it happens.
+ */
+# define BOOT_PGT_SIZE_WARN	(28*4096)
+# define BOOT_PGT_SIZE		(32*4096)
 
 #else /* !CONFIG_X86_64 */
 # define BOOT_STACK_SIZE	0x1000
@@ -8,6 +8,14 @@
 #undef notrace
 #define notrace __attribute__((no_instrument_function))
 
+#ifdef CONFIG_64BIT
+/*
+ * The generic version tends to create spurious ENDBR instructions under
+ * certain conditions.
+ */
+#define _THIS_IP_ ({ unsigned long __here; asm ("lea 0(%%rip), %0" : "=r" (__here)); __here; })
+#endif
+
 #ifdef CONFIG_X86_32
 #define asmlinkage CPP_ASMLINKAGE __attribute__((regparm(0)))
 #endif /* CONFIG_X86_32 */
@@ -19,6 +19,10 @@ CFLAGS_sha256.o := -D__DISABLE_EXPORTS
 # optimization flags.
 KBUILD_CFLAGS := $(filter-out -fprofile-sample-use=% -fprofile-use=%,$(KBUILD_CFLAGS))
 
+# When LTO is enabled, llvm emits many text sections, which is not supported
+# by kexec. Remove -flto=* flags.
+KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_LTO),$(KBUILD_CFLAGS))
+
 # When linking purgatory.ro with -r unresolved symbols are not checked,
 # also link a purgatory.chk binary without -r to check for unresolved symbols.
 PURGATORY_LDFLAGS := -e purgatory_start -z nodefaultlib
@@ -124,23 +124,18 @@ int bio_integrity_add_page(struct bio *bio, struct page *page,
 			   unsigned int len, unsigned int offset)
 {
 	struct bio_integrity_payload *bip = bio_integrity(bio);
-	struct bio_vec *iv;
 
 	if (bip->bip_vcnt >= bip->bip_max_vcnt) {
 		printk(KERN_ERR "%s: bip_vec full\n", __func__);
 		return 0;
 	}
 
-	iv = bip->bip_vec + bip->bip_vcnt;
-
 	if (bip->bip_vcnt &&
 	    bvec_gap_to_prev(&bdev_get_queue(bio->bi_bdev)->limits,
 			     &bip->bip_vec[bip->bip_vcnt - 1], offset))
 		return 0;
 
-	iv->bv_page = page;
-	iv->bv_len = len;
-	iv->bv_offset = offset;
+	bvec_set_page(&bip->bip_vec[bip->bip_vcnt], page, len, offset);
 	bip->bip_vcnt++;
 
 	return len;
block/bio.c
@@ -979,10 +979,7 @@ int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 		if (bio->bi_vcnt >= queue_max_segments(q))
 			return 0;
 
-		bvec = &bio->bi_io_vec[bio->bi_vcnt];
-		bvec->bv_page = page;
-		bvec->bv_len = len;
-		bvec->bv_offset = offset;
+		bvec_set_page(&bio->bi_io_vec[bio->bi_vcnt], page, len, offset);
 		bio->bi_vcnt++;
 		bio->bi_iter.bi_size += len;
 		return len;
@@ -1058,15 +1055,10 @@ EXPORT_SYMBOL_GPL(bio_add_zone_append_page);
 void __bio_add_page(struct bio *bio, struct page *page,
 		unsigned int len, unsigned int off)
 {
-	struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt];
-
 	WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED));
 	WARN_ON_ONCE(bio_full(bio, len));
 
-	bv->bv_page = page;
-	bv->bv_offset = off;
-	bv->bv_len = len;
-
+	bvec_set_page(&bio->bi_io_vec[bio->bi_vcnt], page, len, off);
 	bio->bi_iter.bi_size += len;
 	bio->bi_vcnt++;
 }
@@ -357,10 +357,10 @@ static int lrw_create(struct crypto_template *tmpl, struct rtattr **tb)
 	 * cipher name.
 	 */
 	if (!strncmp(cipher_name, "ecb(", 4)) {
-		unsigned len;
+		int len;
 
-		len = strlcpy(ecb_name, cipher_name + 4, sizeof(ecb_name));
-		if (len < 2 || len >= sizeof(ecb_name))
+		len = strscpy(ecb_name, cipher_name + 4, sizeof(ecb_name));
+		if (len < 2)
 			goto err_free_inst;
 
 		if (ecb_name[len - 1] != ')')
@@ -396,10 +396,10 @@ static int xts_create(struct crypto_template *tmpl, struct rtattr **tb)
 	 * cipher name.
 	 */
 	if (!strncmp(cipher_name, "ecb(", 4)) {
-		unsigned len;
+		int len;
 
-		len = strlcpy(ctx->name, cipher_name + 4, sizeof(ctx->name));
-		if (len < 2 || len >= sizeof(ctx->name))
+		len = strscpy(ctx->name, cipher_name + 4, sizeof(ctx->name));
+		if (len < 2)
 			goto err_free_inst;
 
 		if (ctx->name[len - 1] != ')')
@@ -603,7 +603,7 @@ const struct acpi_opcode_info acpi_gbl_aml_op_info[AML_NUM_OPCODES] = {
 
 /* 7E */ ACPI_OP("Timer", ARGP_TIMER_OP, ARGI_TIMER_OP, ACPI_TYPE_ANY,
 		 AML_CLASS_EXECUTE, AML_TYPE_EXEC_0A_0T_1R,
-		 AML_FLAGS_EXEC_0A_0T_1R),
+		 AML_FLAGS_EXEC_0A_0T_1R | AML_NO_OPERAND_RESOLVE),
 
 /* ACPI 5.0 opcodes */
 
@@ -1699,7 +1699,10 @@ static void __init arm_smmu_v3_pmcg_init_resources(struct resource *res,
 static struct acpi_platform_list pmcg_plat_info[] __initdata = {
 	/* HiSilicon Hip08 Platform */
 	{"HISI  ", "HIP08   ", 0, ACPI_SIG_IORT, greater_than_or_equal,
-	 "Erratum #162001800", IORT_SMMU_V3_PMCG_HISI_HIP08},
+	 "Erratum #162001800, Erratum #162001900", IORT_SMMU_V3_PMCG_HISI_HIP08},
+	/* HiSilicon Hip09 Platform */
+	{"HISI  ", "HIP09   ", 0, ACPI_SIG_IORT, greater_than_or_equal,
+	 "Erratum #162001900", IORT_SMMU_V3_PMCG_HISI_HIP09},
 	{ }
 };
 
@@ -443,6 +443,15 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
 		DMI_MATCH(DMI_BOARD_NAME, "Lenovo IdeaPad S405"),
 		},
 	},
+	{
+	 /* https://bugzilla.suse.com/show_bug.cgi?id=1208724 */
+	 .callback = video_detect_force_native,
+	 /* Lenovo Ideapad Z470 */
+	 .matches = {
+		DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+		DMI_MATCH(DMI_PRODUCT_VERSION, "IdeaPad Z470"),
+		},
+	},
 	{
 	 /* https://bugzilla.redhat.com/show_bug.cgi?id=1187004 */
 	 .callback = video_detect_force_native,
@@ -112,6 +112,12 @@ static void lpi_device_get_constraints_amd(void)
 		union acpi_object *package = &out_obj->package.elements[i];
 
 		if (package->type == ACPI_TYPE_PACKAGE) {
+			if (lpi_constraints_table) {
+				acpi_handle_err(lps0_device_handle,
+						"Duplicate constraints list\n");
+				goto free_acpi_buffer;
+			}
+
 			lpi_constraints_table = kcalloc(package->package.count,
 							sizeof(*lpi_constraints_table),
 							GFP_KERNEL);
@@ -1884,6 +1884,15 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	else
 		dev_info(&pdev->dev, "SSS flag set, parallel bus scan disabled\n");
 
+	if (!(hpriv->cap & HOST_CAP_PART))
+		host->flags |= ATA_HOST_NO_PART;
+
+	if (!(hpriv->cap & HOST_CAP_SSC))
+		host->flags |= ATA_HOST_NO_SSC;
+
+	if (!(hpriv->cap2 & HOST_CAP2_SDS))
+		host->flags |= ATA_HOST_NO_DEVSLP;
+
 	if (pi.flags & ATA_FLAG_EM)
 		ahci_reset_em(host);
 
@@ -1255,6 +1255,26 @@ static ssize_t ahci_activity_show(struct ata_device *dev, char *buf)
 	return sprintf(buf, "%d\n", emp->blink_policy);
 }
 
+static void ahci_port_clear_pending_irq(struct ata_port *ap)
+{
+	struct ahci_host_priv *hpriv = ap->host->private_data;
+	void __iomem *port_mmio = ahci_port_base(ap);
+	u32 tmp;
+
+	/* clear SError */
+	tmp = readl(port_mmio + PORT_SCR_ERR);
+	dev_dbg(ap->host->dev, "PORT_SCR_ERR 0x%x\n", tmp);
+	writel(tmp, port_mmio + PORT_SCR_ERR);
+
+	/* clear port IRQ */
+	tmp = readl(port_mmio + PORT_IRQ_STAT);
+	dev_dbg(ap->host->dev, "PORT_IRQ_STAT 0x%x\n", tmp);
+	if (tmp)
+		writel(tmp, port_mmio + PORT_IRQ_STAT);
+
+	writel(1 << ap->port_no, hpriv->mmio + HOST_IRQ_STAT);
+}
+
 static void ahci_port_init(struct device *dev, struct ata_port *ap,
 			   int port_no, void __iomem *mmio,
 			   void __iomem *port_mmio)
@@ -1269,18 +1289,7 @@ static void ahci_port_init(struct device *dev, struct ata_port *ap,
 	if (rc)
 		dev_warn(dev, "%s (%d)\n", emsg, rc);
 
-	/* clear SError */
-	tmp = readl(port_mmio + PORT_SCR_ERR);
-	dev_dbg(dev, "PORT_SCR_ERR 0x%x\n", tmp);
-	writel(tmp, port_mmio + PORT_SCR_ERR);
-
-	/* clear port IRQ */
-	tmp = readl(port_mmio + PORT_IRQ_STAT);
-	dev_dbg(dev, "PORT_IRQ_STAT 0x%x\n", tmp);
-	if (tmp)
-		writel(tmp, port_mmio + PORT_IRQ_STAT);
-
-	writel(1 << port_no, mmio + HOST_IRQ_STAT);
+	ahci_port_clear_pending_irq(ap);
 
 	/* mark esata ports */
 	tmp = readl(port_mmio + PORT_CMD);
@@ -1601,6 +1610,8 @@ int ahci_do_hardreset(struct ata_link *link, unsigned int *class,
 	tf.status = ATA_BUSY;
 	ata_tf_to_fis(&tf, 0, 0, d2h_fis);
 
+	ahci_port_clear_pending_irq(ap);
+
 	rc = sata_link_hardreset(link, timing, deadline, online,
 				 ahci_check_ready);
 
@@ -394,10 +394,23 @@ int sata_link_scr_lpm(struct ata_link *link, enum ata_lpm_policy policy,
 	case ATA_LPM_MED_POWER_WITH_DIPM:
 	case ATA_LPM_MIN_POWER_WITH_PARTIAL:
 	case ATA_LPM_MIN_POWER:
-		if (ata_link_nr_enabled(link) > 0)
-			/* no restrictions on LPM transitions */
+		if (ata_link_nr_enabled(link) > 0) {
+			/* assume no restrictions on LPM transitions */
 			scontrol &= ~(0x7 << 8);
-		else {
+
+			/*
+			 * If the controller does not support partial, slumber,
+			 * or devsleep, then disallow these transitions.
+			 */
+			if (link->ap->host->flags & ATA_HOST_NO_PART)
+				scontrol |= (0x1 << 8);
+
+			if (link->ap->host->flags & ATA_HOST_NO_SSC)
+				scontrol |= (0x2 << 8);
+
+			if (link->ap->host->flags & ATA_HOST_NO_DEVSLP)
+				scontrol |= (0x4 << 8);
+		} else {
 			/* empty port, power off */
 			scontrol &= ~0xf;
 			scontrol |= (0x1 << 2);
@@ -1548,6 +1548,8 @@ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
 		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
 	SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x47422e03, 0xffffffff,
 		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
+	SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x47424e03, 0xffffffff,
+		   SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
 
 	/* Quirks that need to be set based on the module address */
 	SYSC_QUIRK("mcpdm", 0x40132000, 0, 0x10, -ENODEV, 0x50000800, 0xffffffff,
@@ -498,10 +498,17 @@ static int tpm_tis_send_main(struct tpm_chip *chip, const u8 *buf, size_t len)
 	int rc;
 	u32 ordinal;
 	unsigned long dur;
+	unsigned int try;
 
-	rc = tpm_tis_send_data(chip, buf, len);
-	if (rc < 0)
-		return rc;
+	for (try = 0; try < TPM_RETRY; try++) {
+		rc = tpm_tis_send_data(chip, buf, len);
+		if (rc >= 0)
+			/* Data transfer done successfully */
+			break;
+		else if (rc != -EIO)
+			/* Data transfer failed, not recoverable */
+			return rc;
+	}
 
 	rc = tpm_tis_verify_crc(priv, len, buf);
 	if (rc < 0) {
@@ -1167,6 +1167,34 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
 }
 EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, DMA_BUF);
 
+/**
+ * dma_buf_map_attachment_unlocked - Returns the scatterlist table of the attachment;
+ * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the
+ * dma_buf_ops.
+ * @attach:	[in]	attachment whose scatterlist is to be returned
+ * @direction:	[in]	direction of DMA transfer
+ *
+ * Unlocked variant of dma_buf_map_attachment().
+ */
+struct sg_table *
+dma_buf_map_attachment_unlocked(struct dma_buf_attachment *attach,
+				enum dma_data_direction direction)
+{
+	struct sg_table *sg_table;
+
+	might_sleep();
+
+	if (WARN_ON(!attach || !attach->dmabuf))
+		return ERR_PTR(-EINVAL);
+
+	dma_resv_lock(attach->dmabuf->resv, NULL);
+	sg_table = dma_buf_map_attachment(attach, direction);
+	dma_resv_unlock(attach->dmabuf->resv);
+
+	return sg_table;
+}
+EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment_unlocked, DMA_BUF);
+
 /**
  * dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer;might
  * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of
@@ -1203,6 +1231,31 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
 }
 EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment, DMA_BUF);
 
+/**
+ * dma_buf_unmap_attachment_unlocked - unmaps and decreases usecount of the buffer;might
+ * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of
+ * dma_buf_ops.
+ * @attach:	[in]	attachment to unmap buffer from
+ * @sg_table:	[in]	scatterlist info of the buffer to unmap
+ * @direction:	[in]	direction of DMA transfer
+ *
+ * Unlocked variant of dma_buf_unmap_attachment().
+ */
+void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *attach,
+				       struct sg_table *sg_table,
+				       enum dma_data_direction direction)
+{
+	might_sleep();
+
+	if (WARN_ON(!attach || !attach->dmabuf || !sg_table))
+		return;
+
+	dma_resv_lock(attach->dmabuf->resv, NULL);
+	dma_buf_unmap_attachment(attach, sg_table, direction);
+	dma_resv_unlock(attach->dmabuf->resv);
+}
+EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment_unlocked, DMA_BUF);
+
 /**
  * dma_buf_move_notify - notify attachments that DMA-buf is moving
  *
@@ -1266,7 +1266,6 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
 void amdgpu_device_pci_config_reset(struct amdgpu_device *adev);
 int amdgpu_device_pci_reset(struct amdgpu_device *adev);
 bool amdgpu_device_need_post(struct amdgpu_device *adev);
-bool amdgpu_sg_display_supported(struct amdgpu_device *adev);
 bool amdgpu_device_pcie_dynamic_switching_supported(void);
 bool amdgpu_device_should_use_aspm(struct amdgpu_device *adev);
 bool amdgpu_device_aspm_support_quirk(void);
@@ -120,7 +120,6 @@ static int amdgpu_cs_p1_user_fence(struct amdgpu_cs_parser *p,
 	struct drm_gem_object *gobj;
 	struct amdgpu_bo *bo;
 	unsigned long size;
-	int r;
 
 	gobj = drm_gem_object_lookup(p->filp, data->handle);
 	if (gobj == NULL)
@@ -132,23 +131,14 @@ static int amdgpu_cs_p1_user_fence(struct amdgpu_cs_parser *p,
 	drm_gem_object_put(gobj);
 
 	size = amdgpu_bo_size(bo);
-	if (size != PAGE_SIZE || (data->offset + 8) > size) {
-		r = -EINVAL;
-		goto error_unref;
-	}
+	if (size != PAGE_SIZE || data->offset > (size - 8))
+		return -EINVAL;
 
-	if (amdgpu_ttm_tt_get_usermm(bo->tbo.ttm)) {
-		r = -EINVAL;
-		goto error_unref;
-	}
+	if (amdgpu_ttm_tt_get_usermm(bo->tbo.ttm))
+		return -EINVAL;
 
 	*offset = data->offset;
 
 	return 0;
-
-error_unref:
-	amdgpu_bo_unref(&bo);
-	return r;
 }
 
 static int amdgpu_cs_p1_bo_handles(struct amdgpu_cs_parser *p,
@@ -1336,32 +1336,6 @@ bool amdgpu_device_need_post(struct amdgpu_device *adev)
	return true;
}

/*
 * On APUs with >= 64GB white flickering has been observed w/ SG enabled.
 * Disable S/G on such systems until we have a proper fix.
 * https://gitlab.freedesktop.org/drm/amd/-/issues/2354
 * https://gitlab.freedesktop.org/drm/amd/-/issues/2735
 */
bool amdgpu_sg_display_supported(struct amdgpu_device *adev)
{
	switch (amdgpu_sg_display) {
	case -1:
		break;
	case 0:
		return false;
	case 1:
		return true;
	default:
		return false;
	}
	if ((totalram_pages() << (PAGE_SHIFT - 10)) +
	    (adev->gmc.real_vram_size / 1024) >= 64000000) {
		DRM_WARN("Disabling S/G due to >=64GB RAM\n");
		return false;
	}
	return true;
}

/*
 * Intel hosts such as Raptor Lake and Sapphire Rapids don't support dynamic
 * speed switching. Until we have confirmation from Intel that a specific host
@@ -1265,11 +1265,15 @@ static void mmhub_read_system_context(struct amdgpu_device *adev, struct dc_phy_

	pt_base = amdgpu_gmc_pd_addr(adev->gart.bo);

	page_table_start.high_part = (u32)(adev->gmc.gart_start >> 44) & 0xF;
	page_table_start.low_part = (u32)(adev->gmc.gart_start >> 12);
	page_table_end.high_part = (u32)(adev->gmc.gart_end >> 44) & 0xF;
	page_table_end.low_part = (u32)(adev->gmc.gart_end >> 12);
	page_table_base.high_part = upper_32_bits(pt_base) & 0xF;
	page_table_start.high_part = upper_32_bits(adev->gmc.gart_start >>
						   AMDGPU_GPU_PAGE_SHIFT);
	page_table_start.low_part = lower_32_bits(adev->gmc.gart_start >>
						  AMDGPU_GPU_PAGE_SHIFT);
	page_table_end.high_part = upper_32_bits(adev->gmc.gart_end >>
						 AMDGPU_GPU_PAGE_SHIFT);
	page_table_end.low_part = lower_32_bits(adev->gmc.gart_end >>
						AMDGPU_GPU_PAGE_SHIFT);
	page_table_base.high_part = upper_32_bits(pt_base);
	page_table_base.low_part = lower_32_bits(pt_base);

	pa_config->system_aperture.start_addr = (uint64_t)logical_addr_low << 18;
@@ -1634,8 +1638,9 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
		}
		break;
	}
	if (init_data.flags.gpu_vm_support)
		init_data.flags.gpu_vm_support = amdgpu_sg_display_supported(adev);
	if (init_data.flags.gpu_vm_support &&
	    (amdgpu_sg_display == 0))
		init_data.flags.gpu_vm_support = false;

	if (init_data.flags.gpu_vm_support)
		adev->mode_info.gpu_vm_support = true;
@@ -290,7 +290,8 @@ static void dccg32_set_dpstreamclk(
	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);

	/* set the dtbclk_p source */
	dccg32_set_dtbclk_p_src(dccg, src, otg_inst);
	/* always program refclk as DTBCLK. No use-case expected to require DPREFCLK as refclk */
	dccg32_set_dtbclk_p_src(dccg, DTBCLK0, otg_inst);

	/* enabled to select one of the DTBCLKs for pipe */
	switch (dp_hpo_inst) {
@@ -4133,7 +4133,9 @@ void dml31_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
			}
			if (v->OutputFormat[k] == dm_420 && v->HActive[k] > DCN31_MAX_FMT_420_BUFFER_WIDTH
					&& v->ODMCombineEnablePerState[i][k] != dm_odm_combine_mode_4to1) {
				if (v->HActive[k] / 2 > DCN31_MAX_FMT_420_BUFFER_WIDTH) {
				if (v->Output[k] == dm_hdmi) {
					FMTBufferExceeded = true;
				} else if (v->HActive[k] / 2 > DCN31_MAX_FMT_420_BUFFER_WIDTH) {
					v->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_4to1;
					v->PlaneRequiredDISPCLK = v->PlaneRequiredDISPCLKWithODMCombine4To1;

@@ -4225,7 +4225,9 @@ void dml314_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_
			}
			if (v->OutputFormat[k] == dm_420 && v->HActive[k] > DCN314_MAX_FMT_420_BUFFER_WIDTH
					&& v->ODMCombineEnablePerState[i][k] != dm_odm_combine_mode_4to1) {
				if (v->HActive[k] / 2 > DCN314_MAX_FMT_420_BUFFER_WIDTH) {
				if (v->Output[k] == dm_hdmi) {
					FMTBufferExceeded = true;
				} else if (v->HActive[k] / 2 > DCN314_MAX_FMT_420_BUFFER_WIDTH) {
					v->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_4to1;
					v->PlaneRequiredDISPCLK = v->PlaneRequiredDISPCLKWithODMCombine4To1;

@@ -3454,6 +3454,7 @@ bool dml32_CalculatePrefetchSchedule(
	double TimeForFetchingMetaPTE = 0;
	double TimeForFetchingRowInVBlank = 0;
	double LinesToRequestPrefetchPixelData = 0;
	double LinesForPrefetchBandwidth = 0;
	unsigned int HostVMDynamicLevelsTrips;
	double trip_to_mem;
	double Tvm_trips;
@@ -3883,11 +3884,15 @@ bool dml32_CalculatePrefetchSchedule(
		TimeForFetchingMetaPTE = Tvm_oto;
		TimeForFetchingRowInVBlank = Tr0_oto;
		*PrefetchBandwidth = prefetch_bw_oto;
		/* Clamp to oto for bandwidth calculation */
		LinesForPrefetchBandwidth = dst_y_prefetch_oto;
	} else {
		*DestinationLinesForPrefetch = dst_y_prefetch_equ;
		TimeForFetchingMetaPTE = Tvm_equ;
		TimeForFetchingRowInVBlank = Tr0_equ;
		*PrefetchBandwidth = prefetch_bw_equ;
		/* Clamp to equ for bandwidth calculation */
		LinesForPrefetchBandwidth = dst_y_prefetch_equ;
	}

	*DestinationLinesToRequestVMInVBlank = dml_ceil(4.0 * TimeForFetchingMetaPTE / LineTime, 1.0) / 4.0;
@@ -3895,7 +3900,7 @@ bool dml32_CalculatePrefetchSchedule(
	*DestinationLinesToRequestRowInVBlank =
			dml_ceil(4.0 * TimeForFetchingRowInVBlank / LineTime, 1.0) / 4.0;

	LinesToRequestPrefetchPixelData = *DestinationLinesForPrefetch -
	LinesToRequestPrefetchPixelData = LinesForPrefetchBandwidth -
			*DestinationLinesToRequestVMInVBlank - 2 * *DestinationLinesToRequestRowInVBlank;

#ifdef __DML_VBA_DEBUG__

@@ -216,7 +216,7 @@ static int tc358762_probe(struct mipi_dsi_device *dsi)
	dsi->lanes = 1;
	dsi->format = MIPI_DSI_FMT_RGB888;
	dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_SYNC_PULSE |
			  MIPI_DSI_MODE_LPM;
			  MIPI_DSI_MODE_LPM | MIPI_DSI_MODE_VIDEO_HSE;

	ret = tc358762_parse_dt(ctx);
	if (ret < 0)

@@ -231,6 +231,7 @@ static const struct edid_quirk {

	/* OSVR HDK and HDK2 VR Headsets */
	EDID_QUIRK('S', 'V', 'R', 0x1019, EDID_QUIRK_NON_DESKTOP),
	EDID_QUIRK('A', 'U', 'O', 0x1111, EDID_QUIRK_NON_DESKTOP),
};

/*
@@ -39,13 +39,12 @@ static void exynos_drm_crtc_atomic_disable(struct drm_crtc *crtc,
	if (exynos_crtc->ops->atomic_disable)
		exynos_crtc->ops->atomic_disable(exynos_crtc);

	spin_lock_irq(&crtc->dev->event_lock);
	if (crtc->state->event && !crtc->state->active) {
		spin_lock_irq(&crtc->dev->event_lock);
		drm_crtc_send_vblank_event(crtc, crtc->state->event);
		spin_unlock_irq(&crtc->dev->event_lock);

		crtc->state->event = NULL;
	}
	spin_unlock_irq(&crtc->dev->event_lock);
}

static int exynos_crtc_atomic_check(struct drm_crtc *crtc,
@@ -847,7 +847,7 @@ static int mtk_dp_aux_do_transfer(struct mtk_dp *mtk_dp, bool is_read, u8 cmd,
		u32 phy_status = mtk_dp_read(mtk_dp, MTK_DP_AUX_P0_3628) &
				 AUX_RX_PHY_STATE_AUX_TX_P0_MASK;
		if (phy_status != AUX_RX_PHY_STATE_AUX_TX_P0_RX_IDLE) {
			drm_err(mtk_dp->drm_dev,
			dev_err(mtk_dp->dev,
				"AUX Rx Aux hang, need SW reset\n");
			return -EIO;
		}
@@ -2062,7 +2062,7 @@ static ssize_t mtk_dp_aux_transfer(struct drm_dp_aux *mtk_aux,
		is_read = true;
		break;
	default:
		drm_err(mtk_aux->drm_dev, "invalid aux cmd = %d\n",
		dev_err(mtk_dp->dev, "invalid aux cmd = %d\n",
			msg->request);
		ret = -EINVAL;
		goto err;
@@ -2078,7 +2078,7 @@ static ssize_t mtk_dp_aux_transfer(struct drm_dp_aux *mtk_aux,
					       to_access, &msg->reply);

		if (ret) {
			drm_info(mtk_dp->drm_dev,
			dev_info(mtk_dp->dev,
				 "Failed to do AUX transfer: %d\n", ret);
			goto err;
		}

@@ -69,10 +69,10 @@ MODULE_PARM_DESC(eco_mode, "Turn on Eco mode (less bright, more silent)");
#define READ_STATUS_SIZE		13
#define MISC_VALUE_SIZE			4

#define CMD_TIMEOUT			msecs_to_jiffies(200)
#define DATA_TIMEOUT			msecs_to_jiffies(1000)
#define IDLE_TIMEOUT			msecs_to_jiffies(2000)
#define FIRST_FRAME_TIMEOUT		msecs_to_jiffies(2000)
#define CMD_TIMEOUT			200
#define DATA_TIMEOUT			1000
#define IDLE_TIMEOUT			2000
#define FIRST_FRAME_TIMEOUT		2000

#define MISC_REQ_GET_SET_ECO_A		0xff
#define MISC_REQ_GET_SET_ECO_B		0x35
@@ -388,7 +388,7 @@ static void gm12u320_fb_update_work(struct work_struct *work)
	 * switches back to showing its logo.
	 */
	queue_delayed_work(system_long_wq, &gm12u320->fb_update.work,
			   IDLE_TIMEOUT);
			   msecs_to_jiffies(IDLE_TIMEOUT));

	return;
err:
@@ -698,13 +698,16 @@ static int aspeed_i2c_master_xfer(struct i2c_adapter *adap,

	if (time_left == 0) {
		/*
		 * If timed out and bus is still busy in a multi master
		 * environment, attempt recovery at here.
		 * In a multi-master setup, if a timeout occurs, attempt
		 * recovery. But if the bus is idle, we still need to reset the
		 * i2c controller to clear the remaining interrupts.
		 */
		if (bus->multi_master &&
		    (readl(bus->base + ASPEED_I2C_CMD_REG) &
		     ASPEED_I2CD_BUS_BUSY_STS))
			aspeed_i2c_recover_bus(bus);
		else
			aspeed_i2c_reset(bus);

		/*
		 * If timed out and the state is still pending, drop the pending
@@ -29,6 +29,7 @@ static LIST_HEAD(icc_providers);
static int providers_count;
static bool synced_state;
static DEFINE_MUTEX(icc_lock);
static DEFINE_MUTEX(icc_bw_lock);
static struct dentry *icc_debugfs_dir;

static void icc_summary_show_one(struct seq_file *s, struct icc_node *n)
@@ -632,7 +633,7 @@ int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw)
	if (WARN_ON(IS_ERR(path) || !path->num_nodes))
		return -EINVAL;

	mutex_lock(&icc_lock);
	mutex_lock(&icc_bw_lock);

	old_avg = path->reqs[0].avg_bw;
	old_peak = path->reqs[0].peak_bw;
@@ -664,7 +665,7 @@ int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw)
		apply_constraints(path);
	}

	mutex_unlock(&icc_lock);
	mutex_unlock(&icc_bw_lock);

	trace_icc_set_bw_end(path, ret);

@@ -967,6 +968,7 @@ void icc_node_add(struct icc_node *node, struct icc_provider *provider)
		return;

	mutex_lock(&icc_lock);
	mutex_lock(&icc_bw_lock);

	node->provider = provider;
	list_add_tail(&node->node_list, &provider->nodes);
@@ -992,6 +994,7 @@ void icc_node_add(struct icc_node *node, struct icc_provider *provider)
	node->avg_bw = 0;
	node->peak_bw = 0;

	mutex_unlock(&icc_bw_lock);
	mutex_unlock(&icc_lock);
}
EXPORT_SYMBOL_GPL(icc_node_add);
@@ -1129,6 +1132,7 @@ void icc_sync_state(struct device *dev)
		return;

	mutex_lock(&icc_lock);
	mutex_lock(&icc_bw_lock);
	synced_state = true;
	list_for_each_entry(p, &icc_providers, provider_list) {
		dev_dbg(p->dev, "interconnect provider is in synced state\n");
@@ -1141,13 +1145,21 @@ void icc_sync_state(struct device *dev)
			}
		}
	}
	mutex_unlock(&icc_bw_lock);
	mutex_unlock(&icc_lock);
}
EXPORT_SYMBOL_GPL(icc_sync_state);

static int __init icc_init(void)
{
	struct device_node *root = of_find_node_by_path("/");
	struct device_node *root;

	/* Teach lockdep about lock ordering wrt. shrinker: */
	fs_reclaim_acquire(GFP_KERNEL);
	might_lock(&icc_bw_lock);
	fs_reclaim_release(GFP_KERNEL);

	root = of_find_node_by_path("/");

	providers_count = of_count_icc_providers(root);
	of_node_put(root);

@@ -707,24 +707,6 @@ static void dm_put_live_table_fast(struct mapped_device *md) __releases(RCU)
	rcu_read_unlock();
}

static inline struct dm_table *dm_get_live_table_bio(struct mapped_device *md,
						     int *srcu_idx, blk_opf_t bio_opf)
{
	if (bio_opf & REQ_NOWAIT)
		return dm_get_live_table_fast(md);
	else
		return dm_get_live_table(md, srcu_idx);
}

static inline void dm_put_live_table_bio(struct mapped_device *md, int srcu_idx,
					 blk_opf_t bio_opf)
{
	if (bio_opf & REQ_NOWAIT)
		dm_put_live_table_fast(md);
	else
		dm_put_live_table(md, srcu_idx);
}

static char *_dm_claim_ptr = "I belong to device-mapper";

/*
@@ -1805,9 +1787,8 @@ static void dm_submit_bio(struct bio *bio)
	struct mapped_device *md = bio->bi_bdev->bd_disk->private_data;
	int srcu_idx;
	struct dm_table *map;
	blk_opf_t bio_opf = bio->bi_opf;

	map = dm_get_live_table_bio(md, &srcu_idx, bio_opf);
	map = dm_get_live_table(md, &srcu_idx);

	/* If suspended, or map not yet available, queue this IO for later */
	if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) ||
@@ -1823,7 +1804,7 @@ static void dm_submit_bio(struct bio *bio)

	dm_split_and_process_bio(md, map, bio);
out:
	dm_put_live_table_bio(md, srcu_idx, bio_opf);
	dm_put_live_table(md, srcu_idx);
}

static bool dm_poll_dm_io(struct dm_io *io, struct io_comp_batch *iob,

@@ -8228,7 +8228,7 @@ static void *md_seq_next(struct seq_file *seq, void *v, loff_t *pos)
	spin_unlock(&all_mddevs_lock);

	if (to_put)
		mddev_put(mddev);
		mddev_put(to_put);
	return next_mddev;

}

@@ -1828,6 +1828,9 @@ static int raid1_remove_disk(struct mddev *mddev, struct md_rdev *rdev)
	int number = rdev->raid_disk;
	struct raid1_info *p = conf->mirrors + number;

	if (unlikely(number >= conf->raid_disks))
		goto abort;

	if (rdev != p->rdev)
		p = conf->mirrors + conf->raid_disks + number;

@@ -413,7 +413,7 @@ static int buffer_prepare(struct vb2_buffer *vb)
				 dev->height >> 1);
		break;
	default:
		BUG();
		return -EINVAL; /* should not happen */
	}
	dprintk(2, "[%p/%d] buffer_init - %dx%d %dbpp 0x%08x - dma=0x%08lx\n",
		buf, buf->vb.vb2_buf.index,

@@ -354,7 +354,7 @@ static int cio2_hw_init(struct cio2_device *cio2, struct cio2_queue *q)
	void __iomem *const base = cio2->base;
	u8 lanes, csi2bus = q->csi2.port;
	u8 sensor_vc = SENSOR_VIR_CH_DFLT;
	struct cio2_csi2_timing timing;
	struct cio2_csi2_timing timing = { 0 };
	int i, r;

	fmt = cio2_find_format(NULL, &q->subdev_fmt.code);

@@ -775,11 +775,13 @@ static int mdp_get_subsys_id(struct device *dev, struct device_node *node,
	ret = cmdq_dev_get_client_reg(&comp_pdev->dev, &cmdq_reg, index);
	if (ret != 0) {
		dev_err(&comp_pdev->dev, "cmdq_dev_get_subsys fail!\n");
		put_device(&comp_pdev->dev);
		return -EINVAL;
	}

	comp->subsys_id = cmdq_reg.subsys;
	dev_dbg(&comp_pdev->dev, "subsys id=%d\n", cmdq_reg.subsys);
	put_device(&comp_pdev->dev);

	return 0;
}

@@ -345,11 +345,12 @@ static int qt1010_init(struct dvb_frontend *fe)
			else
				valptr = &tmpval;

			BUG_ON(i >= ARRAY_SIZE(i2c_data) - 1);

			err = qt1010_init_meas1(priv, i2c_data[i+1].reg,
						i2c_data[i].reg,
						i2c_data[i].val, valptr);
			if (i >= ARRAY_SIZE(i2c_data) - 1)
				err = -EIO;
			else
				err = qt1010_init_meas1(priv, i2c_data[i + 1].reg,
							i2c_data[i].reg,
							i2c_data[i].val, valptr);
			i++;
			break;
		}

@@ -270,6 +270,7 @@ static int af9035_i2c_master_xfer(struct i2c_adapter *adap,
	struct dvb_usb_device *d = i2c_get_adapdata(adap);
	struct state *state = d_to_priv(d);
	int ret;
	u32 reg;

	if (mutex_lock_interruptible(&d->i2c_mutex) < 0)
		return -EAGAIN;
@@ -322,8 +323,10 @@ static int af9035_i2c_master_xfer(struct i2c_adapter *adap,
			ret = -EOPNOTSUPP;
		} else if ((msg[0].addr == state->af9033_i2c_addr[0]) ||
			   (msg[0].addr == state->af9033_i2c_addr[1])) {
			if (msg[0].len < 3 || msg[1].len < 1)
				return -EOPNOTSUPP;
			/* demod access via firmware interface */
			u32 reg = msg[0].buf[0] << 16 | msg[0].buf[1] << 8 |
			reg = msg[0].buf[0] << 16 | msg[0].buf[1] << 8 |
					msg[0].buf[2];

			if (msg[0].addr == state->af9033_i2c_addr[1])
@@ -381,17 +384,16 @@ static int af9035_i2c_master_xfer(struct i2c_adapter *adap,
			ret = -EOPNOTSUPP;
		} else if ((msg[0].addr == state->af9033_i2c_addr[0]) ||
			   (msg[0].addr == state->af9033_i2c_addr[1])) {
			if (msg[0].len < 3)
				return -EOPNOTSUPP;
			/* demod access via firmware interface */
			u32 reg = msg[0].buf[0] << 16 | msg[0].buf[1] << 8 |
			reg = msg[0].buf[0] << 16 | msg[0].buf[1] << 8 |
					msg[0].buf[2];

			if (msg[0].addr == state->af9033_i2c_addr[1])
				reg |= 0x100000;

			ret = (msg[0].len >= 3) ? af9035_wr_regs(d, reg,
							         &msg[0].buf[3],
							         msg[0].len - 3)
					        : -EOPNOTSUPP;
			ret = af9035_wr_regs(d, reg, &msg[0].buf[3], msg[0].len - 3);
		} else {
			/* I2C write */
			u8 buf[MAX_XFER_SIZE];

@@ -202,7 +202,7 @@ static int anysee_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msg,

	while (i < num) {
		if (num > i + 1 && (msg[i+1].flags & I2C_M_RD)) {
			if (msg[i].len > 2 || msg[i+1].len > 60) {
			if (msg[i].len != 2 || msg[i + 1].len > 60) {
				ret = -EOPNOTSUPP;
				break;
			}

@@ -788,6 +788,10 @@ static int az6007_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[],
			if (az6007_xfer_debug)
				printk(KERN_DEBUG "az6007: I2C W addr=0x%x len=%d\n",
				       addr, msgs[i].len);
			if (msgs[i].len < 1) {
				ret = -EIO;
				goto err;
			}
			req = AZ6007_I2C_WR;
			index = msgs[i].buf[0];
			value = addr | (1 << 8);
@@ -802,6 +806,10 @@ static int az6007_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[],
			if (az6007_xfer_debug)
				printk(KERN_DEBUG "az6007: I2C R addr=0x%x len=%d\n",
				       addr, msgs[i].len);
			if (msgs[i].len < 1) {
				ret = -EIO;
				goto err;
			}
			req = AZ6007_I2C_RD;
			index = msgs[i].buf[0];
			value = addr;

@@ -120,7 +120,7 @@ static int gl861_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
	} else if (num == 2 && !(msg[0].flags & I2C_M_RD) &&
		   (msg[1].flags & I2C_M_RD)) {
		/* I2C write + read */
		if (msg[0].len > 1 || msg[1].len > sizeof(ctx->buf)) {
		if (msg[0].len != 1 || msg[1].len > sizeof(ctx->buf)) {
			ret = -EOPNOTSUPP;
			goto err;
		}

@@ -422,6 +422,10 @@ static int af9005_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
		if (ret == 0)
			ret = 2;
	} else {
		if (msg[0].len < 2) {
			ret = -EOPNOTSUPP;
			goto unlock;
		}
		/* write one or more registers */
		reg = msg[0].buf[0];
		addr = msg[0].addr;
@@ -431,6 +435,7 @@ static int af9005_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msg[],
			ret = 1;
	}

unlock:
	mutex_unlock(&d->i2c_mutex);
	return ret;
}

@@ -128,6 +128,10 @@ static int dw2102_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[],

	switch (num) {
	case 2:
		if (msg[0].len < 1) {
			num = -EOPNOTSUPP;
			break;
		}
		/* read stv0299 register */
		value = msg[0].buf[0];/* register */
		for (i = 0; i < msg[1].len; i++) {
@@ -139,6 +143,10 @@ static int dw2102_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[],
	case 1:
		switch (msg[0].addr) {
		case 0x68:
			if (msg[0].len < 2) {
				num = -EOPNOTSUPP;
				break;
			}
			/* write to stv0299 register */
			buf6[0] = 0x2a;
			buf6[1] = msg[0].buf[0];
@@ -148,6 +156,10 @@ static int dw2102_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[],
			break;
		case 0x60:
			if (msg[0].flags == 0) {
				if (msg[0].len < 4) {
					num = -EOPNOTSUPP;
					break;
				}
				/* write to tuner pll */
				buf6[0] = 0x2c;
				buf6[1] = 5;
@@ -159,6 +171,10 @@ static int dw2102_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[],
				dw210x_op_rw(d->udev, 0xb2, 0, 0,
						buf6, 7, DW210X_WRITE_MSG);
			} else {
				if (msg[0].len < 1) {
					num = -EOPNOTSUPP;
					break;
				}
				/* read from tuner */
				dw210x_op_rw(d->udev, 0xb5, 0, 0,
						buf6, 1, DW210X_READ_MSG);
@@ -166,12 +182,20 @@ static int dw2102_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[],
			}
			break;
		case (DW2102_RC_QUERY):
			if (msg[0].len < 2) {
				num = -EOPNOTSUPP;
				break;
			}
			dw210x_op_rw(d->udev, 0xb8, 0, 0,
					buf6, 2, DW210X_READ_MSG);
			msg[0].buf[0] = buf6[0];
			msg[0].buf[1] = buf6[1];
			break;
		case (DW2102_VOLTAGE_CTRL):
			if (msg[0].len < 1) {
				num = -EOPNOTSUPP;
				break;
			}
			buf6[0] = 0x30;
			buf6[1] = msg[0].buf[0];
			dw210x_op_rw(d->udev, 0xb2, 0, 0,

@@ -489,6 +489,7 @@ config HISI_HIKEY_USB
config OPEN_DICE
	tristate "Open Profile for DICE driver"
	depends on OF_RESERVED_MEM
	depends on HAS_IOMEM
	help
	  This driver exposes a DICE reserved memory region to userspace via
	  a character device. The memory region contains Compound Device

@@ -310,8 +310,8 @@ static void fastrpc_free_map(struct kref *ref)
			return;
		}
	}
	dma_buf_unmap_attachment(map->attach, map->table,
				 DMA_BIDIRECTIONAL);
	dma_buf_unmap_attachment_unlocked(map->attach, map->table,
					  DMA_BIDIRECTIONAL);
	dma_buf_detach(map->buf, map->attach);
	dma_buf_put(map->buf);
}
@@ -711,6 +711,7 @@ static int fastrpc_map_create(struct fastrpc_user *fl, int fd,
{
	struct fastrpc_session_ctx *sess = fl->sctx;
	struct fastrpc_map *map = NULL;
	struct sg_table *table;
	int err = 0;

	if (!fastrpc_map_lookup(fl, fd, ppmap, true))
@@ -736,11 +737,12 @@ static int fastrpc_map_create(struct fastrpc_user *fl, int fd,
		goto attach_err;
	}

	map->table = dma_buf_map_attachment(map->attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(map->table)) {
		err = PTR_ERR(map->table);
	table = dma_buf_map_attachment_unlocked(map->attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(table)) {
		err = PTR_ERR(table);
		goto map_err;
	}
	map->table = table;

	map->phys = sg_dma_address(map->table->sgl);
	map->phys += ((u64)fl->sctx->sid << 32);

@@ -171,8 +171,8 @@
#define ESDHC_FLAG_HS400		BIT(9)
/*
 * The IP has errata ERR010450
 * uSDHC: Due to the I/O timing limit, for SDR mode, SD card clock can't
 * exceed 150MHz, for DDR mode, SD card clock can't exceed 45MHz.
 * uSDHC: At 1.8V due to the I/O timing limit, for SDR mode, SD card
 * clock can't exceed 150MHz, for DDR mode, SD card clock can't exceed 45MHz.
 */
#define ESDHC_FLAG_ERR010450		BIT(10)
/* The IP supports HS400ES mode */
@@ -932,7 +932,8 @@ static inline void esdhc_pltfm_set_clock(struct sdhci_host *host,
		| ESDHC_CLOCK_MASK);
	sdhci_writel(host, temp, ESDHC_SYSTEM_CONTROL);

	if (imx_data->socdata->flags & ESDHC_FLAG_ERR010450) {
	if ((imx_data->socdata->flags & ESDHC_FLAG_ERR010450) &&
	    (!(host->quirks2 & SDHCI_QUIRK2_NO_1_8_V))) {
		unsigned int max_clock;

		max_clock = imx_data->is_ddr ? 45000000 : 150000000;

@@ -174,10 +174,10 @@ config CAN_SLCAN

config CAN_SUN4I
	tristate "Allwinner A10 CAN controller"
	depends on MACH_SUN4I || MACH_SUN7I || COMPILE_TEST
	depends on MACH_SUN4I || MACH_SUN7I || RISCV || COMPILE_TEST
	help
	  Say Y here if you want to use CAN controller found on Allwinner
	  A10/A20 SoCs.
	  A10/A20/D1 SoCs.

	  To compile this driver as a module, choose M here: the module will
	  be called sun4i_can.

@@ -91,6 +91,8 @@
#define SUN4I_REG_BUF12_ADDR	0x0070	/* CAN Tx/Rx Buffer 12 */
#define SUN4I_REG_ACPC_ADDR	0x0040	/* CAN Acceptance Code 0 */
#define SUN4I_REG_ACPM_ADDR	0x0044	/* CAN Acceptance Mask 0 */
#define SUN4I_REG_ACPC_ADDR_D1	0x0028	/* CAN Acceptance Code 0 on the D1 */
#define SUN4I_REG_ACPM_ADDR_D1	0x002C	/* CAN Acceptance Mask 0 on the D1 */
#define SUN4I_REG_RBUF_RBACK_START_ADDR	0x0180	/* CAN transmit buffer start */
#define SUN4I_REG_RBUF_RBACK_END_ADDR	0x01b0	/* CAN transmit buffer end */

@@ -205,9 +207,11 @@
 * struct sun4ican_quirks - Differences between SoC variants.
 *
 * @has_reset: SoC needs reset deasserted.
 * @acp_offset: Offset of ACPC and ACPM registers
 */
struct sun4ican_quirks {
	bool has_reset;
	int acp_offset;
};

struct sun4ican_priv {
@@ -216,6 +220,7 @@ struct sun4ican_priv {
	struct clk *clk;
	struct reset_control *reset;
	spinlock_t cmdreg_lock;	/* lock for concurrent cmd register writes */
	int acp_offset;
};

static const struct can_bittiming_const sun4ican_bittiming_const = {
@@ -338,8 +343,8 @@ static int sun4i_can_start(struct net_device *dev)
	}

	/* set filters - we accept all */
	writel(0x00000000, priv->base + SUN4I_REG_ACPC_ADDR);
	writel(0xFFFFFFFF, priv->base + SUN4I_REG_ACPM_ADDR);
	writel(0x00000000, priv->base + SUN4I_REG_ACPC_ADDR + priv->acp_offset);
	writel(0xFFFFFFFF, priv->base + SUN4I_REG_ACPM_ADDR + priv->acp_offset);

	/* clear error counters and error code capture */
	writel(0, priv->base + SUN4I_REG_ERRC_ADDR);
@@ -768,10 +773,17 @@ static const struct ethtool_ops sun4ican_ethtool_ops = {

static const struct sun4ican_quirks sun4ican_quirks_a10 = {
	.has_reset = false,
	.acp_offset = 0,
};

static const struct sun4ican_quirks sun4ican_quirks_r40 = {
	.has_reset = true,
	.acp_offset = 0,
};

static const struct sun4ican_quirks sun4ican_quirks_d1 = {
	.has_reset = true,
	.acp_offset = (SUN4I_REG_ACPC_ADDR_D1 - SUN4I_REG_ACPC_ADDR),
};

static const struct of_device_id sun4ican_of_match[] = {
@@ -784,6 +796,9 @@ static const struct of_device_id sun4ican_of_match[] = {
	}, {
		.compatible = "allwinner,sun8i-r40-can",
		.data = &sun4ican_quirks_r40
	}, {
		.compatible = "allwinner,sun20i-d1-can",
		.data = &sun4ican_quirks_d1
	}, {
		/* sentinel */
	},
@@ -872,6 +887,7 @@ static int sun4ican_probe(struct platform_device *pdev)
	priv->base = addr;
	priv->clk = clk;
	priv->reset = reset;
	priv->acp_offset = quirks->acp_offset;
	spin_lock_init(&priv->cmdreg_lock);

	platform_set_drvdata(pdev, dev);
@@ -909,4 +925,4 @@ module_platform_driver(sun4i_can_driver);
MODULE_AUTHOR("Peter Chen <xingkongcp@gmail.com>");
MODULE_AUTHOR("Gerhard Bertelsmann <info@gerhard-bertelsmann.de>");
MODULE_LICENSE("Dual BSD/GPL");
MODULE_DESCRIPTION("CAN driver for Allwinner SoCs (A10/A20)");
MODULE_DESCRIPTION("CAN driver for Allwinner SoCs (A10/A20/D1)");

@@ -292,9 +292,8 @@ static void alx_get_ethtool_stats(struct net_device *netdev,
	spin_lock(&alx->stats_lock);

	alx_update_hw_stats(hw);
	BUILD_BUG_ON(sizeof(hw->stats) - offsetof(struct alx_hw_stats, rx_ok) <
		     ALX_NUM_STATS * sizeof(u64));
	memcpy(data, &hw->stats.rx_ok, ALX_NUM_STATS * sizeof(u64));
	BUILD_BUG_ON(sizeof(hw->stats) != ALX_NUM_STATS * sizeof(u64));
	memcpy(data, &hw->stats, sizeof(hw->stats));

	spin_unlock(&alx->stats_lock);
}

@@ -361,6 +361,9 @@ ice_eswitch_port_start_xmit(struct sk_buff *skb, struct net_device *netdev)
	np = netdev_priv(netdev);
	vsi = np->vsi;

	if (!vsi || !ice_is_switchdev_running(vsi->back))
		return NETDEV_TX_BUSY;

	if (ice_is_reset_in_progress(vsi->back->state) ||
	    test_bit(ICE_VF_DIS, vsi->back->state))
		return NETDEV_TX_BUSY;

@@ -132,8 +132,8 @@ static int ath_ahb_probe(struct platform_device *pdev)

	ah = sc->sc_ah;
	ath9k_hw_name(ah, hw_name, sizeof(hw_name));
	wiphy_info(hw->wiphy, "%s mem=0x%lx, irq=%d\n",
		   hw_name, (unsigned long)mem, irq);
	wiphy_info(hw->wiphy, "%s mem=0x%p, irq=%d\n",
		   hw_name, mem, irq);

	return 0;

@@ -115,8 +115,10 @@ struct ath_tx_status {
	u8 qid;
	u16 desc_id;
	u8 tid;
	u32 ba_low;
	u32 ba_high;
	struct_group(ba,
		u32 ba_low;
		u32 ba_high;
	);
	u32 evm0;
	u32 evm1;
	u32 evm2;

@@ -988,8 +988,8 @@ static int ath_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
		sc->sc_ah->msi_reg = 0;

	ath9k_hw_name(sc->sc_ah, hw_name, sizeof(hw_name));
	wiphy_info(hw->wiphy, "%s mem=0x%lx, irq=%d\n",
		   hw_name, (unsigned long)sc->mem, pdev->irq);
	wiphy_info(hw->wiphy, "%s mem=0x%p, irq=%d\n",
		   hw_name, sc->mem, pdev->irq);

	return 0;

@@ -462,7 +462,7 @@ static void ath_tx_count_frames(struct ath_softc *sc, struct ath_buf *bf,
	isaggr = bf_isaggr(bf);
	if (isaggr) {
		seq_st = ts->ts_seqnum;
		memcpy(ba, &ts->ba_low, WME_BA_BMP_SIZE >> 3);
		memcpy(ba, &ts->ba, WME_BA_BMP_SIZE >> 3);
	}

	while (bf) {
@@ -545,7 +545,7 @@ static void ath_tx_complete_aggr(struct ath_softc *sc, struct ath_txq *txq,
	if (isaggr && txok) {
		if (ts->ts_flags & ATH9K_TX_BA) {
			seq_st = ts->ts_seqnum;
			memcpy(ba, &ts->ba_low, WME_BA_BMP_SIZE >> 3);
			memcpy(ba, &ts->ba, WME_BA_BMP_SIZE >> 3);
		} else {
			/*
			 * AR5416 can become deaf/mute when BA

@@ -666,7 +666,7 @@ static int wil_rx_crypto_check(struct wil6210_priv *wil, struct sk_buff *skb)
 	struct wil_tid_crypto_rx *c = mc ? &s->group_crypto_rx :
 				      &s->tid_crypto_rx[tid];
 	struct wil_tid_crypto_rx_single *cc = &c->key_id[key_id];
-	const u8 *pn = (u8 *)&d->mac.pn_15_0;
+	const u8 *pn = (u8 *)&d->mac.pn;

 	if (!cc->key_set) {
 		wil_err_ratelimited(wil,
@@ -343,8 +343,10 @@ struct vring_rx_mac {
 	u32 d0;
 	u32 d1;
 	u16 w4;
-	u16 pn_15_0;
-	u32 pn_47_16;
+	struct_group_attr(pn, __packed,
+		u16 pn_15_0;
+		u32 pn_47_16;
+	);
 } __packed;

 /* Rx descriptor - DMA part
@@ -548,7 +548,7 @@ static int wil_rx_crypto_check_edma(struct wil6210_priv *wil,
 	s = &wil->sta[cid];
 	c = mc ? &s->group_crypto_rx : &s->tid_crypto_rx[tid];
 	cc = &c->key_id[key_id];
-	pn = (u8 *)&st->ext.pn_15_0;
+	pn = (u8 *)&st->ext.pn;

 	if (!cc->key_set) {
 		wil_err_ratelimited(wil,
@@ -330,8 +330,10 @@ struct wil_rx_status_extension {
 	u32 d0;
 	u32 d1;
 	__le16 seq_num; /* only lower 12 bits */
-	u16 pn_15_0;
-	u32 pn_47_16;
+	struct_group_attr(pn, __packed,
+		u16 pn_15_0;
+		u32 pn_47_16;
+	);
 } __packed;

 struct wil_rx_status_extended {
@@ -4906,14 +4906,15 @@ static int hwsim_cloned_frame_received_nl(struct sk_buff *skb_2,
 	frame_data_len = nla_len(info->attrs[HWSIM_ATTR_FRAME]);
 	frame_data = (void *)nla_data(info->attrs[HWSIM_ATTR_FRAME]);

+	if (frame_data_len < sizeof(struct ieee80211_hdr_3addr) ||
+	    frame_data_len > IEEE80211_MAX_DATA_LEN)
+		goto err;
+
 	/* Allocate new skb here */
 	skb = alloc_skb(frame_data_len, GFP_KERNEL);
 	if (skb == NULL)
 		goto err;

-	if (frame_data_len > IEEE80211_MAX_DATA_LEN)
-		goto err;
-
 	/* Copy the data */
 	skb_put_data(skb, frame_data, frame_data_len);

@@ -735,6 +735,7 @@ mwifiex_construct_tdls_action_frame(struct mwifiex_private *priv,
 	int ret;
 	u16 capab;
 	struct ieee80211_ht_cap *ht_cap;
+	unsigned int extra;
 	u8 radio, *pos;

 	capab = priv->curr_bss_params.bss_descriptor.cap_info_bitmap;
@@ -753,7 +754,10 @@ mwifiex_construct_tdls_action_frame(struct mwifiex_private *priv,

 	switch (action_code) {
 	case WLAN_PUB_ACTION_TDLS_DISCOVER_RES:
-		skb_put(skb, sizeof(mgmt->u.action.u.tdls_discover_resp) + 1);
+		/* See the layout of 'struct ieee80211_mgmt'. */
+		extra = sizeof(mgmt->u.action.u.tdls_discover_resp) +
+			sizeof(mgmt->u.action.category);
+		skb_put(skb, extra);
 		mgmt->u.action.category = WLAN_CATEGORY_PUBLIC;
 		mgmt->u.action.u.tdls_discover_resp.action_code =
 			WLAN_PUB_ACTION_TDLS_DISCOVER_RES;
@@ -762,8 +766,7 @@ mwifiex_construct_tdls_action_frame(struct mwifiex_private *priv,
 		mgmt->u.action.u.tdls_discover_resp.capability =
 			cpu_to_le16(capab);
 		/* move back for addr4 */
-		memmove(pos + ETH_ALEN, &mgmt->u.action.category,
-			sizeof(mgmt->u.action.u.tdls_discover_resp));
+		memmove(pos + ETH_ALEN, &mgmt->u.action, extra);
 		/* init address 4 */
 		eth_broadcast_addr(pos);

@@ -1167,6 +1167,10 @@ int mt7921_usb_sdio_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
 	if (unlikely(tx_info->skb->len <= ETH_HLEN))
 		return -EINVAL;

+	err = skb_cow_head(skb, MT_SDIO_TXD_SIZE + MT_SDIO_HDR_SIZE);
+	if (err)
+		return err;
+
 	if (!wcid)
 		wcid = &dev->mt76.global_wcid;

@@ -2368,25 +2368,8 @@ int nvme_enable_ctrl(struct nvme_ctrl *ctrl)
 	else
 		ctrl->ctrl_config = NVME_CC_CSS_NVM;

-	if (ctrl->cap & NVME_CAP_CRMS_CRWMS) {
-		u32 crto;
-
-		ret = ctrl->ops->reg_read32(ctrl, NVME_REG_CRTO, &crto);
-		if (ret) {
-			dev_err(ctrl->device, "Reading CRTO failed (%d)\n",
-				ret);
-			return ret;
-		}
-
-		if (ctrl->cap & NVME_CAP_CRMS_CRIMS) {
-			ctrl->ctrl_config |= NVME_CC_CRIME;
-			timeout = NVME_CRTO_CRIMT(crto);
-		} else {
-			timeout = NVME_CRTO_CRWMT(crto);
-		}
-	} else {
-		timeout = NVME_CAP_TIMEOUT(ctrl->cap);
-	}
+	if (ctrl->cap & NVME_CAP_CRMS_CRWMS && ctrl->cap & NVME_CAP_CRMS_CRIMS)
+		ctrl->ctrl_config |= NVME_CC_CRIME;

 	ctrl->ctrl_config |= (NVME_CTRL_PAGE_SHIFT - 12) << NVME_CC_MPS_SHIFT;
 	ctrl->ctrl_config |= NVME_CC_AMS_RR | NVME_CC_SHN_NONE;
@@ -2400,6 +2383,39 @@ int nvme_enable_ctrl(struct nvme_ctrl *ctrl)
 	if (ret)
 		return ret;

+	/* CAP value may change after initial CC write */
+	ret = ctrl->ops->reg_read64(ctrl, NVME_REG_CAP, &ctrl->cap);
+	if (ret)
+		return ret;
+
+	timeout = NVME_CAP_TIMEOUT(ctrl->cap);
+	if (ctrl->cap & NVME_CAP_CRMS_CRWMS) {
+		u32 crto, ready_timeout;
+
+		ret = ctrl->ops->reg_read32(ctrl, NVME_REG_CRTO, &crto);
+		if (ret) {
+			dev_err(ctrl->device, "Reading CRTO failed (%d)\n",
+				ret);
+			return ret;
+		}
+
+		/*
+		 * CRTO should always be greater or equal to CAP.TO, but some
+		 * devices are known to get this wrong. Use the larger of the
+		 * two values.
+		 */
+		if (ctrl->ctrl_config & NVME_CC_CRIME)
+			ready_timeout = NVME_CRTO_CRIMT(crto);
+		else
+			ready_timeout = NVME_CRTO_CRWMT(crto);
+
+		if (ready_timeout < timeout)
+			dev_warn_once(ctrl->device, "bad crto:%x cap:%llx\n",
+				      crto, ctrl->cap);
+		else
+			timeout = ready_timeout;
+	}
+
 	ctrl->ctrl_config |= NVME_CC_ENABLE;
 	ret = ctrl->ops->reg_write32(ctrl, NVME_REG_CC, ctrl->ctrl_config);
 	if (ret)
@@ -73,13 +73,6 @@ int nvmet_file_ns_enable(struct nvmet_ns *ns)
 	return ret;
 }

-static void nvmet_file_init_bvec(struct bio_vec *bv, struct scatterlist *sg)
-{
-	bv->bv_page = sg_page(sg);
-	bv->bv_offset = sg->offset;
-	bv->bv_len = sg->length;
-}
-
 static ssize_t nvmet_file_submit_bvec(struct nvmet_req *req, loff_t pos,
 		unsigned long nr_segs, size_t count, int ki_flags)
 {
@@ -146,7 +139,8 @@ static bool nvmet_file_execute_io(struct nvmet_req *req, int ki_flags)

 	memset(&req->f.iocb, 0, sizeof(struct kiocb));
 	for_each_sg(req->sg, sg, req->sg_cnt, i) {
-		nvmet_file_init_bvec(&req->f.bvec[bv_cnt], sg);
+		bvec_set_page(&req->f.bvec[bv_cnt], sg_page(sg), sg->length,
+			      sg->offset);
 		len += req->f.bvec[bv_cnt].bv_len;
 		total_len += req->f.bvec[bv_cnt].bv_len;
 		bv_cnt++;
@@ -321,9 +321,8 @@ static void nvmet_tcp_build_pdu_iovec(struct nvmet_tcp_cmd *cmd)
 	while (length) {
 		u32 iov_len = min_t(u32, length, sg->length - sg_offset);

-		iov->bv_page = sg_page(sg);
-		iov->bv_len = sg->length;
-		iov->bv_offset = sg->offset + sg_offset;
+		bvec_set_page(iov, sg_page(sg), iov_len,
+			      sg->offset + sg_offset);

 		length -= iov_len;
 		sg = sg_next(sg);
@@ -999,6 +999,7 @@ static void imx6_pcie_host_exit(struct dw_pcie_rp *pp)

 static const struct dw_pcie_host_ops imx6_pcie_host_ops = {
 	.host_init = imx6_pcie_host_init,
+	.host_deinit = imx6_pcie_host_exit,
 };

 static const struct dw_pcie_ops dw_pcie_ops = {

@@ -299,6 +299,7 @@ static int fu740_pcie_probe(struct platform_device *pdev)
 	pci->dev = dev;
 	pci->ops = &dw_pcie_ops;
 	pci->pp.ops = &fu740_pcie_host_ops;
+	pci->pp.num_vectors = MAX_MSI_IRQS;

 	/* SiFive specific region: mgmt */
 	afp->mgmt_base = devm_platform_ioremap_resource_byname(pdev, "mgmt");
@@ -526,8 +526,23 @@ static void vmd_domain_reset(struct vmd_dev *vmd)
 					     PCI_CLASS_BRIDGE_PCI))
 					continue;

-				memset_io(base + PCI_IO_BASE, 0,
-					  PCI_ROM_ADDRESS1 - PCI_IO_BASE);
+				/*
+				 * Temporarily disable the I/O range before updating
+				 * PCI_IO_BASE.
+				 */
+				writel(0x0000ffff, base + PCI_IO_BASE_UPPER16);
+				/* Update lower 16 bits of I/O base/limit */
+				writew(0x00f0, base + PCI_IO_BASE);
+				/* Update upper 16 bits of I/O base/limit */
+				writel(0, base + PCI_IO_BASE_UPPER16);
+
+				/* MMIO Base/Limit */
+				writel(0x0000fff0, base + PCI_MEMORY_BASE);
+
+				/* Prefetchable MMIO Base/Limit */
+				writel(0, base + PCI_PREF_LIMIT_UPPER32);
+				writel(0x0000fff0, base + PCI_PREF_MEMORY_BASE);
+				writel(0xffffffff, base + PCI_PREF_BASE_UPPER32);
 			}
 		}
 	}

@@ -115,6 +115,7 @@
 #define SMMU_PMCG_PA_SHIFT              12

 #define SMMU_PMCG_EVCNTR_RDONLY         BIT(0)
+#define SMMU_PMCG_HARDEN_DISABLE        BIT(1)

 static int cpuhp_state_num;

@@ -159,6 +160,20 @@ static inline void smmu_pmu_enable(struct pmu *pmu)
 	writel(SMMU_PMCG_CR_ENABLE, smmu_pmu->reg_base + SMMU_PMCG_CR);
 }

+static int smmu_pmu_apply_event_filter(struct smmu_pmu *smmu_pmu,
+				       struct perf_event *event, int idx);
+
+static inline void smmu_pmu_enable_quirk_hip08_09(struct pmu *pmu)
+{
+	struct smmu_pmu *smmu_pmu = to_smmu_pmu(pmu);
+	unsigned int idx;
+
+	for_each_set_bit(idx, smmu_pmu->used_counters, smmu_pmu->num_counters)
+		smmu_pmu_apply_event_filter(smmu_pmu, smmu_pmu->events[idx], idx);
+
+	smmu_pmu_enable(pmu);
+}
+
 static inline void smmu_pmu_disable(struct pmu *pmu)
 {
 	struct smmu_pmu *smmu_pmu = to_smmu_pmu(pmu);
@@ -167,6 +182,22 @@ static inline void smmu_pmu_disable(struct pmu *pmu)
 	writel(0, smmu_pmu->reg_base + SMMU_PMCG_IRQ_CTRL);
 }

+static inline void smmu_pmu_disable_quirk_hip08_09(struct pmu *pmu)
+{
+	struct smmu_pmu *smmu_pmu = to_smmu_pmu(pmu);
+	unsigned int idx;
+
+	/*
+	 * The global disable of PMU sometimes fail to stop the counting.
+	 * Harden this by writing an invalid event type to each used counter
+	 * to forcibly stop counting.
+	 */
+	for_each_set_bit(idx, smmu_pmu->used_counters, smmu_pmu->num_counters)
+		writel(0xffff, smmu_pmu->reg_base + SMMU_PMCG_EVTYPER(idx));
+
+	smmu_pmu_disable(pmu);
+}
+
 static inline void smmu_pmu_counter_set_value(struct smmu_pmu *smmu_pmu,
 					      u32 idx, u64 value)
 {
@@ -765,7 +796,10 @@ static void smmu_pmu_get_acpi_options(struct smmu_pmu *smmu_pmu)
 	switch (model) {
 	case IORT_SMMU_V3_PMCG_HISI_HIP08:
 		/* HiSilicon Erratum 162001800 */
-		smmu_pmu->options |= SMMU_PMCG_EVCNTR_RDONLY;
+		smmu_pmu->options |= SMMU_PMCG_EVCNTR_RDONLY | SMMU_PMCG_HARDEN_DISABLE;
+		break;
+	case IORT_SMMU_V3_PMCG_HISI_HIP09:
+		smmu_pmu->options |= SMMU_PMCG_HARDEN_DISABLE;
 		break;
 	}

@@ -890,6 +924,16 @@ static int smmu_pmu_probe(struct platform_device *pdev)
 	if (!dev->of_node)
 		smmu_pmu_get_acpi_options(smmu_pmu);

+	/*
+	 * For platforms suffer this quirk, the PMU disable sometimes fails to
+	 * stop the counters. This will leads to inaccurate or error counting.
+	 * Forcibly disable the counters with these quirk handler.
+	 */
+	if (smmu_pmu->options & SMMU_PMCG_HARDEN_DISABLE) {
+		smmu_pmu->pmu.pmu_enable = smmu_pmu_enable_quirk_hip08_09;
+		smmu_pmu->pmu.pmu_disable = smmu_pmu_disable_quirk_hip08_09;
+	}
+
 	/* Pick one CPU to be the preferred one to use */
 	smmu_pmu->on_cpu = raw_smp_processor_id();
 	WARN_ON(irq_set_affinity(smmu_pmu->irq, cpumask_of(smmu_pmu->on_cpu)));

@@ -28,6 +28,8 @@
 #define CNTL_CLEAR_MASK		0xFFFFFFFD
 #define CNTL_OVER_MASK		0xFFFFFFFE

+#define CNTL_CP_SHIFT		16
+#define CNTL_CP_MASK		(0xFF << CNTL_CP_SHIFT)
 #define CNTL_CSV_SHIFT		24
 #define CNTL_CSV_MASK		(0xFFU << CNTL_CSV_SHIFT)

@@ -35,6 +37,8 @@
 #define EVENT_CYCLES_COUNTER	0
 #define NUM_COUNTERS		4

+/* For removing bias if cycle counter CNTL.CP is set to 0xf0 */
+#define CYCLES_COUNTER_MASK	0x0FFFFFFF
 #define AXI_MASKING_REVERT	0xffff0000	/* AXI_MASKING(MSB 16bits) + AXI_ID(LSB 16bits) */

 #define to_ddr_pmu(p) container_of(p, struct ddr_pmu, pmu)
@@ -429,6 +433,17 @@ static void ddr_perf_counter_enable(struct ddr_pmu *pmu, int config,
 		writel(0, pmu->base + reg);
 		val = CNTL_EN | CNTL_CLEAR;
 		val |= FIELD_PREP(CNTL_CSV_MASK, config);
+
+		/*
+		 * On i.MX8MP we need to bias the cycle counter to overflow more often.
+		 * We do this by initializing bits [23:16] of the counter value via the
+		 * COUNTER_CTRL Counter Parameter (CP) field.
+		 */
+		if (pmu->devtype_data->quirks & DDR_CAP_AXI_ID_FILTER_ENHANCED) {
+			if (counter == EVENT_CYCLES_COUNTER)
+				val |= FIELD_PREP(CNTL_CP_MASK, 0xf0);
+		}
+
 		writel(val, pmu->base + reg);
 	} else {
 		/* Disable counter */
@@ -468,6 +483,12 @@ static void ddr_perf_event_update(struct perf_event *event)
 	int ret;

 	new_raw_count = ddr_perf_read_counter(pmu, counter);
+	/* Remove the bias applied in ddr_perf_counter_enable(). */
+	if (pmu->devtype_data->quirks & DDR_CAP_AXI_ID_FILTER_ENHANCED) {
+		if (counter == EVENT_CYCLES_COUNTER)
+			new_raw_count &= CYCLES_COUNTER_MASK;
+	}
+
 	local64_add(new_raw_count, &event->count);

 	/*
@@ -895,6 +895,7 @@ enum lpfc_irq_chann_mode {
 enum lpfc_hba_bit_flags {
 	FABRIC_COMANDS_BLOCKED,
 	HBA_PCI_ERR,
+	MBX_TMO_ERR,
 };

 struct lpfc_hba {
@@ -6069,7 +6069,7 @@ lpfc_debugfs_initialize(struct lpfc_vport *vport)
 					    phba->hba_debugfs_root,
 					    phba,
 					    &lpfc_debugfs_op_multixripools);
-		if (!phba->debug_multixri_pools) {
+		if (IS_ERR(phba->debug_multixri_pools)) {
 			lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
 					 "0527 Cannot create debugfs multixripools\n");
 			goto debug_failed;
@@ -6081,7 +6081,7 @@ lpfc_debugfs_initialize(struct lpfc_vport *vport)
 			debugfs_create_file(name, S_IFREG | 0644,
 					    phba->hba_debugfs_root,
 					    phba, &lpfc_cgn_buffer_op);
-		if (!phba->debug_cgn_buffer) {
+		if (IS_ERR(phba->debug_cgn_buffer)) {
 			lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
 					 "6527 Cannot create debugfs "
 					 "cgn_buffer\n");
@@ -6094,7 +6094,7 @@ lpfc_debugfs_initialize(struct lpfc_vport *vport)
 			debugfs_create_file(name, S_IFREG | 0644,
 					    phba->hba_debugfs_root,
 					    phba, &lpfc_rx_monitor_op);
-		if (!phba->debug_rx_monitor) {
+		if (IS_ERR(phba->debug_rx_monitor)) {
 			lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
 					 "6528 Cannot create debugfs "
 					 "rx_monitor\n");
@@ -6107,7 +6107,7 @@ lpfc_debugfs_initialize(struct lpfc_vport *vport)
 			debugfs_create_file(name, 0644,
 					    phba->hba_debugfs_root,
 					    phba, &lpfc_debugfs_ras_log);
-		if (!phba->debug_ras_log) {
+		if (IS_ERR(phba->debug_ras_log)) {
 			lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
 					 "6148 Cannot create debugfs"
 					 " ras_log\n");
@@ -6128,7 +6128,7 @@ lpfc_debugfs_initialize(struct lpfc_vport *vport)
 			debugfs_create_file(name, S_IFREG | 0644,
 					    phba->hba_debugfs_root,
 					    phba, &lpfc_debugfs_op_lockstat);
-		if (!phba->debug_lockstat) {
+		if (IS_ERR(phba->debug_lockstat)) {
 			lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
 					 "4610 Can't create debugfs lockstat\n");
 			goto debug_failed;
@@ -6354,7 +6354,7 @@ lpfc_debugfs_initialize(struct lpfc_vport *vport)
 			debugfs_create_file(name, 0644,
 					    vport->vport_debugfs_root,
 					    vport, &lpfc_debugfs_op_scsistat);
-		if (!vport->debug_scsistat) {
+		if (IS_ERR(vport->debug_scsistat)) {
 			lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
 					 "4611 Cannot create debugfs scsistat\n");
 			goto debug_failed;
@@ -6365,7 +6365,7 @@ lpfc_debugfs_initialize(struct lpfc_vport *vport)
 			debugfs_create_file(name, 0644,
 					    vport->vport_debugfs_root,
 					    vport, &lpfc_debugfs_op_ioktime);
-		if (!vport->debug_ioktime) {
+		if (IS_ERR(vport->debug_ioktime)) {
 			lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
 					 "0815 Cannot create debugfs ioktime\n");
 			goto debug_failed;

@@ -9410,11 +9410,13 @@ void
 lpfc_els_flush_cmd(struct lpfc_vport *vport)
 {
 	LIST_HEAD(abort_list);
+	LIST_HEAD(cancel_list);
 	struct lpfc_hba  *phba = vport->phba;
 	struct lpfc_sli_ring *pring;
 	struct lpfc_iocbq *tmp_iocb, *piocb;
 	u32 ulp_command;
 	unsigned long iflags = 0;
+	bool mbx_tmo_err;

 	lpfc_fabric_abort_vport(vport);

@@ -9436,15 +9438,16 @@ lpfc_els_flush_cmd(struct lpfc_vport *vport)
 	if (phba->sli_rev == LPFC_SLI_REV4)
 		spin_lock(&pring->ring_lock);

+	mbx_tmo_err = test_bit(MBX_TMO_ERR, &phba->bit_flags);
 	/* First we need to issue aborts to outstanding cmds on txcmpl */
 	list_for_each_entry_safe(piocb, tmp_iocb, &pring->txcmplq, list) {
-		if (piocb->cmd_flag & LPFC_IO_LIBDFC)
+		if (piocb->cmd_flag & LPFC_IO_LIBDFC && !mbx_tmo_err)
 			continue;

 		if (piocb->vport != vport)
 			continue;

-		if (piocb->cmd_flag & LPFC_DRIVER_ABORTED)
+		if (piocb->cmd_flag & LPFC_DRIVER_ABORTED && !mbx_tmo_err)
 			continue;

 		/* On the ELS ring we can have ELS_REQUESTs or
@@ -9463,8 +9466,8 @@ lpfc_els_flush_cmd(struct lpfc_vport *vport)
 			 */
 			if (phba->link_state == LPFC_LINK_DOWN)
 				piocb->cmd_cmpl = lpfc_cmpl_els_link_down;
-		}
-		if (ulp_command == CMD_GEN_REQUEST64_CR)
+		} else if (ulp_command == CMD_GEN_REQUEST64_CR ||
+			   mbx_tmo_err)
 			list_add_tail(&piocb->dlist, &abort_list);
 	}

@@ -9476,11 +9479,19 @@ lpfc_els_flush_cmd(struct lpfc_vport *vport)
 	list_for_each_entry_safe(piocb, tmp_iocb, &abort_list, dlist) {
 		spin_lock_irqsave(&phba->hbalock, iflags);
 		list_del_init(&piocb->dlist);
-		lpfc_sli_issue_abort_iotag(phba, pring, piocb, NULL);
+		if (mbx_tmo_err)
+			list_move_tail(&piocb->list, &cancel_list);
+		else
+			lpfc_sli_issue_abort_iotag(phba, pring, piocb, NULL);
+
 		spin_unlock_irqrestore(&phba->hbalock, iflags);
 	}
-	/* Make sure HBA is alive */
-	lpfc_issue_hb_tmo(phba);
+	if (!list_empty(&cancel_list))
+		lpfc_sli_cancel_iocbs(phba, &cancel_list, IOSTAT_LOCAL_REJECT,
+				      IOERR_SLI_ABORTED);
+	else
+		/* Make sure HBA is alive */
+		lpfc_issue_hb_tmo(phba);

 	if (!list_empty(&abort_list))
 		lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,

@@ -7563,6 +7563,8 @@ lpfc_disable_pci_dev(struct lpfc_hba *phba)
 void
 lpfc_reset_hba(struct lpfc_hba *phba)
 {
+	int rc = 0;
+
 	/* If resets are disabled then set error state and return. */
 	if (!phba->cfg_enable_hba_reset) {
 		phba->link_state = LPFC_HBA_ERROR;
@@ -7573,13 +7575,25 @@ lpfc_reset_hba(struct lpfc_hba *phba)
 	if (phba->sli.sli_flag & LPFC_SLI_ACTIVE) {
 		lpfc_offline_prep(phba, LPFC_MBX_WAIT);
 	} else {
+		if (test_bit(MBX_TMO_ERR, &phba->bit_flags)) {
+			/* Perform a PCI function reset to start from clean */
+			rc = lpfc_pci_function_reset(phba);
+			lpfc_els_flush_all_cmd(phba);
+		}
 		lpfc_offline_prep(phba, LPFC_MBX_NO_WAIT);
 		lpfc_sli_flush_io_rings(phba);
 	}
 	lpfc_offline(phba);
-	lpfc_sli_brdrestart(phba);
-	lpfc_online(phba);
-	lpfc_unblock_mgmt_io(phba);
+	clear_bit(MBX_TMO_ERR, &phba->bit_flags);
+	if (unlikely(rc)) {
+		lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
+				"8888 PCI function reset failed rc %x\n",
+				rc);
+	} else {
+		lpfc_sli_brdrestart(phba);
+		lpfc_online(phba);
+		lpfc_unblock_mgmt_io(phba);
+	}
 }

 /**

@@ -3919,6 +3919,8 @@ void lpfc_poll_eratt(struct timer_list *t)
 	uint64_t sli_intr, cnt;

 	phba = from_timer(phba, t, eratt_poll);
+	if (!(phba->hba_flag & HBA_SETUP))
+		return;

 	/* Here we will also keep track of interrupts per sec of the hba */
 	sli_intr = phba->sli.slistat.sli_intr;
@@ -7712,7 +7714,9 @@ lpfc_sli4_repost_sgl_list(struct lpfc_hba *phba,
 		spin_unlock_irq(&phba->hbalock);
 	} else {
 		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
-				"3161 Failure to post sgl to port.\n");
+				"3161 Failure to post sgl to port,status %x "
+				"blkcnt %d totalcnt %d postcnt %d\n",
+				status, block_cnt, total_cnt, post_cnt);
 		return -EIO;
 	}

@@ -8495,6 +8499,7 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
 			spin_unlock_irq(&phba->hbalock);
 		}
 	}
+	phba->hba_flag &= ~HBA_SETUP;

 	lpfc_sli4_dip(phba);

@@ -9317,6 +9322,7 @@ lpfc_mbox_timeout_handler(struct lpfc_hba *phba)
 	 * would get IOCB_ERROR from lpfc_sli_issue_iocb, allowing
 	 * it to fail all outstanding SCSI IO.
 	 */
+	set_bit(MBX_TMO_ERR, &phba->bit_flags);
 	spin_lock_irq(&phba->pport->work_port_lock);
 	phba->pport->work_port_events &= ~WORKER_MBOX_TMO;
 	spin_unlock_irq(&phba->pport->work_port_lock);
@@ -2332,7 +2332,7 @@ struct megasas_instance {
 	u32 support_morethan256jbod;	/* FW support for more than 256 PD/JBOD */
 	bool use_seqnum_jbod_fp;	/* Added for PD sequence */
 	bool smp_affinity_enable;
-	spinlock_t crashdump_lock;
+	struct mutex crashdump_lock;

 	struct megasas_register_set __iomem *reg_set;
 	u32 __iomem *reply_post_host_index_addr[MR_MAX_MSIX_REG_ARRAY];

@@ -3271,14 +3271,13 @@ fw_crash_buffer_store(struct device *cdev,
 	struct megasas_instance *instance =
 		(struct megasas_instance *) shost->hostdata;
 	int val = 0;
-	unsigned long flags;

 	if (kstrtoint(buf, 0, &val) != 0)
 		return -EINVAL;

-	spin_lock_irqsave(&instance->crashdump_lock, flags);
+	mutex_lock(&instance->crashdump_lock);
 	instance->fw_crash_buffer_offset = val;
-	spin_unlock_irqrestore(&instance->crashdump_lock, flags);
+	mutex_unlock(&instance->crashdump_lock);
 	return strlen(buf);
 }

@@ -3293,24 +3292,23 @@ fw_crash_buffer_show(struct device *cdev,
 	unsigned long dmachunk = CRASH_DMA_BUF_SIZE;
 	unsigned long chunk_left_bytes;
 	unsigned long src_addr;
-	unsigned long flags;
 	u32 buff_offset;

-	spin_lock_irqsave(&instance->crashdump_lock, flags);
+	mutex_lock(&instance->crashdump_lock);
 	buff_offset = instance->fw_crash_buffer_offset;
 	if (!instance->crash_dump_buf ||
 	    !((instance->fw_crash_state == AVAILABLE) ||
 	      (instance->fw_crash_state == COPYING))) {
 		dev_err(&instance->pdev->dev,
 			"Firmware crash dump is not available\n");
-		spin_unlock_irqrestore(&instance->crashdump_lock, flags);
+		mutex_unlock(&instance->crashdump_lock);
 		return -EINVAL;
 	}

 	if (buff_offset > (instance->fw_crash_buffer_size * dmachunk)) {
 		dev_err(&instance->pdev->dev,
 			"Firmware crash dump offset is out of range\n");
-		spin_unlock_irqrestore(&instance->crashdump_lock, flags);
+		mutex_unlock(&instance->crashdump_lock);
 		return 0;
 	}

@@ -3322,7 +3320,7 @@ fw_crash_buffer_show(struct device *cdev,
 	src_addr = (unsigned long)instance->crash_buf[buff_offset / dmachunk] +
 		(buff_offset % dmachunk);
 	memcpy(buf, (void *)src_addr, size);
-	spin_unlock_irqrestore(&instance->crashdump_lock, flags);
+	mutex_unlock(&instance->crashdump_lock);

 	return size;
 }
@@ -3347,7 +3345,6 @@ fw_crash_state_store(struct device *cdev,
 	struct megasas_instance *instance =
 		(struct megasas_instance *) shost->hostdata;
 	int val = 0;
-	unsigned long flags;

 	if (kstrtoint(buf, 0, &val) != 0)
 		return -EINVAL;
@@ -3361,9 +3358,9 @@ fw_crash_state_store(struct device *cdev,
 	instance->fw_crash_state = val;

 	if ((val == COPIED) || (val == COPY_ERROR)) {
-		spin_lock_irqsave(&instance->crashdump_lock, flags);
+		mutex_lock(&instance->crashdump_lock);
 		megasas_free_host_crash_buffer(instance);
-		spin_unlock_irqrestore(&instance->crashdump_lock, flags);
+		mutex_unlock(&instance->crashdump_lock);
 		if (val == COPY_ERROR)
 			dev_info(&instance->pdev->dev, "application failed to "
 				"copy Firmware crash dump\n");
@@ -7422,7 +7419,7 @@ static inline void megasas_init_ctrl_params(struct megasas_instance *instance)
 	init_waitqueue_head(&instance->int_cmd_wait_q);
 	init_waitqueue_head(&instance->abort_cmd_wait_q);

-	spin_lock_init(&instance->crashdump_lock);
+	mutex_init(&instance->crashdump_lock);
 	spin_lock_init(&instance->mfi_pool_lock);
 	spin_lock_init(&instance->hba_lock);
 	spin_lock_init(&instance->stream_lock);

@@ -274,7 +274,6 @@ static irqreturn_t pm8001_interrupt_handler_intx(int irq, void *dev_id)
 	return ret;
 }

-static u32 pm8001_setup_irq(struct pm8001_hba_info *pm8001_ha);
 static u32 pm8001_request_irq(struct pm8001_hba_info *pm8001_ha);

 /**
@@ -295,13 +294,6 @@ static int pm8001_alloc(struct pm8001_hba_info *pm8001_ha,
 	pm8001_dbg(pm8001_ha, INIT, "pm8001_alloc: PHY:%x\n",
 		   pm8001_ha->chip->n_phy);

-	/* Setup Interrupt */
-	rc = pm8001_setup_irq(pm8001_ha);
-	if (rc) {
-		pm8001_dbg(pm8001_ha, FAIL,
-			   "pm8001_setup_irq failed [ret: %d]\n", rc);
-		goto err_out;
-	}
 	/* Request Interrupt */
 	rc = pm8001_request_irq(pm8001_ha);
 	if (rc)
@@ -1021,47 +1013,38 @@ static u32 pm8001_request_msix(struct pm8001_hba_info *pm8001_ha)
 }
 #endif

-static u32 pm8001_setup_irq(struct pm8001_hba_info *pm8001_ha)
-{
-	struct pci_dev *pdev;
-
-	pdev = pm8001_ha->pdev;
-
-#ifdef PM8001_USE_MSIX
-	if (pci_find_capability(pdev, PCI_CAP_ID_MSIX))
-		return pm8001_setup_msix(pm8001_ha);
-	pm8001_dbg(pm8001_ha, INIT, "MSIX not supported!!!\n");
-#endif
-	return 0;
-}
-
 /**
  * pm8001_request_irq - register interrupt
  * @pm8001_ha: our ha struct.
  */
 static u32 pm8001_request_irq(struct pm8001_hba_info *pm8001_ha)
 {
-	struct pci_dev *pdev;
+	struct pci_dev *pdev = pm8001_ha->pdev;
+#ifdef PM8001_USE_MSIX
 	int rc;

-	pdev = pm8001_ha->pdev;
+	if (pci_find_capability(pdev, PCI_CAP_ID_MSIX)) {
+		rc = pm8001_setup_msix(pm8001_ha);
+		if (rc) {
+			pm8001_dbg(pm8001_ha, FAIL,
+				   "pm8001_setup_irq failed [ret: %d]\n", rc);
+			return rc;
+		}

-#ifdef PM8001_USE_MSIX
-	if (pdev->msix_cap && pci_msi_enabled())
-		return pm8001_request_msix(pm8001_ha);
-	else {
-		pm8001_dbg(pm8001_ha, INIT, "MSIX not supported!!!\n");
-		goto intx;
+		if (pdev->msix_cap && pci_msi_enabled())
+			return pm8001_request_msix(pm8001_ha);
 	}
+
+	pm8001_dbg(pm8001_ha, INIT, "MSIX not supported!!!\n");
 #endif

-intx:
 	/* initialize the INT-X interrupt */
 	pm8001_ha->irq_vector[0].irq_id = 0;
 	pm8001_ha->irq_vector[0].drv_inst = pm8001_ha;
-	rc = request_irq(pdev->irq, pm8001_interrupt_handler_intx, IRQF_SHARED,
-		pm8001_ha->name, SHOST_TO_SAS_HA(pm8001_ha->shost));
-	return rc;
+
+	return request_irq(pdev->irq, pm8001_interrupt_handler_intx,
+			   IRQF_SHARED, pm8001_ha->name,
+			   SHOST_TO_SAS_HA(pm8001_ha->shost));
 }

 /**

@@ -116,7 +116,7 @@ qla2x00_dfs_create_rport(scsi_qla_host_t *vha, struct fc_port *fp)

 	sprintf(wwn, "pn-%016llx", wwn_to_u64(fp->port_name));
 	fp->dfs_rport_dir = debugfs_create_dir(wwn, vha->dfs_rport_root);
-	if (!fp->dfs_rport_dir)
+	if (IS_ERR(fp->dfs_rport_dir))
 		return;
 	if (NVME_TARGET(vha->hw, fp))
 		debugfs_create_file("dev_loss_tmo", 0600, fp->dfs_rport_dir,
@@ -708,14 +708,14 @@ qla2x00_dfs_setup(scsi_qla_host_t *vha)
 	if (IS_QLA27XX(ha) || IS_QLA83XX(ha) || IS_QLA28XX(ha)) {
 		ha->tgt.dfs_naqp = debugfs_create_file("naqp",
 		    0400, ha->dfs_dir, vha, &dfs_naqp_ops);
-		if (!ha->tgt.dfs_naqp) {
+		if (IS_ERR(ha->tgt.dfs_naqp)) {
 			ql_log(ql_log_warn, vha, 0xd011,
 			    "Unable to create debugFS naqp node.\n");
 			goto out;
 		}
 	}
 	vha->dfs_rport_root = debugfs_create_dir("rports", ha->dfs_dir);
-	if (!vha->dfs_rport_root) {
+	if (IS_ERR(vha->dfs_rport_root)) {
 		ql_log(ql_log_warn, vha, 0xd012,
 		    "Unable to create debugFS rports node.\n");
 		goto out;

@@ -533,102 +533,102 @@ static ssize_t lio_target_nacl_info_show(struct config_item *item, char *page)
 	spin_lock_bh(&se_nacl->nacl_sess_lock);
 	se_sess = se_nacl->nacl_sess;
 	if (!se_sess) {
-		rb += sprintf(page+rb, "No active iSCSI Session for Initiator"
+		rb += sysfs_emit_at(page, rb, "No active iSCSI Session for Initiator"
 			" Endpoint: %s\n", se_nacl->initiatorname);
 	} else {
 		sess = se_sess->fabric_sess_ptr;

-		rb += sprintf(page+rb, "InitiatorName: %s\n",
+		rb += sysfs_emit_at(page, rb, "InitiatorName: %s\n",
 			sess->sess_ops->InitiatorName);
-		rb += sprintf(page+rb, "InitiatorAlias: %s\n",
+		rb += sysfs_emit_at(page, rb, "InitiatorAlias: %s\n",
 			sess->sess_ops->InitiatorAlias);

-		rb += sprintf(page+rb,
+		rb += sysfs_emit_at(page, rb,
 			      "LIO Session ID: %u   ISID: 0x%6ph  TSIH: %hu  ",
 			      sess->sid, sess->isid, sess->tsih);
-		rb += sprintf(page+rb, "SessionType: %s\n",
+		rb += sysfs_emit_at(page, rb, "SessionType: %s\n",
 				(sess->sess_ops->SessionType) ?
 				"Discovery" : "Normal");
-		rb += sprintf(page+rb, "Session State: ");
+		rb += sysfs_emit_at(page, rb, "Session State: ");
 		switch (sess->session_state) {
 		case TARG_SESS_STATE_FREE:
-			rb += sprintf(page+rb, "TARG_SESS_FREE\n");
+			rb += sysfs_emit_at(page, rb, "TARG_SESS_FREE\n");
 			break;
 		case TARG_SESS_STATE_ACTIVE:
-			rb += sprintf(page+rb, "TARG_SESS_STATE_ACTIVE\n");
+			rb += sysfs_emit_at(page, rb, "TARG_SESS_STATE_ACTIVE\n");
 			break;
 		case TARG_SESS_STATE_LOGGED_IN:
-			rb += sprintf(page+rb, "TARG_SESS_STATE_LOGGED_IN\n");
+			rb += sysfs_emit_at(page, rb, "TARG_SESS_STATE_LOGGED_IN\n");
 			break;
 		case TARG_SESS_STATE_FAILED:
-			rb += sprintf(page+rb, "TARG_SESS_STATE_FAILED\n");
+			rb += sysfs_emit_at(page, rb, "TARG_SESS_STATE_FAILED\n");
 			break;
 		case TARG_SESS_STATE_IN_CONTINUE:
-			rb += sprintf(page+rb, "TARG_SESS_STATE_IN_CONTINUE\n");
+			rb += sysfs_emit_at(page, rb, "TARG_SESS_STATE_IN_CONTINUE\n");
 			break;
 		default:
-			rb += sprintf(page+rb, "ERROR: Unknown Session"
+			rb += sysfs_emit_at(page, rb, "ERROR: Unknown Session"
 					" State!\n");
 			break;
 		}

-		rb += sprintf(page+rb, "---------------------[iSCSI Session"
+		rb += sysfs_emit_at(page, rb, "---------------------[iSCSI Session"
 				" Values]-----------------------\n");
-		rb += sprintf(page+rb, "  CmdSN/WR  :  CmdSN/WC  :  ExpCmdSN"
+		rb += sysfs_emit_at(page, rb, "  CmdSN/WR  :  CmdSN/WC  :  ExpCmdSN"
 				"  :  MaxCmdSN  :     ITT    :     TTT\n");
 		max_cmd_sn = (u32) atomic_read(&sess->max_cmd_sn);
-		rb += sprintf(page+rb, " 0x%08x   0x%08x   0x%08x   0x%08x"
+		rb += sysfs_emit_at(page, rb, " 0x%08x   0x%08x   0x%08x   0x%08x"
 				"   0x%08x   0x%08x\n",
 			sess->cmdsn_window,
 			(max_cmd_sn - sess->exp_cmd_sn) + 1,
 			sess->exp_cmd_sn, max_cmd_sn,
 			sess->init_task_tag, sess->targ_xfer_tag);
-		rb += sprintf(page+rb, "----------------------[iSCSI"
+		rb += sysfs_emit_at(page, rb, "----------------------[iSCSI"
 				" Connections]-------------------------\n");

 		spin_lock(&sess->conn_lock);
 		list_for_each_entry(conn, &sess->sess_conn_list, conn_list) {
-			rb += sprintf(page+rb, "CID: %hu   Connection"
+			rb += sysfs_emit_at(page, rb, "CID: %hu   Connection"
 					" State: ", conn->cid);
 			switch (conn->conn_state) {
 			case TARG_CONN_STATE_FREE:
-				rb += sprintf(page+rb,
+				rb += sysfs_emit_at(page, rb,
 					"TARG_CONN_STATE_FREE\n");
 				break;
 			case TARG_CONN_STATE_XPT_UP:
-				rb += sprintf(page+rb,
+				rb += sysfs_emit_at(page, rb,
 					"TARG_CONN_STATE_XPT_UP\n");
 				break;
 			case TARG_CONN_STATE_IN_LOGIN:
-				rb += sprintf(page+rb,
+				rb += sysfs_emit_at(page, rb,
 					"TARG_CONN_STATE_IN_LOGIN\n");
 				break;
 			case TARG_CONN_STATE_LOGGED_IN:
-				rb += sprintf(page+rb,
+				rb += sysfs_emit_at(page, rb,
 					"TARG_CONN_STATE_LOGGED_IN\n");
 				break;
 			case TARG_CONN_STATE_IN_LOGOUT:
-				rb += sprintf(page+rb,
+				rb += sysfs_emit_at(page, rb,
 					"TARG_CONN_STATE_IN_LOGOUT\n");
|
||||
break;
|
||||
case TARG_CONN_STATE_LOGOUT_REQUESTED:
|
||||
rb += sprintf(page+rb,
|
||||
rb += sysfs_emit_at(page, rb,
|
||||
"TARG_CONN_STATE_LOGOUT_REQUESTED\n");
|
||||
break;
|
||||
case TARG_CONN_STATE_CLEANUP_WAIT:
|
||||
rb += sprintf(page+rb,
|
||||
rb += sysfs_emit_at(page, rb,
|
||||
"TARG_CONN_STATE_CLEANUP_WAIT\n");
|
||||
break;
|
||||
default:
|
||||
rb += sprintf(page+rb,
|
||||
rb += sysfs_emit_at(page, rb,
|
||||
"ERROR: Unknown Connection State!\n");
|
||||
break;
|
||||
}
|
||||
|
||||
rb += sprintf(page+rb, " Address %pISc %s", &conn->login_sockaddr,
|
||||
rb += sysfs_emit_at(page, rb, " Address %pISc %s", &conn->login_sockaddr,
|
||||
(conn->network_transport == ISCSI_TCP) ?
|
||||
"TCP" : "SCTP");
|
||||
rb += sprintf(page+rb, " StatSN: 0x%08x\n",
|
||||
rb += sysfs_emit_at(page, rb, " StatSN: 0x%08x\n",
|
||||
conn->stat_sn);
|
||||
}
|
||||
spin_unlock(&sess->conn_lock);
|
||||
|
@@ -264,6 +264,7 @@ void target_free_cmd_counter(struct target_cmd_counter *cmd_cnt)
 	percpu_ref_put(&cmd_cnt->refcnt);
 
+	percpu_ref_exit(&cmd_cnt->refcnt);
 	kfree(cmd_cnt);
 }
 EXPORT_SYMBOL_GPL(target_free_cmd_counter);
@@ -1257,19 +1257,14 @@ static void cpm_uart_console_write(struct console *co, const char *s,
 {
 	struct uart_cpm_port *pinfo = &cpm_uart_ports[co->index];
 	unsigned long flags;
-	int nolock = oops_in_progress;
 
-	if (unlikely(nolock)) {
+	if (unlikely(oops_in_progress)) {
 		local_irq_save(flags);
-	} else {
-		spin_lock_irqsave(&pinfo->port.lock, flags);
-	}
-
-	cpm_uart_early_write(pinfo, s, count, true);
-
-	if (unlikely(nolock)) {
+		cpm_uart_early_write(pinfo, s, count, true);
 		local_irq_restore(flags);
 	} else {
+		spin_lock_irqsave(&pinfo->port.lock, flags);
+		cpm_uart_early_write(pinfo, s, count, true);
 		spin_unlock_irqrestore(&pinfo->port.lock, flags);
 	}
 }
@@ -256,9 +256,10 @@ static int cdns3_controller_resume(struct device *dev, pm_message_t msg)
 	cdns3_set_platform_suspend(cdns->dev, false, false);
 
 	spin_lock_irqsave(&cdns->lock, flags);
-	cdns_resume(cdns, !PMSG_IS_AUTO(msg));
+	cdns_resume(cdns);
 	cdns->in_lpm = false;
 	spin_unlock_irqrestore(&cdns->lock, flags);
+	cdns_set_active(cdns, !PMSG_IS_AUTO(msg));
 	if (cdns->wakeup_pending) {
 		cdns->wakeup_pending = false;
 		enable_irq(cdns->wakeup_irq);
@@ -210,8 +210,9 @@ static int __maybe_unused cdnsp_pci_resume(struct device *dev)
 	int ret;
 
 	spin_lock_irqsave(&cdns->lock, flags);
-	ret = cdns_resume(cdns, 1);
+	ret = cdns_resume(cdns);
 	spin_unlock_irqrestore(&cdns->lock, flags);
+	cdns_set_active(cdns, 1);
 
 	return ret;
}