This is the 5.4.43 stable release
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAl7Oi20ACgkQONu9yGCS
aT4ipBAA1Kqh2mLEcDBISubrU4CuOl/iHmkCXyF1FeF9+vJKz25whbfYO/FNYweP
2HYxGyuqLTQ0OnsfrXeEoImlxdAcWp3TjAFPgJdonLBvnVDmvlPe6Pzk1NRPhvce
zU/Y1leE+LoQ7xDfICPJ9BwuwwYTRzRqMQHmIuVlsHLSiN+rextPj6vkzD+7h4ux
i9VKoDvzmWuLrHmc9RYNoGxuZ5tGogBaCxI8tnzHGcm21bNVvsKZiANQ2J+6G2bJ
sJwqq5tH2gZ6cJxmJ1tVyMbXLIJanNKLeBC5sDQN4rss9pU4gtyEARqVG+9RlglQ
FeSlBuoaISJYYejo6aSH7nw81bTQrXexd0sH94qYqnqPlZo+OXN8vxHTaIapYEfd
fjqyEblZXqpnMNVQcZOxbrYaefuIrZ9Q8pWUFTwVj34P8RNJLBIvg5gy2dlRvHbC
PGLJewOXySZaXVpD5gFU349L32d4QPw9MmMU5php+LOl4idN8RlVY0pOaUuO0idH
ewO+6vijLgHq/5HBO6BBToRlNUvLauoUeAaQwoHfPiuuYnGGFCZ9GEjPRsHnCBok
IAKQ2Uj+IqlMy7gKVtG1ryekil7TVktrZQ1JBokRLWQPZiED84r7P1lQqPaH/4f4
GFFRhx3tekJs4LMMUEaUR019Q9ZcQMWkikT1/HpVOYUjQd55pc4=
=jmiq
-----END PGP SIGNATURE-----

Merge 5.4.43 into android-5.4-stable

Changes in 5.4.43
	i2c: dev: Fix the race between the release of i2c_dev and cdev
	KVM: SVM: Fix potential memory leak in svm_cpu_init()
	ima: Set file->f_mode instead of file->f_flags in ima_calc_file_hash()
	evm: Check also if *tfm is an error pointer in init_desc()
	ima: Fix return value of ima_write_policy()
	ubifs: fix wrong use of crypto_shash_descsize()
	ACPI: EC: PM: Avoid flushing EC work when EC GPE is inactive
	mtd: spinand: Propagate ECC information to the MTD structure
	fix multiplication overflow in copy_fdtable()
	ubifs: remove broken lazytime support
	i2c: fix missing pm_runtime_put_sync in i2c_device_probe
	iommu/amd: Fix over-read of ACPI UID from IVRS table
	evm: Fix a small race in init_desc()
	i2c: mux: demux-pinctrl: Fix an error handling path in 'i2c_demux_pinctrl_probe()'
	ubi: Fix seq_file usage in detailed_erase_block_info debugfs file
	afs: Don't unlock fetched data pages until the op completes successfully
	mtd: Fix mtd not registered due to nvmem name collision
	kbuild: avoid concurrency issue in parallel building dtbs and dtbs_check
	net: drop_monitor: use IS_REACHABLE() to guard net_dm_hw_report()
	gcc-common.h: Update for GCC 10
	HID: multitouch: add eGalaxTouch P80H84 support
	HID: alps: Add AUI1657 device ID
	HID: alps: ALPS_1657 is too specific; use U1_UNICORN_LEGACY instead
	scsi: qla2xxx: Fix hang when issuing nvme disconnect-all in NPIV
	scsi: qla2xxx: Delete all sessions before unregister local nvme port
	configfs: fix config_item refcnt leak in configfs_rmdir()
	vhost/vsock: fix packet delivery order to monitoring devices
	aquantia: Fix the media type of AQC100 ethernet controller in the driver
	component: Silence bind error on -EPROBE_DEFER
	net/ena: Fix build warning in ena_xdp_set()
	scsi: ibmvscsi: Fix WARN_ON during event pool release
	HID: i2c-hid: reset Synaptics SYNA2393 on resume
	x86/mm/cpa: Flush direct map alias during cpa
	ibmvnic: Skip fatal error reset after passive init
	ftrace/selftest: make unresolved cases cause failure if --fail-unresolved set
	x86/apic: Move TSC deadline timer debug printk
	gtp: set NLM_F_MULTI flag in gtp_genl_dump_pdp()
	HID: quirks: Add HID_QUIRK_NO_INIT_REPORTS quirk for Dell K12A keyboard-dock
	ceph: fix double unlock in handle_cap_export()
	stmmac: fix pointer check after utilization in stmmac_interrupt
	USB: core: Fix misleading driver bug report
	platform/x86: asus-nb-wmi: Do not load on Asus T100TA and T200TA
	iommu/amd: Call domain_flush_complete() in update_domain()
	drm/amd/display: Prevent dpcd reads with passive dongles
	KVM: selftests: Fix build for evmcs.h
	ARM: futex: Address build warning
	scripts/gdb: repair rb_first() and rb_last()
	ALSA: hda - constify and cleanup static NodeID tables
	ALSA: hda: patch_realtek: fix empty macro usage in if block
	ALSA: hda: Manage concurrent reg access more properly
	ALSA: hda/realtek - Add supported new mute Led for HP
	ALSA: hda/realtek - Add HP new mute led supported for ALC236
	ALSA: hda/realtek: Add quirk for Samsung Notebook
	ALSA: hda/realtek - Enable headset mic of ASUS GL503VM with ALC295
	ALSA: hda/realtek - Enable headset mic of ASUS UX550GE with ALC295
	ALSA: hda/realtek: Enable headset mic of ASUS UX581LV with ALC295
	KVM: x86: Fix pkru save/restore when guest CR4.PKE=0, move it to x86.c
	ALSA: iec1712: Initialize STDSP24 properly when using the model=staudio option
	ALSA: pcm: fix incorrect hw_base increase
	ALSA: hda/realtek - Fix silent output on Gigabyte X570 Aorus Xtreme
	ALSA: hda/realtek - Add more fixup entries for Clevo machines
	scsi: qla2xxx: Do not log message when reading port speed via sysfs
	scsi: target: Put lun_ref at end of tmr processing
	arm64: Fix PTRACE_SYSEMU semantics
	drm/etnaviv: fix perfmon domain interation
	apparmor: Fix use-after-free in aa_audit_rule_init
	apparmor: fix potential label refcnt leak in aa_change_profile
	apparmor: Fix aa_label refcnt leak in policy_update
	dmaengine: tegra210-adma: Fix an error handling path in 'tegra_adma_probe()'
	drm/etnaviv: Fix a leak in submit_pin_objects()
	dmaengine: dmatest: Restore default for channel
	dmaengine: owl: Use correct lock in owl_dma_get_pchan()
	vsprintf: don't obfuscate NULL and error pointers
	drm/i915/gvt: Init DPLL/DDI vreg for virtual display instead of inheritance.
	drm/i915: Propagate error from completed fences
	powerpc: Remove STRICT_KERNEL_RWX incompatibility with RELOCATABLE
	powerpc/64s: Disable STRICT_KERNEL_RWX
	bpf: Avoid setting bpf insns pages read-only when prog is jited
	kbuild: Remove debug info from kallsyms linking
	Revert "gfs2: Don't demote a glock until its revokes are written"
	media: fdp1: Fix R-Car M3-N naming in debug message
	staging: iio: ad2s1210: Fix SPI reading
	staging: kpc2000: fix error return code in kp2000_pcie_probe()
	staging: greybus: Fix uninitialized scalar variable
	iio: sca3000: Remove an erroneous 'get_device()'
	iio: dac: vf610: Fix an error handling path in 'vf610_dac_probe()'
	iio: adc: ti-ads8344: Fix channel selection
	misc: rtsx: Add short delay after exit from ASPM
	tty: serial: add missing spin_lock_init for SiFive serial console
	mei: release me_cl object reference
	ipack: tpci200: fix error return code in tpci200_register()
	s390/pci: Fix s390_mmio_read/write with MIO
	s390/kaslr: add support for R_390_JMP_SLOT relocation type
	device-dax: don't leak kernel memory to user space after unloading kmem
	rapidio: fix an error in get_user_pages_fast() error handling
	kasan: disable branch tracing for core runtime
	rxrpc: Fix the excessive initial retransmission timeout
	rxrpc: Fix a memory leak in rxkad_verify_response()
	s390/kexec_file: fix initrd location for kdump kernel
	flow_dissector: Drop BPF flow dissector prog ref on netns cleanup
	x86/unwind/orc: Fix unwind_get_return_address_ptr() for inactive tasks
	iio: adc: stm32-adc: Use dma_request_chan() instead dma_request_slave_channel()
	iio: adc: stm32-adc: fix device used to request dma
	iio: adc: stm32-dfsdm: Use dma_request_chan() instead dma_request_slave_channel()
	iio: adc: stm32-dfsdm: fix device used to request dma
	rxrpc: Trace discarded ACKs
	rxrpc: Fix ack discard
	tpm: check event log version before reading final events
	sched/fair: Reorder enqueue/dequeue_task_fair path
	sched/fair: Fix reordering of enqueue/dequeue_task_fair()
	sched/fair: Fix enqueue_task_fair() warning some more
	Linux 5.4.43

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I1582df67569f34c4455c482ed0eaf10fc1a34e03
commit f7b4f375c7

Makefile | 10 lines changed
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 4
-SUBLEVEL = 42
+SUBLEVEL = 43
 EXTRAVERSION =
 NAME = Kleptomaniac Octopus
@@ -1313,11 +1313,15 @@ ifneq ($(dtstree),)
 	$(Q)$(MAKE) $(build)=$(dtstree) $(dtstree)/$@
 
 PHONY += dtbs dtbs_install dtbs_check
-dtbs dtbs_check: include/config/kernel.release scripts_dtc
+dtbs: include/config/kernel.release scripts_dtc
 	$(Q)$(MAKE) $(build)=$(dtstree)
 
+ifneq ($(filter dtbs_check, $(MAKECMDGOALS)),)
+dtbs: dt_binding_check
+endif
+
 dtbs_check: export CHECK_DTBS=1
-dtbs_check: dt_binding_check
+dtbs_check: dtbs
 
 dtbs_install:
 	$(Q)$(MAKE) $(dtbinst)=$(dtstree)
 
@@ -164,8 +164,13 @@ arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *uaddr)
 	preempt_enable();
 #endif
 
-	if (!ret)
-		*oval = oldval;
+	/*
+	 * Store unconditionally. If ret != 0 the extra store is the least
+	 * of the worries but GCC cannot figure out that __futex_atomic_op()
+	 * is either setting ret to -EFAULT or storing the old value in
+	 * oldval which results in a uninitialized warning at the call site.
+	 */
+	*oval = oldval;
 
 	return ret;
 }
@@ -1829,10 +1829,11 @@ static void tracehook_report_syscall(struct pt_regs *regs,
 
 int syscall_trace_enter(struct pt_regs *regs)
 {
-	if (test_thread_flag(TIF_SYSCALL_TRACE) ||
-	    test_thread_flag(TIF_SYSCALL_EMU)) {
+	unsigned long flags = READ_ONCE(current_thread_info()->flags);
+
+	if (flags & (_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE)) {
 		tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER);
-		if (!in_syscall(regs) || test_thread_flag(TIF_SYSCALL_EMU))
+		if (!in_syscall(regs) || (flags & _TIF_SYSCALL_EMU))
 			return -1;
 	}
 
@@ -133,7 +133,7 @@ config PPC
 	select ARCH_HAS_PTE_SPECIAL
 	select ARCH_HAS_MEMBARRIER_CALLBACKS
 	select ARCH_HAS_SCALED_CPUTIME		if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
-	select ARCH_HAS_STRICT_KERNEL_RWX	if ((PPC_BOOK3S_64 || PPC32) && !RELOCATABLE && !HIBERNATION)
+	select ARCH_HAS_STRICT_KERNEL_RWX	if (PPC32 && !HIBERNATION)
 	select ARCH_HAS_TICK_BROADCAST		if GENERIC_CLOCKEVENTS_BROADCAST
 	select ARCH_HAS_UACCESS_FLUSHCACHE
 	select ARCH_HAS_UACCESS_MCSAFE		if PPC64
@@ -8,6 +8,10 @@
 #include <linux/slab.h>
 #include <asm/pci_insn.h>
 
+/* I/O size constraints */
+#define ZPCI_MAX_READ_SIZE	8
+#define ZPCI_MAX_WRITE_SIZE	128
+
 /* I/O Map */
 #define ZPCI_IOMAP_SHIFT		48
 #define ZPCI_IOMAP_ADDR_BASE		0x8000000000000000UL
@@ -140,7 +144,8 @@ static inline int zpci_memcpy_fromio(void *dst,
 
 	while (n > 0) {
 		size = zpci_get_max_write_size((u64 __force) src,
-					       (u64) dst, n, 8);
+					       (u64) dst, n,
+					       ZPCI_MAX_READ_SIZE);
 		rc = zpci_read_single(dst, src, size);
 		if (rc)
 			break;
@@ -161,7 +166,8 @@ static inline int zpci_memcpy_toio(volatile void __iomem *dst,
 
 	while (n > 0) {
 		size = zpci_get_max_write_size((u64 __force) dst,
-					       (u64) src, n, 128);
+					       (u64) src, n,
+					       ZPCI_MAX_WRITE_SIZE);
 		if (size > 8) /* main path */
 			rc = zpci_write_block(dst, src, size);
 		else
@@ -151,7 +151,7 @@ static int kexec_file_add_initrd(struct kimage *image,
 		buf.mem += crashk_res.start;
 	buf.memsz = buf.bufsz;
 
-	data->parm->initrd_start = buf.mem;
+	data->parm->initrd_start = data->memsz;
 	data->parm->initrd_size = buf.memsz;
 	data->memsz += buf.memsz;
 
@@ -28,6 +28,7 @@ int arch_kexec_do_relocs(int r_type, void *loc, unsigned long val,
 		break;
 	case R_390_64:		/* Direct 64 bit.  */
 	case R_390_GLOB_DAT:
+	case R_390_JMP_SLOT:
 		*(u64 *)loc = val;
 		break;
 	case R_390_PC16:	/* PC relative 16 bit.	*/
@@ -11,6 +11,113 @@
 #include <linux/mm.h>
 #include <linux/errno.h>
 #include <linux/pci.h>
+#include <asm/pci_io.h>
+#include <asm/pci_debug.h>
+
+static inline void zpci_err_mmio(u8 cc, u8 status, u64 offset)
+{
+	struct {
+		u64 offset;
+		u8 cc;
+		u8 status;
+	} data = {offset, cc, status};
+
+	zpci_err_hex(&data, sizeof(data));
+}
+
+static inline int __pcistb_mio_inuser(
+		void __iomem *ioaddr, const void __user *src,
+		u64 len, u8 *status)
+{
+	int cc = -ENXIO;
+
+	asm volatile (
+		" sacf 256\n"
+		"0: .insn rsy,0xeb00000000d4,%[len],%[ioaddr],%[src]\n"
+		"1: ipm %[cc]\n"
+		" srl %[cc],28\n"
+		"2: sacf 768\n"
+		EX_TABLE(0b, 2b) EX_TABLE(1b, 2b)
+		: [cc] "+d" (cc), [len] "+d" (len)
+		: [ioaddr] "a" (ioaddr), [src] "Q" (*((u8 __force *)src))
+		: "cc", "memory");
+	*status = len >> 24 & 0xff;
+	return cc;
+}
+
+static inline int __pcistg_mio_inuser(
+		void __iomem *ioaddr, const void __user *src,
+		u64 ulen, u8 *status)
+{
+	register u64 addr asm("2") = (u64 __force) ioaddr;
+	register u64 len asm("3") = ulen;
+	int cc = -ENXIO;
+	u64 val = 0;
+	u64 cnt = ulen;
+	u8 tmp;
+
+	/*
+	 * copy 0 < @len <= 8 bytes from @src into the right most bytes of
+	 * a register, then store it to PCI at @ioaddr while in secondary
+	 * address space. pcistg then uses the user mappings.
+	 */
+	asm volatile (
+		" sacf 256\n"
+		"0: llgc %[tmp],0(%[src])\n"
+		" sllg %[val],%[val],8\n"
+		" aghi %[src],1\n"
+		" ogr %[val],%[tmp]\n"
+		" brctg %[cnt],0b\n"
+		"1: .insn rre,0xb9d40000,%[val],%[ioaddr]\n"
+		"2: ipm %[cc]\n"
+		" srl %[cc],28\n"
+		"3: sacf 768\n"
+		EX_TABLE(0b, 3b) EX_TABLE(1b, 3b) EX_TABLE(2b, 3b)
+		:
+		[src] "+a" (src), [cnt] "+d" (cnt),
+		[val] "+d" (val), [tmp] "=d" (tmp),
+		[len] "+d" (len), [cc] "+d" (cc),
+		[ioaddr] "+a" (addr)
+		:: "cc", "memory");
+	*status = len >> 24 & 0xff;
+
+	/* did we read everything from user memory? */
+	if (!cc && cnt != 0)
+		cc = -EFAULT;
+
+	return cc;
+}
+
+static inline int __memcpy_toio_inuser(void __iomem *dst,
+				const void __user *src, size_t n)
+{
+	int size, rc = 0;
+	u8 status = 0;
+	mm_segment_t old_fs;
+
+	if (!src)
+		return -EINVAL;
+
+	old_fs = enable_sacf_uaccess();
+	while (n > 0) {
+		size = zpci_get_max_write_size((u64 __force) dst,
+					       (u64 __force) src, n,
+					       ZPCI_MAX_WRITE_SIZE);
+		if (size > 8) /* main path */
+			rc = __pcistb_mio_inuser(dst, src, size, &status);
+		else
+			rc = __pcistg_mio_inuser(dst, src, size, &status);
+		if (rc)
+			break;
+		src += size;
+		dst += size;
+		n -= size;
+	}
+	disable_sacf_uaccess(old_fs);
+	if (rc)
+		zpci_err_mmio(rc, status, (__force u64) dst);
+	return rc;
+}
+
 static long get_pfn(unsigned long user_addr, unsigned long access,
 		    unsigned long *pfn)
@@ -46,6 +153,20 @@ SYSCALL_DEFINE3(s390_pci_mmio_write, unsigned long, mmio_addr,
 
 	if (length <= 0 || PAGE_SIZE - (mmio_addr & ~PAGE_MASK) < length)
 		return -EINVAL;
+
+	/*
+	 * Only support read access to MIO capable devices on a MIO enabled
+	 * system. Otherwise we would have to check for every address if it is
+	 * a special ZPCI_ADDR and we would have to do a get_pfn() which we
+	 * don't need for MIO capable devices.
+	 */
+	if (static_branch_likely(&have_mio)) {
+		ret = __memcpy_toio_inuser((void __iomem *) mmio_addr,
+					user_buffer,
+					length);
+		return ret;
+	}
 
 	if (length > 64) {
 		buf = kmalloc(length, GFP_KERNEL);
 		if (!buf)
@@ -56,7 +177,8 @@ SYSCALL_DEFINE3(s390_pci_mmio_write, unsigned long, mmio_addr,
 	ret = get_pfn(mmio_addr, VM_WRITE, &pfn);
 	if (ret)
 		goto out;
-	io_addr = (void __iomem *)((pfn << PAGE_SHIFT) | (mmio_addr & ~PAGE_MASK));
+	io_addr = (void __iomem *)((pfn << PAGE_SHIFT) |
+			(mmio_addr & ~PAGE_MASK));
 
 	ret = -EFAULT;
 	if ((unsigned long) io_addr < ZPCI_IOMAP_ADDR_BASE)
@@ -72,6 +194,78 @@ SYSCALL_DEFINE3(s390_pci_mmio_write, unsigned long, mmio_addr,
 	return ret;
 }
 
+static inline int __pcilg_mio_inuser(
+		void __user *dst, const void __iomem *ioaddr,
+		u64 ulen, u8 *status)
+{
+	register u64 addr asm("2") = (u64 __force) ioaddr;
+	register u64 len asm("3") = ulen;
+	u64 cnt = ulen;
+	int shift = ulen * 8;
+	int cc = -ENXIO;
+	u64 val, tmp;
+
+	/*
+	 * read 0 < @len <= 8 bytes from the PCI memory mapped at @ioaddr (in
+	 * user space) into a register using pcilg then store these bytes at
+	 * user address @dst
+	 */
+	asm volatile (
+		" sacf 256\n"
+		"0: .insn rre,0xb9d60000,%[val],%[ioaddr]\n"
+		"1: ipm %[cc]\n"
+		" srl %[cc],28\n"
+		" ltr %[cc],%[cc]\n"
+		" jne 4f\n"
+		"2: ahi %[shift],-8\n"
+		" srlg %[tmp],%[val],0(%[shift])\n"
+		"3: stc %[tmp],0(%[dst])\n"
+		" aghi %[dst],1\n"
+		" brctg %[cnt],2b\n"
+		"4: sacf 768\n"
+		EX_TABLE(0b, 4b) EX_TABLE(1b, 4b) EX_TABLE(3b, 4b)
+		:
+		[cc] "+d" (cc), [val] "=d" (val), [len] "+d" (len),
+		[dst] "+a" (dst), [cnt] "+d" (cnt), [tmp] "=d" (tmp),
+		[shift] "+d" (shift)
+		:
+		[ioaddr] "a" (addr)
+		: "cc", "memory");
+
+	/* did we write everything to the user space buffer? */
+	if (!cc && cnt != 0)
+		cc = -EFAULT;
+
+	*status = len >> 24 & 0xff;
+	return cc;
+}
+
+static inline int __memcpy_fromio_inuser(void __user *dst,
+				const void __iomem *src,
+				unsigned long n)
+{
+	int size, rc = 0;
+	u8 status;
+	mm_segment_t old_fs;
+
+	old_fs = enable_sacf_uaccess();
+	while (n > 0) {
+		size = zpci_get_max_write_size((u64 __force) src,
+					       (u64 __force) dst, n,
+					       ZPCI_MAX_READ_SIZE);
+		rc = __pcilg_mio_inuser(dst, src, size, &status);
+		if (rc)
+			break;
+		src += size;
+		dst += size;
+		n -= size;
+	}
+	disable_sacf_uaccess(old_fs);
+	if (rc)
+		zpci_err_mmio(rc, status, (__force u64) dst);
+	return rc;
+}
+
 SYSCALL_DEFINE3(s390_pci_mmio_read, unsigned long, mmio_addr,
 		void __user *, user_buffer, size_t, length)
 {
@@ -86,12 +280,27 @@ SYSCALL_DEFINE3(s390_pci_mmio_read, unsigned long, mmio_addr,
 
 	if (length <= 0 || PAGE_SIZE - (mmio_addr & ~PAGE_MASK) < length)
 		return -EINVAL;
+
+	/*
+	 * Only support write access to MIO capable devices on a MIO enabled
+	 * system. Otherwise we would have to check for every address if it is
+	 * a special ZPCI_ADDR and we would have to do a get_pfn() which we
+	 * don't need for MIO capable devices.
+	 */
+	if (static_branch_likely(&have_mio)) {
+		ret = __memcpy_fromio_inuser(
+				user_buffer, (const void __iomem *)mmio_addr,
+				length);
+		return ret;
+	}
 
 	if (length > 64) {
 		buf = kmalloc(length, GFP_KERNEL);
 		if (!buf)
 			return -ENOMEM;
-	} else
+	} else {
 		buf = local_buf;
+	}
 
 	ret = get_pfn(mmio_addr, VM_READ, &pfn);
 	if (ret)
@@ -550,6 +550,7 @@ struct kvm_vcpu_arch {
 	unsigned long cr4;
 	unsigned long cr4_guest_owned_bits;
 	unsigned long cr8;
+	u32 host_pkru;
 	u32 pkru;
 	u32 hflags;
 	u64 efer;
@@ -352,8 +352,6 @@ static void __setup_APIC_LVTT(unsigned int clocks, int oneshot, int irqen)
 	 * According to Intel, MFENCE can do the serialization here.
 	 */
 	asm volatile("mfence" : : : "memory");
-
-	printk_once(KERN_DEBUG "TSC deadline timer enabled\n");
 	return;
 }
 
@@ -552,7 +550,7 @@ static DEFINE_PER_CPU(struct clock_event_device, lapic_events);
 #define DEADLINE_MODEL_MATCH_REV(model, rev)	\
 	{ X86_VENDOR_INTEL, 6, model, X86_FEATURE_ANY, (unsigned long)rev }
 
-static u32 hsx_deadline_rev(void)
+static __init u32 hsx_deadline_rev(void)
 {
 	switch (boot_cpu_data.x86_stepping) {
 	case 0x02: return 0x3a; /* EP */
@@ -562,7 +560,7 @@ static u32 hsx_deadline_rev(void)
 	return ~0U;
 }
 
-static u32 bdx_deadline_rev(void)
+static __init u32 bdx_deadline_rev(void)
 {
 	switch (boot_cpu_data.x86_stepping) {
 	case 0x02: return 0x00000011;
@@ -574,7 +572,7 @@ static u32 bdx_deadline_rev(void)
 	return ~0U;
 }
 
-static u32 skx_deadline_rev(void)
+static __init u32 skx_deadline_rev(void)
 {
 	switch (boot_cpu_data.x86_stepping) {
 	case 0x03: return 0x01000136;
@@ -587,7 +585,7 @@ static u32 skx_deadline_rev(void)
 	return ~0U;
 }
 
-static const struct x86_cpu_id deadline_match[] = {
+static const struct x86_cpu_id deadline_match[] __initconst = {
 	DEADLINE_MODEL_MATCH_FUNC( INTEL_FAM6_HASWELL_X,	hsx_deadline_rev),
 	DEADLINE_MODEL_MATCH_REV ( INTEL_FAM6_BROADWELL_X,	0x0b000020),
 	DEADLINE_MODEL_MATCH_FUNC( INTEL_FAM6_BROADWELL_D,	bdx_deadline_rev),
@@ -609,18 +607,19 @@ static const struct x86_cpu_id deadline_match[] = {
 	{},
 };
 
-static void apic_check_deadline_errata(void)
+static __init bool apic_validate_deadline_timer(void)
 {
 	const struct x86_cpu_id *m;
 	u32 rev;
 
-	if (!boot_cpu_has(X86_FEATURE_TSC_DEADLINE_TIMER) ||
-	    boot_cpu_has(X86_FEATURE_HYPERVISOR))
-		return;
+	if (!boot_cpu_has(X86_FEATURE_TSC_DEADLINE_TIMER))
+		return false;
+	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
+		return true;
 
 	m = x86_match_cpu(deadline_match);
 	if (!m)
-		return;
+		return true;
 
 	/*
 	 * Function pointers will have the MSB set due to address layout,
@@ -632,11 +631,12 @@ static void apic_check_deadline_errata(void)
 	rev = (u32)m->driver_data;
 
 	if (boot_cpu_data.microcode >= rev)
-		return;
+		return true;
 
 	setup_clear_cpu_cap(X86_FEATURE_TSC_DEADLINE_TIMER);
 	pr_err(FW_BUG "TSC_DEADLINE disabled due to Errata; "
 	       "please update microcode to version: 0x%x (or later)\n", rev);
+
+	return false;
 }
 
 /*
@@ -2098,7 +2098,8 @@ void __init init_apic_mappings(void)
 {
 	unsigned int new_apicid;
 
-	apic_check_deadline_errata();
+	if (apic_validate_deadline_timer())
+		pr_debug("TSC deadline timer available\n");
 
 	if (x2apic_mode) {
 		boot_cpu_physical_apicid = read_apic_id();
@@ -311,12 +311,19 @@ EXPORT_SYMBOL_GPL(unwind_get_return_address);
 
 unsigned long *unwind_get_return_address_ptr(struct unwind_state *state)
 {
+	struct task_struct *task = state->task;
+
 	if (unwind_done(state))
 		return NULL;
 
 	if (state->regs)
 		return &state->regs->ip;
 
+	if (task != current && state->sp == task->thread.sp) {
+		struct inactive_task_frame *frame = (void *)task->thread.sp;
+		return &frame->ret_addr;
+	}
+
 	if (state->sp)
 		return (unsigned long *)state->sp - 1;
 
@@ -998,33 +998,32 @@ static void svm_cpu_uninit(int cpu)
 static int svm_cpu_init(int cpu)
 {
 	struct svm_cpu_data *sd;
-	int r;
 
 	sd = kzalloc(sizeof(struct svm_cpu_data), GFP_KERNEL);
 	if (!sd)
 		return -ENOMEM;
 	sd->cpu = cpu;
-	r = -ENOMEM;
 	sd->save_area = alloc_page(GFP_KERNEL);
 	if (!sd->save_area)
-		goto err_1;
+		goto free_cpu_data;
 
 	if (svm_sev_enabled()) {
-		r = -ENOMEM;
 		sd->sev_vmcbs = kmalloc_array(max_sev_asid + 1,
 					      sizeof(void *),
 					      GFP_KERNEL);
 		if (!sd->sev_vmcbs)
-			goto err_1;
+			goto free_save_area;
 	}
 
 	per_cpu(svm_data, cpu) = sd;
 
 	return 0;
 
-err_1:
+free_save_area:
+	__free_page(sd->save_area);
+free_cpu_data:
 	kfree(sd);
-	return r;
-
+	return -ENOMEM;
 }
 
@@ -1360,7 +1360,6 @@ void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 
 	vmx_vcpu_pi_load(vcpu, cpu);
 
-	vmx->host_pkru = read_pkru();
 	vmx->host_debugctlmsr = get_debugctlmsr();
 }
 
@@ -6521,11 +6520,6 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 
 	kvm_load_guest_xcr0(vcpu);
 
-	if (static_cpu_has(X86_FEATURE_PKU) &&
-	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
-	    vcpu->arch.pkru != vmx->host_pkru)
-		__write_pkru(vcpu->arch.pkru);
-
 	pt_guest_enter(vmx);
 
 	atomic_switch_perf_msrs(vmx);
@@ -6614,18 +6608,6 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
 
 	pt_guest_exit(vmx);
 
-	/*
-	 * eager fpu is enabled if PKEY is supported and CR4 is switched
-	 * back on host, so it is safe to read guest PKRU from current
-	 * XSAVE.
-	 */
-	if (static_cpu_has(X86_FEATURE_PKU) &&
-	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE)) {
-		vcpu->arch.pkru = rdpkru();
-		if (vcpu->arch.pkru != vmx->host_pkru)
-			__write_pkru(vmx->host_pkru);
-	}
-
 	kvm_put_guest_xcr0(vcpu);
 
 	vmx->nested.nested_run_pending = 0;
@@ -832,11 +832,25 @@ void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu)
 			xsetbv(XCR_XFEATURE_ENABLED_MASK, vcpu->arch.xcr0);
 		vcpu->guest_xcr0_loaded = 1;
 	}
+
+	if (static_cpu_has(X86_FEATURE_PKU) &&
+	    (kvm_read_cr4_bits(vcpu, X86_CR4_PKE) ||
+	     (vcpu->arch.xcr0 & XFEATURE_MASK_PKRU)) &&
+	    vcpu->arch.pkru != vcpu->arch.host_pkru)
+		__write_pkru(vcpu->arch.pkru);
 }
 EXPORT_SYMBOL_GPL(kvm_load_guest_xcr0);
 
 void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu)
 {
+	if (static_cpu_has(X86_FEATURE_PKU) &&
+	    (kvm_read_cr4_bits(vcpu, X86_CR4_PKE) ||
+	     (vcpu->arch.xcr0 & XFEATURE_MASK_PKRU))) {
+		vcpu->arch.pkru = rdpkru();
+		if (vcpu->arch.pkru != vcpu->arch.host_pkru)
+			__write_pkru(vcpu->arch.host_pkru);
+	}
+
 	if (vcpu->guest_xcr0_loaded) {
 		if (vcpu->arch.xcr0 != host_xcr0)
 			xsetbv(XCR_XFEATURE_ENABLED_MASK, host_xcr0);
@@ -8222,6 +8236,9 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	trace_kvm_entry(vcpu->vcpu_id);
 	guest_enter_irqoff();
 
+	/* Save host pkru register if supported */
+	vcpu->arch.host_pkru = read_pkru();
+
 	fpregs_assert_state_consistent();
 	if (test_thread_flag(TIF_NEED_FPU_LOAD))
 		switch_fpu_return();
@@ -42,7 +42,8 @@ struct cpa_data {
 	unsigned long	pfn;
 	unsigned int	flags;
 	unsigned int	force_split		: 1,
-			force_static_prot	: 1;
+			force_static_prot	: 1,
+			force_flush_all		: 1;
 	struct page	**pages;
 };
 
@@ -352,10 +353,10 @@ static void cpa_flush(struct cpa_data *data, int cache)
 		return;
 	}
 
-	if (cpa->numpages <= tlb_single_page_flush_ceiling)
-		on_each_cpu(__cpa_flush_tlb, cpa, 1);
-	else
+	if (cpa->force_flush_all || cpa->numpages > tlb_single_page_flush_ceiling)
 		flush_tlb_all();
+	else
+		on_each_cpu(__cpa_flush_tlb, cpa, 1);
 
 	if (!cache)
 		return;
@@ -1584,6 +1585,8 @@ static int cpa_process_alias(struct cpa_data *cpa)
 		alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY);
 		alias_cpa.curpage = 0;
 
+		cpa->force_flush_all = 1;
+
 		ret = __change_page_attr_set_clr(&alias_cpa, 0);
 		if (ret)
 			return ret;
@@ -1604,6 +1607,7 @@ static int cpa_process_alias(struct cpa_data *cpa)
 		alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY);
 		alias_cpa.curpage = 0;
 
+		cpa->force_flush_all = 1;
 		/*
 		 * The high mapping range is imprecise, so ignore the
 		 * return value.
@@ -1984,9 +1984,13 @@ bool acpi_ec_dispatch_gpe(void)
 	 * to allow the caller to process events properly after that.
 	 */
 	ret = acpi_dispatch_gpe(NULL, first_ec->gpe);
-	if (ret == ACPI_INTERRUPT_HANDLED)
+	if (ret == ACPI_INTERRUPT_HANDLED) {
 		pm_pr_dbg("EC GPE dispatched\n");
 
+		/* Flush the event and query workqueues. */
+		acpi_ec_flush_work();
+	}
+
 	return false;
 }
 #endif /* CONFIG_PM_SLEEP */
@@ -977,13 +977,6 @@ static int acpi_s2idle_prepare_late(void)
 	return 0;
 }
 
-static void acpi_s2idle_sync(void)
-{
-	/* The EC driver uses special workqueues that need to be flushed. */
-	acpi_ec_flush_work();
-	acpi_os_wait_events_complete(); /* synchronize Notify handling */
-}
-
 static bool acpi_s2idle_wake(void)
 {
 	if (!acpi_sci_irq_valid())
@@ -1015,7 +1008,7 @@ static bool acpi_s2idle_wake(void)
 			return true;
 
 		/*
-		 * Cancel the wakeup and process all pending events in case
+		 * Cancel the SCI wakeup and process all pending events in case
 		 * there are any wakeup ones in there.
 		 *
 		 * Note that if any non-EC GPEs are active at this point, the
@@ -1023,8 +1016,7 @@ static bool acpi_s2idle_wake(void)
 		 * should be missed by canceling the wakeup here.
 		 */
 		pm_system_cancel_wakeup();
-
-		acpi_s2idle_sync();
+		acpi_os_wait_events_complete();
 
 		/*
 		 * The SCI is in the "suspended" state now and it cannot produce
@@ -1057,7 +1049,8 @@ static void acpi_s2idle_restore(void)
 	 * of GPEs.
 	 */
 	acpi_os_wait_events_complete(); /* synchronize GPE processing */
-	acpi_s2idle_sync();
+	acpi_ec_flush_work(); /* flush the EC driver's workqueues */
+	acpi_os_wait_events_complete(); /* synchronize Notify handling */
 
 	s2idle_wakeup = false;
 
@@ -257,7 +257,8 @@ static int try_to_bring_up_master(struct master *master,
 	ret = master->ops->bind(master->dev);
 	if (ret < 0) {
 		devres_release_group(master->dev, NULL);
-		dev_info(master->dev, "master bind failed: %d\n", ret);
+		if (ret != -EPROBE_DEFER)
+			dev_info(master->dev, "master bind failed: %d\n", ret);
 		return ret;
 	}
 
@@ -611,8 +612,9 @@ static int component_bind(struct component *component, struct master *master,
 		devres_release_group(component->dev, NULL);
 		devres_release_group(master->dev, NULL);
 
-		dev_err(master->dev, "failed to bind %s (ops %ps): %d\n",
-			dev_name(component->dev), component->ops, ret);
+		if (ret != -EPROBE_DEFER)
+			dev_err(master->dev, "failed to bind %s (ops %ps): %d\n",
+				dev_name(component->dev), component->ops, ret);
 	}
 
 	return ret;
@@ -22,6 +22,7 @@ int dev_dax_kmem_probe(struct device *dev)
 	resource_size_t kmem_size;
 	resource_size_t kmem_end;
 	struct resource *new_res;
+	const char *new_res_name;
 	int numa_node;
 	int rc;
 
@@ -48,11 +49,16 @@ int dev_dax_kmem_probe(struct device *dev)
 	kmem_size &= ~(memory_block_size_bytes() - 1);
 	kmem_end = kmem_start + kmem_size;
 
-	/* Region is permanently reserved.  Hot-remove not yet implemented. */
-	new_res = request_mem_region(kmem_start, kmem_size, dev_name(dev));
+	new_res_name = kstrdup(dev_name(dev), GFP_KERNEL);
+	if (!new_res_name)
+		return -ENOMEM;
+
+	/* Region is permanently reserved if hotremove fails. */
+	new_res = request_mem_region(kmem_start, kmem_size, new_res_name);
 	if (!new_res) {
 		dev_warn(dev, "could not reserve region [%pa-%pa]\n",
 			 &kmem_start, &kmem_end);
+		kfree(new_res_name);
 		return -EBUSY;
 	}
 
@@ -63,12 +69,12 @@ int dev_dax_kmem_probe(struct device *dev)
 	 * unknown to us that will break add_memory() below.
 	 */
 	new_res->flags = IORESOURCE_SYSTEM_RAM;
-	new_res->name = dev_name(dev);
 
 	rc = add_memory(numa_node, new_res->start, resource_size(new_res));
 	if (rc) {
 		release_resource(new_res);
 		kfree(new_res);
+		kfree(new_res_name);
 		return rc;
 	}
 	dev_dax->dax_kmem_res = new_res;
@@ -83,6 +89,7 @@ static int dev_dax_kmem_remove(struct device *dev)
 	struct resource *res = dev_dax->dax_kmem_res;
 	resource_size_t kmem_start = res->start;
 	resource_size_t kmem_size = resource_size(res);
+	const char *res_name = res->name;
 	int rc;
 
 	/*
@@ -102,6 +109,7 @@ static int dev_dax_kmem_remove(struct device *dev)
 	/* Release and free dax resources */
 	release_resource(res);
 	kfree(res);
+	kfree(res_name);
 	dev_dax->dax_kmem_res = NULL;
 
 	return 0;
@@ -1166,10 +1166,11 @@ static int dmatest_run_set(const char *val, const struct kernel_param *kp)
 		mutex_unlock(&info->lock);
 		return ret;
 	} else if (dmatest_run) {
-		if (is_threaded_test_pending(info))
-			start_threaded_tests(info);
-		else
-			pr_info("Could not start test, no channels configured\n");
+		if (!is_threaded_test_pending(info)) {
+			pr_info("No channels configured, continue with any\n");
+			add_threaded_test(info);
+		}
+		start_threaded_tests(info);
 	} else {
 		stop_threaded_test(info);
 	}
@@ -175,13 +175,11 @@ struct owl_dma_txd {
  * @id: physical index to this channel
  * @base: virtual memory base for the dma channel
  * @vchan: the virtual channel currently being served by this physical channel
- * @lock: a lock to use when altering an instance of this struct
  */
struct owl_dma_pchan {
 	u32 id;
 	void __iomem *base;
 	struct owl_dma_vchan *vchan;
-	spinlock_t lock;
 };

 /**

@@ -437,14 +435,14 @@ static struct owl_dma_pchan *owl_dma_get_pchan(struct owl_dma *od,
 	for (i = 0; i < od->nr_pchans; i++) {
 		pchan = &od->pchans[i];

-		spin_lock_irqsave(&pchan->lock, flags);
+		spin_lock_irqsave(&od->lock, flags);
 		if (!pchan->vchan) {
 			pchan->vchan = vchan;
-			spin_unlock_irqrestore(&pchan->lock, flags);
+			spin_unlock_irqrestore(&od->lock, flags);
 			break;
 		}

-		spin_unlock_irqrestore(&pchan->lock, flags);
+		spin_unlock_irqrestore(&od->lock, flags);
 	}

 	return pchan;
@@ -900,7 +900,7 @@ static int tegra_adma_probe(struct platform_device *pdev)
 	ret = dma_async_device_register(&tdma->dma_dev);
 	if (ret < 0) {
 		dev_err(&pdev->dev, "ADMA registration failed: %d\n", ret);
-		goto irq_dispose;
+		goto rpm_put;
 	}

 	ret = of_dma_controller_register(pdev->dev.of_node,
@@ -64,7 +64,7 @@ void efi_retrieve_tpm2_eventlog(efi_system_table_t *sys_table_arg)
 	efi_status_t status;
 	efi_physical_addr_t log_location = 0, log_last_entry = 0;
 	struct linux_efi_tpm_eventlog *log_tbl = NULL;
-	struct efi_tcg2_final_events_table *final_events_table;
+	struct efi_tcg2_final_events_table *final_events_table = NULL;
 	unsigned long first_entry_addr, last_entry_addr;
 	size_t log_size, last_entry_size;
 	efi_bool_t truncated;

@@ -140,7 +140,8 @@ void efi_retrieve_tpm2_eventlog(efi_system_table_t *sys_table_arg)
 	 * Figure out whether any events have already been logged to the
 	 * final events structure, and if so how much space they take up
 	 */
-	final_events_table = get_efi_config_table(sys_table_arg,
-						  LINUX_EFI_TPM_FINAL_LOG_GUID);
+	if (version == EFI_TCG2_EVENT_LOG_FORMAT_TCG_2)
+		final_events_table = get_efi_config_table(sys_table_arg,
+						  LINUX_EFI_TPM_FINAL_LOG_GUID);
 	if (final_events_table && final_events_table->nr_events) {
 		struct tcg_pcr_event2_head *header;
@@ -62,8 +62,11 @@ int __init efi_tpm_eventlog_init(void)
 	tbl_size = sizeof(*log_tbl) + log_tbl->size;
 	memblock_reserve(efi.tpm_log, tbl_size);

-	if (efi.tpm_final_log == EFI_INVALID_TABLE_ADDR)
+	if (efi.tpm_final_log == EFI_INVALID_TABLE_ADDR ||
+	    log_tbl->version != EFI_TCG2_EVENT_LOG_FORMAT_TCG_2) {
+		pr_warn(FW_BUG "TPM Final Events table missing or invalid\n");
 		goto out;
+	}

 	final_tbl = early_memremap(efi.tpm_final_log, sizeof(*final_tbl));
@@ -1422,17 +1422,22 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
 		dc_sink_retain(aconnector->dc_sink);
 		if (sink->dc_edid.length == 0) {
 			aconnector->edid = NULL;
-			drm_dp_cec_unset_edid(&aconnector->dm_dp_aux.aux);
+			if (aconnector->dc_link->aux_mode) {
+				drm_dp_cec_unset_edid(
+					&aconnector->dm_dp_aux.aux);
+			}
 		} else {
 			aconnector->edid =
-				(struct edid *) sink->dc_edid.raw_edid;
-
+				(struct edid *)sink->dc_edid.raw_edid;

 			drm_connector_update_edid_property(connector,
-							   aconnector->edid);
-			drm_dp_cec_set_edid(&aconnector->dm_dp_aux.aux,
-					    aconnector->edid);
+							   aconnector->edid);
+
+			if (aconnector->dc_link->aux_mode)
+				drm_dp_cec_set_edid(&aconnector->dm_dp_aux.aux,
+						    aconnector->edid);
 		}

 		amdgpu_dm_update_freesync_caps(connector, aconnector->edid);

 	} else {
@@ -240,8 +240,10 @@ static int submit_pin_objects(struct etnaviv_gem_submit *submit)
 		}

 		if ((submit->flags & ETNA_SUBMIT_SOFTPIN) &&
-		    submit->bos[i].va != mapping->iova)
+		    submit->bos[i].va != mapping->iova) {
+			etnaviv_gem_mapping_unreference(mapping);
 			return -EINVAL;
+		}

 		atomic_inc(&etnaviv_obj->gpu_active);
@@ -453,7 +453,7 @@ static const struct etnaviv_pm_domain *pm_domain(const struct etnaviv_gpu *gpu,
 		if (!(gpu->identity.features & meta->feature))
 			continue;

-		if (meta->nr_domains < (index - offset)) {
+		if (index - offset >= meta->nr_domains) {
 			offset += meta->nr_domains;
 			continue;
 		}
@@ -207,14 +207,41 @@ static void emulate_monitor_status_change(struct intel_vgpu *vgpu)
 			SKL_FUSE_PG_DIST_STATUS(SKL_PG0) |
 			SKL_FUSE_PG_DIST_STATUS(SKL_PG1) |
 			SKL_FUSE_PG_DIST_STATUS(SKL_PG2);
-		vgpu_vreg_t(vgpu, LCPLL1_CTL) |=
-			LCPLL_PLL_ENABLE |
-			LCPLL_PLL_LOCK;
-		vgpu_vreg_t(vgpu, LCPLL2_CTL) |= LCPLL_PLL_ENABLE;
-
+		/*
+		 * Only 1 PIPE enabled in current vGPU display and PIPE_A is
+		 * tied to TRANSCODER_A in HW, so it's safe to assume PIPE_A,
+		 * TRANSCODER_A can be enabled. PORT_x depends on the input of
+		 * setup_virtual_dp_monitor, we can bind DPLL0 to any PORT_x
+		 * so we fixed to DPLL0 here.
+		 * Setup DPLL0: DP link clk 1620 MHz, non SSC, DP Mode
+		 */
+		vgpu_vreg_t(vgpu, DPLL_CTRL1) =
+			DPLL_CTRL1_OVERRIDE(DPLL_ID_SKL_DPLL0);
+		vgpu_vreg_t(vgpu, DPLL_CTRL1) |=
+			DPLL_CTRL1_LINK_RATE(DPLL_CTRL1_LINK_RATE_1620, DPLL_ID_SKL_DPLL0);
+		vgpu_vreg_t(vgpu, LCPLL1_CTL) =
+			LCPLL_PLL_ENABLE | LCPLL_PLL_LOCK;
+		vgpu_vreg_t(vgpu, DPLL_STATUS) = DPLL_LOCK(DPLL_ID_SKL_DPLL0);
+		/*
+		 * Golden M/N are calculated based on:
+		 * 24 bpp, 4 lanes, 154000 pixel clk (from virtual EDID),
+		 * DP link clk 1620 MHz and non-constant_n.
+		 * TODO: calculate DP link symbol clk and stream clk m/n.
+		 */
+		vgpu_vreg_t(vgpu, PIPE_DATA_M1(TRANSCODER_A)) = 63 << TU_SIZE_SHIFT;
+		vgpu_vreg_t(vgpu, PIPE_DATA_M1(TRANSCODER_A)) |= 0x5b425e;
+		vgpu_vreg_t(vgpu, PIPE_DATA_N1(TRANSCODER_A)) = 0x800000;
+		vgpu_vreg_t(vgpu, PIPE_LINK_M1(TRANSCODER_A)) = 0x3cd6e;
+		vgpu_vreg_t(vgpu, PIPE_LINK_N1(TRANSCODER_A)) = 0x80000;
 	}

 	if (intel_vgpu_has_monitor_on_port(vgpu, PORT_B)) {
+		vgpu_vreg_t(vgpu, DPLL_CTRL2) &=
+			~DPLL_CTRL2_DDI_CLK_OFF(PORT_B);
+		vgpu_vreg_t(vgpu, DPLL_CTRL2) |=
+			DPLL_CTRL2_DDI_CLK_SEL(DPLL_ID_SKL_DPLL0, PORT_B);
+		vgpu_vreg_t(vgpu, DPLL_CTRL2) |=
+			DPLL_CTRL2_DDI_SEL_OVERRIDE(PORT_B);
 		vgpu_vreg_t(vgpu, SFUSE_STRAP) |= SFUSE_STRAP_DDIB_DETECTED;
 		vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_A)) &=
 			~(TRANS_DDI_BPC_MASK | TRANS_DDI_MODE_SELECT_MASK |

@@ -235,6 +262,12 @@ static void emulate_monitor_status_change(struct intel_vgpu *vgpu)
 	}

 	if (intel_vgpu_has_monitor_on_port(vgpu, PORT_C)) {
+		vgpu_vreg_t(vgpu, DPLL_CTRL2) &=
+			~DPLL_CTRL2_DDI_CLK_OFF(PORT_C);
+		vgpu_vreg_t(vgpu, DPLL_CTRL2) |=
+			DPLL_CTRL2_DDI_CLK_SEL(DPLL_ID_SKL_DPLL0, PORT_C);
+		vgpu_vreg_t(vgpu, DPLL_CTRL2) |=
+			DPLL_CTRL2_DDI_SEL_OVERRIDE(PORT_C);
 		vgpu_vreg_t(vgpu, SDEISR) |= SDE_PORTC_HOTPLUG_CPT;
 		vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_A)) &=
 			~(TRANS_DDI_BPC_MASK | TRANS_DDI_MODE_SELECT_MASK |

@@ -255,6 +288,12 @@ static void emulate_monitor_status_change(struct intel_vgpu *vgpu)
 	}

 	if (intel_vgpu_has_monitor_on_port(vgpu, PORT_D)) {
+		vgpu_vreg_t(vgpu, DPLL_CTRL2) &=
+			~DPLL_CTRL2_DDI_CLK_OFF(PORT_D);
+		vgpu_vreg_t(vgpu, DPLL_CTRL2) |=
+			DPLL_CTRL2_DDI_CLK_SEL(DPLL_ID_SKL_DPLL0, PORT_D);
+		vgpu_vreg_t(vgpu, DPLL_CTRL2) |=
+			DPLL_CTRL2_DDI_SEL_OVERRIDE(PORT_D);
 		vgpu_vreg_t(vgpu, SDEISR) |= SDE_PORTD_HOTPLUG_CPT;
 		vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_A)) &=
 			~(TRANS_DDI_BPC_MASK | TRANS_DDI_MODE_SELECT_MASK |
@@ -894,8 +894,10 @@ i915_request_await_request(struct i915_request *to, struct i915_request *from)
 	GEM_BUG_ON(to == from);
 	GEM_BUG_ON(to->timeline == from->timeline);

-	if (i915_request_completed(from))
+	if (i915_request_completed(from)) {
+		i915_sw_fence_set_error_once(&to->submit, from->fence.error);
 		return 0;
+	}

 	if (to->engine->schedule) {
 		ret = i915_sched_node_add_dependency(&to->sched, &from->sched);
@@ -802,6 +802,7 @@ static int alps_probe(struct hid_device *hdev, const struct hid_device_id *id)
 		break;
 	case HID_DEVICE_ID_ALPS_U1_DUAL:
 	case HID_DEVICE_ID_ALPS_U1:
+	case HID_DEVICE_ID_ALPS_U1_UNICORN_LEGACY:
 		data->dev_type = U1;
 		break;
 	default:
@@ -79,10 +79,10 @@
 #define HID_DEVICE_ID_ALPS_U1_DUAL_PTP	0x121F
 #define HID_DEVICE_ID_ALPS_U1_DUAL_3BTN_PTP	0x1220
 #define HID_DEVICE_ID_ALPS_U1		0x1215
+#define HID_DEVICE_ID_ALPS_U1_UNICORN_LEGACY		0x121E
 #define HID_DEVICE_ID_ALPS_T4_BTNLESS	0x120C
 #define HID_DEVICE_ID_ALPS_1222		0x1222
-

 #define USB_VENDOR_ID_AMI		0x046b
 #define USB_DEVICE_ID_AMI_VIRT_KEYBOARD_AND_MOUSE	0xff10

@@ -385,6 +385,7 @@
 #define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_7349	0x7349
 #define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_73F7	0x73f7
 #define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_A001	0xa001
+#define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_C002	0xc002

 #define USB_VENDOR_ID_ELAN		0x04f3
 #define USB_DEVICE_ID_TOSHIBA_CLICK_L9W	0x0401

@@ -1091,6 +1092,9 @@
 #define USB_DEVICE_ID_SYMBOL_SCANNER_2	0x1300
 #define USB_DEVICE_ID_SYMBOL_SCANNER_3	0x1200

+#define I2C_VENDOR_ID_SYNAPTICS	0x06cb
+#define I2C_PRODUCT_ID_SYNAPTICS_SYNA2393	0x7a13
+
 #define USB_VENDOR_ID_SYNAPTICS		0x06cb
 #define USB_DEVICE_ID_SYNAPTICS_TP	0x0001
 #define USB_DEVICE_ID_SYNAPTICS_INT_TP	0x0002

@@ -1105,6 +1109,7 @@
 #define USB_DEVICE_ID_SYNAPTICS_LTS2	0x1d10
 #define USB_DEVICE_ID_SYNAPTICS_HD	0x0ac3
 #define USB_DEVICE_ID_SYNAPTICS_QUAD_HD	0x1ac3
+#define USB_DEVICE_ID_SYNAPTICS_DELL_K12A	0x2819
 #define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_012	0x2968
 #define USB_DEVICE_ID_SYNAPTICS_TP_V103	0x5710
 #define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5	0x81a7
@@ -1922,6 +1922,9 @@ static const struct hid_device_id mt_devices[] = {
 	{ .driver_data = MT_CLS_EGALAX_SERIAL,
 		MT_USB_DEVICE(USB_VENDOR_ID_DWAV,
 			USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_A001) },
+	{ .driver_data = MT_CLS_EGALAX,
+		MT_USB_DEVICE(USB_VENDOR_ID_DWAV,
+			USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_C002) },

 	/* Elitegroup panel */
 	{ .driver_data = MT_CLS_SERIAL,
@@ -163,6 +163,7 @@ static const struct hid_device_id hid_quirks[] = {
 	{ HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_LTS2), HID_QUIRK_NO_INIT_REPORTS },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_QUAD_HD), HID_QUIRK_NO_INIT_REPORTS },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_TP_V103), HID_QUIRK_NO_INIT_REPORTS },
+	{ HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_DELL_K12A), HID_QUIRK_NO_INIT_REPORTS },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_TOPMAX, USB_DEVICE_ID_TOPMAX_COBRAPAD), HID_QUIRK_BADPAD },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_TOUCHPACK, USB_DEVICE_ID_TOUCHPACK_RTS), HID_QUIRK_MULTI_INPUT },
 	{ HID_USB_DEVICE(USB_VENDOR_ID_TPV, USB_DEVICE_ID_TPV_OPTICAL_TOUCHSCREEN_8882), HID_QUIRK_NOGET },
@@ -179,6 +179,8 @@ static const struct i2c_hid_quirks {
 		 I2C_HID_QUIRK_BOGUS_IRQ },
 	{ USB_VENDOR_ID_ALPS_JP, HID_ANY_ID,
 		 I2C_HID_QUIRK_RESET_ON_RESUME },
+	{ I2C_VENDOR_ID_SYNAPTICS, I2C_PRODUCT_ID_SYNAPTICS_SYNA2393,
+		 I2C_HID_QUIRK_RESET_ON_RESUME },
 	{ USB_VENDOR_ID_ITE, I2C_DEVICE_ID_ITE_LENOVO_LEGION_Y720,
 		I2C_HID_QUIRK_BAD_INPUT_SIZE },
 	{ 0, 0 }
@@ -338,8 +338,10 @@ static int i2c_device_probe(struct device *dev)
 		} else if (ACPI_COMPANION(dev)) {
 			irq = i2c_acpi_get_irq(client);
 		}
-		if (irq == -EPROBE_DEFER)
-			return irq;
+		if (irq == -EPROBE_DEFER) {
+			status = irq;
+			goto put_sync_adapter;
+		}

 		if (irq < 0)
 			irq = 0;

@@ -353,15 +355,19 @@ static int i2c_device_probe(struct device *dev)
 	 */
 	if (!driver->id_table &&
 	    !i2c_acpi_match_device(dev->driver->acpi_match_table, client) &&
-	    !i2c_of_match_device(dev->driver->of_match_table, client))
-		return -ENODEV;
+	    !i2c_of_match_device(dev->driver->of_match_table, client)) {
+		status = -ENODEV;
+		goto put_sync_adapter;
+	}

 	if (client->flags & I2C_CLIENT_WAKE) {
 		int wakeirq;

 		wakeirq = of_irq_get_byname(dev->of_node, "wakeup");
-		if (wakeirq == -EPROBE_DEFER)
-			return wakeirq;
+		if (wakeirq == -EPROBE_DEFER) {
+			status = wakeirq;
+			goto put_sync_adapter;
+		}

 		device_init_wakeup(&client->dev, true);

@@ -408,6 +414,10 @@ static int i2c_device_probe(struct device *dev)
 err_clear_wakeup_irq:
 	dev_pm_clear_wake_irq(&client->dev);
 	device_init_wakeup(&client->dev, false);
+put_sync_adapter:
+	if (client->flags & I2C_CLIENT_HOST_NOTIFY)
+		pm_runtime_put_sync(&client->adapter->dev);

 	return status;
 }
@@ -40,7 +40,7 @@
 struct i2c_dev {
 	struct list_head list;
 	struct i2c_adapter *adap;
-	struct device *dev;
+	struct device dev;
 	struct cdev cdev;
 };

@@ -84,12 +84,14 @@ static struct i2c_dev *get_free_i2c_dev(struct i2c_adapter *adap)
 	return i2c_dev;
 }

-static void put_i2c_dev(struct i2c_dev *i2c_dev)
+static void put_i2c_dev(struct i2c_dev *i2c_dev, bool del_cdev)
 {
 	spin_lock(&i2c_dev_list_lock);
 	list_del(&i2c_dev->list);
 	spin_unlock(&i2c_dev_list_lock);
-	kfree(i2c_dev);
+	if (del_cdev)
+		cdev_device_del(&i2c_dev->cdev, &i2c_dev->dev);
+	put_device(&i2c_dev->dev);
 }

 static ssize_t name_show(struct device *dev,

@@ -628,6 +630,14 @@ static const struct file_operations i2cdev_fops = {

 static struct class *i2c_dev_class;

+static void i2cdev_dev_release(struct device *dev)
+{
+	struct i2c_dev *i2c_dev;
+
+	i2c_dev = container_of(dev, struct i2c_dev, dev);
+	kfree(i2c_dev);
+}
+
 static int i2cdev_attach_adapter(struct device *dev, void *dummy)
 {
 	struct i2c_adapter *adap;

@@ -644,27 +654,23 @@ static int i2cdev_attach_adapter(struct device *dev, void *dummy)

 	cdev_init(&i2c_dev->cdev, &i2cdev_fops);
 	i2c_dev->cdev.owner = THIS_MODULE;
-	res = cdev_add(&i2c_dev->cdev, MKDEV(I2C_MAJOR, adap->nr), 1);
-	if (res)
-		goto error_cdev;
-
-	/* register this i2c device with the driver core */
-	i2c_dev->dev = device_create(i2c_dev_class, &adap->dev,
-				     MKDEV(I2C_MAJOR, adap->nr), NULL,
-				     "i2c-%d", adap->nr);
-	if (IS_ERR(i2c_dev->dev)) {
-		res = PTR_ERR(i2c_dev->dev);
-		goto error;
+
+	device_initialize(&i2c_dev->dev);
+	i2c_dev->dev.devt = MKDEV(I2C_MAJOR, adap->nr);
+	i2c_dev->dev.class = i2c_dev_class;
+	i2c_dev->dev.parent = &adap->dev;
+	i2c_dev->dev.release = i2cdev_dev_release;
+	dev_set_name(&i2c_dev->dev, "i2c-%d", adap->nr);
+
+	res = cdev_device_add(&i2c_dev->cdev, &i2c_dev->dev);
+	if (res) {
+		put_i2c_dev(i2c_dev, false);
+		return res;
 	}

 	pr_debug("i2c-dev: adapter [%s] registered as minor %d\n",
 		 adap->name, adap->nr);
 	return 0;
-error:
-	cdev_del(&i2c_dev->cdev);
-error_cdev:
-	put_i2c_dev(i2c_dev);
-	return res;
 }

@@ -680,9 +686,7 @@ static int i2cdev_detach_adapter(struct device *dev, void *dummy)
 	if (!i2c_dev) /* attach_adapter must have failed */
 		return 0;

-	cdev_del(&i2c_dev->cdev);
-	put_i2c_dev(i2c_dev);
-	device_destroy(i2c_dev_class, MKDEV(I2C_MAJOR, adap->nr));
+	put_i2c_dev(i2c_dev, true);

 	pr_debug("i2c-dev: adapter [%s] unregistered\n", adap->name);
 	return 0;
@@ -272,6 +272,7 @@ static int i2c_demux_pinctrl_probe(struct platform_device *pdev)
 err_rollback_available:
 	device_remove_file(&pdev->dev, &dev_attr_available_masters);
 err_rollback:
+	i2c_demux_deactivate_master(priv);
 	for (j = 0; j < i; j++) {
 		of_node_put(priv->chan[j].parent_np);
 		of_changeset_destroy(&priv->chan[j].chgset);
@@ -980,7 +980,7 @@ static int sca3000_read_data(struct sca3000_state *st,
 	st->tx[0] = SCA3000_READ_REG(reg_address_high);
 	ret = spi_sync_transfer(st->us, xfer, ARRAY_SIZE(xfer));
 	if (ret) {
-		dev_err(get_device(&st->us->dev), "problem reading register");
+		dev_err(&st->us->dev, "problem reading register\n");
 		return ret;
 	}
@@ -1757,15 +1757,27 @@ static int stm32_adc_chan_of_init(struct iio_dev *indio_dev)
 	return 0;
 }

-static int stm32_adc_dma_request(struct iio_dev *indio_dev)
+static int stm32_adc_dma_request(struct device *dev, struct iio_dev *indio_dev)
 {
 	struct stm32_adc *adc = iio_priv(indio_dev);
 	struct dma_slave_config config;
 	int ret;

-	adc->dma_chan = dma_request_slave_channel(&indio_dev->dev, "rx");
-	if (!adc->dma_chan)
+	adc->dma_chan = dma_request_chan(dev, "rx");
+	if (IS_ERR(adc->dma_chan)) {
+		ret = PTR_ERR(adc->dma_chan);
+		if (ret != -ENODEV) {
+			if (ret != -EPROBE_DEFER)
+				dev_err(dev,
+					"DMA channel request failed with %d\n",
+					ret);
+			return ret;
+		}
+
+		/* DMA is optional: fall back to IRQ mode */
+		adc->dma_chan = NULL;
 		return 0;
+	}

 	adc->rx_buf = dma_alloc_coherent(adc->dma_chan->device->dev,
 					 STM32_DMA_BUFFER_SIZE,

@@ -1862,7 +1874,7 @@ static int stm32_adc_probe(struct platform_device *pdev)
 	if (ret < 0)
 		return ret;

-	ret = stm32_adc_dma_request(indio_dev);
+	ret = stm32_adc_dma_request(dev, indio_dev);
 	if (ret < 0)
 		return ret;
@@ -62,7 +62,7 @@ enum sd_converter_type {

 struct stm32_dfsdm_dev_data {
 	int type;
-	int (*init)(struct iio_dev *indio_dev);
+	int (*init)(struct device *dev, struct iio_dev *indio_dev);
 	unsigned int num_channels;
 	const struct regmap_config *regmap_cfg;
 };

@@ -1359,13 +1359,18 @@ static void stm32_dfsdm_dma_release(struct iio_dev *indio_dev)
 	}
 }

-static int stm32_dfsdm_dma_request(struct iio_dev *indio_dev)
+static int stm32_dfsdm_dma_request(struct device *dev,
+				   struct iio_dev *indio_dev)
 {
 	struct stm32_dfsdm_adc *adc = iio_priv(indio_dev);

-	adc->dma_chan = dma_request_slave_channel(&indio_dev->dev, "rx");
-	if (!adc->dma_chan)
-		return -EINVAL;
+	adc->dma_chan = dma_request_chan(dev, "rx");
+	if (IS_ERR(adc->dma_chan)) {
+		int ret = PTR_ERR(adc->dma_chan);
+
+		adc->dma_chan = NULL;
+		return ret;
+	}

 	adc->rx_buf = dma_alloc_coherent(adc->dma_chan->device->dev,
 					 DFSDM_DMA_BUFFER_SIZE,

@@ -1415,7 +1420,7 @@ static int stm32_dfsdm_adc_chan_init_one(struct iio_dev *indio_dev,
 					  &adc->dfsdm->ch_list[ch->channel]);
 }

-static int stm32_dfsdm_audio_init(struct iio_dev *indio_dev)
+static int stm32_dfsdm_audio_init(struct device *dev, struct iio_dev *indio_dev)
 {
 	struct iio_chan_spec *ch;
 	struct stm32_dfsdm_adc *adc = iio_priv(indio_dev);

@@ -1442,10 +1447,10 @@ static int stm32_dfsdm_audio_init(struct iio_dev *indio_dev)
 	indio_dev->num_channels = 1;
 	indio_dev->channels = ch;

-	return stm32_dfsdm_dma_request(indio_dev);
+	return stm32_dfsdm_dma_request(dev, indio_dev);
 }

-static int stm32_dfsdm_adc_init(struct iio_dev *indio_dev)
+static int stm32_dfsdm_adc_init(struct device *dev, struct iio_dev *indio_dev)
 {
 	struct iio_chan_spec *ch;
 	struct stm32_dfsdm_adc *adc = iio_priv(indio_dev);

@@ -1489,8 +1494,17 @@ static int stm32_dfsdm_adc_init(struct iio_dev *indio_dev)
 	init_completion(&adc->completion);

 	/* Optionally request DMA */
-	if (stm32_dfsdm_dma_request(indio_dev)) {
-		dev_dbg(&indio_dev->dev, "No DMA support\n");
+	ret = stm32_dfsdm_dma_request(dev, indio_dev);
+	if (ret) {
+		if (ret != -ENODEV) {
+			if (ret != -EPROBE_DEFER)
+				dev_err(dev,
+					"DMA channel request failed with %d\n",
+					ret);
+			return ret;
+		}

+		dev_dbg(dev, "No DMA support\n");
 		return 0;
 	}

@@ -1603,7 +1617,7 @@ static int stm32_dfsdm_adc_probe(struct platform_device *pdev)
 	adc->dfsdm->fl_list[adc->fl_id].sync_mode = val;

 	adc->dev_data = dev_data;
-	ret = dev_data->init(iio);
+	ret = dev_data->init(dev, iio);
 	if (ret < 0)
 		return ret;
@@ -32,16 +32,17 @@ struct ads8344 {
 	u8 rx_buf[3];
 };

-#define ADS8344_VOLTAGE_CHANNEL(chan, si) \
+#define ADS8344_VOLTAGE_CHANNEL(chan, addr) \
 	{ \
 		.type = IIO_VOLTAGE, \
 		.indexed = 1, \
 		.channel = chan, \
 		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \
 		.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), \
+		.address = addr, \
 	}

-#define ADS8344_VOLTAGE_CHANNEL_DIFF(chan1, chan2, si) \
+#define ADS8344_VOLTAGE_CHANNEL_DIFF(chan1, chan2, addr) \
 	{ \
 		.type = IIO_VOLTAGE, \
 		.indexed = 1, \

@@ -50,6 +51,7 @@ struct ads8344 {
 		.differential = 1, \
 		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \
 		.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), \
+		.address = addr, \
 	}

 static const struct iio_chan_spec ads8344_channels[] = {

@@ -105,7 +107,7 @@ static int ads8344_read_raw(struct iio_dev *iio,
 	switch (mask) {
 	case IIO_CHAN_INFO_RAW:
 		mutex_lock(&adc->lock);
-		*value = ads8344_adc_conversion(adc, channel->scan_index,
+		*value = ads8344_adc_conversion(adc, channel->address,
 						channel->differential);
 		mutex_unlock(&adc->lock);
 		if (*value < 0)
@@ -225,6 +225,7 @@ static int vf610_dac_probe(struct platform_device *pdev)
 	return 0;

 error_iio_device_register:
+	vf610_dac_exit(info);
 	clk_disable_unprepare(info->clk);

 	return ret;
@@ -2386,6 +2386,7 @@ static void update_domain(struct protection_domain *domain)

 	domain_flush_devices(domain);
 	domain_flush_tlb_pde(domain);
+	domain_flush_complete(domain);
 }

 static int dir2prot(enum dma_data_direction direction)
@@ -1331,8 +1331,8 @@ static int __init init_iommu_from_acpi(struct amd_iommu *iommu,
 	}
 	case IVHD_DEV_ACPI_HID: {
 		u16 devid;
-		u8 hid[ACPIHID_HID_LEN] = {0};
-		u8 uid[ACPIHID_UID_LEN] = {0};
+		u8 hid[ACPIHID_HID_LEN];
+		u8 uid[ACPIHID_UID_LEN];
 		int ret;

 		if (h->type != 0x40) {

@@ -1349,6 +1349,7 @@ static int __init init_iommu_from_acpi(struct amd_iommu *iommu,
 			break;
 		}

+		uid[0] = '\0';
 		switch (e->uidf) {
 		case UID_NOT_PRESENT:

@@ -1363,8 +1364,8 @@ static int __init init_iommu_from_acpi(struct amd_iommu *iommu,
 			break;
 		case UID_IS_CHARACTER:

-			memcpy(uid, (u8 *)(&e->uid), ACPIHID_UID_LEN - 1);
-			uid[ACPIHID_UID_LEN - 1] = '\0';
+			memcpy(uid, &e->uid, e->uidl);
+			uid[e->uidl] = '\0';

 			break;
 		default:
@@ -306,6 +306,7 @@ static int tpci200_register(struct tpci200_board *tpci200)
 			"(bn 0x%X, sn 0x%X) failed to map driver user space!",
 			tpci200->info->pdev->bus->number,
 			tpci200->info->pdev->devfn);
+		res = -ENOMEM;
 		goto out_release_mem8_space;
 	}
@@ -2369,7 +2369,7 @@ static int fdp1_probe(struct platform_device *pdev)
 		dprintk(fdp1, "FDP1 Version R-Car H3\n");
 		break;
 	case FD1_IP_M3N:
-		dprintk(fdp1, "FDP1 Version R-Car M3N\n");
+		dprintk(fdp1, "FDP1 Version R-Car M3-N\n");
 		break;
 	case FD1_IP_E3:
 		dprintk(fdp1, "FDP1 Version R-Car E3\n");
@@ -143,6 +143,9 @@ static void rtsx_comm_pm_full_on(struct rtsx_pcr *pcr)

 	rtsx_disable_aspm(pcr);

+	/* Fixes DMA transfer timeout issue after disabling ASPM on RTS5260 */
+	msleep(1);
+
 	if (option->ltr_enabled)
 		rtsx_set_ltr_latency(pcr, option->ltr_active_latency);
@@ -266,6 +266,7 @@ void mei_me_cl_rm_by_uuid(struct mei_device *dev, const uuid_le *uuid)
 	down_write(&dev->me_clients_rwsem);
 	me_cl = __mei_me_cl_by_uuid(dev, uuid);
 	__mei_me_cl_del(dev, me_cl);
+	mei_me_cl_put(me_cl);
 	up_write(&dev->me_clients_rwsem);
 }

@@ -287,6 +288,7 @@ void mei_me_cl_rm_by_uuid_id(struct mei_device *dev, const uuid_le *uuid, u8 id)
 	down_write(&dev->me_clients_rwsem);
 	me_cl = __mei_me_cl_by_uuid_id(dev, uuid, id);
 	__mei_me_cl_del(dev, me_cl);
+	mei_me_cl_put(me_cl);
 	up_write(&dev->me_clients_rwsem);
 }
@@ -563,7 +563,7 @@ static int mtd_nvmem_add(struct mtd_info *mtd)

 	config.id = -1;
 	config.dev = &mtd->dev;
-	config.name = mtd->name;
+	config.name = dev_name(&mtd->dev);
 	config.owner = THIS_MODULE;
 	config.reg_read = mtd_nvmem_reg_read;
 	config.size = mtd->size;
@@ -1049,6 +1049,10 @@ static int spinand_init(struct spinand_device *spinand)

 	mtd->oobavail = ret;

+	/* Propagate ECC information to mtd_info */
+	mtd->ecc_strength = nand->eccreq.strength;
+	mtd->ecc_step_size = nand->eccreq.step_size;
+
 	return 0;

 err_cleanup_nanddev:
@@ -392,9 +392,6 @@ static void *eraseblk_count_seq_start(struct seq_file *s, loff_t *pos)
 {
 	struct ubi_device *ubi = s->private;

-	if (*pos == 0)
-		return SEQ_START_TOKEN;
-
 	if (*pos < ubi->peb_count)
 		return pos;

@@ -408,8 +405,6 @@ static void *eraseblk_count_seq_next(struct seq_file *s, void *v, loff_t *pos)
 {
 	struct ubi_device *ubi = s->private;

-	if (v == SEQ_START_TOKEN)
-		return pos;
 	(*pos)++;

 	if (*pos < ubi->peb_count)

@@ -431,11 +426,8 @@ static int eraseblk_count_seq_show(struct seq_file *s, void *iter)
 	int err;

 	/* If this is the start, print a header */
-	if (iter == SEQ_START_TOKEN) {
-		seq_puts(s,
-			 "physical_block_number\terase_count\tblock_status\tread_status\n");
-		return 0;
-	}
+	if (*block_number == 0)
+		seq_puts(s, "physical_block_number\terase_count\n");

 	err = ubi_io_is_bad(ubi, *block_number);
 	if (err)
@@ -68,7 +68,7 @@
  * 16kB.
  */
 #if PAGE_SIZE > SZ_16K
-#define ENA_PAGE_SIZE SZ_16K
+#define ENA_PAGE_SIZE (_AC(SZ_16K, UL))
 #else
 #define ENA_PAGE_SIZE PAGE_SIZE
 #endif
@@ -56,7 +56,7 @@ static const struct aq_board_revision_s hw_atl_boards[] = {
 	{ AQ_DEVICE_ID_D108, AQ_HWREV_2, &hw_atl_ops_b0, &hw_atl_b0_caps_aqc108, },
 	{ AQ_DEVICE_ID_D109, AQ_HWREV_2, &hw_atl_ops_b0, &hw_atl_b0_caps_aqc109, },

-	{ AQ_DEVICE_ID_AQC100, AQ_HWREV_ANY, &hw_atl_ops_b1, &hw_atl_b0_caps_aqc107, },
+	{ AQ_DEVICE_ID_AQC100, AQ_HWREV_ANY, &hw_atl_ops_b1, &hw_atl_b0_caps_aqc100, },
 	{ AQ_DEVICE_ID_AQC107, AQ_HWREV_ANY, &hw_atl_ops_b1, &hw_atl_b0_caps_aqc107, },
 	{ AQ_DEVICE_ID_AQC108, AQ_HWREV_ANY, &hw_atl_ops_b1, &hw_atl_b0_caps_aqc108, },
 	{ AQ_DEVICE_ID_AQC109, AQ_HWREV_ANY, &hw_atl_ops_b1, &hw_atl_b0_caps_aqc109, },
@@ -2086,7 +2086,8 @@ static void __ibmvnic_reset(struct work_struct *work)
 				rc = do_hard_reset(adapter, rwi, reset_state);
 				rtnl_unlock();
 			}
-		} else {
+		} else if (!(rwi->reset_reason == VNIC_RESET_FATAL &&
+				adapter->from_passive_init)) {
 			rc = do_reset(adapter, rwi, reset_state);
 		}
 		kfree(rwi);
@@ -3832,7 +3832,7 @@ static int stmmac_set_features(struct net_device *netdev,
 /**
  *  stmmac_interrupt - main ISR
  *  @irq: interrupt number.
- *  @dev_id: to pass the net device pointer.
+ *  @dev_id: to pass the net device pointer (must be valid).
  *  Description: this is the main driver interrupt service routine.
  *  It can call:
  *  o DMA service routine (to manage incoming frame reception and transmission

@@ -3856,11 +3856,6 @@ static irqreturn_t stmmac_interrupt(int irq, void *dev_id)
 	if (priv->irq_wake)
 		pm_wakeup_event(priv->device, 0);

-	if (unlikely(!dev)) {
-		netdev_err(priv->dev, "%s: invalid dev pointer\n", __func__);
-		return IRQ_NONE;
-	}
-
 	/* Check if adapter is up */
 	if (test_bit(STMMAC_DOWN, &priv->state))
 		return IRQ_HANDLED;
@@ -1172,11 +1172,11 @@ static int gtp_genl_del_pdp(struct sk_buff *skb, struct genl_info *info)
 static struct genl_family gtp_genl_family;

 static int gtp_genl_fill_info(struct sk_buff *skb, u32 snd_portid, u32 snd_seq,
-			      u32 type, struct pdp_ctx *pctx)
+			      int flags, u32 type, struct pdp_ctx *pctx)
 {
 	void *genlh;

-	genlh = genlmsg_put(skb, snd_portid, snd_seq, &gtp_genl_family, 0,
+	genlh = genlmsg_put(skb, snd_portid, snd_seq, &gtp_genl_family, flags,
 			    type);
 	if (genlh == NULL)
 		goto nlmsg_failure;

@@ -1230,8 +1230,8 @@ static int gtp_genl_get_pdp(struct sk_buff *skb, struct genl_info *info)
 		goto err_unlock;
 	}

-	err = gtp_genl_fill_info(skb2, NETLINK_CB(skb).portid,
-				 info->snd_seq, info->nlhdr->nlmsg_type, pctx);
+	err = gtp_genl_fill_info(skb2, NETLINK_CB(skb).portid, info->snd_seq,
+				 0, info->nlhdr->nlmsg_type, pctx);
 	if (err < 0)
 		goto err_unlock_free;

@@ -1274,6 +1274,7 @@ static int gtp_genl_dump_pdp(struct sk_buff *skb,
 			gtp_genl_fill_info(skb,
 					   NETLINK_CB(cb->skb).portid,
 					   cb->nlh->nlmsg_seq,
+					   NLM_F_MULTI,
 					   cb->nlh->nlmsg_type, pctx)) {
 				cb->args[0] = i;
 				cb->args[1] = j;
@ -514,9 +514,33 @@ static struct asus_wmi_driver asus_nb_wmi_driver = {
|
||||
.detect_quirks = asus_nb_wmi_quirks,
|
||||
};
|
||||
|
||||
static const struct dmi_system_id asus_nb_wmi_blacklist[] __initconst = {
|
||||
{
|
||||
/*
|
||||
* asus-nb-wm adds no functionality. The T100TA has a detachable
|
||||
* USB kbd, so no hotkeys and it has no WMI rfkill; and loading
|
||||
* asus-nb-wm causes the camera LED to turn and _stay_ on.
|
||||
*/
|
||||
.matches = {
|
||||
DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
|
||||
DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T100TA"),
|
||||
},
|
||||
},
|
||||
{
|
||||
/* The Asus T200TA has the same issue as the T100TA */
|
||||
.matches = {
|
||||
DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
|
||||
DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T200TA"),
|
||||
},
|
||||
},
|
||||
{} /* Terminating entry */
|
||||
};
|
||||
|
||||
static int __init asus_nb_wmi_init(void)
|
||||
{
|
||||
if (dmi_check_system(asus_nb_wmi_blacklist))
|
||||
return -ENODEV;
|
||||
|
||||
return asus_wmi_register_driver(&asus_nb_wmi_driver);
|
||||
}
|
||||
|
||||
|
@ -877,6 +877,11 @@ rio_dma_transfer(struct file *filp, u32 transfer_mode,
|
||||
rmcd_error("pinned %ld out of %ld pages",
|
||||
pinned, nr_pages);
|
||||
ret = -EFAULT;
|
||||
/*
|
||||
* Set nr_pages up to mean "how many pages to unpin, in
|
||||
* the error handler:
|
||||
*/
|
||||
nr_pages = pinned;
|
||||
goto err_pg;
|
||||
}
|
||||
|
||||
|
@ -2320,16 +2320,12 @@ static int ibmvscsi_probe(struct vio_dev *vdev, const struct vio_device_id *id)
|
||||
static int ibmvscsi_remove(struct vio_dev *vdev)
|
||||
{
|
||||
struct ibmvscsi_host_data *hostdata = dev_get_drvdata(&vdev->dev);
|
||||
unsigned long flags;
|
||||
|
||||
srp_remove_host(hostdata->host);
|
||||
scsi_remove_host(hostdata->host);
|
||||
|
||||
purge_requests(hostdata, DID_ERROR);
|
||||
|
||||
spin_lock_irqsave(hostdata->host->host_lock, flags);
|
||||
release_event_pool(&hostdata->pool, hostdata);
|
||||
spin_unlock_irqrestore(hostdata->host->host_lock, flags);
|
||||
|
||||
ibmvscsi_release_crq_queue(&hostdata->queue, hostdata,
|
||||
max_events);
|
||||
|
@ -1775,9 +1775,6 @@ qla2x00_port_speed_show(struct device *dev, struct device_attribute *attr,
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
ql_log(ql_log_info, vha, 0x70d6,
|
||||
"port speed:%d\n", ha->link_data_rate);
|
||||
|
||||
return scnprintf(buf, PAGE_SIZE, "%s\n", spd[ha->link_data_rate]);
|
||||
}
|
||||
|
||||
@ -2926,11 +2923,11 @@ qla24xx_vport_delete(struct fc_vport *fc_vport)
|
||||
test_bit(FCPORT_UPDATE_NEEDED, &vha->dpc_flags))
|
||||
msleep(1000);
|
||||
|
||||
qla_nvme_delete(vha);
|
||||
|
||||
qla24xx_disable_vp(vha);
|
||||
qla2x00_wait_for_sess_deletion(vha);
|
||||
|
||||
qla_nvme_delete(vha);
|
||||
vha->flags.delete_progress = 1;
|
||||
|
||||
qlt_remove_target(ha, vha);
|
||||
|
@ -3117,7 +3117,7 @@ qla24xx_abort_command(srb_t *sp)
|
||||
ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x108c,
|
||||
"Entered %s.\n", __func__);
|
||||
|
||||
if (vha->flags.qpairs_available && sp->qpair)
|
||||
if (sp->qpair)
|
||||
req = sp->qpair->req;
|
||||
else
|
||||
return QLA_FUNCTION_FAILED;
|
||||
|
@ -537,9 +537,9 @@ static void gb_tty_set_termios(struct tty_struct *tty,
|
||||
}
|
||||
|
||||
if (C_CRTSCTS(tty) && C_BAUD(tty) != B0)
|
||||
newline.flow_control |= GB_SERIAL_AUTO_RTSCTS_EN;
|
||||
newline.flow_control = GB_SERIAL_AUTO_RTSCTS_EN;
|
||||
else
|
||||
newline.flow_control &= ~GB_SERIAL_AUTO_RTSCTS_EN;
|
||||
newline.flow_control = 0;
|
||||
|
||||
if (memcmp(&gb_tty->line_coding, &newline, sizeof(newline))) {
|
||||
memcpy(&gb_tty->line_coding, &newline, sizeof(newline));
|
||||
|
@ -130,17 +130,24 @@ static int ad2s1210_config_write(struct ad2s1210_state *st, u8 data)
|
||||
static int ad2s1210_config_read(struct ad2s1210_state *st,
|
||||
unsigned char address)
|
||||
{
|
||||
struct spi_transfer xfer = {
|
||||
.len = 2,
|
||||
.rx_buf = st->rx,
|
||||
.tx_buf = st->tx,
|
||||
struct spi_transfer xfers[] = {
|
||||
{
|
||||
.len = 1,
|
||||
.rx_buf = &st->rx[0],
|
||||
.tx_buf = &st->tx[0],
|
||||
.cs_change = 1,
|
||||
}, {
|
||||
.len = 1,
|
||||
.rx_buf = &st->rx[1],
|
||||
.tx_buf = &st->tx[1],
|
||||
},
|
||||
};
|
||||
int ret = 0;
|
||||
|
||||
ad2s1210_set_mode(MOD_CONFIG, st);
|
||||
st->tx[0] = address | AD2S1210_MSB_IS_HIGH;
|
||||
st->tx[1] = AD2S1210_REG_FAULT;
|
||||
ret = spi_sync_transfer(st->sdev, &xfer, 1);
|
||||
ret = spi_sync_transfer(st->sdev, xfers, 2);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
|
@ -298,7 +298,6 @@ static int kp2000_pcie_probe(struct pci_dev *pdev,
|
||||
{
|
||||
int err = 0;
|
||||
struct kp2000_device *pcard;
|
||||
int rv;
|
||||
unsigned long reg_bar_phys_addr;
|
||||
unsigned long reg_bar_phys_len;
|
||||
unsigned long dma_bar_phys_addr;
|
||||
@ -445,11 +444,11 @@ static int kp2000_pcie_probe(struct pci_dev *pdev,
|
||||
if (err < 0)
|
||||
goto err_release_dma;
|
||||
|
||||
rv = request_irq(pcard->pdev->irq, kp2000_irq_handler, IRQF_SHARED,
|
||||
pcard->name, pcard);
|
||||
if (rv) {
|
||||
err = request_irq(pcard->pdev->irq, kp2000_irq_handler, IRQF_SHARED,
|
||||
pcard->name, pcard);
|
||||
if (err) {
|
||||
dev_err(&pcard->pdev->dev,
|
||||
"%s: failed to request_irq: %d\n", __func__, rv);
|
||||
"%s: failed to request_irq: %d\n", __func__, err);
|
||||
goto err_disable_msi;
|
||||
}
|
||||
|
||||
|
@ -3336,6 +3336,7 @@ static void target_tmr_work(struct work_struct *work)
|
||||
|
||||
cmd->se_tfo->queue_tm_rsp(cmd);
|
||||
|
||||
transport_lun_remove_cmd(cmd);
|
||||
transport_cmd_check_stop_to_fabric(cmd);
|
||||
return;
|
||||
|
||||
|
@ -840,6 +840,7 @@ console_initcall(sifive_console_init);
|
||||
|
||||
static void __ssp_add_console_port(struct sifive_serial_port *ssp)
|
||||
{
|
||||
spin_lock_init(&ssp->port.lock);
|
||||
sifive_serial_console_ports[ssp->port.line] = ssp;
|
||||
}
|
||||
|
||||
|
@ -1143,11 +1143,11 @@ void usb_disable_endpoint(struct usb_device *dev, unsigned int epaddr,
|
||||
|
||||
if (usb_endpoint_out(epaddr)) {
|
||||
ep = dev->ep_out[epnum];
|
||||
if (reset_hardware)
|
||||
if (reset_hardware && epnum != 0)
|
||||
dev->ep_out[epnum] = NULL;
|
||||
} else {
|
||||
ep = dev->ep_in[epnum];
|
||||
if (reset_hardware)
|
||||
if (reset_hardware && epnum != 0)
|
||||
dev->ep_in[epnum] = NULL;
|
||||
}
|
||||
if (ep) {
|
||||
|
@ -181,14 +181,14 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
|
||||
break;
|
||||
}
|
||||
|
||||
vhost_add_used(vq, head, sizeof(pkt->hdr) + payload_len);
|
||||
added = true;
|
||||
|
||||
/* Deliver to monitoring devices all correctly transmitted
|
||||
* packets.
|
||||
/* Deliver to monitoring devices all packets that we
|
||||
* will transmit.
|
||||
*/
|
||||
virtio_transport_deliver_tap_pkt(pkt);
|
||||
|
||||
vhost_add_used(vq, head, sizeof(pkt->hdr) + payload_len);
|
||||
added = true;
|
||||
|
||||
pkt->off += payload_len;
|
||||
total_len += payload_len;
|
||||
|
||||
|
@ -32,9 +32,8 @@ void afs_fileserver_probe_result(struct afs_call *call)
|
||||
struct afs_server *server = call->server;
|
||||
unsigned int server_index = call->server_index;
|
||||
unsigned int index = call->addr_ix;
|
||||
unsigned int rtt = UINT_MAX;
|
||||
unsigned int rtt_us;
|
||||
bool have_result = false;
|
||||
u64 _rtt;
|
||||
int ret = call->error;
|
||||
|
||||
_enter("%pU,%u", &server->uuid, index);
|
||||
@ -93,15 +92,9 @@ void afs_fileserver_probe_result(struct afs_call *call)
|
||||
}
|
||||
}
|
||||
|
||||
/* Get the RTT and scale it to fit into a 32-bit value that represents
|
||||
* over a minute of time so that we can access it with one instruction
|
||||
* on a 32-bit system.
|
||||
*/
|
||||
_rtt = rxrpc_kernel_get_rtt(call->net->socket, call->rxcall);
|
||||
_rtt /= 64;
|
||||
rtt = (_rtt > UINT_MAX) ? UINT_MAX : _rtt;
|
||||
if (rtt < server->probe.rtt) {
|
||||
server->probe.rtt = rtt;
|
||||
rtt_us = rxrpc_kernel_get_srtt(call->net->socket, call->rxcall);
|
||||
if (rtt_us < server->probe.rtt) {
|
||||
server->probe.rtt = rtt_us;
|
||||
alist->preferred = index;
|
||||
have_result = true;
|
||||
}
|
||||
@ -113,8 +106,7 @@ void afs_fileserver_probe_result(struct afs_call *call)
|
||||
spin_unlock(&server->probe_lock);
|
||||
|
||||
_debug("probe [%u][%u] %pISpc rtt=%u ret=%d",
|
||||
server_index, index, &alist->addrs[index].transport,
|
||||
(unsigned int)rtt, ret);
|
||||
server_index, index, &alist->addrs[index].transport, rtt_us, ret);
|
||||
|
||||
have_result |= afs_fs_probe_done(server);
|
||||
if (have_result) {
|
||||
|
@ -385,8 +385,6 @@ static int afs_deliver_fs_fetch_data(struct afs_call *call)
|
||||
ASSERTCMP(req->offset, <=, PAGE_SIZE);
|
||||
if (req->offset == PAGE_SIZE) {
|
||||
req->offset = 0;
|
||||
if (req->page_done)
|
||||
req->page_done(req);
|
||||
req->index++;
|
||||
if (req->remain > 0)
|
||||
goto begin_page;
|
||||
@ -440,11 +438,13 @@ static int afs_deliver_fs_fetch_data(struct afs_call *call)
|
||||
if (req->offset < PAGE_SIZE)
|
||||
zero_user_segment(req->pages[req->index],
|
||||
req->offset, PAGE_SIZE);
|
||||
if (req->page_done)
|
||||
req->page_done(req);
|
||||
req->offset = 0;
|
||||
}
|
||||
|
||||
if (req->page_done)
|
||||
for (req->index = 0; req->index < req->nr_pages; req->index++)
|
||||
req->page_done(req);
|
||||
|
||||
_leave(" = 0 [done]");
|
||||
return 0;
|
||||
}
|
||||
|
@ -31,10 +31,9 @@ void afs_vlserver_probe_result(struct afs_call *call)
|
||||
struct afs_addr_list *alist = call->alist;
|
||||
struct afs_vlserver *server = call->vlserver;
|
||||
unsigned int server_index = call->server_index;
|
||||
unsigned int rtt_us = 0;
|
||||
unsigned int index = call->addr_ix;
|
||||
unsigned int rtt = UINT_MAX;
|
||||
bool have_result = false;
|
||||
u64 _rtt;
|
||||
int ret = call->error;
|
||||
|
||||
_enter("%s,%u,%u,%d,%d", server->name, server_index, index, ret, call->abort_code);
|
||||
@ -93,15 +92,9 @@ void afs_vlserver_probe_result(struct afs_call *call)
|
||||
}
|
||||
}
|
||||
|
||||
/* Get the RTT and scale it to fit into a 32-bit value that represents
|
||||
* over a minute of time so that we can access it with one instruction
|
||||
* on a 32-bit system.
|
||||
*/
|
||||
_rtt = rxrpc_kernel_get_rtt(call->net->socket, call->rxcall);
|
||||
_rtt /= 64;
|
||||
rtt = (_rtt > UINT_MAX) ? UINT_MAX : _rtt;
|
||||
if (rtt < server->probe.rtt) {
|
||||
server->probe.rtt = rtt;
|
||||
rtt_us = rxrpc_kernel_get_srtt(call->net->socket, call->rxcall);
|
||||
if (rtt_us < server->probe.rtt) {
|
||||
server->probe.rtt = rtt_us;
|
||||
alist->preferred = index;
|
||||
have_result = true;
|
||||
}
|
||||
@ -113,8 +106,7 @@ void afs_vlserver_probe_result(struct afs_call *call)
|
||||
spin_unlock(&server->probe_lock);
|
||||
|
||||
_debug("probe [%u][%u] %pISpc rtt=%u ret=%d",
|
||||
server_index, index, &alist->addrs[index].transport,
|
||||
(unsigned int)rtt, ret);
|
||||
server_index, index, &alist->addrs[index].transport, rtt_us, ret);
|
||||
|
||||
have_result |= afs_vl_probe_done(server);
|
||||
if (have_result) {
|
||||
|
@ -497,8 +497,6 @@ static int yfs_deliver_fs_fetch_data64(struct afs_call *call)
|
||||
ASSERTCMP(req->offset, <=, PAGE_SIZE);
|
||||
if (req->offset == PAGE_SIZE) {
|
||||
req->offset = 0;
|
||||
if (req->page_done)
|
||||
req->page_done(req);
|
||||
req->index++;
|
||||
if (req->remain > 0)
|
||||
goto begin_page;
|
||||
@ -556,11 +554,13 @@ static int yfs_deliver_fs_fetch_data64(struct afs_call *call)
|
||||
if (req->offset < PAGE_SIZE)
|
||||
zero_user_segment(req->pages[req->index],
|
||||
req->offset, PAGE_SIZE);
|
||||
if (req->page_done)
|
||||
req->page_done(req);
|
||||
req->offset = 0;
|
||||
}
|
||||
|
||||
if (req->page_done)
|
||||
for (req->index = 0; req->index < req->nr_pages; req->index++)
|
||||
req->page_done(req);
|
||||
|
||||
_leave(" = 0 [done]");
|
||||
return 0;
|
||||
}
|
||||
|
@ -3693,6 +3693,7 @@ static void handle_cap_export(struct inode *inode, struct ceph_mds_caps *ex,
|
||||
WARN_ON(1);
|
||||
tsession = NULL;
|
||||
target = -1;
|
||||
mutex_lock(&session->s_mutex);
|
||||
}
|
||||
goto retry;
|
||||
|
||||
|
@ -1519,6 +1519,7 @@ static int configfs_rmdir(struct inode *dir, struct dentry *dentry)
|
||||
spin_lock(&configfs_dirent_lock);
|
||||
configfs_detach_rollback(dentry);
|
||||
spin_unlock(&configfs_dirent_lock);
|
||||
config_item_put(parent_item);
|
||||
return -EINTR;
|
||||
}
|
||||
frag->frag_dead = true;
|
||||
|
@ -70,7 +70,7 @@ static void copy_fd_bitmaps(struct fdtable *nfdt, struct fdtable *ofdt,
|
||||
*/
|
||||
static void copy_fdtable(struct fdtable *nfdt, struct fdtable *ofdt)
|
||||
{
|
||||
unsigned int cpy, set;
|
||||
size_t cpy, set;
|
||||
|
||||
BUG_ON(nfdt->max_fds < ofdt->max_fds);
|
||||
|
||||
|
@ -639,9 +639,6 @@ __acquires(&gl->gl_lockref.lock)
|
||||
goto out_unlock;
|
||||
if (nonblock)
|
||||
goto out_sched;
|
||||
smp_mb();
|
||||
if (atomic_read(&gl->gl_revokes) != 0)
|
||||
goto out_sched;
|
||||
set_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags);
|
||||
GLOCK_BUG_ON(gl, gl->gl_demote_state == LM_ST_EXCLUSIVE);
|
||||
gl->gl_target = gl->gl_demote_state;
|
||||
|
@ -79,13 +79,9 @@ int ubifs_prepare_auth_node(struct ubifs_info *c, void *node,
|
||||
struct shash_desc *inhash)
|
||||
{
|
||||
struct ubifs_auth_node *auth = node;
|
||||
u8 *hash;
|
||||
u8 hash[UBIFS_HASH_ARR_SZ];
|
||||
int err;
|
||||
|
||||
hash = kmalloc(crypto_shash_descsize(c->hash_tfm), GFP_NOFS);
|
||||
if (!hash)
|
||||
return -ENOMEM;
|
||||
|
||||
{
|
||||
SHASH_DESC_ON_STACK(hash_desc, c->hash_tfm);
|
||||
|
||||
@ -94,21 +90,16 @@ int ubifs_prepare_auth_node(struct ubifs_info *c, void *node,
|
||||
|
||||
err = crypto_shash_final(hash_desc, hash);
|
||||
if (err)
|
||||
goto out;
|
||||
return err;
|
||||
}
|
||||
|
||||
err = ubifs_hash_calc_hmac(c, hash, auth->hmac);
|
||||
if (err)
|
||||
goto out;
|
||||
return err;
|
||||
|
||||
auth->ch.node_type = UBIFS_AUTH_NODE;
|
||||
ubifs_prepare_node(c, auth, ubifs_auth_node_sz(c), 0);
|
||||
|
||||
err = 0;
|
||||
out:
|
||||
kfree(hash);
|
||||
|
||||
return err;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static struct shash_desc *ubifs_get_desc(const struct ubifs_info *c,
|
||||
|
@ -1375,7 +1375,6 @@ int ubifs_update_time(struct inode *inode, struct timespec64 *time,
|
||||
struct ubifs_info *c = inode->i_sb->s_fs_info;
|
||||
struct ubifs_budget_req req = { .dirtied_ino = 1,
|
||||
.dirtied_ino_d = ALIGN(ui->data_len, 8) };
|
||||
int iflags = I_DIRTY_TIME;
|
||||
int err, release;
|
||||
|
||||
if (!IS_ENABLED(CONFIG_UBIFS_ATIME_SUPPORT))
|
||||
@ -1393,11 +1392,8 @@ int ubifs_update_time(struct inode *inode, struct timespec64 *time,
|
||||
if (flags & S_MTIME)
|
||||
inode->i_mtime = *time;
|
||||
|
||||
if (!(inode->i_sb->s_flags & SB_LAZYTIME))
|
||||
iflags |= I_DIRTY_SYNC;
|
||||
|
||||
release = ui->dirty;
|
||||
__mark_inode_dirty(inode, iflags);
|
||||
__mark_inode_dirty(inode, I_DIRTY_SYNC);
|
||||
mutex_unlock(&ui->ui_mutex);
|
||||
if (release)
|
||||
ubifs_release_budget(c, &req);
|
||||
|
@ -601,18 +601,12 @@ static int authenticate_sleb(struct ubifs_info *c, struct ubifs_scan_leb *sleb,
|
||||
struct ubifs_scan_node *snod;
|
||||
int n_nodes = 0;
|
||||
int err;
|
||||
u8 *hash, *hmac;
|
||||
u8 hash[UBIFS_HASH_ARR_SZ];
|
||||
u8 hmac[UBIFS_HMAC_ARR_SZ];
|
||||
|
||||
if (!ubifs_authenticated(c))
|
||||
return sleb->nodes_cnt;
|
||||
|
||||
hash = kmalloc(crypto_shash_descsize(c->hash_tfm), GFP_NOFS);
|
||||
hmac = kmalloc(c->hmac_desc_len, GFP_NOFS);
|
||||
if (!hash || !hmac) {
|
||||
err = -ENOMEM;
|
||||
goto out;
|
||||
}
|
||||
|
||||
list_for_each_entry(snod, &sleb->nodes, list) {
|
||||
|
||||
n_nodes++;
|
||||
@ -662,9 +656,6 @@ static int authenticate_sleb(struct ubifs_info *c, struct ubifs_scan_leb *sleb,
|
||||
err = 0;
|
||||
}
|
||||
out:
|
||||
kfree(hash);
|
||||
kfree(hmac);
|
||||
|
||||
return err ? err : n_nodes - n_not_auth;
|
||||
}
|
||||
|
||||
|
@ -830,8 +830,12 @@ bpf_ctx_narrow_access_offset(u32 off, u32 size, u32 size_default)
|
||||
|
||||
static inline void bpf_prog_lock_ro(struct bpf_prog *fp)
|
||||
{
|
||||
set_vm_flush_reset_perms(fp);
|
||||
set_memory_ro((unsigned long)fp, fp->pages);
|
||||
#ifndef CONFIG_BPF_JIT_ALWAYS_ON
|
||||
if (!fp->jited) {
|
||||
set_vm_flush_reset_perms(fp);
|
||||
set_memory_ro((unsigned long)fp, fp->pages);
|
||||
}
|
||||
#endif
|
||||
}
|
||||
|
||||
static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr)
|
||||
|
@ -59,7 +59,7 @@ bool rxrpc_kernel_abort_call(struct socket *, struct rxrpc_call *,
|
||||
void rxrpc_kernel_end_call(struct socket *, struct rxrpc_call *);
|
||||
void rxrpc_kernel_get_peer(struct socket *, struct rxrpc_call *,
|
||||
struct sockaddr_rxrpc *);
|
||||
u64 rxrpc_kernel_get_rtt(struct socket *, struct rxrpc_call *);
|
||||
u32 rxrpc_kernel_get_srtt(struct socket *, struct rxrpc_call *);
|
||||
int rxrpc_kernel_charge_accept(struct socket *, rxrpc_notify_rx_t,
|
||||
rxrpc_user_attach_call_t, unsigned long, gfp_t,
|
||||
unsigned int);
|
||||
|
@ -19,7 +19,7 @@ struct net_dm_hw_metadata {
|
||||
struct net_device *input_dev;
|
||||
};
|
||||
|
||||
#if IS_ENABLED(CONFIG_NET_DROP_MONITOR)
|
||||
#if IS_REACHABLE(CONFIG_NET_DROP_MONITOR)
|
||||
void net_dm_hw_report(struct sk_buff *skb,
|
||||
const struct net_dm_hw_metadata *hw_metadata);
|
||||
#else
|
||||
|
@ -24,6 +24,9 @@ int snd_hdac_regmap_write_raw(struct hdac_device *codec, unsigned int reg,
|
||||
unsigned int val);
|
||||
int snd_hdac_regmap_update_raw(struct hdac_device *codec, unsigned int reg,
|
||||
unsigned int mask, unsigned int val);
|
||||
int snd_hdac_regmap_update_raw_once(struct hdac_device *codec, unsigned int reg,
|
||||
unsigned int mask, unsigned int val);
|
||||
void snd_hdac_regmap_sync(struct hdac_device *codec);
|
||||
|
||||
/**
|
||||
* snd_hdac_regmap_encode_verb - encode the verb to a pseudo register
|
||||
|
@ -87,6 +87,7 @@ struct hdac_device {
|
||||
|
||||
/* regmap */
|
||||
struct regmap *regmap;
|
||||
struct mutex regmap_lock;
|
||||
struct snd_array vendor_verbs;
|
||||
bool lazy_cache:1; /* don't wake up for writes */
|
||||
bool caps_overwriting:1; /* caps overwrite being in process */
|
||||
|
@ -1112,18 +1112,17 @@ TRACE_EVENT(rxrpc_rtt_tx,
|
||||
TRACE_EVENT(rxrpc_rtt_rx,
|
||||
TP_PROTO(struct rxrpc_call *call, enum rxrpc_rtt_rx_trace why,
|
||||
rxrpc_serial_t send_serial, rxrpc_serial_t resp_serial,
|
||||
s64 rtt, u8 nr, s64 avg),
|
||||
u32 rtt, u32 rto),
|
||||
|
||||
TP_ARGS(call, why, send_serial, resp_serial, rtt, nr, avg),
|
||||
TP_ARGS(call, why, send_serial, resp_serial, rtt, rto),
|
||||
|
||||
TP_STRUCT__entry(
|
||||
__field(unsigned int, call )
|
||||
__field(enum rxrpc_rtt_rx_trace, why )
|
||||
__field(u8, nr )
|
||||
__field(rxrpc_serial_t, send_serial )
|
||||
__field(rxrpc_serial_t, resp_serial )
|
||||
__field(s64, rtt )
|
||||
__field(u64, avg )
|
||||
__field(u32, rtt )
|
||||
__field(u32, rto )
|
||||
),
|
||||
|
||||
TP_fast_assign(
|
||||
@ -1132,18 +1131,16 @@ TRACE_EVENT(rxrpc_rtt_rx,
|
||||
__entry->send_serial = send_serial;
|
||||
__entry->resp_serial = resp_serial;
|
||||
__entry->rtt = rtt;
|
||||
__entry->nr = nr;
|
||||
__entry->avg = avg;
|
||||
__entry->rto = rto;
|
||||
),
|
||||
|
||||
TP_printk("c=%08x %s sr=%08x rr=%08x rtt=%lld nr=%u avg=%lld",
|
||||
TP_printk("c=%08x %s sr=%08x rr=%08x rtt=%u rto=%u",
|
||||
__entry->call,
|
||||
__print_symbolic(__entry->why, rxrpc_rtt_rx_traces),
|
||||
__entry->send_serial,
|
||||
__entry->resp_serial,
|
||||
__entry->rtt,
|
||||
__entry->nr,
|
||||
__entry->avg)
|
||||
__entry->rto)
|
||||
);
|
||||
|
||||
TRACE_EVENT(rxrpc_timer,
|
||||
@ -1544,6 +1541,41 @@ TRACE_EVENT(rxrpc_notify_socket,
|
||||
__entry->serial)
|
||||
);
|
||||
|
||||
TRACE_EVENT(rxrpc_rx_discard_ack,
|
||||
TP_PROTO(unsigned int debug_id, rxrpc_serial_t serial,
|
||||
rxrpc_seq_t first_soft_ack, rxrpc_seq_t call_ackr_first,
|
||||
rxrpc_seq_t prev_pkt, rxrpc_seq_t call_ackr_prev),
|
||||
|
||||
TP_ARGS(debug_id, serial, first_soft_ack, call_ackr_first,
|
||||
prev_pkt, call_ackr_prev),
|
||||
|
||||
TP_STRUCT__entry(
|
||||
__field(unsigned int, debug_id )
|
||||
__field(rxrpc_serial_t, serial )
|
||||
__field(rxrpc_seq_t, first_soft_ack)
|
||||
__field(rxrpc_seq_t, call_ackr_first)
|
||||
__field(rxrpc_seq_t, prev_pkt)
|
||||
__field(rxrpc_seq_t, call_ackr_prev)
|
||||
),
|
||||
|
||||
TP_fast_assign(
|
||||
__entry->debug_id = debug_id;
|
||||
__entry->serial = serial;
|
||||
__entry->first_soft_ack = first_soft_ack;
|
||||
__entry->call_ackr_first = call_ackr_first;
|
||||
__entry->prev_pkt = prev_pkt;
|
||||
__entry->call_ackr_prev = call_ackr_prev;
|
||||
),
|
||||
|
||||
TP_printk("c=%08x r=%08x %08x<%08x %08x<%08x",
|
||||
__entry->debug_id,
|
||||
__entry->serial,
|
||||
__entry->first_soft_ack,
|
||||
__entry->call_ackr_first,
|
||||
__entry->prev_pkt,
|
||||
__entry->call_ackr_prev)
|
||||
);
|
||||
|
||||
#endif /* _TRACE_RXRPC_H */
|
||||
|
||||
/* This part must be outside protection */
|
||||
|
@ -5246,32 +5246,38 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
|
||||
cfs_rq = cfs_rq_of(se);
|
||||
enqueue_entity(cfs_rq, se, flags);
|
||||
|
||||
/*
|
||||
* end evaluation on encountering a throttled cfs_rq
|
||||
*
|
||||
* note: in the case of encountering a throttled cfs_rq we will
|
||||
* post the final h_nr_running increment below.
|
||||
*/
|
||||
if (cfs_rq_throttled(cfs_rq))
|
||||
break;
|
||||
cfs_rq->h_nr_running++;
|
||||
cfs_rq->idle_h_nr_running += idle_h_nr_running;
|
||||
|
||||
/* end evaluation on encountering a throttled cfs_rq */
|
||||
if (cfs_rq_throttled(cfs_rq))
|
||||
goto enqueue_throttle;
|
||||
|
||||
flags = ENQUEUE_WAKEUP;
|
||||
}
|
||||
|
||||
for_each_sched_entity(se) {
|
||||
cfs_rq = cfs_rq_of(se);
|
||||
cfs_rq->h_nr_running++;
|
||||
cfs_rq->idle_h_nr_running += idle_h_nr_running;
|
||||
|
||||
if (cfs_rq_throttled(cfs_rq))
|
||||
break;
|
||||
|
||||
update_load_avg(cfs_rq, se, UPDATE_TG);
|
||||
update_cfs_group(se);
|
||||
|
||||
cfs_rq->h_nr_running++;
|
||||
cfs_rq->idle_h_nr_running += idle_h_nr_running;
|
||||
|
||||
/* end evaluation on encountering a throttled cfs_rq */
|
||||
if (cfs_rq_throttled(cfs_rq))
|
||||
goto enqueue_throttle;
|
||||
|
||||
/*
|
||||
* One parent has been throttled and cfs_rq removed from the
|
||||
* list. Add it back to not break the leaf list.
|
||||
*/
|
||||
if (throttled_hierarchy(cfs_rq))
|
||||
list_add_leaf_cfs_rq(cfs_rq);
|
||||
}
|
||||
|
||||
enqueue_throttle:
|
||||
if (!se) {
|
||||
add_nr_running(rq, 1);
|
||||
/*
|
||||
@ -5331,17 +5337,13 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
|
||||
cfs_rq = cfs_rq_of(se);
|
||||
dequeue_entity(cfs_rq, se, flags);
|
||||
|
||||
/*
|
||||
* end evaluation on encountering a throttled cfs_rq
|
||||
*
|
||||
* note: in the case of encountering a throttled cfs_rq we will
|
||||
* post the final h_nr_running decrement below.
|
||||
*/
|
||||
if (cfs_rq_throttled(cfs_rq))
|
||||
break;
|
||||
cfs_rq->h_nr_running--;
|
||||
cfs_rq->idle_h_nr_running -= idle_h_nr_running;
|
||||
|
||||
/* end evaluation on encountering a throttled cfs_rq */
|
||||
if (cfs_rq_throttled(cfs_rq))
|
||||
goto dequeue_throttle;
|
||||
|
||||
/* Don't dequeue parent if it has other entities besides us */
|
||||
if (cfs_rq->load.weight) {
|
||||
/* Avoid re-evaluating load for this entity: */
|
||||
@ -5359,16 +5361,20 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
|
||||
|
||||
for_each_sched_entity(se) {
|
||||
cfs_rq = cfs_rq_of(se);
|
||||
cfs_rq->h_nr_running--;
|
||||
cfs_rq->idle_h_nr_running -= idle_h_nr_running;
|
||||
|
||||
if (cfs_rq_throttled(cfs_rq))
|
||||
break;
|
||||
|
||||
update_load_avg(cfs_rq, se, UPDATE_TG);
|
||||
update_cfs_group(se);
|
||||
|
||||
cfs_rq->h_nr_running--;
|
||||
cfs_rq->idle_h_nr_running -= idle_h_nr_running;
|
||||
|
||||
/* end evaluation on encountering a throttled cfs_rq */
|
||||
if (cfs_rq_throttled(cfs_rq))
|
||||
goto dequeue_throttle;
|
||||
|
||||
}
|
||||
|
||||
dequeue_throttle:
|
||||
if (!se)
|
||||
sub_nr_running(rq, 1);
|
||||
|
||||
|
@ -212,6 +212,7 @@ test_string(void)
|
||||
#define PTR_STR "ffff0123456789ab"
|
||||
#define PTR_VAL_NO_CRNG "(____ptrval____)"
|
||||
#define ZEROS "00000000" /* hex 32 zero bits */
|
||||
#define ONES "ffffffff" /* hex 32 one bits */
|
||||
|
||||
static int __init
|
||||
plain_format(void)
|
||||
@ -243,6 +244,7 @@ plain_format(void)
|
||||
#define PTR_STR "456789ab"
|
||||
#define PTR_VAL_NO_CRNG "(ptrval)"
|
||||
#define ZEROS ""
|
||||
#define ONES ""
|
||||
|
||||
static int __init
|
||||
plain_format(void)
|
||||
@ -328,14 +330,28 @@ test_hashed(const char *fmt, const void *p)
|
||||
test(buf, fmt, p);
|
||||
}
|
||||
|
||||
/*
|
||||
* NULL pointers aren't hashed.
|
||||
*/
|
||||
static void __init
|
||||
null_pointer(void)
|
||||
{
|
||||
test_hashed("%p", NULL);
|
||||
test(ZEROS "00000000", "%p", NULL);
|
||||
test(ZEROS "00000000", "%px", NULL);
|
||||
test("(null)", "%pE", NULL);
|
||||
}
|
||||
|
||||
/*
|
||||
* Error pointers aren't hashed.
|
||||
*/
|
||||
static void __init
|
||||
error_pointer(void)
|
||||
{
|
||||
test(ONES "fffffff5", "%p", ERR_PTR(-11));
|
||||
test(ONES "fffffff5", "%px", ERR_PTR(-11));
|
||||
test("(efault)", "%pE", ERR_PTR(-11));
|
||||
}
|
||||
|
||||
#define PTR_INVALID ((void *)0x000000ab)
|
||||
|
||||
static void __init
|
||||
@ -598,6 +614,7 @@ test_pointer(void)
|
||||
{
|
||||
plain();
|
||||
null_pointer();
|
||||
error_pointer();
|
||||
invalid_pointer();
|
||||
symbol_ptr();
|
||||
kernel_ptr();
|
||||
|
@ -773,6 +773,13 @@ static char *ptr_to_id(char *buf, char *end, const void *ptr,
|
||||
unsigned long hashval;
|
||||
int ret;
|
||||
|
||||
/*
|
||||
* Print the real pointer value for NULL and error pointers,
|
||||
* as they are not actual addresses.
|
||||
*/
|
||||
if (IS_ERR_OR_NULL(ptr))
|
||||
return pointer_string(buf, end, ptr, spec);
|
||||
|
||||
/* When debugging early boot use non-cryptographically secure hash. */
|
||||
if (unlikely(debug_boot_weak_hash)) {
|
||||
hashval = hash_long((unsigned long)ptr, 32);
|
||||
|
@ -14,10 +14,10 @@ CFLAGS_REMOVE_tags.o = $(CC_FLAGS_FTRACE)
|
||||
# Function splitter causes unnecessary splits in __asan_load1/__asan_store1
|
||||
# see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
|
||||
|
||||
CFLAGS_common.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
|
||||
CFLAGS_generic.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
|
||||
CFLAGS_generic_report.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
|
||||
CFLAGS_tags.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
|
||||
CFLAGS_common.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) -DDISABLE_BRANCH_PROFILING
|
||||
CFLAGS_generic.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) -DDISABLE_BRANCH_PROFILING
|
||||
CFLAGS_generic_report.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) -DDISABLE_BRANCH_PROFILING
|
||||
CFLAGS_tags.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) -DDISABLE_BRANCH_PROFILING
|
||||
|
||||
obj-$(CONFIG_KASAN) := common.o init.o report.o
|
||||
obj-$(CONFIG_KASAN_GENERIC) += generic.o generic_report.o quarantine.o
|
||||
|
@ -15,7 +15,6 @@
|
||||
*/
|
||||
|
||||
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
|
||||
#define DISABLE_BRANCH_PROFILING
|
||||
|
||||
#include <linux/export.h>
|
||||
#include <linux/interrupt.h>
|
||||
|
@ -12,7 +12,6 @@
|
||||
*/
|
||||
|
||||
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
|
||||
#define DISABLE_BRANCH_PROFILING
|
||||
|
||||
#include <linux/export.h>
|
||||
#include <linux/interrupt.h>
|
||||
|
@ -129,12 +129,10 @@ int skb_flow_dissector_bpf_prog_attach(const union bpf_attr *attr,
|
||||
return 0;
|
||||
}
|
||||
|
||||
int skb_flow_dissector_bpf_prog_detach(const union bpf_attr *attr)
|
||||
static int flow_dissector_bpf_prog_detach(struct net *net)
|
||||
{
|
||||
struct bpf_prog *attached;
|
||||
struct net *net;
|
||||
|
||||
net = current->nsproxy->net_ns;
|
||||
mutex_lock(&flow_dissector_mutex);
|
||||
attached = rcu_dereference_protected(net->flow_dissector_prog,
|
||||
lockdep_is_held(&flow_dissector_mutex));
|
||||
@ -169,6 +167,24 @@ static __be16 skb_flow_get_be16(const struct sk_buff *skb, int poff,
|
||||
return 0;
|
||||
}
|
||||
|
||||
int skb_flow_dissector_bpf_prog_detach(const union bpf_attr *attr)
|
||||
{
|
||||
return flow_dissector_bpf_prog_detach(current->nsproxy->net_ns);
|
||||
}
|
||||
|
||||
static void __net_exit flow_dissector_pernet_pre_exit(struct net *net)
|
||||
{
|
||||
/* We're not racing with attach/detach because there are no
|
||||
* references to netns left when pre_exit gets called.
|
||||
*/
|
||||
if (rcu_access_pointer(net->flow_dissector_prog))
|
||||
flow_dissector_bpf_prog_detach(net);
|
||||
}
|
||||
|
||||
static struct pernet_operations flow_dissector_pernet_ops __net_initdata = {
|
||||
.pre_exit = flow_dissector_pernet_pre_exit,
|
||||
};
|
||||
|
||||
/**
|
||||
* __skb_flow_get_ports - extract the upper layer ports and return them
|
||||
* @skb: sk_buff to extract the ports from
|
||||
@ -1759,7 +1775,7 @@ static int __init init_default_flow_dissectors(void)
|
||||
skb_flow_dissector_init(&flow_keys_basic_dissector,
|
||||
flow_keys_basic_dissector_keys,
|
||||
ARRAY_SIZE(flow_keys_basic_dissector_keys));
|
||||
return 0;
|
||||
}
|
||||
|
||||
return register_pernet_subsys(&flow_dissector_pernet_ops);
|
||||
}
|
||||
core_initcall(init_default_flow_dissectors);
|
||||
|
@ -25,6 +25,7 @@ rxrpc-y := \
|
||||
peer_event.o \
|
||||
peer_object.o \
|
||||
recvmsg.o \
|
||||
rtt.o \
|
||||
security.o \
|
||||
sendmsg.o \
|
||||
skbuff.o \
|
||||
|
@ -7,6 +7,7 @@
|
||||
|
||||
#include <linux/atomic.h>
|
||||
#include <linux/seqlock.h>
|
||||
#include <linux/win_minmax.h>
|
||||
#include <net/net_namespace.h>
|
||||
#include <net/netns/generic.h>
|
||||
#include <net/sock.h>
|
||||
@ -311,11 +312,14 @@ struct rxrpc_peer {
|
||||
#define RXRPC_RTT_CACHE_SIZE 32
|
||||
spinlock_t rtt_input_lock; /* RTT lock for input routine */
|
||||
ktime_t rtt_last_req; /* Time of last RTT request */
|
||||
u64 rtt; /* Current RTT estimate (in nS) */
|
||||
u64 rtt_sum; /* Sum of cache contents */
|
||||
u64 rtt_cache[RXRPC_RTT_CACHE_SIZE]; /* Determined RTT cache */
|
||||
u8 rtt_cursor; /* next entry at which to insert */
|
||||
u8 rtt_usage; /* amount of cache actually used */
|
||||
unsigned int rtt_count; /* Number of samples we've got */
|
||||
|
||||
u32 srtt_us; /* smoothed round trip time << 3 in usecs */
|
||||
u32 mdev_us; /* medium deviation */
|
||||
u32 mdev_max_us; /* maximal mdev for the last rtt period */
|
||||
u32 rttvar_us; /* smoothed mdev_max */
|
||||
u32 rto_j; /* Retransmission timeout in jiffies */
|
||||
u8 backoff; /* Backoff timeout */
|
||||
|
||||
u8 cong_cwnd; /* Congestion window size */
|
||||
};
|
||||
@@ -1041,7 +1045,6 @@ extern unsigned long rxrpc_idle_ack_delay;
 extern unsigned int rxrpc_rx_window_size;
 extern unsigned int rxrpc_rx_mtu;
 extern unsigned int rxrpc_rx_jumbo_max;
-extern unsigned long rxrpc_resend_timeout;
 
 extern const s8 rxrpc_ack_priority[];
 
@@ -1069,8 +1072,6 @@ void rxrpc_send_keepalive(struct rxrpc_peer *);
  * peer_event.c
  */
 void rxrpc_error_report(struct sock *);
-void rxrpc_peer_add_rtt(struct rxrpc_call *, enum rxrpc_rtt_rx_trace,
-			rxrpc_serial_t, rxrpc_serial_t, ktime_t, ktime_t);
 void rxrpc_peer_keepalive_worker(struct work_struct *);
 
 /*
@@ -1102,6 +1103,14 @@ extern const struct seq_operations rxrpc_peer_seq_ops;
 void rxrpc_notify_socket(struct rxrpc_call *);
 int rxrpc_recvmsg(struct socket *, struct msghdr *, size_t, int);
 
+/*
+ * rtt.c
+ */
+void rxrpc_peer_add_rtt(struct rxrpc_call *, enum rxrpc_rtt_rx_trace,
+			rxrpc_serial_t, rxrpc_serial_t, ktime_t, ktime_t);
+unsigned long rxrpc_get_rto_backoff(struct rxrpc_peer *, bool);
+void rxrpc_peer_init_rtt(struct rxrpc_peer *);
+
 /*
  * rxkad.c
  */
@@ -248,7 +248,7 @@ static void rxrpc_send_ping(struct rxrpc_call *call, struct sk_buff *skb)
 	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
 	ktime_t now = skb->tstamp;
 
-	if (call->peer->rtt_usage < 3 ||
+	if (call->peer->rtt_count < 3 ||
 	    ktime_before(ktime_add_ms(call->peer->rtt_last_req, 1000), now))
 		rxrpc_propose_ACK(call, RXRPC_ACK_PING, sp->hdr.serial,
 				  true, true,
@@ -111,8 +111,8 @@ static void __rxrpc_propose_ACK(struct rxrpc_call *call, u8 ack_reason,
 	} else {
 		unsigned long now = jiffies, ack_at;
 
-		if (call->peer->rtt_usage > 0)
-			ack_at = nsecs_to_jiffies(call->peer->rtt);
+		if (call->peer->srtt_us != 0)
+			ack_at = usecs_to_jiffies(call->peer->srtt_us >> 3);
 		else
 			ack_at = expiry;
 
@@ -157,24 +157,18 @@ static void rxrpc_congestion_timeout(struct rxrpc_call *call)
 static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
 {
 	struct sk_buff *skb;
-	unsigned long resend_at;
+	unsigned long resend_at, rto_j;
 	rxrpc_seq_t cursor, seq, top;
-	ktime_t now, max_age, oldest, ack_ts, timeout, min_timeo;
+	ktime_t now, max_age, oldest, ack_ts;
 	int ix;
 	u8 annotation, anno_type, retrans = 0, unacked = 0;
 
 	_enter("{%d,%d}", call->tx_hard_ack, call->tx_top);
 
-	if (call->peer->rtt_usage > 1)
-		timeout = ns_to_ktime(call->peer->rtt * 3 / 2);
-	else
-		timeout = ms_to_ktime(rxrpc_resend_timeout);
-	min_timeo = ns_to_ktime((1000000000 / HZ) * 4);
-	if (ktime_before(timeout, min_timeo))
-		timeout = min_timeo;
+	rto_j = call->peer->rto_j;
 
 	now = ktime_get_real();
-	max_age = ktime_sub(now, timeout);
+	max_age = ktime_sub(now, jiffies_to_usecs(rto_j));
 
 	spin_lock_bh(&call->lock);
 
@@ -219,7 +213,7 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
 	}
 
 	resend_at = nsecs_to_jiffies(ktime_to_ns(ktime_sub(now, oldest)));
-	resend_at += jiffies + rxrpc_resend_timeout;
+	resend_at += jiffies + rto_j;
 	WRITE_ONCE(call->resend_at, resend_at);
 
 	if (unacked)
@@ -234,7 +228,7 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
 					rxrpc_timer_set_for_resend);
 	spin_unlock_bh(&call->lock);
 	ack_ts = ktime_sub(now, call->acks_latest_ts);
-	if (ktime_to_ns(ack_ts) < call->peer->rtt)
+	if (ktime_to_us(ack_ts) < (call->peer->srtt_us >> 3))
 		goto out;
 	rxrpc_propose_ACK(call, RXRPC_ACK_PING, 0, true, false,
 			  rxrpc_propose_ack_ping_for_lost_ack);
@@ -91,11 +91,11 @@ static void rxrpc_congestion_management(struct rxrpc_call *call,
 		/* We analyse the number of packets that get ACK'd per RTT
 		 * period and increase the window if we managed to fill it.
 		 */
-		if (call->peer->rtt_usage == 0)
+		if (call->peer->rtt_count == 0)
 			goto out;
 		if (ktime_before(skb->tstamp,
-				 ktime_add_ns(call->cong_tstamp,
-					      call->peer->rtt)))
+				 ktime_add_us(call->cong_tstamp,
+					      call->peer->srtt_us >> 3)))
 			goto out_no_clear_ca;
 		change = rxrpc_cong_rtt_window_end;
 		call->cong_tstamp = skb->tstamp;
@@ -802,6 +802,30 @@ static void rxrpc_input_soft_acks(struct rxrpc_call *call, u8 *acks,
 	}
 }
 
+/*
+ * Return true if the ACK is valid - ie. it doesn't appear to have regressed
+ * with respect to the ack state conveyed by preceding ACKs.
+ */
+static bool rxrpc_is_ack_valid(struct rxrpc_call *call,
+			       rxrpc_seq_t first_pkt, rxrpc_seq_t prev_pkt)
+{
+	rxrpc_seq_t base = READ_ONCE(call->ackr_first_seq);
+
+	if (after(first_pkt, base))
+		return true; /* The window advanced */
+
+	if (before(first_pkt, base))
+		return false; /* firstPacket regressed */
+
+	if (after_eq(prev_pkt, call->ackr_prev_seq))
+		return true; /* previousPacket hasn't regressed. */
+
+	/* Some rx implementations put a serial number in previousPacket. */
+	if (after_eq(prev_pkt, base + call->tx_winsize))
+		return false;
+	return true;
+}
+
 /*
  * Process an ACK packet.
 *
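The hunk above adds a wrap-safe validity check for incoming ACKs. The kernel's `before()`/`after()` helpers compare sequence numbers modulo 2^32, so the window can advance past the wrap point without confusing the check. A standalone sketch of the same logic (the comparison helpers are reimplemented here, and the call-state fields are passed as plain parameters rather than read from a `struct rxrpc_call`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t rxrpc_seq_t;

/* Wrap-safe sequence comparisons in the style of the kernel's before()/after(). */
static bool seq_before(rxrpc_seq_t a, rxrpc_seq_t b)   { return (int32_t)(a - b) < 0; }
static bool seq_after(rxrpc_seq_t a, rxrpc_seq_t b)    { return (int32_t)(b - a) < 0; }
static bool seq_after_eq(rxrpc_seq_t a, rxrpc_seq_t b) { return (int32_t)(b - a) <= 0; }

/* Mirror of rxrpc_is_ack_valid() from the diff; ackr_first_seq, ackr_prev_seq
 * and tx_winsize stand in for the corresponding rxrpc_call fields. */
static bool is_ack_valid(rxrpc_seq_t first_pkt, rxrpc_seq_t prev_pkt,
			 rxrpc_seq_t ackr_first_seq, rxrpc_seq_t ackr_prev_seq,
			 unsigned int tx_winsize)
{
	if (seq_after(first_pkt, ackr_first_seq))
		return true;  /* The window advanced */

	if (seq_before(first_pkt, ackr_first_seq))
		return false; /* firstPacket regressed */

	if (seq_after_eq(prev_pkt, ackr_prev_seq))
		return true;  /* previousPacket hasn't regressed */

	/* Some rx implementations put a serial number in previousPacket, which
	 * would land far beyond the transmit window; reject only that case. */
	if (seq_after_eq(prev_pkt, ackr_first_seq + tx_winsize))
		return false;
	return true;
}
```

Note the asymmetry: a regressed `firstPacket` is always discarded, but a regressed `previousPacket` is tolerated unless it lies outside the transmit window, precisely because some peers stuff a serial number into that field.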
@@ -865,9 +889,12 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
 	}
 
 	/* Discard any out-of-order or duplicate ACKs (outside lock). */
-	if (before(first_soft_ack, call->ackr_first_seq) ||
-	    before(prev_pkt, call->ackr_prev_seq))
+	if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) {
+		trace_rxrpc_rx_discard_ack(call->debug_id, sp->hdr.serial,
+					   first_soft_ack, call->ackr_first_seq,
+					   prev_pkt, call->ackr_prev_seq);
 		return;
+	}
 
 	buf.info.rxMTU = 0;
 	ioffset = offset + nr_acks + 3;
@@ -878,9 +905,12 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
 	spin_lock(&call->input_lock);
 
 	/* Discard any out-of-order or duplicate ACKs (inside lock). */
-	if (before(first_soft_ack, call->ackr_first_seq) ||
-	    before(prev_pkt, call->ackr_prev_seq))
+	if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) {
+		trace_rxrpc_rx_discard_ack(call->debug_id, sp->hdr.serial,
+					   first_soft_ack, call->ackr_first_seq,
+					   prev_pkt, call->ackr_prev_seq);
 		goto out;
+	}
 	call->acks_latest_ts = skb->tstamp;
 
 	call->ackr_first_seq = first_soft_ack;
@@ -63,11 +63,6 @@ unsigned int rxrpc_rx_mtu = 5692;
  */
 unsigned int rxrpc_rx_jumbo_max = 4;
 
-/*
- * Time till packet resend (in milliseconds).
- */
-unsigned long rxrpc_resend_timeout = 4 * HZ;
-
 const s8 rxrpc_ack_priority[] = {
 	[0] = 0,
 	[RXRPC_ACK_DELAY] = 1,
@@ -369,7 +369,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct sk_buff *skb,
 	    (test_and_clear_bit(RXRPC_CALL_EV_ACK_LOST, &call->events) ||
 	     retrans ||
 	     call->cong_mode == RXRPC_CALL_SLOW_START ||
-	     (call->peer->rtt_usage < 3 && sp->hdr.seq & 1) ||
+	     (call->peer->rtt_count < 3 && sp->hdr.seq & 1) ||
 	     ktime_before(ktime_add_ms(call->peer->rtt_last_req, 1000),
 			  ktime_get_real())))
 		whdr.flags |= RXRPC_REQUEST_ACK;
@@ -423,13 +423,10 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct sk_buff *skb,
 	if (whdr.flags & RXRPC_REQUEST_ACK) {
 		call->peer->rtt_last_req = skb->tstamp;
 		trace_rxrpc_rtt_tx(call, rxrpc_rtt_tx_data, serial);
-		if (call->peer->rtt_usage > 1) {
+		if (call->peer->rtt_count > 1) {
 			unsigned long nowj = jiffies, ack_lost_at;
 
-			ack_lost_at = nsecs_to_jiffies(2 * call->peer->rtt);
-			if (ack_lost_at < 1)
-				ack_lost_at = 1;
-
+			ack_lost_at = rxrpc_get_rto_backoff(call->peer, retrans);
 			ack_lost_at += nowj;
 			WRITE_ONCE(call->ack_lost_at, ack_lost_at);
 			rxrpc_reduce_call_timer(call, ack_lost_at, nowj,
@@ -295,52 +295,6 @@ static void rxrpc_distribute_error(struct rxrpc_peer *peer, int error,
 	}
 }
 
-/*
- * Add RTT information to cache.  This is called in softirq mode and has
- * exclusive access to the peer RTT data.
- */
-void rxrpc_peer_add_rtt(struct rxrpc_call *call, enum rxrpc_rtt_rx_trace why,
-			rxrpc_serial_t send_serial, rxrpc_serial_t resp_serial,
-			ktime_t send_time, ktime_t resp_time)
-{
-	struct rxrpc_peer *peer = call->peer;
-	s64 rtt;
-	u64 sum = peer->rtt_sum, avg;
-	u8 cursor = peer->rtt_cursor, usage = peer->rtt_usage;
-
-	rtt = ktime_to_ns(ktime_sub(resp_time, send_time));
-	if (rtt < 0)
-		return;
-
-	spin_lock(&peer->rtt_input_lock);
-
-	/* Replace the oldest datum in the RTT buffer */
-	sum -= peer->rtt_cache[cursor];
-	sum += rtt;
-	peer->rtt_cache[cursor] = rtt;
-	peer->rtt_cursor = (cursor + 1) & (RXRPC_RTT_CACHE_SIZE - 1);
-	peer->rtt_sum = sum;
-	if (usage < RXRPC_RTT_CACHE_SIZE) {
-		usage++;
-		peer->rtt_usage = usage;
-	}
-
-	spin_unlock(&peer->rtt_input_lock);
-
-	/* Now recalculate the average */
-	if (usage == RXRPC_RTT_CACHE_SIZE) {
-		avg = sum / RXRPC_RTT_CACHE_SIZE;
-	} else {
-		avg = sum;
-		do_div(avg, usage);
-	}
-
-	/* Don't need to update this under lock */
-	peer->rtt = avg;
-	trace_rxrpc_rtt_rx(call, why, send_serial, resp_serial, rtt,
-			   usage, avg);
-}
-
 /*
  * Perform keep-alive pings.
  */
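The function removed above maintained a 32-sample windowed average of raw RTTs. The fields the series adds to `struct rxrpc_peer` (`srtt_us` "smoothed round trip time << 3 in usecs", `mdev_us`, `rttvar_us`, `rto_j`) point at a TCP-style smoothed estimator instead. Below is a minimal sketch of that style of estimator, using the same `<< 3` scaling the diff's comments name; the update constants are the standard RFC 6298 / Van Jacobson ones and are an assumption here, not the new `rtt.c` verbatim:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative smoothed-RTT estimator (RFC 6298 style).  srtt_us is kept
 * scaled << 3 and mdev_us scaled << 2, as in TCP's estimator, so integer
 * arithmetic can apply the 1/8 and 1/4 gains without division. */
struct rtt_est {
	uint32_t srtt_us;       /* smoothed round trip time << 3, in usecs */
	uint32_t mdev_us;       /* mean deviation << 2, in usecs */
	unsigned int rtt_count; /* number of samples taken */
};

static void rtt_add_sample(struct rtt_est *e, uint32_t rtt_us)
{
	if (e->rtt_count == 0) {
		/* First sample seeds both estimators. */
		e->srtt_us = rtt_us << 3;
		e->mdev_us = rtt_us << 1;
	} else {
		int32_t err = (int32_t)rtt_us - (int32_t)(e->srtt_us >> 3);

		e->srtt_us += err;	/* srtt <- 7/8 srtt + 1/8 rtt */
		if (err < 0)
			err = -err;
		/* mdev <- 3/4 mdev + 1/4 |err| (unsigned wrap is well-defined) */
		e->mdev_us += (uint32_t)err - (e->mdev_us >> 2);
	}
	e->rtt_count++;
}

/* RTO in usecs: srtt + 4 * rttvar; because mdev_us is kept << 2, adding
 * it directly to the descaled srtt gives exactly that. */
static uint32_t rtt_rto_us(const struct rtt_est *e)
{
	return (e->srtt_us >> 3) + e->mdev_us;
}
```

Compared with the removed windowed average, this form needs constant memory, weights recent samples more heavily, and yields a deviation term from which a retransmission timeout (the new `rto_j`) can be derived, which is what lets the series drop the fixed `rxrpc_resend_timeout`.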