This is the 5.4.23 stable release

-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAl5ZPksACgkQONu9yGCS
 aT4X0A//YvcKCCLgtWdQsWVJ0PEf5YE2KQI4rvbbD7wKkE5S6AWMhL3D+t46cWVe
 EgBtZLJYFkfqhdfF4JqjPsof/3CYS4o/LnAqzo0BgnnFccLV25SsGqDMn1b5Z6K2
 2vUs3gRydFk8iAWFs6XxrxScUbYrqr+6rQcLvgWHuMXjOInYPBUdc6b+vYMRsY79
 Eil6ROUy0daQPDJzfFrODW+OiUQ8uUx0F9Mq3fhuzNwx8E1QBv0qoH6fFkCYOzNa
 rmyjETil09hjLFMVThGjJoUPEzog6135T/s+eRo7vR13XdHPLo8lvrRJNGnuFBct
 CPVEZBNDVE20TRXGCaKDM/T8BMgqZ3V4Kx9BFwCyP34LdGebKvOsNvoNX7AxlyvQ
 lfOEpJU3rBuEUaM32J842NoMaSbIrOYBwtrA/0XEMQhIyA26FjJsE9foJFog68gQ
 2fekQSKpzWHcw1k3kUPH5iYHjD4oEz3mVM+C12klszMeoGYmnkGpmW0GzhtDJZiL
 94LxhUo3vNzBN5ut1am5FrYMaw5YF0Ptnk6n4CWvU9NnvHesFNE/BFzok7yv03M+
 Mm0XDyGKO4xWnCIbj2nTfbKDoY3FL7nJJ1GhwmHb36V2ZURIkkSob4In2/JM18Gw
 ltYJTEPsK3SeomLQDNCpoSdRcp3G615b7k8H9agz14Loh4Tydh0=
 =ScbK
 -----END PGP SIGNATURE-----

Merge 5.4.23 into android-5.4

Changes in 5.4.23
	iommu/qcom: Fix bogus detach logic
	ALSA: hda: Use scnprintf() for printing texts for sysfs/procfs
	ALSA: hda/realtek - Apply quirk for MSI GP63, too
	ALSA: hda/realtek - Apply quirk for yet another MSI laptop
	ASoC: codec2codec: avoid invalid/double-free of pcm runtime
	ASoC: sun8i-codec: Fix setting DAI data format
	tpm: Initialize crypto_id of allocated_banks to HASH_ALGO__LAST
	ecryptfs: fix a memory leak bug in parse_tag_1_packet()
	ecryptfs: fix a memory leak bug in ecryptfs_init_messaging()
	btrfs: handle logged extent failure properly
	thunderbolt: Prevent crash if non-active NVMem file is read
	USB: misc: iowarrior: add support for 2 OEMed devices
	USB: misc: iowarrior: add support for the 28 and 28L devices
	USB: misc: iowarrior: add support for the 100 device
	e1000e: Use rtnl_lock to prevent race conditions between net and pci/pm
	floppy: check FDC index for errors before assigning it
	vt: fix scrollback flushing on background consoles
	vt: selection, handle pending signals in paste_selection
	vt: vt_ioctl: fix race in VT_RESIZEX
	staging: android: ashmem: Disallow ashmem memory from being remapped
	staging: vt6656: fix sign of rx_dbm to bb_pre_ed_rssi.
	xhci: Force Maximum Packet size for Full-speed bulk devices to valid range.
	xhci: fix runtime pm enabling for quirky Intel hosts
	xhci: apply XHCI_PME_STUCK_QUIRK to Intel Comet Lake platforms
	xhci: Fix memory leak when caching protocol extended capability PSI tables - take 2
	usb: host: xhci: update event ring dequeue pointer on purpose
	USB: core: add endpoint-blacklist quirk
	USB: quirks: blacklist duplicate ep on Sound Devices USBPre2
	usb: uas: fix a plug & unplug racing
	USB: Fix novation SourceControl XL after suspend
	USB: hub: Don't record a connect-change event during reset-resume
	USB: hub: Fix the broken detection of USB3 device in SMSC hub
	usb: dwc2: Fix SET/CLEAR_FEATURE and GET_STATUS flows
	usb: dwc3: gadget: Check for IOC/LST bit in TRB->ctrl fields
	usb: dwc3: debug: fix string position formatting mixup with ret and len
	scsi: Revert "target/core: Inline transport_lun_remove_cmd()"
	staging: rtl8188eu: Fix potential security hole
	staging: rtl8188eu: Fix potential overuse of kernel memory
	staging: rtl8723bs: Fix potential security hole
	staging: rtl8723bs: Fix potential overuse of kernel memory
	drm/panfrost: perfcnt: Reserve/use the AS attached to the perfcnt MMU context
	powerpc/8xx: Fix clearing of bits 20-23 in ITLB miss
	powerpc/eeh: Fix deadlock handling dead PHB
	powerpc/tm: Fix clearing MSR[TS] in current when reclaiming on signal delivery
	powerpc/entry: Fix an #if which should be an #ifdef in entry_32.S
	powerpc/hugetlb: Fix 512k hugepages on 8xx with 16k page size
	powerpc/hugetlb: Fix 8M hugepages on 8xx
	arm64: memory: Add missing brackets to untagged_addr() macro
	jbd2: fix ocfs2 corrupt when clearing block group bits
	x86/ima: use correct identifier for SetupMode variable
	x86/mce/amd: Publish the bank pointer only after setup has succeeded
	x86/mce/amd: Fix kobject lifetime
	x86/cpu/amd: Enable the fixed Instructions Retired counter IRPERF
	serial: 8250: Check UPF_IRQ_SHARED in advance
	tty/serial: atmel: manage shutdown in case of RS485 or ISO7816 mode
	tty: serial: imx: setup the correct sg entry for tx dma
	tty: serial: qcom_geni_serial: Fix RX cancel command failure
	serdev: ttyport: restore client ops on deregistration
	MAINTAINERS: Update drm/i915 bug filing URL
	ACPI: PM: s2idle: Check fixed wakeup events in acpi_s2idle_wake()
	Revert "ipc,sem: remove uneeded sem_undo_list lock usage in exit_sem()"
	mm/memcontrol.c: lost css_put in memcg_expand_shrinker_maps()
	nvme-multipath: Fix memory leak with ana_log_buf
	genirq/irqdomain: Make sure all irq domain flags are distinct
	mm/vmscan.c: don't round up scan size for online memory cgroup
	mm/sparsemem: pfn_to_page is not valid yet on SPARSEMEM
	lib/stackdepot.c: fix global out-of-bounds in stack_slabs
	mm: Avoid creating virtual address aliases in brk()/mmap()/mremap()
	drm/amdgpu/soc15: fix xclk for raven
	drm/amdgpu/gfx9: disable gfxoff when reading rlc clock
	drm/amdgpu/gfx10: disable gfxoff when reading rlc clock
	drm/nouveau/kms/gv100-: Re-set LUT after clearing for modesets
	drm/i915: Wean off drm_pci_alloc/drm_pci_free
	drm/i915: Update drm/i915 bug filing URL
	sched/psi: Fix OOB write when writing 0 bytes to PSI files
	KVM: nVMX: Don't emulate instructions in guest mode
	KVM: x86: don't notify userspace IOAPIC on edge-triggered interrupt EOI
	ext4: fix a data race in EXT4_I(inode)->i_disksize
	ext4: add cond_resched() to __ext4_find_entry()
	ext4: fix potential race between online resizing and write operations
	ext4: fix potential race between s_group_info online resizing and access
	ext4: fix potential race between s_flex_groups online resizing and access
	ext4: fix mount failure with quota configured as module
	ext4: rename s_journal_flag_rwsem to s_writepages_rwsem
	ext4: fix race between writepages and enabling EXT4_EXTENTS_FL
	KVM: nVMX: Refactor IO bitmap checks into helper function
	KVM: nVMX: Check IO instruction VM-exit conditions
	KVM: nVMX: clear PIN_BASED_POSTED_INTR from nested pinbased_ctls only when apicv is globally disabled
	KVM: nVMX: handle nested posted interrupts when apicv is disabled for L1
	KVM: apic: avoid calculating pending eoi from an uninitialized val
	btrfs: destroy qgroup extent records on transaction abort
	btrfs: fix bytes_may_use underflow in prealloc error condtition
	btrfs: reset fs_root to NULL on error in open_ctree
	btrfs: do not check delayed items are empty for single transaction cleanup
	Btrfs: fix btrfs_wait_ordered_range() so that it waits for all ordered extents
	Btrfs: fix race between shrinking truncate and fiemap
	btrfs: don't set path->leave_spinning for truncate
	Btrfs: fix deadlock during fast fsync when logging prealloc extents beyond eof
	Revert "dmaengine: imx-sdma: Fix memory leak"
	drm/i915/gt: Detect if we miss WaIdleLiteRestore
	drm/i915/execlists: Always force a context reload when rewinding RING_TAIL
	drm/i915/gvt: more locking for ppgtt mm LRU list
	drm/bridge: tc358767: fix poll timeouts
	drm/i915/gt: Protect defer_request() from new waiters
	drm/msm/dpu: fix BGR565 vs RGB565 confusion
	scsi: Revert "RDMA/isert: Fix a recently introduced regression related to logout"
	scsi: Revert "target: iscsi: Wait for all commands to finish before freeing a session"
	usb: gadget: composite: Fix bMaxPower for SuperSpeedPlus
	usb: dwc2: Fix in ISOC request length checking
	staging: rtl8723bs: fix copy of overlapping memory
	staging: greybus: use after free in gb_audio_manager_remove_all()
	ASoC: atmel: fix atmel_ssc_set_audio link failure
	ASoC: fsl_sai: Fix exiting path on probing failure
	ecryptfs: replace BUG_ON with error handling code
	iommu/vt-d: Fix compile warning from intel-svm.h
	crypto: rename sm3-256 to sm3 in hash_algo_name
	genirq/proc: Reject invalid affinity masks (again)
	bpf, offload: Replace bitwise AND by logical AND in bpf_prog_offload_info_fill
	arm64: lse: Fix LSE atomics with LLVM
	io_uring: fix __io_iopoll_check deadlock in io_sq_thread
	ALSA: rawmidi: Avoid bit fields for state flags
	ALSA: seq: Avoid concurrent access to queue flags
	ALSA: seq: Fix concurrent access to queue current tick/time
	netfilter: xt_hashlimit: limit the max size of hashtable
	rxrpc: Fix call RCU cleanup using non-bh-safe locks
	io_uring: prevent sq_thread from spinning when it should stop
	ata: ahci: Add shutdown to freeze hardware resources of ahci
	xen: Enable interrupts when calling _cond_resched()
	net/mlx5e: Reset RQ doorbell counter before moving RQ state from RST to RDY
	net/mlx5: Fix sleep while atomic in mlx5_eswitch_get_vepa
	net/mlx5e: Fix crash in recovery flow without devlink reporter
	s390/kaslr: Fix casts in get_random
	s390/mm: Explicitly compare PAGE_DEFAULT_KEY against zero in storage_key_init_range
	bpf: Selftests build error in sockmap_basic.c
	ASoC: SOF: Intel: hda: Add iDisp4 DAI
	Linux 5.4.23

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I1d60f06bcb6ee74e5601976c7af79153c41af11c

@@ -44,8 +44,15 @@ The AArch64 Tagged Address ABI has two stages of relaxation depending
 how the user addresses are used by the kernel:

 1. User addresses not accessed by the kernel but used for address space
-   management (e.g. ``mmap()``, ``mprotect()``, ``madvise()``). The use
-   of valid tagged pointers in this context is always allowed.
+   management (e.g. ``mprotect()``, ``madvise()``). The use of valid
+   tagged pointers in this context is allowed with the exception of
+   ``brk()``, ``mmap()`` and the ``new_address`` argument to
+   ``mremap()`` as these have the potential to alias with existing
+   user addresses.
+
+   NOTE: This behaviour changed in v5.6 and so some earlier kernels may
+   incorrectly accept valid tagged pointers for the ``brk()``,
+   ``mmap()`` and ``mremap()`` system calls.

 2. User addresses accessed by the kernel (e.g. ``write()``). This ABI
    relaxation is disabled by default and the application thread needs to


@@ -8201,7 +8201,7 @@ M:	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
 M:	Rodrigo Vivi <rodrigo.vivi@intel.com>
 L:	intel-gfx@lists.freedesktop.org
 W:	https://01.org/linuxgraphics/
-B:	https://01.org/linuxgraphics/documentation/how-report-bugs
+B:	https://gitlab.freedesktop.org/drm/intel/-/wikis/How-to-file-i915-bugs
 C:	irc://chat.freenode.net/intel-gfx
 Q:	http://patchwork.freedesktop.org/project/intel-gfx/
 T:	git git://anongit.freedesktop.org/drm-intel


@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 4
-SUBLEVEL = 22
+SUBLEVEL = 23
 EXTRAVERSION =
 NAME = Kleptomaniac Octopus


@@ -6,7 +6,7 @@

 #if defined(CONFIG_AS_LSE) && defined(CONFIG_ARM64_LSE_ATOMICS)

-#define __LSE_PREAMBLE	".arch armv8-a+lse\n"
+#define __LSE_PREAMBLE	".arch_extension lse\n"

 #include <linux/compiler_types.h>
 #include <linux/export.h>


@@ -219,7 +219,7 @@ static inline unsigned long kaslr_offset(void)
 	((__force __typeof__(addr))sign_extend64((__force u64)(addr), 55))

 #define untagged_addr(addr)	({					\
-	u64 __addr = (__force u64)addr;					\
+	u64 __addr = (__force u64)(addr);				\
 	__addr &= __untagged_addr(__addr);				\
 	(__force __typeof__(addr))__addr;				\
 })
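
The added parentheses matter whenever the macro argument is an expression rather than a plain identifier. A standalone C sketch of the hazard (illustrative macros, not kernel code):

#include <stdio.h>
#include <stdint.h>

/* Without parentheses the cast binds only to the first token of the
 * argument; `+ 1` then becomes integer arithmetic on the converted value. */
#define TO_ADDR_BAD(addr)  ((uintptr_t)addr)
/* With parentheses the whole expression is evaluated first, so pointer
 * arithmetic (scaled by sizeof(*p)) happens before the conversion. */
#define TO_ADDR_GOOD(addr) ((uintptr_t)(addr))

int main(void)
{
	int buf[4];
	int *p = buf;

	printf("bad:  +%zu bytes\n", (size_t)(TO_ADDR_BAD(p + 1) - (uintptr_t)p));  /* 1 */
	printf("good: +%zu bytes\n", (size_t)(TO_ADDR_GOOD(p + 1) - (uintptr_t)p)); /* 4 */
	return 0;
}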


@@ -295,8 +295,13 @@ static inline bool pfn_valid(unsigned long pfn)
 /*
  * Some number of bits at the level of the page table that points to
  * a hugepte are used to encode the size.  This masks those bits.
+ * On 8xx, HW assistance requires 4k alignment for the hugepte.
  */
+#ifdef CONFIG_PPC_8xx
+#define HUGEPD_SHIFT_MASK	0xfff
+#else
 #define HUGEPD_SHIFT_MASK	0x3f
+#endif

 #ifndef __ASSEMBLY__


@@ -1200,14 +1200,6 @@ void eeh_handle_special_event(void)

 			eeh_pe_state_mark(pe, EEH_PE_RECOVERING);
 			eeh_handle_normal_event(pe);
 		} else {
-			pci_lock_rescan_remove();
-			list_for_each_entry(hose, &hose_list, list_node) {
-				phb_pe = eeh_phb_pe_get(hose);
-				if (!phb_pe ||
-				    !(phb_pe->state & EEH_PE_ISOLATED) ||
-				    (phb_pe->state & EEH_PE_RECOVERING))
-					continue;
-
 			eeh_for_each_pe(pe, tmp_pe)
 				eeh_pe_for_each_dev(tmp_pe, edev, tmp_edev)
 					edev->mode &= ~EEH_DEV_NO_HANDLER;
@@ -1218,6 +1210,15 @@ void eeh_handle_special_event(void)
 				eeh_pe_report(
 					"error_detected(permanent failure)", pe,
 					eeh_report_failure, NULL);
+
+			pci_lock_rescan_remove();
+			list_for_each_entry(hose, &hose_list, list_node) {
+				phb_pe = eeh_phb_pe_get(hose);
+				if (!phb_pe ||
+				    !(phb_pe->state & EEH_PE_ISOLATED) ||
+				    (phb_pe->state & EEH_PE_RECOVERING))
+					continue;
+
 				bus = eeh_pe_bus_get(phb_pe);
 				if (!bus) {
 					pr_err("%s: Cannot find PCI bus for "


@@ -778,7 +778,7 @@ fast_exception_return:
 1:	lis	r3,exc_exit_restart_end@ha
 	addi	r3,r3,exc_exit_restart_end@l
 	cmplw	r12,r3
-#if CONFIG_PPC_BOOK3S_601
+#ifdef CONFIG_PPC_BOOK3S_601
 	bge	2b
 #else
 	bge	3f
@@ -786,7 +786,7 @@ fast_exception_return:
 	lis	r4,exc_exit_restart@ha
 	addi	r4,r4,exc_exit_restart@l
 	cmplw	r12,r4
-#if CONFIG_PPC_BOOK3S_601
+#ifdef CONFIG_PPC_BOOK3S_601
 	blt	2b
 #else
 	blt	3f


@@ -289,7 +289,7 @@ InstructionTLBMiss:
 	 * set.  All other Linux PTE bits control the behavior
 	 * of the MMU.
 	 */
-	rlwimi	r10, r10, 0, 0x0f00	/* Clear bits 20-23 */
+	rlwinm	r10, r10, 0, ~0x0f00	/* Clear bits 20-23 */
 	rlwimi	r10, r10, 4, 0x0400	/* Copy _PAGE_EXEC into bit 21 */
 	ori	r10, r10, RPN_PATTERN | 0x200 /* Set 22 and 24-27 */
 	mtspr	SPRN_MI_RPN, r10	/* Update TLB entry */
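
Why the original instruction could not clear anything: rlwimi inserts the rotated source under the mask and keeps the destination bits outside it, so with source and destination both r10 and a rotate of 0 it is a no-op, while rlwinm ANDs with the mask. A small C model of the two (hypothetical helpers, rotate amount fixed at 0):

#include <stdio.h>
#include <stdint.h>

/* rotate-left-word-immediate-then-mask-insert: take rs under the mask,
 * keep ra outside it. */
static uint32_t rlwimi(uint32_t ra, uint32_t rs, uint32_t mask)
{
	return (rs & mask) | (ra & ~mask);
}

/* rotate-left-word-immediate-then-AND-with-mask. */
static uint32_t rlwinm(uint32_t rs, uint32_t mask)
{
	return rs & mask;
}

int main(void)
{
	uint32_t r10 = 0xffffffff;

	/* The buggy form: inserting a register into itself changes
	 * nothing, so bits 20-23 were never cleared. */
	printf("rlwimi: %08x\n", rlwimi(r10, r10, 0x0f00));	/* ffffffff */

	/* The fix: AND with the inverted mask actually clears them. */
	printf("rlwinm: %08x\n", rlwinm(r10, ~0x0f00u));	/* fffff0ff */
	return 0;
}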


@@ -200,14 +200,27 @@ unsigned long get_tm_stackpointer(struct task_struct *tsk)
 	 * normal/non-checkpointed stack pointer.
 	 */

+	unsigned long ret = tsk->thread.regs->gpr[1];
+
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
 	BUG_ON(tsk != current);

 	if (MSR_TM_ACTIVE(tsk->thread.regs->msr)) {
+		preempt_disable();
 		tm_reclaim_current(TM_CAUSE_SIGNAL);
 		if (MSR_TM_TRANSACTIONAL(tsk->thread.regs->msr))
-			return tsk->thread.ckpt_regs.gpr[1];
+			ret = tsk->thread.ckpt_regs.gpr[1];
+
+		/*
+		 * If we treclaim, we must clear the current thread's TM bits
+		 * before re-enabling preemption. Otherwise we might be
+		 * preempted and have the live MSR[TS] changed behind our back
+		 * (tm_recheckpoint_new_task() would recheckpoint). Besides, we
+		 * enter the signal handler in non-transactional state.
+		 */
+		tsk->thread.regs->msr &= ~MSR_TS_MASK;
+		preempt_enable();
 	}
 #endif
-	return tsk->thread.regs->gpr[1];
+	return ret;
 }


@@ -489,19 +489,11 @@ static int save_user_regs(struct pt_regs *regs, struct mcontext __user *frame,
  */
 static int save_tm_user_regs(struct pt_regs *regs,
			     struct mcontext __user *frame,
-			     struct mcontext __user *tm_frame, int sigret)
+			     struct mcontext __user *tm_frame, int sigret,
+			     unsigned long msr)
 {
-	unsigned long msr = regs->msr;
-
 	WARN_ON(tm_suspend_disabled);

-	/* Remove TM bits from thread's MSR.  The MSR in the sigcontext
-	 * just indicates to userland that we were doing a transaction, but we
-	 * don't want to return in transactional state.  This also ensures
-	 * that flush_fp_to_thread won't set TIF_RESTORE_TM again.
-	 */
-	regs->msr &= ~MSR_TS_MASK;
-
 	/* Save both sets of general registers */
 	if (save_general_regs(&current->thread.ckpt_regs, frame)
 	    || save_general_regs(regs, tm_frame))
@@ -912,6 +904,10 @@ int handle_rt_signal32(struct ksignal *ksig, sigset_t *oldset,
 	int sigret;
 	unsigned long tramp;
 	struct pt_regs *regs = tsk->thread.regs;
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	/* Save the thread's msr before get_tm_stackpointer() changes it */
+	unsigned long msr = regs->msr;
+#endif

 	BUG_ON(tsk != current);

@@ -944,13 +940,13 @@ int handle_rt_signal32(struct ksignal *ksig, sigset_t *oldset,

 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
 	tm_frame = &rt_sf->uc_transact.uc_mcontext;
-	if (MSR_TM_ACTIVE(regs->msr)) {
+	if (MSR_TM_ACTIVE(msr)) {
 		if (__put_user((unsigned long)&rt_sf->uc_transact,
 			       &rt_sf->uc.uc_link) ||
 		    __put_user((unsigned long)tm_frame,
 			       &rt_sf->uc_transact.uc_regs))
 			goto badframe;
-		if (save_tm_user_regs(regs, frame, tm_frame, sigret))
+		if (save_tm_user_regs(regs, frame, tm_frame, sigret, msr))
 			goto badframe;
 	}
 	else
@@ -1369,6 +1365,10 @@ int handle_signal32(struct ksignal *ksig, sigset_t *oldset,
 	int sigret;
 	unsigned long tramp;
 	struct pt_regs *regs = tsk->thread.regs;
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	/* Save the thread's msr before get_tm_stackpointer() changes it */
+	unsigned long msr = regs->msr;
+#endif

 	BUG_ON(tsk != current);

@@ -1402,9 +1402,9 @@ int handle_signal32(struct ksignal *ksig, sigset_t *oldset,

 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
 	tm_mctx = &frame->mctx_transact;
-	if (MSR_TM_ACTIVE(regs->msr)) {
+	if (MSR_TM_ACTIVE(msr)) {
 		if (save_tm_user_regs(regs, &frame->mctx, &frame->mctx_transact,
-				      sigret))
+				      sigret, msr))
 			goto badframe;
 	}
 	else


@@ -192,7 +192,8 @@ static long setup_sigcontext(struct sigcontext __user *sc,
 static long setup_tm_sigcontexts(struct sigcontext __user *sc,
				 struct sigcontext __user *tm_sc,
				 struct task_struct *tsk,
-				 int signr, sigset_t *set, unsigned long handler)
+				 int signr, sigset_t *set, unsigned long handler,
+				 unsigned long msr)
 {
 	/* When CONFIG_ALTIVEC is set, we _always_ setup v_regs even if the
 	 * process never used altivec yet (MSR_VEC is zero in pt_regs of
@@ -207,12 +208,11 @@ static long setup_tm_sigcontexts(struct sigcontext __user *sc,
 	elf_vrreg_t __user *tm_v_regs = sigcontext_vmx_regs(tm_sc);
 #endif
 	struct pt_regs *regs = tsk->thread.regs;
-	unsigned long msr = tsk->thread.regs->msr;
 	long err = 0;

 	BUG_ON(tsk != current);

-	BUG_ON(!MSR_TM_ACTIVE(regs->msr));
+	BUG_ON(!MSR_TM_ACTIVE(msr));

 	WARN_ON(tm_suspend_disabled);

@@ -222,13 +222,6 @@ static long setup_tm_sigcontexts(struct sigcontext __user *sc,
 	 */
 	msr |= tsk->thread.ckpt_regs.msr & (MSR_FP | MSR_VEC | MSR_VSX);

-	/* Remove TM bits from thread's MSR.  The MSR in the sigcontext
-	 * just indicates to userland that we were doing a transaction, but we
-	 * don't want to return in transactional state.  This also ensures
-	 * that flush_fp_to_thread won't set TIF_RESTORE_TM again.
-	 */
-	regs->msr &= ~MSR_TS_MASK;
-
 #ifdef CONFIG_ALTIVEC
 	err |= __put_user(v_regs, &sc->v_regs);
 	err |= __put_user(tm_v_regs, &tm_sc->v_regs);
@@ -824,6 +817,10 @@ int handle_rt_signal64(struct ksignal *ksig, sigset_t *set,
 	unsigned long newsp = 0;
 	long err = 0;
 	struct pt_regs *regs = tsk->thread.regs;
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	/* Save the thread's msr before get_tm_stackpointer() changes it */
+	unsigned long msr = regs->msr;
+#endif

 	BUG_ON(tsk != current);

@@ -841,7 +838,7 @@ int handle_rt_signal64(struct ksignal *ksig, sigset_t *set,
 	err |= __put_user(0, &frame->uc.uc_flags);
 	err |= __save_altstack(&frame->uc.uc_stack, regs->gpr[1]);
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-	if (MSR_TM_ACTIVE(regs->msr)) {
+	if (MSR_TM_ACTIVE(msr)) {
 		/* The ucontext_t passed to userland points to the second
 		 * ucontext_t (for transactional state) with its uc_link ptr.
 		 */
@@ -849,7 +846,8 @@ int handle_rt_signal64(struct ksignal *ksig, sigset_t *set,
 		err |= setup_tm_sigcontexts(&frame->uc.uc_mcontext,
					    &frame->uc_transact.uc_mcontext,
					    tsk, ksig->sig, NULL,
-					    (unsigned long)ksig->ka.sa.sa_handler);
+					    (unsigned long)ksig->ka.sa.sa_handler,
+					    msr);
 	} else
 #endif
 	{


@@ -53,19 +53,23 @@ static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
 	if (pshift >= pdshift) {
 		cachep = PGT_CACHE(PTE_T_ORDER);
 		num_hugepd = 1 << (pshift - pdshift);
+		new = NULL;
 	} else if (IS_ENABLED(CONFIG_PPC_8xx)) {
-		cachep = PGT_CACHE(PTE_INDEX_SIZE);
+		cachep = NULL;
 		num_hugepd = 1;
+		new = pte_alloc_one(mm);
 	} else {
 		cachep = PGT_CACHE(pdshift - pshift);
 		num_hugepd = 1;
+		new = NULL;
 	}

-	if (!cachep) {
+	if (!cachep && !new) {
 		WARN_ONCE(1, "No page table cache created for hugetlb tables");
 		return -ENOMEM;
 	}

-	new = kmem_cache_alloc(cachep, pgtable_gfp_flags(mm, GFP_KERNEL));
+	if (cachep)
+		new = kmem_cache_alloc(cachep, pgtable_gfp_flags(mm, GFP_KERNEL));

 	BUG_ON(pshift > HUGEPD_SHIFT_MASK);
@@ -97,7 +101,10 @@ static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
 	if (i < num_hugepd) {
 		for (i = i - 1 ; i >= 0; i--, hpdp--)
 			*hpdp = __hugepd(0);
-		kmem_cache_free(cachep, new);
+		if (cachep)
+			kmem_cache_free(cachep, new);
+		else
+			pte_free(mm, new);
 	} else {
 		kmemleak_ignore(new);
 	}
@@ -324,8 +331,7 @@ static void free_hugepd_range(struct mmu_gather *tlb, hugepd_t *hpdp, int pdshif
 	if (shift >= pdshift)
 		hugepd_free(tlb, hugepte);
 	else if (IS_ENABLED(CONFIG_PPC_8xx))
-		pgtable_free_tlb(tlb, hugepte,
-				 get_hugepd_cache_index(PTE_INDEX_SIZE));
+		pgtable_free_tlb(tlb, hugepte, 0);
 	else
 		pgtable_free_tlb(tlb, hugepte,
				 get_hugepd_cache_index(pdshift - shift));
@@ -639,12 +645,13 @@ static int __init hugetlbpage_init(void)
 		 * if we have pdshift and shift value same, we don't
 		 * use pgt cache for hugepd.
 		 */
-		if (pdshift > shift && IS_ENABLED(CONFIG_PPC_8xx))
-			pgtable_cache_add(PTE_INDEX_SIZE);
-		else if (pdshift > shift)
-			pgtable_cache_add(pdshift - shift);
-		else if (IS_ENABLED(CONFIG_PPC_FSL_BOOK3E) || IS_ENABLED(CONFIG_PPC_8xx))
+		if (pdshift > shift) {
+			if (!IS_ENABLED(CONFIG_PPC_8xx))
+				pgtable_cache_add(pdshift - shift);
+		} else if (IS_ENABLED(CONFIG_PPC_FSL_BOOK3E) ||
+			   IS_ENABLED(CONFIG_PPC_8xx)) {
 			pgtable_cache_add(PTE_T_ORDER);
+		}

 		configured = true;
 	}


@@ -75,7 +75,7 @@ static unsigned long get_random(unsigned long limit)
 		*(unsigned long *) prng.parm_block ^= seed;
 		for (i = 0; i < 16; i++) {
 			cpacf_kmc(CPACF_KMC_PRNG, prng.parm_block,
-				  (char *) entropy, (char *) entropy,
+				  (u8 *) entropy, (u8 *) entropy,
				  sizeof(entropy));
 			memcpy(prng.parm_block, entropy, sizeof(entropy));
 		}


@@ -42,7 +42,7 @@ void __storage_key_init_range(unsigned long start, unsigned long end);

 static inline void storage_key_init_range(unsigned long start, unsigned long end)
 {
-	if (PAGE_DEFAULT_KEY)
+	if (PAGE_DEFAULT_KEY != 0)
 		__storage_key_init_range(start, end);
 }
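
The explicit comparison is about compiler diagnostics rather than semantics: PAGE_DEFAULT_KEY expands to a shift expression, and clang's -Wint-in-bool-context warns when a shift result is used directly as a boolean. A reduced example (the macro values are stand-ins, not the s390 definitions):

#include <stdio.h>

#define DEFAULT_ACC 0			/* illustrative stand-in value */
#define DEFAULT_KEY (DEFAULT_ACC << 4)

int main(void)
{
	if (DEFAULT_KEY)		/* may warn: '<<' result used as boolean */
		puts("non-zero key");
	if (DEFAULT_KEY != 0)		/* explicit comparison: same logic, no warning */
		puts("non-zero key");
	return 0;
}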


@@ -1098,7 +1098,7 @@ struct kvm_x86_ops {
 	void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
 	void (*set_virtual_apic_mode)(struct kvm_vcpu *vcpu);
 	void (*set_apic_access_page_addr)(struct kvm_vcpu *vcpu, hpa_t hpa);
-	void (*deliver_posted_interrupt)(struct kvm_vcpu *vcpu, int vector);
+	int (*deliver_posted_interrupt)(struct kvm_vcpu *vcpu, int vector);
 	int (*sync_pir_to_irr)(struct kvm_vcpu *vcpu);
 	int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
 	int (*set_identity_map_addr)(struct kvm *kvm, u64 ident_addr);


@@ -510,6 +510,8 @@
 #define MSR_K7_HWCR			0xc0010015
 #define MSR_K7_HWCR_SMMLOCK_BIT		0
 #define MSR_K7_HWCR_SMMLOCK		BIT_ULL(MSR_K7_HWCR_SMMLOCK_BIT)
+#define MSR_K7_HWCR_IRPERF_EN_BIT	30
+#define MSR_K7_HWCR_IRPERF_EN		BIT_ULL(MSR_K7_HWCR_IRPERF_EN_BIT)
 #define MSR_K7_FID_VID_CTL		0xc0010041
 #define MSR_K7_FID_VID_STATUS		0xc0010042


@@ -28,6 +28,7 @@

 static const int amd_erratum_383[];
 static const int amd_erratum_400[];
+static const int amd_erratum_1054[];
 static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum);

 /*
@@ -978,6 +979,15 @@ static void init_amd(struct cpuinfo_x86 *c)
 	/* AMD CPUs don't reset SS attributes on SYSRET, Xen does. */
 	if (!cpu_has(c, X86_FEATURE_XENPV))
 		set_cpu_bug(c, X86_BUG_SYSRET_SS_ATTRS);
+
+	/*
+	 * Turn on the Instructions Retired free counter on machines not
+	 * susceptible to erratum #1054 "Instructions Retired Performance
+	 * Counter May Be Inaccurate".
+	 */
+	if (cpu_has(c, X86_FEATURE_IRPERF) &&
+	    !cpu_has_amd_erratum(c, amd_erratum_1054))
+		msr_set_bit(MSR_K7_HWCR, MSR_K7_HWCR_IRPERF_EN_BIT);
 }

 #ifdef CONFIG_X86_32
@@ -1105,6 +1115,10 @@ static const int amd_erratum_400[] =
 static const int amd_erratum_383[] =
	AMD_OSVW_ERRATUM(3, AMD_MODEL_RANGE(0x10, 0, 0, 0xff, 0xf));

+/* #1054: Instructions Retired Performance Counter May Be Inaccurate */
+static const int amd_erratum_1054[] =
+	AMD_OSVW_ERRATUM(0, AMD_MODEL_RANGE(0x17, 0, 0, 0x2f, 0xf));
+
 static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum)
 {


@@ -1161,9 +1161,12 @@ static const struct sysfs_ops threshold_ops = {
 	.store			= store,
 };

+static void threshold_block_release(struct kobject *kobj);
+
 static struct kobj_type threshold_ktype = {
 	.sysfs_ops		= &threshold_ops,
 	.default_attrs		= default_attrs,
+	.release		= threshold_block_release,
 };

 static const char *get_name(unsigned int bank, struct threshold_block *b)
@@ -1196,8 +1199,9 @@ static const char *get_name(unsigned int bank, struct threshold_block *b)
 	return buf_mcatype;
 }

-static int allocate_threshold_blocks(unsigned int cpu, unsigned int bank,
-				     unsigned int block, u32 address)
+static int allocate_threshold_blocks(unsigned int cpu, struct threshold_bank *tb,
+				     unsigned int bank, unsigned int block,
+				     u32 address)
 {
 	struct threshold_block *b = NULL;
 	u32 low, high;
@@ -1241,16 +1245,12 @@ static int allocate_threshold_blocks(unsigned int cpu, unsigned int bank,

 	INIT_LIST_HEAD(&b->miscj);

-	if (per_cpu(threshold_banks, cpu)[bank]->blocks) {
-		list_add(&b->miscj,
-			 &per_cpu(threshold_banks, cpu)[bank]->blocks->miscj);
-	} else {
-		per_cpu(threshold_banks, cpu)[bank]->blocks = b;
-	}
+	if (tb->blocks)
+		list_add(&b->miscj, &tb->blocks->miscj);
+	else
+		tb->blocks = b;

-	err = kobject_init_and_add(&b->kobj, &threshold_ktype,
-				   per_cpu(threshold_banks, cpu)[bank]->kobj,
-				   get_name(bank, b));
+	err = kobject_init_and_add(&b->kobj, &threshold_ktype, tb->kobj, get_name(bank, b));
 	if (err)
 		goto out_free;
 recurse:
@@ -1258,7 +1258,7 @@ static int allocate_threshold_blocks(unsigned int cpu, unsigned int bank,
 	if (!address)
 		return 0;

-	err = allocate_threshold_blocks(cpu, bank, block, address);
+	err = allocate_threshold_blocks(cpu, tb, bank, block, address);
 	if (err)
 		goto out_free;

@@ -1343,8 +1343,6 @@ static int threshold_create_bank(unsigned int cpu, unsigned int bank)
 			goto out_free;
 		}
 	}

-	per_cpu(threshold_banks, cpu)[bank] = b;
-
 	if (is_shared_bank(bank)) {
 		refcount_set(&b->cpus, 1);

@@ -1355,9 +1353,13 @@ static int threshold_create_bank(unsigned int cpu, unsigned int bank)
 		}
 	}

-	err = allocate_threshold_blocks(cpu, bank, 0, msr_ops.misc(bank));
-	if (!err)
-		goto out;
+	err = allocate_threshold_blocks(cpu, b, bank, 0, msr_ops.misc(bank));
+	if (err)
+		goto out_free;
+
+	per_cpu(threshold_banks, cpu)[bank] = b;
+
+	return 0;

  out_free:
 	kfree(b);
@@ -1366,8 +1368,12 @@ static int threshold_create_bank(unsigned int cpu, unsigned int bank)
 	return err;
 }

-static void deallocate_threshold_block(unsigned int cpu,
-				       unsigned int bank)
+static void threshold_block_release(struct kobject *kobj)
+{
+	kfree(to_block(kobj));
+}
+
+static void deallocate_threshold_block(unsigned int cpu, unsigned int bank)
 {
 	struct threshold_block *pos = NULL;
 	struct threshold_block *tmp = NULL;
@@ -1377,13 +1383,11 @@ static void deallocate_threshold_block(unsigned int cpu,
 		return;

 	list_for_each_entry_safe(pos, tmp, &head->blocks->miscj, miscj) {
-		kobject_put(&pos->kobj);
 		list_del(&pos->miscj);
-		kfree(pos);
+		kobject_put(&pos->kobj);
 	}

-	kfree(per_cpu(threshold_banks, cpu)[bank]->blocks);
-	per_cpu(threshold_banks, cpu)[bank]->blocks = NULL;
+	kobject_put(&head->blocks->kobj);
 }

 static void __threshold_remove_blocks(struct threshold_bank *b)


@@ -10,8 +10,6 @@ extern struct boot_params boot_params;

 static enum efi_secureboot_mode get_sb_mode(void)
 {
-	efi_char16_t efi_SecureBoot_name[] = L"SecureBoot";
-	efi_char16_t efi_SetupMode_name[] = L"SecureBoot";
 	efi_guid_t efi_variable_guid = EFI_GLOBAL_VARIABLE_GUID;
 	efi_status_t status;
 	unsigned long size;
@@ -25,7 +23,7 @@ static enum efi_secureboot_mode get_sb_mode(void)
 	}

 	/* Get variable contents into buffer */
-	status = efi.get_variable(efi_SecureBoot_name, &efi_variable_guid,
+	status = efi.get_variable(L"SecureBoot", &efi_variable_guid,
				  NULL, &size, &secboot);
 	if (status == EFI_NOT_FOUND) {
 		pr_info("ima: secureboot mode disabled\n");
@@ -38,7 +36,7 @@ static enum efi_secureboot_mode get_sb_mode(void)
 	}

 	size = sizeof(setupmode);
-	status = efi.get_variable(efi_SetupMode_name, &efi_variable_guid,
+	status = efi.get_variable(L"SetupMode", &efi_variable_guid,
				  NULL, &size, &setupmode);

 	if (status != EFI_SUCCESS)	/* ignore unknown SetupMode */


@@ -416,7 +416,7 @@ void kvm_scan_ioapic_routes(struct kvm_vcpu *vcpu,

 		kvm_set_msi_irq(vcpu->kvm, entry, &irq);

-		if (irq.level && kvm_apic_match_dest(vcpu, NULL, 0,
+		if (irq.trig_mode && kvm_apic_match_dest(vcpu, NULL, 0,
					irq.dest_id, irq.dest_mode))
 			__set_bit(irq.vector, ioapic_handled_vectors);
 	}


@@ -637,9 +637,11 @@ static inline bool pv_eoi_enabled(struct kvm_vcpu *vcpu)
 static bool pv_eoi_get_pending(struct kvm_vcpu *vcpu)
 {
 	u8 val;
-	if (pv_eoi_get_user(vcpu, &val) < 0)
+	if (pv_eoi_get_user(vcpu, &val) < 0) {
 		printk(KERN_WARNING "Can't read EOI MSR value: 0x%llx\n",
			   (unsigned long long)vcpu->arch.pv_eoi.msr_val);
+		return false;
+	}
 	return val & 0x1;
 }
@@ -1056,11 +1058,8 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
						       apic->regs + APIC_TMR);
 		}

-		if (vcpu->arch.apicv_active)
-			kvm_x86_ops->deliver_posted_interrupt(vcpu, vector);
-		else {
+		if (kvm_x86_ops->deliver_posted_interrupt(vcpu, vector)) {
 			kvm_lapic_set_irr(vector, apic);
-
 			kvm_make_request(KVM_REQ_EVENT, vcpu);
 			kvm_vcpu_kick(vcpu);
 		}


@@ -5141,8 +5141,11 @@ static void svm_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
 	return;
 }

-static void svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec)
+static int svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec)
 {
+	if (!vcpu->arch.apicv_active)
+		return -1;
+
 	kvm_lapic_set_irr(vec, vcpu->arch.apic);
 	smp_mb__after_atomic();

@@ -5154,6 +5157,8 @@ static void svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec)
 		put_cpu();
 	} else
 		kvm_vcpu_wake_up(vcpu);
+
+	return 0;
 }

 static bool svm_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu)


@@ -12,6 +12,7 @@ extern bool __read_mostly enable_ept;
 extern bool __read_mostly enable_unrestricted_guest;
 extern bool __read_mostly enable_ept_ad_bits;
 extern bool __read_mostly enable_pml;
+extern bool __read_mostly enable_apicv;
 extern int __read_mostly pt_mode;

 #define PT_MODE_SYSTEM		0


@@ -5132,24 +5132,17 @@ static int handle_vmfunc(struct kvm_vcpu *vcpu)
 	return 1;
 }

-static bool nested_vmx_exit_handled_io(struct kvm_vcpu *vcpu,
-				       struct vmcs12 *vmcs12)
+/*
+ * Return true if an IO instruction with the specified port and size should cause
+ * a VM-exit into L1.
+ */
+bool nested_vmx_check_io_bitmaps(struct kvm_vcpu *vcpu, unsigned int port,
+				 int size)
 {
-	unsigned long exit_qualification;
+	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
 	gpa_t bitmap, last_bitmap;
-	unsigned int port;
-	int size;
 	u8 b;

-	if (!nested_cpu_has(vmcs12, CPU_BASED_USE_IO_BITMAPS))
-		return nested_cpu_has(vmcs12, CPU_BASED_UNCOND_IO_EXITING);
-
-	exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
-
-	port = exit_qualification >> 16;
-	size = (exit_qualification & 7) + 1;
-
 	last_bitmap = (gpa_t)-1;
 	b = -1;
@@ -5176,6 +5169,24 @@ static bool nested_vmx_exit_handled_io(struct kvm_vcpu *vcpu,
 	return false;
 }

+static bool nested_vmx_exit_handled_io(struct kvm_vcpu *vcpu,
+				       struct vmcs12 *vmcs12)
+{
+	unsigned long exit_qualification;
+	unsigned short port;
+	int size;
+
+	if (!nested_cpu_has(vmcs12, CPU_BASED_USE_IO_BITMAPS))
+		return nested_cpu_has(vmcs12, CPU_BASED_UNCOND_IO_EXITING);
+
+	exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
+
+	port = exit_qualification >> 16;
+	size = (exit_qualification & 7) + 1;
+
+	return nested_vmx_check_io_bitmaps(vcpu, port, size);
+}
+
 /*
  * Return 1 if we should exit from L2 to L1 to handle an MSR access access,
  * rather than handle it ourselves in L0. I.e., check whether L1 expressed
@@ -5796,8 +5807,7 @@ void nested_vmx_vcpu_setup(void)
  * bit in the high half is on if the corresponding bit in the control field
  * may be on. See also vmx_control_verify().
  */
-void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, u32 ept_caps,
-				bool apicv)
+void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, u32 ept_caps)
 {
 	/*
 	 * Note that as a general rule, the high half of the MSRs (bits in
@@ -5824,7 +5834,7 @@ void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, u32 ept_caps,
 		PIN_BASED_EXT_INTR_MASK |
 		PIN_BASED_NMI_EXITING |
 		PIN_BASED_VIRTUAL_NMIS |
-		(apicv ? PIN_BASED_POSTED_INTR : 0);
+		(enable_apicv ? PIN_BASED_POSTED_INTR : 0);
 	msrs->pinbased_ctls_high |=
 		PIN_BASED_ALWAYSON_WITHOUT_TRUE_MSR |
 		PIN_BASED_VMX_PREEMPTION_TIMER;


@@ -17,8 +17,7 @@ enum nvmx_vmentry_status {
 };

 void vmx_leave_nested(struct kvm_vcpu *vcpu);
-void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, u32 ept_caps,
-				bool apicv);
+void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, u32 ept_caps);
 void nested_vmx_hardware_unsetup(void);
 __init int nested_vmx_hardware_setup(int (*exit_handlers[])(struct kvm_vcpu *));
 void nested_vmx_vcpu_setup(void);
@@ -33,6 +32,8 @@ int vmx_set_vmx_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data);
 int vmx_get_vmx_msr(struct nested_vmx_msrs *msrs, u32 msr_index, u64 *pdata);
 int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
			u32 vmx_instruction_info, bool wr, int len, gva_t *ret);
+bool nested_vmx_check_io_bitmaps(struct kvm_vcpu *vcpu, unsigned int port,
+				 int size);

 static inline struct vmcs12 *get_vmcs12(struct kvm_vcpu *vcpu)
 {


@@ -95,7 +95,7 @@ module_param(emulate_invalid_guest_state, bool, S_IRUGO);
 static bool __read_mostly fasteoi = 1;
 module_param(fasteoi, bool, S_IRUGO);

-static bool __read_mostly enable_apicv = 1;
+bool __read_mostly enable_apicv = 1;
 module_param(enable_apicv, bool, S_IRUGO);

 /*
@@ -3853,24 +3853,29 @@ static int vmx_deliver_nested_posted_interrupt(struct kvm_vcpu *vcpu,
  * 2. If target vcpu isn't running(root mode), kick it to pick up the
  * interrupt from PIR in next vmentry.
  */
-static void vmx_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vector)
+static int vmx_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vector)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	int r;

 	r = vmx_deliver_nested_posted_interrupt(vcpu, vector);
 	if (!r)
-		return;
+		return 0;
+
+	if (!vcpu->arch.apicv_active)
+		return -1;

 	if (pi_test_and_set_pir(vector, &vmx->pi_desc))
-		return;
+		return 0;

 	/* If a previous notification has sent the IPI, nothing to do.  */
 	if (pi_test_and_set_on(&vmx->pi_desc))
-		return;
+		return 0;

 	if (!kvm_vcpu_trigger_posted_interrupt(vcpu, false))
 		kvm_vcpu_kick(vcpu);
+
+	return 0;
 }

 /*
@@ -6802,8 +6807,7 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)

 	if (nested)
 		nested_vmx_setup_ctls_msrs(&vmx->nested.msrs,
-					   vmx_capability.ept,
-					   kvm_vcpu_apicv_active(&vmx->vcpu));
+					   vmx_capability.ept);
 	else
 		memset(&vmx->nested.msrs, 0, sizeof(vmx->nested.msrs));

@@ -6885,8 +6889,7 @@ static int __init vmx_check_processor_compat(void)
 	if (setup_vmcs_config(&vmcs_conf, &vmx_cap) < 0)
 		return -EIO;
 	if (nested)
-		nested_vmx_setup_ctls_msrs(&vmcs_conf.nested, vmx_cap.ept,
-					   enable_apicv);
+		nested_vmx_setup_ctls_msrs(&vmcs_conf.nested, vmx_cap.ept);
 	if (memcmp(&vmcs_config, &vmcs_conf, sizeof(struct vmcs_config)) != 0) {
 		printk(KERN_ERR "kvm: CPU %d feature inconsistency!\n",
				smp_processor_id());
@@ -7132,6 +7135,39 @@ static void vmx_request_immediate_exit(struct kvm_vcpu *vcpu)
 	to_vmx(vcpu)->req_immediate_exit = true;
 }

+static int vmx_check_intercept_io(struct kvm_vcpu *vcpu,
+				  struct x86_instruction_info *info)
+{
+	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+	unsigned short port;
+	bool intercept;
+	int size;
+
+	if (info->intercept == x86_intercept_in ||
+	    info->intercept == x86_intercept_ins) {
+		port = info->src_val;
+		size = info->dst_bytes;
+	} else {
+		port = info->dst_val;
+		size = info->src_bytes;
+	}
+
+	/*
+	 * If the 'use IO bitmaps' VM-execution control is 0, IO instruction
+	 * VM-exits depend on the 'unconditional IO exiting' VM-execution
+	 * control.
+	 *
+	 * Otherwise, IO instruction VM-exits are controlled by the IO bitmaps.
+	 */
+	if (!nested_cpu_has(vmcs12, CPU_BASED_USE_IO_BITMAPS))
+		intercept = nested_cpu_has(vmcs12,
+					   CPU_BASED_UNCOND_IO_EXITING);
+	else
+		intercept = nested_vmx_check_io_bitmaps(vcpu, port, size);
+
+	return intercept ? X86EMUL_UNHANDLEABLE : X86EMUL_CONTINUE;
+}
+
 static int vmx_check_intercept(struct kvm_vcpu *vcpu,
			       struct x86_instruction_info *info,
			       enum x86_intercept_stage stage)
@@ -7139,19 +7175,31 @@ static int vmx_check_intercept(struct kvm_vcpu *vcpu,
 	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
 	struct x86_emulate_ctxt *ctxt = &vcpu->arch.emulate_ctxt;

+	switch (info->intercept) {
 	/*
 	 * RDPID causes #UD if disabled through secondary execution controls.
 	 * Because it is marked as EmulateOnUD, we need to intercept it here.
 	 */
-	if (info->intercept == x86_intercept_rdtscp &&
-	    !nested_cpu_has2(vmcs12, SECONDARY_EXEC_RDTSCP)) {
-		ctxt->exception.vector = UD_VECTOR;
-		ctxt->exception.error_code_valid = false;
-		return X86EMUL_PROPAGATE_FAULT;
+	case x86_intercept_rdtscp:
+		if (!nested_cpu_has2(vmcs12, SECONDARY_EXEC_RDTSCP)) {
+			ctxt->exception.vector = UD_VECTOR;
+			ctxt->exception.error_code_valid = false;
+			return X86EMUL_PROPAGATE_FAULT;
+		}
+		break;
+
+	case x86_intercept_in:
+	case x86_intercept_ins:
+	case x86_intercept_out:
+	case x86_intercept_outs:
+		return vmx_check_intercept_io(vcpu, info);
+
+	/* TODO: check more intercepts... */
+	default:
+		break;
 	}

-	/* TODO: check more intercepts... */
-	return X86EMUL_CONTINUE;
+	return X86EMUL_UNHANDLEABLE;
 }

 #ifdef CONFIG_X86_64
@@ -7736,7 +7784,7 @@ static __init int hardware_setup(void)

 	if (nested) {
 		nested_vmx_setup_ctls_msrs(&vmcs_config.nested,
-					   vmx_capability.ept, enable_apicv);
+					   vmx_capability.ept);

 		r = nested_vmx_hardware_setup(kvm_vmx_exit_handlers);
 		if (r)


@@ -26,7 +26,7 @@ const char *const hash_algo_name[HASH_ALGO__LAST] = {
 	[HASH_ALGO_TGR_128]	= "tgr128",
 	[HASH_ALGO_TGR_160]	= "tgr160",
 	[HASH_ALGO_TGR_192]	= "tgr192",
-	[HASH_ALGO_SM3_256]	= "sm3-256",
+	[HASH_ALGO_SM3_256]	= "sm3",
 	[HASH_ALGO_STREEBOG_256] = "streebog256",
 	[HASH_ALGO_STREEBOG_512] = "streebog512",
 };


@@ -265,4 +265,49 @@ static u32 acpi_ev_fixed_event_dispatch(u32 event)
		 handler) (acpi_gbl_fixed_event_handlers[event].context));
 }

+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_any_fixed_event_status_set
+ *
+ * PARAMETERS:  None
+ *
+ * RETURN:      TRUE or FALSE
+ *
+ * DESCRIPTION: Checks the PM status register for active fixed events
+ *
+ ******************************************************************************/
+
+u32 acpi_any_fixed_event_status_set(void)
+{
+	acpi_status status;
+	u32 in_status;
+	u32 in_enable;
+	u32 i;
+
+	status = acpi_hw_register_read(ACPI_REGISTER_PM1_ENABLE, &in_enable);
+	if (ACPI_FAILURE(status)) {
+		return (FALSE);
+	}
+
+	status = acpi_hw_register_read(ACPI_REGISTER_PM1_STATUS, &in_status);
+	if (ACPI_FAILURE(status)) {
+		return (FALSE);
+	}
+
+	/*
+	 * Check for all possible Fixed Events and dispatch those that are active
+	 */
+	for (i = 0; i < ACPI_NUM_FIXED_EVENTS; i++) {
+
+		/* Both the status and enable bits must be on for this event */
+
+		if ((in_status & acpi_gbl_fixed_event_info[i].status_bit_mask) &&
+		    (in_enable & acpi_gbl_fixed_event_info[i].enable_bit_mask)) {
+			return (TRUE);
+		}
+	}
+
+	return (FALSE);
+}
+
 #endif				/* !ACPI_REDUCED_HARDWARE */


@@ -992,6 +992,13 @@ static bool acpi_s2idle_wake(void)
 		if (irqd_is_wakeup_armed(irq_get_irq_data(acpi_sci_irq)))
 			return true;

+		/*
+		 * If the status bit of any enabled fixed event is set, the
+		 * wakeup is regarded as valid.
+		 */
+		if (acpi_any_fixed_event_status_set())
+			return true;
+
 		/*
 		 * If there are no EC events to process and at least one of the
 		 * other enabled GPEs is active, the wakeup is regarded as a


@@ -80,6 +80,7 @@ enum board_ids {

 static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent);
 static void ahci_remove_one(struct pci_dev *dev);
+static void ahci_shutdown_one(struct pci_dev *dev);
 static int ahci_vt8251_hardreset(struct ata_link *link, unsigned int *class,
				 unsigned long deadline);
 static int ahci_avn_hardreset(struct ata_link *link, unsigned int *class,
@@ -593,6 +594,7 @@ static struct pci_driver ahci_pci_driver = {
 	.id_table		= ahci_pci_tbl,
 	.probe			= ahci_init_one,
 	.remove			= ahci_remove_one,
+	.shutdown		= ahci_shutdown_one,
 	.driver = {
 		.pm		= &ahci_pci_pm_ops,
 	},
@@ -1864,6 +1866,11 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	return 0;
 }

+static void ahci_shutdown_one(struct pci_dev *pdev)
+{
+	ata_pci_shutdown_one(pdev);
+}
+
 static void ahci_remove_one(struct pci_dev *pdev)
 {
 	pm_runtime_get_noresume(&pdev->dev);


@@ -6762,6 +6762,26 @@ void ata_pci_remove_one(struct pci_dev *pdev)
 	ata_host_detach(host);
 }

+void ata_pci_shutdown_one(struct pci_dev *pdev)
+{
+	struct ata_host *host = pci_get_drvdata(pdev);
+	int i;
+
+	for (i = 0; i < host->n_ports; i++) {
+		struct ata_port *ap = host->ports[i];
+
+		ap->pflags |= ATA_PFLAG_FROZEN;
+
+		/* Disable port interrupts */
+		if (ap->ops->freeze)
+			ap->ops->freeze(ap);
+
+		/* Stop the port DMA engines */
+		if (ap->ops->port_stop)
+			ap->ops->port_stop(ap);
+	}
+}
+
 /* move to PCI subsystem */
 int pci_test_config_bits(struct pci_dev *pdev, const struct pci_bits *bits)
 {
@@ -7382,6 +7402,7 @@ EXPORT_SYMBOL_GPL(ata_timing_cycle2mode);

 #ifdef CONFIG_PCI
 EXPORT_SYMBOL_GPL(pci_test_config_bits);
+EXPORT_SYMBOL_GPL(ata_pci_shutdown_one);
 EXPORT_SYMBOL_GPL(ata_pci_remove_one);

 #ifdef CONFIG_PM
 EXPORT_SYMBOL_GPL(ata_pci_device_do_suspend);


@@ -853,14 +853,17 @@ static void reset_fdc_info(int mode)
 /* selects the fdc and drive, and enables the fdc's input/dma. */
 static void set_fdc(int drive)
 {
+	unsigned int new_fdc = fdc;
+
 	if (drive >= 0 && drive < N_DRIVE) {
-		fdc = FDC(drive);
+		new_fdc = FDC(drive);
 		current_drive = drive;
 	}
-	if (fdc != 1 && fdc != 0) {
+	if (new_fdc >= N_FDC) {
 		pr_info("bad fdc value\n");
 		return;
 	}
+	fdc = new_fdc;
 	set_dor(fdc, ~0, 8);
 #if N_FDC > 1
 	set_dor(1 - fdc, ~8, 0);
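
The shape of the fix, distilled into a standalone sketch (the globals and macros below are stand-ins for the driver's, not its real definitions): compute the candidate controller index in a local, validate it, and only then commit it to the global, instead of assigning first and bailing out afterwards.

#include <stdio.h>

#define N_FDC 2
#define N_DRIVE 8
#define FDC(drive) ((drive) / 4)	/* two controllers, four drives each */

static unsigned int fdc;		/* currently selected controller */
static int current_drive;

static void set_fdc(int drive)
{
	unsigned int new_fdc = fdc;

	if (drive >= 0 && drive < N_DRIVE) {
		new_fdc = FDC(drive);
		current_drive = drive;
	}
	/* Reject out-of-range indices before they become visible; the old
	 * code stored into the global first, so a bad index could be used
	 * by other paths before the check fired. */
	if (new_fdc >= N_FDC) {
		puts("bad fdc value");
		return;
	}
	fdc = new_fdc;			/* commit only after validation */
}

int main(void)
{
	set_fdc(7);
	printf("fdc=%u drive=%d\n", fdc, current_drive);
	return 0;
}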


@@ -831,6 +831,8 @@ static int tpm2_init_bank_info(struct tpm_chip *chip, u32 bank_index)
 		return 0;
 	}

+	bank->crypto_id = HASH_ALGO__LAST;
+
 	return tpm2_pcr_read(chip, 0, &digest, &bank->digest_size);
 }


@@ -760,7 +760,11 @@ static void sdma_start_desc(struct sdma_channel *sdmac)
 		return;
 	}
 	sdmac->desc = desc = to_sdma_desc(&vd->tx);
-	list_del(&vd->node);
+	/*
+	 * Do not delete the node in desc_issued list in cyclic mode, otherwise
+	 * the desc allocated will never be freed in vchan_dma_desc_free_list
+	 */
+	if (!(sdmac->flags & IMX_DMA_SG_LOOP))
+		list_del(&vd->node);

 	sdma->channel_control[channel].base_bd_ptr = desc->bd_phys;
@@ -1067,6 +1071,7 @@ static void sdma_channel_terminate_work(struct work_struct *work)

 	spin_lock_irqsave(&sdmac->vc.lock, flags);
 	vchan_get_all_descriptors(&sdmac->vc, &head);
+	sdmac->desc = NULL;
 	spin_unlock_irqrestore(&sdmac->vc.lock, flags);
 	vchan_dma_desc_free_list(&sdmac->vc, &head);
 	sdmac->context_loaded = false;
@@ -1075,19 +1080,11 @@ static void sdma_channel_terminate_work(struct work_struct *work)
 static int sdma_disable_channel_async(struct dma_chan *chan)
 {
 	struct sdma_channel *sdmac = to_sdma_chan(chan);
-	unsigned long flags;
-
-	spin_lock_irqsave(&sdmac->vc.lock, flags);

 	sdma_disable_channel(chan);

-	if (sdmac->desc) {
-		vchan_terminate_vdesc(&sdmac->desc->vd);
-		sdmac->desc = NULL;
+	if (sdmac->desc)
 		schedule_work(&sdmac->terminate_worker);
-	}
-
-	spin_unlock_irqrestore(&sdmac->vc.lock, flags);

 	return 0;
 }


@@ -3977,11 +3977,13 @@ static uint64_t gfx_v10_0_get_gpu_clock_counter(struct amdgpu_device *adev)
 {
 	uint64_t clock;

+	amdgpu_gfx_off_ctrl(adev, false);
 	mutex_lock(&adev->gfx.gpu_clock_mutex);
 	WREG32_SOC15(GC, 0, mmRLC_CAPTURE_GPU_CLOCK_COUNT, 1);
 	clock = (uint64_t)RREG32_SOC15(GC, 0, mmRLC_GPU_CLOCK_COUNT_LSB) |
		((uint64_t)RREG32_SOC15(GC, 0, mmRLC_GPU_CLOCK_COUNT_MSB) << 32ULL);
 	mutex_unlock(&adev->gfx.gpu_clock_mutex);
+	amdgpu_gfx_off_ctrl(adev, true);
 	return clock;
 }


@@ -4080,11 +4080,13 @@ static uint64_t gfx_v9_0_get_gpu_clock_counter(struct amdgpu_device *adev)
 {
 	uint64_t clock;

+	amdgpu_gfx_off_ctrl(adev, false);
 	mutex_lock(&adev->gfx.gpu_clock_mutex);
 	WREG32_SOC15(GC, 0, mmRLC_CAPTURE_GPU_CLOCK_COUNT, 1);
 	clock = (uint64_t)RREG32_SOC15(GC, 0, mmRLC_GPU_CLOCK_COUNT_LSB) |
		((uint64_t)RREG32_SOC15(GC, 0, mmRLC_GPU_CLOCK_COUNT_MSB) << 32ULL);
 	mutex_unlock(&adev->gfx.gpu_clock_mutex);
+	amdgpu_gfx_off_ctrl(adev, true);
 	return clock;
 }


@@ -267,7 +267,12 @@ static u32 soc15_get_config_memsize(struct amdgpu_device *adev)

 static u32 soc15_get_xclk(struct amdgpu_device *adev)
 {
-	return adev->clock.spll.reference_freq;
+	u32 reference_clock = adev->clock.spll.reference_freq;
+
+	if (adev->asic_type == CHIP_RAVEN)
+		return reference_clock / 4;
+
+	return reference_clock;
 }


@@ -294,7 +294,7 @@ static inline int tc_poll_timeout(struct tc_data *tc, unsigned int addr,
 
 static int tc_aux_wait_busy(struct tc_data *tc)
 {
-	return tc_poll_timeout(tc, DP0_AUXSTATUS, AUX_BUSY, 0, 1000, 100000);
+	return tc_poll_timeout(tc, DP0_AUXSTATUS, AUX_BUSY, 0, 100, 100000);
 }
 
 static int tc_aux_write_data(struct tc_data *tc, const void *data,
@@ -637,7 +637,7 @@ static int tc_aux_link_setup(struct tc_data *tc)
 	if (ret)
 		goto err;
 
-	ret = tc_poll_timeout(tc, DP_PHY_CTRL, PHY_RDY, PHY_RDY, 1, 1000);
+	ret = tc_poll_timeout(tc, DP_PHY_CTRL, PHY_RDY, PHY_RDY, 100, 100000);
 	if (ret == -ETIMEDOUT) {
 		dev_err(tc->dev, "Timeout waiting for PHY to become ready");
 		return ret;
@@ -861,7 +861,7 @@ static int tc_wait_link_training(struct tc_data *tc)
 	int ret;
 
 	ret = tc_poll_timeout(tc, DP0_LTSTAT, LT_LOOPDONE,
-			      LT_LOOPDONE, 1, 1000);
+			      LT_LOOPDONE, 500, 100000);
 	if (ret) {
 		dev_err(tc->dev, "Link training timeout waiting for LT_LOOPDONE!\n");
 		return ret;
@@ -934,7 +934,7 @@ static int tc_main_link_enable(struct tc_data *tc)
 	dp_phy_ctrl &= ~(DP_PHY_RST | PHY_M1_RST | PHY_M0_RST);
 	ret = regmap_write(tc->regmap, DP_PHY_CTRL, dp_phy_ctrl);
 
-	ret = tc_poll_timeout(tc, DP_PHY_CTRL, PHY_RDY, PHY_RDY, 1, 1000);
+	ret = tc_poll_timeout(tc, DP_PHY_CTRL, PHY_RDY, PHY_RDY, 500, 100000);
 	if (ret) {
 		dev_err(dev, "timeout waiting for phy become ready");
 		return ret;


@@ -75,9 +75,8 @@ config DRM_I915_CAPTURE_ERROR
 	help
 	  This option enables capturing the GPU state when a hang is detected.
 	  This information is vital for triaging hangs and assists in debugging.
-	  Please report any hang to
-	    https://bugs.freedesktop.org/enter_bug.cgi?product=DRI
-	  for triaging.
+	  Please report any hang for triaging according to:
+	    https://gitlab.freedesktop.org/drm/intel/-/wikis/How-to-file-i915-bugs
 
 	  If in doubt, say "Y".


@@ -10510,7 +10510,7 @@ static u32 intel_cursor_base(const struct intel_plane_state *plane_state)
 	u32 base;
 
 	if (INTEL_INFO(dev_priv)->display.cursor_needs_physical)
-		base = obj->phys_handle->busaddr;
+		base = sg_dma_address(obj->mm.pages->sgl);
 	else
 		base = intel_plane_ggtt_offset(plane_state);


@@ -240,9 +240,6 @@ struct drm_i915_gem_object {
 
 		void *gvt_info;
 	};
 
-	/** for phys allocated objects */
-	struct drm_dma_handle *phys_handle;
-
 };
 
 static inline struct drm_i915_gem_object *


@@ -21,88 +21,87 @@
 static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
 {
 	struct address_space *mapping = obj->base.filp->f_mapping;
-	struct drm_dma_handle *phys;
-	struct sg_table *st;
 	struct scatterlist *sg;
-	char *vaddr;
+	struct sg_table *st;
+	dma_addr_t dma;
+	void *vaddr;
+	void *dst;
 	int i;
-	int err;
 
 	if (WARN_ON(i915_gem_object_needs_bit17_swizzle(obj)))
 		return -EINVAL;
 
-	/* Always aligning to the object size, allows a single allocation
+	/*
+	 * Always aligning to the object size, allows a single allocation
 	 * to handle all possible callers, and given typical object sizes,
 	 * the alignment of the buddy allocation will naturally match.
 	 */
-	phys = drm_pci_alloc(obj->base.dev,
-			     roundup_pow_of_two(obj->base.size),
-			     roundup_pow_of_two(obj->base.size));
-	if (!phys)
+	vaddr = dma_alloc_coherent(&obj->base.dev->pdev->dev,
+				   roundup_pow_of_two(obj->base.size),
+				   &dma, GFP_KERNEL);
+	if (!vaddr)
 		return -ENOMEM;
 
-	vaddr = phys->vaddr;
-	for (i = 0; i < obj->base.size / PAGE_SIZE; i++) {
-		struct page *page;
-		char *src;
-
-		page = shmem_read_mapping_page(mapping, i);
-		if (IS_ERR(page)) {
-			err = PTR_ERR(page);
-			goto err_phys;
-		}
-
-		src = kmap_atomic(page);
-		memcpy(vaddr, src, PAGE_SIZE);
-		drm_clflush_virt_range(vaddr, PAGE_SIZE);
-		kunmap_atomic(src);
-
-		put_page(page);
-		vaddr += PAGE_SIZE;
-	}
-
-	intel_gt_chipset_flush(&to_i915(obj->base.dev)->gt);
-
 	st = kmalloc(sizeof(*st), GFP_KERNEL);
-	if (!st) {
-		err = -ENOMEM;
-		goto err_phys;
-	}
+	if (!st)
+		goto err_pci;
 
-	if (sg_alloc_table(st, 1, GFP_KERNEL)) {
-		kfree(st);
-		err = -ENOMEM;
-		goto err_phys;
-	}
+	if (sg_alloc_table(st, 1, GFP_KERNEL))
+		goto err_st;
 
 	sg = st->sgl;
 	sg->offset = 0;
 	sg->length = obj->base.size;
 
-	sg_dma_address(sg) = phys->busaddr;
+	sg_assign_page(sg, (struct page *)vaddr);
+	sg_dma_address(sg) = dma;
 	sg_dma_len(sg) = obj->base.size;
 
-	obj->phys_handle = phys;
+	dst = vaddr;
+	for (i = 0; i < obj->base.size / PAGE_SIZE; i++) {
+		struct page *page;
+		void *src;
+
+		page = shmem_read_mapping_page(mapping, i);
+		if (IS_ERR(page))
+			goto err_st;
+
+		src = kmap_atomic(page);
+		memcpy(dst, src, PAGE_SIZE);
+		drm_clflush_virt_range(dst, PAGE_SIZE);
+		kunmap_atomic(src);
+
+		put_page(page);
+		dst += PAGE_SIZE;
+	}
+
+	intel_gt_chipset_flush(&to_i915(obj->base.dev)->gt);
 
 	__i915_gem_object_set_pages(obj, st, sg->length);
 
 	return 0;
 
-err_phys:
-	drm_pci_free(obj->base.dev, phys);
-
-	return err;
+err_st:
+	kfree(st);
+err_pci:
+	dma_free_coherent(&obj->base.dev->pdev->dev,
+			  roundup_pow_of_two(obj->base.size),
+			  vaddr, dma);
+	return -ENOMEM;
 }
 
 static void
 i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj,
 			       struct sg_table *pages)
 {
+	dma_addr_t dma = sg_dma_address(pages->sgl);
+	void *vaddr = sg_page(pages->sgl);
+
 	__i915_gem_object_release_shmem(obj, pages, false);
 
 	if (obj->mm.dirty) {
 		struct address_space *mapping = obj->base.filp->f_mapping;
-		char *vaddr = obj->phys_handle->vaddr;
+		void *src = vaddr;
 		int i;
 
 		for (i = 0; i < obj->base.size / PAGE_SIZE; i++) {
@@ -114,15 +113,16 @@ i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj,
 				continue;
 
 			dst = kmap_atomic(page);
-			drm_clflush_virt_range(vaddr, PAGE_SIZE);
-			memcpy(dst, vaddr, PAGE_SIZE);
+			drm_clflush_virt_range(src, PAGE_SIZE);
+			memcpy(dst, src, PAGE_SIZE);
 			kunmap_atomic(dst);
 
 			set_page_dirty(page);
 			if (obj->mm.madv == I915_MADV_WILLNEED)
 				mark_page_accessed(page);
 			put_page(page);
-			vaddr += PAGE_SIZE;
+
+			src += PAGE_SIZE;
 		}
 		obj->mm.dirty = false;
 	}
@@ -130,7 +130,9 @@ i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj,
 	sg_free_table(pages);
 	kfree(pages);
 
-	drm_pci_free(obj->base.dev, obj->phys_handle);
+	dma_free_coherent(&obj->base.dev->pdev->dev,
+			  roundup_pow_of_two(obj->base.size),
+			  vaddr, dma);
 }
 
 static void phys_release(struct drm_i915_gem_object *obj)


@@ -250,6 +250,14 @@ static inline u32 intel_ring_wrap(const struct intel_ring *ring, u32 pos)
 	return pos & (ring->size - 1);
 }
 
+static inline int intel_ring_direction(const struct intel_ring *ring,
+				       u32 next, u32 prev)
+{
+	typecheck(typeof(ring->size), next);
+	typecheck(typeof(ring->size), prev);
+	return (next - prev) << ring->wrap;
+}
+
 static inline bool
 intel_ring_offset_valid(const struct intel_ring *ring,
 			unsigned int pos)


@@ -107,6 +107,7 @@ struct intel_ring {
 
 	u32 space;
 	u32 size;
+	u32 wrap;
 	u32 effective_size;
 };


@@ -471,12 +471,6 @@ lrc_descriptor(struct intel_context *ce, struct intel_engine_cs *engine)
 	return desc;
 }
 
-static void unwind_wa_tail(struct i915_request *rq)
-{
-	rq->tail = intel_ring_wrap(rq->ring, rq->wa_tail - WA_TAIL_BYTES);
-	assert_ring_tail_valid(rq->ring, rq->tail);
-}
-
 static struct i915_request *
 __unwind_incomplete_requests(struct intel_engine_cs *engine)
 {
@@ -495,7 +489,6 @@ __unwind_incomplete_requests(struct intel_engine_cs *engine)
 			continue; /* XXX */
 
 		__i915_request_unsubmit(rq);
-		unwind_wa_tail(rq);
 
 		/*
 		 * Push the request back into the queue for later resubmission.
@@ -650,13 +643,35 @@ execlists_schedule_out(struct i915_request *rq)
 	i915_request_put(rq);
 }
 
-static u64 execlists_update_context(const struct i915_request *rq)
+static u64 execlists_update_context(struct i915_request *rq)
 {
 	struct intel_context *ce = rq->hw_context;
-	u64 desc;
+	u64 desc = ce->lrc_desc;
+	u32 tail, prev;
 
-	ce->lrc_reg_state[CTX_RING_TAIL + 1] =
-		intel_ring_set_tail(rq->ring, rq->tail);
+	/*
+	 * WaIdleLiteRestore:bdw,skl
+	 *
+	 * We should never submit the context with the same RING_TAIL twice
+	 * just in case we submit an empty ring, which confuses the HW.
+	 *
+	 * We append a couple of NOOPs (gen8_emit_wa_tail) after the end of
+	 * the normal request to be able to always advance the RING_TAIL on
+	 * subsequent resubmissions (for lite restore). Should that fail us,
+	 * and we try and submit the same tail again, force the context
+	 * reload.
+	 *
+	 * If we need to return to a preempted context, we need to skip the
+	 * lite-restore and force it to reload the RING_TAIL. Otherwise, the
+	 * HW has a tendency to ignore us rewinding the TAIL to the end of
+	 * an earlier request.
+	 */
+	tail = intel_ring_set_tail(rq->ring, rq->tail);
+	prev = ce->lrc_reg_state[CTX_RING_TAIL + 1];
+	if (unlikely(intel_ring_direction(rq->ring, tail, prev) <= 0))
+		desc |= CTX_DESC_FORCE_RESTORE;
+	ce->lrc_reg_state[CTX_RING_TAIL + 1] = tail;
+	rq->tail = rq->wa_tail;
 
 	/*
 	 * Make sure the context image is complete before we submit it to HW.
@@ -675,7 +690,6 @@ static u64 execlists_update_context(const struct i915_request *rq)
 	 */
 	mb();
 
-	desc = ce->lrc_desc;
 	ce->lrc_desc &= ~CTX_DESC_FORCE_RESTORE;
 
 	return desc;
@@ -919,6 +933,11 @@ last_active(const struct intel_engine_execlists *execlists)
 	return *last;
 }
 
+#define for_each_waiter(p__, rq__) \
+	list_for_each_entry_lockless(p__, \
+				     &(rq__)->sched.waiters_list, \
+				     wait_link)
+
 static void defer_request(struct i915_request *rq, struct list_head * const pl)
 {
 	LIST_HEAD(list);
@@ -936,7 +955,7 @@ static void defer_request(struct i915_request *rq, struct list_head * const pl)
 		GEM_BUG_ON(i915_request_is_active(rq));
 		list_move_tail(&rq->sched.link, pl);
 
-		list_for_each_entry(p, &rq->sched.waiters_list, wait_link) {
+		for_each_waiter(p, rq) {
 			struct i915_request *w =
 				container_of(p->waiter, typeof(*w), sched);
@@ -1102,14 +1121,6 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 			 */
 			__unwind_incomplete_requests(engine);
 
-			/*
-			 * If we need to return to the preempted context, we
-			 * need to skip the lite-restore and force it to
-			 * reload the RING_TAIL. Otherwise, the HW has a
-			 * tendency to ignore us rewinding the TAIL to the
-			 * end of an earlier request.
-			 */
-			last->hw_context->lrc_desc |= CTX_DESC_FORCE_RESTORE;
 			last = NULL;
 		} else if (need_timeslice(engine, last) &&
 			   !timer_pending(&engine->execlists.timer)) {
@@ -1150,16 +1161,6 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 			if (!list_is_last(&last->sched.link,
 					  &engine->active.requests))
 				return;
-
-			/*
-			 * WaIdleLiteRestore:bdw,skl
-			 * Apply the wa NOOPs to prevent
-			 * ring:HEAD == rq:TAIL as we resubmit the
-			 * request. See gen8_emit_fini_breadcrumb() for
-			 * where we prepare the padding after the
-			 * end of the request.
-			 */
-			last->tail = last->wa_tail;
 		}
 	}


@@ -1312,6 +1312,8 @@ intel_engine_create_ring(struct intel_engine_cs *engine, int size)
 
 	kref_init(&ring->ref);
 	ring->size = size;
+	ring->wrap = BITS_PER_TYPE(ring->size) - ilog2(size);
+
 	/* Workaround an erratum on the i830 which causes a hang if
 	 * the TAIL pointer points to within the last 2 cachelines
 	 * of the buffer.
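For intuition: with a power-of-two ring, wrap = 32 - ilog2(size), so shifting the unsigned offset delta left by wrap moves the significant bits to the top of a 32-bit word and the sign of the result gives the direction around the ring. A minimal standalone sketch of this arithmetic (a toy with an assumed 4 KiB ring, not part of the patch):

    #include <stdint.h>
    #include <stdio.h>

    #define RING_SIZE 4096u     /* must be a power of two */
    #define RING_WRAP (32 - 12) /* BITS_PER_TYPE(u32) - ilog2(RING_SIZE) */

    static int ring_direction(uint32_t next, uint32_t prev)
    {
        /* Shift the delta into the top bits; the sign then says whether
         * next is ahead of (>0) or behind (<0) prev, modulo the ring. */
        return (int32_t)((next - prev) << RING_WRAP);
    }

    int main(void)
    {
        printf("%d\n", ring_direction(0x010, 0xff0) > 0); /* 1: wrapped forward */
        printf("%d\n", ring_direction(0xff0, 0x010) > 0); /* 0: behind */
        return 0;
    }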


@@ -1956,7 +1956,11 @@ void _intel_vgpu_mm_release(struct kref *mm_ref)
 
 	if (mm->type == INTEL_GVT_MM_PPGTT) {
 		list_del(&mm->ppgtt_mm.list);
+
+		mutex_lock(&mm->vgpu->gvt->gtt.ppgtt_mm_lock);
 		list_del(&mm->ppgtt_mm.lru_list);
+		mutex_unlock(&mm->vgpu->gvt->gtt.ppgtt_mm_lock);
+
 		invalidate_ppgtt_mm(mm);
 	} else {
 		vfree(mm->ggtt_mm.virtual_ggtt);


@@ -136,7 +136,7 @@ i915_gem_phys_pwrite(struct drm_i915_gem_object *obj,
 		     struct drm_i915_gem_pwrite *args,
 		     struct drm_file *file)
 {
-	void *vaddr = obj->phys_handle->vaddr + args->offset;
+	void *vaddr = sg_page(obj->mm.pages->sgl) + args->offset;
 	char __user *user_data = u64_to_user_ptr(args->data_ptr);
 
 	/*
@@ -802,10 +802,10 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
 		ret = i915_gem_gtt_pwrite_fast(obj, args);
 
 	if (ret == -EFAULT || ret == -ENOSPC) {
-		if (obj->phys_handle)
-			ret = i915_gem_phys_pwrite(obj, args, file);
-		else
+		if (i915_gem_object_has_struct_page(obj))
 			ret = i915_gem_shmem_pwrite(obj, args);
+		else
+			ret = i915_gem_phys_pwrite(obj, args, file);
 	}
 
 	i915_gem_object_unpin_pages(obj);


@@ -1768,7 +1768,8 @@ void i915_capture_error_state(struct drm_i915_private *i915,
 	if (!xchg(&warned, true) &&
 	    ktime_get_real_seconds() - DRIVER_TIMESTAMP < DAY_AS_SECONDS(180)) {
 		pr_info("GPU hangs can indicate a bug anywhere in the entire gfx stack, including userspace.\n");
-		pr_info("Please file a _new_ bug report on bugs.freedesktop.org against DRI -> DRM/Intel\n");
+		pr_info("Please file a _new_ bug report at https://gitlab.freedesktop.org/drm/intel/issues/new.\n");
+		pr_info("Please see https://gitlab.freedesktop.org/drm/intel/-/wikis/How-to-file-i915-bugs for details.\n");
 		pr_info("drm/i915 developers can then reassign to the right component if it's not a kernel issue.\n");
 		pr_info("The GPU crash dump is required to analyze GPU hangs, so please always attach it.\n");
 		pr_info("GPU crash dump saved to /sys/class/drm/card%d/error\n",


@@ -418,8 +418,6 @@ bool __i915_sched_node_add_dependency(struct i915_sched_node *node,
 	if (!node_signaled(signal)) {
 		INIT_LIST_HEAD(&dep->dfs_link);
-		list_add(&dep->wait_link, &signal->waiters_list);
-		list_add(&dep->signal_link, &node->signalers_list);
 		dep->signaler = signal;
 		dep->waiter = node;
 		dep->flags = flags;
@@ -429,6 +427,10 @@ bool __i915_sched_node_add_dependency(struct i915_sched_node *node,
 		    !node_started(signal))
 			node->flags |= I915_SCHED_HAS_SEMAPHORE_CHAIN;
 
+		/* All set, now publish. Beware the lockless walkers. */
+		list_add(&dep->signal_link, &node->signalers_list);
+		list_add_rcu(&dep->wait_link, &signal->waiters_list);
+
 		/*
 		 * As we do not allow WAIT to preempt inflight requests,
 		 * once we have executed a request, along with triggering


@@ -8,9 +8,8 @@
 
 #include "i915_drv.h"
 #include "i915_utils.h"
 
-#define FDO_BUG_URL "https://bugs.freedesktop.org/enter_bug.cgi?product=DRI"
-#define FDO_BUG_MSG "Please file a bug at " FDO_BUG_URL " against DRM/Intel " \
-		    "providing the dmesg log by booting with drm.debug=0xf"
+#define FDO_BUG_URL "https://gitlab.freedesktop.org/drm/intel/-/wikis/How-to-file-i915-bugs"
+#define FDO_BUG_MSG "Please file a bug on drm/i915; see " FDO_BUG_URL " for details."
 
 void
 __i915_printk(struct drm_i915_private *dev_priv, const char *level,


@@ -255,13 +255,13 @@ static const struct dpu_format dpu_format_map[] = {
 	INTERLEAVED_RGB_FMT(RGB565,
 		0, COLOR_5BIT, COLOR_6BIT, COLOR_5BIT,
-		C2_R_Cr, C0_G_Y, C1_B_Cb, 0, 3,
+		C1_B_Cb, C0_G_Y, C2_R_Cr, 0, 3,
 		false, 2, 0,
 		DPU_FETCH_LINEAR, 1),
 
 	INTERLEAVED_RGB_FMT(BGR565,
 		0, COLOR_5BIT, COLOR_6BIT, COLOR_5BIT,
-		C1_B_Cb, C0_G_Y, C2_R_Cr, 0, 3,
+		C2_R_Cr, C0_G_Y, C1_B_Cb, 0, 3,
 		false, 2, 0,
 		DPU_FETCH_LINEAR, 1),


@@ -451,6 +451,8 @@ nv50_wndw_atomic_check(struct drm_plane *plane, struct drm_plane_state *state)
 		asyw->clr.ntfy = armw->ntfy.handle != 0;
 		asyw->clr.sema = armw->sema.handle != 0;
 		asyw->clr.xlut = armw->xlut.handle != 0;
+		if (asyw->clr.xlut && asyw->visible)
+			asyw->set.xlut = asyw->xlut.handle != 0;
 		asyw->clr.csc = armw->csc.valid;
 		if (wndw->func->image_clr)
 			asyw->clr.image = armw->image.handle[0] != 0;


@@ -151,7 +151,12 @@ u32 panfrost_mmu_as_get(struct panfrost_device *pfdev, struct panfrost_mmu *mmu)
 	as = mmu->as;
 	if (as >= 0) {
 		int en = atomic_inc_return(&mmu->as_count);
-		WARN_ON(en >= NUM_JOB_SLOTS);
+
+		/*
+		 * AS can be retained by active jobs or a perfcnt context,
+		 * hence the '+ 1' here.
+		 */
+		WARN_ON(en >= (NUM_JOB_SLOTS + 1));
 
 		list_move(&mmu->list, &pfdev->as_lru_list);
 		goto out;


@@ -73,7 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 	struct panfrost_file_priv *user = file_priv->driver_priv;
 	struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
 	struct drm_gem_shmem_object *bo;
-	u32 cfg;
+	u32 cfg, as;
 	int ret;
 
 	if (user == perfcnt->user)
@@ -126,12 +126,8 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 	perfcnt->user = user;
 
-	/*
-	 * Always use address space 0 for now.
-	 * FIXME: this needs to be updated when we start using different
-	 * address space.
-	 */
-	cfg = GPU_PERFCNT_CFG_AS(0) |
+	as = panfrost_mmu_as_get(pfdev, perfcnt->mapping->mmu);
+	cfg = GPU_PERFCNT_CFG_AS(as) |
 	      GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_MANUAL);
 
 	/*
@@ -195,6 +191,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
 	drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf);
 	perfcnt->buf = NULL;
 	panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
+	panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
 	panfrost_gem_mapping_put(perfcnt->mapping);
 	perfcnt->mapping = NULL;
 	pm_runtime_mark_last_busy(pfdev->dev);


@@ -2575,6 +2575,17 @@ isert_wait4logout(struct isert_conn *isert_conn)
 	}
 }
 
+static void
+isert_wait4cmds(struct iscsi_conn *conn)
+{
+	isert_info("iscsi_conn %p\n", conn);
+
+	if (conn->sess) {
+		target_sess_cmd_list_set_waiting(conn->sess->se_sess);
+		target_wait_for_sess_cmds(conn->sess->se_sess);
+	}
+}
+
 /**
  * isert_put_unsol_pending_cmds() - Drop commands waiting for
  *     unsolicitate dataout
@@ -2622,6 +2633,7 @@ static void isert_wait_conn(struct iscsi_conn *conn)
 	ib_drain_qp(isert_conn->qp);
 	isert_put_unsol_pending_cmds(conn);
+	isert_wait4cmds(conn);
 	isert_wait4logout(isert_conn);
 
 	queue_work(isert_release_wq, &isert_conn->release_work);


@@ -345,21 +345,19 @@ static void qcom_iommu_domain_free(struct iommu_domain *domain)
 {
 	struct qcom_iommu_domain *qcom_domain = to_qcom_iommu_domain(domain);
 
-	if (WARN_ON(qcom_domain->iommu))    /* forgot to detach? */
-		return;
-
 	iommu_put_dma_cookie(domain);
 
-	/* NOTE: unmap can be called after client device is powered off,
-	 * for example, with GPUs or anything involving dma-buf.  So we
-	 * cannot rely on the device_link.  Make sure the IOMMU is on to
-	 * avoid unclocked accesses in the TLB inv path:
-	 */
-	pm_runtime_get_sync(qcom_domain->iommu->dev);
-
-	free_io_pgtable_ops(qcom_domain->pgtbl_ops);
-
-	pm_runtime_put_sync(qcom_domain->iommu->dev);
+	if (qcom_domain->iommu) {
+		/*
+		 * NOTE: unmap can be called after client device is powered
+		 * off, for example, with GPUs or anything involving dma-buf.
+		 * So we cannot rely on the device_link. Make sure the IOMMU
+		 * is on to avoid unclocked accesses in the TLB inv path:
+		 */
+		pm_runtime_get_sync(qcom_domain->iommu->dev);
+		free_io_pgtable_ops(qcom_domain->pgtbl_ops);
+		pm_runtime_put_sync(qcom_domain->iommu->dev);
+	}
 
 	kfree(qcom_domain);
 }
@@ -405,7 +403,7 @@ static void qcom_iommu_detach_dev(struct iommu_domain *domain, struct device *de
 	struct qcom_iommu_domain *qcom_domain = to_qcom_iommu_domain(domain);
 	unsigned i;
 
-	if (!qcom_domain->iommu)
+	if (WARN_ON(!qcom_domain->iommu))
 		return;
 
 	pm_runtime_get_sync(qcom_iommu->dev);
@@ -418,8 +416,6 @@ static void qcom_iommu_detach_dev(struct iommu_domain *domain, struct device *de
 		ctx->domain = NULL;
 	}
 	pm_runtime_put_sync(qcom_iommu->dev);
-
-	qcom_domain->iommu = NULL;
 }
 
 static int qcom_iommu_map(struct iommu_domain *domain, unsigned long iova,


@@ -4713,12 +4713,12 @@ int e1000e_close(struct net_device *netdev)
 
 	pm_runtime_get_sync(&pdev->dev);
 
-	if (!test_bit(__E1000_DOWN, &adapter->state)) {
+	if (netif_device_present(netdev)) {
 		e1000e_down(adapter, true);
 		e1000_free_irq(adapter);
 
 		/* Link status message must follow this format */
-		pr_info("%s NIC Link is Down\n", adapter->netdev->name);
+		pr_info("%s NIC Link is Down\n", netdev->name);
 	}
 
 	napi_disable(&adapter->napi);
@@ -6309,10 +6309,14 @@ static int e1000e_pm_freeze(struct device *dev)
 {
 	struct net_device *netdev = dev_get_drvdata(dev);
 	struct e1000_adapter *adapter = netdev_priv(netdev);
+	bool present;
+
+	rtnl_lock();
 
+	present = netif_device_present(netdev);
 	netif_device_detach(netdev);
 
-	if (netif_running(netdev)) {
+	if (present && netif_running(netdev)) {
 		int count = E1000_CHECK_RESET_COUNT;
 
 		while (test_bit(__E1000_RESETTING, &adapter->state) && count--)
@@ -6324,6 +6328,8 @@ static int e1000e_pm_freeze(struct device *dev)
 		e1000e_down(adapter, false);
 		e1000_free_irq(adapter);
 	}
+	rtnl_unlock();
+
 	e1000e_reset_interrupt_capability(adapter);
 
 	/* Allow time for pending master requests to run */
@@ -6571,6 +6577,30 @@ static void e1000e_disable_aspm_locked(struct pci_dev *pdev, u16 state)
 	__e1000e_disable_aspm(pdev, state, 1);
 }
 
+static int e1000e_pm_thaw(struct device *dev)
+{
+	struct net_device *netdev = dev_get_drvdata(dev);
+	struct e1000_adapter *adapter = netdev_priv(netdev);
+	int rc = 0;
+
+	e1000e_set_interrupt_capability(adapter);
+
+	rtnl_lock();
+	if (netif_running(netdev)) {
+		rc = e1000_request_irq(adapter);
+		if (rc)
+			goto err_irq;
+
+		e1000e_up(adapter);
+	}
+
+	netif_device_attach(netdev);
+err_irq:
+	rtnl_unlock();
+	return rc;
+}
+
 #ifdef CONFIG_PM
 static int __e1000_resume(struct pci_dev *pdev)
 {
@@ -6638,26 +6668,6 @@ static int __e1000_resume(struct pci_dev *pdev)
 }
 
 #ifdef CONFIG_PM_SLEEP
-static int e1000e_pm_thaw(struct device *dev)
-{
-	struct net_device *netdev = dev_get_drvdata(dev);
-	struct e1000_adapter *adapter = netdev_priv(netdev);
-
-	e1000e_set_interrupt_capability(adapter);
-	if (netif_running(netdev)) {
-		u32 err = e1000_request_irq(adapter);
-
-		if (err)
-			return err;
-
-		e1000e_up(adapter);
-	}
-
-	netif_device_attach(netdev);
-
-	return 0;
-}
-
 static int e1000e_pm_suspend(struct device *dev)
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
@@ -6829,16 +6839,11 @@ static void e1000_netpoll(struct net_device *netdev)
 static pci_ers_result_t e1000_io_error_detected(struct pci_dev *pdev,
 						pci_channel_state_t state)
 {
-	struct net_device *netdev = pci_get_drvdata(pdev);
-	struct e1000_adapter *adapter = netdev_priv(netdev);
-
-	netif_device_detach(netdev);
+	e1000e_pm_freeze(&pdev->dev);
 
 	if (state == pci_channel_io_perm_failure)
 		return PCI_ERS_RESULT_DISCONNECT;
 
-	if (netif_running(netdev))
-		e1000e_down(adapter, true);
 	pci_disable_device(pdev);
 
 	/* Request a slot slot reset. */
@@ -6904,10 +6909,7 @@ static void e1000_io_resume(struct pci_dev *pdev)
 
 	e1000_init_manageability_pt(adapter);
 
-	if (netif_running(netdev))
-		e1000e_up(adapter);
-
-	netif_device_attach(netdev);
+	e1000e_pm_thaw(&pdev->dev);
 
 	/* If the controller has AMT, do not set DRV_LOAD until the interface
 	 * is up.  For all other cases, let the f/w know that the h/w is now


@@ -200,7 +200,7 @@ int mlx5e_health_report(struct mlx5e_priv *priv,
 	netdev_err(priv->netdev, err_str);
 
 	if (!reporter)
-		return err_ctx->recover(&err_ctx->ctx);
+		return err_ctx->recover(err_ctx->ctx);
 
 	return devlink_health_report(reporter, err_str, err_ctx);
 }


@@ -179,6 +179,14 @@ mlx5e_tx_dma_unmap(struct device *pdev, struct mlx5e_sq_dma *dma)
 	}
 }
 
+static inline void mlx5e_rqwq_reset(struct mlx5e_rq *rq)
+{
+	if (rq->wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ)
+		mlx5_wq_ll_reset(&rq->mpwqe.wq);
+	else
+		mlx5_wq_cyc_reset(&rq->wqe.wq);
+}
+
 /* SW parser related functions */
 
 struct mlx5e_swp_spec {


@@ -723,6 +723,9 @@ int mlx5e_modify_rq_state(struct mlx5e_rq *rq, int curr_state, int next_state)
 	if (!in)
 		return -ENOMEM;
 
+	if (curr_state == MLX5_RQC_STATE_RST && next_state == MLX5_RQC_STATE_RDY)
+		mlx5e_rqwq_reset(rq);
+
 	rqc = MLX5_ADDR_OF(modify_rq_in, in, ctx);
 
 	MLX5_SET(modify_rq_in, in, rq_state, curr_state);


@@ -2319,25 +2319,17 @@ int mlx5_eswitch_set_vepa(struct mlx5_eswitch *esw, u8 setting)
 
 int mlx5_eswitch_get_vepa(struct mlx5_eswitch *esw, u8 *setting)
 {
-	int err = 0;
-
 	if (!esw)
 		return -EOPNOTSUPP;
 
 	if (!ESW_ALLOWED(esw))
 		return -EPERM;
 
-	mutex_lock(&esw->state_lock);
-	if (esw->mode != MLX5_ESWITCH_LEGACY) {
-		err = -EOPNOTSUPP;
-		goto out;
-	}
+	if (esw->mode != MLX5_ESWITCH_LEGACY)
+		return -EOPNOTSUPP;
 
 	*setting = esw->fdb_table.legacy.vepa_uplink_rule ? 1 : 0;
-
-out:
-	mutex_unlock(&esw->state_lock);
-	return err;
+	return 0;
 }
 
 int mlx5_eswitch_set_vport_trust(struct mlx5_eswitch *esw,


@@ -96,6 +96,13 @@ int mlx5_wq_cyc_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
 	return err;
 }
 
+void mlx5_wq_cyc_reset(struct mlx5_wq_cyc *wq)
+{
+	wq->wqe_ctr = 0;
+	wq->cur_sz = 0;
+	mlx5_wq_cyc_update_db_record(wq);
+}
+
 int mlx5_wq_qp_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
 		      void *qpc, struct mlx5_wq_qp *wq,
 		      struct mlx5_wq_ctrl *wq_ctrl)
@@ -194,6 +201,19 @@ int mlx5_cqwq_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
 	return err;
 }
 
+static void mlx5_wq_ll_init_list(struct mlx5_wq_ll *wq)
+{
+	struct mlx5_wqe_srq_next_seg *next_seg;
+	int i;
+
+	for (i = 0; i < wq->fbc.sz_m1; i++) {
+		next_seg = mlx5_wq_ll_get_wqe(wq, i);
+		next_seg->next_wqe_index = cpu_to_be16(i + 1);
+	}
+	next_seg = mlx5_wq_ll_get_wqe(wq, i);
+	wq->tail_next = &next_seg->next_wqe_index;
+}
+
 int mlx5_wq_ll_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
 		      void *wqc, struct mlx5_wq_ll *wq,
 		      struct mlx5_wq_ctrl *wq_ctrl)
@@ -201,9 +221,7 @@ int mlx5_wq_ll_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
 	u8 log_wq_stride = MLX5_GET(wq, wqc, log_wq_stride);
 	u8 log_wq_sz = MLX5_GET(wq, wqc, log_wq_sz);
 	struct mlx5_frag_buf_ctrl *fbc = &wq->fbc;
-	struct mlx5_wqe_srq_next_seg *next_seg;
 	int err;
-	int i;
 
 	err = mlx5_db_alloc_node(mdev, &wq_ctrl->db, param->db_numa_node);
 	if (err) {
@@ -222,13 +240,7 @@ int mlx5_wq_ll_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
 	mlx5_init_fbc(wq_ctrl->buf.frags, log_wq_stride, log_wq_sz, fbc);
 
-	for (i = 0; i < fbc->sz_m1; i++) {
-		next_seg = mlx5_wq_ll_get_wqe(wq, i);
-		next_seg->next_wqe_index = cpu_to_be16(i + 1);
-	}
-	next_seg = mlx5_wq_ll_get_wqe(wq, i);
-	wq->tail_next = &next_seg->next_wqe_index;
+	mlx5_wq_ll_init_list(wq);
 
 	wq_ctrl->mdev = mdev;
 
 	return 0;
@@ -239,6 +251,15 @@ int mlx5_wq_ll_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
 	return err;
 }
 
+void mlx5_wq_ll_reset(struct mlx5_wq_ll *wq)
+{
+	wq->head = 0;
+	wq->wqe_ctr = 0;
+	wq->cur_sz = 0;
+	mlx5_wq_ll_init_list(wq);
+	mlx5_wq_ll_update_db_record(wq);
+}
+
 void mlx5_wq_destroy(struct mlx5_wq_ctrl *wq_ctrl)
 {
 	mlx5_frag_buf_free(wq_ctrl->mdev, &wq_ctrl->buf);


@@ -80,10 +80,12 @@ int mlx5_wq_cyc_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
 		       void *wqc, struct mlx5_wq_cyc *wq,
 		       struct mlx5_wq_ctrl *wq_ctrl);
 u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq);
+void mlx5_wq_cyc_reset(struct mlx5_wq_cyc *wq);
 
 int mlx5_wq_qp_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
 		      void *qpc, struct mlx5_wq_qp *wq,
 		      struct mlx5_wq_ctrl *wq_ctrl);
+void mlx5_wq_ll_reset(struct mlx5_wq_ll *wq);
 
 int mlx5_cqwq_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
 		     void *cqc, struct mlx5_cqwq *wq,


@@ -711,6 +711,7 @@ int nvme_mpath_init(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
 	}
 
 	INIT_WORK(&ctrl->ana_work, nvme_ana_work);
+	kfree(ctrl->ana_log_buf);
 	ctrl->ana_log_buf = kmalloc(ctrl->ana_log_size, GFP_KERNEL);
 	if (!ctrl->ana_log_buf) {
 		error = -ENOMEM;


@@ -351,8 +351,23 @@ static inline vm_flags_t calc_vm_may_flags(unsigned long prot)
 	       _calc_vm_trans(prot, PROT_EXEC,  VM_MAYEXEC);
 }
 
+static int ashmem_vmfile_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	/* do not allow to mmap ashmem backing shmem file directly */
+	return -EPERM;
+}
+
+static unsigned long
+ashmem_vmfile_get_unmapped_area(struct file *file, unsigned long addr,
+				unsigned long len, unsigned long pgoff,
+				unsigned long flags)
+{
+	return current->mm->get_unmapped_area(file, addr, len, pgoff, flags);
+}
+
 static int ashmem_mmap(struct file *file, struct vm_area_struct *vma)
 {
+	static struct file_operations vmfile_fops;
 	struct ashmem_area *asma = file->private_data;
 	int ret = 0;
 
@@ -393,6 +408,19 @@ static int ashmem_mmap(struct file *file, struct vm_area_struct *vma)
 		}
 		vmfile->f_mode |= FMODE_LSEEK;
 		asma->file = vmfile;
+		/*
+		 * override mmap operation of the vmfile so that it can't be
+		 * remapped which would lead to creation of a new vma with no
+		 * asma permission checks. Have to override get_unmapped_area
+		 * as well to prevent VM_BUG_ON check for f_ops modification.
+		 */
+		if (!vmfile_fops.mmap) {
+			vmfile_fops = *vmfile->f_op;
+			vmfile_fops.mmap = ashmem_vmfile_mmap;
+			vmfile_fops.get_unmapped_area =
+					ashmem_vmfile_get_unmapped_area;
+		}
+		vmfile->f_op = &vmfile_fops;
 	}
 	get_file(asma->file);


@@ -92,8 +92,8 @@ void gb_audio_manager_remove_all(void)
 
 	list_for_each_entry_safe(module, next, &modules_list, list) {
 		list_del(&module->list);
-		kobject_put(&module->kobj);
 		ida_simple_remove(&module_id, module->id);
+		kobject_put(&module->kobj);
 	}
 
 	is_empty = list_empty(&modules_list);


@@ -2025,7 +2025,7 @@ static int wpa_supplicant_ioctl(struct net_device *dev, struct iw_point *p)
 	struct ieee_param *param;
 	uint ret = 0;
 
-	if (p->length < sizeof(struct ieee_param) || !p->pointer) {
+	if (!p->pointer || p->length != sizeof(struct ieee_param)) {
 		ret = -EINVAL;
 		goto out;
 	}
@@ -2812,7 +2812,7 @@ static int rtw_hostapd_ioctl(struct net_device *dev, struct iw_point *p)
 		goto out;
 	}
 
-	if (!p->pointer) {
+	if (!p->pointer || p->length != sizeof(struct ieee_param)) {
 		ret = -EINVAL;
 		goto out;
 	}


@@ -476,14 +476,13 @@ int rtl8723bs_xmit_thread(void *context)
 	s32 ret;
 	struct adapter *padapter;
 	struct xmit_priv *pxmitpriv;
-	u8 thread_name[20] = "RTWHALXT";
-
+	u8 thread_name[20];
 
 	ret = _SUCCESS;
 	padapter = context;
 	pxmitpriv = &padapter->xmitpriv;
 
-	rtw_sprintf(thread_name, 20, "%s-"ADPT_FMT, thread_name, ADPT_ARG(padapter));
+	rtw_sprintf(thread_name, 20, "RTWHALXT-" ADPT_FMT, ADPT_ARG(padapter));
 	thread_enter(thread_name);
 
 	DBG_871X("start "FUNC_ADPT_FMT"\n", FUNC_ADPT_ARG(padapter));


@@ -3379,7 +3379,7 @@ static int wpa_supplicant_ioctl(struct net_device *dev, struct iw_point *p)
 
 	/* down(&ieee->wx_sem); */
 
-	if (p->length < sizeof(struct ieee_param) || !p->pointer) {
+	if (!p->pointer || p->length != sizeof(struct ieee_param)) {
 		ret = -EINVAL;
 		goto out;
 	}
@@ -4213,7 +4213,7 @@ static int rtw_hostapd_ioctl(struct net_device *dev, struct iw_point *p)
 
 	/* if (p->length < sizeof(struct ieee_param) || !p->pointer) { */
 
-	if (!p->pointer) {
+	if (!p->pointer || p->length != sizeof(*param)) {
 		ret = -EINVAL;
 		goto out;
 	}


@@ -130,7 +130,7 @@ int vnt_rx_data(struct vnt_private *priv, struct vnt_rcb *ptr_rcb,
 
 	vnt_rf_rssi_to_dbm(priv, *rssi, &rx_dbm);
 
-	priv->bb_pre_ed_rssi = (u8)rx_dbm + 1;
+	priv->bb_pre_ed_rssi = (u8)-rx_dbm + 1;
 	priv->current_rssi = priv->bb_pre_ed_rssi;
 
 	skb_pull(skb, 8);
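For intuition on the sign fix above: rx_dbm is negative (a dBm level), and bb_pre_ed_rssi should hold its positive magnitude plus one; casting the negative value straight to u8 yields the two's-complement residue instead. A minimal standalone sketch (the -90 dBm reading is an assumed example, not from the patch):

    #include <stdio.h>

    int main(void)
    {
        long rx_dbm = -90; /* representative signal level, in dBm */

        printf("buggy: %d\n", (unsigned char)rx_dbm + 1);  /* 167 */
        printf("fixed: %d\n", (unsigned char)-rx_dbm + 1); /* 91 */
        return 0;
    }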


@@ -1165,9 +1165,7 @@ int iscsit_setup_scsi_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
 		hdr->cmdsn, be32_to_cpu(hdr->data_length), payload_length,
 		conn->cid);
 
-	if (target_get_sess_cmd(&cmd->se_cmd, true) < 0)
-		return iscsit_add_reject_cmd(cmd,
-				ISCSI_REASON_WAITING_FOR_LOGOUT, buf);
+	target_get_sess_cmd(&cmd->se_cmd, true);
 
 	cmd->sense_reason = transport_lookup_cmd_lun(&cmd->se_cmd,
 						     scsilun_to_int(&hdr->lun));
@@ -2004,9 +2002,7 @@ iscsit_handle_task_mgt_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd,
 			      conn->sess->se_sess, 0, DMA_NONE,
 			      TCM_SIMPLE_TAG, cmd->sense_buffer + 2);
 
-	if (target_get_sess_cmd(&cmd->se_cmd, true) < 0)
-		return iscsit_add_reject_cmd(cmd,
-				ISCSI_REASON_WAITING_FOR_LOGOUT, buf);
+	target_get_sess_cmd(&cmd->se_cmd, true);
 
 	/*
 	 * TASK_REASSIGN for ERL=2 / connection stays inside of
@@ -4151,6 +4147,9 @@ int iscsit_close_connection(
 	iscsit_stop_nopin_response_timer(conn);
 	iscsit_stop_nopin_timer(conn);
 
+	if (conn->conn_transport->iscsit_wait_conn)
+		conn->conn_transport->iscsit_wait_conn(conn);
+
 	/*
 	 * During Connection recovery drop unacknowledged out of order
 	 * commands for this connection, and prepare the other commands
@@ -4233,11 +4232,6 @@ int iscsit_close_connection(
 	 * must wait until they have completed.
 	 */
 	iscsit_check_conn_usage_count(conn);
-	target_sess_cmd_list_set_waiting(sess->se_sess);
-	target_wait_for_sess_cmds(sess->se_sess);
-
-	if (conn->conn_transport->iscsit_wait_conn)
-		conn->conn_transport->iscsit_wait_conn(conn);
 
 	ahash_request_free(conn->conn_tx_hash);
 	if (conn->conn_rx_hash) {


@@ -666,6 +666,11 @@ static int transport_cmd_check_stop_to_fabric(struct se_cmd *cmd)
 
 	target_remove_from_state_list(cmd);
 
+	/*
+	 * Clear struct se_cmd->se_lun before the handoff to FE.
+	 */
+	cmd->se_lun = NULL;
+
 	spin_lock_irqsave(&cmd->t_state_lock, flags);
 
 	/*
 	 * Determine if frontend context caller is requesting the stopping of
@@ -693,6 +698,17 @@ static int transport_cmd_check_stop_to_fabric(struct se_cmd *cmd)
 	return cmd->se_tfo->check_stop_free(cmd);
 }
 
+static void transport_lun_remove_cmd(struct se_cmd *cmd)
+{
+	struct se_lun *lun = cmd->se_lun;
+
+	if (!lun)
+		return;
+
+	if (cmpxchg(&cmd->lun_ref_active, true, false))
+		percpu_ref_put(&lun->lun_ref);
+}
+
 static void target_complete_failure_work(struct work_struct *work)
 {
 	struct se_cmd *cmd = container_of(work, struct se_cmd, work);
@@ -783,6 +799,8 @@ static void target_handle_abort(struct se_cmd *cmd)
 
 	WARN_ON_ONCE(kref_read(&cmd->cmd_kref) == 0);
 
+	transport_lun_remove_cmd(cmd);
+
 	transport_cmd_check_stop_to_fabric(cmd);
 }
@@ -1695,6 +1713,7 @@ static void target_complete_tmr_failure(struct work_struct *work)
 	se_cmd->se_tmr_req->response = TMR_LUN_DOES_NOT_EXIST;
 	se_cmd->se_tfo->queue_tm_rsp(se_cmd);
 
+	transport_lun_remove_cmd(se_cmd);
 	transport_cmd_check_stop_to_fabric(se_cmd);
 }
@@ -1885,6 +1904,7 @@ void transport_generic_request_failure(struct se_cmd *cmd,
 		goto queue_full;
 
 check_stop:
+	transport_lun_remove_cmd(cmd);
 	transport_cmd_check_stop_to_fabric(cmd);
 	return;
@@ -2182,6 +2202,7 @@ static void transport_complete_qf(struct se_cmd *cmd)
 		transport_handle_queue_full(cmd, cmd->se_dev, ret, false);
 		return;
 	}
+	transport_lun_remove_cmd(cmd);
 	transport_cmd_check_stop_to_fabric(cmd);
 }
@@ -2276,6 +2297,7 @@ static void target_complete_ok_work(struct work_struct *work)
 		if (ret)
 			goto queue_full;
 
+		transport_lun_remove_cmd(cmd);
 		transport_cmd_check_stop_to_fabric(cmd);
 		return;
 	}
@@ -2301,6 +2323,7 @@ static void target_complete_ok_work(struct work_struct *work)
 		if (ret)
 			goto queue_full;
 
+		transport_lun_remove_cmd(cmd);
 		transport_cmd_check_stop_to_fabric(cmd);
 		return;
@@ -2336,6 +2359,7 @@ static void target_complete_ok_work(struct work_struct *work)
 		if (ret)
 			goto queue_full;
 
+		transport_lun_remove_cmd(cmd);
 		transport_cmd_check_stop_to_fabric(cmd);
 		return;
 	}
@@ -2371,6 +2395,7 @@ static void target_complete_ok_work(struct work_struct *work)
 		break;
 	}
 
+	transport_lun_remove_cmd(cmd);
 	transport_cmd_check_stop_to_fabric(cmd);
 	return;
@@ -2697,6 +2722,9 @@ int transport_generic_free_cmd(struct se_cmd *cmd, int wait_for_tasks)
 		 */
 		if (cmd->state_active)
 			target_remove_from_state_list(cmd);
+
+		if (cmd->se_lun)
+			transport_lun_remove_cmd(cmd);
 	}
 
 	if (aborted)
 		cmd->free_compl = &compl;
@@ -2768,9 +2796,6 @@ static void target_release_cmd_kref(struct kref *kref)
 	struct completion *abrt_compl = se_cmd->abrt_compl;
 	unsigned long flags;
 
-	if (se_cmd->lun_ref_active)
-		percpu_ref_put(&se_cmd->se_lun->lun_ref);
-
 	if (se_sess) {
 		spin_lock_irqsave(&se_sess->sess_cmd_lock, flags);
 		list_del_init(&se_cmd->se_cmd_list);
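For intuition on the cmpxchg() in transport_lun_remove_cmd() above: atomically flipping lun_ref_active guarantees the LUN reference is put exactly once even if the helper is reached twice for the same command. A minimal standalone sketch using C11 atomics as a stand-in for the kernel's cmpxchg() (a toy model, not part of the patch):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_bool lun_ref_active = true;

    static void lun_remove(void)
    {
        bool expected = true;

        /* Atomically flip active -> inactive; only one caller wins. */
        if (atomic_compare_exchange_strong(&lun_ref_active, &expected, false))
            printf("ref dropped\n");
        else
            printf("already dropped\n");
    }

    int main(void)
    {
        lun_remove(); /* ref dropped */
        lun_remove(); /* already dropped */
        return 0;
    }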


@@ -274,6 +274,12 @@ static int tb_switch_nvm_read(void *priv, unsigned int offset, void *val,
 	return ret;
 }
 
+static int tb_switch_nvm_no_read(void *priv, unsigned int offset, void *val,
+				 size_t bytes)
+{
+	return -EPERM;
+}
+
 static int tb_switch_nvm_write(void *priv, unsigned int offset, void *val,
 			       size_t bytes)
 {
@@ -319,6 +325,7 @@ static struct nvmem_device *register_nvmem(struct tb_switch *sw, int id,
 		config.read_only = true;
 	} else {
 		config.name = "nvm_non_active";
+		config.reg_read = tb_switch_nvm_no_read;
 		config.reg_write = tb_switch_nvm_write;
 		config.root_only = true;
 	}


@@ -265,7 +265,6 @@ struct device *serdev_tty_port_register(struct tty_port *port,
 					struct device *parent,
 					struct tty_driver *drv, int idx)
 {
-	const struct tty_port_client_operations *old_ops;
 	struct serdev_controller *ctrl;
 	struct serport *serport;
 	int ret;
@@ -289,7 +288,6 @@ struct device *serdev_tty_port_register(struct tty_port *port,
 
 	ctrl->ops = &ctrl_ops;
 
-	old_ops = port->client_ops;
 	port->client_ops = &client_ops;
 	port->client_data = ctrl;
@@ -302,7 +300,7 @@ struct device *serdev_tty_port_register(struct tty_port *port,
 
 err_reset_data:
 	port->client_data = NULL;
-	port->client_ops = old_ops;
+	port->client_ops = &tty_port_default_client_ops;
 	serdev_controller_put(ctrl);
 
 	return ERR_PTR(ret);
@@ -317,8 +315,8 @@ int serdev_tty_port_unregister(struct tty_port *port)
 		return -ENODEV;
 
 	serdev_controller_remove(ctrl);
-	port->client_ops = NULL;
 	port->client_data = NULL;
+	port->client_ops = &tty_port_default_client_ops;
 	serdev_controller_put(ctrl);
 
 	return 0;


@@ -379,7 +379,6 @@ static int aspeed_vuart_probe(struct platform_device *pdev)
 	port.port.line = rc;
 
 	port.port.irq = irq_of_parse_and_map(np, 0);
-	port.port.irqflags = IRQF_SHARED;
 	port.port.handle_irq = aspeed_vuart_handle_irq;
 	port.port.iotype = UPIO_MEM;
 	port.port.type = PORT_16550A;


@@ -174,7 +174,7 @@ static int serial_link_irq_chain(struct uart_8250_port *up)
 	struct hlist_head *h;
 	struct hlist_node *n;
 	struct irq_info *i;
-	int ret, irq_flags = up->port.flags & UPF_SHARE_IRQ ? IRQF_SHARED : 0;
+	int ret;
 
 	mutex_lock(&hash_mutex);
@@ -209,9 +209,8 @@ static int serial_link_irq_chain(struct uart_8250_port *up)
 		INIT_LIST_HEAD(&up->list);
 		i->head = &up->list;
 		spin_unlock_irq(&i->lock);
-		irq_flags |= up->port.irqflags;
 		ret = request_irq(up->port.irq, serial8250_interrupt,
-				  irq_flags, up->port.name, i);
+				  up->port.irqflags, up->port.name, i);
 		if (ret < 0)
 			serial_do_unlink(i, up);
 	}


@@ -172,7 +172,6 @@ static int of_platform_serial_setup(struct platform_device *ofdev,
 
 	port->type = type;
 	port->uartclk = clk;
-	port->irqflags |= IRQF_SHARED;
 
 	if (of_property_read_bool(np, "no-loopback-test"))
 		port->flags |= UPF_SKIP_TEST;


@@ -2192,6 +2192,10 @@ int serial8250_do_startup(struct uart_port *port)
 		}
 	}
 
+	/* Check if we need to have shared IRQs */
+	if (port->irq && (up->port.flags & UPF_SHARE_IRQ))
+		up->port.irqflags |= IRQF_SHARED;
+
 	if (port->irq && !(up->port.flags & UPF_NO_THRE_TEST)) {
 		unsigned char iir1;
 
 		/*


@@ -574,6 +574,7 @@ static void atmel_stop_tx(struct uart_port *port)
 	atmel_uart_writel(port, ATMEL_US_IDR, atmel_port->tx_done_mask);
 
 	if (atmel_uart_is_half_duplex(port))
-		atmel_start_rx(port);
+		if (!atomic_read(&atmel_port->tasklet_shutdown))
+			atmel_start_rx(port);
 }


@@ -603,7 +603,7 @@ static void imx_uart_dma_tx(struct imx_port *sport)
 	sport->tx_bytes = uart_circ_chars_pending(xmit);
 
-	if (xmit->tail < xmit->head) {
+	if (xmit->tail < xmit->head || xmit->head == 0) {
 		sport->dma_tx_nents = 1;
 		sg_init_one(sgl, xmit->buf + xmit->tail, sport->tx_bytes);
 	} else {
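For intuition on the condition change above: the pending bytes run from tail toward head in a circular buffer, and head == 0 means the run ends exactly at the buffer boundary, so it is still one contiguous region and a single scatterlist entry suffices. A minimal standalone sketch (toy indices assumed, not from the patch):

    #include <stdio.h>

    /* head/tail indices into a circular TX buffer; the pending data runs
     * from tail up to head (exclusive), wrapping at the buffer size. */
    static int tx_nents(unsigned int head, unsigned int tail)
    {
        /* head == 0: the run ends exactly at the buffer boundary, so it
         * is one contiguous region - the case the fix above adds. */
        return (tail < head || head == 0) ? 1 : 2;
    }

    int main(void)
    {
        printf("%d\n", tx_nents(0, 4000));   /* 1: single run to the end */
        printf("%d\n", tx_nents(100, 4000)); /* 2: wraps through zero */
        return 0;
    }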


@@ -125,6 +125,7 @@ static int handle_rx_console(struct uart_port *uport, u32 bytes, bool drop);
 static int handle_rx_uart(struct uart_port *uport, u32 bytes, bool drop);
 static unsigned int qcom_geni_serial_tx_empty(struct uart_port *port);
 static void qcom_geni_serial_stop_rx(struct uart_port *uport);
+static void qcom_geni_serial_handle_rx(struct uart_port *uport, bool drop);
 
 static const unsigned long root_freq[] = {7372800, 14745600, 19200000, 29491200,
 					  32000000, 48000000, 64000000, 80000000,
@@ -615,7 +616,7 @@ static void qcom_geni_serial_stop_rx(struct uart_port *uport)
 	u32 irq_en;
 	u32 status;
 	struct qcom_geni_serial_port *port = to_dev_port(uport, uport);
-	u32 irq_clear = S_CMD_DONE_EN;
+	u32 s_irq_status;
 
 	irq_en = readl(uport->membase + SE_GENI_S_IRQ_EN);
 	irq_en &= ~(S_RX_FIFO_WATERMARK_EN | S_RX_FIFO_LAST_EN);
@@ -631,10 +632,19 @@ static void qcom_geni_serial_stop_rx(struct uart_port *uport)
 		return;
 
 	geni_se_cancel_s_cmd(&port->se);
-	qcom_geni_serial_poll_bit(uport, SE_GENI_S_CMD_CTRL_REG,
-					S_GENI_CMD_CANCEL, false);
+	qcom_geni_serial_poll_bit(uport, SE_GENI_S_IRQ_STATUS,
+				  S_CMD_CANCEL_EN, true);
+	/*
+	 * If timeout occurs secondary engine remains active
+	 * and Abort sequence is executed.
+	 */
+	s_irq_status = readl(uport->membase + SE_GENI_S_IRQ_STATUS);
+	/* Flush the Rx buffer */
+	if (s_irq_status & S_RX_FIFO_LAST_EN)
+		qcom_geni_serial_handle_rx(uport, true);
+	writel(s_irq_status, uport->membase + SE_GENI_S_IRQ_CLEAR);
+
 	status = readl(uport->membase + SE_GENI_STATUS);
-	writel(irq_clear, uport->membase + SE_GENI_S_IRQ_CLEAR);
 	if (status & S_GENI_CMD_ACTIVE)
 		qcom_geni_serial_abort_rx(uport);
 }


@@ -52,10 +52,11 @@ static void tty_port_default_wakeup(struct tty_port *port)
 	}
 }
 
-static const struct tty_port_client_operations default_client_ops = {
+const struct tty_port_client_operations tty_port_default_client_ops = {
 	.receive_buf = tty_port_default_receive_buf,
 	.write_wakeup = tty_port_default_wakeup,
 };
+EXPORT_SYMBOL_GPL(tty_port_default_client_ops);
 
 void tty_port_init(struct tty_port *port)
 {
@@ -68,7 +69,7 @@ void tty_port_init(struct tty_port *port)
 	spin_lock_init(&port->lock);
 	port->close_delay = (50 * HZ) / 100;
 	port->closing_wait = (3000 * HZ) / 100;
-	port->client_ops = &default_client_ops;
+	port->client_ops = &tty_port_default_client_ops;
 	kref_init(&port->kref);
 }
 EXPORT_SYMBOL(tty_port_init);


@@ -29,6 +29,8 @@
 #include <linux/console.h>
 #include <linux/tty_flip.h>
 
+#include <linux/sched/signal.h>
+
 /* Don't take this from <ctype.h>: 011-015 on the screen aren't spaces */
 #define isspace(c)	((c) == ' ')
@@ -350,6 +352,7 @@ int paste_selection(struct tty_struct *tty)
 	unsigned int count;
 	struct tty_ldisc *ld;
 	DECLARE_WAITQUEUE(wait, current);
+	int ret = 0;
 
 	console_lock();
 	poke_blanked_console();
@@ -363,6 +366,10 @@ int paste_selection(struct tty_struct *tty)
 	add_wait_queue(&vc->paste_wait, &wait);
 	while (sel_buffer && sel_buffer_lth > pasted) {
 		set_current_state(TASK_INTERRUPTIBLE);
+		if (signal_pending(current)) {
+			ret = -EINTR;
+			break;
+		}
 		if (tty_throttled(tty)) {
 			schedule();
 			continue;
@@ -378,6 +385,6 @@ int paste_selection(struct tty_struct *tty)
 
 	tty_buffer_unlock_exclusive(&vc->port);
 	tty_ldisc_deref(ld);
-	return 0;
+	return ret;
 }
 EXPORT_SYMBOL_GPL(paste_selection);


@@ -936,10 +936,21 @@ static void flush_scrollback(struct vc_data *vc)
 	WARN_CONSOLE_UNLOCKED();
 
 	set_origin(vc);
-	if (vc->vc_sw->con_flush_scrollback)
+	if (vc->vc_sw->con_flush_scrollback) {
 		vc->vc_sw->con_flush_scrollback(vc);
-	else
+	} else if (con_is_visible(vc)) {
+		/*
+		 * When no con_flush_scrollback method is provided then the
+		 * legacy way for flushing the scrollback buffer is to use
+		 * a side effect of the con_switch method. We do it only on
+		 * the foreground console as background consoles have no
+		 * scrollback buffers in that case and we obviously don't
+		 * want to switch to them.
+		 */
+		hide_cursor(vc);
 		vc->vc_sw->con_switch(vc);
+		set_cursor(vc);
+	}
 }
 
 /*
/* /*


@@ -876,15 +876,20 @@ int vt_ioctl(struct tty_struct *tty,
 			return -EINVAL;
 
 		for (i = 0; i < MAX_NR_CONSOLES; i++) {
+			struct vc_data *vcp;
+
 			if (!vc_cons[i].d)
 				continue;
 			console_lock();
-			if (v.v_vlin)
-				vc_cons[i].d->vc_scan_lines = v.v_vlin;
-			if (v.v_clin)
-				vc_cons[i].d->vc_font.height = v.v_clin;
-			vc_cons[i].d->vc_resize_user = 1;
-			vc_resize(vc_cons[i].d, v.v_cols, v.v_rows);
+			vcp = vc_cons[i].d;
+			if (vcp) {
+				if (v.v_vlin)
+					vcp->vc_scan_lines = v.v_vlin;
+				if (v.v_clin)
+					vcp->vc_font.height = v.v_clin;
+				vcp->vc_resize_user = 1;
+				vc_resize(vcp, v.v_cols, v.v_rows);
+			}
 			console_unlock();
 		}
 		break;
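
The race fixed above is a classic check-then-use bug: vc_cons[i].d was tested before console_lock() was taken, so a concurrent deallocation could invalidate the pointer between the check and the dereference. The fix re-reads and re-checks the pointer under the lock. A sketch of the pattern (function and slot names are hypothetical, not from this diff):

#include <linux/vt_kern.h>

/* Sketch: take the lock first, then re-read the shared pointer. */
static void example_resize_one(struct vc_data **slot)
{
	struct vc_data *vcp;

	console_lock();
	vcp = *slot;		/* re-read under the lock */
	if (vcp)
		vc_resize(vcp, 80, 25);
	console_unlock();
}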


@@ -256,6 +256,7 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
 		struct usb_host_interface *ifp, int num_ep,
 		unsigned char *buffer, int size)
 {
+	struct usb_device *udev = to_usb_device(ddev);
 	unsigned char *buffer0 = buffer;
 	struct usb_endpoint_descriptor *d;
 	struct usb_host_endpoint *endpoint;
@@ -297,6 +298,16 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
 		goto skip_to_next_endpoint_or_interface_descriptor;
 	}
 
+	/* Ignore blacklisted endpoints */
+	if (udev->quirks & USB_QUIRK_ENDPOINT_BLACKLIST) {
+		if (usb_endpoint_is_blacklisted(udev, ifp, d)) {
+			dev_warn(ddev, "config %d interface %d altsetting %d has a blacklisted endpoint with address 0x%X, skipping\n",
+					cfgno, inum, asnum,
+					d->bEndpointAddress);
+			goto skip_to_next_endpoint_or_interface_descriptor;
+		}
+	}
+
 	endpoint = &ifp->endpoint[ifp->desc.bNumEndpoints];
 	++ifp->desc.bNumEndpoints;


@@ -38,7 +38,9 @@
 #include "otg_whitelist.h"
 
 #define USB_VENDOR_GENESYS_LOGIC	0x05e3
+#define USB_VENDOR_SMSC			0x0424
 #define HUB_QUIRK_CHECK_PORT_AUTOSUSPEND	0x01
+#define HUB_QUIRK_DISABLE_AUTOSUSPEND	0x02
 
 #define USB_TP_TRANSMISSION_DELAY	40	/* ns */
 #define USB_TP_TRANSMISSION_DELAY_MAX	65535	/* ns */
@@ -1217,11 +1219,6 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
 #ifdef CONFIG_PM
 			udev->reset_resume = 1;
 #endif
-			/* Don't set the change_bits when the device
-			 * was powered off.
-			 */
-			if (test_bit(port1, hub->power_bits))
-				set_bit(port1, hub->change_bits);
 
 		} else {
 			/* The power session is gone; tell hub_wq */
@@ -1731,6 +1728,10 @@ static void hub_disconnect(struct usb_interface *intf)
 	kfree(hub->buffer);
 
 	pm_suspend_ignore_children(&intf->dev, false);
+
+	if (hub->quirk_disable_autosuspend)
+		usb_autopm_put_interface(intf);
+
 	kref_put(&hub->kref, hub_release);
 }
 
@@ -1863,6 +1864,11 @@ static int hub_probe(struct usb_interface *intf, const struct usb_device_id *id)
 	if (id->driver_info & HUB_QUIRK_CHECK_PORT_AUTOSUSPEND)
 		hub->quirk_check_port_auto_suspend = 1;
 
+	if (id->driver_info & HUB_QUIRK_DISABLE_AUTOSUSPEND) {
+		hub->quirk_disable_autosuspend = 1;
+		usb_autopm_get_interface(intf);
+	}
+
 	if (hub_configure(hub, &desc->endpoint[0].desc) >= 0)
 		return 0;
 
@@ -5489,6 +5495,10 @@ static void hub_event(struct work_struct *work)
 }
 
 static const struct usb_device_id hub_id_table[] = {
+    { .match_flags = USB_DEVICE_ID_MATCH_VENDOR | USB_DEVICE_ID_MATCH_INT_CLASS,
+      .idVendor = USB_VENDOR_SMSC,
+      .bInterfaceClass = USB_CLASS_HUB,
+      .driver_info = HUB_QUIRK_DISABLE_AUTOSUSPEND},
    { .match_flags = USB_DEVICE_ID_MATCH_VENDOR
			| USB_DEVICE_ID_MATCH_INT_CLASS,
      .idVendor = USB_VENDOR_GENESYS_LOGIC,
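
The SMSC quirk above relies on runtime-PM usage counting: usb_autopm_get_interface() raises the interface's PM usage count (resuming the device if needed), which keeps the core from autosuspending the hub until the matching put at disconnect. A sketch of the pattern, with hypothetical names not taken from this diff:

#include <linux/usb.h>

static bool quirky_device;	/* hypothetical quirk flag */

/* Sketch: hold a runtime-PM reference for the whole bound lifetime. */
static int example_probe(struct usb_interface *intf,
			 const struct usb_device_id *id)
{
	if (quirky_device)
		usb_autopm_get_interface(intf);	/* pin device resumed */
	return 0;
}

static void example_disconnect(struct usb_interface *intf)
{
	if (quirky_device)
		usb_autopm_put_interface(intf);	/* release at unbind */
}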


@@ -61,6 +61,7 @@ struct usb_hub {
 	unsigned		quiescing:1;
 	unsigned		disconnected:1;
 	unsigned		in_reset:1;
+	unsigned		quirk_disable_autosuspend:1;
 
 	unsigned		quirk_check_port_auto_suspend:1;


@@ -354,6 +354,10 @@ static const struct usb_device_id usb_quirk_list[] = {
 	{ USB_DEVICE(0x0904, 0x6103), .driver_info =
 			USB_QUIRK_LINEAR_FRAME_INTR_BINTERVAL },
 
+	/* Sound Devices USBPre2 */
+	{ USB_DEVICE(0x0926, 0x0202), .driver_info =
+			USB_QUIRK_ENDPOINT_BLACKLIST },
+
 	/* Keytouch QWERTY Panel keyboard */
 	{ USB_DEVICE(0x0926, 0x3333), .driver_info =
 			USB_QUIRK_CONFIG_INTF_STRINGS },
@@ -445,6 +449,9 @@ static const struct usb_device_id usb_quirk_list[] = {
 	/* INTEL VALUE SSD */
 	{ USB_DEVICE(0x8086, 0xf1a5), .driver_info = USB_QUIRK_RESET_RESUME },
 
+	/* novation SoundControl XL */
+	{ USB_DEVICE(0x1235, 0x0061), .driver_info = USB_QUIRK_RESET_RESUME },
+
 	{ }  /* terminating entry must be last */
 };
 
@@ -472,6 +479,39 @@ static const struct usb_device_id usb_amd_resume_quirk_list[] = {
 	{ }  /* terminating entry must be last */
 };
 
+/*
+ * Entries for blacklisted endpoints that should be ignored when parsing
+ * configuration descriptors.
+ *
+ * Matched for devices with USB_QUIRK_ENDPOINT_BLACKLIST.
+ */
+static const struct usb_device_id usb_endpoint_blacklist[] = {
+	{ USB_DEVICE_INTERFACE_NUMBER(0x0926, 0x0202, 1), .driver_info = 0x85 },
+	{ }
+};
+
+bool usb_endpoint_is_blacklisted(struct usb_device *udev,
+		struct usb_host_interface *intf,
+		struct usb_endpoint_descriptor *epd)
+{
+	const struct usb_device_id *id;
+	unsigned int address;
+
+	for (id = usb_endpoint_blacklist; id->match_flags; ++id) {
+		if (!usb_match_device(udev, id))
+			continue;
+
+		if (!usb_match_one_id_intf(udev, intf, id))
+			continue;
+
+		address = id->driver_info;
+		if (address == epd->bEndpointAddress)
+			return true;
+	}
+
+	return false;
+}
+
 static bool usb_match_any_interface(struct usb_device *udev,
 				    const struct usb_device_id *id)
 {
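
The blacklist table above keys each entry by VID/PID plus bInterfaceNumber and carries the endpoint address through .driver_info (0x85 is IN endpoint 5 on the USBPre2). A hypothetical additional entry, not part of this diff, would look like:

/* Hypothetical: ignore IN endpoint 0x81 on interface 2 of 1234:abcd.
 * The address in .driver_info is compared against bEndpointAddress
 * while the configuration descriptor is parsed.
 */
static const struct usb_device_id more_blacklisted_endpoints[] = {
	{ USB_DEVICE_INTERFACE_NUMBER(0x1234, 0xabcd, 2), .driver_info = 0x81 },
	{ }
};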


@@ -37,6 +37,9 @@ extern void usb_authorize_interface(struct usb_interface *);
 extern void usb_detect_quirks(struct usb_device *udev);
 extern void usb_detect_interface_quirks(struct usb_device *udev);
 extern void usb_release_quirk_list(void);
+extern bool usb_endpoint_is_blacklisted(struct usb_device *udev,
+		struct usb_host_interface *intf,
+		struct usb_endpoint_descriptor *epd);
 extern int usb_remove_device(struct usb_device *udev);
 extern int usb_get_device_descriptor(struct usb_device *dev,


@@ -1083,11 +1083,6 @@ static void dwc2_hsotg_start_req(struct dwc2_hsotg *hsotg,
 	else
 		packets = 1;	/* send one packet if length is zero. */
 
-	if (hs_ep->isochronous && length > (hs_ep->mc * hs_ep->ep.maxpacket)) {
-		dev_err(hsotg->dev, "req length > maxpacket*mc\n");
-		return;
-	}
-
 	if (dir_in && index != 0)
 		if (hs_ep->isochronous)
 			epsize = DXEPTSIZ_MC(packets);
@@ -1391,6 +1386,13 @@ static int dwc2_hsotg_ep_queue(struct usb_ep *ep, struct usb_request *req,
 	req->actual = 0;
 	req->status = -EINPROGRESS;
 
+	/* Don't queue ISOC request if length greater than mps*mc */
+	if (hs_ep->isochronous &&
+	    req->length > (hs_ep->mc * hs_ep->ep.maxpacket)) {
+		dev_err(hs->dev, "req length > maxpacket*mc\n");
+		return -EINVAL;
+	}
+
 	/* In DDMA mode for ISOC's don't queue request if length greater
 	 * than descriptor limits.
 	 */
@@ -1632,6 +1634,7 @@ static int dwc2_hsotg_process_req_status(struct dwc2_hsotg *hsotg,
 	struct dwc2_hsotg_ep *ep0 = hsotg->eps_out[0];
 	struct dwc2_hsotg_ep *ep;
 	__le16 reply;
+	u16 status;
 	int ret;
 
 	dev_dbg(hsotg->dev, "%s: USB_REQ_GET_STATUS\n", __func__);
@@ -1643,11 +1646,10 @@ static int dwc2_hsotg_process_req_status(struct dwc2_hsotg *hsotg,
 
 	switch (ctrl->bRequestType & USB_RECIP_MASK) {
 	case USB_RECIP_DEVICE:
-		/*
-		 * bit 0 => self powered
-		 * bit 1 => remote wakeup
-		 */
-		reply = cpu_to_le16(0);
+		status = 1 << USB_DEVICE_SELF_POWERED;
+		status |= hsotg->remote_wakeup_allowed <<
+			  USB_DEVICE_REMOTE_WAKEUP;
+		reply = cpu_to_le16(status);
 		break;
 
 	case USB_RECIP_INTERFACE:
@@ -1758,7 +1760,10 @@ static int dwc2_hsotg_process_req_feature(struct dwc2_hsotg *hsotg,
 	case USB_RECIP_DEVICE:
 		switch (wValue) {
 		case USB_DEVICE_REMOTE_WAKEUP:
-			hsotg->remote_wakeup_allowed = 1;
+			if (set)
+				hsotg->remote_wakeup_allowed = 1;
+			else
+				hsotg->remote_wakeup_allowed = 0;
 			break;
 
 		case USB_DEVICE_TEST_MODE:
@@ -1768,6 +1773,11 @@ static int dwc2_hsotg_process_req_feature(struct dwc2_hsotg *hsotg,
 				return -EINVAL;
 
 			hsotg->test_mode = wIndex >> 8;
+			break;
+
+		default:
+			return -ENOENT;
+		}
+
 		ret = dwc2_hsotg_send_reply(hsotg, ep0, NULL, 0);
 		if (ret) {
 			dev_err(hsotg->dev,
@@ -1775,10 +1785,6 @@ static int dwc2_hsotg_process_req_feature(struct dwc2_hsotg *hsotg,
 			return ret;
 		}
 		break;
-
-		default:
-			return -ENOENT;
-		}
-		break;
 
 	case USB_RECIP_ENDPOINT:
 		ep = ep_from_windex(hsotg, wIndex);
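
For context on the GET_STATUS part of the fix above: USB 2.0 chapter 9 defines the device-recipient status word as a bitmap where bit 0 reports self-powered and bit 1 reports whether remote wakeup is currently enabled, which is exactly the state the SET/CLEAR_FEATURE handling now latches in remote_wakeup_allowed. A sketch of the reply construction (helper name is hypothetical):

#include <linux/usb/ch9.h>

/* Sketch: USB_DEVICE_SELF_POWERED (0) and USB_DEVICE_REMOTE_WAKEUP (1)
 * double as bit positions in the GET_STATUS(device) reply.
 */
static __le16 example_device_status(bool remote_wakeup_allowed)
{
	u16 status = 1 << USB_DEVICE_SELF_POWERED;

	if (remote_wakeup_allowed)
		status |= 1 << USB_DEVICE_REMOTE_WAKEUP;

	return cpu_to_le16(status);
}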


@@ -256,86 +256,77 @@ static inline const char *dwc3_ep_event_string(char *str, size_t size,
 	u8 epnum = event->endpoint_number;
 	size_t len;
 	int status;
-	int ret;
 
-	ret = snprintf(str, size, "ep%d%s: ", epnum >> 1,
+	len = scnprintf(str, size, "ep%d%s: ", epnum >> 1,
 			(epnum & 1) ? "in" : "out");
-	if (ret < 0)
-		return "UNKNOWN";
 
 	status = event->status;
 
 	switch (event->endpoint_event) {
 	case DWC3_DEPEVT_XFERCOMPLETE:
-		len = strlen(str);
-		snprintf(str + len, size - len, "Transfer Complete (%c%c%c)",
+		len += scnprintf(str + len, size - len,
+				"Transfer Complete (%c%c%c)",
 				status & DEPEVT_STATUS_SHORT ? 'S' : 's',
 				status & DEPEVT_STATUS_IOC ? 'I' : 'i',
 				status & DEPEVT_STATUS_LST ? 'L' : 'l');
 
-		len = strlen(str);
-
 		if (epnum <= 1)
-			snprintf(str + len, size - len, " [%s]",
+			scnprintf(str + len, size - len, " [%s]",
 					dwc3_ep0_state_string(ep0state));
 		break;
 	case DWC3_DEPEVT_XFERINPROGRESS:
-		len = strlen(str);
-
-		snprintf(str + len, size - len, "Transfer In Progress [%d] (%c%c%c)",
+		scnprintf(str + len, size - len,
+				"Transfer In Progress [%d] (%c%c%c)",
 				event->parameters,
 				status & DEPEVT_STATUS_SHORT ? 'S' : 's',
 				status & DEPEVT_STATUS_IOC ? 'I' : 'i',
 				status & DEPEVT_STATUS_LST ? 'M' : 'm');
 		break;
 	case DWC3_DEPEVT_XFERNOTREADY:
-		len = strlen(str);
-
-		snprintf(str + len, size - len, "Transfer Not Ready [%d]%s",
+		len += scnprintf(str + len, size - len,
+				"Transfer Not Ready [%d]%s",
 				event->parameters,
 				status & DEPEVT_STATUS_TRANSFER_ACTIVE ?
 				" (Active)" : " (Not Active)");
 
-		len = strlen(str);
-
 		/* Control Endpoints */
 		if (epnum <= 1) {
 			int phase = DEPEVT_STATUS_CONTROL_PHASE(event->status);
 
 			switch (phase) {
 			case DEPEVT_STATUS_CONTROL_DATA:
-				snprintf(str + ret, size - ret,
+				scnprintf(str + len, size - len,
 						" [Data Phase]");
 				break;
 			case DEPEVT_STATUS_CONTROL_STATUS:
-				snprintf(str + ret, size - ret,
+				scnprintf(str + len, size - len,
 						" [Status Phase]");
 			}
 		}
 		break;
 	case DWC3_DEPEVT_RXTXFIFOEVT:
-		snprintf(str + ret, size - ret, "FIFO");
+		scnprintf(str + len, size - len, "FIFO");
 		break;
 	case DWC3_DEPEVT_STREAMEVT:
 		status = event->status;
 
 		switch (status) {
 		case DEPEVT_STREAMEVT_FOUND:
-			snprintf(str + ret, size - ret, " Stream %d Found",
+			scnprintf(str + len, size - len, " Stream %d Found",
 					event->parameters);
 			break;
 		case DEPEVT_STREAMEVT_NOTFOUND:
 		default:
-			snprintf(str + ret, size - ret, " Stream Not Found");
+			scnprintf(str + len, size - len, " Stream Not Found");
 			break;
 		}
 
 		break;
 	case DWC3_DEPEVT_EPCMDCMPLT:
-		snprintf(str + ret, size - ret, "Endpoint Command Complete");
+		scnprintf(str + len, size - len, "Endpoint Command Complete");
 		break;
 	default:
-		snprintf(str, size, "UNKNOWN");
+		scnprintf(str + len, size - len, "UNKNOWN");
 	}
 
 	return str;
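
The conversion above matters because snprintf() returns the length the output would have had, which can exceed the buffer size; used as the next write offset (as the old code did with ret and stale len values), that walks past the end of the buffer. scnprintf() returns the number of characters actually stored, so chaining appends with a running length is safe. A minimal sketch (buffer contents are illustrative only):

#include <linux/kernel.h>

/* Sketch: str + len always stays inside buf because scnprintf()
 * reports what was actually written, excluding the trailing NUL.
 */
static void example_append(char *buf, size_t size)
{
	size_t len;

	len = scnprintf(buf, size, "ep%d%s: ", 1, "in");
	len += scnprintf(buf + len, size - len, "Transfer Complete");
	scnprintf(buf + len, size - len, " [%s]", "Setup Phase");
}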


@@ -2426,7 +2426,8 @@ static int dwc3_gadget_ep_reclaim_completed_trb(struct dwc3_ep *dep,
 	if (event->status & DEPEVT_STATUS_SHORT && !chain)
 		return 1;
 
-	if (event->status & DEPEVT_STATUS_IOC)
+	if ((trb->ctrl & DWC3_TRB_CTRL_IOC) ||
+	    (trb->ctrl & DWC3_TRB_CTRL_LST))
 		return 1;
 
 	return 0;


@@ -437,12 +437,10 @@ static u8 encode_bMaxPower(enum usb_device_speed speed,
 		val = CONFIG_USB_GADGET_VBUS_DRAW;
 	if (!val)
 		return 0;
-	switch (speed) {
-	case USB_SPEED_SUPER:
-		return DIV_ROUND_UP(val, 8);
-	default:
-		return DIV_ROUND_UP(val, 2);
-	}
+	if (speed < USB_SPEED_SUPER)
+		return DIV_ROUND_UP(val, 2);
+	else
+		return DIV_ROUND_UP(val, 8);
 }
 
 static int config_buf(struct usb_configuration *config,
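
The rewrite above fixes the unit selection: bMaxPower is expressed in 2 mA units below SuperSpeed and 8 mA units at SuperSpeed and above, and the old switch let SuperSpeedPlus fall into the 2 mA default branch. With CONFIG_USB_GADGET_VBUS_DRAW = 500, for example, the field now encodes as DIV_ROUND_UP(500, 2) = 250 for FS/HS but DIV_ROUND_UP(500, 8) = 63 for SS and SSP. Restated as a standalone sketch (function name is hypothetical):

#include <linux/kernel.h>
#include <linux/usb/ch9.h>

/* Sketch mirroring the fixed encode_bMaxPower() branch above. */
static u8 example_encode_bMaxPower(enum usb_device_speed speed, unsigned int ma)
{
	if (speed < USB_SPEED_SUPER)
		return DIV_ROUND_UP(ma, 2);	/* 2 mA units: LS/FS/HS */
	return DIV_ROUND_UP(ma, 8);		/* 8 mA units: SS and SSP */
}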
static int config_buf(struct usb_configuration *config, static int config_buf(struct usb_configuration *config,


@@ -55,6 +55,7 @@ static u8 usb_bos_descriptor [] = {
 static int xhci_create_usb3_bos_desc(struct xhci_hcd *xhci, char *buf,
 				     u16 wLength)
 {
+	struct xhci_port_cap *port_cap = NULL;
 	int i, ssa_count;
 	u32 temp;
 	u16 desc_size, ssp_cap_size, ssa_size = 0;
@@ -64,16 +65,24 @@ static int xhci_create_usb3_bos_desc(struct xhci_hcd *xhci, char *buf,
 	ssp_cap_size = sizeof(usb_bos_descriptor) - desc_size;
 
 	/* does xhci support USB 3.1 Enhanced SuperSpeed */
-	if (xhci->usb3_rhub.min_rev >= 0x01) {
+	for (i = 0; i < xhci->num_port_caps; i++) {
+		if (xhci->port_caps[i].maj_rev == 0x03 &&
+		    xhci->port_caps[i].min_rev >= 0x01) {
+			usb3_1 = true;
+			port_cap = &xhci->port_caps[i];
+			break;
+		}
+	}
+
+	if (usb3_1) {
 		/* does xhci provide a PSI table for SSA speed attributes? */
-		if (xhci->usb3_rhub.psi_count) {
+		if (port_cap->psi_count) {
 			/* two SSA entries for each unique PSI ID, RX and TX */
-			ssa_count = xhci->usb3_rhub.psi_uid_count * 2;
+			ssa_count = port_cap->psi_uid_count * 2;
 			ssa_size = ssa_count * sizeof(u32);
 			ssp_cap_size -= 16; /* skip copying the default SSA */
 		}
 		desc_size += ssp_cap_size;
-		usb3_1 = true;
 	}
 	memcpy(buf, &usb_bos_descriptor, min(desc_size, wLength));
 
@@ -99,7 +108,7 @@ static int xhci_create_usb3_bos_desc(struct xhci_hcd *xhci, char *buf,
 	}
 
 	/* If PSI table exists, add the custom speed attributes from it */
-	if (usb3_1 && xhci->usb3_rhub.psi_count) {
+	if (usb3_1 && port_cap->psi_count) {
 		u32 ssp_cap_base, bm_attrib, psi, psi_mant, psi_exp;
 		int offset;
 
@@ -111,7 +120,7 @@ static int xhci_create_usb3_bos_desc(struct xhci_hcd *xhci, char *buf,
 
 		/* attribute count SSAC bits 4:0 and ID count SSIC bits 8:5 */
 		bm_attrib = (ssa_count - 1) & 0x1f;
-		bm_attrib |= (xhci->usb3_rhub.psi_uid_count - 1) << 5;
+		bm_attrib |= (port_cap->psi_uid_count - 1) << 5;
 		put_unaligned_le32(bm_attrib, &buf[ssp_cap_base + 4]);
 
 		if (wLength < desc_size + ssa_size)
@@ -124,8 +133,8 @@ static int xhci_create_usb3_bos_desc(struct xhci_hcd *xhci, char *buf,
 	 * USB 3.1 requires two SSA entries (RX and TX) for every link
 	 */
 	offset = desc_size;
-	for (i = 0; i < xhci->usb3_rhub.psi_count; i++) {
-		psi = xhci->usb3_rhub.psi[i];
+	for (i = 0; i < port_cap->psi_count; i++) {
+		psi = port_cap->psi[i];
 		psi &= ~USB_SSP_SUBLINK_SPEED_RSVD;
 		psi_exp = XHCI_EXT_PORT_PSIE(psi);
 		psi_mant = XHCI_EXT_PORT_PSIM(psi);


@@ -1475,9 +1475,15 @@ int xhci_endpoint_init(struct xhci_hcd *xhci,
 	/* Allow 3 retries for everything but isoc, set CErr = 3 */
 	if (!usb_endpoint_xfer_isoc(&ep->desc))
 		err_count = 3;
-	/* Some devices get this wrong */
-	if (usb_endpoint_xfer_bulk(&ep->desc) && udev->speed == USB_SPEED_HIGH)
-		max_packet = 512;
+	/* HS bulk max packet should be 512, FS bulk supports 8, 16, 32 or 64 */
+	if (usb_endpoint_xfer_bulk(&ep->desc)) {
+		if (udev->speed == USB_SPEED_HIGH)
+			max_packet = 512;
+		if (udev->speed == USB_SPEED_FULL) {
+			max_packet = rounddown_pow_of_two(max_packet);
+			max_packet = clamp_val(max_packet, 8, 64);
+		}
+	}
 	/* xHCI 1.0 and 1.1 indicates that ctrl ep avg TRB Length should be 8 */
 	if (usb_endpoint_xfer_control(&ep->desc) && xhci->hci_version >= 0x100)
 		avg_trb_len = 8;
@@ -1909,17 +1915,17 @@ void xhci_mem_cleanup(struct xhci_hcd *xhci)
 	xhci->usb3_rhub.num_ports = 0;
 	xhci->num_active_eps = 0;
 	kfree(xhci->usb2_rhub.ports);
-	kfree(xhci->usb2_rhub.psi);
 	kfree(xhci->usb3_rhub.ports);
-	kfree(xhci->usb3_rhub.psi);
 	kfree(xhci->hw_ports);
 	kfree(xhci->rh_bw);
 	kfree(xhci->ext_caps);
+	for (i = 0; i < xhci->num_port_caps; i++)
+		kfree(xhci->port_caps[i].psi);
+	kfree(xhci->port_caps);
+	xhci->num_port_caps = 0;
 
 	xhci->usb2_rhub.ports = NULL;
-	xhci->usb2_rhub.psi = NULL;
 	xhci->usb3_rhub.ports = NULL;
-	xhci->usb3_rhub.psi = NULL;
 	xhci->hw_ports = NULL;
 	xhci->rh_bw = NULL;
 	xhci->ext_caps = NULL;
@@ -2120,6 +2126,7 @@ static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports,
 	u8 major_revision, minor_revision;
 	struct xhci_hub *rhub;
 	struct device *dev = xhci_to_hcd(xhci)->self.sysdev;
+	struct xhci_port_cap *port_cap;
 
 	temp = readl(addr);
 	major_revision = XHCI_EXT_PORT_MAJOR(temp);
@@ -2154,31 +2161,39 @@ static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports,
 		/* WTF? "Valid values are 1 to MaxPorts" */
 		return;
 
-	rhub->psi_count = XHCI_EXT_PORT_PSIC(temp);
-	if (rhub->psi_count) {
-		rhub->psi = kcalloc_node(rhub->psi_count, sizeof(*rhub->psi),
-					 GFP_KERNEL, dev_to_node(dev));
-		if (!rhub->psi)
-			rhub->psi_count = 0;
-
-		rhub->psi_uid_count++;
-		for (i = 0; i < rhub->psi_count; i++) {
-			rhub->psi[i] = readl(addr + 4 + i);
+	port_cap = &xhci->port_caps[xhci->num_port_caps++];
+	if (xhci->num_port_caps > max_caps)
+		return;
+
+	port_cap->maj_rev = major_revision;
+	port_cap->min_rev = minor_revision;
+	port_cap->psi_count = XHCI_EXT_PORT_PSIC(temp);
+
+	if (port_cap->psi_count) {
+		port_cap->psi = kcalloc_node(port_cap->psi_count,
+					     sizeof(*port_cap->psi),
+					     GFP_KERNEL, dev_to_node(dev));
+		if (!port_cap->psi)
+			port_cap->psi_count = 0;
+
+		port_cap->psi_uid_count++;
+		for (i = 0; i < port_cap->psi_count; i++) {
+			port_cap->psi[i] = readl(addr + 4 + i);
 
 			/* count unique ID values, two consecutive entries can
 			 * have the same ID if link is assymetric
 			 */
-			if (i && (XHCI_EXT_PORT_PSIV(rhub->psi[i]) !=
-				  XHCI_EXT_PORT_PSIV(rhub->psi[i - 1])))
-				rhub->psi_uid_count++;
+			if (i && (XHCI_EXT_PORT_PSIV(port_cap->psi[i]) !=
+				  XHCI_EXT_PORT_PSIV(port_cap->psi[i - 1])))
+				port_cap->psi_uid_count++;
 
 			xhci_dbg(xhci, "PSIV:%d PSIE:%d PLT:%d PFD:%d LP:%d PSIM:%d\n",
-				 XHCI_EXT_PORT_PSIV(rhub->psi[i]),
-				 XHCI_EXT_PORT_PSIE(rhub->psi[i]),
-				 XHCI_EXT_PORT_PLT(rhub->psi[i]),
-				 XHCI_EXT_PORT_PFD(rhub->psi[i]),
-				 XHCI_EXT_PORT_LP(rhub->psi[i]),
-				 XHCI_EXT_PORT_PSIM(rhub->psi[i]));
+				 XHCI_EXT_PORT_PSIV(port_cap->psi[i]),
+				 XHCI_EXT_PORT_PSIE(port_cap->psi[i]),
+				 XHCI_EXT_PORT_PLT(port_cap->psi[i]),
+				 XHCI_EXT_PORT_PFD(port_cap->psi[i]),
+				 XHCI_EXT_PORT_LP(port_cap->psi[i]),
+				 XHCI_EXT_PORT_PSIM(port_cap->psi[i]));
 		}
 	}
+
 	/* cache usb2 port capabilities */
@@ -2213,6 +2228,7 @@ static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports,
 			continue;
 		}
 		hw_port->rhub = rhub;
+		hw_port->port_cap = port_cap;
 		rhub->num_ports++;
 	}
 	/* FIXME: Should we disable ports not in the Extended Capabilities? */
@@ -2303,6 +2319,11 @@ static int xhci_setup_port_arrays(struct xhci_hcd *xhci, gfp_t flags)
 	if (!xhci->ext_caps)
 		return -ENOMEM;
 
+	xhci->port_caps = kcalloc_node(cap_count, sizeof(*xhci->port_caps),
+				       flags, dev_to_node(dev));
+	if (!xhci->port_caps)
+		return -ENOMEM;
+
 	offset = cap_start;
 
 	while (offset) {
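
On the xhci_endpoint_init() change in the first hunk above: full-speed bulk endpoints may legally use only 8, 16, 32 or 64 byte maximum packet sizes, so the fix rounds a bogus descriptor value down to a power of two and then clamps it, e.g. an advertised 1023 becomes 512 and then 64. Restated on its own (function name is hypothetical):

#include <linux/kernel.h>
#include <linux/log2.h>

/* Sketch of the FS bulk wMaxPacketSize sanitization used above. */
static unsigned int fs_bulk_max_packet(unsigned int max_packet)
{
	max_packet = rounddown_pow_of_two(max_packet);	/* e.g. 1023 -> 512 */
	return clamp_val(max_packet, 8, 64);		/* e.g. 512 -> 64 */
}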


@@ -50,6 +50,7 @@
 #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_XHCI		0x15ec
 #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_XHCI		0x15f0
 #define PCI_DEVICE_ID_INTEL_ICE_LAKE_XHCI		0x8a13
+#define PCI_DEVICE_ID_INTEL_CML_XHCI			0xa3af
 
 #define PCI_DEVICE_ID_AMD_PROMONTORYA_4			0x43b9
 #define PCI_DEVICE_ID_AMD_PROMONTORYA_3			0x43ba
@@ -186,7 +187,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
 		 pdev->device == PCI_DEVICE_ID_INTEL_BROXTON_M_XHCI ||
 		 pdev->device == PCI_DEVICE_ID_INTEL_BROXTON_B_XHCI ||
 		 pdev->device == PCI_DEVICE_ID_INTEL_APL_XHCI ||
-		 pdev->device == PCI_DEVICE_ID_INTEL_DNV_XHCI)) {
+		 pdev->device == PCI_DEVICE_ID_INTEL_DNV_XHCI ||
+		 pdev->device == PCI_DEVICE_ID_INTEL_CML_XHCI)) {
 		xhci->quirks |= XHCI_PME_STUCK_QUIRK;
 	}
 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
@@ -301,6 +303,9 @@ int xhci_pci_setup(struct usb_hcd *hcd)
 	if (!usb_hcd_is_primary_hcd(hcd))
 		return 0;
 
+	if (xhci->quirks & XHCI_PME_STUCK_QUIRK)
+		xhci_pme_acpi_rtd3_enable(pdev);
+
 	xhci_dbg(xhci, "Got SBRN %u\n", (unsigned int) xhci->sbrn);
 
 	/* Find any debug ports */
@@ -359,9 +364,6 @@ int xhci_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
 	    HCC_MAX_PSA(xhci->hcc_params) >= 4)
 		xhci->shared_hcd->can_do_streams = 1;
 
-	if (xhci->quirks & XHCI_PME_STUCK_QUIRK)
-		xhci_pme_acpi_rtd3_enable(dev);
-
 	/* USB-2 and USB-3 roothubs initialized, allow runtime pm suspend */
 	pm_runtime_put_noidle(&dev->dev);


@@ -2740,6 +2740,42 @@ static int xhci_handle_event(struct xhci_hcd *xhci)
 	return 1;
 }
 
+/*
+ * Update Event Ring Dequeue Pointer:
+ * - When all events have finished
+ * - To avoid "Event Ring Full Error" condition
+ */
+static void xhci_update_erst_dequeue(struct xhci_hcd *xhci,
+		union xhci_trb *event_ring_deq)
+{
+	u64 temp_64;
+	dma_addr_t deq;
+
+	temp_64 = xhci_read_64(xhci, &xhci->ir_set->erst_dequeue);
+	/* If necessary, update the HW's version of the event ring deq ptr. */
+	if (event_ring_deq != xhci->event_ring->dequeue) {
+		deq = xhci_trb_virt_to_dma(xhci->event_ring->deq_seg,
+				xhci->event_ring->dequeue);
+		if (deq == 0)
+			xhci_warn(xhci, "WARN something wrong with SW event ring dequeue ptr\n");
+		/*
+		 * Per 4.9.4, Software writes to the ERDP register shall
+		 * always advance the Event Ring Dequeue Pointer value.
+		 */
+		if ((temp_64 & (u64) ~ERST_PTR_MASK) ==
+				((u64) deq & (u64) ~ERST_PTR_MASK))
+			return;
+
+		/* Update HC event ring dequeue pointer */
+		temp_64 &= ERST_PTR_MASK;
+		temp_64 |= ((u64) deq & (u64) ~ERST_PTR_MASK);
+	}
+
+	/* Clear the event handler busy flag (RW1C) */
+	temp_64 |= ERST_EHB;
+	xhci_write_64(xhci, temp_64, &xhci->ir_set->erst_dequeue);
+}
+
 /*
  * xHCI spec says we can get an interrupt, and if the HC has an error condition,
  * we might get bad data out of the event ring.  Section 4.10.2.7 has a list of
@@ -2751,9 +2787,9 @@ irqreturn_t xhci_irq(struct usb_hcd *hcd)
 	union xhci_trb *event_ring_deq;
 	irqreturn_t ret = IRQ_NONE;
 	unsigned long flags;
-	dma_addr_t deq;
 	u64 temp_64;
 	u32 status;
+	int event_loop = 0;
 
 	spin_lock_irqsave(&xhci->lock, flags);
 	/* Check if the xHC generated the interrupt, or the irq is shared */
@@ -2807,24 +2843,14 @@ irqreturn_t xhci_irq(struct usb_hcd *hcd)
 	/* FIXME this should be a delayed service routine
 	 * that clears the EHB.
 	 */
-	while (xhci_handle_event(xhci) > 0) {}
-
-	temp_64 = xhci_read_64(xhci, &xhci->ir_set->erst_dequeue);
-	/* If necessary, update the HW's version of the event ring deq ptr. */
-	if (event_ring_deq != xhci->event_ring->dequeue) {
-		deq = xhci_trb_virt_to_dma(xhci->event_ring->deq_seg,
-				xhci->event_ring->dequeue);
-		if (deq == 0)
-			xhci_warn(xhci, "WARN something wrong with SW event "
-					"ring dequeue ptr.\n");
-		/* Update HC event ring dequeue pointer */
-		temp_64 &= ERST_PTR_MASK;
-		temp_64 |= ((u64) deq & (u64) ~ERST_PTR_MASK);
+	while (xhci_handle_event(xhci) > 0) {
+		if (event_loop++ < TRBS_PER_SEGMENT / 2)
+			continue;
+		xhci_update_erst_dequeue(xhci, event_ring_deq);
+		event_loop = 0;
 	}
 
-	/* Clear the event handler busy flag (RW1C); event ring is empty. */
-	temp_64 |= ERST_EHB;
-	xhci_write_64(xhci, temp_64, &xhci->ir_set->erst_dequeue);
+	xhci_update_erst_dequeue(xhci, event_ring_deq);
 	ret = IRQ_HANDLED;
 
 out:
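
The xhci_irq() rework above also batches progress reporting: the ERDP register is now written every TRBS_PER_SEGMENT / 2 handled events instead of only once after the loop, so a sustained burst of events can no longer fill the ring and raise an Event Ring Full Error. The shape of that pattern, as a standalone sketch (all names below are placeholders, not symbols from this diff; the batch size mirrors TRBS_PER_SEGMENT / 2):

#define EVENT_BATCH	128	/* cf. TRBS_PER_SEGMENT / 2 */

static int handle_one_event(void);	/* returns >0 while events remain */
static void publish_progress(void);	/* consumer tells producer how far it got */

static void drain_event_ring(void)
{
	int handled = 0;

	while (handle_one_event() > 0) {
		if (++handled < EVENT_BATCH)
			continue;
		publish_progress();	/* free ring space mid-burst */
		handled = 0;
	}
	publish_progress();		/* final update; clears the busy flag */
}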

Some files were not shown because too many files have changed in this diff.