Merge 6.1.60 into android14-6.1-lts

Changes in 6.1.60
	lib/Kconfig.debug: do not enable DEBUG_PREEMPT by default
	igc: remove I226 Qbv BaseTime restriction
	igc: enable Qbv configuration for 2nd GCL
	igc: Remove reset adapter task for i226 during disable tsn config
	igc: Add qbv_config_change_errors counter
	igc: Add condition for qbv_config_change_errors counter
	igc: Fix race condition in PTP tx code
	Bluetooth: hci_event: Ignore NULL link key
	Bluetooth: Reject connection with the device which has same BD_ADDR
	Bluetooth: Fix a refcnt underflow problem for hci_conn
	Bluetooth: vhci: Fix race when opening vhci device
	Bluetooth: hci_event: Fix coding style
	Bluetooth: avoid memcmp() out of bounds warning
	ice: fix over-shifted variable
	ice: reset first in crash dump kernels
	net/smc: return the right falback reason when prefix checks fail
	btrfs: fix stripe length calculation for non-zoned data chunk allocation
	nfc: nci: fix possible NULL pointer dereference in send_acknowledge()
	regmap: fix NULL deref on lookup
	KVM: x86: Mask LVTPC when handling a PMI
	x86/sev: Disable MMIO emulation from user mode
	x86/sev: Check IOBM for IOIO exceptions from user-space
	x86/sev: Check for user-space IOIO pointing to kernel space
	x86/fpu: Allow caller to constrain xfeatures when copying to uabi buffer
	KVM: x86: Constrain guest-supported xfeatures only at KVM_GET_XSAVE{2}
	x86: KVM: SVM: add support for Invalid IPI Vector interception
	x86: KVM: SVM: refresh AVIC inhibition in svm_leave_nested()
	audit,io_uring: io_uring openat triggers audit reference count underflow
	tcp: check mptcp-level constraints for backlog coalescing
	mptcp: more conservative check for zero probes
	fs/ntfs3: Fix possible null-pointer dereference in hdr_find_e()
	fs/ntfs3: fix panic about slab-out-of-bounds caused by ntfs_list_ea()
	fs/ntfs3: fix deadlock in mark_as_free_ex
	netfilter: nft_payload: fix wrong mac header matching
	nvmet-tcp: Fix a possible UAF in queue intialization setup
	drm/i915: Retry gtt fault when out of fence registers
	drm/mediatek: Correctly free sg_table in gem prime vmap
	ALSA: hda/realtek - Fixed ASUS platform headset Mic issue
	ALSA: hda/realtek: Add quirk for ASUS ROG GU603ZV
	ALSA: hda/relatek: Enable Mute LED on HP Laptop 15s-fq5xxx
	ASoC: codecs: wcd938x-sdw: fix use after free on driver unbind
	ASoC: codecs: wcd938x-sdw: fix runtime PM imbalance on probe errors
	ASoC: codecs: wcd938x: drop bogus bind error handling
	ASoC: codecs: wcd938x: fix unbind tear down order
	ASoC: codecs: wcd938x: fix resource leaks on bind errors
	qed: fix LL2 RX buffer allocation
	xfrm: fix a data-race in xfrm_lookup_with_ifid()
	xfrm: fix a data-race in xfrm_gen_index()
	xfrm: interface: use DEV_STATS_INC()
	wifi: cfg80211: use system_unbound_wq for wiphy work
	net: ipv4: fix return value check in esp_remove_trailer
	net: ipv6: fix return value check in esp_remove_trailer
	net: rfkill: gpio: prevent value glitch during probe
	tcp: fix excessive TLP and RACK timeouts from HZ rounding
	tcp: tsq: relax tcp_small_queue_check() when rtx queue contains a single skb
	tcp: Fix listen() warning with v4-mapped-v6 address.
	tun: prevent negative ifindex
	ipv4: fib: annotate races around nh->nh_saddr_genid and nh->nh_saddr
	net: usb: smsc95xx: Fix an error code in smsc95xx_reset()
	octeon_ep: update BQL sent bytes before ringing doorbell
	i40e: prevent crash on probe if hw registers have invalid values
	net: dsa: bcm_sf2: Fix possible memory leak in bcm_sf2_mdio_register()
	bonding: Return pointer to data after pull on skb
	net/sched: sch_hfsc: upgrade 'rt' to 'sc' when it becomes a inner curve
	neighbor: tracing: Move pin6 inside CONFIG_IPV6=y section
	selftests: openvswitch: Catch cases where the tests are killed
	selftests: netfilter: Run nft_audit.sh in its own netns
	netfilter: nft_set_rbtree: .deactivate fails if element has expired
	netlink: Correct offload_xstats size
	netfilter: nf_tables: do not remove elements if set backend implements .abort
	netfilter: nf_tables: revert do not remove elements if set backend implements .abort
	net: phy: bcm7xxx: Add missing 16nm EPHY statistics
	net: pktgen: Fix interface flags printing
	net: avoid UAF on deleted altname
	net: fix ifname in netlink ntf during netns move
	net: check for altname conflicts when changing netdev's netns
	selftests/mm: fix awk usage in charge_reserved_hugetlb.sh and hugetlb_reparenting_test.sh that may cause error
	usb: misc: onboard_usb_hub: add Genesys Logic GL850G hub support
	usb: misc: onboard_usb_hub: add Genesys Logic GL852G hub support
	usb: misc: onboard_usb_hub: add Genesys Logic GL3523 hub support
	usb: misc: onboard_hub: add support for Microchip USB2412 USB 2.0 hub
	serial: Move uart_change_speed() earlier
	serial: Rename uart_change_speed() to uart_change_line_settings()
	serial: Reduce spinlocked portion of uart_rs485_config()
	serial: 8250: omap: Fix imprecise external abort for omap_8250_pm()
	serial: 8250_omap: Fix errors with no_console_suspend
	iio: core: introduce iio_device_{claim|release}_buffer_mode() APIs
	iio: cros_ec: fix an use-after-free in cros_ec_sensors_push_data()
	iio: adc: ad7192: Simplify using devm_regulator_get_enable()
	iio: adc: ad7192: Correct reference voltage
	pwr-mlxbf: extend Kconfig to include gpio-mlxbf3 dependency
	ARM: dts: ti: omap: Fix noisy serial with overrun-throttle-ms for mapphone
	fs-writeback: do not requeue a clean inode having skipped pages
	btrfs: prevent transaction block reserve underflow when starting transaction
	btrfs: return -EUCLEAN for delayed tree ref with a ref count not equals to 1
	btrfs: initialize start_slot in btrfs_log_prealloc_extents
	i2c: mux: Avoid potential false error message in i2c_mux_add_adapter
	overlayfs: set ctime when setting mtime and atime
	gpio: timberdale: Fix potential deadlock on &tgpio->lock
	ata: libata-core: Fix compilation warning in ata_dev_config_ncq()
	ata: libata-eh: Fix compilation warning in ata_eh_link_report()
	tracing: relax trace_event_eval_update() execution with cond_resched()
	wifi: mwifiex: Sanity check tlv_len and tlv_bitmap_len
	wifi: iwlwifi: Ensure ack flag is properly cleared.
	HID: logitech-hidpp: Add Bluetooth ID for the Logitech M720 Triathlon mouse
	HID: holtek: fix slab-out-of-bounds Write in holtek_kbd_input_event
	Bluetooth: btusb: add shutdown function for QCA6174
	Bluetooth: Avoid redundant authentication
	Bluetooth: hci_core: Fix build warnings
	wifi: cfg80211: Fix 6GHz scan configuration
	wifi: mac80211: work around Cisco AP 9115 VHT MPDU length
	wifi: mac80211: allow transmitting EAPOL frames with tainted key
	wifi: cfg80211: avoid leaking stack data into trace
	regulator/core: Revert "fix kobject release warning and memory leak in regulator_register()"
	sky2: Make sure there is at least one frag_addr available
	ipv4/fib: send notify when delete source address routes
	drm: panel-orientation-quirks: Add quirk for One Mix 2S
	btrfs: fix some -Wmaybe-uninitialized warnings in ioctl.c
	btrfs: error out when COWing block using a stale transaction
	btrfs: error when COWing block from a root that is being deleted
	btrfs: error out when reallocating block for defrag using a stale transaction
	drm/amd/pm: add unique_id for gc 11.0.3
	HID: multitouch: Add required quirk for Synaptics 0xcd7e device
	HID: nintendo: reinitialize USB Pro Controller after resuming from suspend
	platform/x86: touchscreen_dmi: Add info for the Positivo C4128B
	cpufreq: schedutil: Update next_freq when cpufreq_limits change
	fprobe: Pass entry_data to handlers
	fprobe: Add nr_maxactive to specify rethook_node pool size
	fprobe: Fix to ensure the number of active retprobes is not zero
	net: xfrm: skip policies marked as dead while reinserting policies
	xfrm6: fix inet6_dev refcount underflow problem
	net/mlx5: E-switch, register event handler before arming the event
	net/mlx5: Handle fw tracer change ownership event based on MTRC
	net/mlx5e: Don't offload internal port if filter device is out device
	net/tls: split tls_rx_reader_lock
	tcp: allow again tcp_disconnect() when threads are waiting
	ice: Remove redundant pci_enable_pcie_error_reporting()
	Bluetooth: hci_event: Fix using memcmp when comparing keys
	selftests: openvswitch: Add version check for pyroute2
	tcp_bpf: properly release resources on error paths
	net/smc: fix smc clc failed issue when netdevice not in init_net
	mtd: rawnand: qcom: Unmap the right resource upon probe failure
	mtd: rawnand: pl353: Ensure program page operations are successful
	mtd: rawnand: marvell: Ensure program page operations are successful
	mtd: rawnand: arasan: Ensure program page operations are successful
	mtd: spinand: micron: correct bitmask for ecc status
	mtd: physmap-core: Restore map_rom fallback
	dt-bindings: mmc: sdhci-msm: correct minimum number of clocks
	mmc: sdhci-pci-gli: fix LPM negotiation so x86/S0ix SoCs can suspend
	mmc: mtk-sd: Use readl_poll_timeout_atomic in msdc_reset_hw
	mmc: core: sdio: hold retuning if sdio in 1-bit mode
	mmc: core: Capture correct oemid-bits for eMMC cards
	Revert "pinctrl: avoid unsafe code pattern in find_pinctrl()"
	pNFS: Fix a hang in nfs4_evict_inode()
	pNFS/flexfiles: Check the layout validity in ff_layout_mirror_prepare_stats
	NFSv4.1: fixup use EXCHGID4_FLAG_USE_PNFS_DS for DS server
	ACPI: irq: Fix incorrect return value in acpi_register_gsi()
	nfs42: client needs to strip file mode's suid/sgid bit after ALLOCATE op
	nvme: sanitize metadata bounce buffer for reads
	nvme-pci: add BOGUS_NID for Intel 0a54 device
	nvmet-auth: complete a request only after freeing the dhchap pointers
	nvme-rdma: do not try to stop unallocated queues
	KVM: x86/mmu: Stop zapping invalidated TDP MMU roots asynchronously
	HID: input: map battery system charging
	USB: serial: option: add Telit LE910C4-WWX 0x1035 composition
	USB: serial: option: add entry for Sierra EM9191 with new firmware
	USB: serial: option: add Fibocom to DELL custom modem FM101R-GL
	perf: Disallow mis-matched inherited group reads
	s390/pci: fix iommu bitmap allocation
	selftests/ftrace: Add new test case which checks non unique symbol
	s390/cio: fix a memleak in css_alloc_subchannel
	platform/surface: platform_profile: Propagate error if profile registration fails
	platform/x86: intel-uncore-freq: Conditionally create attribute for read frequency
	platform/x86: asus-wmi: Change ASUS_WMI_BRN_DOWN code from 0x20 to 0x2e
	platform/x86: asus-wmi: Only map brightness codes when using asus-wmi backlight control
	platform/x86: asus-wmi: Map 0x2a code, Ignore 0x2b and 0x2c events
	gpio: vf610: set value before the direction to avoid a glitch
	ASoC: pxa: fix a memory leak in probe()
	drm/bridge: ti-sn65dsi86: Associate DSI device lifetime with auxiliary device
	serial: 8250: omap: Move uart_write() inside PM section
	serial: 8250: omap: convert to modern PM ops
	kallsyms: Reduce the memory occupied by kallsyms_seqs_of_names[]
	kallsyms: Add helper kallsyms_on_each_match_symbol()
	tracing/kprobes: Return EADDRNOTAVAIL when func matches several symbols
	gpio: vf610: make irq_chip immutable
	gpio: vf610: mask the gpio irq in system suspend and support wakeup
	phy: mapphone-mdm6600: Fix runtime disable on probe
	phy: mapphone-mdm6600: Fix runtime PM for remove
	phy: mapphone-mdm6600: Fix pinctrl_pm handling for sleep pins
	net: move altnames together with the netdevice
	Bluetooth: hci_sock: fix slab oob read in create_monitor_event
	Bluetooth: hci_sock: Correctly bounds check and pad HCI_MON_NEW_INDEX name
	mptcp: avoid sending RST when closing the initial subflow
	selftests: mptcp: join: correctly check for no RST
	selftests: mptcp: join: no RST when rm subflow/addr
	Linux 6.1.60

Change-Id: I85a246fd8800df019794b531f5befe0a84a3e138
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
This commit is contained in:
Greg Kroah-Hartman 2023-10-26 19:10:00 +00:00
commit 788e35fdea
200 changed files with 1941 additions and 935 deletions


@@ -59,7 +59,7 @@ properties:
     maxItems: 4
 
   clocks:
-    minItems: 3
+    minItems: 2
     items:
       - description: Main peripheral bus clock, PCLK/HCLK - AHB Bus clock
       - description: SDC MMC clock, MCLK


@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 6
 PATCHLEVEL = 1
-SUBLEVEL = 59
+SUBLEVEL = 60
 EXTRAVERSION =
 NAME = Curry Ramen


@@ -640,6 +640,7 @@ &uart1 {
 &uart3 {
 	interrupts-extended = <&wakeupgen GIC_SPI 74 IRQ_TYPE_LEVEL_HIGH
 			       &omap4_pmx_core 0x17c>;
+	overrun-throttle-ms = <500>;
 };
 
 &uart4 {


@@ -545,6 +545,17 @@ static void s390_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
 	}
 }
 
+static unsigned long *bitmap_vzalloc(size_t bits, gfp_t flags)
+{
+	size_t n = BITS_TO_LONGS(bits);
+	size_t bytes;
+
+	if (unlikely(check_mul_overflow(n, sizeof(unsigned long), &bytes)))
+		return NULL;
+
+	return vzalloc(bytes);
+}
+
 int zpci_dma_init_device(struct zpci_dev *zdev)
 {
 	int rc;
@@ -584,13 +595,13 @@ int zpci_dma_init_device(struct zpci_dev *zdev)
 				zdev->end_dma - zdev->start_dma + 1);
 	zdev->end_dma = zdev->start_dma + zdev->iommu_size - 1;
 	zdev->iommu_pages = zdev->iommu_size >> PAGE_SHIFT;
-	zdev->iommu_bitmap = vzalloc(zdev->iommu_pages / 8);
+	zdev->iommu_bitmap = bitmap_vzalloc(zdev->iommu_pages, GFP_KERNEL);
 	if (!zdev->iommu_bitmap) {
 		rc = -ENOMEM;
 		goto free_dma_table;
 	}
 	if (!s390_iommu_strict) {
-		zdev->lazy_bitmap = vzalloc(zdev->iommu_pages / 8);
+		zdev->lazy_bitmap = bitmap_vzalloc(zdev->iommu_pages, GFP_KERNEL);
 		if (!zdev->lazy_bitmap) {
 			rc = -ENOMEM;
 			goto free_bitmap;
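The new bitmap_vzalloc() helper above guards the byte-count multiplication against overflow before allocating. The same pattern can be sketched in plain userspace C; bitmap_zalloc_checked() and the macros below are illustrative stand-ins for the kernel's BITS_TO_LONGS/check_mul_overflow, not kernel API:

```c
#include <limits.h>
#include <stdlib.h>

#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* Overflow-checked analogue of the kernel helper: compute the byte
 * count with __builtin_mul_overflow() and refuse to allocate if it
 * would wrap, instead of silently returning a too-small buffer. */
unsigned long *bitmap_zalloc_checked(size_t bits)
{
	size_t n = BITS_TO_LONGS(bits);
	size_t bytes;

	if (__builtin_mul_overflow(n, sizeof(unsigned long), &bytes))
		return NULL;

	return calloc(1, bytes);	/* zeroed, like vzalloc() */
}
```

Compile it together with a caller; the point of the pattern is that the failure mode is a NULL return, not a wrapped size.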


@@ -103,6 +103,16 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
 	return ES_OK;
 }
 
+static enum es_result vc_ioio_check(struct es_em_ctxt *ctxt, u16 port, size_t size)
+{
+	return ES_OK;
+}
+
+static bool fault_in_kernel_space(unsigned long address)
+{
+	return false;
+}
+
 #undef __init
 #undef __pa
 #define __init


@@ -148,7 +148,8 @@ static inline void fpu_update_guest_xfd(struct fpu_guest *guest_fpu, u64 xfd) {
 static inline void fpu_sync_guest_vmexit_xfd_state(void) { }
 #endif
 
-extern void fpu_copy_guest_fpstate_to_uabi(struct fpu_guest *gfpu, void *buf, unsigned int size, u32 pkru);
+extern void fpu_copy_guest_fpstate_to_uabi(struct fpu_guest *gfpu, void *buf,
+					   unsigned int size, u64 xfeatures, u32 pkru);
 extern int fpu_copy_uabi_to_guest_fpstate(struct fpu_guest *gfpu, const void *buf, u64 xcr0, u32 *vpkru);
 
 static inline void fpstate_set_confidential(struct fpu_guest *gfpu)


@@ -1324,7 +1324,6 @@ struct kvm_arch {
 	 * the thread holds the MMU lock in write mode.
 	 */
 	spinlock_t tdp_mmu_pages_lock;
-	struct workqueue_struct *tdp_mmu_zap_wq;
 #endif /* CONFIG_X86_64 */
 
 	/*
@@ -1727,7 +1726,7 @@ void kvm_mmu_vendor_module_exit(void);
 
 void kvm_mmu_destroy(struct kvm_vcpu *vcpu);
 int kvm_mmu_create(struct kvm_vcpu *vcpu);
-int kvm_mmu_init_vm(struct kvm *kvm);
+void kvm_mmu_init_vm(struct kvm *kvm);
 void kvm_mmu_uninit_vm(struct kvm *kvm);
 void kvm_mmu_after_set_cpuid(struct kvm_vcpu *vcpu);


@@ -259,6 +259,7 @@ enum avic_ipi_failure_cause {
 	AVIC_IPI_FAILURE_TARGET_NOT_RUNNING,
 	AVIC_IPI_FAILURE_INVALID_TARGET,
 	AVIC_IPI_FAILURE_INVALID_BACKING_PAGE,
+	AVIC_IPI_FAILURE_INVALID_IPI_VECTOR,
 };
 
 #define AVIC_PHYSICAL_MAX_INDEX_MASK	GENMASK_ULL(8, 0)


@@ -369,14 +369,15 @@ int fpu_swap_kvm_fpstate(struct fpu_guest *guest_fpu, bool enter_guest)
 EXPORT_SYMBOL_GPL(fpu_swap_kvm_fpstate);
 
 void fpu_copy_guest_fpstate_to_uabi(struct fpu_guest *gfpu, void *buf,
-				    unsigned int size, u32 pkru)
+				    unsigned int size, u64 xfeatures, u32 pkru)
 {
 	struct fpstate *kstate = gfpu->fpstate;
 	union fpregs_state *ustate = buf;
 	struct membuf mb = { .p = buf, .left = size };
 
 	if (cpu_feature_enabled(X86_FEATURE_XSAVE)) {
-		__copy_xstate_to_uabi_buf(mb, kstate, pkru, XSTATE_COPY_XSAVE);
+		__copy_xstate_to_uabi_buf(mb, kstate, xfeatures, pkru,
+					  XSTATE_COPY_XSAVE);
 	} else {
 		memcpy(&ustate->fxsave, &kstate->regs.fxsave,
 		       sizeof(ustate->fxsave));


@@ -1053,6 +1053,7 @@ static void copy_feature(bool from_xstate, struct membuf *to, void *xstate,
  * __copy_xstate_to_uabi_buf - Copy kernel saved xstate to a UABI buffer
  * @to:		membuf descriptor
  * @fpstate:	The fpstate buffer from which to copy
+ * @xfeatures:	The mask of xfeatures to save (XSAVE mode only)
  * @pkru_val:	The PKRU value to store in the PKRU component
  * @copy_mode:	The requested copy mode
  *
@@ -1063,7 +1064,8 @@ static void copy_feature(bool from_xstate, struct membuf *to, void *xstate,
  * It supports partial copy but @to.pos always starts from zero.
  */
 void __copy_xstate_to_uabi_buf(struct membuf to, struct fpstate *fpstate,
-			       u32 pkru_val, enum xstate_copy_mode copy_mode)
+			       u64 xfeatures, u32 pkru_val,
+			       enum xstate_copy_mode copy_mode)
 {
 	const unsigned int off_mxcsr = offsetof(struct fxregs_state, mxcsr);
 	struct xregs_state *xinit = &init_fpstate.regs.xsave;
@@ -1087,7 +1089,7 @@ void __copy_xstate_to_uabi_buf(struct membuf to, struct fpstate *fpstate,
 		break;
 
 	case XSTATE_COPY_XSAVE:
-		header.xfeatures &= fpstate->user_xfeatures;
+		header.xfeatures &= fpstate->user_xfeatures & xfeatures;
 		break;
 	}
 
@@ -1189,6 +1191,7 @@ void copy_xstate_to_uabi_buf(struct membuf to, struct task_struct *tsk,
 			     enum xstate_copy_mode copy_mode)
 {
 	__copy_xstate_to_uabi_buf(to, tsk->thread.fpu.fpstate,
+				  tsk->thread.fpu.fpstate->user_xfeatures,
 				  tsk->thread.pkru, copy_mode);
 }
 
@@ -1540,10 +1543,7 @@ static int fpstate_realloc(u64 xfeatures, unsigned int ksize,
 		fpregs_restore_userregs();
 
 	newfps->xfeatures = curfps->xfeatures | xfeatures;
-
-	if (!guest_fpu)
-		newfps->user_xfeatures = curfps->user_xfeatures | xfeatures;
-
+	newfps->user_xfeatures = curfps->user_xfeatures | xfeatures;
 	newfps->xfd = curfps->xfd & ~xfeatures;
 
 	/* Do the final updates within the locked region */


@@ -43,7 +43,8 @@ enum xstate_copy_mode {
 struct membuf;
 extern void __copy_xstate_to_uabi_buf(struct membuf to, struct fpstate *fpstate,
-				      u32 pkru_val, enum xstate_copy_mode copy_mode);
+				      u64 xfeatures, u32 pkru_val,
+				      enum xstate_copy_mode copy_mode);
 extern void copy_xstate_to_uabi_buf(struct membuf to, struct task_struct *tsk,
 				    enum xstate_copy_mode mode);
 extern int copy_uabi_from_kernel_to_xstate(struct fpstate *fpstate, const void *kbuf, u32 *pkru);


@@ -629,6 +629,23 @@ void __init do_vc_no_ghcb(struct pt_regs *regs, unsigned long exit_code)
 	sev_es_terminate(SEV_TERM_SET_GEN, GHCB_SEV_ES_GEN_REQ);
 }
 
+static enum es_result vc_insn_string_check(struct es_em_ctxt *ctxt,
+					   unsigned long address,
+					   bool write)
+{
+	if (user_mode(ctxt->regs) && fault_in_kernel_space(address)) {
+		ctxt->fi.vector     = X86_TRAP_PF;
+		ctxt->fi.error_code = X86_PF_USER;
+		ctxt->fi.cr2        = address;
+		if (write)
+			ctxt->fi.error_code |= X86_PF_WRITE;
+
+		return ES_EXCEPTION;
+	}
+
+	return ES_OK;
+}
+
 static enum es_result vc_insn_string_read(struct es_em_ctxt *ctxt,
 					  void *src, char *buf,
 					  unsigned int data_size,
@@ -636,7 +653,12 @@ static enum es_result vc_insn_string_read(struct es_em_ctxt *ctxt,
 					  bool backwards)
 {
 	int i, b = backwards ? -1 : 1;
-	enum es_result ret = ES_OK;
+	unsigned long address = (unsigned long)src;
+	enum es_result ret;
+
+	ret = vc_insn_string_check(ctxt, address, false);
+	if (ret != ES_OK)
+		return ret;
 
 	for (i = 0; i < count; i++) {
 		void *s = src + (i * data_size * b);
@@ -657,7 +679,12 @@ static enum es_result vc_insn_string_write(struct es_em_ctxt *ctxt,
 					   bool backwards)
 {
 	int i, s = backwards ? -1 : 1;
-	enum es_result ret = ES_OK;
+	unsigned long address = (unsigned long)dst;
+	enum es_result ret;
+
+	ret = vc_insn_string_check(ctxt, address, true);
+	if (ret != ES_OK)
+		return ret;
 
 	for (i = 0; i < count; i++) {
 		void *d = dst + (i * data_size * s);
@@ -693,6 +720,9 @@ static enum es_result vc_insn_string_write(struct es_em_ctxt *ctxt,
 static enum es_result vc_ioio_exitinfo(struct es_em_ctxt *ctxt, u64 *exitinfo)
 {
 	struct insn *insn = &ctxt->insn;
+	size_t size;
+	u64 port;
+
 	*exitinfo = 0;
 
 	switch (insn->opcode.bytes[0]) {
@@ -701,7 +731,7 @@ static enum es_result vc_ioio_exitinfo(struct es_em_ctxt *ctxt, u64 *exitinfo)
 	case 0x6d:
 		*exitinfo |= IOIO_TYPE_INS;
 		*exitinfo |= IOIO_SEG_ES;
-		*exitinfo |= (ctxt->regs->dx & 0xffff) << 16;
+		port	   = ctxt->regs->dx & 0xffff;
 		break;
 
 	/* OUTS opcodes */
@@ -709,41 +739,43 @@ static enum es_result vc_ioio_exitinfo(struct es_em_ctxt *ctxt, u64 *exitinfo)
 	case 0x6f:
 		*exitinfo |= IOIO_TYPE_OUTS;
 		*exitinfo |= IOIO_SEG_DS;
-		*exitinfo |= (ctxt->regs->dx & 0xffff) << 16;
+		port	   = ctxt->regs->dx & 0xffff;
 		break;
 
 	/* IN immediate opcodes */
 	case 0xe4:
 	case 0xe5:
 		*exitinfo |= IOIO_TYPE_IN;
-		*exitinfo |= (u8)insn->immediate.value << 16;
+		port	   = (u8)insn->immediate.value & 0xffff;
 		break;
 
 	/* OUT immediate opcodes */
 	case 0xe6:
 	case 0xe7:
 		*exitinfo |= IOIO_TYPE_OUT;
-		*exitinfo |= (u8)insn->immediate.value << 16;
+		port	   = (u8)insn->immediate.value & 0xffff;
 		break;
 
 	/* IN register opcodes */
 	case 0xec:
 	case 0xed:
 		*exitinfo |= IOIO_TYPE_IN;
-		*exitinfo |= (ctxt->regs->dx & 0xffff) << 16;
+		port	   = ctxt->regs->dx & 0xffff;
 		break;
 
 	/* OUT register opcodes */
 	case 0xee:
 	case 0xef:
 		*exitinfo |= IOIO_TYPE_OUT;
-		*exitinfo |= (ctxt->regs->dx & 0xffff) << 16;
+		port	   = ctxt->regs->dx & 0xffff;
 		break;
 
 	default:
 		return ES_DECODE_FAILED;
 	}
 
+	*exitinfo |= port << 16;
+
 	switch (insn->opcode.bytes[0]) {
 	case 0x6c:
 	case 0x6e:
@@ -753,12 +785,15 @@ static enum es_result vc_ioio_exitinfo(struct es_em_ctxt *ctxt, u64 *exitinfo)
 	case 0xee:
 		/* Single byte opcodes */
 		*exitinfo |= IOIO_DATA_8;
+		size	   = 1;
 		break;
 	default:
 		/* Length determined by instruction parsing */
 		*exitinfo |= (insn->opnd_bytes == 2) ? IOIO_DATA_16
 						     : IOIO_DATA_32;
+		size	   = (insn->opnd_bytes == 2) ? 2 : 4;
 	}
+
 	switch (insn->addr_bytes) {
 	case 2:
 		*exitinfo |= IOIO_ADDR_16;
@@ -774,7 +809,7 @@ static enum es_result vc_ioio_exitinfo(struct es_em_ctxt *ctxt, u64 *exitinfo)
 	if (insn_has_rep_prefix(insn))
 		*exitinfo |= IOIO_REP;
 
-	return ES_OK;
+	return vc_ioio_check(ctxt, (u16)port, size);
 }
 
 static enum es_result vc_handle_ioio(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
static enum es_result vc_handle_ioio(struct ghcb *ghcb, struct es_em_ctxt *ctxt) static enum es_result vc_handle_ioio(struct ghcb *ghcb, struct es_em_ctxt *ctxt)


@@ -512,6 +512,33 @@ static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt
 	return ES_OK;
 }
 
+static enum es_result vc_ioio_check(struct es_em_ctxt *ctxt, u16 port, size_t size)
+{
+	BUG_ON(size > 4);
+
+	if (user_mode(ctxt->regs)) {
+		struct thread_struct *t = &current->thread;
+		struct io_bitmap *iobm = t->io_bitmap;
+		size_t idx;
+
+		if (!iobm)
+			goto fault;
+
+		for (idx = port; idx < port + size; ++idx) {
+			if (test_bit(idx, iobm->bitmap))
+				goto fault;
+		}
+	}
+
+	return ES_OK;
+
+fault:
+	ctxt->fi.vector = X86_TRAP_GP;
+	ctxt->fi.error_code = 0;
+
+	return ES_EXCEPTION;
+}
+
 /* Include code shared with pre-decompression boot stage */
 #include "sev-shared.c"
 
@@ -1552,6 +1579,9 @@ static enum es_result vc_handle_mmio(struct ghcb *ghcb, struct es_em_ctxt *ctxt)
 		return ES_DECODE_FAILED;
 	}
 
+	if (user_mode(ctxt->regs))
+		return ES_UNSUPPORTED;
+
 	switch (mmio) {
 	case MMIO_WRITE:
 		memcpy(ghcb->shared_buffer, reg_data, bytes);
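The vc_ioio_check() loop above consults the task's I/O permission bitmap, where a *set* bit denies access to a port and a multi-byte access faults if any byte of [port, port + size) is denied. A minimal userspace model of that check (io_port_denied() and io_port_deny() are invented names for this sketch, not kernel API):

```c
#include <limits.h>
#include <stddef.h>

#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))
#define IO_BITMAP_BITS 65536	/* one bit per x86 I/O port */

/* Return nonzero (would raise #GP) if any port in [port, port + size)
 * has its bit set in the permission bitmap. */
int io_port_denied(const unsigned long *bitmap, unsigned int port, size_t size)
{
	for (size_t idx = port; idx < port + size && idx < IO_BITMAP_BITS; ++idx) {
		if (bitmap[idx / BITS_PER_LONG] & (1UL << (idx % BITS_PER_LONG)))
			return 1;
	}
	return 0;
}

/* Mark a single port as denied. */
void io_port_deny(unsigned long *bitmap, unsigned int port)
{
	bitmap[port / BITS_PER_LONG] |= 1UL << (port % BITS_PER_LONG);
}
```

Denying one port in the middle of a 4-byte access makes the whole access fault, which is exactly why the kernel check walks every byte of the range.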


@@ -338,14 +338,6 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	vcpu->arch.guest_supported_xcr0 =
 		cpuid_get_supported_xcr0(vcpu->arch.cpuid_entries, vcpu->arch.cpuid_nent);
 
-	/*
-	 * FP+SSE can always be saved/restored via KVM_{G,S}ET_XSAVE, even if
-	 * XSAVE/XCRO are not exposed to the guest, and even if XSAVE isn't
-	 * supported by the host.
-	 */
-	vcpu->arch.guest_fpu.fpstate->user_xfeatures = vcpu->arch.guest_supported_xcr0 |
-						       XFEATURE_MASK_FPSSE;
-
 	kvm_update_pv_runtime(vcpu);
 
 	vcpu->arch.maxphyaddr = cpuid_query_maxphyaddr(vcpu);


@@ -2535,13 +2535,17 @@ int kvm_apic_local_deliver(struct kvm_lapic *apic, int lvt_type)
 {
 	u32 reg = kvm_lapic_get_reg(apic, lvt_type);
 	int vector, mode, trig_mode;
+	int r;
 
 	if (kvm_apic_hw_enabled(apic) && !(reg & APIC_LVT_MASKED)) {
 		vector = reg & APIC_VECTOR_MASK;
 		mode = reg & APIC_MODE_MASK;
 		trig_mode = reg & APIC_LVT_LEVEL_TRIGGER;
-		return __apic_accept_irq(apic, mode, vector, 1, trig_mode,
-					NULL);
+
+		r = __apic_accept_irq(apic, mode, vector, 1, trig_mode, NULL);
+		if (r && lvt_type == APIC_LVTPC)
+			kvm_lapic_set_reg(apic, APIC_LVTPC, reg | APIC_LVT_MASKED);
+		return r;
 	}
 	return 0;
 }


@@ -5994,19 +5994,16 @@ static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
 	kvm_mmu_zap_all_fast(kvm);
 }
 
-int kvm_mmu_init_vm(struct kvm *kvm)
+void kvm_mmu_init_vm(struct kvm *kvm)
 {
 	struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
-	int r;
 
 	INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
 	INIT_LIST_HEAD(&kvm->arch.zapped_obsolete_pages);
 	INIT_LIST_HEAD(&kvm->arch.lpage_disallowed_mmu_pages);
 	spin_lock_init(&kvm->arch.mmu_unsync_pages_lock);
 
-	r = kvm_mmu_init_tdp_mmu(kvm);
-	if (r < 0)
-		return r;
+	kvm_mmu_init_tdp_mmu(kvm);
 
 	node->track_write = kvm_mmu_pte_write;
 	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
@@ -6019,8 +6016,6 @@ int kvm_mmu_init_vm(struct kvm *kvm)
 
 	kvm->arch.split_desc_cache.kmem_cache = pte_list_desc_cache;
 	kvm->arch.split_desc_cache.gfp_zero = __GFP_ZERO;
-
-	return 0;
 }
 
 static void mmu_free_vm_memory_caches(struct kvm *kvm)


@@ -56,7 +56,12 @@ struct kvm_mmu_page {
 	bool tdp_mmu_page;
 	bool unsync;
 
-	u8 mmu_valid_gen;
+	union {
+		u8 mmu_valid_gen;
+
+		/* Only accessed under slots_lock. */
+		bool tdp_mmu_scheduled_root_to_zap;
+	};
 	bool lpage_disallowed; /* Can't be replaced by an equiv large page */
 
 	/*
@@ -92,13 +97,7 @@ struct kvm_mmu_page {
 		struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */
 		tdp_ptep_t ptep;
 	};
-	union {
-		DECLARE_BITMAP(unsync_child_bitmap, 512);
-		struct {
-			struct work_struct tdp_mmu_async_work;
-			void *tdp_mmu_async_data;
-		};
-	};
+	DECLARE_BITMAP(unsync_child_bitmap, 512);
 
 	struct list_head lpage_disallowed_link;
 #ifdef CONFIG_X86_32


@@ -14,24 +14,16 @@ static bool __read_mostly tdp_mmu_enabled = true;
 module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0644);
 
 /* Initializes the TDP MMU for the VM, if enabled. */
-int kvm_mmu_init_tdp_mmu(struct kvm *kvm)
+void kvm_mmu_init_tdp_mmu(struct kvm *kvm)
 {
-	struct workqueue_struct *wq;
-
 	if (!tdp_enabled || !READ_ONCE(tdp_mmu_enabled))
-		return 0;
-
-	wq = alloc_workqueue("kvm", WQ_UNBOUND|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE, 0);
-	if (!wq)
-		return -ENOMEM;
+		return;
 
 	/* This should not be changed for the lifetime of the VM. */
 	kvm->arch.tdp_mmu_enabled = true;
 	INIT_LIST_HEAD(&kvm->arch.tdp_mmu_roots);
 	spin_lock_init(&kvm->arch.tdp_mmu_pages_lock);
 	INIT_LIST_HEAD(&kvm->arch.tdp_mmu_pages);
-	kvm->arch.tdp_mmu_zap_wq = wq;
-
-	return 1;
 }
 
 /* Arbitrarily returns true so that this may be used in if statements. */
@@ -57,20 +49,15 @@ void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
 	 * ultimately frees all roots.
 	 */
 	kvm_tdp_mmu_invalidate_all_roots(kvm);
-
-	/*
-	 * Destroying a workqueue also first flushes the workqueue, i.e. no
-	 * need to invoke kvm_tdp_mmu_zap_invalidated_roots().
-	 */
-	destroy_workqueue(kvm->arch.tdp_mmu_zap_wq);
+	kvm_tdp_mmu_zap_invalidated_roots(kvm);
 
 	WARN_ON(!list_empty(&kvm->arch.tdp_mmu_pages));
 	WARN_ON(!list_empty(&kvm->arch.tdp_mmu_roots));
 
 	/*
 	 * Ensure that all the outstanding RCU callbacks to free shadow pages
-	 * can run before the VM is torn down.  Work items on tdp_mmu_zap_wq
-	 * can call kvm_tdp_mmu_put_root and create new callbacks.
+	 * can run before the VM is torn down.  Putting the last reference to
+	 * zapped roots will create new callbacks.
 	 */
 	rcu_barrier();
 }
@@ -97,46 +84,6 @@ static void tdp_mmu_free_sp_rcu_callback(struct rcu_head *head)
 	tdp_mmu_free_sp(sp);
 }
 
-static void tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
-			     bool shared);
-
-static void tdp_mmu_zap_root_work(struct work_struct *work)
-{
-	struct kvm_mmu_page *root = container_of(work, struct kvm_mmu_page,
-						 tdp_mmu_async_work);
-	struct kvm *kvm = root->tdp_mmu_async_data;
-
-	read_lock(&kvm->mmu_lock);
-
-	/*
-	 * A TLB flush is not necessary as KVM performs a local TLB flush when
-	 * allocating a new root (see kvm_mmu_load()), and when migrating vCPU
-	 * to a different pCPU.  Note, the local TLB flush on reuse also
-	 * invalidates any paging-structure-cache entries, i.e. TLB entries for
-	 * intermediate paging structures, that may be zapped, as such entries
-	 * are associated with the ASID on both VMX and SVM.
-	 */
-	tdp_mmu_zap_root(kvm, root, true);
-
-	/*
-	 * Drop the refcount using kvm_tdp_mmu_put_root() to test its logic for
-	 * avoiding an infinite loop.  By design, the root is reachable while
-	 * it's being asynchronously zapped, thus a different task can put its
-	 * last reference, i.e. flowing through kvm_tdp_mmu_put_root() for an
-	 * asynchronously zapped root is unavoidable.
-	 */
-	kvm_tdp_mmu_put_root(kvm, root, true);
-
-	read_unlock(&kvm->mmu_lock);
-}
-
-static void tdp_mmu_schedule_zap_root(struct kvm *kvm, struct kvm_mmu_page *root)
-{
-	root->tdp_mmu_async_data = kvm;
-	INIT_WORK(&root->tdp_mmu_async_work, tdp_mmu_zap_root_work);
-	queue_work(kvm->arch.tdp_mmu_zap_wq, &root->tdp_mmu_async_work);
-}
-
 void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
 			  bool shared)
 {
@@ -222,11 +169,11 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
 #define for_each_valid_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared)	\
 	__for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, true)
 
-#define for_each_tdp_mmu_root_yield_safe(_kvm, _root)			\
-	for (_root = tdp_mmu_next_root(_kvm, NULL, false, false);	\
-	     _root;							\
-	     _root = tdp_mmu_next_root(_kvm, _root, false, false))	\
-		if (!kvm_lockdep_assert_mmu_lock_held(_kvm, false)) {	\
+#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _shared)		\
+	for (_root = tdp_mmu_next_root(_kvm, NULL, _shared, false);	\
+	     _root;							\
+	     _root = tdp_mmu_next_root(_kvm, _root, _shared, false))	\
+		if (!kvm_lockdep_assert_mmu_lock_held(_kvm, _shared)) {	\
 		} else
 
 /*
@@ -305,7 +252,7 @@ hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu)
 	 * by a memslot update or by the destruction of the VM.  Initialize the
 	 * refcount to two; one reference for the vCPU, and one reference for
 	 * the TDP MMU itself, which is held until the root is invalidated and
-	 * is ultimately put by tdp_mmu_zap_root_work().
+	 * is ultimately put by kvm_tdp_mmu_zap_invalidated_roots().
 	 */
 	refcount_set(&root->tdp_mmu_root_count, 2);
 
@@ -963,7 +910,7 @@ bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool flush)
 {
 	struct kvm_mmu_page *root;
 
-	for_each_tdp_mmu_root_yield_safe(kvm, root)
+	for_each_tdp_mmu_root_yield_safe(kvm, root, false)
 		flush = tdp_mmu_zap_leafs(kvm, root, start, end, true, flush);
 
 	return flush;
@@ -985,7 +932,7 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm)
 	 * is being destroyed or the userspace VMM has exited.  In both cases,
 	 * KVM_RUN is unreachable, i.e. no vCPUs will ever service the request.
 	 */
-	for_each_tdp_mmu_root_yield_safe(kvm, root)
+	for_each_tdp_mmu_root_yield_safe(kvm, root, false)
 		tdp_mmu_zap_root(kvm, root, false);
 }
 
@@ -995,18 +942,47 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm)
  */
 void kvm_tdp_mmu_zap_invalidated_roots(struct kvm *kvm)
 {
-	flush_workqueue(kvm->arch.tdp_mmu_zap_wq);
+	struct kvm_mmu_page *root;
+
+	read_lock(&kvm->mmu_lock);
+
+	for_each_tdp_mmu_root_yield_safe(kvm, root, true) {
+		if (!root->tdp_mmu_scheduled_root_to_zap)
+			continue;
+
+		root->tdp_mmu_scheduled_root_to_zap = false;
+		KVM_BUG_ON(!root->role.invalid, kvm);
+
+		/*
+		 * A TLB flush is not necessary as KVM performs a local TLB
+		 * flush when allocating a new root (see kvm_mmu_load()), and
+		 * when migrating a vCPU to a different pCPU.  Note, the local
+		 * TLB flush on reuse also invalidates paging-structure-cache
+		 * entries, i.e. TLB entries for intermediate paging structures,
+		 * that may be zapped, as such entries are associated with the
+		 * ASID on both VMX and SVM.
+		 */
+		tdp_mmu_zap_root(kvm, root, true);
+
+		/*
+		 * The reference needs to be put *after* zapping the root, as
+		 * the root must be reachable by mmu_notifiers while it's being
+		 * zapped.
+		 */
+		kvm_tdp_mmu_put_root(kvm, root, true);
+	}
+
+	read_unlock(&kvm->mmu_lock);
 }
 
 /*
  * Mark each TDP MMU root as invalid to prevent vCPUs from reusing a root that
  * is about to be zapped, e.g. in response to a memslots update.  The actual
- * zapping is performed asynchronously.  Using a separate workqueue makes it
- * easy to ensure that the destruction is performed before the "fast zap"
- * completes, without keeping a separate list of invalidated roots; the list is
- * effectively the list of work items in the workqueue.
+ * zapping is done separately so that it happens with mmu_lock held for read,
+ * whereas invalidating roots must be done with mmu_lock held for write (unless
+ * the VM is being destroyed).
  *
- * Note, the asynchronous worker is gifted the TDP MMU's reference.
+ * Note, kvm_tdp_mmu_zap_invalidated_roots() is gifted the TDP MMU's reference.
  * See kvm_tdp_mmu_get_vcpu_root_hpa().
  */
 void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
@@ -1031,19 +1007,20 @@ void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm)
 	/*
 	 * As above, mmu_lock isn't held when destroying the VM!  There can't
 	 * be other references to @kvm, i.e. nothing else can invalidate roots
-	 * or be consuming roots, but walking the list of roots does need to be
-	 * guarded against roots being deleted by the asynchronous zap worker.
+	 * or get/put references to roots.
 	 */
-	rcu_read_lock();
-
-	list_for_each_entry_rcu(root, &kvm->arch.tdp_mmu_roots, link) {
+	list_for_each_entry(root, &kvm->arch.tdp_mmu_roots, link) {
+		/*
+		 * Note, invalid roots can outlive a memslot update!  Invalid
+		 * roots must be *zapped* before the memslot update completes,
+		 * but a different task can acquire a reference and keep the
+		 * root alive after it's been zapped.
+		 */
 		if (!root->role.invalid) {
+			root->tdp_mmu_scheduled_root_to_zap = true;
 			root->role.invalid = true;
-			tdp_mmu_schedule_zap_root(kvm, root);
 		}
 	}
-
-	rcu_read_unlock();
 }
 
 /*


@@ -65,7 +65,7 @@ u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr,
 					u64 *spte);
 
 #ifdef CONFIG_X86_64
-int kvm_mmu_init_tdp_mmu(struct kvm *kvm);
+void kvm_mmu_init_tdp_mmu(struct kvm *kvm);
 void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm);
 
 static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return sp->tdp_mmu_page; }
@@ -86,7 +86,7 @@ static inline bool is_tdp_mmu(struct kvm_mmu *mmu)
 	return sp && is_tdp_mmu_page(sp) && sp->root_count;
 }
 #else
-static inline int kvm_mmu_init_tdp_mmu(struct kvm *kvm) { return 0; }
+static inline void kvm_mmu_init_tdp_mmu(struct kvm *kvm) {}
 static inline void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) {}
 static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; }
 static inline bool is_tdp_mmu(struct kvm_mmu *mmu) { return false; }


@@ -542,8 +542,11 @@ int avic_incomplete_ipi_interception(struct kvm_vcpu *vcpu)
 	case AVIC_IPI_FAILURE_INVALID_BACKING_PAGE:
 		WARN_ONCE(1, "Invalid backing page\n");
 		break;
+	case AVIC_IPI_FAILURE_INVALID_IPI_VECTOR:
+		/* Invalid IPI with vector < 16 */
+		break;
 	default:
-		pr_err("Unknown IPI interception\n");
+		vcpu_unimpl(vcpu, "Unknown avic incomplete IPI interception\n");
 	}
 
 	return 1;


@@ -1164,6 +1164,9 @@ void svm_leave_nested(struct kvm_vcpu *vcpu)
 
 		nested_svm_uninit_mmu_context(vcpu);
 		vmcb_mark_all_dirty(svm->vmcb);
+
+		if (kvm_apicv_activated(vcpu->kvm))
+			kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu);
 	}
 
 	kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);


@@ -5301,26 +5301,37 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
-static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
-					 struct kvm_xsave *guest_xsave)
-{
-	if (fpstate_is_confidential(&vcpu->arch.guest_fpu))
-		return;
-
-	fpu_copy_guest_fpstate_to_uabi(&vcpu->arch.guest_fpu,
-				       guest_xsave->region,
-				       sizeof(guest_xsave->region),
-				       vcpu->arch.pkru);
-}
-
 static void kvm_vcpu_ioctl_x86_get_xsave2(struct kvm_vcpu *vcpu,
 					  u8 *state, unsigned int size)
 {
+	/*
+	 * Only copy state for features that are enabled for the guest.  The
+	 * state itself isn't problematic, but setting bits in the header for
+	 * features that are supported in *this* host but not exposed to the
+	 * guest can result in KVM_SET_XSAVE failing when live migrating to a
+	 * compatible host without the features that are NOT exposed to the
+	 * guest.
+	 *
+	 * FP+SSE can always be saved/restored via KVM_{G,S}ET_XSAVE, even if
+	 * XSAVE/XCR0 are not exposed to the guest, and even if XSAVE isn't
+	 * supported by the host.
+	 */
+	u64 supported_xcr0 = vcpu->arch.guest_supported_xcr0 |
+			     XFEATURE_MASK_FPSSE;
+
 	if (fpstate_is_confidential(&vcpu->arch.guest_fpu))
 		return;
 
-	fpu_copy_guest_fpstate_to_uabi(&vcpu->arch.guest_fpu,
-				       state, size, vcpu->arch.pkru);
+	fpu_copy_guest_fpstate_to_uabi(&vcpu->arch.guest_fpu, state, size,
+				       supported_xcr0, vcpu->arch.pkru);
+}
+
+static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
+					 struct kvm_xsave *guest_xsave)
+{
+	return kvm_vcpu_ioctl_x86_get_xsave2(vcpu, (void *)guest_xsave->region,
+					     sizeof(guest_xsave->region));
 }
 
 static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
@@ -12442,9 +12453,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	if (ret)
 		goto out;
 
-	ret = kvm_mmu_init_vm(kvm);
-	if (ret)
-		goto out_page_track;
+	kvm_mmu_init_vm(kvm);
 
 	ret = static_call(kvm_x86_vm_init)(kvm);
 	if (ret)
@@ -12489,7 +12498,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 
 out_uninit_mmu:
 	kvm_mmu_uninit_vm(kvm);
-out_page_track:
 	kvm_page_track_cleanup(kvm);
 out:
 	return ret;
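The KVM_GET_XSAVE{2} change above boils down to masking the features reported to userspace to those the guest can actually use, plus the always-legal FP+SSE bits. A minimal userspace sketch of that mask (the macro values and helper name are illustrative, not the kernel's internals):

```c
#include <stdint.h>

#define XFEATURE_MASK_FP    (1ull << 0)
#define XFEATURE_MASK_SSE   (1ull << 1)
#define XFEATURE_MASK_FPSSE (XFEATURE_MASK_FP | XFEATURE_MASK_SSE)

/* Constrain the xfeatures copied to the uabi buffer to what the guest
 * supports, so KVM_SET_XSAVE on a migration target lacking the extra
 * host-only features does not fail. FP+SSE are always permitted. */
static uint64_t constrain_xfeatures(uint64_t host_xfeatures,
				    uint64_t guest_supported_xcr0)
{
	uint64_t allowed = guest_supported_xcr0 | XFEATURE_MASK_FPSSE;

	return host_xfeatures & allowed;
}
```

With a host feature set of `0xFF` and a guest limited to bit 2, only bits 0-2 survive the mask.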


@@ -57,6 +57,7 @@ int acpi_register_gsi(struct device *dev, u32 gsi, int trigger,
 		      int polarity)
 {
 	struct irq_fwspec fwspec;
+	unsigned int irq;
 
 	fwspec.fwnode = acpi_get_gsi_domain_id(gsi);
 	if (WARN_ON(!fwspec.fwnode)) {
@@ -68,7 +69,11 @@ int acpi_register_gsi(struct device *dev, u32 gsi, int trigger,
 	fwspec.param[1] = acpi_dev_get_irq_type(trigger, polarity);
 	fwspec.param_count = 2;
 
-	return irq_create_fwspec_mapping(&fwspec);
+	irq = irq_create_fwspec_mapping(&fwspec);
+	if (!irq)
+		return -EINVAL;
+
+	return irq;
 }
 EXPORT_SYMBOL_GPL(acpi_register_gsi);
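The bug fixed above is an error-convention mismatch: `irq_create_fwspec_mapping()` signals failure by returning 0, while callers of `acpi_register_gsi()` expect a negative errno. A tiny standalone sketch of the translation (the helper name is hypothetical):

```c
#include <errno.h>

/* Translate the "0 means failure" convention of the irq mapping layer
 * into the negative-errno convention acpi_register_gsi() callers expect. */
static int gsi_mapping_to_errno(unsigned int mapped_irq)
{
	if (!mapped_irq)
		return -EINVAL;

	return (int)mapped_irq;
}
```

Without this translation, a failed mapping would propagate as "IRQ 0", which drivers then treat as a valid interrupt number.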


@@ -2456,7 +2456,7 @@ static int ata_dev_config_lba(struct ata_device *dev)
 {
 	const u16 *id = dev->id;
 	const char *lba_desc;
-	char ncq_desc[24];
+	char ncq_desc[32];
 	int ret;
 
 	dev->flags |= ATA_DFLAG_LBA;

@@ -2247,7 +2247,7 @@ static void ata_eh_link_report(struct ata_link *link)
 	struct ata_eh_context *ehc = &link->eh_context;
 	struct ata_queued_cmd *qc;
 	const char *frozen, *desc;
-	char tries_buf[6] = "";
+	char tries_buf[16] = "";
 	int tag, nr_failed = 0;
 
 	if (ehc->i.flags & ATA_EHI_QUIET)
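Both libata changes above enlarge stack buffers that `snprintf()` could overflow-truncate (and that newer compilers flag with `-Wformat-truncation`). The detection mechanism is worth recalling: `snprintf()` returns the length the output *would* have had, so a return value greater than or equal to the buffer size means truncation. A sketch (the format string mirrors the driver's `" t%d"`, the helper is hypothetical):

```c
#include <stdio.h>
#include <stddef.h>

/* Returns 1 if formatting " t<tries>" into a buffer of bufsz bytes
 * would truncate, using snprintf()'s return-value contract. */
static int would_truncate(size_t bufsz, int tries)
{
	char buf[64];

	if (bufsz > sizeof(buf))
		bufsz = sizeof(buf);

	int n = snprintf(buf, bufsz, " t%d", tries);

	return n >= (int)bufsz;
}
```

With the old 6-byte buffer, any tries value of five or more digits truncates; 16 bytes covers the full range of `int`.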


@@ -1572,7 +1572,7 @@ static int dev_get_regmap_match(struct device *dev, void *res, void *data)
 
 	/* If the user didn't specify a name match any */
 	if (data)
-		return !strcmp((*r)->name, data);
+		return (*r)->name && !strcmp((*r)->name, data);
 	else
 		return 1;
 }
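The regmap fix guards a classic NULL deref: a regmap registered without a name stores `NULL`, and `strcmp(NULL, data)` crashes when a lookup supplies a name. A standalone sketch of the corrected predicate (the function name is illustrative):

```c
#include <string.h>

/* Match an optional regmap name against a requested name. A NULL
 * map_name (anonymous regmap) never matches a specific request, and
 * a NULL wanted name matches anything. */
static int regmap_name_matches(const char *map_name, const char *wanted)
{
	if (wanted)
		return map_name && !strcmp(map_name, wanted);

	/* If the user didn't specify a name, match any */
	return 1;
}
```

The short-circuit `map_name &&` is the entire fix: it turns "crash on anonymous regmap" into "no match".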


@@ -3984,6 +3984,7 @@ static int btusb_probe(struct usb_interface *intf,
 
 	if (id->driver_info & BTUSB_QCA_ROME) {
 		data->setup_on_usb = btusb_setup_qca;
+		hdev->shutdown = btusb_shutdown_qca;
 		hdev->set_bdaddr = btusb_set_bdaddr_ath3012;
 		hdev->cmd_timeout = btusb_qca_cmd_timeout;
 		set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);


@@ -74,7 +74,10 @@ static int vhci_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
 	struct vhci_data *data = hci_get_drvdata(hdev);
 
 	memcpy(skb_push(skb, 1), &hci_skb_pkt_type(skb), 1);
+
+	mutex_lock(&data->open_mutex);
 	skb_queue_tail(&data->readq, skb);
+	mutex_unlock(&data->open_mutex);
 
 	wake_up_interruptible(&data->read_wait);
 	return 0;
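The vhci race fix makes the sender take the same mutex that the open path holds while setting up the read queue, so a frame cannot be enqueued mid-open. A userspace analogue using a pthread mutex (the struct and field names are stand-ins, not the driver's):

```c
#include <pthread.h>

/* Model of the fix: enqueueing a frame and (re)initializing the queue
 * in open() are serialized on the same mutex, so the producer can never
 * observe a half-initialized queue. */
struct frame_queue {
	pthread_mutex_t open_mutex;
	int depth;			/* stand-in for the skb list */
};

static void queue_frame(struct frame_queue *q)
{
	pthread_mutex_lock(&q->open_mutex);
	q->depth++;			/* skb_queue_tail() in the driver */
	pthread_mutex_unlock(&q->open_mutex);
}
```

The design point is that correctness comes from *sharing* the lock with the open path, not from protecting the queue data structure itself (the skb queue already has its own spinlock).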


@@ -43,9 +43,10 @@ static int timbgpio_update_bit(struct gpio_chip *gpio, unsigned index,
 			       unsigned offset, bool enabled)
 {
 	struct timbgpio *tgpio = gpiochip_get_data(gpio);
+	unsigned long flags;
 	u32 reg;
 
-	spin_lock(&tgpio->lock);
+	spin_lock_irqsave(&tgpio->lock, flags);
 	reg = ioread32(tgpio->membase + offset);
 
 	if (enabled)
@@ -54,7 +55,7 @@ static int timbgpio_update_bit(struct gpio_chip *gpio, unsigned index,
 		reg &= ~(1 << index);
 
 	iowrite32(reg, tgpio->membase + offset);
-	spin_unlock(&tgpio->lock);
+	spin_unlock_irqrestore(&tgpio->lock, flags);
 
 	return 0;
 }
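What the `spin_lock_irqsave()` conversion protects is the read-modify-write of the register: without disabling local interrupts, an IRQ handler on the same CPU could run between `ioread32()` and `iowrite32()` and its update would be lost. The pure RMW step, lifted out as a testable function:

```c
#include <stdint.h>
#include <stdbool.h>

/* The read-modify-write performed by timbgpio_update_bit(); in the
 * driver it now runs under spin_lock_irqsave() so it is atomic with
 * respect to interrupt handlers on the same CPU. */
static uint32_t update_bit(uint32_t reg, unsigned int index, bool enabled)
{
	if (enabled)
		reg |= 1u << index;
	else
		reg &= ~(1u << index);

	return reg;
}
```

`spin_lock()` alone only excludes other CPUs; the `_irqsave` variant is required whenever the same lock (or the same device register) can be touched from interrupt context.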


@@ -30,7 +30,6 @@ struct fsl_gpio_soc_data {
 struct vf610_gpio_port {
 	struct gpio_chip gc;
-	struct irq_chip ic;
 	void __iomem *base;
 	void __iomem *gpio_base;
 	const struct fsl_gpio_soc_data *sdata;
@@ -128,14 +127,14 @@ static int vf610_gpio_direction_output(struct gpio_chip *chip, unsigned gpio,
 	unsigned long mask = BIT(gpio);
 	u32 val;
 
+	vf610_gpio_set(chip, gpio, value);
+
 	if (port->sdata && port->sdata->have_paddr) {
 		val = vf610_gpio_readl(port->gpio_base + GPIO_PDDR);
 		val |= mask;
 		vf610_gpio_writel(val, port->gpio_base + GPIO_PDDR);
 	}
 
-	vf610_gpio_set(chip, gpio, value);
-
 	return pinctrl_gpio_direction_output(chip->base + gpio);
 }
 
@@ -207,20 +206,24 @@ static int vf610_gpio_irq_set_type(struct irq_data *d, u32 type)
 static void vf610_gpio_irq_mask(struct irq_data *d)
 {
-	struct vf610_gpio_port *port =
-		gpiochip_get_data(irq_data_get_irq_chip_data(d));
-	void __iomem *pcr_base = port->base + PORT_PCR(d->hwirq);
+	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+	struct vf610_gpio_port *port = gpiochip_get_data(gc);
+	irq_hw_number_t gpio_num = irqd_to_hwirq(d);
+	void __iomem *pcr_base = port->base + PORT_PCR(gpio_num);
 
 	vf610_gpio_writel(0, pcr_base);
+	gpiochip_disable_irq(gc, gpio_num);
 }
 
 static void vf610_gpio_irq_unmask(struct irq_data *d)
 {
-	struct vf610_gpio_port *port =
-		gpiochip_get_data(irq_data_get_irq_chip_data(d));
-	void __iomem *pcr_base = port->base + PORT_PCR(d->hwirq);
+	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+	struct vf610_gpio_port *port = gpiochip_get_data(gc);
+	irq_hw_number_t gpio_num = irqd_to_hwirq(d);
+	void __iomem *pcr_base = port->base + PORT_PCR(gpio_num);
 
-	vf610_gpio_writel(port->irqc[d->hwirq] << PORT_PCR_IRQC_OFFSET,
+	gpiochip_enable_irq(gc, gpio_num);
+	vf610_gpio_writel(port->irqc[gpio_num] << PORT_PCR_IRQC_OFFSET,
 			  pcr_base);
 }
 
@@ -237,6 +240,18 @@ static int vf610_gpio_irq_set_wake(struct irq_data *d, u32 enable)
 	return 0;
 }
 
+static const struct irq_chip vf610_irqchip = {
+	.name = "gpio-vf610",
+	.irq_ack = vf610_gpio_irq_ack,
+	.irq_mask = vf610_gpio_irq_mask,
+	.irq_unmask = vf610_gpio_irq_unmask,
+	.irq_set_type = vf610_gpio_irq_set_type,
+	.irq_set_wake = vf610_gpio_irq_set_wake,
+	.flags = IRQCHIP_IMMUTABLE | IRQCHIP_MASK_ON_SUSPEND
+	       | IRQCHIP_ENABLE_WAKEUP_ON_SUSPEND,
+	GPIOCHIP_IRQ_RESOURCE_HELPERS,
+};
+
 static void vf610_gpio_disable_clk(void *data)
 {
 	clk_disable_unprepare(data);
@@ -249,7 +264,6 @@ static int vf610_gpio_probe(struct platform_device *pdev)
 	struct vf610_gpio_port *port;
 	struct gpio_chip *gc;
 	struct gpio_irq_chip *girq;
-	struct irq_chip *ic;
 	int i;
 	int ret;
 
@@ -315,14 +329,6 @@ static int vf610_gpio_probe(struct platform_device *pdev)
 	gc->direction_output = vf610_gpio_direction_output;
 	gc->set = vf610_gpio_set;
 
-	ic = &port->ic;
-	ic->name = "gpio-vf610";
-	ic->irq_ack = vf610_gpio_irq_ack;
-	ic->irq_mask = vf610_gpio_irq_mask;
-	ic->irq_unmask = vf610_gpio_irq_unmask;
-	ic->irq_set_type = vf610_gpio_irq_set_type;
-	ic->irq_set_wake = vf610_gpio_irq_set_wake;
-
 	/* Mask all GPIO interrupts */
 	for (i = 0; i < gc->ngpio; i++)
 		vf610_gpio_writel(0, port->base + PORT_PCR(i));
@@ -331,7 +337,7 @@ static int vf610_gpio_probe(struct platform_device *pdev)
 	vf610_gpio_writel(~0, port->base + PORT_ISFR);
 
 	girq = &gc->irq;
-	girq->chip = ic;
+	gpio_irq_chip_set_chip(girq, &vf610_irqchip);
 	girq->parent_handler = vf610_gpio_irq_handler;
 	girq->num_parents = 1;
 	girq->parents = devm_kcalloc(&pdev->dev, 1,


@@ -1991,6 +1991,7 @@ static int default_attr_update(struct amdgpu_device *adev, struct amdgpu_device_
 		case IP_VERSION(11, 0, 0):
 		case IP_VERSION(11, 0, 1):
 		case IP_VERSION(11, 0, 2):
+		case IP_VERSION(11, 0, 3):
 			*states = ATTR_STATE_SUPPORTED;
 			break;
 		default:


@@ -673,7 +673,7 @@ static struct ti_sn65dsi86 *bridge_to_ti_sn65dsi86(struct drm_bridge *bridge)
 	return container_of(bridge, struct ti_sn65dsi86, bridge);
 }
 
-static int ti_sn_attach_host(struct ti_sn65dsi86 *pdata)
+static int ti_sn_attach_host(struct auxiliary_device *adev, struct ti_sn65dsi86 *pdata)
 {
 	int val;
 	struct mipi_dsi_host *host;
@@ -688,7 +688,7 @@ static int ti_sn_attach_host(struct ti_sn65dsi86 *pdata)
 	if (!host)
 		return -EPROBE_DEFER;
 
-	dsi = devm_mipi_dsi_device_register_full(dev, host, &info);
+	dsi = devm_mipi_dsi_device_register_full(&adev->dev, host, &info);
 	if (IS_ERR(dsi))
 		return PTR_ERR(dsi);
 
@@ -706,7 +706,7 @@ static int ti_sn_attach_host(struct ti_sn65dsi86 *pdata)
 
 	pdata->dsi = dsi;
 
-	return devm_mipi_dsi_attach(dev, dsi);
+	return devm_mipi_dsi_attach(&adev->dev, dsi);
 }
 
 static int ti_sn_bridge_attach(struct drm_bridge *bridge,
@@ -1279,9 +1279,9 @@ static int ti_sn_bridge_probe(struct auxiliary_device *adev,
 	struct device_node *np = pdata->dev->of_node;
 	int ret;
 
-	pdata->next_bridge = devm_drm_of_get_bridge(pdata->dev, np, 1, 0);
+	pdata->next_bridge = devm_drm_of_get_bridge(&adev->dev, np, 1, 0);
 	if (IS_ERR(pdata->next_bridge))
-		return dev_err_probe(pdata->dev, PTR_ERR(pdata->next_bridge),
+		return dev_err_probe(&adev->dev, PTR_ERR(pdata->next_bridge),
 				     "failed to create panel bridge\n");
 
 	ti_sn_bridge_parse_lanes(pdata, np);
@@ -1300,9 +1300,9 @@ static int ti_sn_bridge_probe(struct auxiliary_device *adev,
 
 	drm_bridge_add(&pdata->bridge);
 
-	ret = ti_sn_attach_host(pdata);
+	ret = ti_sn_attach_host(adev, pdata);
 	if (ret) {
-		dev_err_probe(pdata->dev, ret, "failed to attach dsi host\n");
+		dev_err_probe(&adev->dev, ret, "failed to attach dsi host\n");
 		goto err_remove_bridge;
 	}


@@ -38,6 +38,14 @@ static const struct drm_dmi_panel_orientation_data gpd_micropc = {
 	.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
 };
 
+static const struct drm_dmi_panel_orientation_data gpd_onemix2s = {
+	.width = 1200,
+	.height = 1920,
+	.bios_dates = (const char * const []){ "05/21/2018", "10/26/2018",
+		"03/04/2019", NULL },
+	.orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
+};
+
 static const struct drm_dmi_panel_orientation_data gpd_pocket = {
 	.width = 1200,
 	.height = 1920,
@@ -401,6 +409,14 @@ static const struct dmi_system_id orientation_data[] = {
 		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "LTH17"),
 		},
 		.driver_data = (void *)&lcd800x1280_rightside_up,
+	}, {	/* One Mix 2S (generic strings, also match on bios date) */
+		.matches = {
+		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Default string"),
+		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Default string"),
+		  DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "Default string"),
+		  DMI_EXACT_MATCH(DMI_BOARD_NAME, "Default string"),
+		},
+		.driver_data = (void *)&gpd_onemix2s,
 	},
 	{}
 };


@@ -235,6 +235,7 @@ static vm_fault_t i915_error_to_vmf_fault(int err)
 	case 0:
 	case -EAGAIN:
 	case -ENOSPC: /* transient failure to evict? */
+	case -ENOBUFS: /* temporarily out of fences? */
 	case -ERESTARTSYS:
 	case -EINTR:
 	case -EBUSY:


@@ -234,6 +234,7 @@ int mtk_drm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map)
 
 	npages = obj->size >> PAGE_SHIFT;
 	mtk_gem->pages = kcalloc(npages, sizeof(*mtk_gem->pages), GFP_KERNEL);
 	if (!mtk_gem->pages) {
+		sg_free_table(sgt);
 		kfree(sgt);
 		return -ENOMEM;
 	}
@@ -243,12 +244,15 @@ int mtk_drm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map)
 	mtk_gem->kvaddr = vmap(mtk_gem->pages, npages, VM_MAP,
 			       pgprot_writecombine(PAGE_KERNEL));
 	if (!mtk_gem->kvaddr) {
+		sg_free_table(sgt);
 		kfree(sgt);
 		kfree(mtk_gem->pages);
 		return -ENOMEM;
 	}
-out:
+	sg_free_table(sgt);
 	kfree(sgt);
+
+out:
 	iosys_map_set_vaddr(map, mtk_gem->kvaddr);
 
 	return 0;


@@ -130,6 +130,10 @@ static int holtek_kbd_input_event(struct input_dev *dev, unsigned int type,
 		return -ENODEV;
 
 	boot_hid = usb_get_intfdata(boot_interface);
+	if (list_empty(&boot_hid->inputs)) {
+		hid_err(hid, "no inputs found\n");
+		return -ENODEV;
+	}
 	boot_hid_input = list_first_entry(&boot_hid->inputs,
 		struct hid_input, list);


@@ -4427,6 +4427,8 @@ static const struct hid_device_id hidpp_devices[] = {
 	  HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb008) },
 	{ /* MX Master mouse over Bluetooth */
 	  HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb012) },
+	{ /* M720 Triathlon mouse over Bluetooth */
+	  HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb015) },
 	{ /* MX Ergo trackball over Bluetooth */
 	  HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb01d) },
 	{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb01e) },


@@ -2144,6 +2144,10 @@ static const struct hid_device_id mt_devices[] = {
 			USB_DEVICE_ID_MTP_STM)},
 
 	/* Synaptics devices */
+	{ .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
+		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
+			USB_VENDOR_ID_SYNAPTICS, 0xcd7e) },
+
 	{ .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
 		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
 			USB_VENDOR_ID_SYNAPTICS, 0xce08) },


@ -2011,7 +2011,9 @@ static int joycon_read_info(struct joycon_ctlr *ctlr)
struct joycon_input_report *report; struct joycon_input_report *report;
req.subcmd_id = JC_SUBCMD_REQ_DEV_INFO; req.subcmd_id = JC_SUBCMD_REQ_DEV_INFO;
mutex_lock(&ctlr->output_mutex);
ret = joycon_send_subcmd(ctlr, &req, 0, HZ); ret = joycon_send_subcmd(ctlr, &req, 0, HZ);
mutex_unlock(&ctlr->output_mutex);
if (ret) { if (ret) {
hid_err(ctlr->hdev, "Failed to get joycon info; ret=%d\n", ret); hid_err(ctlr->hdev, "Failed to get joycon info; ret=%d\n", ret);
return ret; return ret;
@ -2040,6 +2042,85 @@ static int joycon_read_info(struct joycon_ctlr *ctlr)
return 0; return 0;
} }
static int joycon_init(struct hid_device *hdev)
{
struct joycon_ctlr *ctlr = hid_get_drvdata(hdev);
int ret = 0;
mutex_lock(&ctlr->output_mutex);
/* if handshake command fails, assume ble pro controller */
if ((jc_type_is_procon(ctlr) || jc_type_is_chrggrip(ctlr)) &&
!joycon_send_usb(ctlr, JC_USB_CMD_HANDSHAKE, HZ)) {
hid_dbg(hdev, "detected USB controller\n");
/* set baudrate for improved latency */
ret = joycon_send_usb(ctlr, JC_USB_CMD_BAUDRATE_3M, HZ);
if (ret) {
hid_err(hdev, "Failed to set baudrate; ret=%d\n", ret);
goto out_unlock;
}
/* handshake */
ret = joycon_send_usb(ctlr, JC_USB_CMD_HANDSHAKE, HZ);
if (ret) {
hid_err(hdev, "Failed handshake; ret=%d\n", ret);
goto out_unlock;
}
/*
* Set no timeout (to keep controller in USB mode).
* This doesn't send a response, so ignore the timeout.
*/
joycon_send_usb(ctlr, JC_USB_CMD_NO_TIMEOUT, HZ/10);
} else if (jc_type_is_chrggrip(ctlr)) {
hid_err(hdev, "Failed charging grip handshake\n");
ret = -ETIMEDOUT;
goto out_unlock;
}
/* get controller calibration data, and parse it */
ret = joycon_request_calibration(ctlr);
if (ret) {
/*
* We can function with default calibration, but it may be
* inaccurate. Provide a warning, and continue on.
*/
hid_warn(hdev, "Analog stick positions may be inaccurate\n");
}
/* get IMU calibration data, and parse it */
ret = joycon_request_imu_calibration(ctlr);
if (ret) {
/*
* We can function with default calibration, but it may be
* inaccurate. Provide a warning, and continue on.
*/
hid_warn(hdev, "Unable to read IMU calibration data\n");
}
/* Set the reporting mode to 0x30, which is the full report mode */
ret = joycon_set_report_mode(ctlr);
if (ret) {
hid_err(hdev, "Failed to set report mode; ret=%d\n", ret);
goto out_unlock;
}
/* Enable rumble */
ret = joycon_enable_rumble(ctlr);
if (ret) {
hid_err(hdev, "Failed to enable rumble; ret=%d\n", ret);
goto out_unlock;
}
/* Enable the IMU */
ret = joycon_enable_imu(ctlr);
if (ret) {
hid_err(hdev, "Failed to enable the IMU; ret=%d\n", ret);
goto out_unlock;
}
out_unlock:
mutex_unlock(&ctlr->output_mutex);
return ret;
}
/* Common handler for parsing inputs */
static int joycon_ctlr_read_handler(struct joycon_ctlr *ctlr, u8 *data,
				    int size)
@@ -2171,85 +2252,19 @@ static int nintendo_hid_probe(struct hid_device *hdev,
 	hid_device_io_start(hdev);
 
-	/* Initialize the controller */
-	mutex_lock(&ctlr->output_mutex);
-	/* if handshake command fails, assume ble pro controller */
-	if ((jc_type_is_procon(ctlr) || jc_type_is_chrggrip(ctlr)) &&
-	    !joycon_send_usb(ctlr, JC_USB_CMD_HANDSHAKE, HZ)) {
-		hid_dbg(hdev, "detected USB controller\n");
-		/* set baudrate for improved latency */
-		ret = joycon_send_usb(ctlr, JC_USB_CMD_BAUDRATE_3M, HZ);
-		if (ret) {
-			hid_err(hdev, "Failed to set baudrate; ret=%d\n", ret);
-			goto err_mutex;
-		}
-		/* handshake */
-		ret = joycon_send_usb(ctlr, JC_USB_CMD_HANDSHAKE, HZ);
-		if (ret) {
-			hid_err(hdev, "Failed handshake; ret=%d\n", ret);
-			goto err_mutex;
-		}
-		/*
-		 * Set no timeout (to keep controller in USB mode).
-		 * This doesn't send a response, so ignore the timeout.
-		 */
-		joycon_send_usb(ctlr, JC_USB_CMD_NO_TIMEOUT, HZ/10);
-	} else if (jc_type_is_chrggrip(ctlr)) {
-		hid_err(hdev, "Failed charging grip handshake\n");
-		ret = -ETIMEDOUT;
-		goto err_mutex;
-	}
-
-	/* get controller calibration data, and parse it */
-	ret = joycon_request_calibration(ctlr);
-	if (ret) {
-		/*
-		 * We can function with default calibration, but it may be
-		 * inaccurate. Provide a warning, and continue on.
-		 */
-		hid_warn(hdev, "Analog stick positions may be inaccurate\n");
-	}
-
-	/* get IMU calibration data, and parse it */
-	ret = joycon_request_imu_calibration(ctlr);
-	if (ret) {
-		/*
-		 * We can function with default calibration, but it may be
-		 * inaccurate. Provide a warning, and continue on.
-		 */
-		hid_warn(hdev, "Unable to read IMU calibration data\n");
-	}
-
-	/* Set the reporting mode to 0x30, which is the full report mode */
-	ret = joycon_set_report_mode(ctlr);
-	if (ret) {
-		hid_err(hdev, "Failed to set report mode; ret=%d\n", ret);
-		goto err_mutex;
-	}
-
-	/* Enable rumble */
-	ret = joycon_enable_rumble(ctlr);
-	if (ret) {
-		hid_err(hdev, "Failed to enable rumble; ret=%d\n", ret);
-		goto err_mutex;
-	}
-
-	/* Enable the IMU */
-	ret = joycon_enable_imu(ctlr);
-	if (ret) {
-		hid_err(hdev, "Failed to enable the IMU; ret=%d\n", ret);
-		goto err_mutex;
-	}
+	ret = joycon_init(hdev);
+	if (ret) {
+		hid_err(hdev, "Failed to initialize controller; ret=%d\n", ret);
+		goto err_close;
+	}
 
 	ret = joycon_read_info(ctlr);
 	if (ret) {
 		hid_err(hdev, "Failed to retrieve controller info; ret=%d\n",
 			ret);
-		goto err_mutex;
+		goto err_close;
 	}
-	mutex_unlock(&ctlr->output_mutex);
 
 	/* Initialize the leds */
 	ret = joycon_leds_create(ctlr);
 	if (ret) {
@@ -2275,8 +2290,6 @@ static int nintendo_hid_probe(struct hid_device *hdev,
 	hid_dbg(hdev, "probe - success\n");
 	return 0;
 
-err_mutex:
-	mutex_unlock(&ctlr->output_mutex);
 err_close:
 	hid_hw_close(hdev);
 err_stop:
@@ -2306,6 +2319,20 @@ static void nintendo_hid_remove(struct hid_device *hdev)
 	hid_hw_stop(hdev);
 }
 
+#ifdef CONFIG_PM
+
+static int nintendo_hid_resume(struct hid_device *hdev)
+{
+	int ret = joycon_init(hdev);
+
+	if (ret)
+		hid_err(hdev, "Failed to restore controller after resume");
+
+	return ret;
+}
+
+#endif
+
 static const struct hid_device_id nintendo_hid_devices[] = {
 	{ HID_USB_DEVICE(USB_VENDOR_ID_NINTENDO,
 			 USB_DEVICE_ID_NINTENDO_PROCON) },
@@ -2327,6 +2354,10 @@ static struct hid_driver nintendo_hid_driver = {
 	.probe = nintendo_hid_probe,
 	.remove = nintendo_hid_remove,
 	.raw_event = nintendo_hid_event,
+#ifdef CONFIG_PM
+	.resume = nintendo_hid_resume,
+#endif
 };
 module_hid_driver(nintendo_hid_driver);


@@ -341,7 +341,7 @@ int i2c_mux_add_adapter(struct i2c_mux_core *muxc,
 	priv->adap.lock_ops = &i2c_parent_lock_ops;
 
 	/* Sanity check on class */
-	if (i2c_mux_parent_classes(parent) & class)
+	if (i2c_mux_parent_classes(parent) & class & ~I2C_CLASS_DEPRECATED)
 		dev_err(&parent->dev,
 			"Segment %d behind mux can't share classes with ancestors\n",
 			chan_id);


@@ -177,7 +177,7 @@ struct ad7192_chip_info {
 struct ad7192_state {
 	const struct ad7192_chip_info *chip_info;
 	struct regulator *avdd;
-	struct regulator *dvdd;
+	struct regulator *vref;
 	struct clk *mclk;
 	u16 int_vref_mv;
 	u32 fclk;
@@ -1011,24 +1011,34 @@ static int ad7192_probe(struct spi_device *spi)
 	if (ret)
 		return ret;
 
-	st->dvdd = devm_regulator_get(&spi->dev, "dvdd");
-	if (IS_ERR(st->dvdd))
-		return PTR_ERR(st->dvdd);
+	ret = devm_regulator_get_enable(&spi->dev, "dvdd");
+	if (ret)
+		return dev_err_probe(&spi->dev, ret, "Failed to enable specified DVdd supply\n");
 
-	ret = regulator_enable(st->dvdd);
-	if (ret) {
-		dev_err(&spi->dev, "Failed to enable specified DVdd supply\n");
-		return ret;
-	}
+	st->vref = devm_regulator_get_optional(&spi->dev, "vref");
+	if (IS_ERR(st->vref)) {
+		if (PTR_ERR(st->vref) != -ENODEV)
+			return PTR_ERR(st->vref);
 
-	ret = devm_add_action_or_reset(&spi->dev, ad7192_reg_disable, st->dvdd);
-	if (ret)
-		return ret;
+		ret = regulator_get_voltage(st->avdd);
+		if (ret < 0)
+			return dev_err_probe(&spi->dev, ret,
+					     "Device tree error, AVdd voltage undefined\n");
+	} else {
+		ret = regulator_enable(st->vref);
+		if (ret) {
+			dev_err(&spi->dev, "Failed to enable specified Vref supply\n");
+			return ret;
+		}
 
-	ret = regulator_get_voltage(st->avdd);
-	if (ret < 0) {
-		dev_err(&spi->dev, "Device tree error, reference voltage undefined\n");
-		return ret;
-	}
+		ret = devm_add_action_or_reset(&spi->dev, ad7192_reg_disable, st->vref);
+		if (ret)
+			return ret;
+
+		ret = regulator_get_voltage(st->vref);
+		if (ret < 0)
+			return dev_err_probe(&spi->dev, ret,
+					     "Device tree error, Vref voltage undefined\n");
+	}
 	st->int_vref_mv = ret / 1000;


@@ -190,8 +190,11 @@ int cros_ec_sensors_push_data(struct iio_dev *indio_dev,
 	/*
 	 * Ignore samples if the buffer is not set: it is needed if the ODR is
 	 * set but the buffer is not enabled yet.
+	 *
+	 * Note: iio_device_claim_buffer_mode() returns -EBUSY if the buffer
+	 * is not enabled.
 	 */
-	if (!iio_buffer_enabled(indio_dev))
+	if (iio_device_claim_buffer_mode(indio_dev) < 0)
 		return 0;
 
 	out = (s16 *)st->samples;
@@ -210,6 +213,7 @@ int cros_ec_sensors_push_data(struct iio_dev *indio_dev,
 	iio_push_to_buffers_with_timestamp(indio_dev, st->samples,
 					   timestamp + delta);
+	iio_device_release_buffer_mode(indio_dev);
 
 	return 0;
 }
 EXPORT_SYMBOL_GPL(cros_ec_sensors_push_data);


@@ -2084,6 +2084,44 @@ void iio_device_release_direct_mode(struct iio_dev *indio_dev)
 }
 EXPORT_SYMBOL_GPL(iio_device_release_direct_mode);
/**
* iio_device_claim_buffer_mode - Keep device in buffer mode
* @indio_dev: the iio_dev associated with the device
*
* If the device is in buffer mode it is guaranteed to stay
* that way until iio_device_release_buffer_mode() is called.
*
* Use with iio_device_release_buffer_mode().
*
* Returns: 0 on success, -EBUSY on failure.
*/
int iio_device_claim_buffer_mode(struct iio_dev *indio_dev)
{
mutex_lock(&indio_dev->mlock);
if (iio_buffer_enabled(indio_dev))
return 0;
mutex_unlock(&indio_dev->mlock);
return -EBUSY;
}
EXPORT_SYMBOL_GPL(iio_device_claim_buffer_mode);
/**
* iio_device_release_buffer_mode - releases claim on buffer mode
* @indio_dev: the iio_dev associated with the device
*
* Release the claim. Device is no longer guaranteed to stay
* in buffer mode.
*
* Use with iio_device_claim_buffer_mode().
*/
void iio_device_release_buffer_mode(struct iio_dev *indio_dev)
{
mutex_unlock(&indio_dev->mlock);
}
EXPORT_SYMBOL_GPL(iio_device_release_buffer_mode);
 /**
  * iio_device_get_current_mode() - helper function providing read-only access to
  *				   the opaque @currentmode variable


@@ -105,7 +105,7 @@ static int mmc_decode_cid(struct mmc_card *card)
 	case 3: /* MMC v3.1 - v3.3 */
 	case 4: /* MMC v4 */
 		card->cid.manfid	= UNSTUFF_BITS(resp, 120, 8);
-		card->cid.oemid		= UNSTUFF_BITS(resp, 104, 16);
+		card->cid.oemid		= UNSTUFF_BITS(resp, 104, 8);
 		card->cid.prod_name[0]	= UNSTUFF_BITS(resp, 96, 8);
 		card->cid.prod_name[1]	= UNSTUFF_BITS(resp, 88, 8);
 		card->cid.prod_name[2]	= UNSTUFF_BITS(resp, 80, 8);


@@ -1089,8 +1089,14 @@ static int mmc_sdio_resume(struct mmc_host *host)
 		}
 		err = mmc_sdio_reinit_card(host);
 	} else if (mmc_card_wake_sdio_irq(host)) {
-		/* We may have switched to 1-bit mode during suspend */
+		/*
+		 * We may have switched to 1-bit mode during suspend,
+		 * need to hold retuning, because tuning only supprt
+		 * 4-bit mode or 8 bit mode.
+		 */
+		mmc_retune_hold_now(host);
 		err = sdio_enable_4bit_bus(host->card);
+		mmc_retune_release(host);
 	}
 
 	if (err)


@@ -655,10 +655,10 @@ static void msdc_reset_hw(struct msdc_host *host)
 	u32 val;
 
 	sdr_set_bits(host->base + MSDC_CFG, MSDC_CFG_RST);
-	readl_poll_timeout(host->base + MSDC_CFG, val, !(val & MSDC_CFG_RST), 0, 0);
+	readl_poll_timeout_atomic(host->base + MSDC_CFG, val, !(val & MSDC_CFG_RST), 0, 0);
 
 	sdr_set_bits(host->base + MSDC_FIFOCS, MSDC_FIFOCS_CLR);
-	readl_poll_timeout(host->base + MSDC_FIFOCS, val,
-			   !(val & MSDC_FIFOCS_CLR), 0, 0);
+	readl_poll_timeout_atomic(host->base + MSDC_FIFOCS, val,
+				  !(val & MSDC_FIFOCS_CLR), 0, 0);
 
 	val = readl(host->base + MSDC_INT);


@@ -756,42 +756,6 @@ static u32 sdhci_gl9750_readl(struct sdhci_host *host, int reg)
 	return value;
 }
 
-#ifdef CONFIG_PM_SLEEP
-static int sdhci_pci_gli_resume(struct sdhci_pci_chip *chip)
-{
-	struct sdhci_pci_slot *slot = chip->slots[0];
-
-	pci_free_irq_vectors(slot->chip->pdev);
-	gli_pcie_enable_msi(slot);
-
-	return sdhci_pci_resume_host(chip);
-}
-
-static int sdhci_cqhci_gli_resume(struct sdhci_pci_chip *chip)
-{
-	struct sdhci_pci_slot *slot = chip->slots[0];
-	int ret;
-
-	ret = sdhci_pci_gli_resume(chip);
-	if (ret)
-		return ret;
-
-	return cqhci_resume(slot->host->mmc);
-}
-
-static int sdhci_cqhci_gli_suspend(struct sdhci_pci_chip *chip)
-{
-	struct sdhci_pci_slot *slot = chip->slots[0];
-	int ret;
-
-	ret = cqhci_suspend(slot->host->mmc);
-	if (ret)
-		return ret;
-
-	return sdhci_suspend_host(slot->host);
-}
-#endif
-
 static void gl9763e_hs400_enhanced_strobe(struct mmc_host *mmc,
 					  struct mmc_ios *ios)
 {
@@ -1040,6 +1004,70 @@ static int gl9763e_runtime_resume(struct sdhci_pci_chip *chip)
 }
 #endif
#ifdef CONFIG_PM_SLEEP
static int sdhci_pci_gli_resume(struct sdhci_pci_chip *chip)
{
struct sdhci_pci_slot *slot = chip->slots[0];
pci_free_irq_vectors(slot->chip->pdev);
gli_pcie_enable_msi(slot);
return sdhci_pci_resume_host(chip);
}
static int gl9763e_resume(struct sdhci_pci_chip *chip)
{
struct sdhci_pci_slot *slot = chip->slots[0];
int ret;
ret = sdhci_pci_gli_resume(chip);
if (ret)
return ret;
ret = cqhci_resume(slot->host->mmc);
if (ret)
return ret;
/*
* Disable LPM negotiation to bring device back in sync
* with its runtime_pm state.
*/
gl9763e_set_low_power_negotiation(slot, false);
return 0;
}
static int gl9763e_suspend(struct sdhci_pci_chip *chip)
{
struct sdhci_pci_slot *slot = chip->slots[0];
int ret;
/*
* Certain SoCs can suspend only with the bus in low-
* power state, notably x86 SoCs when using S0ix.
* Re-enable LPM negotiation to allow entering L1 state
* and entering system suspend.
*/
gl9763e_set_low_power_negotiation(slot, true);
ret = cqhci_suspend(slot->host->mmc);
if (ret)
goto err_suspend;
ret = sdhci_suspend_host(slot->host);
if (ret)
goto err_suspend_host;
return 0;
err_suspend_host:
cqhci_resume(slot->host->mmc);
err_suspend:
gl9763e_set_low_power_negotiation(slot, false);
return ret;
}
#endif
 static int gli_probe_slot_gl9763e(struct sdhci_pci_slot *slot)
 {
 	struct pci_dev *pdev = slot->chip->pdev;
@@ -1147,8 +1175,8 @@ const struct sdhci_pci_fixes sdhci_gl9763e = {
 	.probe_slot	= gli_probe_slot_gl9763e,
 	.ops            = &sdhci_gl9763e_ops,
 #ifdef CONFIG_PM_SLEEP
-	.resume		= sdhci_cqhci_gli_resume,
-	.suspend	= sdhci_cqhci_gli_suspend,
+	.resume		= gl9763e_resume,
+	.suspend	= gl9763e_suspend,
 #endif
 #ifdef CONFIG_PM
 	.runtime_suspend = gl9763e_runtime_suspend,


@@ -552,6 +552,17 @@ static int physmap_flash_probe(struct platform_device *dev)
 		if (info->probe_type) {
 			info->mtds[i] = do_map_probe(info->probe_type,
 						     &info->maps[i]);
+
+			/* Fall back to mapping region as ROM */
+			if (!info->mtds[i] && IS_ENABLED(CONFIG_MTD_ROM) &&
+			    strcmp(info->probe_type, "map_rom")) {
+				dev_warn(&dev->dev,
+					 "map_probe() failed for type %s\n",
+					 info->probe_type);
+
+				info->mtds[i] = do_map_probe("map_rom",
+							     &info->maps[i]);
+			}
 		} else {
 			int j;


@@ -515,6 +515,7 @@ static int anfc_write_page_hw_ecc(struct nand_chip *chip, const u8 *buf,
 	struct mtd_info *mtd = nand_to_mtd(chip);
 	unsigned int len = mtd->writesize + (oob_required ? mtd->oobsize : 0);
 	dma_addr_t dma_addr;
+	u8 status;
 	int ret;
 	struct anfc_op nfc_op = {
 		.pkt_reg =
@@ -561,12 +562,23 @@ static int anfc_write_page_hw_ecc(struct nand_chip *chip, const u8 *buf,
 	}
 
 	/* Spare data is not protected */
-	if (oob_required)
+	if (oob_required) {
 		ret = nand_write_oob_std(chip, page);
+		if (ret)
+			return ret;
+	}
 
-	return ret;
+	/* Check write status on the chip side */
+	ret = nand_status_op(chip, &status);
+	if (ret)
+		return ret;
+
+	if (status & NAND_STATUS_FAIL)
+		return -EIO;
+
+	return 0;
 }
 
 static int anfc_sel_write_page_hw_ecc(struct nand_chip *chip, const u8 *buf,
 				      int oob_required, int page)
 {


@@ -1154,6 +1154,7 @@ static int marvell_nfc_hw_ecc_hmg_do_write_page(struct nand_chip *chip,
 		.ndcb[2] = NDCB2_ADDR5_PAGE(page),
 	};
 	unsigned int oob_bytes = lt->spare_bytes + (raw ? lt->ecc_bytes : 0);
+	u8 status;
 	int ret;
 
 	/* NFCv2 needs more information about the operation being executed */
@@ -1187,7 +1188,18 @@ static int marvell_nfc_hw_ecc_hmg_do_write_page(struct nand_chip *chip,
 
 	ret = marvell_nfc_wait_op(chip,
 				  PSEC_TO_MSEC(sdr->tPROG_max));
-	return ret;
+	if (ret)
+		return ret;
+
+	/* Check write status on the chip side */
+	ret = nand_status_op(chip, &status);
+	if (ret)
+		return ret;
+
+	if (status & NAND_STATUS_FAIL)
+		return -EIO;
+
+	return 0;
 }
 
 static int marvell_nfc_hw_ecc_hmg_write_page_raw(struct nand_chip *chip,
@@ -1616,6 +1628,7 @@ static int marvell_nfc_hw_ecc_bch_write_page(struct nand_chip *chip,
 	int data_len = lt->data_bytes;
 	int spare_len = lt->spare_bytes;
 	int chunk, ret;
+	u8 status;
 
 	marvell_nfc_select_target(chip, chip->cur_cs);
 
@@ -1652,6 +1665,14 @@ static int marvell_nfc_hw_ecc_bch_write_page(struct nand_chip *chip,
 	if (ret)
 		return ret;
 
+	/* Check write status on the chip side */
+	ret = nand_status_op(chip, &status);
+	if (ret)
+		return ret;
+
+	if (status & NAND_STATUS_FAIL)
+		return -EIO;
+
 	return 0;
 }


@@ -513,6 +513,7 @@ static int pl35x_nand_write_page_hwecc(struct nand_chip *chip,
 	u32 addr1 = 0, addr2 = 0, row;
 	u32 cmd_addr;
 	int i, ret;
+	u8 status;
 
 	ret = pl35x_smc_set_ecc_mode(nfc, chip, PL35X_SMC_ECC_CFG_MODE_APB);
 	if (ret)
@@ -565,6 +566,14 @@ static int pl35x_nand_write_page_hwecc(struct nand_chip *chip,
 	if (ret)
 		goto disable_ecc_engine;
 
+	/* Check write status on the chip side */
+	ret = nand_status_op(chip, &status);
+	if (ret)
+		goto disable_ecc_engine;
+
+	if (status & NAND_STATUS_FAIL)
+		ret = -EIO;
+
 disable_ecc_engine:
 	pl35x_smc_set_ecc_mode(nfc, chip, PL35X_SMC_ECC_CFG_MODE_BYPASS);


@@ -3310,7 +3310,7 @@ static int qcom_nandc_probe(struct platform_device *pdev)
 err_aon_clk:
 	clk_disable_unprepare(nandc->core_clk);
 err_core_clk:
-	dma_unmap_resource(dev, res->start, resource_size(res),
+	dma_unmap_resource(dev, nandc->base_dma, resource_size(res),
 			   DMA_BIDIRECTIONAL, 0);
 	return ret;
 }


@@ -12,7 +12,7 @@
 
 #define SPINAND_MFR_MICRON		0x2c
 
-#define MICRON_STATUS_ECC_MASK		GENMASK(7, 4)
+#define MICRON_STATUS_ECC_MASK		GENMASK(6, 4)
 #define MICRON_STATUS_ECC_NO_BITFLIPS	(0 << 4)
 #define MICRON_STATUS_ECC_1TO3_BITFLIPS	(1 << 4)
 #define MICRON_STATUS_ECC_4TO6_BITFLIPS	(3 << 4)


@@ -3990,7 +3990,7 @@ static inline const void *bond_pull_data(struct sk_buff *skb,
 	if (likely(n <= hlen))
 		return data;
 	else if (skb && likely(pskb_may_pull(skb, n)))
-		return skb->head;
+		return skb->data;
 
 	return NULL;
 }


@@ -617,17 +617,16 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
 	dn = of_find_compatible_node(NULL, NULL, "brcm,unimac-mdio");
 	priv->master_mii_bus = of_mdio_find_bus(dn);
 	if (!priv->master_mii_bus) {
-		of_node_put(dn);
-		return -EPROBE_DEFER;
+		err = -EPROBE_DEFER;
+		goto err_of_node_put;
 	}
 
-	get_device(&priv->master_mii_bus->dev);
 	priv->master_mii_dn = dn;
 
 	priv->slave_mii_bus = mdiobus_alloc();
 	if (!priv->slave_mii_bus) {
-		of_node_put(dn);
-		return -ENOMEM;
+		err = -ENOMEM;
+		goto err_put_master_mii_bus_dev;
 	}
 
 	priv->slave_mii_bus->priv = priv;
@@ -684,11 +683,17 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
 	}
 
 	err = mdiobus_register(priv->slave_mii_bus);
-	if (err && dn) {
-		mdiobus_free(priv->slave_mii_bus);
-		of_node_put(dn);
-	}
+	if (err && dn)
+		goto err_free_slave_mii_bus;
 
+	return 0;
+
+err_free_slave_mii_bus:
+	mdiobus_free(priv->slave_mii_bus);
+err_put_master_mii_bus_dev:
+	put_device(&priv->master_mii_bus->dev);
+err_of_node_put:
+	of_node_put(dn);
 	return err;
 }
 
@@ -696,6 +701,7 @@ static void bcm_sf2_mdio_unregister(struct bcm_sf2_priv *priv)
 {
 	mdiobus_unregister(priv->slave_mii_bus);
 	mdiobus_free(priv->slave_mii_bus);
+	put_device(&priv->master_mii_bus->dev);
 	of_node_put(priv->master_mii_dn);
 }


@@ -911,7 +911,7 @@ static int csk_wait_memory(struct chtls_dev *cdev,
 			   struct sock *sk, long *timeo_p)
 {
 	DEFINE_WAIT_FUNC(wait, woken_wake_function);
-	int err = 0;
+	int ret, err = 0;
 	long current_timeo;
 	long vm_wait = 0;
 	bool noblock;
@@ -942,10 +942,13 @@ static int csk_wait_memory(struct chtls_dev *cdev,
 		set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
 		sk->sk_write_pending++;
-		sk_wait_event(sk, &current_timeo, sk->sk_err ||
-			      (sk->sk_shutdown & SEND_SHUTDOWN) ||
-			      (csk_mem_free(cdev, sk) && !vm_wait), &wait);
+		ret = sk_wait_event(sk, &current_timeo, sk->sk_err ||
+				    (sk->sk_shutdown & SEND_SHUTDOWN) ||
+				    (csk_mem_free(cdev, sk) && !vm_wait),
+				    &wait);
 		sk->sk_write_pending--;
+		if (ret < 0)
+			goto do_error;
 
 		if (vm_wait) {
 			vm_wait -= current_timeo;
@@ -1438,6 +1441,7 @@ static int chtls_pt_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 	int copied = 0;
 	int target;
 	long timeo;
+	int ret;
 
 	buffers_freed = 0;
 
@@ -1513,7 +1517,11 @@ static int chtls_pt_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 		if (copied >= target)
 			break;
 		chtls_cleanup_rbuf(sk, copied);
-		sk_wait_data(sk, &timeo, NULL);
+		ret = sk_wait_data(sk, &timeo, NULL);
+		if (ret < 0) {
+			copied = copied ? : ret;
+			goto unlock;
+		}
 		continue;
 found_ok_skb:
 		if (!skb->len) {
@@ -1608,6 +1616,8 @@ static int chtls_pt_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 
 	if (buffers_freed)
 		chtls_cleanup_rbuf(sk, copied);
+
+unlock:
 	release_sock(sk);
 	return copied;
 }
@@ -1624,6 +1634,7 @@ static int peekmsg(struct sock *sk, struct msghdr *msg,
 	int copied = 0;
 	size_t avail;	/* amount of available data in current skb */
 	long timeo;
+	int ret;
 
 	lock_sock(sk);
 	timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
@@ -1675,7 +1686,12 @@ static int peekmsg(struct sock *sk, struct msghdr *msg,
 			release_sock(sk);
 			lock_sock(sk);
 		} else {
-			sk_wait_data(sk, &timeo, NULL);
+			ret = sk_wait_data(sk, &timeo, NULL);
+			if (ret < 0) {
+				/* here 'copied' is 0 due to previous checks */
+				copied = ret;
+				break;
+			}
 		}
 
 		if (unlikely(peek_seq != tp->copied_seq)) {
@@ -1746,6 +1762,7 @@ int chtls_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 	int copied = 0;
 	long timeo;
 	int target;	/* Read at least this many bytes */
+	int ret;
 
 	buffers_freed = 0;
 
@@ -1837,7 +1854,11 @@ int chtls_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 		if (copied >= target)
 			break;
 		chtls_cleanup_rbuf(sk, copied);
-		sk_wait_data(sk, &timeo, NULL);
+		ret = sk_wait_data(sk, &timeo, NULL);
+		if (ret < 0) {
+			copied = copied ? : ret;
+			goto unlock;
+		}
 		continue;
 
 found_ok_skb:
@@ -1906,6 +1927,7 @@ int chtls_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 
 	if (buffers_freed)
 		chtls_cleanup_rbuf(sk, copied);
+unlock:
 	release_sock(sk);
 	return copied;
 }


@@ -1082,7 +1082,7 @@ void i40e_clear_hw(struct i40e_hw *hw)
 		     I40E_PFLAN_QALLOC_FIRSTQ_SHIFT;
 	j = (val & I40E_PFLAN_QALLOC_LASTQ_MASK) >>
 	    I40E_PFLAN_QALLOC_LASTQ_SHIFT;
-	if (val & I40E_PFLAN_QALLOC_VALID_MASK)
+	if (val & I40E_PFLAN_QALLOC_VALID_MASK && j >= base_queue)
 		num_queues = (j - base_queue) + 1;
 	else
 		num_queues = 0;
@@ -1092,7 +1092,7 @@ void i40e_clear_hw(struct i40e_hw *hw)
 	    I40E_PF_VT_PFALLOC_FIRSTVF_SHIFT;
 	j = (val & I40E_PF_VT_PFALLOC_LASTVF_MASK) >>
 	    I40E_PF_VT_PFALLOC_LASTVF_SHIFT;
-	if (val & I40E_PF_VT_PFALLOC_VALID_MASK)
+	if (val & I40E_PF_VT_PFALLOC_VALID_MASK && j >= i)
 		num_vfs = (j - i) + 1;
 	else
 		num_vfs = 0;


@@ -1100,8 +1100,7 @@ static void ice_set_rss_vsi_ctx(struct ice_vsi_ctx *ctxt, struct ice_vsi *vsi)
 	ctxt->info.q_opt_rss = ((lut_type << ICE_AQ_VSI_Q_OPT_RSS_LUT_S) &
 				ICE_AQ_VSI_Q_OPT_RSS_LUT_M) |
-			       ((hash_type << ICE_AQ_VSI_Q_OPT_RSS_HASH_S) &
-				ICE_AQ_VSI_Q_OPT_RSS_HASH_M);
+			       (hash_type & ICE_AQ_VSI_Q_OPT_RSS_HASH_M);
 }
 
 static void


@@ -6,6 +6,7 @@
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
 #include <generated/utsrelease.h>
+#include <linux/crash_dump.h>
 #include "ice.h"
 #include "ice_base.h"
 #include "ice_lib.h"
@@ -4681,6 +4682,20 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
 		return -EINVAL;
 	}
 
+	/* when under a kdump kernel initiate a reset before enabling the
+	 * device in order to clear out any pending DMA transactions. These
+	 * transactions can cause some systems to machine check when doing
+	 * the pcim_enable_device() below.
+	 */
+	if (is_kdump_kernel()) {
+		pci_save_state(pdev);
+		pci_clear_master(pdev);
+		err = pcie_flr(pdev);
+		if (err)
+			return err;
+		pci_restore_state(pdev);
+	}
+
 	/* this driver uses devres, see
 	 * Documentation/driver-api/driver-model/devres.rst
 	 */
@@ -4708,7 +4723,6 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
 		return err;
 	}
 
-	pci_enable_pcie_error_reporting(pdev);
 	pci_set_master(pdev);
 
 	pf->pdev = pdev;
@@ -5001,7 +5015,6 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
 	ice_devlink_destroy_regions(pf);
 	ice_deinit_hw(hw);
 err_exit_unroll:
-	pci_disable_pcie_error_reporting(pdev);
 	pci_disable_device(pdev);
 	return err;
 }
@@ -5127,7 +5140,6 @@ static void ice_remove(struct pci_dev *pdev)
 		ice_reset(&pf->hw, ICE_RESET_PFR);
 	pci_wait_for_pending_transaction(pdev);
 	ice_clear_interrupt_scheme(pf);
-	pci_disable_pcie_error_reporting(pdev);
 	pci_disable_device(pdev);
 }


@@ -183,9 +183,11 @@ struct igc_adapter {
 	u32 max_frame_size;
 	u32 min_frame_size;
 
+	int tc_setup_type;
 	ktime_t base_time;
 	ktime_t cycle_time;
 	bool qbv_enable;
+	u32 qbv_config_change_errors;
 
 	/* OS defined structs */
 	struct pci_dev *pdev;
@@ -228,6 +230,10 @@ struct igc_adapter {
 	struct ptp_clock *ptp_clock;
 	struct ptp_clock_info ptp_caps;
 	struct work_struct ptp_tx_work;
+	/* Access to ptp_tx_skb and ptp_tx_start are protected by the
+	 * ptp_tx_lock.
+	 */
+	spinlock_t ptp_tx_lock;
 	struct sk_buff *ptp_tx_skb;
 	struct hwtstamp_config tstamp_config;
 	unsigned long ptp_tx_start;
@@ -429,7 +435,6 @@ enum igc_state_t {
 	__IGC_TESTING,
 	__IGC_RESETTING,
 	__IGC_DOWN,
-	__IGC_PTP_TX_IN_PROGRESS,
 };
 
 enum igc_tx_flags {


@@ -396,6 +396,35 @@ void igc_rx_fifo_flush_base(struct igc_hw *hw)
 	rd32(IGC_MPC);
 }
bool igc_is_device_id_i225(struct igc_hw *hw)
{
switch (hw->device_id) {
case IGC_DEV_ID_I225_LM:
case IGC_DEV_ID_I225_V:
case IGC_DEV_ID_I225_I:
case IGC_DEV_ID_I225_K:
case IGC_DEV_ID_I225_K2:
case IGC_DEV_ID_I225_LMVP:
case IGC_DEV_ID_I225_IT:
return true;
default:
return false;
}
}
bool igc_is_device_id_i226(struct igc_hw *hw)
{
switch (hw->device_id) {
case IGC_DEV_ID_I226_LM:
case IGC_DEV_ID_I226_V:
case IGC_DEV_ID_I226_K:
case IGC_DEV_ID_I226_IT:
return true;
default:
return false;
}
}
 static struct igc_mac_operations igc_mac_ops_base = {
 	.init_hw = igc_init_hw_base,
 	.check_for_link = igc_check_for_copper_link,


@@ -7,6 +7,8 @@
 /* forward declaration */
 void igc_rx_fifo_flush_base(struct igc_hw *hw);
 void igc_power_down_phy_copper_base(struct igc_hw *hw);
+bool igc_is_device_id_i225(struct igc_hw *hw);
+bool igc_is_device_id_i226(struct igc_hw *hw);
 
 /* Transmit Descriptor - Advanced */
 union igc_adv_tx_desc {


@@ -515,6 +515,7 @@
 /* Transmit Scheduling */
 #define IGC_TQAVCTRL_TRANSMIT_MODE_TSN	0x00000001
 #define IGC_TQAVCTRL_ENHANCED_QAV	0x00000008
+#define IGC_TQAVCTRL_FUTSCDDIS		0x00000080
 
 #define IGC_TXQCTL_QUEUE_MODE_LAUNCHT	0x00000001
 #define IGC_TXQCTL_STRICT_CYCLE		0x00000002


@@ -67,6 +67,7 @@ static const struct igc_stats igc_gstrings_stats[] = {
 	IGC_STAT("rx_hwtstamp_cleared", rx_hwtstamp_cleared),
 	IGC_STAT("tx_lpi_counter", stats.tlpic),
 	IGC_STAT("rx_lpi_counter", stats.rlpic),
+	IGC_STAT("qbv_config_change_errors", qbv_config_change_errors),
 };
 
 #define IGC_NETDEV_STAT(_net_stat) { \


@@ -1606,9 +1606,10 @@ static netdev_tx_t igc_xmit_frame_ring(struct sk_buff *skb,
 		 * the other timer registers before skipping the
 		 * timestamping request.
 		 */
-		if (adapter->tstamp_config.tx_type == HWTSTAMP_TX_ON &&
-		    !test_and_set_bit_lock(__IGC_PTP_TX_IN_PROGRESS,
-					   &adapter->state)) {
+		unsigned long flags;
+
+		spin_lock_irqsave(&adapter->ptp_tx_lock, flags);
+		if (adapter->tstamp_config.tx_type == HWTSTAMP_TX_ON && !adapter->ptp_tx_skb) {
 			skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
 			tx_flags |= IGC_TX_FLAGS_TSTAMP;
@@ -1617,6 +1618,8 @@ static netdev_tx_t igc_xmit_frame_ring(struct sk_buff *skb,
 		} else {
 			adapter->tx_hwtstamp_skipped++;
 		}
+		spin_unlock_irqrestore(&adapter->ptp_tx_lock, flags);
 	}
 	if (skb_vlan_tag_present(skb)) {
@@ -6035,6 +6038,7 @@ static bool validate_schedule(struct igc_adapter *adapter,
 			      const struct tc_taprio_qopt_offload *qopt)
 {
 	int queue_uses[IGC_MAX_TX_QUEUES] = { };
+	struct igc_hw *hw = &adapter->hw;
 	struct timespec64 now;
 	size_t n;
@@ -6047,8 +6051,10 @@ static bool validate_schedule(struct igc_adapter *adapter,
 	 * in the future, it will hold all the packets until that
 	 * time, causing a lot of TX Hangs, so to avoid that, we
 	 * reject schedules that would start in the future.
+	 * Note: Limitation above is no longer in i226.
 	 */
-	if (!is_base_time_past(qopt->base_time, &now))
+	if (!is_base_time_past(qopt->base_time, &now) &&
+	    igc_is_device_id_i225(hw))
 		return false;
 	for (n = 0; n < qopt->num_entries; n++) {
@@ -6103,6 +6109,7 @@ static int igc_tsn_clear_schedule(struct igc_adapter *adapter)
 	adapter->base_time = 0;
 	adapter->cycle_time = NSEC_PER_SEC;
+	adapter->qbv_config_change_errors = 0;
 	for (i = 0; i < adapter->num_tx_queues; i++) {
 		struct igc_ring *ring = adapter->tx_ring[i];
@@ -6118,6 +6125,7 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
 				 struct tc_taprio_qopt_offload *qopt)
 {
 	bool queue_configured[IGC_MAX_TX_QUEUES] = { };
+	struct igc_hw *hw = &adapter->hw;
 	u32 start_time = 0, end_time = 0;
 	size_t n;
 	int i;
@@ -6130,7 +6138,7 @@ static int igc_save_qbv_schedule(struct igc_adapter *adapter,
 	if (qopt->base_time < 0)
 		return -ERANGE;
-	if (adapter->base_time)
+	if (igc_is_device_id_i225(hw) && adapter->base_time)
 		return -EALREADY;
 	if (!validate_schedule(adapter, qopt))
@@ -6283,6 +6291,8 @@ static int igc_setup_tc(struct net_device *dev, enum tc_setup_type type,
 {
 	struct igc_adapter *adapter = netdev_priv(dev);
+	adapter->tc_setup_type = type;
+
 	switch (type) {
 	case TC_SETUP_QDISC_TAPRIO:
 		return igc_tsn_enable_qbv_scheduling(adapter, type_data);


@@ -622,6 +622,7 @@ static int igc_ptp_set_timestamp_mode(struct igc_adapter *adapter,
 	return 0;
 }
+/* Requires adapter->ptp_tx_lock held by caller. */
 static void igc_ptp_tx_timeout(struct igc_adapter *adapter)
 {
 	struct igc_hw *hw = &adapter->hw;
@@ -629,7 +630,6 @@ static void igc_ptp_tx_timeout(struct igc_adapter *adapter)
 	dev_kfree_skb_any(adapter->ptp_tx_skb);
 	adapter->ptp_tx_skb = NULL;
 	adapter->tx_hwtstamp_timeouts++;
-	clear_bit_unlock(__IGC_PTP_TX_IN_PROGRESS, &adapter->state);
 	/* Clear the tx valid bit in TSYNCTXCTL register to enable interrupt. */
 	rd32(IGC_TXSTMPH);
 	netdev_warn(adapter->netdev, "Tx timestamp timeout\n");
@@ -637,20 +637,20 @@ static void igc_ptp_tx_timeout(struct igc_adapter *adapter)
 void igc_ptp_tx_hang(struct igc_adapter *adapter)
 {
-	bool timeout = time_is_before_jiffies(adapter->ptp_tx_start +
-					      IGC_PTP_TX_TIMEOUT);
+	unsigned long flags;
-	if (!test_bit(__IGC_PTP_TX_IN_PROGRESS, &adapter->state))
-		return;
+	spin_lock_irqsave(&adapter->ptp_tx_lock, flags);
+	if (!adapter->ptp_tx_skb)
+		goto unlock;
+	if (time_is_after_jiffies(adapter->ptp_tx_start + IGC_PTP_TX_TIMEOUT))
+		goto unlock;
-	/* If we haven't received a timestamp within the timeout, it is
-	 * reasonable to assume that it will never occur, so we can unlock the
-	 * timestamp bit when this occurs.
-	 */
-	if (timeout) {
-		cancel_work_sync(&adapter->ptp_tx_work);
 	igc_ptp_tx_timeout(adapter);
-	}
+unlock:
+	spin_unlock_irqrestore(&adapter->ptp_tx_lock, flags);
 }
 /**
@@ -660,6 +660,8 @@ void igc_ptp_tx_hang(struct igc_adapter *adapter)
  * If we were asked to do hardware stamping and such a time stamp is
  * available, then it must have been for this skb here because we only
  * allow only one such packet into the queue.
+ *
+ * Context: Expects adapter->ptp_tx_lock to be held by caller.
  */
 static void igc_ptp_tx_hwtstamp(struct igc_adapter *adapter)
 {
@@ -695,13 +697,7 @@ static void igc_ptp_tx_hwtstamp(struct igc_adapter *adapter)
 	shhwtstamps.hwtstamp =
 		ktime_add_ns(shhwtstamps.hwtstamp, adjust);
-	/* Clear the lock early before calling skb_tstamp_tx so that
-	 * applications are not woken up before the lock bit is clear. We use
-	 * a copy of the skb pointer to ensure other threads can't change it
-	 * while we're notifying the stack.
-	 */
 	adapter->ptp_tx_skb = NULL;
-	clear_bit_unlock(__IGC_PTP_TX_IN_PROGRESS, &adapter->state);
 	/* Notify the stack and free the skb after we've unlocked */
 	skb_tstamp_tx(skb, &shhwtstamps);
@@ -712,24 +708,33 @@ static void igc_ptp_tx_hwtstamp(struct igc_adapter *adapter)
  * igc_ptp_tx_work
  * @work: pointer to work struct
  *
- * This work function polls the TSYNCTXCTL valid bit to determine when a
- * timestamp has been taken for the current stored skb.
+ * This work function checks the TSYNCTXCTL valid bit to determine when
+ * a timestamp has been taken for the current stored skb.
  */
 static void igc_ptp_tx_work(struct work_struct *work)
 {
 	struct igc_adapter *adapter = container_of(work, struct igc_adapter,
 						   ptp_tx_work);
 	struct igc_hw *hw = &adapter->hw;
+	unsigned long flags;
 	u32 tsynctxctl;
-	if (!test_bit(__IGC_PTP_TX_IN_PROGRESS, &adapter->state))
-		return;
+	spin_lock_irqsave(&adapter->ptp_tx_lock, flags);
+	if (!adapter->ptp_tx_skb)
+		goto unlock;
 	tsynctxctl = rd32(IGC_TSYNCTXCTL);
-	if (WARN_ON_ONCE(!(tsynctxctl & IGC_TSYNCTXCTL_TXTT_0)))
-		return;
+	tsynctxctl &= IGC_TSYNCTXCTL_TXTT_0;
+	if (!tsynctxctl) {
+		WARN_ONCE(1, "Received a TSTAMP interrupt but no TSTAMP is ready.\n");
+		goto unlock;
+	}
 	igc_ptp_tx_hwtstamp(adapter);
+unlock:
+	spin_unlock_irqrestore(&adapter->ptp_tx_lock, flags);
 }
 /**
@@ -978,6 +983,7 @@ void igc_ptp_init(struct igc_adapter *adapter)
 		return;
 	}
+	spin_lock_init(&adapter->ptp_tx_lock);
 	spin_lock_init(&adapter->tmreg_lock);
 	INIT_WORK(&adapter->ptp_tx_work, igc_ptp_tx_work);
@@ -1042,7 +1048,6 @@ void igc_ptp_suspend(struct igc_adapter *adapter)
 	cancel_work_sync(&adapter->ptp_tx_work);
 	dev_kfree_skb_any(adapter->ptp_tx_skb);
 	adapter->ptp_tx_skb = NULL;
-	clear_bit_unlock(__IGC_PTP_TX_IN_PROGRESS, &adapter->state);
 	if (pci_device_is_present(adapter->pdev)) {
 		igc_ptp_time_save(adapter);


@@ -2,6 +2,7 @@
 /* Copyright (c) 2019 Intel Corporation */
 #include "igc.h"
+#include "igc_hw.h"
 #include "igc_tsn.h"
 static bool is_any_launchtime(struct igc_adapter *adapter)
@@ -62,7 +63,8 @@ static int igc_tsn_disable_offload(struct igc_adapter *adapter)
 	tqavctrl = rd32(IGC_TQAVCTRL);
 	tqavctrl &= ~(IGC_TQAVCTRL_TRANSMIT_MODE_TSN |
-		      IGC_TQAVCTRL_ENHANCED_QAV);
+		      IGC_TQAVCTRL_ENHANCED_QAV | IGC_TQAVCTRL_FUTSCDDIS);
 	wr32(IGC_TQAVCTRL, tqavctrl);
 	for (i = 0; i < adapter->num_tx_queues; i++) {
@@ -82,25 +84,16 @@ static int igc_tsn_disable_offload(struct igc_adapter *adapter)
 static int igc_tsn_enable_offload(struct igc_adapter *adapter)
 {
 	struct igc_hw *hw = &adapter->hw;
+	bool tsn_mode_reconfig = false;
 	u32 tqavctrl, baset_l, baset_h;
 	u32 sec, nsec, cycle;
 	ktime_t base_time, systim;
 	int i;
-	cycle = adapter->cycle_time;
-	base_time = adapter->base_time;
 	wr32(IGC_TSAUXC, 0);
 	wr32(IGC_DTXMXPKTSZ, IGC_DTXMXPKTSZ_TSN);
 	wr32(IGC_TXPBS, IGC_TXPBSIZE_TSN);
-	tqavctrl = rd32(IGC_TQAVCTRL);
-	tqavctrl |= IGC_TQAVCTRL_TRANSMIT_MODE_TSN | IGC_TQAVCTRL_ENHANCED_QAV;
-	wr32(IGC_TQAVCTRL, tqavctrl);
-	wr32(IGC_QBVCYCLET_S, cycle);
-	wr32(IGC_QBVCYCLET, cycle);
 	for (i = 0; i < adapter->num_tx_queues; i++) {
 		struct igc_ring *ring = adapter->tx_ring[i];
 		u32 txqctl = 0;
@@ -203,21 +196,58 @@ static int igc_tsn_enable_offload(struct igc_adapter *adapter)
 		wr32(IGC_TXQCTL(i), txqctl);
 	}
+	tqavctrl = rd32(IGC_TQAVCTRL) & ~IGC_TQAVCTRL_FUTSCDDIS;
+	if (tqavctrl & IGC_TQAVCTRL_TRANSMIT_MODE_TSN)
+		tsn_mode_reconfig = true;
+	tqavctrl |= IGC_TQAVCTRL_TRANSMIT_MODE_TSN | IGC_TQAVCTRL_ENHANCED_QAV;
+	cycle = adapter->cycle_time;
+	base_time = adapter->base_time;
 	nsec = rd32(IGC_SYSTIML);
 	sec = rd32(IGC_SYSTIMH);
 	systim = ktime_set(sec, nsec);
 	if (ktime_compare(systim, base_time) > 0) {
-		s64 n;
+		s64 n = div64_s64(ktime_sub_ns(systim, base_time), cycle);
-		n = div64_s64(ktime_sub_ns(systim, base_time), cycle);
 		base_time = ktime_add_ns(base_time, (n + 1) * cycle);
+
+		/* Increase the counter if scheduling into the past while
+		 * Gate Control List (GCL) is running.
+		 */
+		if ((rd32(IGC_BASET_H) || rd32(IGC_BASET_L)) &&
+		    (adapter->tc_setup_type == TC_SETUP_QDISC_TAPRIO) &&
+		    tsn_mode_reconfig)
+			adapter->qbv_config_change_errors++;
+	} else {
+		/* According to datasheet section 7.5.2.9.3.3, FutScdDis bit
+		 * has to be configured before the cycle time and base time.
+		 * Tx won't hang if there is a GCL is already running,
+		 * so in this case we don't need to set FutScdDis.
+		 */
+		if (igc_is_device_id_i226(hw) &&
+		    !(rd32(IGC_BASET_H) || rd32(IGC_BASET_L)))
+			tqavctrl |= IGC_TQAVCTRL_FUTSCDDIS;
 	}
-	baset_h = div_s64_rem(base_time, NSEC_PER_SEC, &baset_l);
+	wr32(IGC_TQAVCTRL, tqavctrl);
+
+	wr32(IGC_QBVCYCLET_S, cycle);
+	wr32(IGC_QBVCYCLET, cycle);
+
+	baset_h = div_s64_rem(base_time, NSEC_PER_SEC, &baset_l);
 	wr32(IGC_BASET_H, baset_h);
+
+	/* In i226, Future base time is only supported when FutScdDis bit
+	 * is enabled and only active for re-configuration.
+	 * In this case, initialize the base time with zero to create
+	 * "re-configuration" scenario then only set the desired base time.
+	 */
+	if (tqavctrl & IGC_TQAVCTRL_FUTSCDDIS)
+		wr32(IGC_BASET_L, 0);
 	wr32(IGC_BASET_L, baset_l);
 	return 0;
@@ -244,17 +274,14 @@ int igc_tsn_reset(struct igc_adapter *adapter)
 int igc_tsn_offload_apply(struct igc_adapter *adapter)
 {
-	int err;
+	struct igc_hw *hw = &adapter->hw;
-	if (netif_running(adapter->netdev)) {
+	if (netif_running(adapter->netdev) && igc_is_device_id_i225(hw)) {
 		schedule_work(&adapter->reset_task);
 		return 0;
 	}
-	err = igc_tsn_enable_offload(adapter);
+	igc_tsn_reset(adapter);
-	if (err < 0)
-		return err;
-	adapter->flags = igc_tsn_new_flags(adapter);
 	return 0;
 }


@@ -707,20 +707,19 @@ static netdev_tx_t octep_start_xmit(struct sk_buff *skb,
 		hw_desc->dptr = tx_buffer->sglist_dma;
 	}
-	/* Flush the hw descriptor before writing to doorbell */
-	wmb();
+	netdev_tx_sent_queue(iq->netdev_q, skb->len);
+	skb_tx_timestamp(skb);
-	/* Ring Doorbell to notify the NIC there is a new packet */
-	writel(1, iq->doorbell_reg);
 	atomic_inc(&iq->instr_pending);
 	wi++;
 	if (wi == iq->max_count)
 		wi = 0;
 	iq->host_write_index = wi;
+
+	/* Flush the hw descriptor before writing to doorbell */
+	wmb();
-	netdev_tx_sent_queue(iq->netdev_q, skb->len);
+	/* Ring Doorbell to notify the NIC there is a new packet */
+	writel(1, iq->doorbell_reg);
 	iq->stats.instr_posted++;
-	skb_tx_timestamp(skb);
 	return NETDEV_TX_OK;
 dma_map_sg_err:


@@ -2195,7 +2195,7 @@ struct rx_ring_info {
 	struct sk_buff *skb;
 	dma_addr_t data_addr;
 	DEFINE_DMA_UNMAP_LEN(data_size);
-	dma_addr_t frag_addr[ETH_JUMBO_MTU >> PAGE_SHIFT];
+	dma_addr_t frag_addr[ETH_JUMBO_MTU >> PAGE_SHIFT ?: 1];
 };
 enum flow_control {


@@ -821,7 +821,7 @@ static void mlx5_fw_tracer_ownership_change(struct work_struct *work)
 	mlx5_core_dbg(tracer->dev, "FWTracer: ownership changed, current=(%d)\n", tracer->owner);
 	if (tracer->owner) {
-		tracer->owner = false;
+		mlx5_fw_tracer_ownership_acquire(tracer);
 		return;
 	}


@@ -23,7 +23,8 @@ static int mlx5e_set_int_port_tunnel(struct mlx5e_priv *priv,
 	route_dev = dev_get_by_index(dev_net(e->out_dev), e->route_dev_ifindex);
-	if (!route_dev || !netif_is_ovs_master(route_dev))
+	if (!route_dev || !netif_is_ovs_master(route_dev) ||
+	    attr->parse_attr->filter_dev == e->out_dev)
 		goto out;
 	err = mlx5e_set_fwd_to_int_port_actions(priv, attr, e->route_dev_ifindex,


@@ -969,11 +969,8 @@ const u32 *mlx5_esw_query_functions(struct mlx5_core_dev *dev)
 	return ERR_PTR(err);
 }
-static void mlx5_eswitch_event_handlers_register(struct mlx5_eswitch *esw)
+static void mlx5_eswitch_event_handler_register(struct mlx5_eswitch *esw)
 {
-	MLX5_NB_INIT(&esw->nb, eswitch_vport_event, NIC_VPORT_CHANGE);
-	mlx5_eq_notifier_register(esw->dev, &esw->nb);
 	if (esw->mode == MLX5_ESWITCH_OFFLOADS && mlx5_eswitch_is_funcs_handler(esw->dev)) {
 		MLX5_NB_INIT(&esw->esw_funcs.nb, mlx5_esw_funcs_changed_handler,
 			     ESW_FUNCTIONS_CHANGED);
@@ -981,13 +978,11 @@ static void mlx5_eswitch_event_handlers_register(struct mlx5_eswitch *esw)
 	}
 }
-static void mlx5_eswitch_event_handlers_unregister(struct mlx5_eswitch *esw)
+static void mlx5_eswitch_event_handler_unregister(struct mlx5_eswitch *esw)
 {
 	if (esw->mode == MLX5_ESWITCH_OFFLOADS && mlx5_eswitch_is_funcs_handler(esw->dev))
 		mlx5_eq_notifier_unregister(esw->dev, &esw->esw_funcs.nb);
-	mlx5_eq_notifier_unregister(esw->dev, &esw->nb);
 	flush_workqueue(esw->work_queue);
 }
@@ -1273,6 +1268,9 @@ int mlx5_eswitch_enable_locked(struct mlx5_eswitch *esw, int num_vfs)
 	mlx5_eswitch_update_num_of_vfs(esw, num_vfs);
+	MLX5_NB_INIT(&esw->nb, eswitch_vport_event, NIC_VPORT_CHANGE);
+	mlx5_eq_notifier_register(esw->dev, &esw->nb);
+
 	if (esw->mode == MLX5_ESWITCH_LEGACY) {
 		err = esw_legacy_enable(esw);
 	} else {
@@ -1285,7 +1283,7 @@ int mlx5_eswitch_enable_locked(struct mlx5_eswitch *esw, int num_vfs)
 	esw->fdb_table.flags |= MLX5_ESW_FDB_CREATED;
-	mlx5_eswitch_event_handlers_register(esw);
+	mlx5_eswitch_event_handler_register(esw);
 	esw_info(esw->dev, "Enable: mode(%s), nvfs(%d), active vports(%d)\n",
 		 esw->mode == MLX5_ESWITCH_LEGACY ? "LEGACY" : "OFFLOADS",
@@ -1394,7 +1392,8 @@ void mlx5_eswitch_disable_locked(struct mlx5_eswitch *esw)
 	 */
 	mlx5_esw_mode_change_notify(esw, MLX5_ESWITCH_LEGACY);
-	mlx5_eswitch_event_handlers_unregister(esw);
+	mlx5_eq_notifier_unregister(esw->dev, &esw->nb);
+	mlx5_eswitch_event_handler_unregister(esw);
 	esw_info(esw->dev, "Disable: mode(%s), nvfs(%d), active vports(%d)\n",
 		 esw->mode == MLX5_ESWITCH_LEGACY ? "LEGACY" : "OFFLOADS",


@@ -113,7 +113,10 @@ static void qed_ll2b_complete_tx_packet(void *cxt,
 static int qed_ll2_alloc_buffer(struct qed_dev *cdev,
 				u8 **data, dma_addr_t *phys_addr)
 {
-	*data = kmalloc(cdev->ll2->rx_size, GFP_ATOMIC);
+	size_t size = cdev->ll2->rx_size + NET_SKB_PAD +
+		      SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+
+	*data = kmalloc(size, GFP_ATOMIC);
 	if (!(*data)) {
 		DP_INFO(cdev, "Failed to allocate LL2 buffer data\n");
 		return -ENOMEM;
@@ -2590,7 +2593,7 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
 	INIT_LIST_HEAD(&cdev->ll2->list);
 	spin_lock_init(&cdev->ll2->lock);
-	cdev->ll2->rx_size = NET_SKB_PAD + ETH_HLEN +
+	cdev->ll2->rx_size = PRM_DMA_PAD_BYTES_NUM + ETH_HLEN +
 			     L1_CACHE_BYTES + params->mtu;
 	/* Allocate memory for LL2.
/* Allocate memory for LL2. /* Allocate memory for LL2.


@@ -907,6 +907,9 @@ static void bcm7xxx_28nm_remove(struct phy_device *phydev)
 	.name = _name, \
 	/* PHY_BASIC_FEATURES */ \
 	.flags = PHY_IS_INTERNAL, \
+	.get_sset_count = bcm_phy_get_sset_count, \
+	.get_strings = bcm_phy_get_strings, \
+	.get_stats = bcm7xxx_28nm_get_phy_stats, \
 	.probe = bcm7xxx_28nm_probe, \
 	.remove = bcm7xxx_28nm_remove, \
 	.config_init = bcm7xxx_16nm_ephy_config_init, \


@@ -3056,10 +3056,11 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
 	struct net *net = sock_net(&tfile->sk);
 	struct tun_struct *tun;
 	void __user* argp = (void __user*)arg;
-	unsigned int ifindex, carrier;
+	unsigned int carrier;
 	struct ifreq ifr;
 	kuid_t owner;
 	kgid_t group;
+	int ifindex;
 	int sndbuf;
 	int vnet_hdr_sz;
 	int le;
@@ -3115,7 +3116,9 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
 		ret = -EFAULT;
 		if (copy_from_user(&ifindex, argp, sizeof(ifindex)))
 			goto unlock;
+		ret = -EINVAL;
+		if (ifindex < 0)
+			goto unlock;
 		ret = 0;
 		tfile->ifindex = ifindex;
 		goto unlock;


@@ -897,7 +897,7 @@ static int smsc95xx_reset(struct usbnet *dev)
 	if (timeout >= 100) {
 		netdev_warn(dev->net, "timeout waiting for completion of Lite Reset\n");
-		return ret;
+		return -ETIMEDOUT;
 	}
 	ret = smsc95xx_set_mac_address(dev);


@@ -1585,6 +1585,7 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
 		iwl_trans_free_tx_cmd(mvm->trans, info->driver_data[1]);
 		memset(&info->status, 0, sizeof(info->status));
+		info->flags &= ~(IEEE80211_TX_STAT_ACK | IEEE80211_TX_STAT_TX_FILTERED);
 		/* inform mac80211 about what happened with the frame */
 		switch (status & TX_STATUS_MSK) {
@@ -1936,6 +1937,8 @@ static void iwl_mvm_tx_reclaim(struct iwl_mvm *mvm, int sta_id, int tid,
 			 */
 			if (!is_flush)
 				info->flags |= IEEE80211_TX_STAT_ACK;
+			else
+				info->flags &= ~IEEE80211_TX_STAT_ACK;
 		}
 	/*
/* /*


@@ -921,6 +921,14 @@ void mwifiex_11n_rxba_sync_event(struct mwifiex_private *priv,
 	while (tlv_buf_left >= sizeof(*tlv_rxba)) {
 		tlv_type = le16_to_cpu(tlv_rxba->header.type);
 		tlv_len = le16_to_cpu(tlv_rxba->header.len);
+		if (size_add(sizeof(tlv_rxba->header), tlv_len) > tlv_buf_left) {
+			mwifiex_dbg(priv->adapter, WARN,
+				    "TLV size (%zu) overflows event_buf buf_left=%d\n",
+				    size_add(sizeof(tlv_rxba->header), tlv_len),
+				    tlv_buf_left);
+			return;
+		}
 		if (tlv_type != TLV_TYPE_RXBA_SYNC) {
 			mwifiex_dbg(priv->adapter, ERROR,
 				    "Wrong TLV id=0x%x\n", tlv_type);
@@ -929,6 +937,14 @@ void mwifiex_11n_rxba_sync_event(struct mwifiex_private *priv,
 		tlv_seq_num = le16_to_cpu(tlv_rxba->seq_num);
 		tlv_bitmap_len = le16_to_cpu(tlv_rxba->bitmap_len);
+		if (size_add(sizeof(*tlv_rxba), tlv_bitmap_len) > tlv_buf_left) {
+			mwifiex_dbg(priv->adapter, WARN,
+				    "TLV size (%zu) overflows event_buf buf_left=%d\n",
+				    size_add(sizeof(*tlv_rxba), tlv_bitmap_len),
+				    tlv_buf_left);
+			return;
+		}
 		mwifiex_dbg(priv->adapter, INFO,
 			    "%pM tid=%d seq_num=%d bitmap_len=%d\n",
 			    tlv_rxba->mac, tlv_rxba->tid, tlv_seq_num,


@@ -32,9 +32,13 @@ static void *nvme_add_user_metadata(struct request *req, void __user *ubuf,
 	if (!buf)
 		goto out;
+	if (req_op(req) == REQ_OP_DRV_OUT) {
 		ret = -EFAULT;
-	if ((req_op(req) == REQ_OP_DRV_OUT) && copy_from_user(buf, ubuf, len))
+		if (copy_from_user(buf, ubuf, len))
 			goto out_free_meta;
+	} else {
+		memset(buf, 0, len);
+	}
 	bip = bio_integrity_alloc(bio, GFP_KERNEL, 1);
 	if (IS_ERR(bip)) {


@@ -3440,7 +3440,8 @@ static const struct pci_device_id nvme_id_table[] = {
 	{ PCI_VDEVICE(INTEL, 0x0a54), /* Intel P4500/P4600 */
 		.driver_data = NVME_QUIRK_STRIPE_SIZE |
 				NVME_QUIRK_DEALLOCATE_ZEROES |
-				NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+				NVME_QUIRK_IGNORE_DEV_SUBNQN |
+				NVME_QUIRK_BOGUS_NID, },
 	{ PCI_VDEVICE(INTEL, 0x0a55), /* Dell Express Flash P4600 */
 		.driver_data = NVME_QUIRK_STRIPE_SIZE |
 				NVME_QUIRK_DEALLOCATE_ZEROES, },


@@ -643,6 +643,9 @@ static void __nvme_rdma_stop_queue(struct nvme_rdma_queue *queue)
 static void nvme_rdma_stop_queue(struct nvme_rdma_queue *queue)
 {
+	if (!test_bit(NVME_RDMA_Q_ALLOCATED, &queue->flags))
+		return;
+
 	mutex_lock(&queue->queue_lock);
 	if (test_and_clear_bit(NVME_RDMA_Q_LIVE, &queue->flags))
 		__nvme_rdma_stop_queue(queue);


@@ -337,19 +337,21 @@ void nvmet_execute_auth_send(struct nvmet_req *req)
 			 __func__, ctrl->cntlid, req->sq->qid,
 			 status, req->error_loc);
 	req->cqe->result.u64 = 0;
-	nvmet_req_complete(req, status);
 	if (req->sq->dhchap_step != NVME_AUTH_DHCHAP_MESSAGE_SUCCESS2 &&
 	    req->sq->dhchap_step != NVME_AUTH_DHCHAP_MESSAGE_FAILURE2) {
 		unsigned long auth_expire_secs = ctrl->kato ? ctrl->kato : 120;
 		mod_delayed_work(system_wq, &req->sq->auth_expired_work,
 				 auth_expire_secs * HZ);
-		return;
+		goto complete;
 	}
 	/* Final states, clear up variables */
 	nvmet_auth_sq_free(req->sq);
 	if (req->sq->dhchap_step == NVME_AUTH_DHCHAP_MESSAGE_FAILURE2)
 		nvmet_ctrl_fatal_error(ctrl);
+complete:
+	nvmet_req_complete(req, status);
 }
 static int nvmet_auth_challenge(struct nvmet_req *req, void *d, int al)
@@ -527,11 +529,12 @@ void nvmet_execute_auth_receive(struct nvmet_req *req)
 	kfree(d);
 done:
 	req->cqe->result.u64 = 0;
-	nvmet_req_complete(req, status);
 	if (req->sq->dhchap_step == NVME_AUTH_DHCHAP_MESSAGE_SUCCESS2)
 		nvmet_auth_sq_free(req->sq);
 	else if (req->sq->dhchap_step == NVME_AUTH_DHCHAP_MESSAGE_FAILURE1) {
 		nvmet_auth_sq_free(req->sq);
 		nvmet_ctrl_fatal_error(ctrl);
 	}
+	nvmet_req_complete(req, status);
 }


@@ -345,6 +345,7 @@ static void nvmet_tcp_fatal_error(struct nvmet_tcp_queue *queue)
 static void nvmet_tcp_socket_error(struct nvmet_tcp_queue *queue, int status)
 {
+	queue->rcv_state = NVMET_TCP_RECV_ERR;
 	if (status == -EPIPE || status == -ECONNRESET)
 		kernel_sock_shutdown(queue->sock, SHUT_RDWR);
 	else
@@ -871,15 +872,11 @@ static int nvmet_tcp_handle_icreq(struct nvmet_tcp_queue *queue)
 	iov.iov_len = sizeof(*icresp);
 	ret = kernel_sendmsg(queue->sock, &msg, &iov, 1, iov.iov_len);
 	if (ret < 0)
-		goto free_crypto;
+		return ret; /* queue removal will cleanup */
 	queue->state = NVMET_TCP_Q_LIVE;
 	nvmet_prepare_receive_pdu(queue);
 	return 0;
-free_crypto:
-	if (queue->hdr_digest || queue->data_digest)
-		nvmet_tcp_free_crypto(queue);
-	return ret;
 }
 static void nvmet_tcp_handle_req_failure(struct nvmet_tcp_queue *queue,
static void nvmet_tcp_handle_req_failure(struct nvmet_tcp_queue *queue, static void nvmet_tcp_handle_req_failure(struct nvmet_tcp_queue *queue,


@@ -122,16 +122,10 @@ static int phy_mdm6600_power_on(struct phy *x)
 {
 	struct phy_mdm6600 *ddata = phy_get_drvdata(x);
 	struct gpio_desc *enable_gpio = ddata->ctrl_gpios[PHY_MDM6600_ENABLE];
-	int error;
 	if (!ddata->enabled)
 		return -ENODEV;
-	error = pinctrl_pm_select_default_state(ddata->dev);
-	if (error)
-		dev_warn(ddata->dev, "%s: error with default_state: %i\n",
-			 __func__, error);
 	gpiod_set_value_cansleep(enable_gpio, 1);
 	/* Allow aggressive PM for USB, it's only needed for n_gsm port */
@@ -160,11 +154,6 @@ static int phy_mdm6600_power_off(struct phy *x)
 	gpiod_set_value_cansleep(enable_gpio, 0);
-	error = pinctrl_pm_select_sleep_state(ddata->dev);
-	if (error)
-		dev_warn(ddata->dev, "%s: error with sleep_state: %i\n",
-			 __func__, error);
 	return 0;
 }
@@ -456,6 +445,7 @@ static void phy_mdm6600_device_power_off(struct phy_mdm6600 *ddata)
 {
 	struct gpio_desc *reset_gpio =
 		ddata->ctrl_gpios[PHY_MDM6600_RESET];
+	int error;
 	ddata->enabled = false;
 	phy_mdm6600_cmd(ddata, PHY_MDM6600_CMD_BP_SHUTDOWN_REQ);
@@ -471,6 +461,17 @@ static void phy_mdm6600_device_power_off(struct phy_mdm6600 *ddata)
 	} else {
 		dev_err(ddata->dev, "Timed out powering down\n");
 	}
+
+	/*
+	 * Keep reset gpio high with padconf internal pull-up resistor to
+	 * prevent modem from waking up during deeper SoC idle states. The
+	 * gpio bank lines can have glitches if not in the always-on wkup
+	 * domain.
+	 */
+	error = pinctrl_pm_select_sleep_state(ddata->dev);
+	if (error)
+		dev_warn(ddata->dev, "%s: error with sleep_state: %i\n",
+			 __func__, error);
 }
 static void phy_mdm6600_deferred_power_on(struct work_struct *work)
@@ -571,12 +572,6 @@ static int phy_mdm6600_probe(struct platform_device *pdev)
 	ddata->dev = &pdev->dev;
 	platform_set_drvdata(pdev, ddata);
-	/* Active state selected in phy_mdm6600_power_on() */
-	error = pinctrl_pm_select_sleep_state(ddata->dev);
-	if (error)
-		dev_warn(ddata->dev, "%s: error with sleep_state: %i\n",
-			 __func__, error);
 	error = phy_mdm6600_init_lines(ddata);
 	if (error)
 		return error;
@@ -627,10 +622,12 @@ static int phy_mdm6600_probe(struct platform_device *pdev)
 	pm_runtime_put_autosuspend(ddata->dev);
 cleanup:
-	if (error < 0)
+	if (error < 0) {
 		phy_mdm6600_device_power_off(ddata);
 		pm_runtime_disable(ddata->dev);
 		pm_runtime_dont_use_autosuspend(ddata->dev);
+	}
 	return error;
 }
@@ -639,6 +636,7 @@ static int phy_mdm6600_remove(struct platform_device *pdev)
 	struct phy_mdm6600 *ddata = platform_get_drvdata(pdev);
 	struct gpio_desc *reset_gpio = ddata->ctrl_gpios[PHY_MDM6600_RESET];
+	pm_runtime_get_noresume(ddata->dev);
 	pm_runtime_dont_use_autosuspend(ddata->dev);
 	pm_runtime_put_sync(ddata->dev);
 	pm_runtime_disable(ddata->dev);


@@ -1007,20 +1007,17 @@ static int add_setting(struct pinctrl *p, struct pinctrl_dev *pctldev,
 static struct pinctrl *find_pinctrl(struct device *dev)
 {
-	struct pinctrl *entry, *p = NULL;
+	struct pinctrl *p;
 
 	mutex_lock(&pinctrl_list_mutex);
-
-	list_for_each_entry(entry, &pinctrl_list, node) {
-		if (entry->dev == dev) {
-			p = entry;
-			kref_get(&p->users);
-			break;
+	list_for_each_entry(p, &pinctrl_list, node)
+		if (p->dev == dev) {
+			mutex_unlock(&pinctrl_list_mutex);
+			return p;
 		}
-	}
-
 	mutex_unlock(&pinctrl_list_mutex);
 
-	return p;
+	return NULL;
 }
 
 static void pinctrl_free(struct pinctrl *p, bool inlist);
@@ -1129,6 +1126,7 @@ struct pinctrl *pinctrl_get(struct device *dev)
 	p = find_pinctrl(dev);
 	if (p) {
 		dev_dbg(dev, "obtain a copy of previously claimed pinctrl\n");
+		kref_get(&p->users);
 		return p;
 	}


@@ -159,8 +159,7 @@ static int surface_platform_profile_probe(struct ssam_device *sdev)
 	set_bit(PLATFORM_PROFILE_BALANCED_PERFORMANCE, tpd->handler.choices);
 	set_bit(PLATFORM_PROFILE_PERFORMANCE, tpd->handler.choices);
 
-	platform_profile_register(&tpd->handler);
-	return 0;
+	return platform_profile_register(&tpd->handler);
 }
 
 static void surface_platform_profile_remove(struct ssam_device *sdev)


@@ -531,6 +531,9 @@ static void asus_nb_wmi_quirks(struct asus_wmi_driver *driver)
 static const struct key_entry asus_nb_wmi_keymap[] = {
 	{ KE_KEY, ASUS_WMI_BRN_DOWN, { KEY_BRIGHTNESSDOWN } },
 	{ KE_KEY, ASUS_WMI_BRN_UP, { KEY_BRIGHTNESSUP } },
+	{ KE_KEY, 0x2a, { KEY_SELECTIVE_SCREENSHOT } },
+	{ KE_IGNORE, 0x2b, }, /* PrintScreen (also send via PS/2) on newer models */
+	{ KE_IGNORE, 0x2c, }, /* CapsLock (also send via PS/2) on newer models */
 	{ KE_KEY, 0x30, { KEY_VOLUMEUP } },
 	{ KE_KEY, 0x31, { KEY_VOLUMEDOWN } },
 	{ KE_KEY, 0x32, { KEY_MUTE } },


@@ -3268,7 +3268,6 @@ static void asus_wmi_handle_event_code(int code, struct asus_wmi *asus)
 {
 	unsigned int key_value = 1;
 	bool autorelease = 1;
-	int orig_code = code;
 
 	if (asus->driver->key_filter) {
 		asus->driver->key_filter(asus->driver, &code, &key_value,
@@ -3277,17 +3276,11 @@ static void asus_wmi_handle_event_code(int code, struct asus_wmi *asus)
 			return;
 	}
 
-	if (code >= NOTIFY_BRNUP_MIN && code <= NOTIFY_BRNUP_MAX)
-		code = ASUS_WMI_BRN_UP;
-	else if (code >= NOTIFY_BRNDOWN_MIN && code <= NOTIFY_BRNDOWN_MAX)
-		code = ASUS_WMI_BRN_DOWN;
-
-	if (code == ASUS_WMI_BRN_DOWN || code == ASUS_WMI_BRN_UP) {
-		if (acpi_video_get_backlight_type() == acpi_backlight_vendor) {
-			asus_wmi_backlight_notify(asus, orig_code);
-			return;
-		}
-	}
+	if (acpi_video_get_backlight_type() == acpi_backlight_vendor &&
+	    code >= NOTIFY_BRNUP_MIN && code <= NOTIFY_BRNDOWN_MAX) {
+		asus_wmi_backlight_notify(asus, code);
+		return;
+	}
 
 	if (code == NOTIFY_KBD_BRTUP) {
 		kbd_led_set_by_kbd(asus, asus->kbd_led_wk + 1);


@@ -18,7 +18,7 @@
 #include <linux/i8042.h>
 
 #define ASUS_WMI_KEY_IGNORE (-1)
-#define ASUS_WMI_BRN_DOWN	0x20
+#define ASUS_WMI_BRN_DOWN	0x2e
 #define ASUS_WMI_BRN_UP		0x2f
 
 struct module;


@@ -153,7 +153,7 @@ show_uncore_data(initial_max_freq_khz);
 
 static int create_attr_group(struct uncore_data *data, char *name)
 {
-	int ret, index = 0;
+	int ret, freq, index = 0;
 
 	init_attribute_rw(max_freq_khz);
 	init_attribute_rw(min_freq_khz);
@@ -165,7 +165,11 @@ static int create_attr_group(struct uncore_data *data, char *name)
 	data->uncore_attrs[index++] = &data->min_freq_khz_dev_attr.attr;
 	data->uncore_attrs[index++] = &data->initial_min_freq_khz_dev_attr.attr;
 	data->uncore_attrs[index++] = &data->initial_max_freq_khz_dev_attr.attr;
-	data->uncore_attrs[index++] = &data->current_freq_khz_dev_attr.attr;
+
+	ret = uncore_read_freq(data, &freq);
+	if (!ret)
+		data->uncore_attrs[index++] = &data->current_freq_khz_dev_attr.attr;
+
 	data->uncore_attrs[index] = NULL;
 
 	data->uncore_attr_group.name = name;


@@ -740,6 +740,21 @@ static const struct ts_dmi_data pipo_w11_data = {
 	.properties	= pipo_w11_props,
 };
 
+static const struct property_entry positivo_c4128b_props[] = {
+	PROPERTY_ENTRY_U32("touchscreen-min-x", 4),
+	PROPERTY_ENTRY_U32("touchscreen-min-y", 13),
+	PROPERTY_ENTRY_U32("touchscreen-size-x", 1915),
+	PROPERTY_ENTRY_U32("touchscreen-size-y", 1269),
+	PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-positivo-c4128b.fw"),
+	PROPERTY_ENTRY_U32("silead,max-fingers", 10),
+	{ }
+};
+
+static const struct ts_dmi_data positivo_c4128b_data = {
+	.acpi_name	= "MSSL1680:00",
+	.properties	= positivo_c4128b_props,
+};
+
 static const struct property_entry pov_mobii_wintab_p800w_v20_props[] = {
 	PROPERTY_ENTRY_U32("touchscreen-min-x", 32),
 	PROPERTY_ENTRY_U32("touchscreen-min-y", 16),
@@ -1457,6 +1472,14 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
 			DMI_MATCH(DMI_BIOS_VERSION, "MOMO.G.WI71C.MABMRBA02"),
 		},
 	},
+	{
+		/* Positivo C4128B */
+		.driver_data = (void *)&positivo_c4128b_data,
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Positivo Tecnologia SA"),
+			DMI_MATCH(DMI_PRODUCT_NAME, "C4128B-1"),
+		},
+	},
 	{
 		/* Point of View mobii wintab p800w (v2.0) */
 		.driver_data = (void *)&pov_mobii_wintab_p800w_v20_data,


@@ -299,7 +299,7 @@ config NVMEM_REBOOT_MODE
 
 config POWER_MLXBF
 	tristate "Mellanox BlueField power handling driver"
-	depends on (GPIO_MLXBF2 && ACPI)
+	depends on (GPIO_MLXBF2 || GPIO_MLXBF3) && ACPI
 	help
 	  This driver supports reset or low power mode handling for Mellanox BlueField.


@@ -5725,15 +5725,11 @@ regulator_register(struct device *dev,
 	mutex_lock(&regulator_list_mutex);
 	regulator_ena_gpio_free(rdev);
 	mutex_unlock(&regulator_list_mutex);
-	put_device(&rdev->dev);
-	rdev = NULL;
 clean:
 	if (dangling_of_gpiod)
 		gpiod_put(config->ena_gpiod);
-	if (rdev && rdev->dev.of_node)
-		of_node_put(rdev->dev.of_node);
-	kfree(rdev);
 	kfree(config);
+	put_device(&rdev->dev);
 rinse:
 	if (dangling_cfg_gpiod)
 		gpiod_put(cfg->ena_gpiod);


@@ -233,17 +233,19 @@ struct subchannel *css_alloc_subchannel(struct subchannel_id schid,
 	 */
 	ret = dma_set_coherent_mask(&sch->dev, DMA_BIT_MASK(31));
 	if (ret)
-		goto err;
+		goto err_lock;
 	/*
 	 * But we don't have such restrictions imposed on the stuff that
 	 * is handled by the streaming API.
 	 */
 	ret = dma_set_mask(&sch->dev, DMA_BIT_MASK(64));
 	if (ret)
-		goto err;
+		goto err_lock;
 	return sch;
 
+err_lock:
+	kfree(sch->lock);
 err:
 	kfree(sch);
 	return ERR_PTR(ret);


@@ -32,6 +32,7 @@
 #include "8250.h"
 
 #define DEFAULT_CLK_SPEED	48000000
+#define OMAP_UART_REGSHIFT	2
 
 #define UART_ERRATA_i202_MDR1_ACCESS	(1 << 0)
 #define OMAP_UART_WER_HAS_TX_WAKEUP	(1 << 1)
@@ -109,6 +110,7 @@
 #define UART_OMAP_RX_LVL	0x19
 
 struct omap8250_priv {
+	void __iomem *membase;
 	int line;
 	u8 habit;
 	u8 mdr1;
@@ -152,9 +154,9 @@ static void omap_8250_rx_dma_flush(struct uart_8250_port *p);
 static inline void omap_8250_rx_dma_flush(struct uart_8250_port *p) { }
 #endif
 
-static u32 uart_read(struct uart_8250_port *up, u32 reg)
+static u32 uart_read(struct omap8250_priv *priv, u32 reg)
 {
-	return readl(up->port.membase + (reg << up->port.regshift));
+	return readl(priv->membase + (reg << OMAP_UART_REGSHIFT));
 }
 
 /*
@@ -538,7 +540,7 @@ static void omap_serial_fill_features_erratas(struct uart_8250_port *up,
 	u32 mvr, scheme;
 	u16 revision, major, minor;
 
-	mvr = uart_read(up, UART_OMAP_MVER);
+	mvr = uart_read(priv, UART_OMAP_MVER);
 
 	/* Check revision register scheme */
 	scheme = mvr >> OMAP_UART_MVR_SCHEME_SHIFT;
@@ -1319,7 +1321,7 @@ static int omap8250_probe(struct platform_device *pdev)
 			  UPF_HARD_FLOW;
 	up.port.private_data = priv;
 
-	up.port.regshift = 2;
+	up.port.regshift = OMAP_UART_REGSHIFT;
 	up.port.fifosize = 64;
 	up.tx_loadsz = 64;
 	up.capabilities = UART_CAP_FIFO;
@@ -1381,6 +1383,8 @@ static int omap8250_probe(struct platform_device *pdev)
 			 DEFAULT_CLK_SPEED);
 	}
 
+	priv->membase = membase;
+	priv->line = -ENODEV;
 	priv->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
 	priv->calc_latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
 	cpu_latency_qos_add_request(&priv->pm_qos_request, priv->latency);
@@ -1388,6 +1392,8 @@ static int omap8250_probe(struct platform_device *pdev)
 
 	spin_lock_init(&priv->rx_dma_lock);
 
+	platform_set_drvdata(pdev, priv);
+
 	device_init_wakeup(&pdev->dev, true);
 	pm_runtime_enable(&pdev->dev);
 	pm_runtime_use_autosuspend(&pdev->dev);
@@ -1449,7 +1455,6 @@ static int omap8250_probe(struct platform_device *pdev)
 		goto err;
 	}
 	priv->line = ret;
-	platform_set_drvdata(pdev, priv);
 	pm_runtime_mark_last_busy(&pdev->dev);
 	pm_runtime_put_autosuspend(&pdev->dev);
 	return 0;
@@ -1471,17 +1476,17 @@ static int omap8250_remove(struct platform_device *pdev)
 	if (err)
 		return err;
 
+	serial8250_unregister_port(priv->line);
+	priv->line = -ENODEV;
 	pm_runtime_dont_use_autosuspend(&pdev->dev);
 	pm_runtime_put_sync(&pdev->dev);
 	flush_work(&priv->qos_work);
 	pm_runtime_disable(&pdev->dev);
-	serial8250_unregister_port(priv->line);
 	cpu_latency_qos_remove_request(&priv->pm_qos_request);
 	device_init_wakeup(&pdev->dev, false);
 	return 0;
 }
 
-#ifdef CONFIG_PM_SLEEP
 static int omap8250_prepare(struct device *dev)
 {
 	struct omap8250_priv *priv = dev_get_drvdata(dev);
@@ -1505,7 +1510,7 @@ static int omap8250_suspend(struct device *dev)
 {
 	struct omap8250_priv *priv = dev_get_drvdata(dev);
 	struct uart_8250_port *up = serial8250_get_port(priv->line);
-	int err;
+	int err = 0;
 
 	serial8250_suspend_port(priv->line);
 
@@ -1515,6 +1520,7 @@ static int omap8250_suspend(struct device *dev)
 	if (!device_may_wakeup(dev))
 		priv->wer = 0;
 	serial_out(up, UART_OMAP_WER, priv->wer);
+	if (uart_console(&up->port) && console_suspend_enabled)
 		err = pm_runtime_force_suspend(dev);
 	flush_work(&priv->qos_work);
 
@@ -1524,11 +1530,15 @@ static int omap8250_suspend(struct device *dev)
 static int omap8250_resume(struct device *dev)
 {
 	struct omap8250_priv *priv = dev_get_drvdata(dev);
+	struct uart_8250_port *up = serial8250_get_port(priv->line);
 	int err;
 
-	err = pm_runtime_force_resume(dev);
-	if (err)
-		return err;
+	if (uart_console(&up->port) && console_suspend_enabled) {
+		err = pm_runtime_force_resume(dev);
+		if (err)
+			return err;
+	}
 
 	serial8250_resume_port(priv->line);
 	/* Paired with pm_runtime_resume_and_get() in omap8250_suspend() */
 	pm_runtime_mark_last_busy(dev);
@@ -1536,12 +1546,7 @@ static int omap8250_resume(struct device *dev)
 
 	return 0;
 }
-#else
-#define omap8250_prepare NULL
-#define omap8250_complete NULL
-#endif
 
-#ifdef CONFIG_PM
 static int omap8250_lost_context(struct uart_8250_port *up)
 {
 	u32 val;
@@ -1557,11 +1562,15 @@ static int omap8250_lost_context(struct uart_8250_port *up)
 	return 0;
 }
 
+static void uart_write(struct omap8250_priv *priv, u32 reg, u32 val)
+{
+	writel(val, priv->membase + (reg << OMAP_UART_REGSHIFT));
+}
+
 /* TODO: in future, this should happen via API in drivers/reset/ */
 static int omap8250_soft_reset(struct device *dev)
 {
 	struct omap8250_priv *priv = dev_get_drvdata(dev);
-	struct uart_8250_port *up = serial8250_get_port(priv->line);
 	int timeout = 100;
 	int sysc;
 	int syss;
@@ -1575,20 +1584,20 @@ static int omap8250_soft_reset(struct device *dev)
 	 * needing omap8250_soft_reset() quirk. Do it in two writes as
 	 * recommended in the comment for omap8250_update_scr().
 	 */
-	serial_out(up, UART_OMAP_SCR, OMAP_UART_SCR_DMAMODE_1);
-	serial_out(up, UART_OMAP_SCR,
+	uart_write(priv, UART_OMAP_SCR, OMAP_UART_SCR_DMAMODE_1);
+	uart_write(priv, UART_OMAP_SCR,
 		   OMAP_UART_SCR_DMAMODE_1 | OMAP_UART_SCR_DMAMODE_CTL);
 
-	sysc = serial_in(up, UART_OMAP_SYSC);
+	sysc = uart_read(priv, UART_OMAP_SYSC);
 
 	/* softreset the UART */
 	sysc |= OMAP_UART_SYSC_SOFTRESET;
-	serial_out(up, UART_OMAP_SYSC, sysc);
+	uart_write(priv, UART_OMAP_SYSC, sysc);
 
 	/* By experiments, 1us enough for reset complete on AM335x */
 	do {
 		udelay(1);
-		syss = serial_in(up, UART_OMAP_SYSS);
+		syss = uart_read(priv, UART_OMAP_SYSS);
 	} while (--timeout && !(syss & OMAP_UART_SYSS_RESETDONE));
 
 	if (!timeout) {
@@ -1602,23 +1611,10 @@ static int omap8250_soft_reset(struct device *dev)
 static int omap8250_runtime_suspend(struct device *dev)
 {
 	struct omap8250_priv *priv = dev_get_drvdata(dev);
-	struct uart_8250_port *up;
+	struct uart_8250_port *up = NULL;
 
-	/* In case runtime-pm tries this before we are setup */
-	if (!priv)
-		return 0;
-
-	up = serial8250_get_port(priv->line);
-	/*
-	 * When using 'no_console_suspend', the console UART must not be
-	 * suspended. Since driver suspend is managed by runtime suspend,
-	 * preventing runtime suspend (by returning error) will keep device
-	 * active during suspend.
-	 */
-	if (priv->is_suspending && !console_suspend_enabled) {
-		if (uart_console(&up->port))
-			return -EBUSY;
-	}
+	if (priv->line >= 0)
+		up = serial8250_get_port(priv->line);
 
 	if (priv->habit & UART_ERRATA_CLOCK_DISABLE) {
 		int ret;
@@ -1627,13 +1623,15 @@ static int omap8250_runtime_suspend(struct device *dev)
 		if (ret)
 			return ret;
 
-		/* Restore to UART mode after reset (for wakeup) */
-		omap8250_update_mdr1(up, priv);
-		/* Restore wakeup enable register */
-		serial_out(up, UART_OMAP_WER, priv->wer);
+		if (up) {
+			/* Restore to UART mode after reset (for wakeup) */
+			omap8250_update_mdr1(up, priv);
+			/* Restore wakeup enable register */
+			serial_out(up, UART_OMAP_WER, priv->wer);
+		}
 	}
 
-	if (up->dma && up->dma->rxchan)
+	if (up && up->dma && up->dma->rxchan)
 		omap_8250_rx_dma_flush(up);
 
 	priv->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
@@ -1645,25 +1643,21 @@ static int omap8250_runtime_suspend(struct device *dev)
 static int omap8250_runtime_resume(struct device *dev)
 {
 	struct omap8250_priv *priv = dev_get_drvdata(dev);
-	struct uart_8250_port *up;
+	struct uart_8250_port *up = NULL;
 
-	/* In case runtime-pm tries this before we are setup */
-	if (!priv)
-		return 0;
-
-	up = serial8250_get_port(priv->line);
+	if (priv->line >= 0)
+		up = serial8250_get_port(priv->line);
 
-	if (omap8250_lost_context(up))
+	if (up && omap8250_lost_context(up))
 		omap8250_restore_regs(up);
 
-	if (up->dma && up->dma->rxchan && !(priv->habit & UART_HAS_EFR2))
+	if (up && up->dma && up->dma->rxchan && !(priv->habit & UART_HAS_EFR2))
 		omap_8250_rx_dma(up);
 
 	priv->latency = priv->calc_latency;
 	schedule_work(&priv->qos_work);
 
 	return 0;
 }
-#endif
 
 #ifdef CONFIG_SERIAL_8250_OMAP_TTYO_FIXUP
 static int __init omap8250_console_fixup(void)
@@ -1706,17 +1700,17 @@ console_initcall(omap8250_console_fixup);
 #endif
 
 static const struct dev_pm_ops omap8250_dev_pm_ops = {
-	SET_SYSTEM_SLEEP_PM_OPS(omap8250_suspend, omap8250_resume)
-	SET_RUNTIME_PM_OPS(omap8250_runtime_suspend,
+	SYSTEM_SLEEP_PM_OPS(omap8250_suspend, omap8250_resume)
+	RUNTIME_PM_OPS(omap8250_runtime_suspend,
 			   omap8250_runtime_resume, NULL)
-	.prepare        = omap8250_prepare,
-	.complete       = omap8250_complete,
+	.prepare        = pm_sleep_ptr(omap8250_prepare),
+	.complete       = pm_sleep_ptr(omap8250_complete),
 };
 
 static struct platform_driver omap8250_platform_driver = {
 	.driver = {
 		.name		= "omap8250",
-		.pm		= &omap8250_dev_pm_ops,
+		.pm		= pm_ptr(&omap8250_dev_pm_ops),
 		.of_match_table	= omap8250_dt_ids,
 	},
 	.probe			= omap8250_probe,


@@ -48,8 +48,6 @@ static struct lock_class_key port_lock_key;
  */
 #define RS485_MAX_RTS_DELAY	100 /* msecs */
 
-static void uart_change_speed(struct tty_struct *tty, struct uart_state *state,
-			      const struct ktermios *old_termios);
 static void uart_wait_until_sent(struct tty_struct *tty, int timeout);
 static void uart_change_pm(struct uart_state *state,
 			   enum uart_pm_state pm_state);
@@ -177,6 +175,52 @@ static void uart_port_dtr_rts(struct uart_port *uport, int raise)
 		uart_clear_mctrl(uport, TIOCM_DTR | TIOCM_RTS);
 }
 
+/* Caller holds port mutex */
+static void uart_change_line_settings(struct tty_struct *tty, struct uart_state *state,
+				      const struct ktermios *old_termios)
+{
+	struct uart_port *uport = uart_port_check(state);
+	struct ktermios *termios;
+	int hw_stopped;
+
+	/*
+	 * If we have no tty, termios, or the port does not exist,
+	 * then we can't set the parameters for this port.
+	 */
+	if (!tty || uport->type == PORT_UNKNOWN)
+		return;
+
+	termios = &tty->termios;
+	uport->ops->set_termios(uport, termios, old_termios);
+
+	/*
+	 * Set modem status enables based on termios cflag
+	 */
+	spin_lock_irq(&uport->lock);
+	if (termios->c_cflag & CRTSCTS)
+		uport->status |= UPSTAT_CTS_ENABLE;
+	else
+		uport->status &= ~UPSTAT_CTS_ENABLE;
+
+	if (termios->c_cflag & CLOCAL)
+		uport->status &= ~UPSTAT_DCD_ENABLE;
+	else
+		uport->status |= UPSTAT_DCD_ENABLE;
+
+	/* reset sw-assisted CTS flow control based on (possibly) new mode */
+	hw_stopped = uport->hw_stopped;
+	uport->hw_stopped = uart_softcts_mode(uport) &&
+			    !(uport->ops->get_mctrl(uport) & TIOCM_CTS);
+	if (uport->hw_stopped) {
+		if (!hw_stopped)
+			uport->ops->stop_tx(uport);
+	} else {
+		if (hw_stopped)
+			__uart_start(tty);
+	}
+	spin_unlock_irq(&uport->lock);
+}
+
 /*
  * Startup the port.  This will be called once per open.  All calls
  * will be serialised by the per-port mutex.
@@ -232,7 +276,7 @@ static int uart_port_startup(struct tty_struct *tty, struct uart_state *state,
 	/*
 	 * Initialise the hardware port settings.
 	 */
-	uart_change_speed(tty, state, NULL);
+	uart_change_line_settings(tty, state, NULL);
 
 	/*
 	 * Setup the RTS and DTR signals once the
@@ -485,52 +529,6 @@ uart_get_divisor(struct uart_port *port, unsigned int baud)
 }
 EXPORT_SYMBOL(uart_get_divisor);
 
-/* Caller holds port mutex */
-static void uart_change_speed(struct tty_struct *tty, struct uart_state *state,
-			      const struct ktermios *old_termios)
-{
-	struct uart_port *uport = uart_port_check(state);
-	struct ktermios *termios;
-	int hw_stopped;
-
-	/*
-	 * If we have no tty, termios, or the port does not exist,
-	 * then we can't set the parameters for this port.
-	 */
-	if (!tty || uport->type == PORT_UNKNOWN)
-		return;
-
-	termios = &tty->termios;
-	uport->ops->set_termios(uport, termios, old_termios);
-
-	/*
-	 * Set modem status enables based on termios cflag
-	 */
-	spin_lock_irq(&uport->lock);
-	if (termios->c_cflag & CRTSCTS)
-		uport->status |= UPSTAT_CTS_ENABLE;
-	else
-		uport->status &= ~UPSTAT_CTS_ENABLE;
-
-	if (termios->c_cflag & CLOCAL)
-		uport->status &= ~UPSTAT_DCD_ENABLE;
-	else
-		uport->status |= UPSTAT_DCD_ENABLE;
-
-	/* reset sw-assisted CTS flow control based on (possibly) new mode */
-	hw_stopped = uport->hw_stopped;
-	uport->hw_stopped = uart_softcts_mode(uport) &&
-			    !(uport->ops->get_mctrl(uport) & TIOCM_CTS);
-	if (uport->hw_stopped) {
-		if (!hw_stopped)
-			uport->ops->stop_tx(uport);
-	} else {
-		if (hw_stopped)
-			__uart_start(tty);
-	}
-	spin_unlock_irq(&uport->lock);
-}
-
 static int uart_put_char(struct tty_struct *tty, unsigned char c)
 {
 	struct uart_state *state = tty->driver_data;
@@ -994,7 +992,7 @@ static int uart_set_info(struct tty_struct *tty, struct tty_port *port,
 				       current->comm,
 				       tty_name(port->tty));
 			}
-			uart_change_speed(tty, state, NULL);
+			uart_change_line_settings(tty, state, NULL);
 		}
 	} else {
 		retval = uart_startup(tty, state, 1);
@@ -1389,12 +1387,18 @@ static void uart_set_rs485_termination(struct uart_port *port,
 static int uart_rs485_config(struct uart_port *port)
 {
 	struct serial_rs485 *rs485 = &port->rs485;
+	unsigned long flags;
 	int ret;
 
+	if (!(rs485->flags & SER_RS485_ENABLED))
+		return 0;
+
 	uart_sanitize_serial_rs485(port, rs485);
 	uart_set_rs485_termination(port, rs485);
+
+	spin_lock_irqsave(&port->lock, flags);
 	ret = port->rs485_config(port, NULL, rs485);
+	spin_unlock_irqrestore(&port->lock, flags);
 	if (ret)
 		memset(rs485, 0, sizeof(*rs485));
 
@@ -1656,7 +1660,7 @@ static void uart_set_termios(struct tty_struct *tty,
 		goto out;
 	}
 
-	uart_change_speed(tty, state, old_termios);
+	uart_change_line_settings(tty, state, old_termios);
 	/* reload cflag from termios; port driver may have overridden flags */
 	cflag = tty->termios.c_cflag;
 
@@ -2456,12 +2460,11 @@ int uart_resume_port(struct uart_driver *drv, struct uart_port *uport)
 			ret = ops->startup(uport);
 			if (ret == 0) {
 				if (tty)
-					uart_change_speed(tty, state, NULL);
+					uart_change_line_settings(tty, state, NULL);
+				uart_rs485_config(uport);
 				spin_lock_irq(&uport->lock);
 				if (!(uport->rs485.flags & SER_RS485_ENABLED))
 					ops->set_mctrl(uport, uport->mctrl);
-				else
-					uart_rs485_config(uport);
 				ops->start_tx(uport);
 				spin_unlock_irq(&uport->lock);
 				tty_port_set_initialized(port, 1);
@@ -2570,10 +2573,10 @@ uart_configure_port(struct uart_driver *drv, struct uart_state *state,
 		port->mctrl &= TIOCM_DTR;
 		if (!(port->rs485.flags & SER_RS485_ENABLED))
 			port->ops->set_mctrl(port, port->mctrl);
-		else
-			uart_rs485_config(port);
 		spin_unlock_irqrestore(&port->lock, flags);
 
+		uart_rs485_config(port);
+
 		/*
 		 * If this driver supports console, and it hasn't been
 		 * successfully registered yet, try to re-register it.


@@ -329,6 +329,7 @@ static struct platform_driver onboard_hub_driver = {
 
 /************************** USB driver **************************/
 
+#define VENDOR_ID_GENESYS	0x05e3
 #define VENDOR_ID_MICROCHIP	0x0424
 #define VENDOR_ID_REALTEK	0x0bda
 #define VENDOR_ID_TI		0x0451
@@ -405,6 +406,10 @@ static void onboard_hub_usbdev_disconnect(struct usb_device *udev)
 }
 
 static const struct usb_device_id onboard_hub_id_table[] = {
+	{ USB_DEVICE(VENDOR_ID_GENESYS, 0x0608) }, /* Genesys Logic GL850G USB 2.0 */
+	{ USB_DEVICE(VENDOR_ID_GENESYS, 0x0610) }, /* Genesys Logic GL852G USB 2.0 */
+	{ USB_DEVICE(VENDOR_ID_GENESYS, 0x0620) }, /* Genesys Logic GL3523 USB 3.1 */
+	{ USB_DEVICE(VENDOR_ID_MICROCHIP, 0x2412) }, /* USB2412 USB 2.0 */
 	{ USB_DEVICE(VENDOR_ID_MICROCHIP, 0x2514) }, /* USB2514B USB 2.0 */
 	{ USB_DEVICE(VENDOR_ID_MICROCHIP, 0x2517) }, /* USB2517 USB 2.0 */
 	{ USB_DEVICE(VENDOR_ID_REALTEK, 0x0411) }, /* RTS5411 USB 3.1 */


@@ -22,11 +22,23 @@ static const struct onboard_hub_pdata ti_tusb8041_data = {
 	.reset_us = 3000,
 };
 
+static const struct onboard_hub_pdata genesys_gl850g_data = {
+	.reset_us = 3,
+};
+
+static const struct onboard_hub_pdata genesys_gl852g_data = {
+	.reset_us = 50,
+};
+
 static const struct of_device_id onboard_hub_match[] = {
+	{ .compatible = "usb424,2412", .data = &microchip_usb424_data, },
 	{ .compatible = "usb424,2514", .data = &microchip_usb424_data, },
 	{ .compatible = "usb424,2517", .data = &microchip_usb424_data, },
 	{ .compatible = "usb451,8140", .data = &ti_tusb8041_data, },
 	{ .compatible = "usb451,8142", .data = &ti_tusb8041_data, },
+	{ .compatible = "usb5e3,608", .data = &genesys_gl850g_data, },
+	{ .compatible = "usb5e3,610", .data = &genesys_gl852g_data, },
+	{ .compatible = "usb5e3,620", .data = &genesys_gl852g_data, },
 	{ .compatible = "usbbda,411", .data = &realtek_rts5411_data, },
 	{ .compatible = "usbbda,5411", .data = &realtek_rts5411_data, },
 	{ .compatible = "usbbda,414", .data = &realtek_rts5411_data, },


@ -203,6 +203,9 @@ static void option_instat_callback(struct urb *urb);
#define DELL_PRODUCT_5829E_ESIM 0x81e4 #define DELL_PRODUCT_5829E_ESIM 0x81e4
#define DELL_PRODUCT_5829E 0x81e6 #define DELL_PRODUCT_5829E 0x81e6
#define DELL_PRODUCT_FM101R 0x8213
#define DELL_PRODUCT_FM101R_ESIM 0x8215
#define KYOCERA_VENDOR_ID 0x0c88 #define KYOCERA_VENDOR_ID 0x0c88
#define KYOCERA_PRODUCT_KPC650 0x17da #define KYOCERA_PRODUCT_KPC650 0x17da
#define KYOCERA_PRODUCT_KPC680 0x180a #define KYOCERA_PRODUCT_KPC680 0x180a
@ -1108,6 +1111,8 @@ static const struct usb_device_id option_ids[] = {
.driver_info = RSVD(0) | RSVD(6) }, .driver_info = RSVD(0) | RSVD(6) },
{ USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5829E_ESIM), { USB_DEVICE(DELL_VENDOR_ID, DELL_PRODUCT_5829E_ESIM),
.driver_info = RSVD(0) | RSVD(6) }, .driver_info = RSVD(0) | RSVD(6) },
{ USB_DEVICE_INTERFACE_CLASS(DELL_VENDOR_ID, DELL_PRODUCT_FM101R, 0xff) },
{ USB_DEVICE_INTERFACE_CLASS(DELL_VENDOR_ID, DELL_PRODUCT_FM101R_ESIM, 0xff) },
{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_E100A) }, /* ADU-E100, ADU-310 */
{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_500A) },
{ USB_DEVICE(ANYDATA_VENDOR_ID, ANYDATA_PRODUCT_ADU_620UW) },
@@ -1290,6 +1295,7 @@ static const struct usb_device_id option_ids[] = {
.driver_info = NCTRL(0) | RSVD(3) },
{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1033, 0xff), /* Telit LE910C1-EUX (ECM) */
.driver_info = NCTRL(0) },
{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1035, 0xff) }, /* Telit LE910C4-WWX (ECM) */
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG0),
.driver_info = RSVD(0) | RSVD(1) | NCTRL(2) | RSVD(3) },
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG1),
@@ -2262,6 +2268,7 @@ static const struct usb_device_id option_ids[] = {
{ USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1406, 0xff) }, /* GosunCn GM500 ECM/NCM */
{ USB_DEVICE_AND_INTERFACE_INFO(OPPO_VENDOR_ID, OPPO_PRODUCT_R11, 0xff, 0xff, 0x30) },
{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x30) },
{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x40) },
{ USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0, 0) },
{ USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, TOZED_PRODUCT_LT70C, 0xff, 0, 0) },
{ } /* Terminating entry */


@@ -563,18 +563,30 @@ noinline int btrfs_cow_block(struct btrfs_trans_handle *trans,
	u64 search_start;
	int ret;

	if (unlikely(test_bit(BTRFS_ROOT_DELETING, &root->state))) {
		btrfs_abort_transaction(trans, -EUCLEAN);
		btrfs_crit(fs_info,
			   "attempt to COW block %llu on root %llu that is being deleted",
			   buf->start, btrfs_root_id(root));
		return -EUCLEAN;
	}

	/*
	 * COWing must happen through a running transaction, which always
	 * matches the current fs generation (it's a transaction with a state
	 * less than TRANS_STATE_UNBLOCKED). If it doesn't, then turn the fs
	 * into error state to prevent the commit of any transaction.
	 */
	if (unlikely(trans->transaction != fs_info->running_transaction ||
		     trans->transid != fs_info->generation)) {
		btrfs_abort_transaction(trans, -EUCLEAN);
		btrfs_crit(fs_info,
"unexpected transaction when attempting to COW block %llu on root %llu, transaction %llu running transaction %llu fs generation %llu",
			   buf->start, btrfs_root_id(root), trans->transid,
			   fs_info->running_transaction->transid,
			   fs_info->generation);
		return -EUCLEAN;
	}

	if (!should_cow_block(trans, root, buf)) {
		*cow_ret = buf;
@@ -686,8 +698,22 @@ int btrfs_realloc_node(struct btrfs_trans_handle *trans,
	int progress_passed = 0;
	struct btrfs_disk_key disk_key;

	/*
	 * COWing must happen through a running transaction, which always
	 * matches the current fs generation (it's a transaction with a state
	 * less than TRANS_STATE_UNBLOCKED). If it doesn't, then turn the fs
	 * into error state to prevent the commit of any transaction.
	 */
	if (unlikely(trans->transaction != fs_info->running_transaction ||
		     trans->transid != fs_info->generation)) {
		btrfs_abort_transaction(trans, -EUCLEAN);
		btrfs_crit(fs_info,
"unexpected transaction when attempting to reallocate parent %llu for root %llu, transaction %llu running transaction %llu fs generation %llu",
			   parent->start, btrfs_root_id(root), trans->transid,
			   fs_info->running_transaction->transid,
			   fs_info->generation);
		return -EUCLEAN;
	}

	parent_nritems = btrfs_header_nritems(parent);
	blocksize = fs_info->nodesize;
