Merge 5.10.180 into android12-5.10-lts

Changes in 5.10.180
	seccomp: Move copy_seccomp() to no failure path.
	counter: 104-quad-8: Fix race condition between FLAG and CNTR reads
	KVM: arm64: Fix buffer overflow in kvm_arm_set_fw_reg()
	wifi: brcmfmac: slab-out-of-bounds read in brcmf_get_assoc_ies()
	drm/fb-helper: set x/yres_virtual in drm_fb_helper_check_var
	bluetooth: Perform careful capability checks in hci_sock_ioctl()
	x86/fpu: Prevent FPU state corruption
	USB: serial: option: add UNISOC vendor and TOZED LT70C product
	driver core: Don't require dynamic_debug for initcall_debug probe timing
	iio: adc: palmas_gpadc: fix NULL dereference on rmmod
	ASoC: Intel: bytcr_rt5640: Add quirk for the Acer Iconia One 7 B1-750
	asm-generic/io.h: suppress endianness warnings for readq() and writeq()
	wireguard: timers: cast enum limits members to int in prints
	PCI: pciehp: Fix AB-BA deadlock between reset_lock and device_lock
	PCI: qcom: Fix the incorrect register usage in v2.7.0 config
	USB: dwc3: fix runtime pm imbalance on probe errors
	USB: dwc3: fix runtime pm imbalance on unbind
	hwmon: (k10temp) Check range scale when CUR_TEMP register is read-write
	hwmon: (adt7475) Use device_property APIs when configuring polarity
	posix-cpu-timers: Implement the missing timer_wait_running callback
	perf sched: Cast PTHREAD_STACK_MIN to int as it may turn into sysconf(__SC_THREAD_STACK_MIN_VALUE)
	blk-mq: release crypto keyslot before reporting I/O complete
	blk-crypto: make blk_crypto_evict_key() return void
	blk-crypto: make blk_crypto_evict_key() more robust
	ext4: use ext4_journal_start/stop for fast commit transactions
	staging: iio: resolver: ads1210: fix config mode
	xhci: fix debugfs register accesses while suspended
	tick/nohz: Fix cpu_is_hotpluggable() by checking with nohz subsystem
	MIPS: fw: Allow firmware to pass a empty env
	ipmi:ssif: Add send_retries increment
	ipmi: fix SSIF not responding under certain cond.
	kheaders: Use array declaration instead of char
	pwm: meson: Fix axg ao mux parents
	pwm: meson: Fix g12a ao clk81 name
	ring-buffer: Sync IRQ works before buffer destruction
	crypto: api - Demote BUG_ON() in crypto_unregister_alg() to a WARN_ON()
	crypto: safexcel - Cleanup ring IRQ workqueues on load failure
	rcu: Avoid stack overflow due to __rcu_irq_enter_check_tick() being kprobe-ed
	reiserfs: Add security prefix to xattr name in reiserfs_security_write()
	KVM: nVMX: Emulate NOPs in L2, and PAUSE if it's not intercepted
	relayfs: fix out-of-bounds access in relay_file_read
	writeback, cgroup: fix null-ptr-deref write in bdi_split_work_to_wbs
	i2c: omap: Fix standard mode false ACK readings
	iommu/amd: Fix "Guest Virtual APIC Table Root Pointer" configuration in IRTE
	Revert "ubifs: dirty_cow_znode: Fix memleak in error handling path"
	ubifs: Fix memleak when insert_old_idx() failed
	ubi: Fix return value overwrite issue in try_write_vid_and_data()
	ubifs: Free memory for tmpfile name
	sound/oss/dmasound: fix build when drivers are mixed =y/=m
	parisc: Fix argument pointer in real64_call_asm()
	nilfs2: do not write dirty data after degenerating to read-only
	nilfs2: fix infinite loop in nilfs_mdt_get_block()
	md/raid10: fix null-ptr-deref in raid10_sync_request
	mailbox: zynqmp: Fix IPI isr handling
	mailbox: zynqmp: Fix typo in IPI documentation
	wifi: rtl8xxxu: RTL8192EU always needs full init
	clk: rockchip: rk3399: allow clk_cifout to force clk_cifout_src to reparent
	rcu: Fix missing TICK_DEP_MASK_RCU_EXP dependency check
	selftests/resctrl: Return NULL if malloc_and_init_memory() did not alloc mem
	selftests/resctrl: Check for return value after write_schemata()
	selinux: fix Makefile dependencies of flask.h
	selinux: ensure av_permissions.h is built when needed
	tpm, tpm_tis: Do not skip reset of original interrupt vector
	tpm, tpm_tis: Claim locality before writing TPM_INT_ENABLE register
	tpm, tpm_tis: Disable interrupts if tpm_tis_probe_irq() failed
	tpm, tpm_tis: Claim locality before writing interrupt registers
	tpm, tpm: Implement usage counter for locality
	tpm, tpm_tis: Claim locality when interrupts are reenabled on resume
	erofs: stop parsing non-compact HEAD index if clusterofs is invalid
	erofs: fix potential overflow calculating xattr_isize
	drm/rockchip: Drop unbalanced obj unref
	drm/vgem: add missing mutex_destroy
	drm/probe-helper: Cancel previous job before starting new one
	soc: ti: pm33xx: Enable basic PM runtime support for genpd
	soc: ti: pm33xx: Fix refcount leak in am33xx_pm_probe
	arm64: dts: renesas: r8a77990: Remove bogus voltages from OPP table
	arm64: dts: renesas: r8a774c0: Remove bogus voltages from OPP table
	drm/msm/disp/dpu: check for crtc enable rather than crtc active to release shared resources
	EDAC/skx: Fix overflows on the DRAM row address mapping arrays
	arm64: dts: qcom: msm8998: Fix stm-stimulus-base reg name
	arm64: dts: qcom: sdm845: correct dynamic power coefficients
	arm64: dts: qcom: sdm845: Fix the PCI I/O port range
	arm64: dts: qcom: msm8998: Fix the PCI I/O port range
	arm64: dts: qcom: ipq8074: Fix the PCI I/O port range
	arm64: dts: qcom: msm8996: Fix the PCI I/O port range
	ARM: dts: qcom: ipq4019: Fix the PCI I/O port range
	ARM: dts: qcom: ipq8064: reduce pci IO size to 64K
	ARM: dts: qcom: ipq8064: Fix the PCI I/O port range
	x86/MCE/AMD: Use an u64 for bank_map
	media: bdisp: Add missing check for create_workqueue
	firmware: qcom_scm: Clear download bit during reboot
	drm/bridge: adv7533: Fix adv7533_mode_valid for adv7533 and adv7535
	media: max9286: Free control handler
	drm/msm/adreno: Defer enabling runpm until hw_init()
	drm/msm/adreno: drop bogus pm_runtime_set_active()
	drm: msm: adreno: Disable preemption on Adreno 510
	ACPI: processor: Fix evaluating _PDC method when running as Xen dom0
	mmc: sdhci-of-esdhc: fix quirk to ignore command inhibit for data
	ARM: dts: gta04: fix excess dma channel usage
	drm/lima/lima_drv: Add missing unwind goto in lima_pdev_probe()
	regulator: core: Consistently set mutex_owner when using ww_mutex_lock_slow()
	regulator: core: Avoid lockdep reports when resolving supplies
	x86/apic: Fix atomic update of offset in reserve_eilvt_offset()
	media: rkvdec: fix use after free bug in rkvdec_remove
	media: dm1105: Fix use after free bug in dm1105_remove due to race condition
	media: saa7134: fix use after free bug in saa7134_finidev due to race condition
	media: rcar_fdp1: simplify error check logic at fdp_open()
	media: rcar_fdp1: fix pm_runtime_get_sync() usage count
	media: rcar_fdp1: Make use of the helper function devm_platform_ioremap_resource()
	media: rcar_fdp1: Fix the correct variable assignments
	media: rcar_fdp1: Fix refcount leak in probe and remove function
	media: rc: gpio-ir-recv: Fix support for wake-up
	media: venus: vdec: Fix non reliable setting of LAST flag
	media: venus: vdec: Make decoder return LAST flag for sufficient event
	media: venus: preserve DRC state across seeks
	media: venus: vdec: Handle DRC after drain
	media: venus: dec: Fix handling of the start cmd
	regulator: stm32-pwr: fix of_iomap leak
	x86/ioapic: Don't return 0 from arch_dynirq_lower_bound()
	arm64: kgdb: Set PSTATE.SS to 1 to re-enable single-step
	debugobject: Prevent init race with static objects
	drm/i915: Make intel_get_crtc_new_encoder() less oopsy
	tick/sched: Use tick_next_period for lockless quick check
	tick/sched: Reduce seqcount held scope in tick_do_update_jiffies64()
	tick/sched: Optimize tick_do_update_jiffies64() further
	tick: Get rid of tick_period
	tick/common: Align tick period with the HZ tick.
	wifi: ath6kl: minor fix for allocation size
	wifi: ath9k: hif_usb: fix memory leak of remain_skbs
	wifi: ath5k: fix an off by one check in ath5k_eeprom_read_freq_list()
	wifi: ath6kl: reduce WARN to dev_dbg() in callback
	tools: bpftool: Remove invalid \' json escape
	wifi: rtw88: mac: Return the original error from rtw_pwr_seq_parser()
	wifi: rtw88: mac: Return the original error from rtw_mac_power_switch()
	bpf: take into account liveness when propagating precision
	bpf: fix precision propagation verbose logging
	scm: fix MSG_CTRUNC setting condition for SO_PASSSEC
	bpf: Remove misleading spec_v1 check on var-offset stack read
	vlan: partially enable SIOCSHWTSTAMP in container
	net/packet: annotate accesses to po->xmit
	net/packet: convert po->origdev to an atomic flag
	net/packet: convert po->auxdata to an atomic flag
	scsi: target: Rename struct sense_info to sense_detail
	scsi: target: Rename cmd.bad_sector to cmd.sense_info
	scsi: target: Make state_list per CPU
	scsi: target: Fix multiple LUN_RESET handling
	scsi: target: iscsit: Fix TAS handling during conn cleanup
	scsi: megaraid: Fix mega_cmd_done() CMDID_INT_CMDS
	f2fs: handle dqget error in f2fs_transfer_project_quota()
	f2fs: enforce single zone capacity
	f2fs: apply zone capacity to all zone type
	f2fs: compress: fix to call f2fs_wait_on_page_writeback() in f2fs_write_raw_pages()
	crypto: caam - Clear some memory in instantiate_rng
	crypto: sa2ul - Select CRYPTO_DES
	wifi: rtlwifi: fix incorrect error codes in rtl_debugfs_set_write_rfreg()
	wifi: rtlwifi: fix incorrect error codes in rtl_debugfs_set_write_reg()
	net: qrtr: correct types of trace event parameters
	selftests/bpf: Wait for receive in cg_storage_multi test
	bpftool: Fix bug for long instructions in program CFG dumps
	crypto: drbg - make drbg_prepare_hrng() handle jent instantiation errors
	crypto: drbg - Only fail when jent is unavailable in FIPS mode
	xsk: Fix unaligned descriptor validation
	f2fs: fix to avoid use-after-free for cached IPU bio
	scsi: lpfc: Fix ioremap issues in lpfc_sli4_pci_mem_setup()
	net: ethernet: stmmac: dwmac-rk: fix optional phy regulator handling
	bpf, sockmap: fix deadlocks in the sockhash and sockmap
	nvme: handle the persistent internal error AER
	nvme: fix async event trace event
	nvme-fcloop: fix "inconsistent {IN-HARDIRQ-W} -> {HARDIRQ-ON-W} usage"
	bpf, sockmap: Revert buggy deadlock fix in the sockhash and sockmap
	md/raid10: fix leak of 'r10bio->remaining' for recovery
	md/raid10: fix memleak for 'conf->bio_split'
	md/raid10: fix memleak of md thread
	wifi: iwlwifi: yoyo: Fix possible division by zero
	wifi: iwlwifi: fw: move memset before early return
	jdb2: Don't refuse invalidation of already invalidated buffers
	wifi: iwlwifi: make the loop for card preparation effective
	wifi: iwlwifi: mvm: check firmware response size
	wifi: iwlwifi: fw: fix memory leak in debugfs
	ixgbe: Allow flow hash to be set via ethtool
	ixgbe: Enable setting RSS table to default values
	bpf: Don't EFAULT for getsockopt with optval=NULL
	netfilter: nf_tables: don't write table validation state without mutex
	net/sched: sch_fq: fix integer overflow of "credit"
	ipv4: Fix potential uninit variable access bug in __ip_make_skb()
	Revert "Bluetooth: btsdio: fix use after free bug in btsdio_remove due to unfinished work"
	netlink: Use copy_to_user() for optval in netlink_getsockopt().
	net: amd: Fix link leak when verifying config failed
	tcp/udp: Fix memleaks of sk and zerocopy skbs with TX timestamp.
	ipmi: ASPEED_BT_IPMI_BMC: select REGMAP_MMIO instead of depending on it
	pstore: Revert pmsg_lock back to a normal mutex
	usb: host: xhci-rcar: remove leftover quirk handling
	usb: dwc3: gadget: Change condition for processing suspend event
	fpga: bridge: fix kernel-doc parameter description
	iio: light: max44009: add missing OF device matching
	spi: spi-imx: using pm_runtime_resume_and_get instead of pm_runtime_get_sync
	spi: imx: Don't skip cleanup in remove's error path
	usb: gadget: udc: renesas_usb3: Fix use after free bug in renesas_usb3_remove due to race condition
	PCI: imx6: Install the fault handler only on compatible match
	ASoC: es8316: Use IRQF_NO_AUTOEN when requesting the IRQ
	ASoC: es8316: Handle optional IRQ assignment
	linux/vt_buffer.h: allow either builtin or modular for macros
	spi: qup: Don't skip cleanup in remove's error path
	spi: fsl-spi: Fix CPM/QE mode Litte Endian
	vmci_host: fix a race condition in vmci_host_poll() causing GPF
	of: Fix modalias string generation
	PCI/EDR: Clear Device Status after EDR error recovery
	ia64: mm/contig: fix section mismatch warning/error
	ia64: salinfo: placate defined-but-not-used warning
	scripts/gdb: bail early if there are no clocks
	scripts/gdb: bail early if there are no generic PD
	coresight: etm_pmu: Set the module field
	ASoC: fsl_mqs: move of_node_put() to the correct location
	spi: cadence-quadspi: fix suspend-resume implementations
	i2c: cadence: cdns_i2c_master_xfer(): Fix runtime PM leak on error path
	uapi/linux/const.h: prefer ISO-friendly __typeof__
	sh: sq: Fix incorrect element size for allocating bitmap buffer
	usb: gadget: tegra-xudc: Fix crash in vbus_draw
	usb: chipidea: fix missing goto in `ci_hdrc_probe`
	usb: mtu3: fix kernel panic at qmu transfer done irq handler
	firmware: stratix10-svc: Fix an NULL vs IS_ERR() bug in probe
	tty: serial: fsl_lpuart: adjust buffer length to the intended size
	serial: 8250: Add missing wakeup event reporting
	staging: rtl8192e: Fix W_DISABLE# does not work after stop/start
	spmi: Add a check for remove callback when removing a SPMI driver
	macintosh/windfarm_smu_sat: Add missing of_node_put()
	powerpc/mpc512x: fix resource printk format warning
	powerpc/wii: fix resource printk format warnings
	powerpc/sysdev/tsi108: fix resource printk format warnings
	macintosh: via-pmu-led: requires ATA to be set
	powerpc/rtas: use memmove for potentially overlapping buffer copy
	perf/core: Fix hardlockup failure caused by perf throttle
	clk: at91: clk-sam9x60-pll: fix return value check
	RDMA/siw: Fix potential page_array out of range access
	RDMA/rdmavt: Delete unnecessary NULL check
	workqueue: Rename "delayed" (delayed by active management) to "inactive"
	workqueue: Fix hung time report of worker pools
	rtc: omap: include header for omap_rtc_power_off_program prototype
	RDMA/mlx4: Prevent shift wrapping in set_user_sq_size()
	rtc: meson-vrtc: Use ktime_get_real_ts64() to get the current time
	power: supply: generic-adc-battery: fix unit scaling
	clk: add missing of_node_put() in "assigned-clocks" property parsing
	RDMA/siw: Remove namespace check from siw_netdev_event()
	RDMA/cm: Trace icm_send_rej event before the cm state is reset
	RDMA/srpt: Add a check for valid 'mad_agent' pointer
	IB/hfi1: Fix SDMA mmu_rb_node not being evicted in LRU order
	IB/hfi1: Add AIP tx traces
	IB/hfi1: Add additional usdma traces
	IB/hfi1: Fix bugs with non-PAGE_SIZE-end multi-iovec user SDMA requests
	NFSv4.1: Always send a RECLAIM_COMPLETE after establishing lease
	firmware: raspberrypi: Introduce devm_rpi_firmware_get()
	input: raspberrypi-ts: Release firmware handle when not needed
	Input: raspberrypi-ts - fix refcount leak in rpi_ts_probe
	RDMA/mlx5: Fix flow counter query via DEVX
	SUNRPC: remove the maximum number of retries in call_bind_status
	RDMA/mlx5: Use correct device num_ports when modify DC
	clocksource/drivers/davinci: Fix memory leak in davinci_timer_register when init fails
	openrisc: Properly store r31 to pt_regs on unhandled exceptions
	ext4: fix use-after-free read in ext4_find_extent for bigalloc + inline
	leds: TI_LMU_COMMON: select REGMAP instead of depending on it
	dmaengine: mv_xor_v2: Fix an error code.
	leds: tca6507: Fix error handling of using fwnode_property_read_string
	pwm: mtk-disp: Don't check the return code of pwmchip_remove()
	pwm: mtk-disp: Adjust the clocks to avoid them mismatch
	pwm: mtk-disp: Disable shadow registers before setting backlight values
	phy: tegra: xusb: Add missing tegra_xusb_port_unregister for usb2_port and ulpi_port
	dmaengine: dw-edma: Fix to change for continuous transfer
	dmaengine: dw-edma: Fix to enable to issue dma request on DMA processing
	dmaengine: at_xdmac: do not enable all cyclic channels
	thermal/drivers/mediatek: Use devm_of_iomap to avoid resource leak in mtk_thermal_probe
	mfd: tqmx86: Do not access I2C_DETECT register through io_base
	mfd: tqmx86: Remove incorrect TQMx90UC board ID
	mfd: tqmx86: Add support for TQMx110EB and TQMxE40x
	mfd: tqmx86: Specify IO port register range more precisely
	mfd: tqmx86: Correct board names for TQMxE39x
	afs: Fix updating of i_size with dv jump from server
	scripts/gdb: fix lx-timerlist for Python3
	btrfs: scrub: reject unsupported scrub flags
	s390/dasd: fix hanging blockdevice after request requeue
	ia64: fix an addr to taddr in huge_pte_offset()
	dm clone: call kmem_cache_destroy() in dm_clone_init() error path
	dm integrity: call kmem_cache_destroy() in dm_integrity_init() error path
	dm flakey: fix a crash with invalid table line
	dm ioctl: fix nested locking in table_clear() to remove deadlock concern
	perf auxtrace: Fix address filter entire kernel size
	perf intel-pt: Fix CYC timestamps after standalone CBR
	arm64: Always load shadow stack pointer directly from the task struct
	arm64: Stash shadow stack pointer in the task struct on interrupt
	debugobject: Ensure pool refill (again)
	sound/oss/dmasound: fix 'dmasound_setup' defined but not used
	arm64: dts: qcom: sdm845: correct dynamic power coefficients
	scsi: target: core: Avoid smp_processor_id() in preemptible code
	netfilter: nf_tables: deactivate anonymous set from preparation phase
	tty: create internal tty.h file
	tty: audit: move some local functions out of tty.h
	tty: move some internal tty lock enums and functions out of tty.h
	tty: move some tty-only functions to drivers/tty/tty.h
	tty: clean include/linux/tty.h up
	tty: Prevent writing chars during tcsetattr TCSADRAIN/FLUSH
	ring-buffer: Ensure proper resetting of atomic variables in ring_buffer_reset_online_cpus
	crypto: ccp - Clear PSP interrupt status register before calling handler
	mailbox: zynq: Switch to flexible array to simplify code
	mailbox: zynqmp: Fix counts of child nodes
	dm verity: skip redundant verity_handle_err() on I/O errors
	dm verity: fix error handling for check_at_most_once on FEC
	scsi: qedi: Fix use after free bug in qedi_remove()
	net/ncsi: clear Tx enable mode when handling a Config required AEN
	net/sched: cls_api: remove block_cb from driver_list before freeing
	sit: update dev->needed_headroom in ipip6_tunnel_bind_dev()
	net: dsa: mv88e6xxx: add mv88e6321 rsvd2cpu
	writeback: fix call of incorrect macro
	watchdog: dw_wdt: Fix the error handling path of dw_wdt_drv_probe()
	net/sched: act_mirred: Add carrier check
	sfc: Fix module EEPROM reporting for QSFP modules
	rxrpc: Fix hard call timeout units
	octeontx2-pf: Disable packet I/O for graceful exit
	octeontx2-vf: Detach LF resources on probe cleanup
	ionic: remove noise from ethtool rxnfc error msg
	af_packet: Don't send zero-byte data in packet_sendmsg_spkt().
	drm/amdgpu: add a missing lock for AMDGPU_SCHED
	ALSA: caiaq: input: Add error handling for unsupported input methods in `snd_usb_caiaq_input_init`
	net: dsa: mt7530: fix corrupt frames using trgmii on 40 MHz XTAL MT7621
	virtio_net: split free_unused_bufs()
	virtio_net: suppress cpu stall when free_unused_bufs
	net: enetc: check the index of the SFI rather than the handle
	perf vendor events power9: Remove UTF-8 characters from JSON files
	perf pmu: zfree() expects a pointer to a pointer to zero it after freeing its contents
	perf map: Delete two variable initialisations before null pointer checks in sort__sym_from_cmp()
	crypto: sun8i-ss - Fix a test in sun8i_ss_setup_ivs()
	perf symbols: Fix return incorrect build_id size in elf_read_build_id()
	btrfs: fix btrfs_prev_leaf() to not return the same key twice
	btrfs: don't free qgroup space unless specified
	btrfs: print-tree: parent bytenr must be aligned to sector size
	cifs: fix pcchunk length type in smb2_copychunk_range
	platform/x86: touchscreen_dmi: Add upside-down quirk for GDIX1002 ts on the Juno Tablet
	platform/x86: touchscreen_dmi: Add info for the Dexp Ursus KX210i
	inotify: Avoid reporting event with invalid wd
	sh: math-emu: fix macro redefined warning
	sh: mcount.S: fix build error when PRINTK is not enabled
	sh: init: use OF_EARLY_FLATTREE for early init
	sh: nmi_debug: fix return value of __setup handler
	remoteproc: stm32: Call of_node_put() on iteration error
	remoteproc: st: Call of_node_put() on iteration error
	ARM: dts: exynos: fix WM8960 clock name in Itop Elite
	ARM: dts: s5pv210: correct MIPI CSIS clock name
	f2fs: fix potential corruption when moving a directory
	drm/panel: otm8009a: Set backlight parent to panel device
	drm/amdgpu: fix an amdgpu_irq_put() issue in gmc_v9_0_hw_fini()
	drm/amdgpu/gfx: disable gfx9 cp_ecc_error_irq only when enabling legacy gfx ras
	drm/amdgpu: disable sdma ecc irq only when sdma RAS is enabled in suspend
	HID: wacom: Set a default resolution for older tablets
	HID: wacom: insert timestamp to packed Bluetooth (BT) events
	KVM: x86: hyper-v: Avoid calling kvm_make_vcpus_request_mask() with vcpu_mask==NULL
	KVM: x86: do not report a vCPU as preempted outside instruction boundaries
	ext4: fix WARNING in mb_find_extent
	ext4: avoid a potential slab-out-of-bounds in ext4_group_desc_csum
	ext4: fix data races when using cached status extents
	ext4: check iomap type only if ext4_iomap_begin() does not fail
	ext4: improve error recovery code paths in __ext4_remount()
	ext4: fix deadlock when converting an inline directory in nojournal mode
	ext4: add bounds checking in get_max_inline_xattr_value_size()
	ext4: bail out of ext4_xattr_ibody_get() fails for any reason
	ext4: remove a BUG_ON in ext4_mb_release_group_pa()
	ext4: fix invalid free tracking in ext4_xattr_move_to_block()
	serial: 8250: Fix serial8250_tx_empty() race with DMA Tx
	drbd: correctly submit flush bio on barrier
	KVM: x86: Ensure PV TLB flush tracepoint reflects KVM behavior
	KVM: x86: Fix recording of guest steal time / preempted status
	KVM: Fix steal time asm constraints
	KVM: x86: Remove obsolete disabling of page faults in kvm_arch_vcpu_put()
	KVM: x86: do not set st->preempted when going back to user space
	KVM: x86: revalidate steal time cache if MSR value changes
	KVM: x86: do not report preemption if the steal time cache is stale
	KVM: x86: move guest_pv_has out of user_access section
	printk: declare printk_deferred_{enter,safe}() in include/linux/printk.h
	drm/exynos: move to use request_irq by IRQF_NO_AUTOEN flag
	mm/page_alloc: fix potential deadlock on zonelist_update_seq seqlock
	drm/amd/display: Fix hang when skipping modeset
	Linux 5.10.180

Change-Id: Ie0c8ae79d56d844ec23ec277d91d4c70c3e1e9a8
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Commit d70c95bd81 by Greg Kroah-Hartman, 2023-06-16 09:56:39 +00:00
95 changed files with 720 additions and 312 deletions


@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 10
SUBLEVEL = 179
SUBLEVEL = 180
EXTRAVERSION =
NAME = Dare mighty things


@ -179,7 +179,7 @@ codec: wm8960@1a {
compatible = "wlf,wm8960";
reg = <0x1a>;
clocks = <&pmu_system_controller 0>;
clock-names = "MCLK1";
clock-names = "mclk";
wlf,shared-lrclk;
#sound-dai-cells = <0>;
};


@ -583,7 +583,7 @@ csis0: csis@fa600000 {
interrupts = <29>;
clocks = <&clocks CLK_CSIS>,
<&clocks SCLK_CSIS>;
clock-names = "clk_csis",
clock-names = "csis",
"sclk_csis";
bus-width = <4>;
status = "disabled";


@ -197,7 +197,7 @@ CPU0: cpu@0 {
&LITTLE_CPU_SLEEP_1
&CLUSTER_SLEEP_0>;
capacity-dmips-mhz = <611>;
dynamic-power-coefficient = <290>;
dynamic-power-coefficient = <154>;
qcom,freq-domain = <&cpufreq_hw 0>;
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
@ -222,7 +222,7 @@ CPU1: cpu@100 {
&LITTLE_CPU_SLEEP_1
&CLUSTER_SLEEP_0>;
capacity-dmips-mhz = <611>;
dynamic-power-coefficient = <290>;
dynamic-power-coefficient = <154>;
qcom,freq-domain = <&cpufreq_hw 0>;
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
@ -244,7 +244,7 @@ CPU2: cpu@200 {
&LITTLE_CPU_SLEEP_1
&CLUSTER_SLEEP_0>;
capacity-dmips-mhz = <611>;
dynamic-power-coefficient = <290>;
dynamic-power-coefficient = <154>;
qcom,freq-domain = <&cpufreq_hw 0>;
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,
@ -266,7 +266,7 @@ CPU3: cpu@300 {
&LITTLE_CPU_SLEEP_1
&CLUSTER_SLEEP_0>;
capacity-dmips-mhz = <611>;
dynamic-power-coefficient = <290>;
dynamic-power-coefficient = <154>;
qcom,freq-domain = <&cpufreq_hw 0>;
operating-points-v2 = <&cpu0_opp_table>;
interconnects = <&gladiator_noc MASTER_APPSS_PROC 3 &mem_noc SLAVE_EBI1 3>,


@ -530,9 +530,7 @@ SYM_CODE_END(__swpan_exit_el0)
.macro irq_stack_entry
mov x19, sp // preserve the original sp
#ifdef CONFIG_SHADOW_CALL_STACK
mov x24, scs_sp // preserve the original shadow stack
#endif
scs_save tsk // preserve the original shadow stack
/*
* Compare sp with the base of the task stack.
@ -566,9 +564,7 @@ SYM_CODE_END(__swpan_exit_el0)
*/
.macro irq_stack_exit
mov sp, x19
#ifdef CONFIG_SHADOW_CALL_STACK
mov scs_sp, x24
#endif
scs_load_current
.endm
/* GPRs used by entry code */


@ -18,7 +18,7 @@ config SH_STANDARD_BIOS
config STACK_DEBUG
bool "Check for stack overflows"
depends on DEBUG_KERNEL
depends on DEBUG_KERNEL && PRINTK
help
This option will cause messages to be printed if free stack space
drops below a certain limit. Saying Y here will add overhead to


@ -64,7 +64,7 @@ ENTRY(_stext)
ldc r0, r6_bank
#endif
#ifdef CONFIG_OF_FLATTREE
#ifdef CONFIG_OF_EARLY_FLATTREE
mov r4, r12 ! Store device tree blob pointer in r12
#endif
@ -315,7 +315,7 @@ ENTRY(_stext)
10:
#endif
#ifdef CONFIG_OF_FLATTREE
#ifdef CONFIG_OF_EARLY_FLATTREE
mov.l 8f, r0 ! Make flat device tree available early.
jsr @r0
mov r12, r4
@ -346,7 +346,7 @@ ENTRY(stack_start)
5: .long start_kernel
6: .long cpu_init
7: .long init_thread_union
#if defined(CONFIG_OF_FLATTREE)
#if defined(CONFIG_OF_EARLY_FLATTREE)
8: .long sh_fdt_init
#endif


@ -49,7 +49,7 @@ static int __init nmi_debug_setup(char *str)
register_die_notifier(&nmi_debug_nb);
if (*str != '=')
return 0;
return 1;
for (p = str + 1; *p; p = sep + 1) {
sep = strchr(p, ',');
@ -70,6 +70,6 @@ static int __init nmi_debug_setup(char *str)
break;
}
return 0;
return 1;
}
__setup("nmi_debug", nmi_debug_setup);


@ -244,7 +244,7 @@ void __init __weak plat_early_device_setup(void)
{
}
#ifdef CONFIG_OF_FLATTREE
#ifdef CONFIG_OF_EARLY_FLATTREE
void __ref sh_fdt_init(phys_addr_t dt_phys)
{
static int done = 0;
@ -329,7 +329,7 @@ void __init setup_arch(char **cmdline_p)
/* Let earlyprintk output early console messages */
sh_early_platform_driver_probe("earlyprintk", 1, 1);
#ifdef CONFIG_OF_FLATTREE
#ifdef CONFIG_OF_EARLY_FLATTREE
#ifdef CONFIG_USE_BUILTIN_DTB
unflatten_and_copy_device_tree();
#else


@ -67,7 +67,3 @@
} while (0)
#define abort() return 0
#define __BYTE_ORDER __LITTLE_ENDIAN


@ -664,7 +664,7 @@ struct kvm_vcpu_arch {
u8 preempted;
u64 msr_val;
u64 last_steal;
struct gfn_to_pfn_cache cache;
struct gfn_to_hva_cache cache;
} st;
u64 l1_tsc_offset;


@ -1562,16 +1562,19 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *current_vcpu, u64 ingpa,
cpumask_clear(&hv_vcpu->tlb_flush);
vcpu_mask = all_cpus ? NULL :
sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask,
vp_bitmap, vcpu_bitmap);
/*
* vcpu->arch.cr3 may not be up-to-date for running vCPUs so we can't
* analyze it here, flush TLB regardless of the specified address space.
*/
if (all_cpus) {
kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH_GUEST);
} else {
vcpu_mask = sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask,
vp_bitmap, vcpu_bitmap);
kvm_make_vcpus_request_mask(kvm, KVM_REQ_TLB_FLUSH_GUEST,
NULL, vcpu_mask, &hv_vcpu->tlb_flush);
}
ret_success:
/* We always do full TLB flush, set rep_done = rep_cnt. */


@ -3022,51 +3022,95 @@ static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
static void record_steal_time(struct kvm_vcpu *vcpu)
{
struct kvm_host_map map;
struct kvm_steal_time *st;
struct gfn_to_hva_cache *ghc = &vcpu->arch.st.cache;
struct kvm_steal_time __user *st;
struct kvm_memslots *slots;
gpa_t gpa = vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS;
u64 steal;
u32 version;
if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
return;
/* -EAGAIN is returned in atomic context so we can just return. */
if (kvm_map_gfn(vcpu, vcpu->arch.st.msr_val >> PAGE_SHIFT,
&map, &vcpu->arch.st.cache, false))
if (WARN_ON_ONCE(current->mm != vcpu->kvm->mm))
return;
st = map.hva +
offset_in_page(vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS);
slots = kvm_memslots(vcpu->kvm);
if (unlikely(slots->generation != ghc->generation ||
gpa != ghc->gpa ||
kvm_is_error_hva(ghc->hva) || !ghc->memslot)) {
/* We rely on the fact that it fits in a single page. */
BUILD_BUG_ON((sizeof(*st) - 1) & KVM_STEAL_VALID_BITS);
if (kvm_gfn_to_hva_cache_init(vcpu->kvm, ghc, gpa, sizeof(*st)) ||
kvm_is_error_hva(ghc->hva) || !ghc->memslot)
return;
}
st = (struct kvm_steal_time __user *)ghc->hva;
/*
* Doing a TLB flush here, on the guest's behalf, can avoid
* expensive IPIs.
*/
if (guest_pv_has(vcpu, KVM_FEATURE_PV_TLB_FLUSH)) {
trace_kvm_pv_tlb_flush(vcpu->vcpu_id,
st->preempted & KVM_VCPU_FLUSH_TLB);
if (xchg(&st->preempted, 0) & KVM_VCPU_FLUSH_TLB)
kvm_vcpu_flush_tlb_guest(vcpu);
} else {
st->preempted = 0;
}
u8 st_preempted = 0;
int err = -EFAULT;
if (!user_access_begin(st, sizeof(*st)))
return;
asm volatile("1: xchgb %0, %2\n"
"xor %1, %1\n"
"2:\n"
_ASM_EXTABLE_UA(1b, 2b)
: "+q" (st_preempted),
"+&r" (err),
"+m" (st->preempted));
if (err)
goto out;
user_access_end();
vcpu->arch.st.preempted = 0;
if (st->version & 1)
st->version += 1; /* first time write, random junk */
trace_kvm_pv_tlb_flush(vcpu->vcpu_id,
st_preempted & KVM_VCPU_FLUSH_TLB);
if (st_preempted & KVM_VCPU_FLUSH_TLB)
kvm_vcpu_flush_tlb_guest(vcpu);
st->version += 1;
if (!user_access_begin(st, sizeof(*st)))
goto dirty;
} else {
if (!user_access_begin(st, sizeof(*st)))
return;
unsafe_put_user(0, &st->preempted, out);
vcpu->arch.st.preempted = 0;
}
unsafe_get_user(version, &st->version, out);
if (version & 1)
version += 1; /* first time write, random junk */
version += 1;
unsafe_put_user(version, &st->version, out);
smp_wmb();
st->steal += current->sched_info.run_delay -
unsafe_get_user(steal, &st->steal, out);
steal += current->sched_info.run_delay -
vcpu->arch.st.last_steal;
vcpu->arch.st.last_steal = current->sched_info.run_delay;
unsafe_put_user(steal, &st->steal, out);
smp_wmb();
version += 1;
unsafe_put_user(version, &st->version, out);
st->version += 1;
kvm_unmap_gfn(vcpu, &map, &vcpu->arch.st.cache, true, false);
out:
user_access_end();
dirty:
mark_page_dirty_in_slot(ghc->memslot, gpa_to_gfn(ghc->gpa));
}
int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
@ -4051,8 +4095,11 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
{
struct kvm_host_map map;
struct kvm_steal_time *st;
struct gfn_to_hva_cache *ghc = &vcpu->arch.st.cache;
struct kvm_steal_time __user *st;
struct kvm_memslots *slots;
static const u8 preempted = KVM_VCPU_PREEMPTED;
gpa_t gpa = vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS;
/*
* The vCPU can be marked preempted if and only if the VM-Exit was on
@ -4073,42 +4120,42 @@ static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
if (vcpu->arch.st.preempted)
return;
if (kvm_map_gfn(vcpu, vcpu->arch.st.msr_val >> PAGE_SHIFT, &map,
&vcpu->arch.st.cache, true))
/* This happens on process exit */
if (unlikely(current->mm != vcpu->kvm->mm))
return;
st = map.hva +
offset_in_page(vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS);
slots = kvm_memslots(vcpu->kvm);
st->preempted = vcpu->arch.st.preempted = KVM_VCPU_PREEMPTED;
if (unlikely(slots->generation != ghc->generation ||
gpa != ghc->gpa ||
kvm_is_error_hva(ghc->hva) || !ghc->memslot))
return;
kvm_unmap_gfn(vcpu, &map, &vcpu->arch.st.cache, true, true);
st = (struct kvm_steal_time __user *)ghc->hva;
BUILD_BUG_ON(sizeof(st->preempted) != sizeof(preempted));
if (!copy_to_user_nofault(&st->preempted, &preempted, sizeof(preempted)))
vcpu->arch.st.preempted = KVM_VCPU_PREEMPTED;
mark_page_dirty_in_slot(ghc->memslot, gpa_to_gfn(ghc->gpa));
}
void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
{
int idx;
if (vcpu->preempted)
if (vcpu->preempted) {
vcpu->arch.preempted_in_kernel = !kvm_x86_ops.get_cpl(vcpu);
/*
* Disable page faults because we're in atomic context here.
* kvm_write_guest_offset_cached() would call might_fault()
* that relies on pagefault_disable() to tell if there's a
* bug. NOTE: the write to guest memory may not go through if
* during postcopy live migration or if there's heavy guest
* paging.
*/
pagefault_disable();
/*
* kvm_memslots() will be called by
* kvm_write_guest_offset_cached() so take the srcu lock.
* Take the srcu lock as memslots will be accessed to check the gfn
* cache generation against the memslots generation.
*/
idx = srcu_read_lock(&vcpu->kvm->srcu);
kvm_steal_time_set_preempted(vcpu);
srcu_read_unlock(&vcpu->kvm->srcu, idx);
pagefault_enable();
}
kvm_x86_ops.vcpu_put(vcpu);
vcpu->arch.last_host_tsc = rdtsc();
/*
@ -10264,11 +10311,8 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
{
struct gfn_to_pfn_cache *cache = &vcpu->arch.st.cache;
int idx;
kvm_release_pfn(cache->pfn, cache->dirty, cache);
kvmclock_reset(vcpu);
kvm_x86_ops.vcpu_free(vcpu);
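As an aside on the pattern used above: the steal-time rework drops the kvm_map_gfn()/kvm_unmap_gfn() pair in favour of a gfn_to_hva_cache plus the kernel's batched user-access helpers. A minimal sketch of that helper pattern in isolation, with hypothetical structure and function names (not code from this merge):

#include <linux/errno.h>
#include <linux/types.h>
#include <linux/uaccess.h>

/* Hypothetical guest-shared record, for illustration only. */
struct demo_shared {
        u8 preempted;
};

static int demo_clear_preempted(struct demo_shared __user *sh)
{
        u8 old = 0;

        /* Open a single user-access window, then use the unsafe_* accessors;
         * a fault jumps to the supplied label. */
        if (!user_access_begin(sh, sizeof(*sh)))
                return -EFAULT;

        unsafe_get_user(old, &sh->preempted, efault);
        unsafe_put_user(0, &sh->preempted, efault);
        user_access_end();
        return old;

efault:
        user_access_end();
        return -EFAULT;
}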


@ -1299,7 +1299,7 @@ static void submit_one_flush(struct drbd_device *device, struct issue_flush_cont
bio_set_dev(bio, device->ldev->backing_bdev);
bio->bi_private = octx;
bio->bi_end_io = one_flush_endio;
bio->bi_opf = REQ_OP_FLUSH | REQ_PREFLUSH;
bio->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
device->flush_jif = jiffies;
set_bit(FLUSH_PENDING, &device->flags);


@ -132,7 +132,7 @@ static int sun8i_ss_setup_ivs(struct skcipher_request *areq)
}
rctx->p_iv[i] = a;
/* we need to setup all others IVs only in the decrypt way */
if (rctx->op_dir & SS_ENCRYPTION)
if (rctx->op_dir == SS_ENCRYPTION)
return 0;
todo = min(len, sg_dma_len(sg));
len -= todo;


@ -42,6 +42,9 @@ static irqreturn_t psp_irq_handler(int irq, void *data)
/* Read the interrupt status: */
status = ioread32(psp->io_regs + psp->vdata->intsts_reg);
/* Clear the interrupt status by writing the same value we read. */
iowrite32(status, psp->io_regs + psp->vdata->intsts_reg);
/* invoke subdevice interrupt handlers */
if (status) {
if (psp->sev_irq_handler)
@ -51,9 +54,6 @@ static irqreturn_t psp_irq_handler(int irq, void *data)
psp->tee_irq_handler(irq, psp->tee_irq_data, status);
}
/* Clear the interrupt status by writing the same value we read. */
iowrite32(status, psp->io_regs + psp->vdata->intsts_reg);
return IRQ_HANDLED;
}


@ -66,6 +66,7 @@ static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev,
{
struct fd f = fdget(fd);
struct amdgpu_fpriv *fpriv;
struct amdgpu_ctx_mgr *mgr;
struct amdgpu_ctx *ctx;
uint32_t id;
int r;
@ -79,8 +80,11 @@ static int amdgpu_sched_process_priority_override(struct amdgpu_device *adev,
return r;
}
idr_for_each_entry(&fpriv->ctx_mgr.ctx_handles, ctx, id)
mgr = &fpriv->ctx_mgr;
mutex_lock(&mgr->lock);
idr_for_each_entry(&mgr->ctx_handles, ctx, id)
amdgpu_ctx_priority_override(ctx, priority);
mutex_unlock(&mgr->lock);
fdput(f);
return 0;


@ -3943,6 +3943,7 @@ static int gfx_v9_0_hw_fini(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
if (amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__GFX))
amdgpu_irq_put(adev, &adev->gfx.cp_ecc_error_irq, 0);
amdgpu_irq_put(adev, &adev->gfx.priv_reg_irq, 0);
amdgpu_irq_put(adev, &adev->gfx.priv_inst_irq, 0);


@ -1686,7 +1686,6 @@ static int gmc_v9_0_hw_fini(void *handle)
return 0;
}
amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0);
amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);
return 0;


@ -1979,10 +1979,12 @@ static int sdma_v4_0_hw_fini(void *handle)
if (amdgpu_sriov_vf(adev))
return 0;
if (amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__SDMA)) {
for (i = 0; i < adev->sdma.num_instances; i++) {
amdgpu_irq_put(adev, &adev->sdma.ecc_irq,
AMDGPU_SDMA_IRQ_INSTANCE0 + i);
}
}
sdma_v4_0_ctx_switch_enable(adev, false);
sdma_v4_0_enable(adev, false);


@ -7248,6 +7248,8 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
continue;
dc_plane = dm_new_plane_state->dc_state;
if (!dc_plane)
continue;
bundle->surface_updates[planes_count].surface = dc_plane;
if (new_pcrtc_state->color_mgmt_changed) {
@ -8562,8 +8564,9 @@ static int dm_update_plane_state(struct dc *dc,
return -EINVAL;
}
if (dm_old_plane_state->dc_state)
dc_plane_state_release(dm_old_plane_state->dc_state);
dm_new_plane_state->dc_state = NULL;
*lock_and_validation_needed = true;


@ -1502,6 +1502,9 @@ bool dc_remove_plane_from_context(
struct dc_stream_status *stream_status = NULL;
struct resource_pool *pool = dc->res_pool;
if (!plane_state)
return true;
for (i = 0; i < context->stream_count; i++)
if (context->streams[i] == stream) {
stream_status = &context->stream_status[i];


@ -775,8 +775,8 @@ static int decon_conf_irq(struct decon_context *ctx, const char *name,
return irq;
}
}
irq_set_status_flags(irq, IRQ_NOAUTOEN);
ret = devm_request_irq(ctx->dev, irq, handler, flags, "drm_decon", ctx);
ret = devm_request_irq(ctx->dev, irq, handler,
flags | IRQF_NO_AUTOEN, "drm_decon", ctx);
if (ret < 0) {
dev_err(ctx->dev, "IRQ %s request failed\n", name);
return ret;


@ -1353,10 +1353,9 @@ static int exynos_dsi_register_te_irq(struct exynos_dsi *dsi,
}
te_gpio_irq = gpio_to_irq(dsi->te_gpio);
irq_set_status_flags(te_gpio_irq, IRQ_NOAUTOEN);
ret = request_threaded_irq(te_gpio_irq, exynos_dsi_te_irq_handler, NULL,
IRQF_TRIGGER_RISING, "TE", dsi);
IRQF_TRIGGER_RISING | IRQF_NO_AUTOEN, "TE", dsi);
if (ret) {
dev_err(dsi->dev, "request interrupt failed with %d\n", ret);
gpio_free(dsi->te_gpio);
@ -1802,9 +1801,9 @@ static int exynos_dsi_probe(struct platform_device *pdev)
if (dsi->irq < 0)
return dsi->irq;
irq_set_status_flags(dsi->irq, IRQ_NOAUTOEN);
ret = devm_request_threaded_irq(dev, dsi->irq, NULL,
exynos_dsi_irq, IRQF_ONESHOT,
exynos_dsi_irq,
IRQF_ONESHOT | IRQF_NO_AUTOEN,
dev_name(dev), dsi);
if (ret) {
dev_err(dev, "failed to request dsi irq\n");
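For context on the idiom above: requesting the IRQ with IRQF_NO_AUTOEN replaces the older two-step irq_set_status_flags(irq, IRQ_NOAUTOEN) + request_irq() sequence, closing the window in which the interrupt could fire before the flag took effect. A hedged sketch of the idiom in a generic driver, with hypothetical names (not part of this merge):

#include <linux/device.h>
#include <linux/interrupt.h>

static irqreturn_t demo_irq_handler(int irq, void *data)
{
        /* Illustrative handler body only. */
        return IRQ_HANDLED;
}

static int demo_setup_irq(struct device *dev, int irq, void *ctx)
{
        int ret;

        /* Request the line but leave it disabled until the device is ready. */
        ret = devm_request_irq(dev, irq, demo_irq_handler,
                               IRQF_TRIGGER_RISING | IRQF_NO_AUTOEN,
                               "demo", ctx);
        if (ret)
                return ret;

        /* ... later, once initialisation is complete ... */
        enable_irq(irq);
        return 0;
}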


@ -458,7 +458,7 @@ static int otm8009a_probe(struct mipi_dsi_device *dsi)
DRM_MODE_CONNECTOR_DSI);
ctx->bl_dev = devm_backlight_device_register(dev, dev_name(dev),
dsi->host->dev, ctx,
dev, ctx,
&otm8009a_backlight_ops,
NULL);
if (IS_ERR(ctx->bl_dev)) {


@ -1265,6 +1265,9 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
struct input_dev *pen_input = wacom->pen_input;
unsigned char *data = wacom->data;
int number_of_valid_frames = 0;
int time_interval = 15000000;
ktime_t time_packet_received = ktime_get();
int i;
if (wacom->features.type == INTUOSP2_BT ||
@ -1285,12 +1288,30 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
wacom->id[0] |= (wacom->serial[0] >> 32) & 0xFFFFF;
}
/* number of valid frames */
for (i = 0; i < pen_frames; i++) {
unsigned char *frame = &data[i*pen_frame_len + 1];
bool valid = frame[0] & 0x80;
if (valid)
number_of_valid_frames++;
}
if (number_of_valid_frames) {
if (wacom->hid_data.time_delayed)
time_interval = ktime_get() - wacom->hid_data.time_delayed;
time_interval /= number_of_valid_frames;
wacom->hid_data.time_delayed = time_packet_received;
}
for (i = 0; i < number_of_valid_frames; i++) {
unsigned char *frame = &data[i*pen_frame_len + 1];
bool valid = frame[0] & 0x80;
bool prox = frame[0] & 0x40;
bool range = frame[0] & 0x20;
bool invert = frame[0] & 0x10;
int frames_number_reversed = number_of_valid_frames - i - 1;
int event_timestamp = time_packet_received - frames_number_reversed * time_interval;
if (!valid)
continue;
@ -1303,6 +1324,7 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
wacom->tool[0] = 0;
wacom->id[0] = 0;
wacom->serial[0] = 0;
wacom->hid_data.time_delayed = 0;
return;
}
@ -1339,6 +1361,7 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
get_unaligned_le16(&frame[11]));
}
}
if (wacom->tool[0]) {
input_report_abs(pen_input, ABS_PRESSURE, get_unaligned_le16(&frame[5]));
if (wacom->features.type == INTUOSP2_BT ||
@ -1362,6 +1385,9 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
wacom->shared->stylus_in_proximity = prox;
/* add timestamp to unpack the frames */
input_set_timestamp(pen_input, event_timestamp);
input_sync(pen_input);
}
}
@ -1853,6 +1879,7 @@ static void wacom_map_usage(struct input_dev *input, struct hid_usage *usage,
int fmax = field->logical_maximum;
unsigned int equivalent_usage = wacom_equivalent_usage(usage->hid);
int resolution_code = code;
int resolution = hidinput_calc_abs_res(field, resolution_code);
if (equivalent_usage == HID_DG_TWIST) {
resolution_code = ABS_RZ;
@ -1875,8 +1902,15 @@ static void wacom_map_usage(struct input_dev *input, struct hid_usage *usage,
switch (type) {
case EV_ABS:
input_set_abs_params(input, code, fmin, fmax, fuzz, 0);
input_abs_set_res(input, code,
hidinput_calc_abs_res(field, resolution_code));
/* older tablet may miss physical usage */
if ((code == ABS_X || code == ABS_Y) && !resolution) {
resolution = WACOM_INTUOS_RES;
hid_warn(input,
"Wacom usage (%d) missing resolution \n",
code);
}
input_abs_set_res(input, code, resolution);
break;
case EV_KEY:
input_set_capability(input, EV_KEY, code);
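The timestamp handling added above spreads the time since the previous Bluetooth packet evenly over the valid frames in the current packet, so each frame gets an interpolated timestamp rather than the packet arrival time. A small plain-C illustration of that arithmetic with made-up numbers (nanosecond units, as with ktime_t); this is not driver code:

#include <stdint.h>
#include <stdio.h>

static void stamp_frames(int64_t packet_time, int64_t prev_packet_time, int valid)
{
        /* Interval per frame: elapsed time split across the valid frames. */
        int64_t interval = (packet_time - prev_packet_time) / valid;
        int i;

        /* Oldest frame first; the newest frame lands on packet_time itself. */
        for (i = 0; i < valid; i++)
                printf("frame %d -> %lld ns\n", i,
                       (long long)(packet_time - (int64_t)(valid - 1 - i) * interval));
}

int main(void)
{
        /* e.g. packets 15 ms apart carrying 3 valid frames -> 5 ms spacing */
        stamp_frames(15000000LL, 0, 3);
        return 0;
}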


@ -320,6 +320,7 @@ struct hid_data {
int bat_connected;
int ps_connected;
bool pad_input_event_flag;
int time_delayed;
};
struct wacom_remote_data {


@ -110,7 +110,7 @@ struct zynqmp_ipi_pdata {
unsigned int method;
u32 local_id;
int num_mboxes;
struct zynqmp_ipi_mbox *ipi_mboxes;
struct zynqmp_ipi_mbox ipi_mboxes[];
};
static struct device_driver zynqmp_ipi_mbox_driver = {
@ -634,8 +634,13 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
struct zynqmp_ipi_mbox *mbox;
int num_mboxes, ret = -EINVAL;
num_mboxes = of_get_child_count(np);
pdata = devm_kzalloc(dev, sizeof(*pdata) + (num_mboxes * sizeof(*mbox)),
num_mboxes = of_get_available_child_count(np);
if (num_mboxes == 0) {
dev_err(dev, "mailbox nodes not available\n");
return -EINVAL;
}
pdata = devm_kzalloc(dev, struct_size(pdata, ipi_mboxes, num_mboxes),
GFP_KERNEL);
if (!pdata)
return -ENOMEM;
@ -649,8 +654,6 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
}
pdata->num_mboxes = num_mboxes;
pdata->ipi_mboxes = (struct zynqmp_ipi_mbox *)
((char *)pdata + sizeof(*pdata));
mbox = pdata->ipi_mboxes;
for_each_available_child_of_node(np, nc) {
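The zynqmp-ipi change above is a common refactor: a trailing pointer that was carved out of the same allocation by hand becomes a flexible array member sized with struct_size(), which does the sizeof(*p) + n * sizeof(elem) arithmetic with overflow checking. A hedged sketch of the pattern with hypothetical types (not the driver's real structures):

#include <linux/overflow.h>
#include <linux/slab.h>
#include <linux/types.h>

struct demo_item {
        u32 id;
};

struct demo_box {
        int count;
        struct demo_item items[];       /* flexible array member */
};

static struct demo_box *demo_alloc(int n)
{
        struct demo_box *box;

        /* struct_size() computes sizeof(*box) + n * sizeof(box->items[0]),
         * checking for overflow. */
        box = kzalloc(struct_size(box, items, n), GFP_KERNEL);
        if (!box)
                return NULL;

        box->count = n;
        return box;
}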


@ -482,7 +482,7 @@ static int verity_verify_io(struct dm_verity_io *io)
sector_t cur_block = io->block + b;
struct ahash_request *req = verity_io_hash_req(v, io);
if (v->validated_blocks &&
if (v->validated_blocks && bio->bi_status == BLK_STS_OK &&
likely(test_bit(cur_block, v->validated_blocks))) {
verity_bv_skip_block(v, io, &io->iter);
continue;


@ -404,9 +404,9 @@ mt7530_pad_clk_setup(struct dsa_switch *ds, phy_interface_t interface)
case PHY_INTERFACE_MODE_TRGMII:
trgint = 1;
if (priv->id == ID_MT7621) {
/* PLL frequency: 150MHz: 1.2GBit */
/* PLL frequency: 125MHz: 1.0GBit */
if (xtal == HWTRAP_XTAL_40MHZ)
ncpo1 = 0x0780;
ncpo1 = 0x0640;
if (xtal == HWTRAP_XTAL_25MHZ)
ncpo1 = 0x0a00;
} else { /* PLL frequency: 250MHz: 2.0Gbit */


@ -4182,6 +4182,7 @@ static const struct mv88e6xxx_ops mv88e6321_ops = {
.set_cpu_port = mv88e6095_g1_set_cpu_port,
.set_egress_port = mv88e6095_g1_set_egress_port,
.watchdog_ops = &mv88e6390_watchdog_ops,
.mgmt_rsvd2cpu = mv88e6352_g2_mgmt_rsvd2cpu,
.reset = mv88e6352_g1_reset,
.vtu_getnext = mv88e6185_g1_vtu_getnext,
.vtu_loadpurge = mv88e6185_g1_vtu_loadpurge,


@ -1266,7 +1266,7 @@ static int enetc_psfp_parse_clsflower(struct enetc_ndev_priv *priv,
int index;
index = enetc_get_free_index(priv);
if (sfi->handle < 0) {
if (index < 0) {
NL_SET_ERR_MSG_MOD(extack, "No Stream Filter resource!");
err = -ENOSPC;
goto free_fmi;


@ -1589,11 +1589,20 @@ int otx2_open(struct net_device *netdev)
otx2_config_pause_frm(pf);
err = otx2_rxtx_enable(pf, true);
if (err)
/* If a mbox communication error happens at this point then interface
* will end up in a state such that it is in down state but hardware
* mcam entries are enabled to receive the packets. Hence disable the
* packet I/O.
*/
if (err == EIO)
goto err_disable_rxtx;
else if (err)
goto err_tx_stop_queues;
return 0;
err_disable_rxtx:
otx2_rxtx_enable(pf, false);
err_tx_stop_queues:
netif_tx_stop_all_queues(netdev);
netif_carrier_off(netdev);


@ -542,7 +542,7 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
err = otx2vf_realloc_msix_vectors(vf);
if (err)
goto err_mbox_destroy;
goto err_detach_rsrc;
err = otx2_set_real_num_queues(netdev, qcount, qcount);
if (err)

View File

@ -693,7 +693,7 @@ static int ionic_get_rxnfc(struct net_device *netdev,
info->data = lif->nxqs;
break;
default:
netdev_err(netdev, "Command parameter %d is not supported\n",
netdev_dbg(netdev, "Command parameter %d is not supported\n",
info->cmd);
err = -EOPNOTSUPP;
}


@ -974,12 +974,15 @@ static u32 efx_mcdi_phy_module_type(struct efx_nic *efx)
/* A QSFP+ NIC may actually have an SFP+ module attached.
* The ID is page 0, byte 0.
* QSFP28 is of type SFF_8636, however, this is treated
* the same by ethtool, so we can also treat them the same.
*/
switch (efx_mcdi_phy_get_module_eeprom_byte(efx, 0, 0)) {
case 0x3:
case 0x3: /* SFP */
return MC_CMD_MEDIA_SFP_PLUS;
case 0xc:
case 0xd:
case 0xc: /* QSFP */
case 0xd: /* QSFP+ */
case 0x11: /* QSFP28 */
return MC_CMD_MEDIA_QSFP_PLUS;
default:
return 0;
@ -1077,7 +1080,7 @@ int efx_mcdi_phy_get_module_info(struct efx_nic *efx, struct ethtool_modinfo *mo
case MC_CMD_MEDIA_QSFP_PLUS:
modinfo->type = ETH_MODULE_SFF_8436;
modinfo->eeprom_len = ETH_MODULE_SFF_8436_LEN;
modinfo->eeprom_len = ETH_MODULE_SFF_8436_MAX_LEN;
break;
default:


@ -2750,6 +2750,27 @@ static void free_receive_page_frags(struct virtnet_info *vi)
put_page(vi->rq[i].alloc_frag.page);
}
static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
{
if (!is_xdp_frame(buf))
dev_kfree_skb(buf);
else
xdp_return_frame(ptr_to_xdp(buf));
}
static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
{
struct virtnet_info *vi = vq->vdev->priv;
int i = vq2rxq(vq);
if (vi->mergeable_rx_bufs)
put_page(virt_to_head_page(buf));
else if (vi->big_packets)
give_pages(&vi->rq[i], buf);
else
put_page(virt_to_head_page(buf));
}
static void free_unused_bufs(struct virtnet_info *vi)
{
void *buf;
@ -2757,26 +2778,16 @@ static void free_unused_bufs(struct virtnet_info *vi)
for (i = 0; i < vi->max_queue_pairs; i++) {
struct virtqueue *vq = vi->sq[i].vq;
while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) {
if (!is_xdp_frame(buf))
dev_kfree_skb(buf);
else
xdp_return_frame(ptr_to_xdp(buf));
}
while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
virtnet_sq_free_unused_buf(vq, buf);
cond_resched();
}
for (i = 0; i < vi->max_queue_pairs; i++) {
struct virtqueue *vq = vi->rq[i].vq;
while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) {
if (vi->mergeable_rx_bufs) {
put_page(virt_to_head_page(buf));
} else if (vi->big_packets) {
give_pages(&vi->rq[i], buf);
} else {
put_page(virt_to_head_page(buf));
}
}
while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
virtnet_rq_free_unused_buf(vq, buf);
cond_resched();
}
}
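The virtio_net split above moves per-buffer cleanup into small helpers and inserts cond_resched() between queues so that draining many virtqueues cannot monopolise the CPU. The general shape of that drain loop, sketched with hypothetical helpers (not the driver's code):

#include <linux/sched.h>

struct demo_queue;                              /* hypothetical queue type */
void *demo_detach_unused_buf(struct demo_queue *q);
void demo_free_unused_buf(struct demo_queue *q, void *buf);

static void demo_drain_all(struct demo_queue **queues, int nr)
{
        int i;

        for (i = 0; i < nr; i++) {
                void *buf;

                while ((buf = demo_detach_unused_buf(queues[i])) != NULL)
                        demo_free_unused_buf(queues[i], buf);

                /* Yield between queues so a long drain does not stall the CPU. */
                cond_resched();
        }
}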


@ -327,6 +327,22 @@ static const struct ts_dmi_data dexp_ursus_7w_data = {
.properties = dexp_ursus_7w_props,
};
static const struct property_entry dexp_ursus_kx210i_props[] = {
PROPERTY_ENTRY_U32("touchscreen-min-x", 5),
PROPERTY_ENTRY_U32("touchscreen-min-y", 2),
PROPERTY_ENTRY_U32("touchscreen-size-x", 1720),
PROPERTY_ENTRY_U32("touchscreen-size-y", 1137),
PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-dexp-ursus-kx210i.fw"),
PROPERTY_ENTRY_U32("silead,max-fingers", 10),
PROPERTY_ENTRY_BOOL("silead,home-button"),
{ }
};
static const struct ts_dmi_data dexp_ursus_kx210i_data = {
.acpi_name = "MSSL1680:00",
.properties = dexp_ursus_kx210i_props,
};
static const struct property_entry digma_citi_e200_props[] = {
PROPERTY_ENTRY_U32("touchscreen-size-x", 1980),
PROPERTY_ENTRY_U32("touchscreen-size-y", 1500),
@ -381,6 +397,11 @@ static const struct ts_dmi_data glavey_tm800a550l_data = {
.properties = glavey_tm800a550l_props,
};
static const struct ts_dmi_data gdix1002_00_upside_down_data = {
.acpi_name = "GDIX1002:00",
.properties = gdix1001_upside_down_props,
};
static const struct property_entry gp_electronic_t701_props[] = {
PROPERTY_ENTRY_U32("touchscreen-size-x", 960),
PROPERTY_ENTRY_U32("touchscreen-size-y", 640),
@ -1118,6 +1139,14 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
DMI_MATCH(DMI_PRODUCT_NAME, "7W"),
},
},
{
/* DEXP Ursus KX210i */
.driver_data = (void *)&dexp_ursus_kx210i_data,
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "INSYDE Corp."),
DMI_MATCH(DMI_PRODUCT_NAME, "S107I"),
},
},
{
/* Digma Citi E200 */
.driver_data = (void *)&digma_citi_e200_data,
@ -1227,6 +1256,18 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
DMI_MATCH(DMI_BIOS_VERSION, "jumperx.T87.KFBNEEA"),
},
},
{
/* Juno Tablet */
.driver_data = (void *)&gdix1002_00_upside_down_data,
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Default string"),
/* Both product- and board-name being "Default string" is somewhat rare */
DMI_MATCH(DMI_PRODUCT_NAME, "Default string"),
DMI_MATCH(DMI_BOARD_NAME, "Default string"),
/* Above matches are too generic, add partial bios-version match */
DMI_MATCH(DMI_BIOS_VERSION, "JP2V1."),
},
},
{
/* Mediacom WinPad 7.0 W700 (same hw as Wintron surftab 7") */
.driver_data = (void *)&trekstor_surftab_wintron70_data,


@ -129,6 +129,7 @@ static int st_rproc_parse_fw(struct rproc *rproc, const struct firmware *fw)
while (of_phandle_iterator_next(&it) == 0) {
rmem = of_reserved_mem_lookup(it.node);
if (!rmem) {
of_node_put(it.node);
dev_err(dev, "unable to acquire memory-region\n");
return -EINVAL;
}
@ -150,8 +151,10 @@ static int st_rproc_parse_fw(struct rproc *rproc, const struct firmware *fw)
it.node->name);
}
if (!mem)
if (!mem) {
of_node_put(it.node);
return -ENOMEM;
}
rproc_add_carveout(rproc, mem);
index++;


@ -231,11 +231,13 @@ static int stm32_rproc_parse_memory_regions(struct rproc *rproc)
while (of_phandle_iterator_next(&it) == 0) {
rmem = of_reserved_mem_lookup(it.node);
if (!rmem) {
of_node_put(it.node);
dev_err(dev, "unable to acquire memory-region\n");
return -EINVAL;
}
if (stm32_rproc_pa_to_da(rproc, rmem->base, &da) < 0) {
of_node_put(it.node);
dev_err(dev, "memory region not valid %pa\n",
&rmem->base);
return -EINVAL;
@ -262,8 +264,10 @@ static int stm32_rproc_parse_memory_regions(struct rproc *rproc)
it.node->name);
}
if (!mem)
if (!mem) {
of_node_put(it.node);
return -ENOMEM;
}
rproc_add_carveout(rproc, mem);
index++;


@ -2456,6 +2456,9 @@ static void __qedi_remove(struct pci_dev *pdev, int mode)
qedi_ops->ll2->stop(qedi->cdev);
}
cancel_delayed_work_sync(&qedi->recovery_work);
cancel_delayed_work_sync(&qedi->board_disable_work);
qedi_free_iscsi_pf_param(qedi);
rval = qedi_ops->common->update_drv_state(qedi->cdev, false);


@ -1396,7 +1396,7 @@ void transport_init_se_cmd(
cmd->orig_fe_lun = unpacked_lun;
if (!(cmd->se_cmd_flags & SCF_USE_CPUID))
cmd->cpuid = smp_processor_id();
cmd->cpuid = raw_smp_processor_id();
cmd->state_active = false;
}


@ -50,6 +50,7 @@
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/gsmmux.h>
#include "tty.h"
static int debug;
module_param(debug, int, 0600);


@ -100,6 +100,7 @@
#include <asm/termios.h>
#include <linux/uaccess.h>
#include "tty.h"
/*
* Buffers for individual HDLC frames


@ -49,6 +49,7 @@
#include <linux/module.h>
#include <linux/ratelimit.h>
#include <linux/vmalloc.h>
#include "tty.h"
/*
* Until this number of characters is queued in the xmit buffer, select will


@ -29,6 +29,7 @@
#include <linux/file.h>
#include <linux/ioctl.h>
#include <linux/compat.h>
#include "tty.h"
#undef TTY_DEBUG_HANGUP
#ifdef TTY_DEBUG_HANGUP


@ -330,6 +330,13 @@ extern int serial8250_rx_dma(struct uart_8250_port *);
extern void serial8250_rx_dma_flush(struct uart_8250_port *);
extern int serial8250_request_dma(struct uart_8250_port *);
extern void serial8250_release_dma(struct uart_8250_port *);
static inline bool serial8250_tx_dma_running(struct uart_8250_port *p)
{
struct uart_8250_dma *dma = p->dma;
return dma && dma->tx_running;
}
#else
static inline int serial8250_tx_dma(struct uart_8250_port *p)
{
@ -345,6 +352,11 @@ static inline int serial8250_request_dma(struct uart_8250_port *p)
return -1;
}
static inline void serial8250_release_dma(struct uart_8250_port *p) { }
static inline bool serial8250_tx_dma_running(struct uart_8250_port *p)
{
return false;
}
#endif
static inline int ns16550a_goto_highspeed(struct uart_8250_port *up)


@ -1970,19 +1970,25 @@ static int serial8250_tx_threshold_handle_irq(struct uart_port *port)
static unsigned int serial8250_tx_empty(struct uart_port *port)
{
struct uart_8250_port *up = up_to_u8250p(port);
unsigned int result = 0;
unsigned long flags;
unsigned int lsr;
serial8250_rpm_get(up);
spin_lock_irqsave(&port->lock, flags);
if (!serial8250_tx_dma_running(up)) {
lsr = serial_port_in(port, UART_LSR);
up->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS;
if ((lsr & BOTH_EMPTY) == BOTH_EMPTY)
result = TIOCSER_TEMT;
}
spin_unlock_irqrestore(&port->lock, flags);
serial8250_rpm_put(up);
return (lsr & BOTH_EMPTY) == BOTH_EMPTY ? TIOCSER_TEMT : 0;
return result;
}
unsigned int serial8250_do_get_mctrl(struct uart_port *port)

drivers/tty/tty.h (new file, 117 lines)

@ -0,0 +1,117 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* TTY core internal functions
*/
#ifndef _TTY_INTERNAL_H
#define _TTY_INTERNAL_H
#define tty_msg(fn, tty, f, ...) \
fn("%s %s: " f, tty_driver_name(tty), tty_name(tty), ##__VA_ARGS__)
#define tty_debug(tty, f, ...) tty_msg(pr_debug, tty, f, ##__VA_ARGS__)
#define tty_info(tty, f, ...) tty_msg(pr_info, tty, f, ##__VA_ARGS__)
#define tty_notice(tty, f, ...) tty_msg(pr_notice, tty, f, ##__VA_ARGS__)
#define tty_warn(tty, f, ...) tty_msg(pr_warn, tty, f, ##__VA_ARGS__)
#define tty_err(tty, f, ...) tty_msg(pr_err, tty, f, ##__VA_ARGS__)
#define tty_info_ratelimited(tty, f, ...) \
tty_msg(pr_info_ratelimited, tty, f, ##__VA_ARGS__)
/*
* Lock subclasses for tty locks
*
* TTY_LOCK_NORMAL is for normal ttys and master ptys.
* TTY_LOCK_SLAVE is for slave ptys only.
*
* Lock subclasses are necessary for handling nested locking with pty pairs.
* tty locks which use nested locking:
*
* legacy_mutex - Nested tty locks are necessary for releasing pty pairs.
* The stable lock order is master pty first, then slave pty.
* termios_rwsem - The stable lock order is tty_buffer lock->termios_rwsem.
* Subclassing this lock enables the slave pty to hold its
* termios_rwsem when claiming the master tty_buffer lock.
* tty_buffer lock - slave ptys can claim nested buffer lock when handling
* signal chars. The stable lock order is slave pty, then
* master.
*/
enum {
TTY_LOCK_NORMAL = 0,
TTY_LOCK_SLAVE,
};
/* Values for tty->flow_change */
#define TTY_THROTTLE_SAFE 1
#define TTY_UNTHROTTLE_SAFE 2
static inline void __tty_set_flow_change(struct tty_struct *tty, int val)
{
tty->flow_change = val;
}
static inline void tty_set_flow_change(struct tty_struct *tty, int val)
{
tty->flow_change = val;
smp_mb();
}
int tty_ldisc_lock(struct tty_struct *tty, unsigned long timeout);
void tty_ldisc_unlock(struct tty_struct *tty);
int __tty_check_change(struct tty_struct *tty, int sig);
int tty_check_change(struct tty_struct *tty);
void __stop_tty(struct tty_struct *tty);
void __start_tty(struct tty_struct *tty);
void tty_write_unlock(struct tty_struct *tty);
int tty_write_lock(struct tty_struct *tty, int ndelay);
void tty_vhangup_session(struct tty_struct *tty);
void tty_open_proc_set_tty(struct file *filp, struct tty_struct *tty);
int tty_signal_session_leader(struct tty_struct *tty, int exit_session);
void session_clear_tty(struct pid *session);
void tty_buffer_free_all(struct tty_port *port);
void tty_buffer_flush(struct tty_struct *tty, struct tty_ldisc *ld);
void tty_buffer_init(struct tty_port *port);
void tty_buffer_set_lock_subclass(struct tty_port *port);
bool tty_buffer_restart_work(struct tty_port *port);
bool tty_buffer_cancel_work(struct tty_port *port);
void tty_buffer_flush_work(struct tty_port *port);
speed_t tty_termios_input_baud_rate(struct ktermios *termios);
void tty_ldisc_hangup(struct tty_struct *tty, bool reset);
int tty_ldisc_reinit(struct tty_struct *tty, int disc);
long tty_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
long tty_jobctrl_ioctl(struct tty_struct *tty, struct tty_struct *real_tty,
struct file *file, unsigned int cmd, unsigned long arg);
void tty_default_fops(struct file_operations *fops);
struct tty_struct *alloc_tty_struct(struct tty_driver *driver, int idx);
int tty_alloc_file(struct file *file);
void tty_add_file(struct tty_struct *tty, struct file *file);
void tty_free_file(struct file *file);
int tty_release(struct inode *inode, struct file *filp);
#define tty_is_writelocked(tty) (mutex_is_locked(&tty->atomic_write_lock))
int tty_ldisc_setup(struct tty_struct *tty, struct tty_struct *o_tty);
void tty_ldisc_release(struct tty_struct *tty);
int __must_check tty_ldisc_init(struct tty_struct *tty);
void tty_ldisc_deinit(struct tty_struct *tty);
void tty_sysctl_init(void);
/* tty_audit.c */
#ifdef CONFIG_AUDIT
void tty_audit_add_data(struct tty_struct *tty, const void *data, size_t size);
void tty_audit_tiocsti(struct tty_struct *tty, char ch);
#else
static inline void tty_audit_add_data(struct tty_struct *tty, const void *data,
size_t size)
{
}
static inline void tty_audit_tiocsti(struct tty_struct *tty, char ch)
{
}
#endif
ssize_t redirected_tty_write(struct kiocb *, struct iov_iter *);
#endif
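The tty_msg() helpers collected in this new internal header prefix every message with the driver and tty names before handing it to the matching pr_*() printer. A hedged illustration of what a call expands to; the function and message text are invented for the example:

/* tty_warn(tty, "dropped %d bytes\n", dropped) goes through
 * tty_msg(pr_warn, tty, ...) and ends up roughly as: */
static void example_report_drop(struct tty_struct *tty, int dropped)
{
        pr_warn("%s %s: dropped %d bytes\n",
                tty_driver_name(tty), tty_name(tty), dropped);
}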


@ -10,6 +10,7 @@
#include <linux/audit.h>
#include <linux/slab.h>
#include <linux/tty.h>
#include "tty.h"
struct tty_audit_buf {
struct mutex mutex; /* Protects all data below */


@ -8,6 +8,7 @@
#include <linux/termios.h>
#include <linux/tty.h>
#include <linux/export.h>
#include "tty.h"
/*


@ -17,7 +17,7 @@
#include <linux/delay.h>
#include <linux/module.h>
#include <linux/ratelimit.h>
#include "tty.h"
#define MIN_TTYB_SIZE 256
#define TTYB_ALIGN_MASK 255


@ -108,6 +108,7 @@
#include <linux/kmod.h>
#include <linux/nsproxy.h>
#include "tty.h"
#undef TTY_DEBUG_HANGUP
#ifdef TTY_DEBUG_HANGUP
@ -941,13 +942,13 @@ static ssize_t tty_read(struct kiocb *iocb, struct iov_iter *to)
return i;
}
static void tty_write_unlock(struct tty_struct *tty)
void tty_write_unlock(struct tty_struct *tty)
{
mutex_unlock(&tty->atomic_write_lock);
wake_up_interruptible_poll(&tty->write_wait, EPOLLOUT);
}
static int tty_write_lock(struct tty_struct *tty, int ndelay)
int tty_write_lock(struct tty_struct *tty, int ndelay)
{
if (!mutex_trylock(&tty->atomic_write_lock)) {
if (ndelay)


@ -21,6 +21,7 @@
#include <linux/bitops.h>
#include <linux/mutex.h>
#include <linux/compat.h>
#include "tty.h"
#include <asm/io.h>
#include <linux/uaccess.h>
@ -397,22 +398,43 @@ static int set_termios(struct tty_struct *tty, void __user *arg, int opt)
tmp_termios.c_ispeed = tty_termios_input_baud_rate(&tmp_termios);
tmp_termios.c_ospeed = tty_termios_baud_rate(&tmp_termios);
ld = tty_ldisc_ref(tty);
if (opt & (TERMIOS_FLUSH|TERMIOS_WAIT)) {
retry_write_wait:
retval = wait_event_interruptible(tty->write_wait, !tty_chars_in_buffer(tty));
if (retval < 0)
return retval;
if (tty_write_lock(tty, 0) < 0)
goto retry_write_wait;
/* Racing writer? */
if (tty_chars_in_buffer(tty)) {
tty_write_unlock(tty);
goto retry_write_wait;
}
ld = tty_ldisc_ref(tty);
if (ld != NULL) {
if ((opt & TERMIOS_FLUSH) && ld->ops->flush_buffer)
ld->ops->flush_buffer(tty);
tty_ldisc_deref(ld);
}
if (opt & TERMIOS_WAIT) {
tty_wait_until_sent(tty, 0);
if (signal_pending(current))
if ((opt & TERMIOS_WAIT) && tty->ops->wait_until_sent) {
tty->ops->wait_until_sent(tty, 0);
if (signal_pending(current)) {
tty_write_unlock(tty);
return -ERESTARTSYS;
}
}
tty_set_termios(tty, &tmp_termios);
tty_write_unlock(tty);
} else {
tty_set_termios(tty, &tmp_termios);
}
/* FIXME: Arguably if tmp_termios == tty->termios AND the
actual requested termios was not tmp_termios then we may
want to return an error as no user requested change has
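For the TCSADRAIN/TCSAFLUSH cases the hunk above now waits for the write buffer to drain, takes the atomic write lock without sleeping on it, and re-checks for a writer that raced in before the termios change is applied. A condensed sketch of that wait/trylock/recheck loop; the wrapper name is hypothetical, the tty helpers are the ones declared in the internal tty.h shown earlier:

/* Returns 0 with the write lock held and the buffer empty, or a negative
 * errno if interrupted by a signal. */
static int example_drain_and_lock(struct tty_struct *tty)
{
        int ret;

retry:
        ret = wait_event_interruptible(tty->write_wait,
                                       !tty_chars_in_buffer(tty));
        if (ret < 0)
                return ret;                     /* signal arrived */

        if (tty_write_lock(tty, 0) < 0)
                goto retry;                     /* lock contended, wait again */

        if (tty_chars_in_buffer(tty)) {         /* writer slipped in */
                tty_write_unlock(tty);
                goto retry;
        }
        return 0;
}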


@ -11,6 +11,7 @@
#include <linux/tty.h>
#include <linux/fcntl.h>
#include <linux/uaccess.h>
#include "tty.h"
static int is_ignored(int sig)
{


@ -19,6 +19,7 @@
#include <linux/seq_file.h>
#include <linux/uaccess.h>
#include <linux/ratelimit.h>
#include "tty.h"
#undef LDISC_DEBUG_HANGUP


@ -4,6 +4,7 @@
#include <linux/kallsyms.h>
#include <linux/semaphore.h>
#include <linux/sched.h>
#include "tty.h"
/* Legacy tty mutex glue */


@ -18,6 +18,7 @@
#include <linux/delay.h>
#include <linux/module.h>
#include <linux/serdev.h>
#include "tty.h"
static int tty_port_default_receive_buf(struct tty_port *port,
const unsigned char *p,


@ -638,7 +638,7 @@ static int dw_wdt_drv_probe(struct platform_device *pdev)
ret = dw_wdt_init_timeouts(dw_wdt, dev);
if (ret)
goto out_disable_clk;
goto out_assert_rst;
wdd = &dw_wdt->wdd;
wdd->ops = &dw_wdt_ops;
@ -669,12 +669,15 @@ static int dw_wdt_drv_probe(struct platform_device *pdev)
ret = watchdog_register_device(wdd);
if (ret)
goto out_disable_pclk;
goto out_assert_rst;
dw_wdt_dbgfs_init(dw_wdt);
return 0;
out_assert_rst:
reset_control_assert(dw_wdt->rst);
out_disable_pclk:
clk_disable_unprepare(dw_wdt->pclk);


@ -121,7 +121,8 @@ static u64 block_rsv_release_bytes(struct btrfs_fs_info *fs_info,
} else {
num_bytes = 0;
}
if (block_rsv->qgroup_rsv_reserved >= block_rsv->qgroup_rsv_size) {
if (qgroup_to_release_ret &&
block_rsv->qgroup_rsv_reserved >= block_rsv->qgroup_rsv_size) {
qgroup_to_release = block_rsv->qgroup_rsv_reserved -
block_rsv->qgroup_rsv_size;
block_rsv->qgroup_rsv_reserved = block_rsv->qgroup_rsv_size;


@ -5160,10 +5160,12 @@ int btrfs_del_items(struct btrfs_trans_handle *trans, struct btrfs_root *root,
int btrfs_prev_leaf(struct btrfs_root *root, struct btrfs_path *path)
{
struct btrfs_key key;
struct btrfs_key orig_key;
struct btrfs_disk_key found_key;
int ret;
btrfs_item_key_to_cpu(path->nodes[0], &key, 0);
orig_key = key;
if (key.offset > 0) {
key.offset--;
@ -5180,8 +5182,36 @@ int btrfs_prev_leaf(struct btrfs_root *root, struct btrfs_path *path)
btrfs_release_path(path);
ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
if (ret < 0)
if (ret <= 0)
return ret;
/*
* Previous key not found. Even if we were at slot 0 of the leaf we had
* before releasing the path and calling btrfs_search_slot(), we now may
* be in a slot pointing to the same original key - this can happen if
* after we released the path, one or more items were moved from a
* sibling leaf into the front of the leaf we had due to an insertion
* (see push_leaf_right()).
* If we hit this case and our slot is > 0, just decrement the slot
* so that the caller does not process the same key again, which may or
* may not break the caller, depending on its logic.
*/
if (path->slots[0] < btrfs_header_nritems(path->nodes[0])) {
btrfs_item_key(path->nodes[0], &found_key, path->slots[0]);
ret = comp_keys(&found_key, &orig_key);
if (ret == 0) {
if (path->slots[0] > 0) {
path->slots[0]--;
return 0;
}
/*
* At slot 0, same key as before, it means orig_key is
* the lowest, leftmost, key in the tree. We're done.
*/
return 1;
}
}
btrfs_item_key(path->nodes[0], &found_key, 0);
ret = comp_keys(&found_key, &key);
/*


@ -147,10 +147,10 @@ static void print_extent_item(struct extent_buffer *eb, int slot, int type)
pr_cont("shared data backref parent %llu count %u\n",
offset, btrfs_shared_data_ref_count(eb, sref));
/*
* offset is supposed to be a tree block which
* must be aligned to nodesize.
* Offset is supposed to be a tree block which must be
* aligned to sectorsize.
*/
if (!IS_ALIGNED(offset, eb->fs_info->nodesize))
if (!IS_ALIGNED(offset, eb->fs_info->sectorsize))
pr_info(
"\t\t\t(parent %llu not aligned to sectorsize %u)\n",
offset, eb->fs_info->sectorsize);


@ -1784,7 +1784,7 @@ smb2_copychunk_range(const unsigned int xid,
pcchunk->SourceOffset = cpu_to_le64(src_off);
pcchunk->TargetOffset = cpu_to_le64(dest_off);
pcchunk->Length =
cpu_to_le32(min_t(u32, len, tcon->max_bytes_chunk));
cpu_to_le32(min_t(u64, len, tcon->max_bytes_chunk));
/* Request server copy to target from src identified by key */
kfree(retbuf);
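The one-line change above widens the min_t() comparison so a copy length larger than 4 GiB is clamped before, not after, truncation to 32 bits. A worked example with hypothetical values:

/* len = 0x100000000 (4 GiB), tcon->max_bytes_chunk = 0x100000 (1 MiB)
 *   min_t(u32, len, max): len is cast to u32 first and becomes 0, so the
 *                         chunk Length field is set to 0
 *   min_t(u64, len, max): yields 1 MiB, which then fits the __le32 field */
static __le32 example_chunk_length(u64 len, u32 max_bytes_chunk)
{
        return cpu_to_le32(min_t(u64, len, max_bytes_chunk));
}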


@ -303,6 +303,22 @@ struct ext4_group_desc * ext4_get_group_desc(struct super_block *sb,
return desc;
}
static ext4_fsblk_t ext4_valid_block_bitmap_padding(struct super_block *sb,
ext4_group_t block_group,
struct buffer_head *bh)
{
ext4_grpblk_t next_zero_bit;
unsigned long bitmap_size = sb->s_blocksize * 8;
unsigned int offset = num_clusters_in_group(sb, block_group);
if (bitmap_size <= offset)
return 0;
next_zero_bit = ext4_find_next_zero_bit(bh->b_data, bitmap_size, offset);
return (next_zero_bit < bitmap_size ? next_zero_bit : 0);
}
/*
* Return the block number which was discovered to be invalid, or 0 if
* the block bitmap is valid.
@ -401,6 +417,15 @@ static int ext4_validate_block_bitmap(struct super_block *sb,
EXT4_GROUP_INFO_BBITMAP_CORRUPT);
return -EFSCORRUPTED;
}
blk = ext4_valid_block_bitmap_padding(sb, block_group, bh);
if (unlikely(blk != 0)) {
ext4_unlock_group(sb, block_group);
ext4_error(sb, "bg %u: block %llu: padding at end of block bitmap is not set",
block_group, blk);
ext4_mark_group_bitmap_corrupted(sb, block_group,
EXT4_GROUP_INFO_BBITMAP_CORRUPT);
return -EFSCORRUPTED;
}
set_buffer_verified(bh);
verified:
ext4_unlock_group(sb, block_group);
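ext4_valid_block_bitmap_padding() added above requires every bit between the group's last real cluster and the end of the bitmap block to be set. A simplified sketch of that rule, assuming hypothetical geometry such as a 32768-bit bitmap with 32000 clusters in the group:

/* Bits [clusters_in_group, bits_per_block) are padding and must read as 1;
 * a zero bit in that range means the on-disk bitmap is corrupted. */
static bool example_padding_ok(const unsigned long *bitmap,
                               unsigned int bits_per_block,
                               unsigned int clusters_in_group)
{
        if (bits_per_block <= clusters_in_group)
                return true;            /* no padding bits to check */
        return find_next_zero_bit(bitmap, bits_per_block, clusters_in_group)
                >= bits_per_block;
}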


@ -269,15 +269,13 @@ static void __es_find_extent_range(struct inode *inode,
/* see if the extent has been cached */
es->es_lblk = es->es_len = es->es_pblk = 0;
if (tree->cache_es) {
es1 = tree->cache_es;
if (in_range(lblk, es1->es_lblk, es1->es_len)) {
es1 = READ_ONCE(tree->cache_es);
if (es1 && in_range(lblk, es1->es_lblk, es1->es_len)) {
es_debug("%u cached by [%u/%u) %llu %x\n",
lblk, es1->es_lblk, es1->es_len,
ext4_es_pblock(es1), ext4_es_status(es1));
goto out;
}
}
es1 = __es_tree_search(&tree->root, lblk);
@ -295,7 +293,7 @@ static void __es_find_extent_range(struct inode *inode,
}
if (es1 && matching_fn(es1)) {
tree->cache_es = es1;
WRITE_ONCE(tree->cache_es, es1);
es->es_lblk = es1->es_lblk;
es->es_len = es1->es_len;
es->es_pblk = es1->es_pblk;
@ -934,15 +932,13 @@ int ext4_es_lookup_extent(struct inode *inode, ext4_lblk_t lblk,
/* find extent in cache firstly */
es->es_lblk = es->es_len = es->es_pblk = 0;
if (tree->cache_es) {
es1 = tree->cache_es;
if (in_range(lblk, es1->es_lblk, es1->es_len)) {
es1 = READ_ONCE(tree->cache_es);
if (es1 && in_range(lblk, es1->es_lblk, es1->es_len)) {
es_debug("%u cached by [%u/%u)\n",
lblk, es1->es_lblk, es1->es_len);
found = 1;
goto out;
}
}
node = tree->root.rb_node;
while (node) {
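Both hunks above pair READ_ONCE() on the lockless read of the cached-extent hint with WRITE_ONCE() on its update, and tolerate a NULL hint. A generic sketch of the pattern with made-up types, not ext4's real structures:

/* A hot-path hint that another context may overwrite concurrently. */
struct example_node {
        u64 start;
        u64 len;
};

struct example_cache {
        struct example_node *hint;      /* may be NULL or stale */
};

static struct example_node *example_peek(struct example_cache *c, u64 key)
{
        struct example_node *n = READ_ONCE(c->hint);    /* sample exactly once */

        if (n && key >= n->start && key - n->start < n->len)
                return n;
        return NULL;                    /* caller falls back to the full lookup */
}

static void example_publish(struct example_cache *c, struct example_node *n)
{
        WRITE_ONCE(c->hint, n);         /* single, non-torn pointer store */
}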


@ -33,6 +33,7 @@ static int get_max_inline_xattr_value_size(struct inode *inode,
struct ext4_xattr_ibody_header *header;
struct ext4_xattr_entry *entry;
struct ext4_inode *raw_inode;
void *end;
int free, min_offs;
if (!EXT4_INODE_HAS_XATTR_SPACE(inode))
@ -56,14 +57,23 @@ static int get_max_inline_xattr_value_size(struct inode *inode,
raw_inode = ext4_raw_inode(iloc);
header = IHDR(inode, raw_inode);
entry = IFIRST(header);
end = (void *)raw_inode + EXT4_SB(inode->i_sb)->s_inode_size;
/* Compute min_offs. */
for (; !IS_LAST_ENTRY(entry); entry = EXT4_XATTR_NEXT(entry)) {
while (!IS_LAST_ENTRY(entry)) {
void *next = EXT4_XATTR_NEXT(entry);
if (next >= end) {
EXT4_ERROR_INODE(inode,
"corrupt xattr in inline inode");
return 0;
}
if (!entry->e_value_inum && entry->e_value_size) {
size_t offs = le16_to_cpu(entry->e_value_offs);
if (offs < min_offs)
min_offs = offs;
}
entry = next;
}
free = min_offs -
((void *)entry - (void *)IFIRST(header)) - sizeof(__u32);
@ -349,7 +359,7 @@ static int ext4_update_inline_data(handle_t *handle, struct inode *inode,
error = ext4_xattr_ibody_get(inode, i.name_index, i.name,
value, len);
if (error == -ENODATA)
if (error < 0)
goto out;
BUFFER_TRACE(is.iloc.bh, "get_write_access");
@ -1186,6 +1196,7 @@ static int ext4_finish_convert_inline_dir(handle_t *handle,
ext4_initialize_dirent_tail(dir_block,
inode->i_sb->s_blocksize);
set_buffer_uptodate(dir_block);
unlock_buffer(dir_block);
err = ext4_handle_dirty_dirblock(handle, inode, dir_block);
if (err)
return err;
@ -1259,6 +1270,7 @@ static int ext4_convert_inline_data_nolock(handle_t *handle,
if (!S_ISDIR(inode->i_mode)) {
memcpy(data_bh->b_data, buf, inline_size);
set_buffer_uptodate(data_bh);
unlock_buffer(data_bh);
error = ext4_handle_dirty_metadata(handle,
inode, data_bh);
} else {
@ -1266,7 +1278,6 @@ static int ext4_convert_inline_data_nolock(handle_t *handle,
buf, inline_size);
}
unlock_buffer(data_bh);
out_restore:
if (error)
ext4_restore_inline_data(handle, inode, iloc, buf, inline_size);


@ -3596,7 +3596,7 @@ static int ext4_iomap_overwrite_begin(struct inode *inode, loff_t offset,
*/
flags &= ~IOMAP_WRITE;
ret = ext4_iomap_begin(inode, offset, length, flags, iomap, srcmap);
WARN_ON_ONCE(iomap->type != IOMAP_MAPPED);
WARN_ON_ONCE(!ret && iomap->type != IOMAP_MAPPED);
return ret;
}


@ -4250,7 +4250,11 @@ ext4_mb_release_group_pa(struct ext4_buddy *e4b,
trace_ext4_mb_release_group_pa(sb, pa);
BUG_ON(pa->pa_deleted == 0);
ext4_get_group_no_and_offset(sb, pa->pa_pstart, &group, &bit);
BUG_ON(group != e4b->bd_group && pa->pa_len != 0);
if (unlikely(group != e4b->bd_group && pa->pa_len != 0)) {
ext4_warning(sb, "bad group: expected %u, group %u, pa_start %llu",
e4b->bd_group, group, pa->pa_pstart);
return 0;
}
mb_free_blocks(pa->pa_inode, e4b, bit, pa->pa_len);
atomic_add(pa->pa_len, &EXT4_SB(sb)->s_mb_discarded);
trace_ext4_mballoc_discard(sb, NULL, group, bit, pa->pa_len);


@ -6017,9 +6017,6 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
}
#ifdef CONFIG_QUOTA
/* Release old quota file names */
for (i = 0; i < EXT4_MAXQUOTAS; i++)
kfree(old_opts.s_qf_names[i]);
if (enable_quota) {
if (sb_any_quota_suspended(sb))
dquot_resume(sb, -1);
@ -6029,6 +6026,9 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
goto restore_opts;
}
}
/* Release old quota file names */
for (i = 0; i < EXT4_MAXQUOTAS; i++)
kfree(old_opts.s_qf_names[i]);
#endif
if (!test_opt(sb, BLOCK_VALIDITY) && sbi->s_system_blks)
ext4_release_system_zone(sb);
@ -6048,6 +6048,13 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
return 0;
restore_opts:
/*
* If there was a failing r/w to ro transition, we may need to
* re-enable quota
*/
if ((sb->s_flags & SB_RDONLY) && !(old_sb_flags & SB_RDONLY) &&
sb_any_quota_suspended(sb))
dquot_resume(sb, -1);
sb->s_flags = old_sb_flags;
sbi->s_mount_opt = old_opts.s_mount_opt;
sbi->s_mount_opt2 = old_opts.s_mount_opt2;


@ -973,12 +973,20 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
goto out;
}
/*
* Copied from ext4_rename: we need to protect against old.inode
* directory getting converted from inline directory format into
* a normal one.
*/
if (S_ISDIR(old_inode->i_mode))
inode_lock_nested(old_inode, I_MUTEX_NONDIR2);
err = -ENOENT;
old_entry = f2fs_find_entry(old_dir, &old_dentry->d_name, &old_page);
if (!old_entry) {
if (IS_ERR(old_page))
err = PTR_ERR(old_page);
goto out;
goto out_unlock_old;
}
if (S_ISDIR(old_inode->i_mode)) {
@ -1086,6 +1094,9 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
f2fs_unlock_op(sbi);
if (S_ISDIR(old_inode->i_mode))
inode_unlock(old_inode);
if (IS_DIRSYNC(old_dir) || IS_DIRSYNC(new_dir))
f2fs_sync_fs(sbi->sb, 1);
@ -1100,6 +1111,9 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
f2fs_put_page(old_dir_page, 0);
out_old:
f2fs_put_page(old_page, 0);
out_unlock_old:
if (S_ISDIR(old_inode->i_mode))
inode_unlock(old_inode);
out:
if (whiteout)
iput(whiteout);


@ -700,7 +700,7 @@ void wbc_detach_inode(struct writeback_control *wbc)
* is okay. The main goal is avoiding keeping an inode on
* the wrong wb for an extended period of time.
*/
if (hweight32(history) > WB_FRN_HIST_THR_SLOTS)
if (hweight16(history) > WB_FRN_HIST_THR_SLOTS)
inode_switch_wbs(inode, max_id);
}


@ -64,7 +64,7 @@ int inotify_handle_inode_event(struct fsnotify_mark *inode_mark, u32 mask,
struct fsnotify_event *fsn_event;
struct fsnotify_group *group = inode_mark->group;
int ret;
int len = 0;
int len = 0, wd;
int alloc_len = sizeof(struct inotify_event_info);
struct mem_cgroup *old_memcg;
@ -79,6 +79,13 @@ int inotify_handle_inode_event(struct fsnotify_mark *inode_mark, u32 mask,
i_mark = container_of(inode_mark, struct inotify_inode_mark,
fsn_mark);
/*
* We can be racing with mark being detached. Don't report event with
* invalid wd.
*/
wd = READ_ONCE(i_mark->wd);
if (wd == -1)
return 0;
/*
* Whoever is interested in the event, pays for the allocation. Do not
* trigger OOM killer in the target monitoring memcg as it may have
@ -109,7 +116,7 @@ int inotify_handle_inode_event(struct fsnotify_mark *inode_mark, u32 mask,
fsn_event = &event->fse;
fsnotify_init_event(fsn_event, 0);
event->mask = mask;
event->wd = i_mark->wd;
event->wd = wd;
event->sync_cookie = cookie;
event->name_len = len;
if (len)


@ -627,4 +627,23 @@ static inline void print_hex_dump_debug(const char *prefix_str, int prefix_type,
#define print_hex_dump_bytes(prefix_str, prefix_type, buf, len) \
print_hex_dump_debug(prefix_str, prefix_type, 16, 1, buf, len, true)
#ifdef CONFIG_PRINTK
extern void __printk_safe_enter(void);
extern void __printk_safe_exit(void);
/*
* The printk_deferred_enter/exit macros are available only as a hack for
* some code paths that need to defer all printk console printing. Interrupts
* must be disabled for the deferred duration.
*/
#define printk_deferred_enter __printk_safe_enter
#define printk_deferred_exit __printk_safe_exit
#else
static inline void printk_deferred_enter(void)
{
}
static inline void printk_deferred_exit(void)
{
}
#endif
#endif
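The bracket added above defers console printing for a short critical section; as the comment says, interrupts must stay disabled for the whole deferred window. A minimal usage sketch with a hypothetical caller, mirroring how mm/page_alloc.c uses it later in this series:

static void example_touch_shared_state(void)
{
        unsigned long flags;

        local_irq_save(flags);          /* required while printing is deferred */
        printk_deferred_enter();        /* console output queued, not printed */

        /* ... update state that a printk() path might also need ... */

        printk_deferred_exit();
        local_irq_restore(flags);
}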


@ -17,30 +17,6 @@
#include <linux/android_kabi.h>
/*
* Lock subclasses for tty locks
*
* TTY_LOCK_NORMAL is for normal ttys and master ptys.
* TTY_LOCK_SLAVE is for slave ptys only.
*
* Lock subclasses are necessary for handling nested locking with pty pairs.
* tty locks which use nested locking:
*
* legacy_mutex - Nested tty locks are necessary for releasing pty pairs.
* The stable lock order is master pty first, then slave pty.
* termios_rwsem - The stable lock order is tty_buffer lock->termios_rwsem.
* Subclassing this lock enables the slave pty to hold its
* termios_rwsem when claiming the master tty_buffer lock.
* tty_buffer lock - slave ptys can claim nested buffer lock when handling
* signal chars. The stable lock order is slave pty, then
* master.
*/
enum {
TTY_LOCK_NORMAL = 0,
TTY_LOCK_SLAVE,
};
/*
* (Note: the *_driver.minor_start values 1, 64, 128, 192 are
* hardcoded at present.)
@ -386,21 +362,6 @@ struct tty_file_private {
#define TTY_LDISC_CHANGING 20 /* Change pending - non-block IO */
#define TTY_LDISC_HALTED 22 /* Line discipline is halted */
/* Values for tty->flow_change */
#define TTY_THROTTLE_SAFE 1
#define TTY_UNTHROTTLE_SAFE 2
static inline void __tty_set_flow_change(struct tty_struct *tty, int val)
{
tty->flow_change = val;
}
static inline void tty_set_flow_change(struct tty_struct *tty, int val)
{
tty->flow_change = val;
smp_mb();
}
static inline bool tty_io_nonblock(struct tty_struct *tty, struct file *file)
{
return file->f_flags & O_NONBLOCK ||
@ -431,9 +392,6 @@ extern const char *tty_name(const struct tty_struct *tty);
extern struct tty_struct *tty_kopen(dev_t device);
extern void tty_kclose(struct tty_struct *tty);
extern int tty_dev_name_to_number(const char *name, dev_t *number);
extern int tty_ldisc_lock(struct tty_struct *tty, unsigned long timeout);
extern void tty_ldisc_unlock(struct tty_struct *tty);
extern ssize_t redirected_tty_write(struct kiocb *, struct iov_iter *);
#else
static inline void tty_kref_put(struct tty_struct *tty)
{ }
@ -486,11 +444,7 @@ static inline struct tty_struct *tty_kref_get(struct tty_struct *tty)
extern const char *tty_driver_name(const struct tty_struct *tty);
extern void tty_wait_until_sent(struct tty_struct *tty, long timeout);
extern int __tty_check_change(struct tty_struct *tty, int sig);
extern int tty_check_change(struct tty_struct *tty);
extern void __stop_tty(struct tty_struct *tty);
extern void stop_tty(struct tty_struct *tty);
extern void __start_tty(struct tty_struct *tty);
extern void start_tty(struct tty_struct *tty);
extern int tty_register_driver(struct tty_driver *driver);
extern int tty_unregister_driver(struct tty_driver *driver);
@ -515,23 +469,11 @@ extern int tty_do_resize(struct tty_struct *tty, struct winsize *ws);
extern int is_current_pgrp_orphaned(void);
extern void tty_hangup(struct tty_struct *tty);
extern void tty_vhangup(struct tty_struct *tty);
extern void tty_vhangup_session(struct tty_struct *tty);
extern int tty_hung_up_p(struct file *filp);
extern void do_SAK(struct tty_struct *tty);
extern void __do_SAK(struct tty_struct *tty);
extern void tty_open_proc_set_tty(struct file *filp, struct tty_struct *tty);
extern int tty_signal_session_leader(struct tty_struct *tty, int exit_session);
extern void session_clear_tty(struct pid *session);
extern void no_tty(void);
extern void tty_buffer_free_all(struct tty_port *port);
extern void tty_buffer_flush(struct tty_struct *tty, struct tty_ldisc *ld);
extern void tty_buffer_init(struct tty_port *port);
extern void tty_buffer_set_lock_subclass(struct tty_port *port);
extern bool tty_buffer_restart_work(struct tty_port *port);
extern bool tty_buffer_cancel_work(struct tty_port *port);
extern void tty_buffer_flush_work(struct tty_port *port);
extern speed_t tty_termios_baud_rate(struct ktermios *termios);
extern speed_t tty_termios_input_baud_rate(struct ktermios *termios);
extern void tty_termios_encode_baud_rate(struct ktermios *termios,
speed_t ibaud, speed_t obaud);
extern void tty_encode_baud_rate(struct tty_struct *tty,
@ -559,27 +501,16 @@ extern int tty_set_termios(struct tty_struct *tty, struct ktermios *kt);
extern struct tty_ldisc *tty_ldisc_ref(struct tty_struct *);
extern void tty_ldisc_deref(struct tty_ldisc *);
extern struct tty_ldisc *tty_ldisc_ref_wait(struct tty_struct *);
extern void tty_ldisc_hangup(struct tty_struct *tty, bool reset);
extern int tty_ldisc_reinit(struct tty_struct *tty, int disc);
extern const struct seq_operations tty_ldiscs_seq_ops;
extern void tty_wakeup(struct tty_struct *tty);
extern void tty_ldisc_flush(struct tty_struct *tty);
extern long tty_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
extern int tty_mode_ioctl(struct tty_struct *tty, struct file *file,
unsigned int cmd, unsigned long arg);
extern long tty_jobctrl_ioctl(struct tty_struct *tty, struct tty_struct *real_tty,
struct file *file, unsigned int cmd, unsigned long arg);
extern int tty_perform_flush(struct tty_struct *tty, unsigned long arg);
extern void tty_default_fops(struct file_operations *fops);
extern struct tty_struct *alloc_tty_struct(struct tty_driver *driver, int idx);
extern int tty_alloc_file(struct file *file);
extern void tty_add_file(struct tty_struct *tty, struct file *file);
extern void tty_free_file(struct file *file);
extern struct tty_struct *tty_init_dev(struct tty_driver *driver, int idx);
extern void tty_release_struct(struct tty_struct *tty, int idx);
extern int tty_release(struct inode *inode, struct file *filp);
extern void tty_init_termios(struct tty_struct *tty);
extern void tty_save_termios(struct tty_struct *tty);
extern int tty_standard_install(struct tty_driver *driver,
@ -587,8 +518,6 @@ extern int tty_standard_install(struct tty_driver *driver,
extern struct mutex tty_mutex;
#define tty_is_writelocked(tty) (mutex_is_locked(&tty->atomic_write_lock))
extern void tty_port_init(struct tty_port *port);
extern void tty_port_link_device(struct tty_port *port,
struct tty_driver *driver, unsigned index);
@ -726,10 +655,6 @@ static inline int tty_port_users(struct tty_port *port)
extern int tty_register_ldisc(int disc, struct tty_ldisc_ops *new_ldisc);
extern int tty_unregister_ldisc(int disc);
extern int tty_set_ldisc(struct tty_struct *tty, int disc);
extern int tty_ldisc_setup(struct tty_struct *tty, struct tty_struct *o_tty);
extern void tty_ldisc_release(struct tty_struct *tty);
extern int __must_check tty_ldisc_init(struct tty_struct *tty);
extern void tty_ldisc_deinit(struct tty_struct *tty);
extern int tty_ldisc_receive_buf(struct tty_ldisc *ld, const unsigned char *p,
char *f, int count);
@ -743,20 +668,10 @@ static inline void n_tty_init(void) { }
/* tty_audit.c */
#ifdef CONFIG_AUDIT
extern void tty_audit_add_data(struct tty_struct *tty, const void *data,
size_t size);
extern void tty_audit_exit(void);
extern void tty_audit_fork(struct signal_struct *sig);
extern void tty_audit_tiocsti(struct tty_struct *tty, char ch);
extern int tty_audit_push(void);
#else
static inline void tty_audit_add_data(struct tty_struct *tty, const void *data,
size_t size)
{
}
static inline void tty_audit_tiocsti(struct tty_struct *tty, char ch)
{
}
static inline void tty_audit_exit(void)
{
}
@ -798,16 +713,4 @@ static inline void proc_tty_register_driver(struct tty_driver *d) {}
static inline void proc_tty_unregister_driver(struct tty_driver *d) {}
#endif
#define tty_msg(fn, tty, f, ...) \
fn("%s %s: " f, tty_driver_name(tty), tty_name(tty), ##__VA_ARGS__)
#define tty_debug(tty, f, ...) tty_msg(pr_debug, tty, f, ##__VA_ARGS__)
#define tty_info(tty, f, ...) tty_msg(pr_info, tty, f, ##__VA_ARGS__)
#define tty_notice(tty, f, ...) tty_msg(pr_notice, tty, f, ##__VA_ARGS__)
#define tty_warn(tty, f, ...) tty_msg(pr_warn, tty, f, ##__VA_ARGS__)
#define tty_err(tty, f, ...) tty_msg(pr_err, tty, f, ##__VA_ARGS__)
#define tty_info_ratelimited(tty, f, ...) \
tty_msg(pr_info_ratelimited, tty, f, ##__VA_ARGS__)
#endif


@ -507,6 +507,7 @@ struct nft_set_binding {
};
enum nft_trans_phase;
void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set);
void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
struct nft_set_binding *binding,
enum nft_trans_phase phase);


@ -5051,6 +5051,9 @@ void ring_buffer_reset_cpu(struct trace_buffer *buffer, int cpu)
}
EXPORT_SYMBOL_GPL(ring_buffer_reset_cpu);
/* Flag to ensure proper resetting of atomic variables */
#define RESET_BIT (1 << 30)
/**
* ring_buffer_reset_cpu - reset a ring buffer per CPU buffer
* @buffer: The ring buffer to reset a per cpu buffer of
@ -5067,20 +5070,27 @@ void ring_buffer_reset_online_cpus(struct trace_buffer *buffer)
for_each_online_buffer_cpu(buffer, cpu) {
cpu_buffer = buffer->buffers[cpu];
atomic_inc(&cpu_buffer->resize_disabled);
atomic_add(RESET_BIT, &cpu_buffer->resize_disabled);
atomic_inc(&cpu_buffer->record_disabled);
}
/* Make sure all commits have finished */
synchronize_rcu();
for_each_online_buffer_cpu(buffer, cpu) {
for_each_buffer_cpu(buffer, cpu) {
cpu_buffer = buffer->buffers[cpu];
/*
* If a CPU came online during the synchronize_rcu(), then
* ignore it.
*/
if (!(atomic_read(&cpu_buffer->resize_disabled) & RESET_BIT))
continue;
reset_disabled_cpu_buffer(cpu_buffer);
atomic_dec(&cpu_buffer->record_disabled);
atomic_dec(&cpu_buffer->resize_disabled);
atomic_sub(RESET_BIT, &cpu_buffer->resize_disabled);
}
mutex_unlock(&buffer->mutex);
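RESET_BIT above folds a "reset in progress" flag into the resize_disabled atomic so the second loop can skip per-CPU buffers that only came online after the flag was set. A generic sketch of stashing a flag bit in an atomic counter; the names are illustrative:

#define EXAMPLE_BUSY_BIT        (1 << 30)       /* well above any real count */

static void example_mark_busy(atomic_t *ctr)
{
        atomic_add(EXAMPLE_BUSY_BIT, ctr);      /* flag plus an implicit count */
}

static bool example_is_marked(atomic_t *ctr)
{
        return atomic_read(ctr) & EXAMPLE_BUSY_BIT;
}

static void example_clear_busy(atomic_t *ctr)
{
        atomic_sub(EXAMPLE_BUSY_BIT, ctr);      /* drop flag and its count */
}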


@ -590,6 +590,16 @@ static struct debug_obj *lookup_object_or_alloc(void *addr, struct debug_bucket
return NULL;
}
static void debug_objects_fill_pool(void)
{
/*
* On RT enabled kernels the pool refill must happen in preemptible
* context:
*/
if (!IS_ENABLED(CONFIG_PREEMPT_RT) || preemptible())
fill_pool();
}
static void
__debug_object_init(void *addr, const struct debug_obj_descr *descr, int onstack)
{
@ -598,7 +608,7 @@ __debug_object_init(void *addr, const struct debug_obj_descr *descr, int onstack
struct debug_obj *obj;
unsigned long flags;
fill_pool();
debug_objects_fill_pool();
db = get_bucket((unsigned long) addr);
@ -683,6 +693,8 @@ int debug_object_activate(void *addr, const struct debug_obj_descr *descr)
if (!debug_objects_enabled)
return 0;
debug_objects_fill_pool();
db = get_bucket((unsigned long) addr);
raw_spin_lock_irqsave(&db->lock, flags);
@ -892,6 +904,8 @@ void debug_object_assert_init(void *addr, const struct debug_obj_descr *descr)
if (!debug_objects_enabled)
return;
debug_objects_fill_pool();
db = get_bucket((unsigned long) addr);
raw_spin_lock_irqsave(&db->lock, flags);


@ -6155,7 +6155,21 @@ static void __build_all_zonelists(void *data)
int nid;
int __maybe_unused cpu;
pg_data_t *self = data;
unsigned long flags;
/*
* Explicitly disable this CPU's interrupts before taking seqlock
* to prevent any IRQ handler from calling into the page allocator
* (e.g. GFP_ATOMIC) that could hit zonelist_iter_begin and livelock.
*/
local_irq_save(flags);
/*
* Explicitly disable this CPU's synchronous printk() before taking
* seqlock to prevent any printk() from trying to hold port->lock, for
* tty_insert_flip_string_and_push_buffer() on other CPU might be
* calling kmalloc(GFP_ATOMIC | __GFP_NOWARN) with port->lock held.
*/
printk_deferred_enter();
write_seqlock(&zonelist_update_seq);
#ifdef CONFIG_NUMA
@ -6190,6 +6204,8 @@ static void __build_all_zonelists(void *data)
}
write_sequnlock(&zonelist_update_seq);
printk_deferred_exit();
local_irq_restore(flags);
}
static noinline void __init


@ -1094,12 +1094,13 @@ static netdev_tx_t sit_tunnel_xmit(struct sk_buff *skb,
static void ipip6_tunnel_bind_dev(struct net_device *dev)
{
struct ip_tunnel *tunnel = netdev_priv(dev);
int t_hlen = tunnel->hlen + sizeof(struct iphdr);
struct net_device *tdev = NULL;
struct ip_tunnel *tunnel;
int hlen = LL_MAX_HEADER;
const struct iphdr *iph;
struct flowi4 fl4;
tunnel = netdev_priv(dev);
iph = &tunnel->parms.iph;
if (iph->daddr) {
@ -1122,14 +1123,15 @@ static void ipip6_tunnel_bind_dev(struct net_device *dev)
tdev = __dev_get_by_index(tunnel->net, tunnel->parms.link);
if (tdev && !netif_is_l3_master(tdev)) {
int t_hlen = tunnel->hlen + sizeof(struct iphdr);
int mtu;
mtu = tdev->mtu - t_hlen;
if (mtu < IPV6_MIN_MTU)
mtu = IPV6_MIN_MTU;
WRITE_ONCE(dev->mtu, mtu);
hlen = tdev->hard_header_len + tdev->needed_headroom;
}
dev->needed_headroom = t_hlen + hlen;
}
static void ipip6_tunnel_update(struct ip_tunnel *t, struct ip_tunnel_parm *p,


@ -165,6 +165,7 @@ static int ncsi_aen_handler_cr(struct ncsi_dev_priv *ndp,
nc->state = NCSI_CHANNEL_INACTIVE;
list_add_tail_rcu(&nc->link, &ndp->channel_queue);
spin_unlock_irqrestore(&ndp->lock, flags);
nc->modes[NCSI_MODE_TX_ENABLE].enable = 0;
return ncsi_process_next_channel(ndp);
}


@ -4479,12 +4479,24 @@ static void nf_tables_unbind_set(const struct nft_ctx *ctx, struct nft_set *set,
}
}
void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set)
{
if (nft_set_is_anonymous(set))
nft_clear(ctx->net, set);
set->use++;
}
EXPORT_SYMBOL_GPL(nf_tables_activate_set);
void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
struct nft_set_binding *binding,
enum nft_trans_phase phase)
{
switch (phase) {
case NFT_TRANS_PREPARE:
if (nft_set_is_anonymous(set))
nft_deactivate_next(ctx->net, set);
set->use--;
return;
case NFT_TRANS_ABORT:


@ -233,7 +233,7 @@ static void nft_dynset_activate(const struct nft_ctx *ctx,
{
struct nft_dynset *priv = nft_expr_priv(expr);
priv->set->use++;
nf_tables_activate_set(ctx, priv->set);
}
static void nft_dynset_destroy(const struct nft_ctx *ctx,


@ -132,7 +132,7 @@ static void nft_lookup_activate(const struct nft_ctx *ctx,
{
struct nft_lookup *priv = nft_expr_priv(expr);
priv->set->use++;
nf_tables_activate_set(ctx, priv->set);
}
static void nft_lookup_destroy(const struct nft_ctx *ctx,


@ -180,7 +180,7 @@ static void nft_objref_map_activate(const struct nft_ctx *ctx,
{
struct nft_objref_map *priv = nft_expr_priv(expr);
priv->set->use++;
nf_tables_activate_set(ctx, priv->set);
}
static void nft_objref_map_destroy(const struct nft_ctx *ctx,


@ -1996,7 +1996,7 @@ static int packet_sendmsg_spkt(struct socket *sock, struct msghdr *msg,
goto retry;
}
if (!dev_validate_header(dev, skb->data, len)) {
if (!dev_validate_header(dev, skb->data, len) || !skb->len) {
err = -EINVAL;
goto out_unlock;
}


@ -753,7 +753,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
fallthrough;
case 1:
if (p.call.timeouts.hard > 0) {
j = msecs_to_jiffies(p.call.timeouts.hard);
j = p.call.timeouts.hard * HZ;
now = jiffies;
j += now;
WRITE_ONCE(call->expect_term_by, j);
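The hard call timeout is specified in seconds, so the fix above multiplies by HZ instead of treating the value as milliseconds. A worked example assuming HZ == 250 and a 30-second timeout:

/*   old: msecs_to_jiffies(30) ->    8 jiffies, about 32 ms
 *   new: 30 * HZ              -> 7500 jiffies, the intended 30 seconds */
static unsigned long example_hard_expiry(unsigned int hard_secs)
{
        return jiffies + hard_secs * HZ;        /* absolute expiry, as above */
}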


@ -244,7 +244,7 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
goto out;
}
if (unlikely(!(dev->flags & IFF_UP))) {
if (unlikely(!(dev->flags & IFF_UP)) || !netif_carrier_ok(dev)) {
net_notice_ratelimited("tc mirred to Houston: device %s is down\n",
dev->name);
goto out;


@ -1466,6 +1466,7 @@ static int tcf_block_bind(struct tcf_block *block,
err_unroll:
list_for_each_entry_safe(block_cb, next, &bo->cb_list, list) {
list_del(&block_cb->driver_list);
if (i-- > 0) {
list_del(&block_cb->list);
tcf_block_playback_offloads(block, block_cb->cb,


@ -1442,7 +1442,7 @@ void dmasound_deinit(void)
unregister_sound_dsp(sq_unit);
}
static int dmasound_setup(char *str)
static int __maybe_unused dmasound_setup(char *str)
{
int ints[6], size;


@ -804,6 +804,7 @@ int snd_usb_caiaq_input_init(struct snd_usb_caiaqdev *cdev)
default:
/* no input methods supported on this device */
ret = -EINVAL;
goto exit_free_idev;
}


@ -1417,7 +1417,7 @@
{
"EventCode": "0x45054",
"EventName": "PM_FMA_CMPL",
"BriefDescription": "two flops operation completed (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only. "
"BriefDescription": "two flops operation completed (fmadd, fnmadd, fmsub, fnmsub) Scalar instructions only."
},
{
"EventCode": "0x201E8",
@ -2017,7 +2017,7 @@
{
"EventCode": "0xC0BC",
"EventName": "PM_LSU_FLUSH_OTHER",
"BriefDescription": "Other LSU flushes including: Sync (sync ack from L2 caused search of LRQ for oldest snooped load, This will either signal a Precise Flush of the oldest snooped loa or a Flush Next PPC); Data Valid Flush Next (several cases of this, one example is store and reload are lined up such that a store-hit-reload scenario exists and the CDF has already launched and has gotten bad/stale data); Bad Data Valid Flush Next (might be a few cases of this, one example is a larxa (D$ hit) return data and dval but can't allocate to LMQ (LMQ full or other reason). Already gave dval but can't watch it for snoop_hit_larx. Need to take the “bad dval” back and flush all younger ops)"
"BriefDescription": "Other LSU flushes including: Sync (sync ack from L2 caused search of LRQ for oldest snooped load, This will either signal a Precise Flush of the oldest snooped loa or a Flush Next PPC); Data Valid Flush Next (several cases of this, one example is store and reload are lined up such that a store-hit-reload scenario exists and the CDF has already launched and has gotten bad/stale data); Bad Data Valid Flush Next (might be a few cases of this, one example is a larxa (D$ hit) return data and dval but can't allocate to LMQ (LMQ full or other reason). Already gave dval but can't watch it for snoop_hit_larx. Need to take the 'bad dval' back and flush all younger ops)"
},
{
"EventCode": "0x5094",


@ -442,7 +442,7 @@
{
"EventCode": "0x4D052",
"EventName": "PM_2FLOP_CMPL",
"BriefDescription": "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg "
"BriefDescription": "DP vector version of fmul, fsub, fcmp, fsel, fabs, fnabs, fres ,fsqrte, fneg"
},
{
"EventCode": "0x1F142",


@ -1670,7 +1670,7 @@ static int perf_pmu__new_caps(struct list_head *list, char *name, char *value)
return 0;
free_name:
zfree(caps->name);
zfree(&caps->name);
free_caps:
free(caps);


@ -873,8 +873,7 @@ static int hist_entry__dso_to_filter(struct hist_entry *he, int type,
static int64_t
sort__sym_from_cmp(struct hist_entry *left, struct hist_entry *right)
{
struct addr_map_symbol *from_l = &left->branch_info->from;
struct addr_map_symbol *from_r = &right->branch_info->from;
struct addr_map_symbol *from_l, *from_r;
if (!left->branch_info || !right->branch_info)
return cmp_null(left->branch_info, right->branch_info);


@ -548,7 +548,7 @@ static int elf_read_build_id(Elf *elf, void *bf, size_t size)
size_t sz = min(size, descsz);
memcpy(bf, ptr, sz);
memset(bf + sz, 0, size - sz);
err = descsz;
err = sz;
break;
}
}