This is the 6.1.11 stable release
-----BEGIN PGP SIGNATURE-----
iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmPkywAACgkQONu9yGCS
aT42Kw/9FFrdwv29yND651dPIglYKgO0Oz27/LFNGqst1A/G1ITzfs/94NSRr+9j
uvwmBLbC+n/OXYavliBVWlPaYUCLqoFSfR+q953yz/UT0803E8BUvQ8NN8O7lsg7
hfbWJaASxt5puy2pBFypeWM+OXoVOvUBj3VhbgtUwwcYLPuYafj9rCAytdIIf5fr
RKWBLfx7As4OJ+Hb3KNkolTkFDTfV5+zqCAc9Ko474d1bpRnF15UdQN8Kkinr2+O
YNGTvDT8jR8eAk/9PiCNrG7DEMSKaczP8n/ap6PikD/KnK7ShtCLwZztLnmu65g1
vZG+cnEda8FuY3Ms03UrHhKqzMzBY/vslzBNMBTNmDsr+b7ilhffAYXPKS8s7xrg
bJjmfzfITFAjXrml25enVO0V9RtTxv6E07U7SnDrLsvE2KBFZfUR/3Xl70bVBb0S
db60kmEoq3XHHtoVySOHlfihVHSy02V9dlFcLOYMQsDHsGVsRXOR87g6d7+rJS3h
hYWz5YxMLJUr2qn2836DPBnX9Ix0VjDx+X2fB4bNYzKc1dMlgzbpYrhk9LEOUDsx
emJuqZskjkLby9Bw36N3eHW3fKPOFrwpYwPWYJHdWx1mmFSNdV6MdfEtZXpuEkFJ
iFyJPeeODGadoiznnXTaBFfhozRj+B6FXrY6pkF+WMoSt8ZlZpM=
=vu7j
-----END PGP SIGNATURE-----

Merge 6.1.11 into android14-6.1

Changes in 6.1.11
firewire: fix memory leak for payload of request subaction to IEC 61883-1 FCP region
bus: sunxi-rsb: Fix error handling in sunxi_rsb_init()
arm64: dts: imx8m-venice: Remove incorrect 'uart-has-rtscts'
arm64: dts: freescale: imx8dxl: fix sc_pwrkey's property name linux,keycode
ASoC: amd: acp-es8336: Drop reference count of ACPI device after use
ASoC: Intel: bytcht_es8316: Drop reference count of ACPI device after use
ASoC: Intel: bytcr_rt5651: Drop reference count of ACPI device after use
ASoC: Intel: bytcr_rt5640: Drop reference count of ACPI device after use
ASoC: Intel: bytcr_wm5102: Drop reference count of ACPI device after use
ASoC: Intel: sof_es8336: Drop reference count of ACPI device after use
ASoC: Intel: avs: Implement PCI shutdown
bpf: Fix off-by-one error in bpf_mem_cache_idx()
bpf: Fix a possible task gone issue with bpf_send_signal[_thread]() helpers
ALSA: hda/via: Avoid potential array out-of-bound in add_secret_dac_path()
bpf: Fix to preserve reg parent/live fields when copying range info
selftests/filesystems: grant executable permission to run_fat_tests.sh
ASoC: SOF: ipc4-mtrace: prevent underflow in sof_ipc4_priority_mask_dfs_write()
bpf: Add missing btf_put to register_btf_id_dtor_kfuncs
media: v4l2-ctrls-api.c: move ctrl->is_new = 1 to the correct line
bpf, sockmap: Check for any of tcp_bpf_prots when cloning a listener
arm64: dts: imx8mm: Fix pad control for UART1_DTE_RX
arm64: dts: imx8mm-verdin: Do not power down eth-phy
drm/vc4: hdmi: make CEC adapter name unique
drm/ssd130x: Init display before the SSD130X_DISPLAY_ON command
scsi: Revert "scsi: core: map PQ=1, PDT=other values to SCSI_SCAN_TARGET_PRESENT"
bpf: Fix the kernel crash caused by bpf_setsockopt().
ALSA: memalloc: Workaround for Xen PV
vhost/net: Clear the pending messages when the backend is removed
copy_oldmem_kernel() - WRITE is "data source", not destination
WRITE is "data source", not destination...
READ is "data destination", not source...
zcore: WRITE is "data source", not destination...
memcpy_real(): WRITE is "data source", not destination...
fix iov_iter_bvec() "direction" argument
fix 'direction' argument of iov_iter_{init,bvec}()
fix "direction" argument of iov_iter_kvec()
use less confusing names for iov_iter direction initializers
vhost-scsi: unbreak any layout for response
ice: Prevent set_channel from changing queues while RDMA active
qede: execute xdp_do_flush() before napi_complete_done()
virtio-net: execute xdp_do_flush() before napi_complete_done()
dpaa_eth: execute xdp_do_flush() before napi_complete_done()
dpaa2-eth: execute xdp_do_flush() before napi_complete_done()
skb: Do mix page pool and page referenced frags in GRO
sfc: correctly advertise tunneled IPv6 segmentation
net: phy: dp83822: Fix null pointer access on DP83825/DP83826 devices
net: wwan: t7xx: Fix Runtime PM initialization
block, bfq: replace 0/1 with false/true in bic apis
block, bfq: fix uaf for bfqq in bic_set_bfqq()
netrom: Fix use-after-free caused by accept on already connected socket
fscache: Use wait_on_bit() to wait for the freeing of relinquished volume
platform/x86/amd/pmf: update to auto-mode limits only after AMT event
platform/x86/amd/pmf: Add helper routine to update SPS thermals
platform/x86/amd/pmf: Fix to update SPS default pprof thermals
platform/x86/amd/pmf: Add helper routine to check pprof is balanced
platform/x86/amd/pmf: Fix to update SPS thermals when power supply change
platform/x86/amd/pmf: Ensure mutexes are initialized before use
platform/x86: thinkpad_acpi: Fix thinklight LED brightness returning 255
drm/i915/guc: Fix locking when searching for a hung request
drm/i915: Fix request ref counting during error capture & debugfs dump
drm/i915: Fix up locking around dumping requests lists
drm/i915/adlp: Fix typo for reference clock
net/tls: tls_is_tx_ready() checked list_entry
ALSA: firewire-motu: fix unreleased lock warning in hwdep device
netfilter: br_netfilter: disable sabotage_in hook after first suppression
block: ublk: extending queue_size to fix overflow
kunit: fix kunit_test_init_section_suites(...)
squashfs: harden sanity check in squashfs_read_xattr_id_table
maple_tree: should get pivots boundary by type
sctp: do not check hb_timer.expires when resetting hb_timer
net: phy: meson-gxl: Add generic dummy stubs for MMD register access
drm/panel: boe-tv101wum-nl6: Ensure DSI writes succeed during disable
ip/ip6_gre: Fix changing addr gen mode not generating IPv6 link local address
ip/ip6_gre: Fix non-point-to-point tunnel not generating IPv6 link local address
riscv: kprobe: Fixup kernel panic when probing an illegal position
igc: return an error if the mac type is unknown in igc_ptp_systim_to_hwtstamp()
octeontx2-af: Fix devlink unregister
can: j1939: fix errant WARN_ON_ONCE in j1939_session_deactivate
can: raw: fix CAN FD frame transmissions over CAN XL devices
can: mcp251xfd: mcp251xfd_ring_set_ringparam(): assign missing tx_obj_num_coalesce_irq
ata: libata: Fix sata_down_spd_limit() when no link speed is reported
selftests: net: udpgso_bench_rx: Fix 'used uninitialized' compiler warning
selftests: net: udpgso_bench_rx/tx: Stop when wrong CLI args are provided
selftests: net: udpgso_bench: Fix racing bug between the rx/tx programs
selftests: net: udpgso_bench_tx: Cater for pending datagrams zerocopy benchmarking
virtio-net: Keep stop() to follow mirror sequence of open()
net: openvswitch: fix flow memory leak in ovs_flow_cmd_new
efi: fix potential NULL deref in efi_mem_reserve_persistent
rtc: sunplus: fix format string for printing resource
certs: Fix build error when PKCS#11 URI contains semicolon
kbuild: modinst: Fix build error when CONFIG_MODULE_SIG_KEY is a PKCS#11 URI
i2c: designware-pci: Add new PCI IDs for AMD NAVI GPU
i2c: mxs: suppress probe-deferral error message
scsi: target: core: Fix warning on RT kernels
x86/aperfmperf: Erase stale arch_freq_scale values when disabling frequency invariance readings
perf/x86/intel: Add Emerald Rapids
perf/x86/intel/cstate: Add Emerald Rapids
scsi: iscsi_tcp: Fix UAF during logout when accessing the shost ipaddress
scsi: iscsi_tcp: Fix UAF during login when accessing the shost ipaddress
i2c: rk3x: fix a bunch of kernel-doc warnings
Revert "gfs2: stop using generic_writepages in gfs2_ail1_start_one"
x86/build: Move '-mindirect-branch-cs-prefix' out of GCC-only block
platform/x86: dell-wmi: Add a keymap for KEY_MUTE in type 0x0010 table
platform/x86: hp-wmi: Handle Omen Key event
platform/x86: gigabyte-wmi: add support for B450M DS3H WIFI-CF
platform/x86/amd: pmc: Disable IRQ1 wakeup for RN/CZN
net/x25: Fix to not accept on connected socket
drm/amd/display: Fix timing not changning when freesync video is enabled
bcache: Silence memcpy() run-time false positive warnings
iio: adc: stm32-dfsdm: fill module aliases
usb: dwc3: qcom: enable vbus override when in OTG dr-mode
usb: gadget: f_fs: Fix unbalanced spinlock in __ffs_ep0_queue_wait
vc_screen: move load of struct vc_data pointer in vcs_read() to avoid UAF
fbcon: Check font dimension limits
cgroup/cpuset: Fix wrong check in update_parent_subparts_cpumask()
hv_netvsc: Fix missed pagebuf entries in netvsc_dma_map/unmap()
ARM: dts: imx7d-smegw01: Fix USB host over-current polarity
net: qrtr: free memory on error path in radix_tree_insert()
can: isotp: split tx timer into transmission and timeout
can: isotp: handle wait_event_interruptible() return values
watchdog: diag288_wdt: do not use stack buffers for hardware data
watchdog: diag288_wdt: fix __diag288() inline assembly
ALSA: hda/realtek: Add Acer Predator PH315-54
ALSA: hda/realtek: fix mute/micmute LEDs, speaker don't work for a HP platform
ASoC: codecs: wsa883x: correct playback min/max rates
ASoC: SOF: sof-audio: unprepare when swidget->use_count > 0
ASoC: SOF: sof-audio: skip prepare/unprepare if swidget is NULL
ASoC: SOF: keep prepare/unprepare widgets in sink path
efi: Accept version 2 of memory attributes table
rtc: efi: Enable SET/GET WAKEUP services as optional
iio: hid: fix the retval in accel_3d_capture_sample
iio: hid: fix the retval in gyro_3d_capture_sample
iio: adc: xilinx-ams: fix devm_krealloc() return value check
iio: adc: berlin2-adc: Add missing of_node_put() in error path
iio: imx8qxp-adc: fix irq flood when call imx8qxp_adc_read_raw()
iio:adc:twl6030: Enable measurements of VUSB, VBAT and others
iio: light: cm32181: Fix PM support on system with 2 I2C resources
iio: imu: fxos8700: fix ACCEL measurement range selection
iio: imu: fxos8700: fix incomplete ACCEL and MAGN channels readback
iio: imu: fxos8700: fix IMU data bits returned to user space
iio: imu: fxos8700: fix map label of channel type to MAGN sensor
iio: imu: fxos8700: fix swapped ACCEL and MAGN channels readback
iio: imu: fxos8700: fix incorrect ODR mode readback
iio: imu: fxos8700: fix failed initialization ODR mode assignment
iio: imu: fxos8700: remove definition FXOS8700_CTRL_ODR_MIN
iio: imu: fxos8700: fix MAGN sensor scale and unit
nvmem: brcm_nvram: Add check for kzalloc
nvmem: sunxi_sid: Always use 32-bit MMIO reads
nvmem: qcom-spmi-sdam: fix module autoloading
parisc: Fix return code of pdc_iodc_print()
parisc: Replace hardcoded value with PRIV_USER constant in ptrace.c
parisc: Wire up PTRACE_GETREGS/PTRACE_SETREGS for compat case
riscv: disable generation of unwind tables
Revert "mm: kmemleak: alloc gray object for reserved region with direct map"
mm: multi-gen LRU: fix crash during cgroup migration
mm: hugetlb: proc: check for hugetlb shared PMD in /proc/PID/smaps
mm: memcg: fix NULL pointer in mem_cgroup_track_foreign_dirty_slowpath()
usb: gadget: f_uac2: Fix incorrect increment of bNumEndpoints
usb: typec: ucsi: Don't attempt to resume the ports before they exist
usb: gadget: udc: do not clear gadget driver.bus
kernel/irq/irqdomain.c: fix memory leak with using debugfs_lookup()
HV: hv_balloon: fix memory leak with using debugfs_lookup()
x86/debug: Fix stack recursion caused by wrongly ordered DR7 accesses
fpga: m10bmc-sec: Fix probe rollback
fpga: stratix10-soc: Fix return value check in s10_ops_write_init()
mm/uffd: fix pte marker when fork() without fork event
mm/swapfile: add cond_resched() in get_swap_pages()
mm/khugepaged: fix ->anon_vma race
mm, mremap: fix mremap() expanding for vma's with vm_ops->close()
mm/MADV_COLLAPSE: catch !none !huge !bad pmd lookups
highmem: round down the address passed to kunmap_flush_on_unmap()
ia64: fix build error due to switch case label appearing next to declaration
Squashfs: fix handling and sanity checking of xattr_ids count
maple_tree: fix mas_empty_area_rev() lower bound validation
migrate: hugetlb: check for hugetlb shared PMD in node migration
dma-buf: actually set signaling bit for private stub fences
serial: stm32: Merge hard IRQ and threaded IRQ handling into single IRQ handler
drm/i915: Avoid potential vm use-after-free
drm/i915: Fix potential bit_17 double-free
drm/amd: Fix initialization for nbio 4.3.0
drm/amd/pm: drop unneeded dpm features disablement for SMU 13.0.4/11
drm/amdgpu: update wave data type to 3 for gfx11
nvmem: core: initialise nvmem->id early
nvmem: core: remove nvmem_config wp_gpio
nvmem: core: fix cleanup after dev_set_name()
nvmem: core: fix registration vs use race
nvmem: core: fix device node refcounting
nvmem: core: fix cell removal on error
nvmem: core: fix return value
phy: qcom-qmp-combo: fix runtime suspend
serial: 8250_dma: Fix DMA Rx completion race
serial: 8250_dma: Fix DMA Rx rearm race
platform/x86/amd: pmc: add CONFIG_SERIO dependency
ASoC: SOF: sof-audio: prepare_widgets: Check swidget for NULL on sink failure
iio:adc:twl6030: Enable measurement of VAC
powerpc/64s/radix: Fix crash with unaligned relocated kernel
powerpc/64s: Fix local irq disable when PMIs are disabled
powerpc/imc-pmu: Revert nest_init_lock to being a mutex
fs/ntfs3: Validate attribute data and valid sizes
ovl: Use "buf" flexible array for memcpy() destination
f2fs: initialize locks earlier in f2fs_fill_super()
fbdev: smscufx: fix error handling code in ufx_usb_probe
f2fs: fix to do sanity check on i_extra_isize in is_alive()
wifi: brcmfmac: Check the count value of channel spec to prevent out-of-bounds reads
gfs2: Cosmetic gfs2_dinode_{in,out} cleanup
gfs2: Always check inode size of inline inodes
bpf: Skip invalid kfunc call in backtrack_insn
Linux 6.1.11

Change-Id: I69722bc9711b91f2fca18de59746ada373f64c5e
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
commit c747c01851
diff --git a/Makefile b/Makefile
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 6
 PATCHLEVEL = 1
-SUBLEVEL = 10
+SUBLEVEL = 11
 EXTRAVERSION =
 NAME = Hurr durr I'ma ninja sloth
@ -198,6 +198,7 @@ &usbotg1 {
|
||||
&usbotg2 {
|
||||
pinctrl-names = "default";
|
||||
pinctrl-0 = <&pinctrl_usbotg2>;
|
||||
over-current-active-low;
|
||||
dr_mode = "host";
|
||||
status = "okay";
|
||||
};
|
||||
@ -374,7 +375,7 @@ MX7D_PAD_LPSR_GPIO1_IO05__GPIO1_IO5 0x04
|
||||
|
||||
pinctrl_usbotg2: usbotg2grp {
|
||||
fsl,pins = <
|
||||
MX7D_PAD_UART3_RTS_B__USB_OTG2_OC 0x04
|
||||
MX7D_PAD_UART3_RTS_B__USB_OTG2_OC 0x5c
|
||||
>;
|
||||
};
|
||||
|
||||
|
@ -157,7 +157,7 @@ rtc: rtc {
|
||||
|
||||
sc_pwrkey: keys {
|
||||
compatible = "fsl,imx8qxp-sc-key", "fsl,imx-sc-key";
|
||||
linux,keycode = <KEY_POWER>;
|
||||
linux,keycodes = <KEY_POWER>;
|
||||
wakeup-source;
|
||||
};
|
||||
|
||||
|
@ -602,7 +602,7 @@
|
||||
#define MX8MM_IOMUXC_UART1_RXD_GPIO5_IO22 0x234 0x49C 0x000 0x5 0x0
|
||||
#define MX8MM_IOMUXC_UART1_RXD_TPSMP_HDATA24 0x234 0x49C 0x000 0x7 0x0
|
||||
#define MX8MM_IOMUXC_UART1_TXD_UART1_DCE_TX 0x238 0x4A0 0x000 0x0 0x0
|
||||
#define MX8MM_IOMUXC_UART1_TXD_UART1_DTE_RX 0x238 0x4A0 0x4F4 0x0 0x0
|
||||
#define MX8MM_IOMUXC_UART1_TXD_UART1_DTE_RX 0x238 0x4A0 0x4F4 0x0 0x1
|
||||
#define MX8MM_IOMUXC_UART1_TXD_ECSPI3_MOSI 0x238 0x4A0 0x000 0x1 0x0
|
||||
#define MX8MM_IOMUXC_UART1_TXD_GPIO5_IO23 0x238 0x4A0 0x000 0x5 0x0
|
||||
#define MX8MM_IOMUXC_UART1_TXD_TPSMP_HDATA25 0x238 0x4A0 0x000 0x7 0x0
|
||||
|
@ -33,7 +33,6 @@ &uart2 {
|
||||
pinctrl-0 = <&pinctrl_uart2>;
|
||||
rts-gpios = <&gpio5 29 GPIO_ACTIVE_LOW>;
|
||||
cts-gpios = <&gpio5 28 GPIO_ACTIVE_LOW>;
|
||||
uart-has-rtscts;
|
||||
status = "okay";
|
||||
};
|
||||
|
||||
|
@ -33,7 +33,6 @@ &uart2 {
|
||||
pinctrl-0 = <&pinctrl_uart2>;
|
||||
rts-gpios = <&gpio5 29 GPIO_ACTIVE_LOW>;
|
||||
cts-gpios = <&gpio5 28 GPIO_ACTIVE_LOW>;
|
||||
uart-has-rtscts;
|
||||
status = "okay";
|
||||
};
|
||||
|
||||
|
@ -222,7 +222,6 @@ &uart3 {
|
||||
pinctrl-0 = <&pinctrl_uart3>, <&pinctrl_bten>;
|
||||
cts-gpios = <&gpio5 8 GPIO_ACTIVE_LOW>;
|
||||
rts-gpios = <&gpio5 9 GPIO_ACTIVE_LOW>;
|
||||
uart-has-rtscts;
|
||||
status = "okay";
|
||||
|
||||
bluetooth {
|
||||
|
@ -721,7 +721,6 @@ &uart1 {
|
||||
dtr-gpios = <&gpio1 14 GPIO_ACTIVE_LOW>;
|
||||
dsr-gpios = <&gpio1 1 GPIO_ACTIVE_LOW>;
|
||||
dcd-gpios = <&gpio1 11 GPIO_ACTIVE_LOW>;
|
||||
uart-has-rtscts;
|
||||
status = "okay";
|
||||
};
|
||||
|
||||
@ -737,7 +736,6 @@ &uart3 {
|
||||
pinctrl-0 = <&pinctrl_uart3>, <&pinctrl_uart3_gpio>;
|
||||
cts-gpios = <&gpio4 10 GPIO_ACTIVE_LOW>;
|
||||
rts-gpios = <&gpio4 9 GPIO_ACTIVE_LOW>;
|
||||
uart-has-rtscts;
|
||||
status = "okay";
|
||||
};
|
||||
|
||||
@ -746,7 +744,6 @@ &uart4 {
|
||||
pinctrl-0 = <&pinctrl_uart4>, <&pinctrl_uart4_gpio>;
|
||||
cts-gpios = <&gpio5 11 GPIO_ACTIVE_LOW>;
|
||||
rts-gpios = <&gpio5 12 GPIO_ACTIVE_LOW>;
|
||||
uart-has-rtscts;
|
||||
status = "okay";
|
||||
};
|
||||
|
||||
|
@ -651,7 +651,6 @@ &uart1 {
|
||||
pinctrl-0 = <&pinctrl_uart1>, <&pinctrl_uart1_gpio>;
|
||||
rts-gpios = <&gpio4 10 GPIO_ACTIVE_LOW>;
|
||||
cts-gpios = <&gpio4 24 GPIO_ACTIVE_LOW>;
|
||||
uart-has-rtscts;
|
||||
status = "okay";
|
||||
};
|
||||
|
||||
@ -668,7 +667,6 @@ &uart3 {
|
||||
pinctrl-0 = <&pinctrl_uart3>, <&pinctrl_uart3_gpio>;
|
||||
rts-gpios = <&gpio2 1 GPIO_ACTIVE_LOW>;
|
||||
cts-gpios = <&gpio2 0 GPIO_ACTIVE_LOW>;
|
||||
uart-has-rtscts;
|
||||
status = "okay";
|
||||
|
||||
bluetooth {
|
||||
@ -686,7 +684,6 @@ &uart4 {
|
||||
dtr-gpios = <&gpio4 3 GPIO_ACTIVE_LOW>;
|
||||
dsr-gpios = <&gpio4 4 GPIO_ACTIVE_LOW>;
|
||||
dcd-gpios = <&gpio4 6 GPIO_ACTIVE_LOW>;
|
||||
uart-has-rtscts;
|
||||
status = "okay";
|
||||
};
|
||||
|
||||
|
@ -572,7 +572,6 @@ &uart1 {
|
||||
dtr-gpios = <&gpio1 0 GPIO_ACTIVE_LOW>;
|
||||
dsr-gpios = <&gpio1 1 GPIO_ACTIVE_LOW>;
|
||||
dcd-gpios = <&gpio3 24 GPIO_ACTIVE_LOW>;
|
||||
uart-has-rtscts;
|
||||
status = "okay";
|
||||
};
|
||||
|
||||
|
@ -98,6 +98,7 @@ reg_ethphy: regulator-ethphy {
|
||||
off-on-delay = <500000>;
|
||||
pinctrl-names = "default";
|
||||
pinctrl-0 = <&pinctrl_reg_eth>;
|
||||
regulator-always-on;
|
||||
regulator-boot-on;
|
||||
regulator-max-microvolt = <3300000>;
|
||||
regulator-min-microvolt = <3300000>;
|
||||
|
@ -631,7 +631,6 @@ &uart3 {
|
||||
pinctrl-0 = <&pinctrl_uart3>, <&pinctrl_uart3_gpio>;
|
||||
rts-gpios = <&gpio2 1 GPIO_ACTIVE_LOW>;
|
||||
cts-gpios = <&gpio2 0 GPIO_ACTIVE_LOW>;
|
||||
uart-has-rtscts;
|
||||
status = "okay";
|
||||
|
||||
bluetooth {
|
||||
|
@ -611,7 +611,6 @@ &uart3 {
|
||||
pinctrl-0 = <&pinctrl_uart3>, <&pinctrl_uart3_gpio>;
|
||||
cts-gpios = <&gpio3 21 GPIO_ACTIVE_LOW>;
|
||||
rts-gpios = <&gpio3 22 GPIO_ACTIVE_LOW>;
|
||||
uart-has-rtscts;
|
||||
status = "okay";
|
||||
|
||||
bluetooth {
|
||||
|
@ -170,6 +170,9 @@ ia64_mremap (unsigned long addr, unsigned long old_len, unsigned long new_len, u
|
||||
asmlinkage long
|
||||
ia64_clock_getres(const clockid_t which_clock, struct __kernel_timespec __user *tp)
|
||||
{
|
||||
struct timespec64 rtn_tp;
|
||||
s64 tick_ns;
|
||||
|
||||
/*
|
||||
* ia64's clock_gettime() syscall is implemented as a vdso call
|
||||
* fsys_clock_gettime(). Currently it handles only
|
||||
@ -185,8 +188,8 @@ ia64_clock_getres(const clockid_t which_clock, struct __kernel_timespec __user *
|
||||
switch (which_clock) {
|
||||
case CLOCK_REALTIME:
|
||||
case CLOCK_MONOTONIC:
|
||||
s64 tick_ns = DIV_ROUND_UP(NSEC_PER_SEC, local_cpu_data->itc_freq);
|
||||
struct timespec64 rtn_tp = ns_to_timespec64(tick_ns);
|
||||
tick_ns = DIV_ROUND_UP(NSEC_PER_SEC, local_cpu_data->itc_freq);
|
||||
rtn_tp = ns_to_timespec64(tick_ns);
|
||||
return put_timespec64(&rtn_tp, tp);
|
||||
}
|
||||
|
||||
|
@ -1303,7 +1303,7 @@ static char iodc_dbuf[4096] __page_aligned_bss;
|
||||
*/
|
||||
int pdc_iodc_print(const unsigned char *str, unsigned count)
|
||||
{
|
||||
unsigned int i;
|
||||
unsigned int i, found = 0;
|
||||
unsigned long flags;
|
||||
|
||||
count = min_t(unsigned int, count, sizeof(iodc_dbuf));
|
||||
@ -1315,6 +1315,7 @@ int pdc_iodc_print(const unsigned char *str, unsigned count)
|
||||
iodc_dbuf[i+0] = '\r';
|
||||
iodc_dbuf[i+1] = '\n';
|
||||
i += 2;
|
||||
found = 1;
|
||||
goto print;
|
||||
default:
|
||||
iodc_dbuf[i] = str[i];
|
||||
@ -1330,7 +1331,7 @@ int pdc_iodc_print(const unsigned char *str, unsigned count)
|
||||
__pa(pdc_result), 0, __pa(iodc_dbuf), i, 0);
|
||||
spin_unlock_irqrestore(&pdc_lock, flags);
|
||||
|
||||
return i;
|
||||
return i - found;
|
||||
}
|
||||
|
||||
#if !defined(BOOTLOADER)
|
||||
|
@ -126,6 +126,12 @@ long arch_ptrace(struct task_struct *child, long request,
|
||||
unsigned long tmp;
|
||||
long ret = -EIO;
|
||||
|
||||
unsigned long user_regs_struct_size = sizeof(struct user_regs_struct);
|
||||
#ifdef CONFIG_64BIT
|
||||
if (is_compat_task())
|
||||
user_regs_struct_size /= 2;
|
||||
#endif
|
||||
|
||||
switch (request) {
|
||||
|
||||
/* Read the word at location addr in the USER area. For ptraced
|
||||
@ -166,7 +172,7 @@ long arch_ptrace(struct task_struct *child, long request,
|
||||
addr >= sizeof(struct pt_regs))
|
||||
break;
|
||||
if (addr == PT_IAOQ0 || addr == PT_IAOQ1) {
|
||||
data |= 3; /* ensure userspace privilege */
|
||||
data |= PRIV_USER; /* ensure userspace privilege */
|
||||
}
|
||||
if ((addr >= PT_GR1 && addr <= PT_GR31) ||
|
||||
addr == PT_IAOQ0 || addr == PT_IAOQ1 ||
|
||||
@ -181,14 +187,14 @@ long arch_ptrace(struct task_struct *child, long request,
|
||||
return copy_regset_to_user(child,
|
||||
task_user_regset_view(current),
|
||||
REGSET_GENERAL,
|
||||
0, sizeof(struct user_regs_struct),
|
||||
0, user_regs_struct_size,
|
||||
datap);
|
||||
|
||||
case PTRACE_SETREGS: /* Set all gp regs in the child. */
|
||||
return copy_regset_from_user(child,
|
||||
task_user_regset_view(current),
|
||||
REGSET_GENERAL,
|
||||
0, sizeof(struct user_regs_struct),
|
||||
0, user_regs_struct_size,
|
||||
datap);
|
||||
|
||||
case PTRACE_GETFPREGS: /* Get the child FPU state. */
|
||||
@ -285,7 +291,7 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
|
||||
if (addr >= sizeof(struct pt_regs))
|
||||
break;
|
||||
if (addr == PT_IAOQ0+4 || addr == PT_IAOQ1+4) {
|
||||
data |= 3; /* ensure userspace privilege */
|
||||
data |= PRIV_USER; /* ensure userspace privilege */
|
||||
}
|
||||
if (addr >= PT_FR0 && addr <= PT_FR31 + 4) {
|
||||
/* Special case, fp regs are 64 bits anyway */
|
||||
@ -302,6 +308,11 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
|
||||
}
|
||||
}
|
||||
break;
|
||||
case PTRACE_GETREGS:
|
||||
case PTRACE_SETREGS:
|
||||
case PTRACE_GETFPREGS:
|
||||
case PTRACE_SETFPREGS:
|
||||
return arch_ptrace(child, request, addr, data);
|
||||
|
||||
default:
|
||||
ret = compat_ptrace_request(child, request, addr, data);
|
||||
@ -483,7 +494,7 @@ static void set_reg(struct pt_regs *regs, int num, unsigned long val)
|
||||
case RI(iaoq[0]):
|
||||
case RI(iaoq[1]):
|
||||
/* set 2 lowest bits to ensure userspace privilege: */
|
||||
regs->iaoq[num - RI(iaoq[0])] = val | 3;
|
||||
regs->iaoq[num - RI(iaoq[0])] = val | PRIV_USER;
|
||||
return;
|
||||
case RI(sar): regs->sar = val;
|
||||
return;
|
||||
|
@ -192,7 +192,7 @@ static inline void arch_local_irq_enable(void)
|
||||
|
||||
static inline unsigned long arch_local_irq_save(void)
|
||||
{
|
||||
return irq_soft_mask_set_return(IRQS_DISABLED);
|
||||
return irq_soft_mask_or_return(IRQS_DISABLED);
|
||||
}
|
||||
|
||||
static inline bool arch_irqs_disabled_flags(unsigned long flags)
|
||||
|
@ -262,6 +262,17 @@ print_mapping(unsigned long start, unsigned long end, unsigned long size, bool e
|
||||
static unsigned long next_boundary(unsigned long addr, unsigned long end)
|
||||
{
|
||||
#ifdef CONFIG_STRICT_KERNEL_RWX
|
||||
unsigned long stext_phys;
|
||||
|
||||
stext_phys = __pa_symbol(_stext);
|
||||
|
||||
// Relocatable kernel running at non-zero real address
|
||||
if (stext_phys != 0) {
|
||||
// Start of relocated kernel text is a rodata boundary
|
||||
if (addr < stext_phys)
|
||||
return stext_phys;
|
||||
}
|
||||
|
||||
if (addr < __pa_symbol(__srwx_boundary))
|
||||
return __pa_symbol(__srwx_boundary);
|
||||
#endif
|
||||
|
@ -22,7 +22,7 @@
|
||||
* Used to avoid races in counting the nest-pmu units during hotplug
|
||||
* register and unregister
|
||||
*/
|
||||
static DEFINE_SPINLOCK(nest_init_lock);
|
||||
static DEFINE_MUTEX(nest_init_lock);
|
||||
static DEFINE_PER_CPU(struct imc_pmu_ref *, local_nest_imc_refc);
|
||||
static struct imc_pmu **per_nest_pmu_arr;
|
||||
static cpumask_t nest_imc_cpumask;
|
||||
@ -1629,7 +1629,7 @@ static void imc_common_mem_free(struct imc_pmu *pmu_ptr)
|
||||
static void imc_common_cpuhp_mem_free(struct imc_pmu *pmu_ptr)
|
||||
{
|
||||
if (pmu_ptr->domain == IMC_DOMAIN_NEST) {
|
||||
spin_lock(&nest_init_lock);
|
||||
mutex_lock(&nest_init_lock);
|
||||
if (nest_pmus == 1) {
|
||||
cpuhp_remove_state(CPUHP_AP_PERF_POWERPC_NEST_IMC_ONLINE);
|
||||
kfree(nest_imc_refc);
|
||||
@ -1639,7 +1639,7 @@ static void imc_common_cpuhp_mem_free(struct imc_pmu *pmu_ptr)
|
||||
|
||||
if (nest_pmus > 0)
|
||||
nest_pmus--;
|
||||
spin_unlock(&nest_init_lock);
|
||||
mutex_unlock(&nest_init_lock);
|
||||
}
|
||||
|
||||
/* Free core_imc memory */
|
||||
@ -1796,11 +1796,11 @@ int init_imc_pmu(struct device_node *parent, struct imc_pmu *pmu_ptr, int pmu_id
|
||||
* rest. To handle the cpuhotplug callback unregister, we track
|
||||
* the number of nest pmus in "nest_pmus".
|
||||
*/
|
||||
spin_lock(&nest_init_lock);
|
||||
mutex_lock(&nest_init_lock);
|
||||
if (nest_pmus == 0) {
|
||||
ret = init_nest_pmu_ref();
|
||||
if (ret) {
|
||||
spin_unlock(&nest_init_lock);
|
||||
mutex_unlock(&nest_init_lock);
|
||||
kfree(per_nest_pmu_arr);
|
||||
per_nest_pmu_arr = NULL;
|
||||
goto err_free_mem;
|
||||
@ -1808,7 +1808,7 @@ int init_imc_pmu(struct device_node *parent, struct imc_pmu *pmu_ptr, int pmu_id
|
||||
/* Register for cpu hotplug notification. */
|
||||
ret = nest_pmu_cpumask_init();
|
||||
if (ret) {
|
||||
spin_unlock(&nest_init_lock);
|
||||
mutex_unlock(&nest_init_lock);
|
||||
kfree(nest_imc_refc);
|
||||
kfree(per_nest_pmu_arr);
|
||||
per_nest_pmu_arr = NULL;
|
||||
@ -1816,7 +1816,7 @@ int init_imc_pmu(struct device_node *parent, struct imc_pmu *pmu_ptr, int pmu_id
|
||||
}
|
||||
}
|
||||
nest_pmus++;
|
||||
spin_unlock(&nest_init_lock);
|
||||
mutex_unlock(&nest_init_lock);
|
||||
break;
|
||||
case IMC_DOMAIN_CORE:
|
||||
ret = core_imc_pmu_cpumask_init();
|
||||
|
@ -80,6 +80,9 @@ ifeq ($(CONFIG_PERF_EVENTS),y)
|
||||
KBUILD_CFLAGS += -fno-omit-frame-pointer
|
||||
endif
|
||||
|
||||
# Avoid generating .eh_frame sections.
|
||||
KBUILD_CFLAGS += -fno-asynchronous-unwind-tables -fno-unwind-tables
|
||||
|
||||
KBUILD_CFLAGS_MODULE += $(call cc-option,-mno-relax)
|
||||
KBUILD_AFLAGS_MODULE += $(call as-option,-Wa$(comma)-mno-relax)
|
||||
|
||||
|
@ -48,6 +48,21 @@ static void __kprobes arch_simulate_insn(struct kprobe *p, struct pt_regs *regs)
|
||||
post_kprobe_handler(p, kcb, regs);
|
||||
}
|
||||
|
||||
static bool __kprobes arch_check_kprobe(struct kprobe *p)
|
||||
{
|
||||
unsigned long tmp = (unsigned long)p->addr - p->offset;
|
||||
unsigned long addr = (unsigned long)p->addr;
|
||||
|
||||
while (tmp <= addr) {
|
||||
if (tmp == addr)
|
||||
return true;
|
||||
|
||||
tmp += GET_INSN_LENGTH(*(u16 *)tmp);
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
int __kprobes arch_prepare_kprobe(struct kprobe *p)
|
||||
{
|
||||
unsigned long probe_addr = (unsigned long)p->addr;
|
||||
@ -55,6 +70,9 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p)
|
||||
if (probe_addr & 0x1)
|
||||
return -EILSEQ;
|
||||
|
||||
if (!arch_check_kprobe(p))
|
||||
return -EILSEQ;
|
||||
|
||||
/* copy instruction */
|
||||
p->opcode = *p->addr;
|
||||
|
||||
|
@ -153,7 +153,7 @@ int copy_oldmem_kernel(void *dst, unsigned long src, size_t count)
|
||||
|
||||
kvec.iov_base = dst;
|
||||
kvec.iov_len = count;
|
||||
iov_iter_kvec(&iter, WRITE, &kvec, 1, count);
|
||||
iov_iter_kvec(&iter, ITER_DEST, &kvec, 1, count);
|
||||
if (copy_oldmem_iter(&iter, src, count) < count)
|
||||
return -EFAULT;
|
||||
return 0;
|
||||
|
@ -128,7 +128,7 @@ int memcpy_real(void *dest, unsigned long src, size_t count)
|
||||
|
||||
kvec.iov_base = dest;
|
||||
kvec.iov_len = count;
|
||||
iov_iter_kvec(&iter, WRITE, &kvec, 1, count);
|
||||
iov_iter_kvec(&iter, ITER_DEST, &kvec, 1, count);
|
||||
if (memcpy_real_iter(&iter, src, count) < count)
|
||||
return -EFAULT;
|
||||
return 0;
|
||||
|
@ -14,13 +14,13 @@ endif
|
||||
|
||||
ifdef CONFIG_CC_IS_GCC
|
||||
RETPOLINE_CFLAGS := $(call cc-option,-mindirect-branch=thunk-extern -mindirect-branch-register)
|
||||
RETPOLINE_CFLAGS += $(call cc-option,-mindirect-branch-cs-prefix)
|
||||
RETPOLINE_VDSO_CFLAGS := $(call cc-option,-mindirect-branch=thunk-inline -mindirect-branch-register)
|
||||
endif
|
||||
ifdef CONFIG_CC_IS_CLANG
|
||||
RETPOLINE_CFLAGS := -mretpoline-external-thunk
|
||||
RETPOLINE_VDSO_CFLAGS := -mretpoline
|
||||
endif
|
||||
RETPOLINE_CFLAGS += $(call cc-option,-mindirect-branch-cs-prefix)
|
||||
|
||||
ifdef CONFIG_RETHUNK
|
||||
RETHUNK_CFLAGS := -mfunction-return=thunk-extern
|
||||
|
@ -6342,6 +6342,7 @@ __init int intel_pmu_init(void)
|
||||
break;
|
||||
|
||||
case INTEL_FAM6_SAPPHIRERAPIDS_X:
|
||||
case INTEL_FAM6_EMERALDRAPIDS_X:
|
||||
pmem = true;
|
||||
x86_pmu.late_ack = true;
|
||||
memcpy(hw_cache_event_ids, spr_hw_cache_event_ids, sizeof(hw_cache_event_ids));
|
||||
|
@ -677,6 +677,7 @@ static const struct x86_cpu_id intel_cstates_match[] __initconst = {
|
||||
X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_X, &icx_cstates),
|
||||
X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_D, &icx_cstates),
|
||||
X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, &icx_cstates),
|
||||
X86_MATCH_INTEL_FAM6_MODEL(EMERALDRAPIDS_X, &icx_cstates),
|
||||
|
||||
X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE_L, &icl_cstates),
|
||||
X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE, &icl_cstates),
|
||||
|
@ -39,7 +39,20 @@ static __always_inline unsigned long native_get_debugreg(int regno)
|
||||
asm("mov %%db6, %0" :"=r" (val));
|
||||
break;
|
||||
case 7:
|
||||
asm("mov %%db7, %0" :"=r" (val));
|
||||
/*
|
||||
* Apply __FORCE_ORDER to DR7 reads to forbid re-ordering them
|
||||
* with other code.
|
||||
*
|
||||
* This is needed because a DR7 access can cause a #VC exception
|
||||
* when running under SEV-ES. Taking a #VC exception is not a
|
||||
* safe thing to do just anywhere in the entry code and
|
||||
* re-ordering might place the access into an unsafe location.
|
||||
*
|
||||
* This happened in the NMI handler, where the DR7 read was
|
||||
* re-ordered to happen before the call to sev_es_ist_enter(),
|
||||
* causing stack recursion.
|
||||
*/
|
||||
asm volatile("mov %%db7, %0" : "=r" (val) : __FORCE_ORDER);
|
||||
break;
|
||||
default:
|
||||
BUG();
|
||||
@ -66,7 +79,16 @@ static __always_inline void native_set_debugreg(int regno, unsigned long value)
|
||||
asm("mov %0, %%db6" ::"r" (value));
|
||||
break;
|
||||
case 7:
|
||||
asm("mov %0, %%db7" ::"r" (value));
|
||||
/*
|
||||
* Apply __FORCE_ORDER to DR7 writes to forbid re-ordering them
|
||||
* with other code.
|
||||
*
|
||||
* While is didn't happen with a DR7 write (see the DR7 read
|
||||
* comment above which explains where it happened), add the
|
||||
* __FORCE_ORDER here too to avoid similar problems in the
|
||||
* future.
|
||||
*/
|
||||
asm volatile("mov %0, %%db7" ::"r" (value), __FORCE_ORDER);
|
||||
break;
|
||||
default:
|
||||
BUG();
|
||||
|
@ -330,7 +330,16 @@ static void __init bp_init_freq_invariance(void)
|
||||
|
||||
static void disable_freq_invariance_workfn(struct work_struct *work)
|
||||
{
|
||||
int cpu;
|
||||
|
||||
static_branch_disable(&arch_scale_freq_key);
|
||||
|
||||
/*
|
||||
* Set arch_freq_scale to a default value on all cpus
|
||||
* This negates the effect of scaling
|
||||
*/
|
||||
for_each_possible_cpu(cpu)
|
||||
per_cpu(arch_freq_scale, cpu) = SCHED_CAPACITY_SCALE;
|
||||
}
|
||||
|
||||
static DECLARE_WORK(disable_freq_invariance_work,
|
||||
|
@ -902,7 +902,7 @@ static enum ucode_state request_microcode_fw(int cpu, struct device *device,
|
||||
|
||||
kvec.iov_base = (void *)firmware->data;
|
||||
kvec.iov_len = firmware->size;
|
||||
iov_iter_kvec(&iter, WRITE, &kvec, 1, firmware->size);
|
||||
iov_iter_kvec(&iter, ITER_SOURCE, &kvec, 1, firmware->size);
|
||||
ret = generic_load_microcode(cpu, &iter);
|
||||
|
||||
release_firmware(firmware);
|
||||
|
@ -57,7 +57,7 @@ ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
|
||||
struct kvec kvec = { .iov_base = buf, .iov_len = count };
|
||||
struct iov_iter iter;
|
||||
|
||||
iov_iter_kvec(&iter, READ, &kvec, 1, count);
|
||||
iov_iter_kvec(&iter, ITER_DEST, &kvec, 1, count);
|
||||
|
||||
return read_from_oldmem(&iter, count, ppos,
|
||||
cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT));
|
||||
|
@ -718,15 +718,15 @@ static void *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
|
||||
struct bfq_io_cq *bic,
|
||||
struct bfq_group *bfqg)
|
||||
{
|
||||
struct bfq_queue *async_bfqq = bic_to_bfqq(bic, 0);
|
||||
struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, 1);
|
||||
struct bfq_queue *async_bfqq = bic_to_bfqq(bic, false);
|
||||
struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, true);
|
||||
struct bfq_entity *entity;
|
||||
|
||||
if (async_bfqq) {
|
||||
entity = &async_bfqq->entity;
|
||||
|
||||
if (entity->sched_data != &bfqg->sched_data) {
|
||||
bic_set_bfqq(bic, NULL, 0);
|
||||
bic_set_bfqq(bic, NULL, false);
|
||||
bfq_release_process_ref(bfqd, async_bfqq);
|
||||
}
|
||||
}
|
||||
@ -761,8 +761,8 @@ static void *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
|
||||
* request from the old cgroup.
|
||||
*/
|
||||
bfq_put_cooperator(sync_bfqq);
|
||||
bic_set_bfqq(bic, NULL, true);
|
||||
bfq_release_process_ref(bfqd, sync_bfqq);
|
||||
bic_set_bfqq(bic, NULL, 1);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -3180,7 +3180,7 @@ bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
|
||||
/*
|
||||
* Merge queues (that is, let bic redirect its requests to new_bfqq)
|
||||
*/
|
||||
bic_set_bfqq(bic, new_bfqq, 1);
|
||||
bic_set_bfqq(bic, new_bfqq, true);
|
||||
bfq_mark_bfqq_coop(new_bfqq);
|
||||
/*
|
||||
* new_bfqq now belongs to at least two bics (it is a shared queue):
|
||||
@ -5491,9 +5491,11 @@ static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio)
|
||||
|
||||
bfqq = bic_to_bfqq(bic, false);
|
||||
if (bfqq) {
|
||||
bfq_release_process_ref(bfqd, bfqq);
|
||||
struct bfq_queue *old_bfqq = bfqq;
|
||||
|
||||
bfqq = bfq_get_queue(bfqd, bio, false, bic, true);
|
||||
bic_set_bfqq(bic, bfqq, false);
|
||||
bfq_release_process_ref(bfqd, old_bfqq);
|
||||
}
|
||||
|
||||
bfqq = bic_to_bfqq(bic, true);
|
||||
@ -6627,7 +6629,7 @@ bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
|
||||
return bfqq;
|
||||
}
|
||||
|
||||
bic_set_bfqq(bic, NULL, 1);
|
||||
bic_set_bfqq(bic, NULL, true);
|
||||
|
||||
bfq_put_cooperator(bfqq);
|
||||
|
||||
|
@ -23,8 +23,8 @@ $(obj)/blacklist_hash_list: $(CONFIG_SYSTEM_BLACKLIST_HASH_LIST) FORCE
|
||||
targets += blacklist_hash_list
|
||||
|
||||
quiet_cmd_extract_certs = CERT $@
|
||||
cmd_extract_certs = $(obj)/extract-cert $(extract-cert-in) $@
|
||||
extract-cert-in = $(or $(filter-out $(obj)/extract-cert, $(real-prereqs)),"")
|
||||
cmd_extract_certs = $(obj)/extract-cert "$(extract-cert-in)" $@
|
||||
extract-cert-in = $(filter-out $(obj)/extract-cert, $(real-prereqs))
|
||||
|
||||
$(obj)/system_certificates.o: $(obj)/x509_certificate_list
|
||||
|
||||
|
@ -766,7 +766,7 @@ static int build_cipher_test_sglists(struct cipher_test_sglists *tsgls,
|
||||
struct iov_iter input;
|
||||
int err;
|
||||
|
||||
iov_iter_kvec(&input, WRITE, inputs, nr_inputs, src_total_len);
|
||||
iov_iter_kvec(&input, ITER_SOURCE, inputs, nr_inputs, src_total_len);
|
||||
err = build_test_sglist(&tsgls->src, cfg->src_divs, alignmask,
|
||||
cfg->inplace_mode != OUT_OF_PLACE ?
|
||||
max(dst_total_len, src_total_len) :
|
||||
@ -1180,7 +1180,7 @@ static int build_hash_sglist(struct test_sglist *tsgl,
|
||||
|
||||
kv.iov_base = (void *)vec->plaintext;
|
||||
kv.iov_len = vec->psize;
|
||||
iov_iter_kvec(&input, WRITE, &kv, 1, vec->psize);
|
||||
iov_iter_kvec(&input, ITER_SOURCE, &kv, 1, vec->psize);
|
||||
return build_test_sglist(tsgl, cfg->src_divs, alignmask, vec->psize,
|
||||
&input, divs);
|
||||
}
|
||||
|
@ -455,7 +455,7 @@ static ssize_t pfru_write(struct file *file, const char __user *buf,
|
||||
|
||||
iov.iov_base = (void __user *)buf;
|
||||
iov.iov_len = len;
|
||||
iov_iter_init(&iter, WRITE, &iov, 1, len);
|
||||
iov_iter_init(&iter, ITER_SOURCE, &iov, 1, len);
|
||||
|
||||
/* map the communication buffer */
|
||||
phy_addr = (phys_addr_t)((buf_info.addr_hi << 32) | buf_info.addr_lo);
|
||||
|
@ -3108,7 +3108,7 @@ int sata_down_spd_limit(struct ata_link *link, u32 spd_limit)
|
||||
*/
|
||||
if (spd > 1)
|
||||
mask &= (1 << (spd - 1)) - 1;
|
||||
else
|
||||
else if (link->sata_spd)
|
||||
return -EINVAL;
|
||||
|
||||
/* were we already at the bottom? */
|
||||
|
@ -1816,7 +1816,7 @@ int drbd_send(struct drbd_connection *connection, struct socket *sock,
|
||||
|
||||
/* THINK if (signal_pending) return ... ? */
|
||||
|
||||
iov_iter_kvec(&msg.msg_iter, WRITE, &iov, 1, size);
|
||||
iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, &iov, 1, size);
|
||||
|
||||
if (sock == connection->data.socket) {
|
||||
rcu_read_lock();
|
||||
|
@ -507,7 +507,7 @@ static int drbd_recv_short(struct socket *sock, void *buf, size_t size, int flag
|
||||
struct msghdr msg = {
|
||||
.msg_flags = (flags ? flags : MSG_WAITALL | MSG_NOSIGNAL)
|
||||
};
|
||||
iov_iter_kvec(&msg.msg_iter, READ, &iov, 1, size);
|
||||
iov_iter_kvec(&msg.msg_iter, ITER_DEST, &iov, 1, size);
|
||||
return sock_recvmsg(sock, &msg, msg.msg_flags);
|
||||
}
|
||||
|
||||
|
@ -243,7 +243,7 @@ static int lo_write_bvec(struct file *file, struct bio_vec *bvec, loff_t *ppos)
|
||||
struct iov_iter i;
|
||||
ssize_t bw;
|
||||
|
||||
iov_iter_bvec(&i, WRITE, bvec, 1, bvec->bv_len);
|
||||
iov_iter_bvec(&i, ITER_SOURCE, bvec, 1, bvec->bv_len);
|
||||
|
||||
file_start_write(file);
|
||||
bw = vfs_iter_write(file, &i, ppos, 0);
|
||||
@ -286,7 +286,7 @@ static int lo_read_simple(struct loop_device *lo, struct request *rq,
|
||||
ssize_t len;
|
||||
|
||||
rq_for_each_segment(bvec, rq, iter) {
|
||||
iov_iter_bvec(&i, READ, &bvec, 1, bvec.bv_len);
|
||||
iov_iter_bvec(&i, ITER_DEST, &bvec, 1, bvec.bv_len);
|
||||
len = vfs_iter_read(lo->lo_backing_file, &i, &pos, 0);
|
||||
if (len < 0)
|
||||
return len;
|
||||
@ -392,7 +392,7 @@ static void lo_rw_aio_complete(struct kiocb *iocb, long ret)
|
||||
}
|
||||
|
||||
static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
|
||||
loff_t pos, bool rw)
|
||||
loff_t pos, int rw)
|
||||
{
|
||||
struct iov_iter iter;
|
||||
struct req_iterator rq_iter;
|
||||
@ -448,7 +448,7 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
|
||||
cmd->iocb.ki_flags = IOCB_DIRECT;
|
||||
cmd->iocb.ki_ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0);
|
||||
|
||||
if (rw == WRITE)
|
||||
if (rw == ITER_SOURCE)
|
||||
ret = call_write_iter(file, &cmd->iocb, &iter);
|
||||
else
|
||||
ret = call_read_iter(file, &cmd->iocb, &iter);
|
||||
@ -490,12 +490,12 @@ static int do_req_filebacked(struct loop_device *lo, struct request *rq)
|
||||
return lo_fallocate(lo, rq, pos, FALLOC_FL_PUNCH_HOLE);
|
||||
case REQ_OP_WRITE:
|
||||
if (cmd->use_aio)
|
||||
return lo_rw_aio(lo, cmd, pos, WRITE);
|
||||
return lo_rw_aio(lo, cmd, pos, ITER_SOURCE);
|
||||
else
|
||||
return lo_write_simple(lo, rq, pos);
|
||||
case REQ_OP_READ:
|
||||
if (cmd->use_aio)
|
||||
return lo_rw_aio(lo, cmd, pos, READ);
|
||||
return lo_rw_aio(lo, cmd, pos, ITER_DEST);
|
||||
else
|
||||
return lo_read_simple(lo, rq, pos);
|
||||
default:
|
||||
|
@ -563,7 +563,7 @@ static int nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd, int index)
|
||||
u32 nbd_cmd_flags = 0;
|
||||
int sent = nsock->sent, skip = 0;
|
||||
|
||||
iov_iter_kvec(&from, WRITE, &iov, 1, sizeof(request));
|
||||
iov_iter_kvec(&from, ITER_SOURCE, &iov, 1, sizeof(request));
|
||||
|
||||
type = req_to_nbd_cmd_type(req);
|
||||
if (type == U32_MAX)
|
||||
@ -649,7 +649,7 @@ static int nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd, int index)
|
||||
|
||||
dev_dbg(nbd_to_dev(nbd), "request %p: sending %d bytes data\n",
|
||||
req, bvec.bv_len);
|
||||
iov_iter_bvec(&from, WRITE, &bvec, 1, bvec.bv_len);
|
||||
iov_iter_bvec(&from, ITER_SOURCE, &bvec, 1, bvec.bv_len);
|
||||
if (skip) {
|
||||
if (skip >= iov_iter_count(&from)) {
|
||||
skip -= iov_iter_count(&from);
|
||||
@ -701,7 +701,7 @@ static int nbd_read_reply(struct nbd_device *nbd, int index,
|
||||
int result;
|
||||
|
||||
reply->magic = 0;
|
||||
iov_iter_kvec(&to, READ, &iov, 1, sizeof(*reply));
|
||||
iov_iter_kvec(&to, ITER_DEST, &iov, 1, sizeof(*reply));
|
||||
result = sock_xmit(nbd, index, 0, &to, MSG_WAITALL, NULL);
|
||||
if (result < 0) {
|
||||
if (!nbd_disconnected(nbd->config))
|
||||
@ -790,7 +790,7 @@ static struct nbd_cmd *nbd_handle_reply(struct nbd_device *nbd, int index,
|
||||
struct iov_iter to;
|
||||
|
||||
rq_for_each_segment(bvec, req, iter) {
|
||||
iov_iter_bvec(&to, READ, &bvec, 1, bvec.bv_len);
|
||||
iov_iter_bvec(&to, ITER_DEST, &bvec, 1, bvec.bv_len);
|
||||
result = sock_xmit(nbd, index, 0, &to, MSG_WAITALL, NULL);
|
||||
if (result < 0) {
|
||||
dev_err(disk_to_dev(nbd->disk), "Receive data failed (result %d)\n",
|
||||
@ -1267,7 +1267,7 @@ static void send_disconnects(struct nbd_device *nbd)
|
||||
for (i = 0; i < config->num_connections; i++) {
|
||||
struct nbd_sock *nsock = config->socks[i];
|
||||
|
||||
iov_iter_kvec(&from, WRITE, &iov, 1, sizeof(request));
|
||||
iov_iter_kvec(&from, ITER_SOURCE, &iov, 1, sizeof(request));
|
||||
mutex_lock(&nsock->tx_lock);
|
||||
ret = sock_xmit(nbd, i, 1, &from, 0, NULL);
|
||||
if (ret < 0)
|
||||
|
@ -137,7 +137,7 @@ struct ublk_device {
|
||||
|
||||
char *__queues;
|
||||
|
||||
unsigned short queue_size;
|
||||
unsigned int queue_size;
|
||||
struct ublksrv_ctrl_dev_info dev_info;
|
||||
|
||||
struct blk_mq_tag_set tag_set;
|
||||
|
@ -857,7 +857,13 @@ static int __init sunxi_rsb_init(void)
|
||||
return ret;
|
||||
}
|
||||
|
||||
return platform_driver_register(&sunxi_rsb_driver);
|
||||
ret = platform_driver_register(&sunxi_rsb_driver);
|
||||
if (ret) {
|
||||
bus_unregister(&sunxi_rsb_bus);
|
||||
return ret;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
module_init(sunxi_rsb_init);
|
||||
|
||||
|
@ -1329,7 +1329,7 @@ SYSCALL_DEFINE3(getrandom, char __user *, ubuf, size_t, len, unsigned int, flags
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = import_single_range(READ, ubuf, len, &iov, &iter);
|
||||
ret = import_single_range(ITER_DEST, ubuf, len, &iov, &iter);
|
||||
if (unlikely(ret))
|
||||
return ret;
|
||||
return get_random_bytes_user(&iter);
|
||||
@ -1447,7 +1447,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
|
||||
return -EINVAL;
|
||||
if (get_user(len, p++))
|
||||
return -EFAULT;
|
||||
ret = import_single_range(WRITE, p, len, &iov, &iter);
|
||||
ret = import_single_range(ITER_SOURCE, p, len, &iov, &iter);
|
||||
if (unlikely(ret))
|
||||
return ret;
|
||||
ret = write_pool_user(&iter);
|
||||
|
@ -167,7 +167,7 @@ struct dma_fence *dma_fence_allocate_private_stub(void)
|
||||
0, 0);
|
||||
|
||||
set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
|
||||
&dma_fence_stub.flags);
|
||||
&fence->flags);
|
||||
|
||||
dma_fence_signal(fence);
|
||||
|
||||
|
@ -819,8 +819,10 @@ static int ioctl_send_response(struct client *client, union ioctl_arg *arg)
|
||||
|
||||
r = container_of(resource, struct inbound_transaction_resource,
|
||||
resource);
|
||||
if (is_fcp_request(r->request))
|
||||
if (is_fcp_request(r->request)) {
|
||||
kfree(r->data);
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (a->length != fw_get_response_length(r->request)) {
|
||||
ret = -EINVAL;
|
||||
|
@ -984,6 +984,8 @@ int __ref efi_mem_reserve_persistent(phys_addr_t addr, u64 size)
|
||||
/* first try to find a slot in an existing linked list entry */
|
||||
for (prsv = efi_memreserve_root->next; prsv; ) {
|
||||
rsv = memremap(prsv, sizeof(*rsv), MEMREMAP_WB);
|
||||
if (!rsv)
|
||||
return -ENOMEM;
|
||||
index = atomic_fetch_add_unless(&rsv->count, 1, rsv->size);
|
||||
if (index < rsv->size) {
|
||||
rsv->entry[index].base = addr;
|
||||
|
@ -33,7 +33,7 @@ int __init efi_memattr_init(void)
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
if (tbl->version > 1) {
|
||||
if (tbl->version > 2) {
|
||||
pr_warn("Unexpected EFI Memory Attributes table version %d\n",
|
||||
tbl->version);
|
||||
goto unmap;
|
||||
|
@ -574,20 +574,27 @@ static int m10bmc_sec_probe(struct platform_device *pdev)
|
||||
len = scnprintf(buf, SEC_UPDATE_LEN_MAX, "secure-update%d",
|
||||
sec->fw_name_id);
|
||||
sec->fw_name = kmemdup_nul(buf, len, GFP_KERNEL);
|
||||
if (!sec->fw_name)
|
||||
return -ENOMEM;
|
||||
if (!sec->fw_name) {
|
||||
ret = -ENOMEM;
|
||||
goto fw_name_fail;
|
||||
}
|
||||
|
||||
fwl = firmware_upload_register(THIS_MODULE, sec->dev, sec->fw_name,
|
||||
&m10bmc_ops, sec);
|
||||
if (IS_ERR(fwl)) {
|
||||
dev_err(sec->dev, "Firmware Upload driver failed to start\n");
|
||||
kfree(sec->fw_name);
|
||||
xa_erase(&fw_upload_xa, sec->fw_name_id);
|
||||
return PTR_ERR(fwl);
|
||||
ret = PTR_ERR(fwl);
|
||||
goto fw_uploader_fail;
|
||||
}
|
||||
|
||||
sec->fwl = fwl;
|
||||
return 0;
|
||||
|
||||
fw_uploader_fail:
|
||||
kfree(sec->fw_name);
|
||||
fw_name_fail:
|
||||
xa_erase(&fw_upload_xa, sec->fw_name_id);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int m10bmc_sec_remove(struct platform_device *pdev)
|
||||
|
@ -213,9 +213,9 @@ static int s10_ops_write_init(struct fpga_manager *mgr,
|
||||
/* Allocate buffers from the service layer's pool. */
|
||||
for (i = 0; i < NUM_SVC_BUFS; i++) {
|
||||
kbuf = stratix10_svc_allocate_memory(priv->chan, SVC_BUF_SIZE);
|
||||
if (!kbuf) {
|
||||
if (IS_ERR(kbuf)) {
|
||||
s10_free_buffers(mgr);
|
||||
ret = -ENOMEM;
|
||||
ret = PTR_ERR(kbuf);
|
||||
goto init_done;
|
||||
}
|
||||
|
||||
|
@ -659,7 +659,7 @@ static void sbefifo_collect_async_ffdc(struct sbefifo *sbefifo)
|
||||
}
|
||||
ffdc_iov.iov_base = ffdc;
|
||||
ffdc_iov.iov_len = SBEFIFO_MAX_FFDC_SIZE;
|
||||
iov_iter_kvec(&ffdc_iter, WRITE, &ffdc_iov, 1, SBEFIFO_MAX_FFDC_SIZE);
|
||||
iov_iter_kvec(&ffdc_iter, ITER_DEST, &ffdc_iov, 1, SBEFIFO_MAX_FFDC_SIZE);
|
||||
cmd[0] = cpu_to_be32(2);
|
||||
cmd[1] = cpu_to_be32(SBEFIFO_CMD_GET_SBE_FFDC);
|
||||
rc = sbefifo_do_command(sbefifo, cmd, 2, &ffdc_iter);
|
||||
@ -756,7 +756,7 @@ int sbefifo_submit(struct device *dev, const __be32 *command, size_t cmd_len,
|
||||
rbytes = (*resp_len) * sizeof(__be32);
|
||||
resp_iov.iov_base = response;
|
||||
resp_iov.iov_len = rbytes;
|
||||
iov_iter_kvec(&resp_iter, WRITE, &resp_iov, 1, rbytes);
|
||||
iov_iter_kvec(&resp_iter, ITER_DEST, &resp_iov, 1, rbytes);
|
||||
|
||||
/* Perform the command */
|
||||
rc = mutex_lock_interruptible(&sbefifo->lock);
|
||||
@ -839,7 +839,7 @@ static ssize_t sbefifo_user_read(struct file *file, char __user *buf,
|
||||
/* Prepare iov iterator */
|
||||
resp_iov.iov_base = buf;
|
||||
resp_iov.iov_len = len;
|
||||
iov_iter_init(&resp_iter, WRITE, &resp_iov, 1, len);
|
||||
iov_iter_init(&resp_iter, ITER_DEST, &resp_iov, 1, len);
|
||||
|
||||
/* Perform the command */
|
||||
rc = mutex_lock_interruptible(&sbefifo->lock);
|
||||
|
@ -790,8 +790,8 @@ static void gfx_v11_0_read_wave_data(struct amdgpu_device *adev, uint32_t simd,
|
||||
* zero here */
|
||||
WARN_ON(simd != 0);
|
||||
|
||||
/* type 2 wave data */
|
||||
dst[(*no_fields)++] = 2;
|
||||
/* type 3 wave data */
|
||||
dst[(*no_fields)++] = 3;
|
||||
dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_STATUS);
|
||||
dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_PC_LO);
|
||||
dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_PC_HI);
|
||||
|
@ -337,7 +337,13 @@ const struct nbio_hdp_flush_reg nbio_v4_3_hdp_flush_reg = {
|
||||
|
||||
static void nbio_v4_3_init_registers(struct amdgpu_device *adev)
|
||||
{
|
||||
return;
|
||||
if (adev->ip_versions[NBIO_HWIP][0] == IP_VERSION(4, 3, 0)) {
|
||||
uint32_t data;
|
||||
|
||||
data = RREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF2_STRAP2);
|
||||
data &= ~RCC_DEV0_EPF2_STRAP2__STRAP_NO_SOFT_RESET_DEV0_F2_MASK;
|
||||
WREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF2_STRAP2, data);
|
||||
}
|
||||
}
|
||||
|
||||
static u32 nbio_v4_3_get_rom_offset(struct amdgpu_device *adev)
|
||||
|
@ -8784,6 +8784,13 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm,
|
||||
if (!dm_old_crtc_state->stream)
|
||||
goto skip_modeset;
|
||||
|
||||
/* Unset freesync video if it was active before */
|
||||
if (dm_old_crtc_state->freesync_config.state == VRR_STATE_ACTIVE_FIXED) {
|
||||
dm_new_crtc_state->freesync_config.state = VRR_STATE_INACTIVE;
|
||||
dm_new_crtc_state->freesync_config.fixed_refresh_in_uhz = 0;
|
||||
}
|
||||
|
||||
/* Now check if we should set freesync video mode */
|
||||
if (amdgpu_freesync_vid_mode && dm_new_crtc_state->stream &&
|
||||
is_timing_unchanged_for_freesync(new_crtc_state,
|
||||
old_crtc_state)) {
|
||||
|
@ -1498,6 +1498,20 @@ static int smu_disable_dpms(struct smu_context *smu)
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* For SMU 13.0.4/11, PMFW will handle the features disablement properly
|
||||
* for gpu reset case. Driver involvement is unnecessary.
|
||||
*/
|
||||
if (amdgpu_in_reset(adev)) {
|
||||
switch (adev->ip_versions[MP1_HWIP][0]) {
|
||||
case IP_VERSION(13, 0, 4):
|
||||
case IP_VERSION(13, 0, 11):
|
||||
return 0;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* For gpu reset, runpm and hibernation through BACO,
|
||||
* BACO feature has to be kept enabled.
|
||||
|
@ -1323,7 +1323,7 @@ static const struct intel_cdclk_vals adlp_cdclk_table[] = {
|
||||
{ .refclk = 24000, .cdclk = 192000, .divider = 2, .ratio = 16 },
|
||||
{ .refclk = 24000, .cdclk = 312000, .divider = 2, .ratio = 26 },
|
||||
{ .refclk = 24000, .cdclk = 552000, .divider = 2, .ratio = 46 },
|
||||
{ .refclk = 24400, .cdclk = 648000, .divider = 2, .ratio = 54 },
|
||||
{ .refclk = 24000, .cdclk = 648000, .divider = 2, .ratio = 54 },
|
||||
|
||||
{ .refclk = 38400, .cdclk = 179200, .divider = 3, .ratio = 14 },
|
||||
{ .refclk = 38400, .cdclk = 192000, .divider = 2, .ratio = 10 },
|
||||
|
@ -1861,12 +1861,20 @@ static int get_ppgtt(struct drm_i915_file_private *file_priv,
|
||||
vm = ctx->vm;
|
||||
GEM_BUG_ON(!vm);
|
||||
|
||||
err = xa_alloc(&file_priv->vm_xa, &id, vm, xa_limit_32b, GFP_KERNEL);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
/*
|
||||
* Get a reference for the allocated handle. Once the handle is
|
||||
* visible in the vm_xa table, userspace could try to close it
|
||||
* from under our feet, so we need to hold the extra reference
|
||||
* first.
|
||||
*/
|
||||
i915_vm_get(vm);
|
||||
|
||||
err = xa_alloc(&file_priv->vm_xa, &id, vm, xa_limit_32b, GFP_KERNEL);
|
||||
if (err) {
|
||||
i915_vm_put(vm);
|
||||
return err;
|
||||
}
|
||||
|
||||
GEM_BUG_ON(id == 0); /* reserved for invalid/unassigned ppgtt */
|
||||
args->value = id;
|
||||
args->size = 0;
|
||||
|
@ -305,10 +305,6 @@ i915_gem_object_set_tiling(struct drm_i915_gem_object *obj,
|
||||
spin_unlock(&obj->vma.lock);
|
||||
|
||||
obj->tiling_and_stride = tiling | stride;
|
||||
i915_gem_object_unlock(obj);
|
||||
|
||||
/* Force the fence to be reacquired for GTT access */
|
||||
i915_gem_object_release_mmap_gtt(obj);
|
||||
|
||||
/* Try to preallocate memory required to save swizzling on put-pages */
|
||||
if (i915_gem_object_needs_bit17_swizzle(obj)) {
|
||||
@ -321,6 +317,11 @@ i915_gem_object_set_tiling(struct drm_i915_gem_object *obj,
|
||||
obj->bit_17 = NULL;
|
||||
}
|
||||
|
||||
i915_gem_object_unlock(obj);
|
||||
|
||||
/* Force the fence to be reacquired for GTT access */
|
||||
i915_gem_object_release_mmap_gtt(obj);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -528,7 +528,7 @@ struct i915_request *intel_context_create_request(struct intel_context *ce)
|
||||
return rq;
|
||||
}
|
||||
|
||||
struct i915_request *intel_context_find_active_request(struct intel_context *ce)
|
||||
struct i915_request *intel_context_get_active_request(struct intel_context *ce)
|
||||
{
|
||||
struct intel_context *parent = intel_context_to_parent(ce);
|
||||
struct i915_request *rq, *active = NULL;
|
||||
@ -552,6 +552,8 @@ struct i915_request *intel_context_find_active_request(struct intel_context *ce)
|
||||
|
||||
active = rq;
|
||||
}
|
||||
if (active)
|
||||
active = i915_request_get_rcu(active);
|
||||
spin_unlock_irqrestore(&parent->guc_state.lock, flags);
|
||||
|
||||
return active;
|
||||
|
@ -268,8 +268,7 @@ int intel_context_prepare_remote_request(struct intel_context *ce,
|
||||
|
||||
struct i915_request *intel_context_create_request(struct intel_context *ce);
|
||||
|
||||
struct i915_request *
|
||||
intel_context_find_active_request(struct intel_context *ce);
|
||||
struct i915_request *intel_context_get_active_request(struct intel_context *ce);
|
||||
|
||||
static inline bool intel_context_is_barrier(const struct intel_context *ce)
|
||||
{
|
||||
|
@ -248,8 +248,8 @@ void intel_engine_dump_active_requests(struct list_head *requests,
|
||||
ktime_t intel_engine_get_busy_time(struct intel_engine_cs *engine,
|
||||
ktime_t *now);
|
||||
|
||||
struct i915_request *
|
||||
intel_engine_execlist_find_hung_request(struct intel_engine_cs *engine);
|
||||
void intel_engine_get_hung_entity(struct intel_engine_cs *engine,
|
||||
struct intel_context **ce, struct i915_request **rq);

u32 intel_engine_context_size(struct intel_gt *gt, u8 class);

struct intel_context *

@@ -2078,17 +2078,6 @@ static void print_request_ring(struct drm_printer *m, struct i915_request *rq)
}
}

static unsigned long list_count(struct list_head *list)
{
struct list_head *pos;
unsigned long count = 0;

list_for_each(pos, list)
count++;

return count;
}

static unsigned long read_ul(void *p, size_t x)
{
return *(unsigned long *)(p + x);
@@ -2180,11 +2169,11 @@ void intel_engine_dump_active_requests(struct list_head *requests,
}
}

static void engine_dump_active_requests(struct intel_engine_cs *engine, struct drm_printer *m)
static void engine_dump_active_requests(struct intel_engine_cs *engine,
struct drm_printer *m)
{
struct intel_context *hung_ce = NULL;
struct i915_request *hung_rq = NULL;
struct intel_context *ce;
bool guc;

/*
* No need for an engine->irq_seqno_barrier() before the seqno reads.
@@ -2193,27 +2182,22 @@ static void engine_dump_active_requests(struct intel_engine_cs *engine, struct d
* But the intention here is just to report an instantaneous snapshot
* so that's fine.
*/
lockdep_assert_held(&engine->sched_engine->lock);
intel_engine_get_hung_entity(engine, &hung_ce, &hung_rq);

drm_printf(m, "\tRequests:\n");

guc = intel_uc_uses_guc_submission(&engine->gt->uc);
if (guc) {
ce = intel_engine_get_hung_context(engine);
if (ce)
hung_rq = intel_context_find_active_request(ce);
} else {
hung_rq = intel_engine_execlist_find_hung_request(engine);
}

if (hung_rq)
engine_dump_request(hung_rq, m, "\t\thung");
else if (hung_ce)
drm_printf(m, "\t\tGot hung ce but no hung rq!\n");

if (guc)
if (intel_uc_uses_guc_submission(&engine->gt->uc))
intel_guc_dump_active_requests(engine, hung_rq, m);
else
intel_engine_dump_active_requests(&engine->sched_engine->requests,
hung_rq, m);
intel_execlists_dump_active_requests(engine, hung_rq, m);

if (hung_rq)
i915_request_put(hung_rq);
}

void intel_engine_dump(struct intel_engine_cs *engine,
@@ -2223,7 +2207,6 @@ void intel_engine_dump(struct intel_engine_cs *engine,
struct i915_gpu_error * const error = &engine->i915->gpu_error;
struct i915_request *rq;
intel_wakeref_t wakeref;
unsigned long flags;
ktime_t dummy;

if (header) {
@@ -2260,13 +2243,8 @@ void intel_engine_dump(struct intel_engine_cs *engine,
i915_reset_count(error));
print_properties(engine, m);

spin_lock_irqsave(&engine->sched_engine->lock, flags);
engine_dump_active_requests(engine, m);

drm_printf(m, "\tOn hold?: %lu\n",
list_count(&engine->sched_engine->hold));
spin_unlock_irqrestore(&engine->sched_engine->lock, flags);

drm_printf(m, "\tMMIO base: 0x%08x\n", engine->mmio_base);
wakeref = intel_runtime_pm_get_if_in_use(engine->uncore->rpm);
if (wakeref) {
@@ -2312,8 +2290,7 @@ intel_engine_create_virtual(struct intel_engine_cs **siblings,
return siblings[0]->cops->create_virtual(siblings, count, flags);
}

struct i915_request *
intel_engine_execlist_find_hung_request(struct intel_engine_cs *engine)
static struct i915_request *engine_execlist_find_hung_request(struct intel_engine_cs *engine)
{
struct i915_request *request, *active = NULL;

@@ -2365,6 +2342,33 @@ intel_engine_execlist_find_hung_request(struct intel_engine_cs *engine)
return active;
}

void intel_engine_get_hung_entity(struct intel_engine_cs *engine,
struct intel_context **ce, struct i915_request **rq)
{
unsigned long flags;

*ce = intel_engine_get_hung_context(engine);
if (*ce) {
intel_engine_clear_hung_context(engine);

*rq = intel_context_get_active_request(*ce);
return;
}

/*
* Getting here with GuC enabled means it is a forced error capture
* with no actual hang. So, no need to attempt the execlist search.
*/
if (intel_uc_uses_guc_submission(&engine->gt->uc))
return;

spin_lock_irqsave(&engine->sched_engine->lock, flags);
*rq = engine_execlist_find_hung_request(engine);
if (*rq)
*rq = i915_request_get_rcu(*rq);
spin_unlock_irqrestore(&engine->sched_engine->lock, flags);
}

void xehp_enable_ccs_engines(struct intel_engine_cs *engine)
{
/*
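For illustration, a minimal self-contained userspace sketch of what the list_count() helper shown above does, using a toy circular doubly-linked list; the list type and helper names here are invented for the example and are not the kernel's <linux/list.h>:

/* Illustrative only: a userspace rendering of list_count() on a toy list. */
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

static void list_add_tail(struct list_head *item, struct list_head *head)
{
	item->prev = head->prev;
	item->next = head;
	head->prev->next = item;
	head->prev = item;
}

/* Same idea as the driver helper: walk the list once and count entries. */
static unsigned long list_count(const struct list_head *head)
{
	const struct list_head *pos;
	unsigned long count = 0;

	for (pos = head->next; pos != head; pos = pos->next)
		count++;

	return count;
}

int main(void)
{
	struct list_head head = { &head, &head };
	struct list_head a, b, c;

	list_add_tail(&a, &head);
	list_add_tail(&b, &head);
	list_add_tail(&c, &head);

	printf("%lu entries\n", list_count(&head)); /* prints 3 */
	return 0;
}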
@@ -4144,6 +4144,33 @@ void intel_execlists_show_requests(struct intel_engine_cs *engine,
spin_unlock_irqrestore(&sched_engine->lock, flags);
}

static unsigned long list_count(struct list_head *list)
{
struct list_head *pos;
unsigned long count = 0;

list_for_each(pos, list)
count++;

return count;
}

void intel_execlists_dump_active_requests(struct intel_engine_cs *engine,
struct i915_request *hung_rq,
struct drm_printer *m)
{
unsigned long flags;

spin_lock_irqsave(&engine->sched_engine->lock, flags);

intel_engine_dump_active_requests(&engine->sched_engine->requests, hung_rq, m);

drm_printf(m, "\tOn hold?: %lu\n",
list_count(&engine->sched_engine->hold));

spin_unlock_irqrestore(&engine->sched_engine->lock, flags);
}

#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
#include "selftest_execlists.c"
#endif

@@ -32,6 +32,10 @@ void intel_execlists_show_requests(struct intel_engine_cs *engine,
int indent),
unsigned int max);

void intel_execlists_dump_active_requests(struct intel_engine_cs *engine,
struct i915_request *hung_rq,
struct drm_printer *m);

bool
intel_engine_in_execlists_submission_mode(const struct intel_engine_cs *engine);
@@ -1685,7 +1685,7 @@ static void __guc_reset_context(struct intel_context *ce, intel_engine_mask_t st
goto next_context;

guilty = false;
rq = intel_context_find_active_request(ce);
rq = intel_context_get_active_request(ce);
if (!rq) {
head = ce->ring->tail;
goto out_replay;
@@ -1698,6 +1698,7 @@ static void __guc_reset_context(struct intel_context *ce, intel_engine_mask_t st
head = intel_ring_wrap(ce->ring, rq->head);

__i915_request_reset(rq, guilty);
i915_request_put(rq);
out_replay:
guc_reset_state(ce, head, guilty);
next_context:
@@ -4587,6 +4588,8 @@ void intel_guc_find_hung_context(struct intel_engine_cs *engine)

xa_lock_irqsave(&guc->context_lookup, flags);
xa_for_each(&guc->context_lookup, index, ce) {
bool found;

if (!kref_get_unless_zero(&ce->ref))
continue;

@@ -4603,10 +4606,18 @@ void intel_guc_find_hung_context(struct intel_engine_cs *engine)
goto next;
}

found = false;
spin_lock(&ce->guc_state.lock);
list_for_each_entry(rq, &ce->guc_state.requests, sched.link) {
if (i915_test_request_state(rq) != I915_REQUEST_ACTIVE)
continue;

found = true;
break;
}
spin_unlock(&ce->guc_state.lock);

if (found) {
intel_engine_set_hung_context(engine, ce);

/* Can only cope with one hang at a time... */
@@ -4614,6 +4625,7 @@ void intel_guc_find_hung_context(struct intel_engine_cs *engine)
xa_lock(&guc->context_lookup);
goto done;
}

next:
intel_context_put(ce);
xa_lock(&guc->context_lookup);
@@ -1592,43 +1592,20 @@ capture_engine(struct intel_engine_cs *engine,
{
struct intel_engine_capture_vma *capture = NULL;
struct intel_engine_coredump *ee;
struct intel_context *ce;
struct intel_context *ce = NULL;
struct i915_request *rq = NULL;
unsigned long flags;

ee = intel_engine_coredump_alloc(engine, ALLOW_FAIL, dump_flags);
if (!ee)
return NULL;

ce = intel_engine_get_hung_context(engine);
if (ce) {
intel_engine_clear_hung_context(engine);
rq = intel_context_find_active_request(ce);
if (!rq || !i915_request_started(rq))
goto no_request_capture;
} else {
/*
* Getting here with GuC enabled means it is a forced error capture
* with no actual hang. So, no need to attempt the execlist search.
*/
if (!intel_uc_uses_guc_submission(&engine->gt->uc)) {
spin_lock_irqsave(&engine->sched_engine->lock, flags);
rq = intel_engine_execlist_find_hung_request(engine);
spin_unlock_irqrestore(&engine->sched_engine->lock,
flags);
}
}
if (rq)
rq = i915_request_get_rcu(rq);

if (!rq)
intel_engine_get_hung_entity(engine, &ce, &rq);
if (!rq || !i915_request_started(rq))
goto no_request_capture;

capture = intel_engine_coredump_add_request(ee, rq, ATOMIC_MAYFAIL);
if (!capture) {
i915_request_put(rq);
if (!capture)
goto no_request_capture;
}
if (dump_flags & CORE_DUMP_FLAG_IS_GUC_CAPTURE)
intel_guc_capture_get_matching_node(engine->gt, ee, ce);

@@ -1638,6 +1615,8 @@ capture_engine(struct intel_engine_cs *engine,
return ee;

no_request_capture:
if (rq)
i915_request_put(rq);
kfree(ee);
return NULL;
}
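The capture_engine() hunk above is about dropping the request reference on every exit path. As a self-contained userspace sketch of that shape, with a toy refcount and invented names (not the i915 API):

/* Illustrative only: take a reference, drop it on success and error paths. */
#include <stdio.h>
#include <stdlib.h>

struct request { int refcount; };

static struct request *request_get(struct request *rq)
{
	if (rq)
		rq->refcount++;
	return rq;
}

static void request_put(struct request *rq)
{
	if (rq && --rq->refcount == 0)
		free(rq);
}

static int capture(struct request *rq_in, int fail_alloc)
{
	struct request *rq = request_get(rq_in);	/* reference taken here */
	int err = 0;

	if (!rq || fail_alloc) {	/* e.g. coredump allocation failed */
		err = -1;
		goto out;
	}
	/* ... record the request ... */
out:
	request_put(rq);		/* dropped on every path */
	return err;
}

int main(void)
{
	struct request *rq = calloc(1, sizeof(*rq));

	rq->refcount = 1;
	printf("%d %d\n", capture(rq, 0), capture(rq, 1));
	request_put(rq);
	return 0;
}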
@ -1193,14 +1193,11 @@ static int boe_panel_enter_sleep_mode(struct boe_panel *boe)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int boe_panel_unprepare(struct drm_panel *panel)
|
||||
static int boe_panel_disable(struct drm_panel *panel)
|
||||
{
|
||||
struct boe_panel *boe = to_boe_panel(panel);
|
||||
int ret;
|
||||
|
||||
if (!boe->prepared)
|
||||
return 0;
|
||||
|
||||
ret = boe_panel_enter_sleep_mode(boe);
|
||||
if (ret < 0) {
|
||||
dev_err(panel->dev, "failed to set panel off: %d\n", ret);
|
||||
@ -1209,6 +1206,16 @@ static int boe_panel_unprepare(struct drm_panel *panel)
|
||||
|
||||
msleep(150);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int boe_panel_unprepare(struct drm_panel *panel)
|
||||
{
|
||||
struct boe_panel *boe = to_boe_panel(panel);
|
||||
|
||||
if (!boe->prepared)
|
||||
return 0;
|
||||
|
||||
if (boe->desc->discharge_on_disable) {
|
||||
regulator_disable(boe->avee);
|
||||
regulator_disable(boe->avdd);
|
||||
@ -1528,6 +1535,7 @@ static enum drm_panel_orientation boe_panel_get_orientation(struct drm_panel *pa
|
||||
}
|
||||
|
||||
static const struct drm_panel_funcs boe_panel_funcs = {
|
||||
.disable = boe_panel_disable,
|
||||
.unprepare = boe_panel_unprepare,
|
||||
.prepare = boe_panel_prepare,
|
||||
.enable = boe_panel_enable,
|
||||
|
@ -665,18 +665,8 @@ static const struct drm_crtc_helper_funcs ssd130x_crtc_helper_funcs = {
|
||||
.atomic_check = ssd130x_crtc_helper_atomic_check,
|
||||
};
|
||||
|
||||
static void ssd130x_crtc_reset(struct drm_crtc *crtc)
|
||||
{
|
||||
struct drm_device *drm = crtc->dev;
|
||||
struct ssd130x_device *ssd130x = drm_to_ssd130x(drm);
|
||||
|
||||
ssd130x_init(ssd130x);
|
||||
|
||||
drm_atomic_helper_crtc_reset(crtc);
|
||||
}
|
||||
|
||||
static const struct drm_crtc_funcs ssd130x_crtc_funcs = {
|
||||
.reset = ssd130x_crtc_reset,
|
||||
.reset = drm_atomic_helper_crtc_reset,
|
||||
.destroy = drm_crtc_cleanup,
|
||||
.set_config = drm_atomic_helper_set_config,
|
||||
.page_flip = drm_atomic_helper_page_flip,
|
||||
@ -695,6 +685,12 @@ static void ssd130x_encoder_helper_atomic_enable(struct drm_encoder *encoder,
|
||||
if (ret)
|
||||
return;
|
||||
|
||||
ret = ssd130x_init(ssd130x);
|
||||
if (ret) {
|
||||
ssd130x_power_off(ssd130x);
|
||||
return;
|
||||
}
|
||||
|
||||
ssd130x_write_cmd(ssd130x, 1, SSD130X_DISPLAY_ON);
|
||||
|
||||
backlight_enable(ssd130x->bl_dev);
|
||||
|
@ -3009,7 +3009,8 @@ static int vc4_hdmi_cec_init(struct vc4_hdmi *vc4_hdmi)
|
||||
}
|
||||
|
||||
vc4_hdmi->cec_adap = cec_allocate_adapter(&vc4_hdmi_cec_adap_ops,
|
||||
vc4_hdmi, "vc4",
|
||||
vc4_hdmi,
|
||||
vc4_hdmi->variant->card_name,
|
||||
CEC_CAP_DEFAULTS |
|
||||
CEC_CAP_CONNECTOR_INFO, 1);
|
||||
ret = PTR_ERR_OR_ZERO(vc4_hdmi->cec_adap);
|
||||
|
@@ -1911,7 +1911,7 @@ static void hv_balloon_debugfs_init(struct hv_dynmem_device *b)

static void hv_balloon_debugfs_exit(struct hv_dynmem_device *b)
{
debugfs_remove(debugfs_lookup("hv-balloon", NULL));
debugfs_lookup_and_remove("hv-balloon", NULL);
}

#else
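A short kernel-context sketch (not buildable standalone, assuming the 6.1-era debugfs API) of why the hunk above swaps the calls: debugfs_lookup() returns the dentry with an extra reference that the old one-liner never dropped, while debugfs_lookup_and_remove() looks up, removes, and drops that reference in one helper.

static void example_debugfs_exit(void)
{
	/* Old, leaks the lookup reference:
	 *	debugfs_remove(debugfs_lookup("hv-balloon", NULL));
	 */
	debugfs_lookup_and_remove("hv-balloon", NULL);	/* lookup + remove + dput */
}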
@ -396,6 +396,8 @@ static const struct pci_device_id i2_designware_pci_ids[] = {
|
||||
{ PCI_VDEVICE(ATI, 0x73a4), navi_amd },
|
||||
{ PCI_VDEVICE(ATI, 0x73e4), navi_amd },
|
||||
{ PCI_VDEVICE(ATI, 0x73c4), navi_amd },
|
||||
{ PCI_VDEVICE(ATI, 0x7444), navi_amd },
|
||||
{ PCI_VDEVICE(ATI, 0x7464), navi_amd },
|
||||
{ 0,}
|
||||
};
|
||||
MODULE_DEVICE_TABLE(pci, i2_designware_pci_ids);
|
||||
|
@@ -826,8 +826,8 @@ static int mxs_i2c_probe(struct platform_device *pdev)
/* Setup the DMA */
i2c->dmach = dma_request_chan(dev, "rx-tx");
if (IS_ERR(i2c->dmach)) {
dev_err(dev, "Failed to request dma\n");
return PTR_ERR(i2c->dmach);
return dev_err_probe(dev, PTR_ERR(i2c->dmach),
"Failed to request dma\n");
}

platform_set_drvdata(pdev, i2c);
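The mxs-i2c hunk above adopts the dev_err_probe() pattern: one call that logs the failure, records the reason when the error is -EPROBE_DEFER, and returns the error code. A kernel-context sketch with an invented probe function (not a real driver, not buildable standalone):

static int example_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct dma_chan *ch;

	ch = dma_request_chan(dev, "rx-tx");
	if (IS_ERR(ch))
		return dev_err_probe(dev, PTR_ERR(ch),
				     "Failed to request dma\n");

	/* ... rest of probe ... */
	return 0;
}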
@ -80,7 +80,7 @@ enum {
|
||||
#define DEFAULT_SCL_RATE (100 * 1000) /* Hz */
|
||||
|
||||
/**
|
||||
* struct i2c_spec_values:
|
||||
* struct i2c_spec_values - I2C specification values for various modes
|
||||
* @min_hold_start_ns: min hold time (repeated) START condition
|
||||
* @min_low_ns: min LOW period of the SCL clock
|
||||
* @min_high_ns: min HIGH period of the SCL clock
|
||||
@ -136,7 +136,7 @@ static const struct i2c_spec_values fast_mode_plus_spec = {
|
||||
};
|
||||
|
||||
/**
|
||||
* struct rk3x_i2c_calced_timings:
|
||||
* struct rk3x_i2c_calced_timings - calculated V1 timings
|
||||
* @div_low: Divider output for low
|
||||
* @div_high: Divider output for high
|
||||
* @tuning: Used to adjust setup/hold data time,
|
||||
@ -159,7 +159,7 @@ enum rk3x_i2c_state {
|
||||
};
|
||||
|
||||
/**
|
||||
* struct rk3x_i2c_soc_data:
|
||||
* struct rk3x_i2c_soc_data - SOC-specific data
|
||||
* @grf_offset: offset inside the grf regmap for setting the i2c type
|
||||
* @calc_timings: Callback function for i2c timing information calculated
|
||||
*/
|
||||
@ -239,7 +239,8 @@ static inline void rk3x_i2c_clean_ipd(struct rk3x_i2c *i2c)
|
||||
}
|
||||
|
||||
/**
|
||||
* Generate a START condition, which triggers a REG_INT_START interrupt.
|
||||
* rk3x_i2c_start - Generate a START condition, which triggers a REG_INT_START interrupt.
|
||||
* @i2c: target controller data
|
||||
*/
|
||||
static void rk3x_i2c_start(struct rk3x_i2c *i2c)
|
||||
{
|
||||
@ -258,8 +259,8 @@ static void rk3x_i2c_start(struct rk3x_i2c *i2c)
|
||||
}
|
||||
|
||||
/**
|
||||
* Generate a STOP condition, which triggers a REG_INT_STOP interrupt.
|
||||
*
|
||||
* rk3x_i2c_stop - Generate a STOP condition, which triggers a REG_INT_STOP interrupt.
|
||||
* @i2c: target controller data
|
||||
* @error: Error code to return in rk3x_i2c_xfer
|
||||
*/
|
||||
static void rk3x_i2c_stop(struct rk3x_i2c *i2c, int error)
|
||||
@ -298,7 +299,8 @@ static void rk3x_i2c_stop(struct rk3x_i2c *i2c, int error)
|
||||
}
|
||||
|
||||
/**
|
||||
* Setup a read according to i2c->msg
|
||||
* rk3x_i2c_prepare_read - Setup a read according to i2c->msg
|
||||
* @i2c: target controller data
|
||||
*/
|
||||
static void rk3x_i2c_prepare_read(struct rk3x_i2c *i2c)
|
||||
{
|
||||
@ -329,7 +331,8 @@ static void rk3x_i2c_prepare_read(struct rk3x_i2c *i2c)
|
||||
}
|
||||
|
||||
/**
|
||||
* Fill the transmit buffer with data from i2c->msg
|
||||
* rk3x_i2c_fill_transmit_buf - Fill the transmit buffer with data from i2c->msg
|
||||
* @i2c: target controller data
|
||||
*/
|
||||
static void rk3x_i2c_fill_transmit_buf(struct rk3x_i2c *i2c)
|
||||
{
|
||||
@ -532,11 +535,10 @@ static irqreturn_t rk3x_i2c_irq(int irqno, void *dev_id)
|
||||
}
|
||||
|
||||
/**
|
||||
* Get timing values of I2C specification
|
||||
*
|
||||
* rk3x_i2c_get_spec - Get timing values of I2C specification
|
||||
* @speed: Desired SCL frequency
|
||||
*
|
||||
* Returns: Matched i2c spec values.
|
||||
* Return: Matched i2c_spec_values.
|
||||
*/
|
||||
static const struct i2c_spec_values *rk3x_i2c_get_spec(unsigned int speed)
|
||||
{
|
||||
@ -549,13 +551,12 @@ static const struct i2c_spec_values *rk3x_i2c_get_spec(unsigned int speed)
|
||||
}
|
||||
|
||||
/**
|
||||
* Calculate divider values for desired SCL frequency
|
||||
*
|
||||
* rk3x_i2c_v0_calc_timings - Calculate divider values for desired SCL frequency
|
||||
* @clk_rate: I2C input clock rate
|
||||
* @t: Known I2C timing information
|
||||
* @t_calc: Calculated rk3x private timings that would be written into regs
|
||||
*
|
||||
* Returns: 0 on success, -EINVAL if the goal SCL rate is too slow. In that case
|
||||
* Return: %0 on success, -%EINVAL if the goal SCL rate is too slow. In that case
|
||||
* a best-effort divider value is returned in divs. If the target rate is
|
||||
* too high, we silently use the highest possible rate.
|
||||
*/
|
||||
@ -710,13 +711,12 @@ static int rk3x_i2c_v0_calc_timings(unsigned long clk_rate,
|
||||
}
|
||||
|
||||
/**
|
||||
* Calculate timing values for desired SCL frequency
|
||||
*
|
||||
* rk3x_i2c_v1_calc_timings - Calculate timing values for desired SCL frequency
|
||||
* @clk_rate: I2C input clock rate
|
||||
* @t: Known I2C timing information
|
||||
* @t_calc: Calculated rk3x private timings that would be written into regs
|
||||
*
|
||||
* Returns: 0 on success, -EINVAL if the goal SCL rate is too slow. In that case
|
||||
* Return: %0 on success, -%EINVAL if the goal SCL rate is too slow. In that case
|
||||
* a best-effort divider value is returned in divs. If the target rate is
|
||||
* too high, we silently use the highest possible rate.
|
||||
* The following formulas are v1's method to calculate timings.
|
||||
@ -960,14 +960,14 @@ static int rk3x_i2c_clk_notifier_cb(struct notifier_block *nb, unsigned long
|
||||
}
|
||||
|
||||
/**
|
||||
* Setup I2C registers for an I2C operation specified by msgs, num.
|
||||
*
|
||||
* Must be called with i2c->lock held.
|
||||
*
|
||||
* rk3x_i2c_setup - Setup I2C registers for an I2C operation specified by msgs, num.
|
||||
* @i2c: target controller data
|
||||
* @msgs: I2C msgs to process
|
||||
* @num: Number of msgs
|
||||
*
|
||||
* returns: Number of I2C msgs processed or negative in case of error
|
||||
* Must be called with i2c->lock held.
|
||||
*
|
||||
* Return: Number of I2C msgs processed or negative in case of error
|
||||
*/
|
||||
static int rk3x_i2c_setup(struct rk3x_i2c *i2c, struct i2c_msg *msgs, int num)
|
||||
{
|
||||
|
@ -280,6 +280,7 @@ static int accel_3d_capture_sample(struct hid_sensor_hub_device *hsdev,
|
||||
hid_sensor_convert_timestamp(
|
||||
&accel_state->common_attributes,
|
||||
*(int64_t *)raw_data);
|
||||
ret = 0;
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
|
@ -298,8 +298,10 @@ static int berlin2_adc_probe(struct platform_device *pdev)
|
||||
int ret;
|
||||
|
||||
indio_dev = devm_iio_device_alloc(&pdev->dev, sizeof(*priv));
|
||||
if (!indio_dev)
|
||||
if (!indio_dev) {
|
||||
of_node_put(parent_np);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
priv = iio_priv(indio_dev);
|
||||
|
||||
|
@ -86,6 +86,8 @@
|
||||
|
||||
#define IMX8QXP_ADC_TIMEOUT msecs_to_jiffies(100)
|
||||
|
||||
#define IMX8QXP_ADC_MAX_FIFO_SIZE 16
|
||||
|
||||
struct imx8qxp_adc {
|
||||
struct device *dev;
|
||||
void __iomem *regs;
|
||||
@ -95,6 +97,7 @@ struct imx8qxp_adc {
|
||||
/* Serialise ADC channel reads */
|
||||
struct mutex lock;
|
||||
struct completion completion;
|
||||
u32 fifo[IMX8QXP_ADC_MAX_FIFO_SIZE];
|
||||
};
|
||||
|
||||
#define IMX8QXP_ADC_CHAN(_idx) { \
|
||||
@ -238,8 +241,7 @@ static int imx8qxp_adc_read_raw(struct iio_dev *indio_dev,
|
||||
return ret;
|
||||
}
|
||||
|
||||
*val = FIELD_GET(IMX8QXP_ADC_RESFIFO_VAL_MASK,
|
||||
readl(adc->regs + IMX8QXP_ADR_ADC_RESFIFO));
|
||||
*val = adc->fifo[0];
|
||||
|
||||
mutex_unlock(&adc->lock);
|
||||
return IIO_VAL_INT;
|
||||
@ -265,10 +267,15 @@ static irqreturn_t imx8qxp_adc_isr(int irq, void *dev_id)
|
||||
{
|
||||
struct imx8qxp_adc *adc = dev_id;
|
||||
u32 fifo_count;
|
||||
int i;
|
||||
|
||||
fifo_count = FIELD_GET(IMX8QXP_ADC_FCTRL_FCOUNT_MASK,
|
||||
readl(adc->regs + IMX8QXP_ADR_ADC_FCTRL));
|
||||
|
||||
for (i = 0; i < fifo_count; i++)
|
||||
adc->fifo[i] = FIELD_GET(IMX8QXP_ADC_RESFIFO_VAL_MASK,
|
||||
readl_relaxed(adc->regs + IMX8QXP_ADR_ADC_RESFIFO));
|
||||
|
||||
if (fifo_count)
|
||||
complete(&adc->completion);
|
||||
|
||||
|
@ -1520,6 +1520,7 @@ static const struct of_device_id stm32_dfsdm_adc_match[] = {
|
||||
},
|
||||
{}
|
||||
};
|
||||
MODULE_DEVICE_TABLE(of, stm32_dfsdm_adc_match);
|
||||
|
||||
static int stm32_dfsdm_adc_probe(struct platform_device *pdev)
|
||||
{
|
||||
|
@ -57,6 +57,18 @@
|
||||
#define TWL6030_GPADCS BIT(1)
|
||||
#define TWL6030_GPADCR BIT(0)
|
||||
|
||||
#define USB_VBUS_CTRL_SET 0x04
|
||||
#define USB_ID_CTRL_SET 0x06
|
||||
|
||||
#define TWL6030_MISC1 0xE4
|
||||
#define VBUS_MEAS 0x01
|
||||
#define ID_MEAS 0x01
|
||||
|
||||
#define VAC_MEAS 0x04
|
||||
#define VBAT_MEAS 0x02
|
||||
#define BB_MEAS 0x01
|
||||
|
||||
|
||||
/**
|
||||
* struct twl6030_chnl_calib - channel calibration
|
||||
* @gain: slope coefficient for ideal curve
|
||||
@ -927,6 +939,26 @@ static int twl6030_gpadc_probe(struct platform_device *pdev)
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = twl_i2c_write_u8(TWL_MODULE_USB, VBUS_MEAS, USB_VBUS_CTRL_SET);
|
||||
if (ret < 0) {
|
||||
dev_err(dev, "failed to wire up inputs\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = twl_i2c_write_u8(TWL_MODULE_USB, ID_MEAS, USB_ID_CTRL_SET);
|
||||
if (ret < 0) {
|
||||
dev_err(dev, "failed to wire up inputs\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = twl_i2c_write_u8(TWL6030_MODULE_ID0,
|
||||
VBAT_MEAS | BB_MEAS | VAC_MEAS,
|
||||
TWL6030_MISC1);
|
||||
if (ret < 0) {
|
||||
dev_err(dev, "failed to wire up inputs\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
indio_dev->name = DRIVER_NAME;
|
||||
indio_dev->info = &twl6030_gpadc_iio_info;
|
||||
indio_dev->modes = INDIO_DIRECT_MODE;
|
||||
|
@ -1329,7 +1329,7 @@ static int ams_parse_firmware(struct iio_dev *indio_dev)
|
||||
|
||||
dev_channels = devm_krealloc(dev, ams_channels, dev_size, GFP_KERNEL);
|
||||
if (!dev_channels)
|
||||
ret = -ENOMEM;
|
||||
return -ENOMEM;
|
||||
|
||||
indio_dev->channels = dev_channels;
|
||||
indio_dev->num_channels = num_channels;
|
||||
|
@ -231,6 +231,7 @@ static int gyro_3d_capture_sample(struct hid_sensor_hub_device *hsdev,
|
||||
gyro_state->timestamp =
|
||||
hid_sensor_convert_timestamp(&gyro_state->common_attributes,
|
||||
*(s64 *)raw_data);
|
||||
ret = 0;
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
|
@ -10,6 +10,7 @@
|
||||
#include <linux/regmap.h>
|
||||
#include <linux/acpi.h>
|
||||
#include <linux/bitops.h>
|
||||
#include <linux/bitfield.h>
|
||||
|
||||
#include <linux/iio/iio.h>
|
||||
#include <linux/iio/sysfs.h>
|
||||
@ -144,9 +145,8 @@
|
||||
#define FXOS8700_NVM_DATA_BNK0 0xa7
|
||||
|
||||
/* Bit definitions for FXOS8700_CTRL_REG1 */
|
||||
#define FXOS8700_CTRL_ODR_MSK 0x38
|
||||
#define FXOS8700_CTRL_ODR_MAX 0x00
|
||||
#define FXOS8700_CTRL_ODR_MIN GENMASK(4, 3)
|
||||
#define FXOS8700_CTRL_ODR_MSK GENMASK(5, 3)
|
||||
|
||||
/* Bit definitions for FXOS8700_M_CTRL_REG1 */
|
||||
#define FXOS8700_HMS_MASK GENMASK(1, 0)
|
||||
@ -320,7 +320,7 @@ static enum fxos8700_sensor fxos8700_to_sensor(enum iio_chan_type iio_type)
|
||||
switch (iio_type) {
|
||||
case IIO_ACCEL:
|
||||
return FXOS8700_ACCEL;
|
||||
case IIO_ANGL_VEL:
|
||||
case IIO_MAGN:
|
||||
return FXOS8700_MAGN;
|
||||
default:
|
||||
return -EINVAL;
|
||||
@ -345,15 +345,35 @@ static int fxos8700_set_active_mode(struct fxos8700_data *data,
|
||||
static int fxos8700_set_scale(struct fxos8700_data *data,
|
||||
enum fxos8700_sensor t, int uscale)
|
||||
{
|
||||
int i;
|
||||
int i, ret, val;
|
||||
bool active_mode;
|
||||
static const int scale_num = ARRAY_SIZE(fxos8700_accel_scale);
|
||||
struct device *dev = regmap_get_device(data->regmap);
|
||||
|
||||
if (t == FXOS8700_MAGN) {
|
||||
dev_err(dev, "Magnetometer scale is locked at 1200uT\n");
|
||||
dev_err(dev, "Magnetometer scale is locked at 0.001Gs\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/*
|
||||
* When the device is in active mode, it fails to set an ACCEL
|
||||
* full-scale range(2g/4g/8g) in FXOS8700_XYZ_DATA_CFG.
|
||||
* This is not aligned with the datasheet, but it is a fxos8700
|
||||
* chip behavior. Set the device in standby mode before setting
|
||||
* an ACCEL full-scale range.
|
||||
*/
|
||||
ret = regmap_read(data->regmap, FXOS8700_CTRL_REG1, &val);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
active_mode = val & FXOS8700_ACTIVE;
|
||||
if (active_mode) {
|
||||
ret = regmap_write(data->regmap, FXOS8700_CTRL_REG1,
|
||||
val & ~FXOS8700_ACTIVE);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
for (i = 0; i < scale_num; i++)
|
||||
if (fxos8700_accel_scale[i].uscale == uscale)
|
||||
break;
|
||||
@ -361,8 +381,12 @@ static int fxos8700_set_scale(struct fxos8700_data *data,
|
||||
if (i == scale_num)
|
||||
return -EINVAL;
|
||||
|
||||
return regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG,
|
||||
ret = regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG,
|
||||
fxos8700_accel_scale[i].bits);
|
||||
if (ret)
|
||||
return ret;
|
||||
return regmap_write(data->regmap, FXOS8700_CTRL_REG1,
|
||||
active_mode);
|
||||
}
|
||||
|
||||
static int fxos8700_get_scale(struct fxos8700_data *data,
|
||||
@ -372,7 +396,7 @@ static int fxos8700_get_scale(struct fxos8700_data *data,
|
||||
static const int scale_num = ARRAY_SIZE(fxos8700_accel_scale);
|
||||
|
||||
if (t == FXOS8700_MAGN) {
|
||||
*uscale = 1200; /* Magnetometer is locked at 1200uT */
|
||||
*uscale = 1000; /* Magnetometer is locked at 0.001Gs */
|
||||
return 0;
|
||||
}
|
||||
|
||||
@ -394,22 +418,61 @@ static int fxos8700_get_data(struct fxos8700_data *data, int chan_type,
|
||||
int axis, int *val)
|
||||
{
|
||||
u8 base, reg;
|
||||
s16 tmp;
|
||||
int ret;
|
||||
enum fxos8700_sensor type = fxos8700_to_sensor(chan_type);
|
||||
|
||||
base = type ? FXOS8700_OUT_X_MSB : FXOS8700_M_OUT_X_MSB;
|
||||
/*
|
||||
* Different register base addresses varies with channel types.
|
||||
* This bug hasn't been noticed before because using an enum is
|
||||
* really hard to read. Use a switch statement to handle this instead.
|
||||
*/
|
||||
switch (chan_type) {
|
||||
case IIO_ACCEL:
|
||||
base = FXOS8700_OUT_X_MSB;
|
||||
break;
|
||||
case IIO_MAGN:
|
||||
base = FXOS8700_M_OUT_X_MSB;
|
||||
break;
|
||||
default:
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* Block read 6 bytes of device output registers to avoid data loss */
|
||||
ret = regmap_bulk_read(data->regmap, base, data->buf,
|
||||
FXOS8700_DATA_BUF_SIZE);
|
||||
sizeof(data->buf));
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/* Convert axis to buffer index */
|
||||
reg = axis - IIO_MOD_X;
|
||||
|
||||
/*
|
||||
* Convert to native endianness. The accel data and magn data
|
||||
* are signed, so a forced type conversion is needed.
|
||||
*/
|
||||
tmp = be16_to_cpu(data->buf[reg]);
|
||||
|
||||
/*
|
||||
* ACCEL output data registers contain the X-axis, Y-axis, and Z-axis
|
||||
* 14-bit left-justified sample data and MAGN output data registers
|
||||
* contain the X-axis, Y-axis, and Z-axis 16-bit sample data. Apply
|
||||
* a signed 2 bits right shift to the readback raw data from ACCEL
|
||||
* output data register and keep that from MAGN sensor as the origin.
|
||||
* Value should be extended to 32 bit.
|
||||
*/
|
||||
switch (chan_type) {
|
||||
case IIO_ACCEL:
|
||||
tmp = tmp >> 2;
|
||||
break;
|
||||
case IIO_MAGN:
|
||||
/* Nothing to do */
|
||||
break;
|
||||
default:
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* Convert to native endianness */
|
||||
*val = sign_extend32(be16_to_cpu(data->buf[reg]), 15);
|
||||
*val = sign_extend32(tmp, 15);
|
||||
|
||||
return 0;
|
||||
}
|
||||
@ -445,10 +508,9 @@ static int fxos8700_set_odr(struct fxos8700_data *data, enum fxos8700_sensor t,
|
||||
if (i >= odr_num)
|
||||
return -EINVAL;
|
||||
|
||||
return regmap_update_bits(data->regmap,
|
||||
FXOS8700_CTRL_REG1,
|
||||
FXOS8700_CTRL_ODR_MSK + FXOS8700_ACTIVE,
|
||||
fxos8700_odr[i].bits << 3 | active_mode);
|
||||
val &= ~FXOS8700_CTRL_ODR_MSK;
|
||||
val |= FIELD_PREP(FXOS8700_CTRL_ODR_MSK, fxos8700_odr[i].bits) | FXOS8700_ACTIVE;
|
||||
return regmap_write(data->regmap, FXOS8700_CTRL_REG1, val);
|
||||
}
|
||||
|
||||
static int fxos8700_get_odr(struct fxos8700_data *data, enum fxos8700_sensor t,
|
||||
@ -461,7 +523,7 @@ static int fxos8700_get_odr(struct fxos8700_data *data, enum fxos8700_sensor t,
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
val &= FXOS8700_CTRL_ODR_MSK;
|
||||
val = FIELD_GET(FXOS8700_CTRL_ODR_MSK, val);
|
||||
|
||||
for (i = 0; i < odr_num; i++)
|
||||
if (val == fxos8700_odr[i].bits)
|
||||
@ -526,7 +588,7 @@ static IIO_CONST_ATTR(in_accel_sampling_frequency_available,
|
||||
static IIO_CONST_ATTR(in_magn_sampling_frequency_available,
|
||||
"1.5625 6.25 12.5 50 100 200 400 800");
|
||||
static IIO_CONST_ATTR(in_accel_scale_available, "0.000244 0.000488 0.000976");
|
||||
static IIO_CONST_ATTR(in_magn_scale_available, "0.000001200");
|
||||
static IIO_CONST_ATTR(in_magn_scale_available, "0.001000");
|
||||
|
||||
static struct attribute *fxos8700_attrs[] = {
|
||||
&iio_const_attr_in_accel_sampling_frequency_available.dev_attr.attr,
|
||||
@ -592,14 +654,19 @@ static int fxos8700_chip_init(struct fxos8700_data *data, bool use_spi)
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/* Max ODR (800Hz individual or 400Hz hybrid), active mode */
|
||||
ret = regmap_write(data->regmap, FXOS8700_CTRL_REG1,
|
||||
FXOS8700_CTRL_ODR_MAX | FXOS8700_ACTIVE);
|
||||
/*
|
||||
* Set max full-scale range (+/-8G) for ACCEL sensor in chip
|
||||
* initialization then activate the device.
|
||||
*/
|
||||
ret = regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG, MODE_8G);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/* Set for max full-scale range (+/-8G) */
|
||||
return regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG, MODE_8G);
|
||||
/* Max ODR (800Hz individual or 400Hz hybrid), active mode */
|
||||
return regmap_update_bits(data->regmap, FXOS8700_CTRL_REG1,
|
||||
FXOS8700_CTRL_ODR_MSK | FXOS8700_ACTIVE,
|
||||
FIELD_PREP(FXOS8700_CTRL_ODR_MSK, FXOS8700_CTRL_ODR_MAX) |
|
||||
FXOS8700_ACTIVE);
|
||||
}
|
||||
|
||||
static void fxos8700_chip_uninit(void *data)
|
||||
|
@ -440,6 +440,8 @@ static int cm32181_probe(struct i2c_client *client)
|
||||
if (!indio_dev)
|
||||
return -ENOMEM;
|
||||
|
||||
i2c_set_clientdata(client, indio_dev);
|
||||
|
||||
/*
|
||||
* Some ACPI systems list 2 I2C resources for the CM3218 sensor, the
|
||||
* SMBus Alert Response Address (ARA, 0x0c) and the actual I2C address.
|
||||
@ -460,8 +462,6 @@ static int cm32181_probe(struct i2c_client *client)
|
||||
return PTR_ERR(client);
|
||||
}
|
||||
|
||||
i2c_set_clientdata(client, indio_dev);
|
||||
|
||||
cm32181 = iio_priv(indio_dev);
|
||||
cm32181->client = client;
|
||||
cm32181->dev = dev;
|
||||
@ -490,7 +490,8 @@ static int cm32181_probe(struct i2c_client *client)
|
||||
|
||||
static int cm32181_suspend(struct device *dev)
|
||||
{
|
||||
struct i2c_client *client = to_i2c_client(dev);
|
||||
struct cm32181_chip *cm32181 = iio_priv(dev_get_drvdata(dev));
|
||||
struct i2c_client *client = cm32181->client;
|
||||
|
||||
return i2c_smbus_write_word_data(client, CM32181_REG_ADDR_CMD,
|
||||
CM32181_CMD_ALS_DISABLE);
|
||||
@ -498,8 +499,8 @@ static int cm32181_suspend(struct device *dev)
|
||||
|
||||
static int cm32181_resume(struct device *dev)
|
||||
{
|
||||
struct i2c_client *client = to_i2c_client(dev);
|
||||
struct cm32181_chip *cm32181 = iio_priv(dev_get_drvdata(dev));
|
||||
struct i2c_client *client = cm32181->client;
|
||||
|
||||
return i2c_smbus_write_word_data(client, CM32181_REG_ADDR_CMD,
|
||||
cm32181->conf_regs[CM32181_REG_ADDR_CMD]);
|
||||
|
@ -966,7 +966,7 @@ static void rtrs_clt_init_req(struct rtrs_clt_io_req *req,
|
||||
refcount_set(&req->ref, 1);
|
||||
req->mp_policy = clt_path->clt->mp_policy;
|
||||
|
||||
iov_iter_kvec(&iter, READ, vec, 1, usr_len);
|
||||
iov_iter_kvec(&iter, ITER_SOURCE, vec, 1, usr_len);
|
||||
len = _copy_from_iter(req->iu->buf, usr_len, &iter);
|
||||
WARN_ON(len != usr_len);
|
||||
|
||||
|
@ -706,7 +706,7 @@ l1oip_socket_thread(void *data)
|
||||
printk(KERN_DEBUG "%s: socket created and open\n",
|
||||
__func__);
|
||||
while (!signal_pending(current)) {
|
||||
iov_iter_kvec(&msg.msg_iter, READ, &iov, 1, recvbuf_size);
|
||||
iov_iter_kvec(&msg.msg_iter, ITER_DEST, &iov, 1, recvbuf_size);
|
||||
recvlen = sock_recvmsg(socket, &msg, 0);
|
||||
if (recvlen > 0) {
|
||||
l1oip_socket_parse(hc, &sin_rx, recvbuf, recvlen);
|
||||
|
@ -106,7 +106,8 @@ static inline unsigned long bkey_bytes(const struct bkey *k)
|
||||
return bkey_u64s(k) * sizeof(__u64);
|
||||
}
|
||||
|
||||
#define bkey_copy(_dest, _src) memcpy(_dest, _src, bkey_bytes(_src))
|
||||
#define bkey_copy(_dest, _src) unsafe_memcpy(_dest, _src, bkey_bytes(_src), \
|
||||
/* bkey is always padded */)
|
||||
|
||||
static inline void bkey_copy_key(struct bkey *dest, const struct bkey *src)
|
||||
{
|
||||
|
@ -149,7 +149,8 @@ reread: left = ca->sb.bucket_size - offset;
|
||||
bytes, GFP_KERNEL);
|
||||
if (!i)
|
||||
return -ENOMEM;
|
||||
memcpy(&i->j, j, bytes);
|
||||
unsafe_memcpy(&i->j, j, bytes,
|
||||
/* "bytes" was calculated by set_bytes() above */);
|
||||
/* Add to the location after 'where' points to */
|
||||
list_add(&i->list, where);
|
||||
ret = 1;
|
||||
|
@ -150,8 +150,8 @@ static int user_to_new(struct v4l2_ext_control *c, struct v4l2_ctrl *ctrl)
|
||||
* then return an error.
|
||||
*/
|
||||
if (strlen(ctrl->p_new.p_char) == ctrl->maximum && last)
|
||||
ctrl->is_new = 1;
|
||||
return -ERANGE;
|
||||
ctrl->is_new = 1;
|
||||
}
|
||||
return ret;
|
||||
default:
|
||||
|
@@ -3044,7 +3044,7 @@ ssize_t vmci_qpair_enqueue(struct vmci_qp *qpair,
if (!qpair || !buf)
return VMCI_ERROR_INVALID_ARGS;

iov_iter_kvec(&from, WRITE, &v, 1, buf_size);
iov_iter_kvec(&from, ITER_SOURCE, &v, 1, buf_size);

qp_lock(qpair);

@@ -3088,7 +3088,7 @@ ssize_t vmci_qpair_dequeue(struct vmci_qp *qpair,
if (!qpair || !buf)
return VMCI_ERROR_INVALID_ARGS;

iov_iter_kvec(&to, READ, &v, 1, buf_size);
iov_iter_kvec(&to, ITER_DEST, &v, 1, buf_size);

qp_lock(qpair);

@@ -3133,7 +3133,7 @@ ssize_t vmci_qpair_peek(struct vmci_qp *qpair,
if (!qpair || !buf)
return VMCI_ERROR_INVALID_ARGS;

iov_iter_kvec(&to, READ, &v, 1, buf_size);
iov_iter_kvec(&to, ITER_DEST, &v, 1, buf_size);

qp_lock(qpair);
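The vmci hunks above are part of the tree-wide rename of the iov_iter direction initializers: the direction names the data, not the caller, so ITER_SOURCE marks an iterator the kernel reads data out of (formerly WRITE) and ITER_DEST one the kernel copies data into (formerly READ). A kernel-context sketch of the convention (not buildable standalone):

static void example_directions(void *buf, size_t len)
{
	struct kvec v = { .iov_base = buf, .iov_len = len };
	struct iov_iter from, to;

	iov_iter_kvec(&from, ITER_SOURCE, &v, 1, len);	/* buf is the data source */
	iov_iter_kvec(&to, ITER_DEST, &v, 1, len);	/* buf is the data destination */
}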
@ -48,6 +48,7 @@ mcp251xfd_ring_set_ringparam(struct net_device *ndev,
|
||||
priv->rx_obj_num = layout.cur_rx;
|
||||
priv->rx_obj_num_coalesce_irq = layout.rx_coalesce;
|
||||
priv->tx->obj_num = layout.cur_tx;
|
||||
priv->tx_obj_num_coalesce_irq = layout.tx_coalesce;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@@ -2400,6 +2400,9 @@ static int dpaa_eth_poll(struct napi_struct *napi, int budget)

cleaned = qman_p_poll_dqrr(np->p, budget);

if (np->xdp_act & XDP_REDIRECT)
xdp_do_flush();

if (cleaned < budget) {
napi_complete_done(napi, cleaned);
qman_p_irqsource_add(np->p, QM_PIRQ_DQRI);
@@ -2407,9 +2410,6 @@ static int dpaa_eth_poll(struct napi_struct *napi, int budget)
qman_p_irqsource_add(np->p, QM_PIRQ_DQRI);
}

if (np->xdp_act & XDP_REDIRECT)
xdp_do_flush();

return cleaned;
}
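The dpaa, dpaa2, qede and virtio-net hunks in this merge all enforce the same ordering in the NAPI poll routine: flush outstanding XDP_REDIRECT work while still inside the poll, before napi_complete_done() ends the poll cycle. A sketch with an invented driver context (the ring structure and helpers below are made up, not a real driver):

static int example_poll(struct napi_struct *napi, int budget)
{
	struct example_ring *ring = container_of(napi, struct example_ring, napi);
	int work_done = example_clean_rx(ring, budget);	/* may queue redirects */

	if (ring->xdp_redirect)
		xdp_do_flush();			/* before napi_complete_done() */

	if (work_done < budget) {
		napi_complete_done(napi, work_done);
		example_enable_irq(ring);
	}

	return work_done;
}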
@ -1868,10 +1868,15 @@ static int dpaa2_eth_poll(struct napi_struct *napi, int budget)
|
||||
if (rx_cleaned >= budget ||
|
||||
txconf_cleaned >= DPAA2_ETH_TXCONF_PER_NAPI) {
|
||||
work_done = budget;
|
||||
if (ch->xdp.res & XDP_REDIRECT)
|
||||
xdp_do_flush();
|
||||
goto out;
|
||||
}
|
||||
} while (store_cleaned);
|
||||
|
||||
if (ch->xdp.res & XDP_REDIRECT)
|
||||
xdp_do_flush();
|
||||
|
||||
/* Update NET DIM with the values for this CDAN */
|
||||
dpaa2_io_update_net_dim(ch->dpio, ch->stats.frames_per_cdan,
|
||||
ch->stats.bytes_per_cdan);
|
||||
@ -1902,9 +1907,7 @@ static int dpaa2_eth_poll(struct napi_struct *napi, int budget)
|
||||
txc_fq->dq_bytes = 0;
|
||||
}
|
||||
|
||||
if (ch->xdp.res & XDP_REDIRECT)
|
||||
xdp_do_flush_map();
|
||||
else if (rx_cleaned && ch->xdp.res & XDP_TX)
|
||||
if (rx_cleaned && ch->xdp.res & XDP_TX)
|
||||
dpaa2_eth_xdp_tx_flush(priv, ch, &priv->fq[flowid]);
|
||||
|
||||
return work_done;
|
||||
|
@ -856,7 +856,7 @@ void ice_set_ethtool_repr_ops(struct net_device *netdev);
|
||||
void ice_set_ethtool_safe_mode_ops(struct net_device *netdev);
|
||||
u16 ice_get_avail_txq_count(struct ice_pf *pf);
|
||||
u16 ice_get_avail_rxq_count(struct ice_pf *pf);
|
||||
int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx);
|
||||
int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx, bool locked);
|
||||
void ice_update_vsi_stats(struct ice_vsi *vsi);
|
||||
void ice_update_pf_stats(struct ice_pf *pf);
|
||||
void
|
||||
|
@ -434,7 +434,7 @@ int ice_pf_dcb_cfg(struct ice_pf *pf, struct ice_dcbx_cfg *new_cfg, bool locked)
|
||||
goto out;
|
||||
}
|
||||
|
||||
ice_pf_dcb_recfg(pf);
|
||||
ice_pf_dcb_recfg(pf, false);
|
||||
|
||||
out:
|
||||
/* enable previously downed VSIs */
|
||||
@ -724,12 +724,13 @@ static int ice_dcb_noncontig_cfg(struct ice_pf *pf)
|
||||
/**
|
||||
* ice_pf_dcb_recfg - Reconfigure all VEBs and VSIs
|
||||
* @pf: pointer to the PF struct
|
||||
* @locked: is adev device lock held
|
||||
*
|
||||
* Assumed caller has already disabled all VSIs before
|
||||
* calling this function. Reconfiguring DCB based on
|
||||
* local_dcbx_cfg.
|
||||
*/
|
||||
void ice_pf_dcb_recfg(struct ice_pf *pf)
|
||||
void ice_pf_dcb_recfg(struct ice_pf *pf, bool locked)
|
||||
{
|
||||
struct ice_dcbx_cfg *dcbcfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg;
|
||||
struct iidc_event *event;
|
||||
@ -776,14 +777,16 @@ void ice_pf_dcb_recfg(struct ice_pf *pf)
|
||||
if (vsi->type == ICE_VSI_PF)
|
||||
ice_dcbnl_set_all(vsi);
|
||||
}
|
||||
/* Notify the AUX drivers that TC change is finished */
|
||||
event = kzalloc(sizeof(*event), GFP_KERNEL);
|
||||
if (!event)
|
||||
return;
|
||||
if (!locked) {
|
||||
/* Notify the AUX drivers that TC change is finished */
|
||||
event = kzalloc(sizeof(*event), GFP_KERNEL);
|
||||
if (!event)
|
||||
return;
|
||||
|
||||
set_bit(IIDC_EVENT_AFTER_TC_CHANGE, event->type);
|
||||
ice_send_event_to_aux(pf, event);
|
||||
kfree(event);
|
||||
set_bit(IIDC_EVENT_AFTER_TC_CHANGE, event->type);
|
||||
ice_send_event_to_aux(pf, event);
|
||||
kfree(event);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
@ -1034,7 +1037,7 @@ ice_dcb_process_lldp_set_mib_change(struct ice_pf *pf,
|
||||
}
|
||||
|
||||
/* changes in configuration update VSI */
|
||||
ice_pf_dcb_recfg(pf);
|
||||
ice_pf_dcb_recfg(pf, false);
|
||||
|
||||
/* enable previously downed VSIs */
|
||||
ice_dcb_ena_dis_vsi(pf, true, true);
|
||||
|
@ -23,7 +23,7 @@ u8 ice_dcb_get_tc(struct ice_vsi *vsi, int queue_index);
|
||||
int
|
||||
ice_pf_dcb_cfg(struct ice_pf *pf, struct ice_dcbx_cfg *new_cfg, bool locked);
|
||||
int ice_dcb_bwchk(struct ice_pf *pf, struct ice_dcbx_cfg *dcbcfg);
|
||||
void ice_pf_dcb_recfg(struct ice_pf *pf);
|
||||
void ice_pf_dcb_recfg(struct ice_pf *pf, bool locked);
|
||||
void ice_vsi_cfg_dcb_rings(struct ice_vsi *vsi);
|
||||
int ice_init_pf_dcb(struct ice_pf *pf, bool locked);
|
||||
void ice_update_dcb_stats(struct ice_pf *pf);
|
||||
@ -128,7 +128,7 @@ static inline u8 ice_get_pfc_mode(struct ice_pf *pf)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline void ice_pf_dcb_recfg(struct ice_pf *pf) { }
|
||||
static inline void ice_pf_dcb_recfg(struct ice_pf *pf, bool locked) { }
|
||||
static inline void ice_vsi_cfg_dcb_rings(struct ice_vsi *vsi) { }
|
||||
static inline void ice_update_dcb_stats(struct ice_pf *pf) { }
|
||||
static inline void
|
||||
|
@ -3472,7 +3472,9 @@ static int ice_set_channels(struct net_device *dev, struct ethtool_channels *ch)
|
||||
struct ice_vsi *vsi = np->vsi;
|
||||
struct ice_pf *pf = vsi->back;
|
||||
int new_rx = 0, new_tx = 0;
|
||||
bool locked = false;
|
||||
u32 curr_combined;
|
||||
int ret = 0;
|
||||
|
||||
/* do not support changing channels in Safe Mode */
|
||||
if (ice_is_safe_mode(pf)) {
|
||||
@ -3536,15 +3538,33 @@ static int ice_set_channels(struct net_device *dev, struct ethtool_channels *ch)
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
ice_vsi_recfg_qs(vsi, new_rx, new_tx);
|
||||
if (pf->adev) {
|
||||
mutex_lock(&pf->adev_mutex);
|
||||
device_lock(&pf->adev->dev);
|
||||
locked = true;
|
||||
if (pf->adev->dev.driver) {
|
||||
netdev_err(dev, "Cannot change channels when RDMA is active\n");
|
||||
ret = -EBUSY;
|
||||
goto adev_unlock;
|
||||
}
|
||||
}
|
||||
|
||||
if (!netif_is_rxfh_configured(dev))
|
||||
return ice_vsi_set_dflt_rss_lut(vsi, new_rx);
|
||||
ice_vsi_recfg_qs(vsi, new_rx, new_tx, locked);
|
||||
|
||||
if (!netif_is_rxfh_configured(dev)) {
|
||||
ret = ice_vsi_set_dflt_rss_lut(vsi, new_rx);
|
||||
goto adev_unlock;
|
||||
}
|
||||
|
||||
/* Update rss_size due to change in Rx queues */
|
||||
vsi->rss_size = ice_get_valid_rss_size(&pf->hw, new_rx);
|
||||
|
||||
return 0;
|
||||
adev_unlock:
|
||||
if (locked) {
|
||||
device_unlock(&pf->adev->dev);
|
||||
mutex_unlock(&pf->adev_mutex);
|
||||
}
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -4192,12 +4192,13 @@ bool ice_is_wol_supported(struct ice_hw *hw)
|
||||
* @vsi: VSI being changed
|
||||
* @new_rx: new number of Rx queues
|
||||
* @new_tx: new number of Tx queues
|
||||
* @locked: is adev device_lock held
|
||||
*
|
||||
* Only change the number of queues if new_tx, or new_rx is non-0.
|
||||
*
|
||||
* Returns 0 on success.
|
||||
*/
|
||||
int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx)
|
||||
int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx, bool locked)
|
||||
{
|
||||
struct ice_pf *pf = vsi->back;
|
||||
int err = 0, timeout = 50;
|
||||
@ -4226,7 +4227,7 @@ int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx)
|
||||
|
||||
ice_vsi_close(vsi);
|
||||
ice_vsi_rebuild(vsi, false);
|
||||
ice_pf_dcb_recfg(pf);
|
||||
ice_pf_dcb_recfg(pf, locked);
|
||||
ice_vsi_open(vsi);
|
||||
done:
|
||||
clear_bit(ICE_CFG_BUSY, pf->state);
|
||||
|
@ -417,10 +417,12 @@ static int igc_ptp_verify_pin(struct ptp_clock_info *ptp, unsigned int pin,
|
||||
*
|
||||
* We need to convert the system time value stored in the RX/TXSTMP registers
|
||||
* into a hwtstamp which can be used by the upper level timestamping functions.
|
||||
*
|
||||
* Returns 0 on success.
|
||||
**/
|
||||
static void igc_ptp_systim_to_hwtstamp(struct igc_adapter *adapter,
|
||||
struct skb_shared_hwtstamps *hwtstamps,
|
||||
u64 systim)
|
||||
static int igc_ptp_systim_to_hwtstamp(struct igc_adapter *adapter,
|
||||
struct skb_shared_hwtstamps *hwtstamps,
|
||||
u64 systim)
|
||||
{
|
||||
switch (adapter->hw.mac.type) {
|
||||
case igc_i225:
|
||||
@ -430,8 +432,9 @@ static void igc_ptp_systim_to_hwtstamp(struct igc_adapter *adapter,
|
||||
systim & 0xFFFFFFFF);
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
return -EINVAL;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
@ -652,7 +655,8 @@ static void igc_ptp_tx_hwtstamp(struct igc_adapter *adapter)
|
||||
|
||||
regval = rd32(IGC_TXSTMPL);
|
||||
regval |= (u64)rd32(IGC_TXSTMPH) << 32;
|
||||
igc_ptp_systim_to_hwtstamp(adapter, &shhwtstamps, regval);
|
||||
if (igc_ptp_systim_to_hwtstamp(adapter, &shhwtstamps, regval))
|
||||
return;
|
||||
|
||||
switch (adapter->link_speed) {
|
||||
case SPEED_10:
|
||||
|
@ -1500,6 +1500,9 @@ static const struct devlink_param rvu_af_dl_params[] = {
|
||||
BIT(DEVLINK_PARAM_CMODE_RUNTIME),
|
||||
rvu_af_dl_dwrr_mtu_get, rvu_af_dl_dwrr_mtu_set,
|
||||
rvu_af_dl_dwrr_mtu_validate),
|
||||
};
|
||||
|
||||
static const struct devlink_param rvu_af_dl_param_exact_match[] = {
|
||||
DEVLINK_PARAM_DRIVER(RVU_AF_DEVLINK_PARAM_ID_NPC_EXACT_FEATURE_DISABLE,
|
||||
"npc_exact_feature_disable", DEVLINK_PARAM_TYPE_STRING,
|
||||
BIT(DEVLINK_PARAM_CMODE_RUNTIME),
|
||||
@ -1563,7 +1566,6 @@ int rvu_register_dl(struct rvu *rvu)
|
||||
{
|
||||
struct rvu_devlink *rvu_dl;
|
||||
struct devlink *dl;
|
||||
size_t size;
|
||||
int err;
|
||||
|
||||
dl = devlink_alloc(&rvu_devlink_ops, sizeof(struct rvu_devlink),
|
||||
@ -1585,21 +1587,32 @@ int rvu_register_dl(struct rvu *rvu)
|
||||
goto err_dl_health;
|
||||
}
|
||||
|
||||
/* Register exact match devlink only for CN10K-B */
|
||||
size = ARRAY_SIZE(rvu_af_dl_params);
|
||||
if (!rvu_npc_exact_has_match_table(rvu))
|
||||
size -= 1;
|
||||
|
||||
err = devlink_params_register(dl, rvu_af_dl_params, size);
|
||||
err = devlink_params_register(dl, rvu_af_dl_params, ARRAY_SIZE(rvu_af_dl_params));
|
||||
if (err) {
|
||||
dev_err(rvu->dev,
|
||||
"devlink params register failed with error %d", err);
|
||||
goto err_dl_health;
|
||||
}
|
||||
|
||||
/* Register exact match devlink only for CN10K-B */
|
||||
if (!rvu_npc_exact_has_match_table(rvu))
|
||||
goto done;
|
||||
|
||||
err = devlink_params_register(dl, rvu_af_dl_param_exact_match,
|
||||
ARRAY_SIZE(rvu_af_dl_param_exact_match));
|
||||
if (err) {
|
||||
dev_err(rvu->dev,
|
||||
"devlink exact match params register failed with error %d", err);
|
||||
goto err_dl_exact_match;
|
||||
}
|
||||
|
||||
done:
|
||||
devlink_register(dl);
|
||||
return 0;
|
||||
|
||||
err_dl_exact_match:
|
||||
devlink_params_unregister(dl, rvu_af_dl_params, ARRAY_SIZE(rvu_af_dl_params));
|
||||
|
||||
err_dl_health:
|
||||
rvu_health_reporters_destroy(rvu);
|
||||
devlink_free(dl);
|
||||
@ -1612,8 +1625,14 @@ void rvu_unregister_dl(struct rvu *rvu)
|
||||
struct devlink *dl = rvu_dl->dl;
|
||||
|
||||
devlink_unregister(dl);
|
||||
devlink_params_unregister(dl, rvu_af_dl_params,
|
||||
ARRAY_SIZE(rvu_af_dl_params));
|
||||
|
||||
devlink_params_unregister(dl, rvu_af_dl_params, ARRAY_SIZE(rvu_af_dl_params));
|
||||
|
||||
/* Unregister exact match devlink only for CN10K-B */
|
||||
if (rvu_npc_exact_has_match_table(rvu))
|
||||
devlink_params_unregister(dl, rvu_af_dl_param_exact_match,
|
||||
ARRAY_SIZE(rvu_af_dl_param_exact_match));
|
||||
|
||||
rvu_health_reporters_destroy(rvu);
|
||||
devlink_free(dl);
|
||||
}
|
||||
|
@ -1438,6 +1438,10 @@ int qede_poll(struct napi_struct *napi, int budget)
|
||||
rx_work_done = (likely(fp->type & QEDE_FASTPATH_RX) &&
|
||||
qede_has_rx_work(fp->rxq)) ?
|
||||
qede_rx_int(fp, budget) : 0;
|
||||
|
||||
if (fp->xdp_xmit & QEDE_XDP_REDIRECT)
|
||||
xdp_do_flush();
|
||||
|
||||
/* Handle case where we are called by netpoll with a budget of 0 */
|
||||
if (rx_work_done < budget || !budget) {
|
||||
if (!qede_poll_is_more_work(fp)) {
|
||||
@ -1457,9 +1461,6 @@ int qede_poll(struct napi_struct *napi, int budget)
|
||||
qede_update_tx_producer(fp->xdp_tx);
|
||||
}
|
||||
|
||||
if (fp->xdp_xmit & QEDE_XDP_REDIRECT)
|
||||
xdp_do_flush_map();
|
||||
|
||||
return rx_work_done;
|
||||
}
|
||||
|
||||
|
@ -1003,8 +1003,11 @@ static int efx_pci_probe_post_io(struct efx_nic *efx)
|
||||
/* Determine netdevice features */
|
||||
net_dev->features |= (efx->type->offload_features | NETIF_F_SG |
|
||||
NETIF_F_TSO | NETIF_F_RXCSUM | NETIF_F_RXALL);
|
||||
if (efx->type->offload_features & (NETIF_F_IPV6_CSUM | NETIF_F_HW_CSUM))
|
||||
if (efx->type->offload_features & (NETIF_F_IPV6_CSUM | NETIF_F_HW_CSUM)) {
|
||||
net_dev->features |= NETIF_F_TSO6;
|
||||
if (efx_has_cap(efx, TX_TSO_V2_ENCAP))
|
||||
net_dev->hw_enc_features |= NETIF_F_TSO6;
|
||||
}
|
||||
/* Check whether device supports TSO */
|
||||
if (!efx->type->tso_versions || !efx->type->tso_versions(efx))
|
||||
net_dev->features &= ~NETIF_F_ALL_TSO;
|
||||
|
@ -987,9 +987,6 @@ static void netvsc_copy_to_send_buf(struct netvsc_device *net_device,
|
||||
void netvsc_dma_unmap(struct hv_device *hv_dev,
|
||||
struct hv_netvsc_packet *packet)
|
||||
{
|
||||
u32 page_count = packet->cp_partial ?
|
||||
packet->page_buf_cnt - packet->rmsg_pgcnt :
|
||||
packet->page_buf_cnt;
|
||||
int i;
|
||||
|
||||
if (!hv_is_isolation_supported())
|
||||
@ -998,7 +995,7 @@ void netvsc_dma_unmap(struct hv_device *hv_dev,
|
||||
if (!packet->dma_range)
|
||||
return;
|
||||
|
||||
for (i = 0; i < page_count; i++)
|
||||
for (i = 0; i < packet->page_buf_cnt; i++)
|
||||
dma_unmap_single(&hv_dev->device, packet->dma_range[i].dma,
|
||||
packet->dma_range[i].mapping_size,
|
||||
DMA_TO_DEVICE);
|
||||
@ -1028,9 +1025,7 @@ static int netvsc_dma_map(struct hv_device *hv_dev,
|
||||
struct hv_netvsc_packet *packet,
|
||||
struct hv_page_buffer *pb)
|
||||
{
|
||||
u32 page_count = packet->cp_partial ?
|
||||
packet->page_buf_cnt - packet->rmsg_pgcnt :
|
||||
packet->page_buf_cnt;
|
||||
u32 page_count = packet->page_buf_cnt;
|
||||
dma_addr_t dma;
|
||||
int i;
|
||||
|
||||
|
Some files were not shown because too many files have changed in this diff.