This is the 5.4.162 stable release
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmGgrToACgkQONu9yGCS
aT4pJRAAsOxsgAfEFVBCPXef+6akoOvc/cp4nHb1xyvE6DJ/gOLckdEchMQLlhMs
uKAom0qbxqvV3mtUqbIJ1b0dPwzeEcOEMjrmY1BSfvk3kXJ9GsyWfMlpB/FwjmLr
xgAOQJmLaWc1H83oqlTUJZubKz2xFkYoZy9R2fHB98qCNKppdrAU+GAc4nR3xnTw
wkHb/Ous3H4fv0u8u+PDjNJSqGu4zv6Rkp86CmSZJsv+WMs3KF8z/OmYGoxdEpy8
CnDLYHkjaGMJDw3KUSgJHQ642HJ1DqM+O/dtsJCjfmNdFT+5pynnIB5voNU9+ngJ
JSrM+TEF4XihJxgYShCXTII99o/qne9XP1MX/G1wxji52RRZtDeOb3uBhcXr86yK
J/EwDXsTmE39AZcc1bCDQiSha/R2huDpr6vWJjyqKmhw3IDIkPactIzEUN4XEdcd
hiZmcnRB71d5C462eWFG+PxA4meDqqj7BfZFQNL5pztBHxBTs0PFWxx8KI5c5cTj
sRQO24LESKYs0FG2dq6ES6nlpSLtnLGyIMPZ5lNAIWQ+Vucse/V0KvMXI/5PgCD8
f5hy31YqWPezVAw8cx+6qoAgD3BNoGaNza7jybBS5gNGQbtWgMOZvoYAjBtceqAc
ZnhLpT9dPf9Oxb69Pid4NxbLeMfctZ1AiWY239KeMsn1ecGqgW8=
=gXc6
-----END PGP SIGNATURE-----

Merge 5.4.162 into android11-5.4-lts

Changes in 5.4.162
	arm64: zynqmp: Do not duplicate flash partition label property
	arm64: zynqmp: Fix serial compatible string
	ARM: dts: NSP: Fix mpcore, mmc node names
	scsi: lpfc: Fix list_add() corruption in lpfc_drain_txq()
	arm64: dts: hisilicon: fix arm,sp805 compatible string
	RDMA/bnxt_re: Check if the vlan is valid before reporting
	usb: musb: tusb6010: check return value after calling platform_get_resource()
	usb: typec: tipd: Remove WARN_ON in tps6598x_block_read
	arm64: dts: qcom: msm8998: Fix CPU/L2 idle state latency and residency
	arm64: dts: freescale: fix arm,sp805 compatible string
	ASoC: SOF: Intel: hda-dai: fix potential locking issue
	clk: imx: imx6ul: Move csi_sel mux to correct base register
	ASoC: nau8824: Add DMI quirk mechanism for active-high jack-detect
	scsi: advansys: Fix kernel pointer leak
	firmware_loader: fix pre-allocated buf built-in firmware use
	ARM: dts: omap: fix gpmc,mux-add-data type
	usb: host: ohci-tmio: check return value after calling platform_get_resource()
	ARM: dts: ls1021a: move thermal-zones node out of soc/
	ARM: dts: ls1021a-tsn: use generic "jedec,spi-nor" compatible for flash
	ALSA: ISA: not for M68K
	tty: tty_buffer: Fix the softlockup issue in flush_to_ldisc
	MIPS: sni: Fix the build
	scsi: target: Fix ordered tag handling
	scsi: target: Fix alua_tg_pt_gps_count tracking
	iio: imu: st_lsm6dsx: Avoid potential array overflow in st_lsm6dsx_set_odr()
	powerpc/5200: dts: fix memory node unit name
	ALSA: gus: fix null pointer dereference on pointer block
	powerpc/dcr: Use cmplwi instead of 3-argument cmpli
	sh: check return code of request_irq
	maple: fix wrong return value of maple_bus_init().
	f2fs: fix up f2fs_lookup tracepoints
	sh: fix kconfig unmet dependency warning for FRAME_POINTER
	sh: math-emu: drop unused functions
	sh: define __BIG_ENDIAN for math-emu
	clk: ingenic: Fix bugs with divided dividers
	clk/ast2600: Fix soc revision for AHB
	clk: qcom: gcc-msm8996: Drop (again) gcc_aggre1_pnoc_ahb_clk
	mips: BCM63XX: ensure that CPU_SUPPORTS_32BIT_KERNEL is set
	sched/core: Mitigate race cpus_share_cache()/update_top_cache_domain()
	tracing: Save normal string variables
	tracing/histogram: Do not copy the fixed-size char array field over the field size
	RDMA/netlink: Add __maybe_unused to static inline in C file
	perf bpf: Avoid memory leak from perf_env__insert_btf()
	perf bench futex: Fix memory leak of perf_cpu_map__new()
	perf tests: Remove bash construct from record+zstd_comp_decomp.sh
	net: bnx2x: fix variable dereferenced before check
	iavf: check for null in iavf_fix_features
	iavf: free q_vectors before queues in iavf_disable_vf
	iavf: Fix failure to exit out from last all-multicast mode
	iavf: prevent accidental free of filter structure
	iavf: validate pointers
	iavf: Fix for the false positive ASQ/ARQ errors while issuing VF reset
	MIPS: generic/yamon-dt: fix uninitialized variable error
	mips: bcm63xx: add support for clk_get_parent()
	mips: lantiq: add support for clk_get_parent()
	platform/x86: hp_accel: Fix an error handling path in 'lis3lv02d_probe()'
	scsi: core: sysfs: Fix hang when device state is set via sysfs
	net: sched: act_mirred: drop dst for the direction from egress to ingress
	net: dpaa2-eth: fix use-after-free in dpaa2_eth_remove
	net: virtio_net_hdr_to_skb: count transport header in UFO
	i40e: Fix correct max_pkt_size on VF RX queue
	i40e: Fix NULL ptr dereference on VSI filter sync
	i40e: Fix changing previously set num_queue_pairs for PFs
	i40e: Fix ping is lost after configuring ADq on VF
	i40e: Fix creation of first queue by omitting it if is not power of two
	i40e: Fix display error code in dmesg
	NFC: reorganize the functions in nci_request
	drm/nouveau: hdmigv100.c: fix corrupted HDMI Vendor InfoFrame
	NFC: reorder the logic in nfc_{un,}register_device
	KVM: PPC: Book3S HV: Use GLOBAL_TOC for kvmppc_h_set_dabr/xdabr()
	perf/x86/intel/uncore: Fix filter_tid mask for CHA events on Skylake Server
	perf/x86/intel/uncore: Fix IIO event constraints for Skylake Server
	s390/kexec: fix return code handling
	arm64: vdso32: suppress error message for 'make mrproper'
	tun: fix bonding active backup with arp monitoring
	hexagon: export raw I/O routines for modules
	ipc: WARN if trying to remove ipc object which is absent
	mm: kmemleak: slob: respect SLAB_NOLEAKTRACE flag
	x86/hyperv: Fix NULL deref in set_hv_tscchange_cb() if Hyper-V setup fails
	s390/kexec: fix memory leak of ipl report buffer
	udf: Fix crash after seekdir
	btrfs: fix memory ordering between normal and ordered work functions
	parisc/sticon: fix reverse colors
	cfg80211: call cfg80211_stop_ap when switch from P2P_GO type
	drm/udl: fix control-message timeout
	drm/nouveau: use drm_dev_unplug() during device removal
	drm/i915/dp: Ensure sink rate values are always valid
	drm/amdgpu: fix set scaling mode Full/Full aspect/Center not works on vga and dvi connectors
	Revert "net: mvpp2: disable force link UP during port init procedure"
	perf/core: Avoid put_page() when GUP fails
	batman-adv: Consider fragmentation for needed_headroom
	batman-adv: Reserve needed_*room for fragments
	batman-adv: Don't always reallocate the fragmentation skb head
	ASoC: DAPM: Cover regression by kctl change notification fix
	usb: max-3421: Use driver data instead of maintaining a list of bound devices
	ice: Delete always true check of PF pointer
	tlb: mmu_gather: add tlb_flush_*_range APIs
	hugetlbfs: flush TLBs correctly after huge_pmd_unshare
	ALSA: hda: hdac_ext_stream: fix potential locking issues
	ALSA: hda: hdac_stream: fix potential locking issue in snd_hdac_stream_assign()
	Linux 5.4.162

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: If121d473ee2cf6b0d464c9c8e72e4565b83dca5e
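The first Makefile hunk below bumps SUBLEVEL from 161 to 162, which is what makes `make kernelversion` report the new release string (VERSION.PATCHLEVEL.SUBLEVEL). A minimal sketch of that derivation, using a scratch copy of the Makefile header rather than a real kernel tree (the scratch path is hypothetical; the values come from the hunk):

```shell
# Recreate the top of the 5.4.162 Makefile in a scratch file.
cat > /tmp/mini-makefile <<'EOF'
VERSION = 5
PATCHLEVEL = 4
SUBLEVEL = 162
EXTRAVERSION =
EOF

# Assemble the version string the same way `make kernelversion` does:
# $(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)
awk -F' = ' '
	$1 == "VERSION"    { v = $2 }
	$1 == "PATCHLEVEL" { p = $2 }
	$1 == "SUBLEVEL"   { s = $2 }
	END { printf "%s.%s.%s\n", v, p, s }
' /tmp/mini-makefile
# prints: 5.4.162
```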
commit fe0ed45e42
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 4
-SUBLEVEL = 161
+SUBLEVEL = 162
 EXTRAVERSION =
 NAME = Kleptomaniac Octopus
 
@@ -77,7 +77,7 @@
 		interrupt-affinity = <&cpu0>, <&cpu1>;
 	};
 
-	mpcore@19000000 {
+	mpcore-bus@19000000 {
 		compatible = "simple-bus";
 		ranges = <0x00000000 0x19000000 0x00023000>;
 		#address-cells = <1>;
@@ -217,7 +217,7 @@
 			#dma-cells = <1>;
 		};
 
-		sdio: sdhci@21000 {
+		sdio: mmc@21000 {
 			compatible = "brcm,sdhci-iproc-cygnus";
 			reg = <0x21000 0x100>;
 			interrupts = <GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>;
@@ -247,7 +247,7 @@
 
 	flash@0 {
 		/* Rev. A uses 64MB flash, Rev. B & C use 32MB flash */
-		compatible = "jedec,spi-nor", "s25fl256s1", "s25fl512s";
+		compatible = "jedec,spi-nor";
 		spi-max-frequency = <20000000>;
 		#address-cells = <1>;
 		#size-cells = <1>;
@@ -311,39 +311,6 @@
 		#thermal-sensor-cells = <1>;
 	};
 
-	thermal-zones {
-		cpu_thermal: cpu-thermal {
-			polling-delay-passive = <1000>;
-			polling-delay = <5000>;
-
-			thermal-sensors = <&tmu 0>;
-
-			trips {
-				cpu_alert: cpu-alert {
-					temperature = <85000>;
-					hysteresis = <2000>;
-					type = "passive";
-				};
-				cpu_crit: cpu-crit {
-					temperature = <95000>;
-					hysteresis = <2000>;
-					type = "critical";
-				};
-			};
-
-			cooling-maps {
-				map0 {
-					trip = <&cpu_alert>;
-					cooling-device =
-						<&cpu0 THERMAL_NO_LIMIT
-							THERMAL_NO_LIMIT>,
-						<&cpu1 THERMAL_NO_LIMIT
-							THERMAL_NO_LIMIT>;
-				};
-			};
-		};
-	};
-
 	dspi0: spi@2100000 {
 		compatible = "fsl,ls1021a-v1.0-dspi";
 		#address-cells = <1>;
@@ -984,4 +951,37 @@
 		};
 
 	};
+
+	thermal-zones {
+		cpu_thermal: cpu-thermal {
+			polling-delay-passive = <1000>;
+			polling-delay = <5000>;
+
+			thermal-sensors = <&tmu 0>;
+
+			trips {
+				cpu_alert: cpu-alert {
+					temperature = <85000>;
+					hysteresis = <2000>;
+					type = "passive";
+				};
+				cpu_crit: cpu-crit {
+					temperature = <95000>;
+					hysteresis = <2000>;
+					type = "critical";
+				};
+			};
+
+			cooling-maps {
+				map0 {
+					trip = <&cpu_alert>;
+					cooling-device =
+						<&cpu0 THERMAL_NO_LIMIT
+							THERMAL_NO_LIMIT>,
+						<&cpu1 THERMAL_NO_LIMIT
+							THERMAL_NO_LIMIT>;
+				};
+			};
+		};
+	};
 };
@@ -29,7 +29,7 @@
 		compatible = "smsc,lan9221","smsc,lan9115";
 		bank-width = <2>;
 
-		gpmc,mux-add-data;
+		gpmc,mux-add-data = <0>;
 		gpmc,cs-on-ns = <0>;
 		gpmc,cs-rd-off-ns = <42>;
 		gpmc,cs-wr-off-ns = <36>;
@@ -22,7 +22,7 @@
 		compatible = "smsc,lan9221","smsc,lan9115";
 		bank-width = <2>;
 
-		gpmc,mux-add-data;
+		gpmc,mux-add-data = <0>;
 		gpmc,cs-on-ns = <0>;
 		gpmc,cs-rd-off-ns = <42>;
 		gpmc,cs-wr-off-ns = <36>;
@@ -637,56 +637,56 @@
 		};
 
 		cluster1_core0_watchdog: wdt@c000000 {
-			compatible = "arm,sp805-wdt", "arm,primecell";
+			compatible = "arm,sp805", "arm,primecell";
 			reg = <0x0 0xc000000 0x0 0x1000>;
 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
 			clock-names = "wdog_clk", "apb_pclk";
 		};
 
 		cluster1_core1_watchdog: wdt@c010000 {
-			compatible = "arm,sp805-wdt", "arm,primecell";
+			compatible = "arm,sp805", "arm,primecell";
 			reg = <0x0 0xc010000 0x0 0x1000>;
 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
 			clock-names = "wdog_clk", "apb_pclk";
 		};
 
 		cluster1_core2_watchdog: wdt@c020000 {
-			compatible = "arm,sp805-wdt", "arm,primecell";
+			compatible = "arm,sp805", "arm,primecell";
 			reg = <0x0 0xc020000 0x0 0x1000>;
 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
 			clock-names = "wdog_clk", "apb_pclk";
 		};
 
 		cluster1_core3_watchdog: wdt@c030000 {
-			compatible = "arm,sp805-wdt", "arm,primecell";
+			compatible = "arm,sp805", "arm,primecell";
 			reg = <0x0 0xc030000 0x0 0x1000>;
 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
 			clock-names = "wdog_clk", "apb_pclk";
 		};
 
 		cluster2_core0_watchdog: wdt@c100000 {
-			compatible = "arm,sp805-wdt", "arm,primecell";
+			compatible = "arm,sp805", "arm,primecell";
 			reg = <0x0 0xc100000 0x0 0x1000>;
 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
 			clock-names = "wdog_clk", "apb_pclk";
 		};
 
 		cluster2_core1_watchdog: wdt@c110000 {
-			compatible = "arm,sp805-wdt", "arm,primecell";
+			compatible = "arm,sp805", "arm,primecell";
 			reg = <0x0 0xc110000 0x0 0x1000>;
 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
 			clock-names = "wdog_clk", "apb_pclk";
 		};
 
 		cluster2_core2_watchdog: wdt@c120000 {
-			compatible = "arm,sp805-wdt", "arm,primecell";
+			compatible = "arm,sp805", "arm,primecell";
 			reg = <0x0 0xc120000 0x0 0x1000>;
 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
 			clock-names = "wdog_clk", "apb_pclk";
 		};
 
 		cluster2_core3_watchdog: wdt@c130000 {
-			compatible = "arm,sp805-wdt", "arm,primecell";
+			compatible = "arm,sp805", "arm,primecell";
 			reg = <0x0 0xc130000 0x0 0x1000>;
 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
 			clock-names = "wdog_clk", "apb_pclk";
@@ -227,56 +227,56 @@
 		};
 
 		cluster1_core0_watchdog: wdt@c000000 {
-			compatible = "arm,sp805-wdt", "arm,primecell";
+			compatible = "arm,sp805", "arm,primecell";
 			reg = <0x0 0xc000000 0x0 0x1000>;
 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
 			clock-names = "wdog_clk", "apb_pclk";
 		};
 
 		cluster1_core1_watchdog: wdt@c010000 {
-			compatible = "arm,sp805-wdt", "arm,primecell";
+			compatible = "arm,sp805", "arm,primecell";
 			reg = <0x0 0xc010000 0x0 0x1000>;
 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
 			clock-names = "wdog_clk", "apb_pclk";
 		};
 
 		cluster2_core0_watchdog: wdt@c100000 {
-			compatible = "arm,sp805-wdt", "arm,primecell";
+			compatible = "arm,sp805", "arm,primecell";
 			reg = <0x0 0xc100000 0x0 0x1000>;
 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
 			clock-names = "wdog_clk", "apb_pclk";
 		};
 
 		cluster2_core1_watchdog: wdt@c110000 {
-			compatible = "arm,sp805-wdt", "arm,primecell";
+			compatible = "arm,sp805", "arm,primecell";
 			reg = <0x0 0xc110000 0x0 0x1000>;
 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
 			clock-names = "wdog_clk", "apb_pclk";
 		};
 
 		cluster3_core0_watchdog: wdt@c200000 {
-			compatible = "arm,sp805-wdt", "arm,primecell";
+			compatible = "arm,sp805", "arm,primecell";
 			reg = <0x0 0xc200000 0x0 0x1000>;
 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
 			clock-names = "wdog_clk", "apb_pclk";
 		};
 
 		cluster3_core1_watchdog: wdt@c210000 {
-			compatible = "arm,sp805-wdt", "arm,primecell";
+			compatible = "arm,sp805", "arm,primecell";
 			reg = <0x0 0xc210000 0x0 0x1000>;
 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
 			clock-names = "wdog_clk", "apb_pclk";
 		};
 
 		cluster4_core0_watchdog: wdt@c300000 {
-			compatible = "arm,sp805-wdt", "arm,primecell";
+			compatible = "arm,sp805", "arm,primecell";
 			reg = <0x0 0xc300000 0x0 0x1000>;
 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
 			clock-names = "wdog_clk", "apb_pclk";
 		};
 
 		cluster4_core1_watchdog: wdt@c310000 {
-			compatible = "arm,sp805-wdt", "arm,primecell";
+			compatible = "arm,sp805", "arm,primecell";
 			reg = <0x0 0xc310000 0x0 0x1000>;
 			clocks = <&clockgen 4 3>, <&clockgen 4 3>;
 			clock-names = "wdog_clk", "apb_pclk";
@@ -1100,7 +1100,7 @@
 		};
 
 		watchdog0: watchdog@e8a06000 {
-			compatible = "arm,sp805-wdt", "arm,primecell";
+			compatible = "arm,sp805", "arm,primecell";
 			reg = <0x0 0xe8a06000 0x0 0x1000>;
 			interrupts = <GIC_SPI 44 IRQ_TYPE_LEVEL_HIGH>;
 			clocks = <&crg_ctrl HI3660_OSC32K>;
@@ -1108,7 +1108,7 @@
 		};
 
 		watchdog1: watchdog@e8a07000 {
-			compatible = "arm,sp805-wdt", "arm,primecell";
+			compatible = "arm,sp805", "arm,primecell";
 			reg = <0x0 0xe8a07000 0x0 0x1000>;
 			interrupts = <GIC_SPI 45 IRQ_TYPE_LEVEL_HIGH>;
 			clocks = <&crg_ctrl HI3660_OSC32K>;
|
@ -839,7 +839,7 @@
|
||||
};
|
||||
|
||||
watchdog0: watchdog@f8005000 {
|
||||
compatible = "arm,sp805-wdt", "arm,primecell";
|
||||
compatible = "arm,sp805", "arm,primecell";
|
||||
reg = <0x0 0xf8005000 0x0 0x1000>;
|
||||
interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>;
|
||||
clocks = <&ao_ctrl HI6220_WDT0_PCLK>;
|
||||
|
@@ -246,38 +246,42 @@
 		LITTLE_CPU_SLEEP_0: cpu-sleep-0-0 {
 			compatible = "arm,idle-state";
 			idle-state-name = "little-retention";
 			/* CPU Retention (C2D), L2 Active */
 			arm,psci-suspend-param = <0x00000002>;
 			entry-latency-us = <81>;
 			exit-latency-us = <86>;
-			min-residency-us = <200>;
+			min-residency-us = <504>;
 		};
 
 		LITTLE_CPU_SLEEP_1: cpu-sleep-0-1 {
 			compatible = "arm,idle-state";
 			idle-state-name = "little-power-collapse";
 			/* CPU + L2 Power Collapse (C3, D4) */
 			arm,psci-suspend-param = <0x40000003>;
-			entry-latency-us = <273>;
-			exit-latency-us = <612>;
-			min-residency-us = <1000>;
+			entry-latency-us = <814>;
+			exit-latency-us = <4562>;
+			min-residency-us = <9183>;
 			local-timer-stop;
 		};
 
 		BIG_CPU_SLEEP_0: cpu-sleep-1-0 {
 			compatible = "arm,idle-state";
 			idle-state-name = "big-retention";
 			/* CPU Retention (C2D), L2 Active */
 			arm,psci-suspend-param = <0x00000002>;
 			entry-latency-us = <79>;
 			exit-latency-us = <82>;
-			min-residency-us = <200>;
+			min-residency-us = <1302>;
 		};
 
 		BIG_CPU_SLEEP_1: cpu-sleep-1-1 {
 			compatible = "arm,idle-state";
 			idle-state-name = "big-power-collapse";
 			/* CPU + L2 Power Collapse (C3, D4) */
 			arm,psci-suspend-param = <0x40000003>;
-			entry-latency-us = <336>;
-			exit-latency-us = <525>;
-			min-residency-us = <1000>;
+			entry-latency-us = <724>;
+			exit-latency-us = <2027>;
+			min-residency-us = <9419>;
 			local-timer-stop;
 		};
 	};
@@ -131,7 +131,7 @@
 			reg = <0>;
 
 			partition@0 {
-				label = "data";
+				label = "spi0-data";
 				reg = <0x0 0x100000>;
 			};
 		};
@@ -149,7 +149,7 @@
 			reg = <0>;
 
 			partition@0 {
-				label = "data";
+				label = "spi1-data";
 				reg = <0x0 0x84000>;
 			};
 		};
@@ -582,7 +582,7 @@
 		};
 
 		uart0: serial@ff000000 {
-			compatible = "cdns,uart-r1p12", "xlnx,xuartps";
+			compatible = "xlnx,zynqmp-uart", "cdns,uart-r1p12";
 			status = "disabled";
 			interrupt-parent = <&gic>;
 			interrupts = <0 21 4>;
@@ -591,7 +591,7 @@
 		};
 
 		uart1: serial@ff010000 {
-			compatible = "cdns,uart-r1p12", "xlnx,xuartps";
+			compatible = "xlnx,zynqmp-uart", "cdns,uart-r1p12";
 			status = "disabled";
 			interrupt-parent = <&gic>;
 			interrupts = <0 22 4>;
@@ -43,7 +43,8 @@ cc32-as-instr = $(call try-run,\
 # As a result we set our own flags here.
 
 # KBUILD_CPPFLAGS and NOSTDINC_FLAGS from top-level Makefile
-VDSO_CPPFLAGS := -D__KERNEL__ -nostdinc -isystem $(shell $(CC_COMPAT) -print-file-name=include)
+VDSO_CPPFLAGS := -D__KERNEL__ -nostdinc
+VDSO_CPPFLAGS += -isystem $(shell $(CC_COMPAT) -print-file-name=include 2>/dev/null)
 VDSO_CPPFLAGS += $(LINUXINCLUDE)
 
 # Common C and assembly flags
@@ -27,6 +27,7 @@ void __raw_readsw(const void __iomem *addr, void *data, int len)
 		*dst++ = *src;
 
 }
+EXPORT_SYMBOL(__raw_readsw);
 
 /*
  * __raw_writesw - read words a short at a time
@@ -47,6 +48,7 @@ void __raw_writesw(void __iomem *addr, const void *data, int len)
 
 
 }
+EXPORT_SYMBOL(__raw_writesw);
 
 /* Pretty sure len is pre-adjusted for the length of the access already */
 void __raw_readsl(const void __iomem *addr, void *data, int len)
@@ -62,6 +64,7 @@ void __raw_readsl(const void __iomem *addr, void *data, int len)
 
 
 }
+EXPORT_SYMBOL(__raw_readsl);
 
 void __raw_writesl(void __iomem *addr, const void *data, int len)
 {
@@ -76,3 +79,4 @@ void __raw_writesl(void __iomem *addr, const void *data, int len)
 
 
 }
+EXPORT_SYMBOL(__raw_writesl);
@@ -294,6 +294,9 @@ config BCM63XX
 	select SYS_SUPPORTS_32BIT_KERNEL
 	select SYS_SUPPORTS_BIG_ENDIAN
 	select SYS_HAS_EARLY_PRINTK
+	select SYS_HAS_CPU_BMIPS32_3300
+	select SYS_HAS_CPU_BMIPS4350
+	select SYS_HAS_CPU_BMIPS4380
 	select SWAP_IO_SPACE
 	select GPIOLIB
 	select HAVE_CLK
@@ -381,6 +381,12 @@ void clk_disable(struct clk *clk)
 
 EXPORT_SYMBOL(clk_disable);
 
+struct clk *clk_get_parent(struct clk *clk)
+{
+	return NULL;
+}
+EXPORT_SYMBOL(clk_get_parent);
+
 unsigned long clk_get_rate(struct clk *clk)
 {
 	if (!clk)
@@ -75,7 +75,7 @@ static unsigned int __init gen_fdt_mem_array(
 __init int yamon_dt_append_memory(void *fdt,
 				  const struct yamon_mem_region *regions)
 {
-	unsigned long phys_memsize, memsize;
+	unsigned long phys_memsize = 0, memsize;
 	__be32 mem_array[2 * MAX_MEM_ARRAY_ENTRIES];
 	unsigned int mem_entries;
 	int i, err, mem_off;
@@ -158,6 +158,12 @@ void clk_deactivate(struct clk *clk)
 }
 EXPORT_SYMBOL(clk_deactivate);
 
+struct clk *clk_get_parent(struct clk *clk)
+{
+	return NULL;
+}
+EXPORT_SYMBOL(clk_get_parent);
+
 static inline u32 get_counter_resolution(void)
 {
 	u32 res;
@@ -18,14 +18,14 @@ static int a20r_set_periodic(struct clock_event_device *evt)
 {
 	*(volatile u8 *)(A20R_PT_CLOCK_BASE + 12) = 0x34;
 	wmb();
-	*(volatile u8 *)(A20R_PT_CLOCK_BASE + 0) = SNI_COUNTER0_DIV;
+	*(volatile u8 *)(A20R_PT_CLOCK_BASE + 0) = SNI_COUNTER0_DIV & 0xff;
 	wmb();
 	*(volatile u8 *)(A20R_PT_CLOCK_BASE + 0) = SNI_COUNTER0_DIV >> 8;
 	wmb();
 
 	*(volatile u8 *)(A20R_PT_CLOCK_BASE + 12) = 0xb4;
 	wmb();
-	*(volatile u8 *)(A20R_PT_CLOCK_BASE + 8) = SNI_COUNTER2_DIV;
+	*(volatile u8 *)(A20R_PT_CLOCK_BASE + 8) = SNI_COUNTER2_DIV & 0xff;
 	wmb();
 	*(volatile u8 *)(A20R_PT_CLOCK_BASE + 8) = SNI_COUNTER2_DIV >> 8;
 	wmb();
@@ -35,7 +35,7 @@
 		};
 	};
 
-	memory {
+	memory@0 {
 		device_type = "memory";
 		reg = <0x00000000 0x08000000>;	// 128MB
 	};
@@ -16,7 +16,7 @@
 	model = "intercontrol,digsy-mtc";
 	compatible = "intercontrol,digsy-mtc";
 
-	memory {
+	memory@0 {
 		reg = <0x00000000 0x02000000>;	// 32MB
 	};
 
@@ -32,7 +32,7 @@
 		};
 	};
 
-	memory {
+	memory@0 {
 		device_type = "memory";
 		reg = <0x00000000 0x04000000>;	// 64MB
 	};
@@ -31,7 +31,7 @@
 		led4 { gpios = <&gpio_simple 2 1>; };
 	};
 
-	memory {
+	memory@0 {
 		reg = <0x00000000 0x10000000>;	// 256MB
 	};
 
@@ -32,7 +32,7 @@
 		};
 	};
 
-	memory {
+	memory@0 {
 		reg = <0x00000000 0x08000000>;	// 128MB RAM
 	};
 
@@ -33,7 +33,7 @@
 		};
 	};
 
-	memory: memory {
+	memory: memory@0 {
 		device_type = "memory";
 		reg = <0x00000000 0x04000000>;	// 64MB
 	};
@@ -12,7 +12,7 @@
 	model = "ifm,o2d";
 	compatible = "ifm,o2d";
 
-	memory {
+	memory@0 {
 		reg = <0x00000000 0x08000000>;	// 128MB
 	};
 
@@ -19,7 +19,7 @@
 	model = "ifm,o2d";
 	compatible = "ifm,o2d";
 
-	memory {
+	memory@0 {
 		reg = <0x00000000 0x04000000>;	// 64MB
 	};
 
@@ -12,7 +12,7 @@
 	model = "ifm,o2dnt2";
 	compatible = "ifm,o2d";
 
-	memory {
+	memory@0 {
 		reg = <0x00000000 0x08000000>;	// 128MB
 	};
 
@@ -12,7 +12,7 @@
 	model = "ifm,o3dnt";
 	compatible = "ifm,o2d";
 
-	memory {
+	memory@0 {
 		reg = <0x00000000 0x04000000>;	// 64MB
 	};
 
@@ -22,7 +22,7 @@
 	model = "phytec,pcm032";
 	compatible = "phytec,pcm032";
 
-	memory {
+	memory@0 {
 		reg = <0x00000000 0x08000000>;	// 128MB
 	};
 
@@ -32,7 +32,7 @@
 		};
 	};
 
-	memory {
+	memory@0 {
 		device_type = "memory";
 		reg = <0x00000000 0x04000000>;	// 64MB
 	};
@@ -2535,7 +2535,7 @@ hcall_real_table:
 	.globl	hcall_real_table_end
 hcall_real_table_end:
 
-_GLOBAL(kvmppc_h_set_xdabr)
+_GLOBAL_TOC(kvmppc_h_set_xdabr)
 EXPORT_SYMBOL_GPL(kvmppc_h_set_xdabr)
 	andi.	r0, r5, DABRX_USER | DABRX_KERNEL
 	beq	6f
@@ -2545,7 +2545,7 @@ EXPORT_SYMBOL_GPL(kvmppc_h_set_xdabr)
 6:	li	r3, H_PARAMETER
 	blr
 
-_GLOBAL(kvmppc_h_set_dabr)
+_GLOBAL_TOC(kvmppc_h_set_dabr)
 EXPORT_SYMBOL_GPL(kvmppc_h_set_dabr)
 	li	r5, DABRX_USER | DABRX_KERNEL
 3:
@@ -11,7 +11,7 @@
 #include <asm/export.h>
 
 #define DCR_ACCESS_PROLOG(table) \
-	cmpli	cr0,r3,1024;	 \
+	cmplwi	cr0,r3,1024;	 \
 	rlwinm	r3,r3,4,18,27;	 \
 	lis	r5,table@h;	 \
 	ori	r5,r5,table@l;	 \
@@ -74,6 +74,12 @@ void *kexec_file_add_components(struct kimage *image,
 int arch_kexec_do_relocs(int r_type, void *loc, unsigned long val,
 			 unsigned long addr);
 
+#define ARCH_HAS_KIMAGE_ARCH
+
+struct kimage_arch {
+	void *ipl_buf;
+};
+
 extern const struct kexec_file_ops s390_kexec_image_ops;
 extern const struct kexec_file_ops s390_kexec_elf_ops;
 
@@ -1783,7 +1783,7 @@ void *ipl_report_finish(struct ipl_report *report)
 
 	buf = vzalloc(report->size);
 	if (!buf)
-		return ERR_PTR(-ENOMEM);
+		goto out;
 	ptr = buf;
 
 	memcpy(ptr, report->ipib, report->ipib->hdr.len);
@@ -1822,6 +1822,7 @@ void *ipl_report_finish(struct ipl_report *report)
 	}
 
 	BUG_ON(ptr > buf + report->size);
+out:
 	return buf;
 }
 
@@ -12,6 +12,7 @@
 #include <linux/kexec.h>
 #include <linux/module_signature.h>
 #include <linux/verification.h>
+#include <linux/vmalloc.h>
 #include <asm/boot_data.h>
 #include <asm/ipl.h>
 #include <asm/setup.h>
@@ -170,6 +171,7 @@ static int kexec_file_add_ipl_report(struct kimage *image,
 	struct kexec_buf buf;
 	unsigned long addr;
 	void *ptr, *end;
+	int ret;
 
 	buf.image = image;
 
@@ -199,9 +201,13 @@ static int kexec_file_add_ipl_report(struct kimage *image,
 		ptr += len;
 	}
 
+	ret = -ENOMEM;
 	buf.buffer = ipl_report_finish(data->report);
+	if (!buf.buffer)
+		goto out;
 	buf.bufsz = data->report->size;
 	buf.memsz = buf.bufsz;
+	image->arch.ipl_buf = buf.buffer;
 
 	data->memsz += buf.memsz;
 
@@ -209,7 +215,9 @@ static int kexec_file_add_ipl_report(struct kimage *image,
 		data->kernel_buf + offsetof(struct lowcore, ipl_parmblock_ptr);
 	*lc_ipl_parmblock_ptr = (__u32)buf.mem;
 
-	return kexec_add_buffer(&buf);
+	ret = kexec_add_buffer(&buf);
+out:
+	return ret;
 }
 
 void *kexec_file_add_components(struct kimage *image,
@@ -321,3 +329,11 @@ int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
 
 	return kexec_image_probe_default(image, buf, buf_len);
 }
+
+int arch_kimage_file_post_load_cleanup(struct kimage *image)
+{
+	vfree(image->arch.ipl_buf);
+	image->arch.ipl_buf = NULL;
+
+	return kexec_image_post_load_cleanup_default(image);
+}
@@ -58,6 +58,7 @@ config DUMP_CODE
 
 config DWARF_UNWINDER
 	bool "Enable the DWARF unwinder for stacktraces"
+	depends on DEBUG_KERNEL
 	select FRAME_POINTER
 	depends on SUPERH32
 	default n
@@ -13,6 +13,14 @@
 #ifndef _SFP_MACHINE_H
 #define _SFP_MACHINE_H
 
+#ifdef __BIG_ENDIAN__
+#define __BYTE_ORDER __BIG_ENDIAN
+#define __LITTLE_ENDIAN 0
+#else
+#define __BYTE_ORDER __LITTLE_ENDIAN
+#define __BIG_ENDIAN 0
+#endif
+
 #define _FP_W_TYPE_SIZE		32
 #define _FP_W_TYPE		unsigned long
 #define _FP_WS_TYPE		signed long
@@ -73,8 +73,9 @@ static void shx3_prepare_cpus(unsigned int max_cpus)
 	BUILD_BUG_ON(SMP_MSG_NR >= 8);
 
 	for (i = 0; i < SMP_MSG_NR; i++)
-		request_irq(104 + i, ipi_interrupt_handler,
-			    IRQF_PERCPU, "IPI", (void *)(long)i);
+		if (request_irq(104 + i, ipi_interrupt_handler,
+				IRQF_PERCPU, "IPI", (void *)(long)i))
+			pr_err("Failed to request irq %d\n", i);
 
 	for (i = 0; i < max_cpus; i++)
 		set_cpu_present(i, true);
@@ -467,109 +467,6 @@ static int fpu_emulate(u16 code, struct sh_fpu_soft_struct *fregs, struct pt_reg
 	return id_sys(fregs, regs, code);
 }
 
-/**
- *	denormal_to_double - Given denormalized float number,
- *	                     store double float
- *
- *	@fpu: Pointer to sh_fpu_soft structure
- *	@n: Index to FP register
- */
-static void denormal_to_double(struct sh_fpu_soft_struct *fpu, int n)
-{
-	unsigned long du, dl;
-	unsigned long x = fpu->fpul;
-	int exp = 1023 - 126;
-
-	if (x != 0 && (x & 0x7f800000) == 0) {
-		du = (x & 0x80000000);
-		while ((x & 0x00800000) == 0) {
-			x <<= 1;
-			exp--;
-		}
-		x &= 0x007fffff;
-		du |= (exp << 20) | (x >> 3);
-		dl = x << 29;
-
-		fpu->fp_regs[n] = du;
-		fpu->fp_regs[n+1] = dl;
-	}
-}
-
-/**
- *	ieee_fpe_handler - Handle denormalized number exception
- *
- *	@regs: Pointer to register structure
- *
- *	Returns 1 when it's handled (should not cause exception).
- */
-static int ieee_fpe_handler(struct pt_regs *regs)
-{
-	unsigned short insn = *(unsigned short *)regs->pc;
-	unsigned short finsn;
-	unsigned long nextpc;
-	int nib[4] = {
-		(insn >> 12) & 0xf,
-		(insn >> 8) & 0xf,
-		(insn >> 4) & 0xf,
-		insn & 0xf};
-
-	if (nib[0] == 0xb ||
-	    (nib[0] == 0x4 && nib[2] == 0x0 && nib[3] == 0xb)) /* bsr & jsr */
-		regs->pr = regs->pc + 4;
-
-	if (nib[0] == 0xa || nib[0] == 0xb) { /* bra & bsr */
-		nextpc = regs->pc + 4 + ((short) ((insn & 0xfff) << 4) >> 3);
-		finsn = *(unsigned short *) (regs->pc + 2);
-	} else if (nib[0] == 0x8 && nib[1] == 0xd) { /* bt/s */
-		if (regs->sr & 1)
-			nextpc = regs->pc + 4 + ((char) (insn & 0xff) << 1);
-		else
-			nextpc = regs->pc + 4;
-		finsn = *(unsigned short *) (regs->pc + 2);
-	} else if (nib[0] == 0x8 && nib[1] == 0xf) { /* bf/s */
-		if (regs->sr & 1)
-			nextpc = regs->pc + 4;
-		else
-			nextpc = regs->pc + 4 + ((char) (insn & 0xff) << 1);
-		finsn = *(unsigned short *) (regs->pc + 2);
-	} else if (nib[0] == 0x4 && nib[3] == 0xb &&
-		 (nib[2] == 0x0 || nib[2] == 0x2)) { /* jmp & jsr */
-		nextpc = regs->regs[nib[1]];
-		finsn = *(unsigned short *) (regs->pc + 2);
-	} else if (nib[0] == 0x0 && nib[3] == 0x3 &&
-		 (nib[2] == 0x0 || nib[2] == 0x2)) { /* braf & bsrf */
-		nextpc = regs->pc + 4 + regs->regs[nib[1]];
-		finsn = *(unsigned short *) (regs->pc + 2);
-	} else if (insn == 0x000b) { /* rts */
-		nextpc = regs->pr;
-		finsn = *(unsigned short *) (regs->pc + 2);
-	} else {
-		nextpc = regs->pc + 2;
-		finsn = insn;
-	}
-
-	if ((finsn & 0xf1ff) == 0xf0ad) { /* fcnvsd */
-		struct task_struct *tsk = current;
-
-		if ((tsk->thread.xstate->softfpu.fpscr & (1 << 17))) {
-			/* FPU error */
-			denormal_to_double (&tsk->thread.xstate->softfpu,
-					    (finsn >> 8) & 0xf);
-			tsk->thread.xstate->softfpu.fpscr &=
-				~(FPSCR_CAUSE_MASK | FPSCR_FLAG_MASK);
-			task_thread_info(tsk)->status |= TS_USEDFPU;
-		} else {
-			force_sig_fault(SIGFPE, FPE_FLTINV,
-					(void __user *)regs->pc);
-		}
-
-		regs->pc = nextpc;
-		return 1;
-	}
-
-	return 0;
-}
-
 /**
  *	fpu_init - Initialize FPU registers
  *	@fpu: Pointer to software emulated FPU registers.
@@ -3479,6 +3479,9 @@ static int skx_cha_hw_config(struct intel_uncore_box *box, struct perf_event *ev
 	struct hw_perf_event_extra *reg1 = &event->hw.extra_reg;
 	struct extra_reg *er;
 	int idx = 0;
+	/* Any of the CHA events may be filtered by Thread/Core-ID.*/
+	if (event->hw.config & SNBEP_CBO_PMON_CTL_TID_EN)
+		idx = SKX_CHA_MSR_PMON_BOX_FILTER_TID;
 
 	for (er = skx_uncore_cha_extra_regs; er->msr; er++) {
 		if (er->event != (event->hw.config & er->config_mask))
@@ -3546,6 +3549,7 @@ static struct event_constraint skx_uncore_iio_constraints[] = {
 	UNCORE_EVENT_CONSTRAINT(0xc0, 0xc),
 	UNCORE_EVENT_CONSTRAINT(0xc5, 0xc),
 	UNCORE_EVENT_CONSTRAINT(0xd4, 0xc),
+	UNCORE_EVENT_CONSTRAINT(0xd5, 0xc),
 	EVENT_CONSTRAINT_END
 };
 
@@ -163,6 +163,9 @@ void set_hv_tscchange_cb(void (*cb)(void))
 		return;
 	}
 
+	if (!hv_vp_index)
+		return;
+
 	hv_reenlightenment_cb = cb;
 
 	/* Make sure callback is registered before we write to MSRs */
@@ -98,12 +98,15 @@ static struct firmware_cache fw_cache;
 extern struct builtin_fw __start_builtin_fw[];
 extern struct builtin_fw __end_builtin_fw[];
 
-static void fw_copy_to_prealloc_buf(struct firmware *fw,
+static bool fw_copy_to_prealloc_buf(struct firmware *fw,
 				    void *buf, size_t size)
 {
-	if (!buf || size < fw->size)
-		return;
+	if (!buf)
+		return true;
+	if (size < fw->size)
+		return false;
 	memcpy(buf, fw->data, fw->size);
+	return true;
 }
 
 static bool fw_get_builtin_firmware(struct firmware *fw, const char *name,
@@ -115,9 +118,7 @@ static bool fw_get_builtin_firmware(struct firmware *fw, const char *name,
 		if (strcmp(name, b_fw->name) == 0) {
 			fw->size = b_fw->size;
 			fw->data = b_fw->data;
-			fw_copy_to_prealloc_buf(fw, buf, size);
-
-			return true;
+			return fw_copy_to_prealloc_buf(fw, buf, size);
 		}
 	}
 
@@ -48,6 +48,8 @@ static DEFINE_SPINLOCK(aspeed_g6_clk_lock);
 static struct clk_hw_onecell_data *aspeed_g6_clk_data;
 
 static void __iomem *scu_g6_base;
+/* AST2600 revision: A0, A1, A2, etc */
+static u8 soc_rev;
 
 /*
  * Clocks marked with CLK_IS_CRITICAL:
@@ -190,9 +192,8 @@ static struct clk_hw *ast2600_calc_pll(const char *name, u32 val)
 static struct clk_hw *ast2600_calc_apll(const char *name, u32 val)
 {
 	unsigned int mult, div;
-	u32 chip_id = readl(scu_g6_base + ASPEED_G6_SILICON_REV);
 
-	if (((chip_id & CHIP_REVISION_ID) >> 16) >= 2) {
+	if (soc_rev >= 2) {
 		if (val & BIT(24)) {
 			/* Pass through mode */
 			mult = div = 1;
@@ -664,7 +665,7 @@ static const u32 ast2600_a1_axi_ahb200_tbl[] = {
 static void __init aspeed_g6_cc(struct regmap *map)
 {
 	struct clk_hw *hw;
-	u32 val, div, divbits, chip_id, axi_div, ahb_div;
+	u32 val, div, divbits, axi_div, ahb_div;
 
 	clk_hw_register_fixed_rate(NULL, "clkin", NULL, 0, 25000000);
 
@@ -695,8 +696,7 @@ static void __init aspeed_g6_cc(struct regmap *map)
 		axi_div = 2;
 
 	divbits = (val >> 11) & 0x3;
-	regmap_read(map, ASPEED_G6_SILICON_REV, &chip_id);
-	if (chip_id & BIT(16)) {
+	if (soc_rev >= 1) {
 		if (!divbits) {
 			ahb_div = ast2600_a1_axi_ahb200_tbl[(val >> 8) & 0x3];
 			if (val & BIT(16))
@@ -741,6 +741,8 @@ static void __init aspeed_g6_cc_init(struct device_node *np)
 	if (!scu_g6_base)
 		return;
 
+	soc_rev = (readl(scu_g6_base + ASPEED_G6_SILICON_REV) & CHIP_REVISION_ID) >> 16;
+
 	aspeed_g6_clk_data = kzalloc(struct_size(aspeed_g6_clk_data, hws,
 				      ASPEED_G6_NUM_CLKS), GFP_KERNEL);
 	if (!aspeed_g6_clk_data)
@@ -161,7 +161,6 @@ static void __init imx6ul_clocks_init(struct device_node *ccm_node)
 	hws[IMX6UL_PLL5_BYPASS] = imx_clk_hw_mux_flags("pll5_bypass", base + 0xa0, 16, 1, pll5_bypass_sels, ARRAY_SIZE(pll5_bypass_sels), CLK_SET_RATE_PARENT);
 	hws[IMX6UL_PLL6_BYPASS] = imx_clk_hw_mux_flags("pll6_bypass", base + 0xe0, 16, 1, pll6_bypass_sels, ARRAY_SIZE(pll6_bypass_sels), CLK_SET_RATE_PARENT);
 	hws[IMX6UL_PLL7_BYPASS] = imx_clk_hw_mux_flags("pll7_bypass", base + 0x20, 16, 1, pll7_bypass_sels, ARRAY_SIZE(pll7_bypass_sels), CLK_SET_RATE_PARENT);
-	hws[IMX6UL_CLK_CSI_SEL] = imx_clk_hw_mux_flags("csi_sel", base + 0x3c, 9, 2, csi_sels, ARRAY_SIZE(csi_sels), CLK_SET_RATE_PARENT);
 
 	/* Do not bypass PLLs initially */
 	clk_set_parent(hws[IMX6UL_PLL1_BYPASS]->clk, hws[IMX6UL_CLK_PLL1]->clk);
@@ -270,6 +269,7 @@ static void __init imx6ul_clocks_init(struct device_node *ccm_node)
 	hws[IMX6UL_CLK_ECSPI_SEL]	  = imx_clk_hw_mux("ecspi_sel",	base + 0x38, 18, 1, ecspi_sels, ARRAY_SIZE(ecspi_sels));
 	hws[IMX6UL_CLK_LCDIF_PRE_SEL]	  = imx_clk_hw_mux_flags("lcdif_pre_sel", base + 0x38, 15, 3, lcdif_pre_sels, ARRAY_SIZE(lcdif_pre_sels), CLK_SET_RATE_PARENT);
 	hws[IMX6UL_CLK_LCDIF_SEL]	  = imx_clk_hw_mux("lcdif_sel",	base + 0x38, 9, 3, lcdif_sels, ARRAY_SIZE(lcdif_sels));
+	hws[IMX6UL_CLK_CSI_SEL]		  = imx_clk_hw_mux("csi_sel", base + 0x3c, 9, 2, csi_sels, ARRAY_SIZE(csi_sels));
 
 	hws[IMX6UL_CLK_LDB_DI0_DIV_SEL]	  = imx_clk_hw_mux("ldb_di0", base + 0x20, 10, 1, ldb_di0_div_sels, ARRAY_SIZE(ldb_di0_div_sels));
 	hws[IMX6UL_CLK_LDB_DI1_DIV_SEL]	  = imx_clk_hw_mux("ldb_di1", base + 0x20, 11, 1, ldb_di1_div_sels, ARRAY_SIZE(ldb_di1_div_sels));
@@ -426,15 +426,15 @@ ingenic_clk_calc_div(const struct ingenic_cgu_clk_info *clk_info,
 	}
 
 	/* Impose hardware constraints */
-	div = min_t(unsigned, div, 1 << clk_info->div.bits);
-	div = max_t(unsigned, div, 1);
+	div = clamp_t(unsigned int, div, clk_info->div.div,
+		      clk_info->div.div << clk_info->div.bits);
 
 	/*
 	 * If the divider value itself must be divided before being written to
 	 * the divider register, we must ensure we don't have any bits set that
 	 * would be lost as a result of doing so.
 	 */
-	div /= clk_info->div.div;
+	div = DIV_ROUND_UP(div, clk_info->div.div);
 	div *= clk_info->div.div;
 
 	return div;
@@ -2937,20 +2937,6 @@ static struct clk_branch gcc_smmu_aggre0_ahb_clk = {
 	},
 };
 
-static struct clk_branch gcc_aggre1_pnoc_ahb_clk = {
-	.halt_reg = 0x82014,
-	.clkr = {
-		.enable_reg = 0x82014,
-		.enable_mask = BIT(0),
-		.hw.init = &(struct clk_init_data){
-			.name = "gcc_aggre1_pnoc_ahb_clk",
-			.parent_names = (const char *[]){ "periph_noc_clk_src" },
-			.num_parents = 1,
-			.ops = &clk_branch2_ops,
-		},
-	},
-};
-
 static struct clk_branch gcc_aggre2_ufs_axi_clk = {
 	.halt_reg = 0x83014,
 	.clkr = {
@@ -3453,7 +3439,6 @@ static struct clk_regmap *gcc_msm8996_clocks[] = {
 	[GCC_AGGRE0_CNOC_AHB_CLK] = &gcc_aggre0_cnoc_ahb_clk.clkr,
 	[GCC_SMMU_AGGRE0_AXI_CLK] = &gcc_smmu_aggre0_axi_clk.clkr,
 	[GCC_SMMU_AGGRE0_AHB_CLK] = &gcc_smmu_aggre0_ahb_clk.clkr,
-	[GCC_AGGRE1_PNOC_AHB_CLK] = &gcc_aggre1_pnoc_ahb_clk.clkr,
 	[GCC_AGGRE2_UFS_AXI_CLK] = &gcc_aggre2_ufs_axi_clk.clkr,
 	[GCC_AGGRE2_USB3_AXI_CLK] = &gcc_aggre2_usb3_axi_clk.clkr,
 	[GCC_QSPI_AHB_CLK] = &gcc_qspi_ahb_clk.clkr,
@@ -829,6 +829,7 @@ static int amdgpu_connector_vga_get_modes(struct drm_connector *connector)
 
 	amdgpu_connector_get_edid(connector);
 	ret = amdgpu_connector_ddc_get_modes(connector);
+	amdgpu_get_native_mode(connector);
 
 	return ret;
 }
@@ -166,6 +166,12 @@ static void vlv_steal_power_sequencer(struct drm_i915_private *dev_priv,
 				      enum pipe pipe);
 static void intel_dp_unset_edid(struct intel_dp *intel_dp);
 
+static void intel_dp_set_default_sink_rates(struct intel_dp *intel_dp)
+{
+	intel_dp->sink_rates[0] = 162000;
+	intel_dp->num_sink_rates = 1;
+}
+
 /* update sink rates from dpcd */
 static void intel_dp_set_sink_rates(struct intel_dp *intel_dp)
 {
@@ -4261,6 +4267,9 @@ intel_edp_init_dpcd(struct intel_dp *intel_dp)
 	 */
 	intel_psr_init_dpcd(intel_dp);
 
+	/* Clear the default sink rates */
+	intel_dp->num_sink_rates = 0;
+
 	/* Read the eDP 1.4+ supported link rates. */
 	if (intel_dp->edp_dpcd[0] >= DP_EDP_14) {
 		__le16 sink_rates[DP_MAX_SUPPORTED_RATES];
@@ -7167,6 +7176,8 @@ intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
 		return false;
 
 	intel_dp_set_source_rates(intel_dp);
+	intel_dp_set_default_sink_rates(intel_dp);
+	intel_dp_set_common_rates(intel_dp);
 
 	intel_dp->reset_link_params = true;
 	intel_dp->pps_pipe = INVALID_PIPE;
@@ -779,7 +779,7 @@ nouveau_drm_device_remove(struct drm_device *dev)
 	struct nvkm_client *client;
 	struct nvkm_device *device;
 
-	drm_dev_unregister(dev);
+	drm_dev_unplug(dev);
 
 	dev->irq_enabled = false;
 	client = nvxx_client(&drm->client.base);
@@ -62,7 +62,6 @@ gv100_hdmi_ctrl(struct nvkm_ior *ior, int head, bool enable, u8 max_ac_packet,
 	nvkm_wr32(device, 0x6f0108 + hdmi, vendor_infoframe.header);
 	nvkm_wr32(device, 0x6f010c + hdmi, vendor_infoframe.subpack0_low);
-	nvkm_wr32(device, 0x6f0110 + hdmi, vendor_infoframe.subpack0_high);
+	nvkm_wr32(device, 0x6f0110 + hdmi, 0x00000000);
 	nvkm_wr32(device, 0x6f0114 + hdmi, 0x00000000);
 	nvkm_wr32(device, 0x6f0118 + hdmi, 0x00000000);
 	nvkm_wr32(device, 0x6f011c + hdmi, 0x00000000);
@@ -29,7 +29,7 @@ static int udl_get_edid_block(void *data, u8 *buf, unsigned int block,
 		ret = usb_control_msg(udl->udev,
 				      usb_rcvctrlpipe(udl->udev, 0),
 				      (0x02), (0x80 | (0x02 << 5)), bval,
-				      0xA1, read_buff, 2, HZ);
+				      0xA1, read_buff, 2, 1000);
 		if (ret < 1) {
 			DRM_ERROR("Read EDID byte %d failed err %x\n", i, ret);
 			kfree(read_buff);
@@ -1015,6 +1015,8 @@ static int st_lsm6dsx_set_odr(struct st_lsm6dsx_sensor *sensor, u16 req_odr)
 	int err;
 
 	switch (sensor->id) {
+	case ST_LSM6DSX_ID_GYRO:
+		break;
 	case ST_LSM6DSX_ID_EXT0:
 	case ST_LSM6DSX_ID_EXT1:
 	case ST_LSM6DSX_ID_EXT2:
@@ -1040,8 +1042,8 @@ static int st_lsm6dsx_set_odr(struct st_lsm6dsx_sensor *sensor, u16 req_odr)
 		}
 		break;
 	}
-	default:
-		break;
+	default: /* should never occur */
+		return -EINVAL;
 	}
 
 	if (req_odr > 0) {
@@ -3081,8 +3081,11 @@ static void bnxt_re_process_res_ud_wc(struct bnxt_re_qp *qp,
 				      struct ib_wc *wc,
 				      struct bnxt_qplib_cqe *cqe)
 {
+	struct bnxt_re_dev *rdev;
+	u16 vlan_id = 0;
 	u8 nw_type;
 
+	rdev = qp->rdev;
 	wc->opcode = IB_WC_RECV;
 	wc->status = __rc_to_ib_wc_status(cqe->status);
 
@@ -3094,9 +3097,12 @@ static void bnxt_re_process_res_ud_wc(struct bnxt_re_qp *qp,
 		memcpy(wc->smac, cqe->smac, ETH_ALEN);
 		wc->wc_flags |= IB_WC_WITH_SMAC;
 		if (cqe->flags & CQ_RES_UD_FLAGS_META_FORMAT_VLAN) {
-			wc->vlan_id = (cqe->cfa_meta & 0xFFF);
-			if (wc->vlan_id < 0x1000)
-				wc->wc_flags |= IB_WC_WITH_VLAN;
+			vlan_id = (cqe->cfa_meta & 0xFFF);
+		}
+		/* Mark only if vlan_id is non zero */
+		if (vlan_id && bnxt_re_check_if_vlan_valid(rdev, vlan_id)) {
+			wc->vlan_id = vlan_id;
+			wc->wc_flags |= IB_WC_WITH_VLAN;
 		}
 		nw_type = (cqe->flags & CQ_RES_UD_FLAGS_ROCE_IP_VER_MASK) >>
 			  CQ_RES_UD_FLAGS_ROCE_IP_VER_SFT;
@@ -635,11 +635,13 @@ static int bnx2x_ilt_client_mem_op(struct bnx2x *bp, int cli_num,
 {
 	int i, rc;
 	struct bnx2x_ilt *ilt = BP_ILT(bp);
-	struct ilt_client_info *ilt_cli = &ilt->clients[cli_num];
+	struct ilt_client_info *ilt_cli;
 
 	if (!ilt || !ilt->lines)
 		return -1;
 
+	ilt_cli = &ilt->clients[cli_num];
+
 	if (ilt_cli->flags & (ILT_CLIENT_SKIP_INIT | ILT_CLIENT_SKIP_MEM))
 		return 0;
 
@@ -3616,10 +3616,10 @@ static int dpaa2_eth_remove(struct fsl_mc_device *ls_dev)
 
 	fsl_mc_portal_free(priv->mc_io);
 
-	free_netdev(net_dev);
-
 	dev_dbg(net_dev->dev.parent, "Removed interface %s\n", net_dev->name);
 
+	free_netdev(net_dev);
+
 	return 0;
 }
 
@@ -169,6 +169,7 @@ enum i40e_vsi_state_t {
 	__I40E_VSI_OVERFLOW_PROMISC,
 	__I40E_VSI_REINIT_REQUESTED,
 	__I40E_VSI_DOWN_REQUESTED,
+	__I40E_VSI_RELEASING,
 	/* This must be last as it determines the size of the BITMAP */
 	__I40E_VSI_STATE_SIZE__,
 };
@@ -1146,6 +1147,7 @@ void i40e_ptp_save_hw_time(struct i40e_pf *pf);
 void i40e_ptp_restore_hw_time(struct i40e_pf *pf);
 void i40e_ptp_init(struct i40e_pf *pf);
 void i40e_ptp_stop(struct i40e_pf *pf);
+int i40e_update_adq_vsi_queues(struct i40e_vsi *vsi, int vsi_offset);
 int i40e_is_vsi_uplink_mode_veb(struct i40e_vsi *vsi);
 i40e_status i40e_get_partition_bw_setting(struct i40e_pf *pf);
 i40e_status i40e_set_partition_bw_setting(struct i40e_pf *pf);
@@ -1776,6 +1776,7 @@ static void i40e_vsi_setup_queue_map(struct i40e_vsi *vsi,
 				     bool is_add)
 {
 	struct i40e_pf *pf = vsi->back;
+	u16 num_tc_qps = 0;
 	u16 sections = 0;
 	u8 netdev_tc = 0;
 	u16 numtc = 1;
@@ -1783,13 +1784,33 @@ static void i40e_vsi_setup_queue_map(struct i40e_vsi *vsi,
 	u8 offset;
 	u16 qmap;
 	int i;
-	u16 num_tc_qps = 0;
 
 	sections = I40E_AQ_VSI_PROP_QUEUE_MAP_VALID;
 	offset = 0;
+	/* zero out queue mapping, it will get updated on the end of the function */
+	memset(ctxt->info.queue_mapping, 0, sizeof(ctxt->info.queue_mapping));
+
+	if (vsi->type == I40E_VSI_MAIN) {
+		/* This code helps add more queue to the VSI if we have
+		 * more cores than RSS can support, the higher cores will
+		 * be served by ATR or other filters. Furthermore, the
+		 * non-zero req_queue_pairs says that user requested a new
+		 * queue count via ethtool's set_channels, so use this
+		 * value for queues distribution across traffic classes
+		 */
+		if (vsi->req_queue_pairs > 0)
+			vsi->num_queue_pairs = vsi->req_queue_pairs;
+		else if (pf->flags & I40E_FLAG_MSIX_ENABLED)
+			vsi->num_queue_pairs = pf->num_lan_msix;
+	}
 
 	/* Number of queues per enabled TC */
-	num_tc_qps = vsi->alloc_queue_pairs;
+	if (vsi->type == I40E_VSI_MAIN ||
+	    (vsi->type == I40E_VSI_SRIOV && vsi->num_queue_pairs != 0))
+		num_tc_qps = vsi->num_queue_pairs;
+	else
+		num_tc_qps = vsi->alloc_queue_pairs;
 
 	if (enabled_tc && (vsi->back->flags & I40E_FLAG_DCB_ENABLED)) {
 		/* Find numtc from enabled TC bitmap */
 		for (i = 0, numtc = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
@@ -1867,15 +1888,11 @@ static void i40e_vsi_setup_queue_map(struct i40e_vsi *vsi,
 		}
 		ctxt->info.tc_mapping[i] = cpu_to_le16(qmap);
 	}
-
-	/* Set actual Tx/Rx queue pairs */
-	vsi->num_queue_pairs = offset;
-	if ((vsi->type == I40E_VSI_MAIN) && (numtc == 1)) {
-		if (vsi->req_queue_pairs > 0)
-			vsi->num_queue_pairs = vsi->req_queue_pairs;
-		else if (pf->flags & I40E_FLAG_MSIX_ENABLED)
-			vsi->num_queue_pairs = pf->num_lan_msix;
-	}
+	/* Do not change previously set num_queue_pairs for PFs and VFs*/
+	if ((vsi->type == I40E_VSI_MAIN && numtc != 1) ||
+	    (vsi->type == I40E_VSI_SRIOV && vsi->num_queue_pairs == 0) ||
+	    (vsi->type != I40E_VSI_MAIN && vsi->type != I40E_VSI_SRIOV))
+		vsi->num_queue_pairs = offset;
 
 	/* Scheduler section valid can only be set for ADD VSI */
 	if (is_add) {
@@ -2609,7 +2626,8 @@ static void i40e_sync_filters_subtask(struct i40e_pf *pf)
 
 	for (v = 0; v < pf->num_alloc_vsi; v++) {
 		if (pf->vsi[v] &&
-		    (pf->vsi[v]->flags & I40E_VSI_FLAG_FILTER_CHANGED)) {
+		    (pf->vsi[v]->flags & I40E_VSI_FLAG_FILTER_CHANGED) &&
+		    !test_bit(__I40E_VSI_RELEASING, pf->vsi[v]->state)) {
 			int ret = i40e_sync_vsi_filters(pf->vsi[v]);
 
 			if (ret) {
@@ -5371,6 +5389,58 @@ static void i40e_vsi_update_queue_map(struct i40e_vsi *vsi,
 	       sizeof(vsi->info.tc_mapping));
 }
 
+/**
+ * i40e_update_adq_vsi_queues - update queue mapping for ADq VSI
+ * @vsi: the VSI being reconfigured
+ * @vsi_offset: offset from main VF VSI
+ */
+int i40e_update_adq_vsi_queues(struct i40e_vsi *vsi, int vsi_offset)
+{
+	struct i40e_vsi_context ctxt = {};
+	struct i40e_pf *pf;
+	struct i40e_hw *hw;
+	int ret;
+
+	if (!vsi)
+		return I40E_ERR_PARAM;
+	pf = vsi->back;
+	hw = &pf->hw;
+
+	ctxt.seid = vsi->seid;
+	ctxt.pf_num = hw->pf_id;
+	ctxt.vf_num = vsi->vf_id + hw->func_caps.vf_base_id + vsi_offset;
+	ctxt.uplink_seid = vsi->uplink_seid;
+	ctxt.connection_type = I40E_AQ_VSI_CONN_TYPE_NORMAL;
+	ctxt.flags = I40E_AQ_VSI_TYPE_VF;
+	ctxt.info = vsi->info;
+
+	i40e_vsi_setup_queue_map(vsi, &ctxt, vsi->tc_config.enabled_tc,
+				 false);
+	if (vsi->reconfig_rss) {
+		vsi->rss_size = min_t(int, pf->alloc_rss_size,
+				      vsi->num_queue_pairs);
+		ret = i40e_vsi_config_rss(vsi);
+		if (ret) {
+			dev_info(&pf->pdev->dev, "Failed to reconfig rss for num_queues\n");
+			return ret;
+		}
+		vsi->reconfig_rss = false;
+	}
+
+	ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL);
+	if (ret) {
+		dev_info(&pf->pdev->dev, "Update vsi config failed, err %s aq_err %s\n",
+			 i40e_stat_str(hw, ret),
+			 i40e_aq_str(hw, hw->aq.asq_last_status));
+		return ret;
+	}
+	/* update the local VSI info with updated queue map */
+	i40e_vsi_update_queue_map(vsi, &ctxt);
+	vsi->info.valid_sections = 0;
+
+	return ret;
+}
+
 /**
  * i40e_vsi_config_tc - Configure VSI Tx Scheduler for given TC map
  * @vsi: VSI to be configured
@@ -5661,24 +5731,6 @@ static void i40e_remove_queue_channels(struct i40e_vsi *vsi)
 	INIT_LIST_HEAD(&vsi->ch_list);
 }
 
-/**
- * i40e_is_any_channel - channel exist or not
- * @vsi: ptr to VSI to which channels are associated with
- *
- * Returns true or false if channel(s) exist for associated VSI or not
- **/
-static bool i40e_is_any_channel(struct i40e_vsi *vsi)
-{
-	struct i40e_channel *ch, *ch_tmp;
-
-	list_for_each_entry_safe(ch, ch_tmp, &vsi->ch_list, list) {
-		if (ch->initialized)
-			return true;
-	}
-
-	return false;
-}
-
 /**
  * i40e_get_max_queues_for_channel
  * @vsi: ptr to VSI to which channels are associated with
@@ -6186,26 +6238,15 @@ int i40e_create_queue_channel(struct i40e_vsi *vsi,
 	/* By default we are in VEPA mode, if this is the first VF/VMDq
 	 * VSI to be added switch to VEB mode.
 	 */
-	if ((!(pf->flags & I40E_FLAG_VEB_MODE_ENABLED)) ||
-	    (!i40e_is_any_channel(vsi))) {
-		if (!is_power_of_2(vsi->tc_config.tc_info[0].qcount)) {
-			dev_dbg(&pf->pdev->dev,
-				"Failed to create channel. Override queues (%u) not power of 2\n",
-				vsi->tc_config.tc_info[0].qcount);
-			return -EINVAL;
-		}
+	if (!(pf->flags & I40E_FLAG_VEB_MODE_ENABLED)) {
+		pf->flags |= I40E_FLAG_VEB_MODE_ENABLED;
 
-		if (!(pf->flags & I40E_FLAG_VEB_MODE_ENABLED)) {
-			pf->flags |= I40E_FLAG_VEB_MODE_ENABLED;
-
-			if (vsi->type == I40E_VSI_MAIN) {
-				if (pf->flags & I40E_FLAG_TC_MQPRIO)
-					i40e_do_reset(pf, I40E_PF_RESET_FLAG,
-						      true);
-				else
-					i40e_do_reset_safe(pf,
-							   I40E_PF_RESET_FLAG);
-			}
-		}
+		if (vsi->type == I40E_VSI_MAIN) {
+			if (pf->flags & I40E_FLAG_TC_MQPRIO)
+				i40e_do_reset(pf, I40E_PF_RESET_FLAG, true);
+			else
+				i40e_do_reset_safe(pf, I40E_PF_RESET_FLAG);
+		}
 	}
 	/* now onwards for main VSI, number of queues will be value
 	 * of TC0's queue count
@@ -7497,12 +7538,20 @@ static int i40e_setup_tc(struct net_device *netdev, void *type_data)
 			vsi->seid);
 		need_reset = true;
 		goto exit;
-	} else {
-		dev_info(&vsi->back->pdev->dev,
-			 "Setup channel (id:%u) utilizing num_queues %d\n",
-			 vsi->seid, vsi->tc_config.tc_info[0].qcount);
+	} else if (enabled_tc &&
+		   (!is_power_of_2(vsi->tc_config.tc_info[0].qcount))) {
+		netdev_info(netdev,
+			    "Failed to create channel. Override queues (%u) not power of 2\n",
+			    vsi->tc_config.tc_info[0].qcount);
+		ret = -EINVAL;
+		need_reset = true;
+		goto exit;
 	}
 
+	dev_info(&vsi->back->pdev->dev,
+		 "Setup channel (id:%u) utilizing num_queues %d\n",
+		 vsi->seid, vsi->tc_config.tc_info[0].qcount);
+
 	if (pf->flags & I40E_FLAG_TC_MQPRIO) {
 		if (vsi->mqprio_qopt.max_rate[0]) {
 			u64 max_tx_rate = vsi->mqprio_qopt.max_rate[0];
@@ -8067,9 +8116,8 @@ static int i40e_configure_clsflower(struct i40e_vsi *vsi,
 	err = i40e_add_del_cloud_filter(vsi, filter, true);
 
 	if (err) {
-		dev_err(&pf->pdev->dev,
-			"Failed to add cloud filter, err %s\n",
-			i40e_stat_str(&pf->hw, err));
+		dev_err(&pf->pdev->dev, "Failed to add cloud filter, err %d\n",
+			err);
 		goto err;
 	}
 
@@ -13388,7 +13436,7 @@ int i40e_vsi_release(struct i40e_vsi *vsi)
 		dev_info(&pf->pdev->dev, "Can't remove PF VSI\n");
 		return -ENODEV;
 	}
-
+	set_bit(__I40E_VSI_RELEASING, vsi->state);
 	uplink_seid = vsi->uplink_seid;
 	if (vsi->type != I40E_VSI_SRIOV) {
 		if (vsi->netdev_registered) {
@@ -621,14 +621,13 @@ static int i40e_config_vsi_rx_queue(struct i40e_vf *vf, u16 vsi_id,
 				    u16 vsi_queue_id,
 				    struct virtchnl_rxq_info *info)
 {
-	u16 pf_queue_id = i40e_vc_get_pf_queue_id(vf, vsi_id, vsi_queue_id);
 	struct i40e_pf *pf = vf->pf;
+	struct i40e_vsi *vsi = pf->vsi[vf->lan_vsi_idx];
 	struct i40e_hw *hw = &pf->hw;
 	struct i40e_hmc_obj_rxq rx_ctx;
+	u16 pf_queue_id;
 	int ret = 0;
 
+	pf_queue_id = i40e_vc_get_pf_queue_id(vf, vsi_id, vsi_queue_id);
+
 	/* clear the context structure first */
 	memset(&rx_ctx, 0, sizeof(struct i40e_hmc_obj_rxq));
 
@@ -666,6 +665,10 @@ static int i40e_config_vsi_rx_queue(struct i40e_vf *vf, u16 vsi_id,
 	}
 	rx_ctx.rxmax = info->max_pkt_size;
 
+	/* if port VLAN is configured increase the max packet size */
+	if (vsi->info.pvid)
+		rx_ctx.rxmax += VLAN_HLEN;
+
 	/* enable 32bytes desc always */
 	rx_ctx.dsize = 1;
 
@@ -2097,11 +2100,12 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg)
 	struct virtchnl_vsi_queue_config_info *qci =
 	    (struct virtchnl_vsi_queue_config_info *)msg;
 	struct virtchnl_queue_pair_info *qpi;
-	struct i40e_pf *pf = vf->pf;
 	u16 vsi_id, vsi_queue_id = 0;
-	u16 num_qps_all = 0;
+	struct i40e_pf *pf = vf->pf;
 	i40e_status aq_ret = 0;
 	int i, j = 0, idx = 0;
+	struct i40e_vsi *vsi;
+	u16 num_qps_all = 0;
 
 	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
 		aq_ret = I40E_ERR_PARAM;
@@ -2190,9 +2194,15 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg)
 		pf->vsi[vf->lan_vsi_idx]->num_queue_pairs =
 			qci->num_queue_pairs;
 	} else {
-		for (i = 0; i < vf->num_tc; i++)
-			pf->vsi[vf->ch[i].vsi_idx]->num_queue_pairs =
-				vf->ch[i].num_qps;
+		for (i = 0; i < vf->num_tc; i++) {
+			vsi = pf->vsi[vf->ch[i].vsi_idx];
+			vsi->num_queue_pairs = vf->ch[i].num_qps;
+
+			if (i40e_update_adq_vsi_queues(vsi, i)) {
+				aq_ret = I40E_ERR_CONFIG;
+				goto error_param;
+			}
+		}
 	}
 
 error_param:
@@ -4050,34 +4060,6 @@ int i40e_ndo_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac)
 	return ret;
 }
 
-/**
- * i40e_vsi_has_vlans - True if VSI has configured VLANs
- * @vsi: pointer to the vsi
- *
- * Check if a VSI has configured any VLANs. False if we have a port VLAN or if
- * we have no configured VLANs. Do not call while holding the
- * mac_filter_hash_lock.
- */
-static bool i40e_vsi_has_vlans(struct i40e_vsi *vsi)
-{
-	bool have_vlans;
-
-	/* If we have a port VLAN, then the VSI cannot have any VLANs
-	 * configured, as all MAC/VLAN filters will be assigned to the PVID.
-	 */
-	if (vsi->info.pvid)
-		return false;
-
-	/* Since we don't have a PVID, we know that if the device is in VLAN
-	 * mode it must be because of a VLAN filter configured on this VSI.
-	 */
-	spin_lock_bh(&vsi->mac_filter_hash_lock);
-	have_vlans = i40e_is_vsi_in_vlan(vsi);
-	spin_unlock_bh(&vsi->mac_filter_hash_lock);
-
-	return have_vlans;
-}
-
 /**
  * i40e_ndo_set_vf_port_vlan
  * @netdev: network interface device structure
@@ -4134,19 +4116,9 @@ int i40e_ndo_set_vf_port_vlan(struct net_device *netdev, int vf_id,
 		/* duplicate request, so just return success */
 		goto error_pvid;
 
-	if (i40e_vsi_has_vlans(vsi)) {
-		dev_err(&pf->pdev->dev,
-			"VF %d has already configured VLAN filters and the administrator is requesting a port VLAN override.\nPlease unload and reload the VF driver for this change to take effect.\n",
-			vf_id);
-		/* Administrator Error - knock the VF offline until he does
-		 * the right thing by reconfiguring his network correctly
-		 * and then reloading the VF driver.
-		 */
-		i40e_vc_disable_vf(vf);
-		/* During reset the VF got a new VSI, so refresh the pointer. */
-		vsi = pf->vsi[vf->lan_vsi_idx];
-	}
-
+	i40e_vc_disable_vf(vf);
+	/* During reset the VF got a new VSI, so refresh a pointer. */
+	vsi = pf->vsi[vf->lan_vsi_idx];
 	/* Locked once because multiple functions below iterate list */
 	spin_lock_bh(&vsi->mac_filter_hash_lock);
 
@@ -962,14 +962,13 @@ static int iavf_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key,
 
 	if (hfunc)
 		*hfunc = ETH_RSS_HASH_TOP;
-	if (key)
-		memcpy(key, adapter->rss_key, adapter->rss_key_size);
+	if (!indir)
+		return 0;
 
-	if (indir)
-		/* Each 32 bits pointed by 'indir' is stored with a lut entry */
-		for (i = 0; i < adapter->rss_lut_size; i++)
-			indir[i] = (u32)adapter->rss_lut[i];
+	memcpy(key, adapter->rss_key, adapter->rss_key_size);
+
+	/* Each 32 bits pointed by 'indir' is stored with a lut entry */
+	for (i = 0; i < adapter->rss_lut_size; i++)
+		indir[i] = (u32)adapter->rss_lut[i];
 
 	return 0;
 }
@@ -1626,8 +1626,7 @@ static int iavf_process_aq_command(struct iavf_adapter *adapter)
 		iavf_set_promiscuous(adapter, FLAG_VF_MULTICAST_PROMISC);
 		return 0;
 	}
-
-	if ((adapter->aq_required & IAVF_FLAG_AQ_RELEASE_PROMISC) &&
+	if ((adapter->aq_required & IAVF_FLAG_AQ_RELEASE_PROMISC) ||
 	    (adapter->aq_required & IAVF_FLAG_AQ_RELEASE_ALLMULTI)) {
 		iavf_set_promiscuous(adapter, 0);
 		return 0;
@@ -2057,8 +2056,8 @@ static void iavf_disable_vf(struct iavf_adapter *adapter)
 
 	iavf_free_misc_irq(adapter);
 	iavf_reset_interrupt_capability(adapter);
-	iavf_free_queues(adapter);
 	iavf_free_q_vectors(adapter);
+	iavf_free_queues(adapter);
 	memset(adapter->vf_res, 0, IAVF_VIRTCHNL_VF_RESOURCE_SIZE);
 	iavf_shutdown_adminq(&adapter->hw);
 	adapter->netdev->flags &= ~IFF_UP;
@@ -2342,7 +2341,7 @@ static void iavf_adminq_task(struct work_struct *work)
 
 	/* check for error indications */
 	val = rd32(hw, hw->aq.arq.len);
-	if (val == 0xdeadbeef) /* indicates device in reset */
+	if (val == 0xdeadbeef || val == 0xffffffff) /* device in reset */
 		goto freedom;
 	oldval = val;
 	if (val & IAVF_VF_ARQLEN1_ARQVFE_MASK) {
@@ -3034,11 +3033,11 @@ static int iavf_configure_clsflower(struct iavf_adapter *adapter,
 	/* start out with flow type and eth type IPv4 to begin with */
 	filter->f.flow_type = VIRTCHNL_TCP_V4_FLOW;
 	err = iavf_parse_cls_flower(adapter, cls_flower, filter);
-	if (err < 0)
+	if (err)
 		goto err;
 
 	err = iavf_handle_tclass(adapter, tc, filter);
-	if (err < 0)
+	if (err)
 		goto err;
 
 	/* add filter to the list */
@@ -3425,7 +3424,8 @@ static netdev_features_t iavf_fix_features(struct net_device *netdev,
 {
 	struct iavf_adapter *adapter = netdev_priv(netdev);
 
-	if (!(adapter->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
+	if (adapter->vf_res &&
+	    !(adapter->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN))
 		features &= ~(NETIF_F_HW_VLAN_CTAG_TX |
 			      NETIF_F_HW_VLAN_CTAG_RX |
 			      NETIF_F_HW_VLAN_CTAG_FILTER);
@@ -3005,9 +3005,6 @@ static void ice_remove(struct pci_dev *pdev)
 	struct ice_pf *pf = pci_get_drvdata(pdev);
 	int i;
 
-	if (!pf)
-		return;
-
 	for (i = 0; i < ICE_MAX_RESET_WAIT; i++) {
 		if (!ice_is_reset_in_progress(pf->state))
 			break;
@@ -4545,7 +4545,7 @@ static int mvpp2_port_init(struct mvpp2_port *port)
 	struct mvpp2 *priv = port->priv;
 	struct mvpp2_txq_pcpu *txq_pcpu;
 	unsigned int thread;
-	int queue, err, val;
+	int queue, err;
 
 	/* Checks for hardware constraints */
 	if (port->first_rxq + port->nrxqs >
@@ -4559,18 +4559,6 @@ static int mvpp2_port_init(struct mvpp2_port *port)
 	mvpp2_egress_disable(port);
 	mvpp2_port_disable(port);
 
-	if (mvpp2_is_xlg(port->phy_interface)) {
-		val = readl(port->base + MVPP22_XLG_CTRL0_REG);
-		val &= ~MVPP22_XLG_CTRL0_FORCE_LINK_PASS;
-		val |= MVPP22_XLG_CTRL0_FORCE_LINK_DOWN;
-		writel(val, port->base + MVPP22_XLG_CTRL0_REG);
-	} else {
-		val = readl(port->base + MVPP2_GMAC_AUTONEG_CONFIG);
-		val &= ~MVPP2_GMAC_FORCE_LINK_PASS;
-		val |= MVPP2_GMAC_FORCE_LINK_DOWN;
-		writel(val, port->base + MVPP2_GMAC_AUTONEG_CONFIG);
-	}
-
 	port->tx_time_coal = MVPP2_TXDONE_COAL_USEC;
 
 	port->txqs = devm_kcalloc(dev, port->ntxqs, sizeof(*port->txqs),
@@ -1071,6 +1071,7 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct tun_struct *tun = netdev_priv(dev);
 	int txq = skb->queue_mapping;
+	struct netdev_queue *queue;
 	struct tun_file *tfile;
 	int len = skb->len;
 
@@ -1117,6 +1118,10 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
 	if (ptr_ring_produce(&tfile->tx_ring, skb))
 		goto drop;
 
+	/* NETIF_F_LLTX requires to do our own update of trans_start */
+	queue = netdev_get_tx_queue(dev, txq);
+	queue->trans_start = jiffies;
+
 	/* Notify and wake up reader process */
 	if (tfile->flags & TUN_FASYNC)
 		kill_fasync(&tfile->fasync, SIGIO, POLL_IN);
@@ -372,9 +372,11 @@ static int lis3lv02d_add(struct acpi_device *device)
INIT_WORK(&hpled_led.work, delayed_set_status_worker);
ret = led_classdev_register(NULL, &hpled_led.led_classdev);
if (ret) {
i8042_remove_filter(hp_accel_i8042_filter);
lis3lv02d_joystick_disable(&lis3_dev);
lis3lv02d_poweroff(&lis3_dev);
flush_work(&hpled_led.work);
lis3lv02d_remove_fs(&lis3_dev);
return ret;
}
@@ -3366,8 +3366,8 @@ static void asc_prt_adv_board_info(struct seq_file *m, struct Scsi_Host *shost)
shost->host_no);

seq_printf(m,
" iop_base 0x%lx, cable_detect: %X, err_code %u\n",
(unsigned long)v->iop_base,
" iop_base 0x%p, cable_detect: %X, err_code %u\n",
v->iop_base,
AdvReadWordRegister(iop_base, IOPW_SCSI_CFG1) & CABLE_DETECT,
v->err_code);
@@ -19692,6 +19692,7 @@ lpfc_drain_txq(struct lpfc_hba *phba)
fail_msg,
piocbq->iotag, piocbq->sli4_xritag);
list_add_tail(&piocbq->list, &completions);
fail_msg = NULL;
}
spin_unlock_irqrestore(&pring->ring_lock, iflags);
}
@@ -776,6 +776,7 @@ store_state_field(struct device *dev, struct device_attribute *attr,
int i, ret;
struct scsi_device *sdev = to_scsi_device(dev);
enum scsi_device_state state = 0;
bool rescan_dev = false;

for (i = 0; i < ARRAY_SIZE(sdev_states); i++) {
const int len = strlen(sdev_states[i].name);
@@ -794,20 +795,27 @@ store_state_field(struct device *dev, struct device_attribute *attr,
}

mutex_lock(&sdev->state_mutex);
ret = scsi_device_set_state(sdev, state);
/*
* If the device state changes to SDEV_RUNNING, we need to
* run the queue to avoid I/O hang, and rescan the device
* to revalidate it. Running the queue first is necessary
* because another thread may be waiting inside
* blk_mq_freeze_queue_wait() and because that call may be
* waiting for pending I/O to finish.
*/
if (ret == 0 && state == SDEV_RUNNING) {
if (sdev->sdev_state == SDEV_RUNNING && state == SDEV_RUNNING) {
ret = count;
} else {
ret = scsi_device_set_state(sdev, state);
if (ret == 0 && state == SDEV_RUNNING)
rescan_dev = true;
}
mutex_unlock(&sdev->state_mutex);

if (rescan_dev) {
/*
* If the device state changes to SDEV_RUNNING, we need to
* run the queue to avoid I/O hang, and rescan the device
* to revalidate it. Running the queue first is necessary
* because another thread may be waiting inside
* blk_mq_freeze_queue_wait() and because that call may be
* waiting for pending I/O to finish.
*/
blk_mq_run_hw_queues(sdev->request_queue, true);
scsi_rescan_device(dev);
}
mutex_unlock(&sdev->state_mutex);

return ret == 0 ? count : -EINVAL;
}
@@ -835,8 +835,10 @@ static int __init maple_bus_init(void)

maple_queue_cache = KMEM_CACHE(maple_buffer, SLAB_HWCACHE_ALIGN);

if (!maple_queue_cache)
if (!maple_queue_cache) {
retval = -ENOMEM;
goto cleanup_bothirqs;
}

INIT_LIST_HEAD(&maple_waitq);
INIT_LIST_HEAD(&maple_sentq);
@@ -849,6 +851,7 @@ static int __init maple_bus_init(void)
if (!mdev[i]) {
while (i-- > 0)
maple_free_dev(mdev[i]);
retval = -ENOMEM;
goto cleanup_cache;
}
baseunits[i] = mdev[i];
@@ -1702,7 +1702,6 @@ int core_alua_set_tg_pt_gp_id(
pr_err("Maximum ALUA alua_tg_pt_gps_count:"
" 0x0000ffff reached\n");
spin_unlock(&dev->t10_alua.tg_pt_gps_lock);
kmem_cache_free(t10_alua_tg_pt_gp_cache, tg_pt_gp);
return -ENOSPC;
}
again:
@@ -758,6 +758,8 @@ struct se_device *target_alloc_device(struct se_hba *hba, const char *name)
INIT_LIST_HEAD(&dev->t10_alua.lba_map_list);
spin_lock_init(&dev->t10_alua.lba_map_lock);

INIT_WORK(&dev->delayed_cmd_work, target_do_delayed_work);

dev->t10_wwn.t10_dev = dev;
dev->t10_alua.t10_dev = dev;

@@ -150,6 +150,7 @@ int transport_dump_vpd_ident(struct t10_vpd *, unsigned char *, int);
void transport_clear_lun_ref(struct se_lun *);
sense_reason_t target_cmd_size_check(struct se_cmd *cmd, unsigned int size);
void target_qf_do_work(struct work_struct *work);
void target_do_delayed_work(struct work_struct *work);
bool target_check_wce(struct se_device *dev);
bool target_check_fua(struct se_device *dev);
void __target_execute_cmd(struct se_cmd *, bool);
@@ -2021,32 +2021,35 @@ static bool target_handle_task_attr(struct se_cmd *cmd)
*/
switch (cmd->sam_task_attr) {
case TCM_HEAD_TAG:
atomic_inc_mb(&dev->non_ordered);
pr_debug("Added HEAD_OF_QUEUE for CDB: 0x%02x\n",
cmd->t_task_cdb[0]);
return false;
case TCM_ORDERED_TAG:
atomic_inc_mb(&dev->dev_ordered_sync);
atomic_inc_mb(&dev->delayed_cmd_count);

pr_debug("Added ORDERED for CDB: 0x%02x to ordered list\n",
cmd->t_task_cdb[0]);

/*
* Execute an ORDERED command if no other older commands
* exist that need to be completed first.
*/
if (!atomic_read(&dev->simple_cmds))
return false;
break;
default:
/*
* For SIMPLE and UNTAGGED Task Attribute commands
*/
atomic_inc_mb(&dev->simple_cmds);
atomic_inc_mb(&dev->non_ordered);

if (atomic_read(&dev->delayed_cmd_count) == 0)
return false;
break;
}

if (atomic_read(&dev->dev_ordered_sync) == 0)
return false;
if (cmd->sam_task_attr != TCM_ORDERED_TAG) {
atomic_inc_mb(&dev->delayed_cmd_count);
/*
* We will account for this when we dequeue from the delayed
* list.
*/
atomic_dec_mb(&dev->non_ordered);
}

spin_lock(&dev->delayed_cmd_lock);
list_add_tail(&cmd->se_delayed_node, &dev->delayed_cmd_list);
@@ -2054,6 +2057,12 @@ static bool target_handle_task_attr(struct se_cmd *cmd)

pr_debug("Added CDB: 0x%02x Task Attr: 0x%02x to delayed CMD list\n",
cmd->t_task_cdb[0], cmd->sam_task_attr);
/*
* We may have no non ordered cmds when this function started or we
* could have raced with the last simple/head cmd completing, so kick
* the delayed handler here.
*/
schedule_work(&dev->delayed_cmd_work);
return true;
}

@@ -2091,29 +2100,48 @@ EXPORT_SYMBOL(target_execute_cmd);
* Process all commands up to the last received ORDERED task attribute which
* requires another blocking boundary
*/
static void target_restart_delayed_cmds(struct se_device *dev)
void target_do_delayed_work(struct work_struct *work)
{
for (;;) {
struct se_device *dev = container_of(work, struct se_device,
delayed_cmd_work);

spin_lock(&dev->delayed_cmd_lock);
while (!dev->ordered_sync_in_progress) {
struct se_cmd *cmd;

spin_lock(&dev->delayed_cmd_lock);
if (list_empty(&dev->delayed_cmd_list)) {
spin_unlock(&dev->delayed_cmd_lock);
if (list_empty(&dev->delayed_cmd_list))
break;
}

cmd = list_entry(dev->delayed_cmd_list.next,
struct se_cmd, se_delayed_node);

if (cmd->sam_task_attr == TCM_ORDERED_TAG) {
/*
* Check if we started with:
* [ordered] [simple] [ordered]
* and we are now at the last ordered so we have to wait
* for the simple cmd.
*/
if (atomic_read(&dev->non_ordered) > 0)
break;

dev->ordered_sync_in_progress = true;
}

list_del(&cmd->se_delayed_node);
atomic_dec_mb(&dev->delayed_cmd_count);
spin_unlock(&dev->delayed_cmd_lock);

if (cmd->sam_task_attr != TCM_ORDERED_TAG)
atomic_inc_mb(&dev->non_ordered);

cmd->transport_state |= CMD_T_SENT;

__target_execute_cmd(cmd, true);

if (cmd->sam_task_attr == TCM_ORDERED_TAG)
break;
spin_lock(&dev->delayed_cmd_lock);
}
spin_unlock(&dev->delayed_cmd_lock);
}

/*
@@ -2131,14 +2159,17 @@ static void transport_complete_task_attr(struct se_cmd *cmd)
goto restart;

if (cmd->sam_task_attr == TCM_SIMPLE_TAG) {
atomic_dec_mb(&dev->simple_cmds);
atomic_dec_mb(&dev->non_ordered);
dev->dev_cur_ordered_id++;
} else if (cmd->sam_task_attr == TCM_HEAD_TAG) {
atomic_dec_mb(&dev->non_ordered);
dev->dev_cur_ordered_id++;
pr_debug("Incremented dev_cur_ordered_id: %u for HEAD_OF_QUEUE\n",
dev->dev_cur_ordered_id);
} else if (cmd->sam_task_attr == TCM_ORDERED_TAG) {
atomic_dec_mb(&dev->dev_ordered_sync);
spin_lock(&dev->delayed_cmd_lock);
dev->ordered_sync_in_progress = false;
spin_unlock(&dev->delayed_cmd_lock);

dev->dev_cur_ordered_id++;
pr_debug("Incremented dev_cur_ordered_id: %u for ORDERED\n",
@@ -2147,7 +2178,8 @@ static void transport_complete_task_attr(struct se_cmd *cmd)
cmd->se_cmd_flags &= ~SCF_TASK_ATTR_SET;

restart:
target_restart_delayed_cmds(dev);
if (atomic_read(&dev->delayed_cmd_count) > 0)
schedule_work(&dev->delayed_cmd_work);
}

static void transport_complete_qf(struct se_cmd *cmd)
@@ -534,6 +534,9 @@ static void flush_to_ldisc(struct work_struct *work)
if (!count)
break;
head->read += count;

if (need_resched())
cond_resched();
}

mutex_unlock(&buf->lock);
@@ -125,8 +125,6 @@ struct max3421_hcd {

struct task_struct *spi_thread;

struct max3421_hcd *next;

enum max3421_rh_state rh_state;
/* lower 16 bits contain port status, upper 16 bits the change mask: */
u32 port_status;
@@ -174,8 +172,6 @@ struct max3421_ep {
u8 retransmit; /* packet needs retransmission */
};

static struct max3421_hcd *max3421_hcd_list;

#define MAX3421_FIFO_SIZE 64

#define MAX3421_SPI_DIR_RD 0 /* read register from MAX3421 */
@@ -1882,9 +1878,8 @@ max3421_probe(struct spi_device *spi)
}
set_bit(HCD_FLAG_POLL_RH, &hcd->flags);
max3421_hcd = hcd_to_max3421(hcd);
max3421_hcd->next = max3421_hcd_list;
max3421_hcd_list = max3421_hcd;
INIT_LIST_HEAD(&max3421_hcd->ep_list);
spi_set_drvdata(spi, max3421_hcd);

max3421_hcd->tx = kmalloc(sizeof(*max3421_hcd->tx), GFP_KERNEL);
if (!max3421_hcd->tx)
@@ -1934,28 +1929,18 @@
static int
max3421_remove(struct spi_device *spi)
{
struct max3421_hcd *max3421_hcd = NULL, **prev;
struct usb_hcd *hcd = NULL;
struct max3421_hcd *max3421_hcd;
struct usb_hcd *hcd;
unsigned long flags;

for (prev = &max3421_hcd_list; *prev; prev = &(*prev)->next) {
max3421_hcd = *prev;
hcd = max3421_to_hcd(max3421_hcd);
if (hcd->self.controller == &spi->dev)
break;
}
if (!max3421_hcd) {
dev_err(&spi->dev, "no MAX3421 HCD found for SPI device %p\n",
spi);
return -ENODEV;
}
max3421_hcd = spi_get_drvdata(spi);
hcd = max3421_to_hcd(max3421_hcd);

usb_remove_hcd(hcd);

spin_lock_irqsave(&max3421_hcd->lock, flags);

kthread_stop(max3421_hcd->spi_thread);
*prev = max3421_hcd->next;

spin_unlock_irqrestore(&max3421_hcd->lock, flags);
@@ -199,7 +199,7 @@ static int ohci_hcd_tmio_drv_probe(struct platform_device *dev)
if (usb_disabled())
return -ENODEV;

if (!cell)
if (!cell || !regs || !config || !sram)
return -EINVAL;

if (irq < 0)
@@ -1103,6 +1103,11 @@ static int tusb_musb_init(struct musb *musb)

/* dma address for async dma */
mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!mem) {
pr_debug("no async dma resource?\n");
ret = -ENODEV;
goto done;
}
musb->async = mem->start;

/* dma address for sync dma */
@@ -108,7 +108,7 @@ tps6598x_block_read(struct tps6598x *tps, u8 reg, void *val, size_t len)
u8 data[TPS_MAX_LEN + 1];
int ret;

if (WARN_ON(len + 1 > sizeof(data)))
if (len + 1 > sizeof(data))
return -EINVAL;

if (!tps->i2c_protocol)
@@ -291,13 +291,13 @@ static unsigned long sticon_getxy(struct vc_data *conp, unsigned long pos,
static u8 sticon_build_attr(struct vc_data *conp, u8 color, u8 intens,
u8 blink, u8 underline, u8 reverse, u8 italic)
{
u8 attr = ((color & 0x70) >> 1) | ((color & 7));
u8 fg = color & 7;
u8 bg = (color & 0x70) >> 4;

if (reverse) {
color = ((color >> 3) & 0x7) | ((color & 0x7) << 3);
}

return attr;
if (reverse)
return (fg << 3) | bg;
else
return (bg << 3) | fg;
}

static void sticon_invert_region(struct vc_data *conp, u16 *p, int count)
@@ -237,6 +237,13 @@ static void run_ordered_work(struct __btrfs_workqueue *wq,
ordered_list);
if (!test_bit(WORK_DONE_BIT, &work->flags))
break;
/*
* Orders all subsequent loads after reading WORK_DONE_BIT,
* paired with the smp_mb__before_atomic in btrfs_work_helper
* this guarantees that the ordered function will see all
* updates from ordinary work function.
*/
smp_rmb();

/*
* we are going to call the ordered done function, but
@@ -325,6 +332,13 @@ static void btrfs_work_helper(struct work_struct *normal_work)
thresh_exec_hook(wq);
work->func(work);
if (need_order) {
/*
* Ensures all memory accesses done in the work function are
* ordered before setting the WORK_DONE_BIT. Ensuring the thread
* which is going to execute the ordered work sees them.
* Pairs with the smp_rmb in run_ordered_work.
*/
smp_mb__before_atomic();
set_bit(WORK_DONE_BIT, &work->flags);
run_ordered_work(wq, work);
}
fs/udf/dir.c
@@ -31,6 +31,7 @@
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/bio.h>
#include <linux/iversion.h>

#include "udf_i.h"
#include "udf_sb.h"
@@ -44,7 +45,7 @@ static int udf_readdir(struct file *file, struct dir_context *ctx)
struct fileIdentDesc *fi = NULL;
struct fileIdentDesc cfi;
udf_pblk_t block, iblock;
loff_t nf_pos;
loff_t nf_pos, emit_pos = 0;
int flen;
unsigned char *fname = NULL, *copy_name = NULL;
unsigned char *nameptr;
@@ -58,6 +59,7 @@ static int udf_readdir(struct file *file, struct dir_context *ctx)
int i, num, ret = 0;
struct extent_position epos = { NULL, 0, {0, 0} };
struct super_block *sb = dir->i_sb;
bool pos_valid = false;

if (ctx->pos == 0) {
if (!dir_emit_dot(file, ctx))
@@ -68,6 +70,21 @@ static int udf_readdir(struct file *file, struct dir_context *ctx)
if (nf_pos >= size)
goto out;

/*
* Something changed since last readdir (either lseek was called or dir
* changed)? We need to verify the position correctly points at the
* beginning of some dir entry so that the directory parsing code does
* not get confused. Since UDF does not have any reliable way of
* identifying beginning of dir entry (names are under user control),
* we need to scan the directory from the beginning.
*/
if (!inode_eq_iversion(dir, file->f_version)) {
emit_pos = nf_pos;
nf_pos = 0;
} else {
pos_valid = true;
}

fname = kmalloc(UDF_NAME_LEN, GFP_NOFS);
if (!fname) {
ret = -ENOMEM;
@@ -123,13 +140,21 @@ static int udf_readdir(struct file *file, struct dir_context *ctx)

while (nf_pos < size) {
struct kernel_lb_addr tloc;
loff_t cur_pos = nf_pos;

ctx->pos = (nf_pos >> 2) + 1;
/* Update file position only if we got past the current one */
if (nf_pos >= emit_pos) {
ctx->pos = (nf_pos >> 2) + 1;
pos_valid = true;
}

fi = udf_fileident_read(dir, &nf_pos, &fibh, &cfi, &epos, &eloc,
&elen, &offset);
if (!fi)
goto out;
/* Still not at offset where user asked us to read from? */
if (cur_pos < emit_pos)
continue;

liu = le16_to_cpu(cfi.lengthOfImpUse);
lfi = cfi.lengthFileIdent;
@@ -187,8 +212,11 @@ static int udf_readdir(struct file *file, struct dir_context *ctx)
} /* end while */

ctx->pos = (nf_pos >> 2) + 1;
pos_valid = true;

out:
if (pos_valid)
file->f_version = inode_query_iversion(dir);
if (fibh.sbh != fibh.ebh)
brelse(fibh.ebh);
brelse(fibh.sbh);
@@ -30,6 +30,7 @@
#include <linux/sched.h>
#include <linux/crc-itu-t.h>
#include <linux/exportfs.h>
#include <linux/iversion.h>

static inline int udf_match(int len1, const unsigned char *name1, int len2,
const unsigned char *name2)
@@ -135,6 +136,8 @@ int udf_write_fi(struct inode *inode, struct fileIdentDesc *cfi,
mark_buffer_dirty_inode(fibh->ebh, inode);
mark_buffer_dirty_inode(fibh->sbh, inode);
}
inode_inc_iversion(inode);

return 0;
}

@@ -57,6 +57,7 @@
#include <linux/crc-itu-t.h>
#include <linux/log2.h>
#include <asm/byteorder.h>
#include <linux/iversion.h>

#include "udf_sb.h"
#include "udf_i.h"
@@ -149,6 +150,7 @@ static struct inode *udf_alloc_inode(struct super_block *sb)
init_rwsem(&ei->i_data_sem);
ei->cached_extent.lstart = -1;
spin_lock_init(&ei->i_extent_cache_lock);
inode_set_iversion(&ei->vfs_inode, 1);

return &ei->vfs_inode;
}
@@ -495,6 +495,38 @@ static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vm
}
#endif

/*
* tlb_flush_{pte|pmd|pud|p4d}_range() adjust the tlb->start and tlb->end,
* and set corresponding cleared_*.
*/
static inline void tlb_flush_pte_range(struct mmu_gather *tlb,
unsigned long address, unsigned long size)
{
__tlb_adjust_range(tlb, address, size);
tlb->cleared_ptes = 1;
}

static inline void tlb_flush_pmd_range(struct mmu_gather *tlb,
unsigned long address, unsigned long size)
{
__tlb_adjust_range(tlb, address, size);
tlb->cleared_pmds = 1;
}

static inline void tlb_flush_pud_range(struct mmu_gather *tlb,
unsigned long address, unsigned long size)
{
__tlb_adjust_range(tlb, address, size);
tlb->cleared_puds = 1;
}

static inline void tlb_flush_p4d_range(struct mmu_gather *tlb,
unsigned long address, unsigned long size)
{
__tlb_adjust_range(tlb, address, size);
tlb->cleared_p4ds = 1;
}

#ifndef __tlb_remove_tlb_entry
#define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
#endif
@@ -508,19 +540,17 @@ static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vm
*/
#define tlb_remove_tlb_entry(tlb, ptep, address) \
do { \
__tlb_adjust_range(tlb, address, PAGE_SIZE); \
tlb->cleared_ptes = 1; \
tlb_flush_pte_range(tlb, address, PAGE_SIZE); \
__tlb_remove_tlb_entry(tlb, ptep, address); \
} while (0)

#define tlb_remove_huge_tlb_entry(h, tlb, ptep, address) \
do { \
unsigned long _sz = huge_page_size(h); \
__tlb_adjust_range(tlb, address, _sz); \
if (_sz == PMD_SIZE) \
tlb->cleared_pmds = 1; \
tlb_flush_pmd_range(tlb, address, _sz); \
else if (_sz == PUD_SIZE) \
tlb->cleared_puds = 1; \
tlb_flush_pud_range(tlb, address, _sz); \
__tlb_remove_tlb_entry(tlb, ptep, address); \
} while (0)

@@ -534,8 +564,7 @@ static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vm

#define tlb_remove_pmd_tlb_entry(tlb, pmdp, address) \
do { \
__tlb_adjust_range(tlb, address, HPAGE_PMD_SIZE); \
tlb->cleared_pmds = 1; \
tlb_flush_pmd_range(tlb, address, HPAGE_PMD_SIZE); \
__tlb_remove_pmd_tlb_entry(tlb, pmdp, address); \
} while (0)

@@ -549,8 +578,7 @@ static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vm

#define tlb_remove_pud_tlb_entry(tlb, pudp, address) \
do { \
__tlb_adjust_range(tlb, address, HPAGE_PUD_SIZE); \
tlb->cleared_puds = 1; \
tlb_flush_pud_range(tlb, address, HPAGE_PUD_SIZE); \
__tlb_remove_pud_tlb_entry(tlb, pudp, address); \
} while (0)

@@ -575,9 +603,8 @@ static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vm
#ifndef pte_free_tlb
#define pte_free_tlb(tlb, ptep, address) \
do { \
__tlb_adjust_range(tlb, address, PAGE_SIZE); \
tlb_flush_pmd_range(tlb, address, PAGE_SIZE); \
tlb->freed_tables = 1; \
tlb->cleared_pmds = 1; \
__pte_free_tlb(tlb, ptep, address); \
} while (0)
#endif
@@ -585,9 +612,8 @@ static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vm
#ifndef pmd_free_tlb
#define pmd_free_tlb(tlb, pmdp, address) \
do { \
__tlb_adjust_range(tlb, address, PAGE_SIZE); \
tlb_flush_pud_range(tlb, address, PAGE_SIZE); \
tlb->freed_tables = 1; \
tlb->cleared_puds = 1; \
__pmd_free_tlb(tlb, pmdp, address); \
} while (0)
#endif
@@ -596,9 +622,8 @@ static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vm
#ifndef pud_free_tlb
#define pud_free_tlb(tlb, pudp, address) \
do { \
__tlb_adjust_range(tlb, address, PAGE_SIZE); \
tlb_flush_p4d_range(tlb, address, PAGE_SIZE); \
tlb->freed_tables = 1; \
tlb->cleared_p4ds = 1; \
__pud_free_tlb(tlb, pudp, address); \
} while (0)
#endif
@@ -120,10 +120,15 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,

if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
u16 gso_size = __virtio16_to_cpu(little_endian, hdr->gso_size);
unsigned int nh_off = p_off;
struct skb_shared_info *shinfo = skb_shinfo(skb);

/* UFO may not include transport header in gso_size. */
if (gso_type & SKB_GSO_UDP)
nh_off -= thlen;

/* Too small packets are not really GSO ones. */
if (skb->len - p_off > gso_size) {
if (skb->len - nh_off > gso_size) {
shinfo->gso_size = gso_size;
shinfo->gso_type = gso_type;
@@ -30,7 +30,7 @@ enum rdma_nl_flags {
* constant as well and the compiler checks they are the same.
*/
#define MODULE_ALIAS_RDMA_NETLINK(_index, _val) \
static inline void __chk_##_index(void) \
static inline void __maybe_unused __chk_##_index(void) \
{ \
BUILD_BUG_ON(_index != _val); \
} \
@@ -88,6 +88,8 @@ struct hdac_ext_stream *snd_hdac_ext_stream_assign(struct hdac_bus *bus,
struct snd_pcm_substream *substream,
int type);
void snd_hdac_ext_stream_release(struct hdac_ext_stream *azx_dev, int type);
void snd_hdac_ext_stream_decouple_locked(struct hdac_bus *bus,
struct hdac_ext_stream *azx_dev, bool decouple);
void snd_hdac_ext_stream_decouple(struct hdac_bus *bus,
struct hdac_ext_stream *azx_dev, bool decouple);
void snd_hdac_ext_stop_streams(struct hdac_bus *bus);
@@ -781,8 +781,9 @@ struct se_device {
atomic_long_t read_bytes;
atomic_long_t write_bytes;
/* Active commands on this virtual SE device */
atomic_t simple_cmds;
atomic_t dev_ordered_sync;
atomic_t non_ordered;
bool ordered_sync_in_progress;
atomic_t delayed_cmd_count;
atomic_t dev_qf_count;
u32 export_count;
spinlock_t delayed_cmd_lock;
@@ -804,6 +805,7 @@ struct se_device {
struct list_head dev_sep_list;
struct list_head dev_tmr_list;
struct work_struct qf_work_queue;
struct work_struct delayed_cmd_work;
struct list_head delayed_cmd_list;
struct list_head state_list;
struct list_head qf_cmd_list;
@@ -804,20 +804,20 @@ TRACE_EVENT(f2fs_lookup_start,
TP_STRUCT__entry(
__field(dev_t, dev)
__field(ino_t, ino)
__field(const char *, name)
__string(name, dentry->d_name.name)
__field(unsigned int, flags)
),

TP_fast_assign(
__entry->dev = dir->i_sb->s_dev;
__entry->ino = dir->i_ino;
__entry->name = dentry->d_name.name;
__assign_str(name, dentry->d_name.name);
__entry->flags = flags;
),

TP_printk("dev = (%d,%d), pino = %lu, name:%s, flags:%u",
show_dev_ino(__entry),
__entry->name,
__get_str(name),
__entry->flags)
);

@@ -831,7 +831,7 @@ TRACE_EVENT(f2fs_lookup_end,
TP_STRUCT__entry(
__field(dev_t, dev)
__field(ino_t, ino)
__field(const char *, name)
__string(name, dentry->d_name.name)
__field(nid_t, cino)
__field(int, err)
),
@@ -839,14 +839,14 @@ TRACE_EVENT(f2fs_lookup_end,
TP_fast_assign(
__entry->dev = dir->i_sb->s_dev;
__entry->ino = dir->i_ino;
__entry->name = dentry->d_name.name;
__assign_str(name, dentry->d_name.name);
__entry->cino = ino;
__entry->err = err;
),

TP_printk("dev = (%d,%d), pino = %lu, name:%s, ino:%u, err:%d",
show_dev_ino(__entry),
__entry->name,
__get_str(name),
__entry->cino,
__entry->err)
);
@@ -446,8 +446,8 @@ static int ipcget_public(struct ipc_namespace *ns, struct ipc_ids *ids,
static void ipc_kht_remove(struct ipc_ids *ids, struct kern_ipc_perm *ipcp)
{
if (ipcp->key != IPC_PRIVATE)
rhashtable_remove_fast(&ids->key_ht, &ipcp->khtnode,
ipc_kht_params);
WARN_ON_ONCE(rhashtable_remove_fast(&ids->key_ht, &ipcp->khtnode,
ipc_kht_params));
}

/**
@@ -462,7 +462,7 @@ void ipc_rmid(struct ipc_ids *ids, struct kern_ipc_perm *ipcp)
{
int idx = ipcid_to_idx(ipcp->id);

idr_remove(&ids->ipcs_idr, idx);
WARN_ON_ONCE(idr_remove(&ids->ipcs_idr, idx) != ipcp);
ipc_kht_remove(ids, ipcp);
ids->in_use--;
ipcp->deleted = true;
@@ -6552,7 +6552,6 @@ void perf_output_sample(struct perf_output_handle *handle,
static u64 perf_virt_to_phys(u64 virt)
{
u64 phys_addr = 0;
struct page *p = NULL;

if (!virt)
return 0;
@@ -6571,14 +6570,15 @@ static u64 perf_virt_to_phys(u64 virt)
* If failed, leave phys_addr as 0.
*/
if (current->mm != NULL) {
struct page *p;

pagefault_disable();
if (__get_user_pages_fast(virt, 1, 0, &p) == 1)
if (__get_user_pages_fast(virt, 1, 0, &p) == 1) {
phys_addr = page_to_phys(p) + virt % PAGE_SIZE;
put_page(p);
}
pagefault_enable();
}

if (p)
put_page(p);
}

return phys_addr;
@@ -2500,6 +2500,9 @@ void wake_up_if_idle(int cpu)

bool cpus_share_cache(int this_cpu, int that_cpu)
{
if (this_cpu == that_cpu)
return true;

return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
}
#endif /* CONFIG_SMP */
@@ -149,6 +149,8 @@ struct hist_field {
*/
unsigned int var_ref_idx;
bool read_once;

unsigned int var_str_idx;
};

static u64 hist_field_none(struct hist_field *field,
@@ -351,6 +353,7 @@ struct hist_trigger_data {
unsigned int n_keys;
unsigned int n_fields;
unsigned int n_vars;
unsigned int n_var_str;
unsigned int key_size;
struct tracing_map_sort_key sort_keys[TRACING_MAP_SORT_KEYS_MAX];
unsigned int n_sort_keys;
@@ -2305,7 +2308,12 @@ static int hist_trigger_elt_data_alloc(struct tracing_map_elt *elt)
}
}

n_str = hist_data->n_field_var_str + hist_data->n_save_var_str;
n_str = hist_data->n_field_var_str + hist_data->n_save_var_str +
hist_data->n_var_str;
if (n_str > SYNTH_FIELDS_MAX) {
hist_elt_data_free(elt_data);
return -EINVAL;
}

size = STR_VAR_LEN_MAX;

@@ -2582,9 +2590,10 @@ static struct hist_field *create_hist_field(struct hist_trigger_data *hist_data,
if (!hist_field->type)
goto free;

if (field->filter_type == FILTER_STATIC_STRING)
if (field->filter_type == FILTER_STATIC_STRING) {
hist_field->fn = hist_field_string;
else if (field->filter_type == FILTER_DYN_STRING)
hist_field->size = field->size;
} else if (field->filter_type == FILTER_DYN_STRING)
hist_field->fn = hist_field_dynstring;
else
hist_field->fn = hist_field_pstring;
@@ -3522,7 +3531,7 @@ static inline void __update_field_vars(struct tracing_map_elt *elt,
char *str = elt_data->field_var_str[j++];
char *val_str = (char *)(uintptr_t)var_val;

strscpy(str, val_str, STR_VAR_LEN_MAX);
strscpy(str, val_str, val->size);
var_val = (u64)(uintptr_t)str;
}
tracing_map_set_var(elt, var_idx, var_val);
@@ -4599,6 +4608,7 @@ static int create_var_field(struct hist_trigger_data *hist_data,
{
struct trace_array *tr = hist_data->event_file->tr;
unsigned long flags = 0;
int ret;

if (WARN_ON(val_idx >= TRACING_MAP_VALS_MAX + TRACING_MAP_VARS_MAX))
return -EINVAL;
@@ -4613,7 +4623,12 @@ static int create_var_field(struct hist_trigger_data *hist_data,
if (WARN_ON(hist_data->n_vars > TRACING_MAP_VARS_MAX))
return -EINVAL;

return __create_val_field(hist_data, val_idx, file, var_name, expr_str, flags);
ret = __create_val_field(hist_data, val_idx, file, var_name, expr_str, flags);

if (hist_data->fields[val_idx]->flags & HIST_FIELD_FL_STRING)
hist_data->fields[val_idx]->var_str_idx = hist_data->n_var_str++;

return ret;
}

static int create_val_fields(struct hist_trigger_data *hist_data,
@@ -5333,6 +5348,22 @@ static void hist_trigger_elt_update(struct hist_trigger_data *hist_data,
hist_val = hist_field->fn(hist_field, elt, rbe, rec);
if (hist_field->flags & HIST_FIELD_FL_VAR) {
var_idx = hist_field->var.idx;

if (hist_field->flags & HIST_FIELD_FL_STRING) {
unsigned int str_start, var_str_idx, idx;
char *str, *val_str;

str_start = hist_data->n_field_var_str +
hist_data->n_save_var_str;
var_str_idx = hist_field->var_str_idx;
idx = str_start + var_str_idx;

str = elt_data->field_var_str[idx];
val_str = (char *)(uintptr_t)hist_val;
strscpy(str, val_str, hist_field->size);

hist_val = (u64)(uintptr_t)str;
}
tracing_map_set_var(elt, var_idx, hist_val);
continue;
}
mm/hugetlb.c

@@ -3589,6 +3589,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	struct hstate *h = hstate_vma(vma);
 	unsigned long sz = huge_page_size(h);
 	struct mmu_notifier_range range;
+	bool force_flush = false;
 
 	WARN_ON(!is_vm_hugetlb_page(vma));
 	BUG_ON(start & ~huge_page_mask(h));
@@ -3617,10 +3618,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		ptl = huge_pte_lock(h, mm, ptep);
 		if (huge_pmd_unshare(mm, &address, ptep)) {
 			spin_unlock(ptl);
-			/*
-			 * We just unmapped a page of PMDs by clearing a PUD.
-			 * The caller's TLB flush range should cover this area.
-			 */
+			tlb_flush_pmd_range(tlb, address & PUD_MASK, PUD_SIZE);
+			force_flush = true;
 			continue;
 		}
@@ -3677,6 +3676,22 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	}
 	mmu_notifier_invalidate_range_end(&range);
 	tlb_end_vma(tlb, vma);
+
+	/*
+	 * If we unshared PMDs, the TLB flush was not recorded in mmu_gather. We
+	 * could defer the flush until now, since by holding i_mmap_rwsem we
+	 * guaranteed that the last refernece would not be dropped. But we must
+	 * do the flushing before we return, as otherwise i_mmap_rwsem will be
+	 * dropped and the last reference to the shared PMDs page might be
+	 * dropped as well.
+	 *
+	 * In theory we could defer the freeing of the PMD pages as well, but
+	 * huge_pmd_unshare() relies on the exact page_count for the PMD page to
+	 * detect sharing, so we cannot defer the release of the page either.
+	 * Instead, do flush now.
+	 */
+	if (force_flush)
+		tlb_flush_mmu_tlbonly(tlb);
 }
 
 void __unmap_hugepage_range_final(struct mmu_gather *tlb,
mm/slab.h

@@ -211,7 +211,7 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
 #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE | SLAB_RECLAIM_ACCOUNT | \
 			  SLAB_TEMPORARY | SLAB_ACCOUNT)
 #else
-#define SLAB_CACHE_FLAGS (0)
+#define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE)
 #endif
 
 /* Common flags available with current configuration */
net/batman-adv/fragmentation.c

@@ -391,6 +391,7 @@ bool batadv_frag_skb_fwd(struct sk_buff *skb,
 
 /**
  * batadv_frag_create() - create a fragment from skb
+ * @net_dev: outgoing device for fragment
  * @skb: skb to create fragment from
  * @frag_head: header to use in new fragment
  * @fragment_size: size of new fragment
@@ -401,22 +402,25 @@ bool batadv_frag_skb_fwd(struct sk_buff *skb,
  *
  * Return: the new fragment, NULL on error.
  */
-static struct sk_buff *batadv_frag_create(struct sk_buff *skb,
+static struct sk_buff *batadv_frag_create(struct net_device *net_dev,
+					  struct sk_buff *skb,
 					  struct batadv_frag_packet *frag_head,
 					  unsigned int fragment_size)
 {
+	unsigned int ll_reserved = LL_RESERVED_SPACE(net_dev);
+	unsigned int tailroom = net_dev->needed_tailroom;
 	struct sk_buff *skb_fragment;
 	unsigned int header_size = sizeof(*frag_head);
 	unsigned int mtu = fragment_size + header_size;
 
-	skb_fragment = netdev_alloc_skb(NULL, mtu + ETH_HLEN);
+	skb_fragment = dev_alloc_skb(ll_reserved + mtu + tailroom);
 	if (!skb_fragment)
 		goto err;
 
 	skb_fragment->priority = skb->priority;
 
 	/* Eat the last mtu-bytes of the skb */
-	skb_reserve(skb_fragment, header_size + ETH_HLEN);
+	skb_reserve(skb_fragment, ll_reserved + header_size);
 	skb_split(skb, skb_fragment, skb->len - fragment_size);
 
 	/* Add the header */
@@ -439,11 +443,12 @@ int batadv_frag_send_packet(struct sk_buff *skb,
 			    struct batadv_orig_node *orig_node,
 			    struct batadv_neigh_node *neigh_node)
 {
+	struct net_device *net_dev = neigh_node->if_incoming->net_dev;
 	struct batadv_priv *bat_priv;
 	struct batadv_hard_iface *primary_if = NULL;
 	struct batadv_frag_packet frag_header;
 	struct sk_buff *skb_fragment;
-	unsigned int mtu = neigh_node->if_incoming->net_dev->mtu;
+	unsigned int mtu = net_dev->mtu;
 	unsigned int header_size = sizeof(frag_header);
 	unsigned int max_fragment_size, num_fragments;
 	int ret;
@@ -503,7 +508,7 @@ int batadv_frag_send_packet(struct sk_buff *skb,
 			goto put_primary_if;
 		}
 
-		skb_fragment = batadv_frag_create(skb, &frag_header,
+		skb_fragment = batadv_frag_create(net_dev, skb, &frag_header,
 						  max_fragment_size);
 		if (!skb_fragment) {
 			ret = -ENOMEM;
@@ -522,13 +527,14 @@ int batadv_frag_send_packet(struct sk_buff *skb,
 		frag_header.no++;
 	}
 
-	/* Make room for the fragment header. */
-	if (batadv_skb_head_push(skb, header_size) < 0 ||
-	    pskb_expand_head(skb, header_size + ETH_HLEN, 0, GFP_ATOMIC) < 0) {
-		ret = -ENOMEM;
+	/* make sure that there is at least enough head for the fragmentation
+	 * and ethernet headers
+	 */
+	ret = skb_cow_head(skb, ETH_HLEN + header_size);
+	if (ret < 0)
 		goto put_primary_if;
-	}
 
 	skb_push(skb, header_size);
 	memcpy(skb->data, &frag_header, header_size);
 
 	/* Send the last fragment */
net/batman-adv/hard-interface.c

@@ -554,6 +554,9 @@ static void batadv_hardif_recalc_extra_skbroom(struct net_device *soft_iface)
 	needed_headroom = lower_headroom + (lower_header_len - ETH_HLEN);
 	needed_headroom += batadv_max_header_len();
 
+	/* fragmentation headers don't strip the unicast/... header */
+	needed_headroom += sizeof(struct batadv_frag_packet);
+
 	soft_iface->needed_headroom = needed_headroom;
 	soft_iface->needed_tailroom = lower_tailroom;
 }
net/nfc/core.c

@@ -94,13 +94,13 @@ int nfc_dev_up(struct nfc_dev *dev)
 
 	device_lock(&dev->dev);
 
-	if (dev->rfkill && rfkill_blocked(dev->rfkill)) {
-		rc = -ERFKILL;
+	if (!device_is_registered(&dev->dev)) {
+		rc = -ENODEV;
 		goto error;
 	}
 
-	if (!device_is_registered(&dev->dev)) {
-		rc = -ENODEV;
+	if (dev->rfkill && rfkill_blocked(dev->rfkill)) {
+		rc = -ERFKILL;
 		goto error;
 	}
 
@@ -1118,11 +1118,7 @@ int nfc_register_device(struct nfc_dev *dev)
 	if (rc)
 		pr_err("Could not register llcp device\n");
 
-	rc = nfc_genl_device_added(dev);
-	if (rc)
-		pr_debug("The userspace won't be notified that the device %s was added\n",
-			 dev_name(&dev->dev));
-
 	device_lock(&dev->dev);
 	dev->rfkill = rfkill_alloc(dev_name(&dev->dev), &dev->dev,
 				   RFKILL_TYPE_NFC, &nfc_rfkill_ops, dev);
 	if (dev->rfkill) {
@@ -1131,6 +1127,12 @@ int nfc_register_device(struct nfc_dev *dev)
 			dev->rfkill = NULL;
 		}
 	}
 	device_unlock(&dev->dev);
+
+	rc = nfc_genl_device_added(dev);
+	if (rc)
+		pr_debug("The userspace won't be notified that the device %s was added\n",
+			 dev_name(&dev->dev));
+
 	return 0;
 }
@@ -1147,10 +1149,17 @@ void nfc_unregister_device(struct nfc_dev *dev)
 
 	pr_debug("dev_name=%s\n", dev_name(&dev->dev));
 
+	rc = nfc_genl_device_removed(dev);
+	if (rc)
+		pr_debug("The userspace won't be notified that the device %s "
+			 "was removed\n", dev_name(&dev->dev));
+
+	device_lock(&dev->dev);
 	if (dev->rfkill) {
 		rfkill_unregister(dev->rfkill);
 		rfkill_destroy(dev->rfkill);
 	}
+	device_unlock(&dev->dev);
 
 	if (dev->ops->check_presence) {
 		device_lock(&dev->dev);
@@ -1160,11 +1169,6 @@ void nfc_unregister_device(struct nfc_dev *dev)
 		cancel_work_sync(&dev->check_pres_work);
 	}
 
-	rc = nfc_genl_device_removed(dev);
-	if (rc)
-		pr_debug("The userspace won't be notified that the device %s "
-			 "was removed\n", dev_name(&dev->dev));
-
 	nfc_llcp_unregister_device(dev);
 
 	mutex_lock(&nfc_devlist_mutex);
net/nfc/nci/core.c

@@ -144,12 +144,15 @@ inline int nci_request(struct nci_dev *ndev,
 {
 	int rc;
 
-	if (!test_bit(NCI_UP, &ndev->flags))
-		return -ENETDOWN;
-
 	/* Serialize all requests */
 	mutex_lock(&ndev->req_lock);
-	rc = __nci_request(ndev, req, opt, timeout);
+	/* check the state after obtaing the lock against any races
+	 * from nci_close_device when the device gets removed.
+	 */
+	if (test_bit(NCI_UP, &ndev->flags))
+		rc = __nci_request(ndev, req, opt, timeout);
+	else
+		rc = -ENETDOWN;
 	mutex_unlock(&ndev->req_lock);
 
 	return rc;
Some files were not shown because too many files have changed in this diff.