This is the 5.4.168 stable release

-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmHC4gUACgkQONu9yGCS
 aT7AXg//aQzVGDFrHPH/3gsN/YR4LlOCc+gQHYQdTUIhEePuwMaHff2hraG5O7pz
 WUTiRWjiqjII+UpBJi+Zcp5BsRfG5Z7bt3mWq9IrGt0Ttp+e7fIfuVyq0IWXmzcD
 n3arfW3eUgG8aRez6tTyjBErltUu/gQQIVmPlz1eL7nbUu2tHRjqL0hnXeWryeRx
 ilgxwynsCTHZ8HtSuc3mHtEeeVdV0ufzSWpxLre1ZCyoZwUsYJtDA+6Bmx3cv82r
 cF1wymH3hYcBibFEY3UNXhoimCiz4/TeC44P4UVwFNt0+SU4lVcT9QeqyMVG+fyn
 E7YEPHhkYO9U39cRo0BnTAfUMpwIkJ6Ro8AMDPKFK/FGWG60xkdhGWO6JStngt21
 HId4wCI6rNkiOZpPKiTN0LpKhv2qn1vOlJ+yNVDn+KQeLHo2oZXFPwSYBHDCly77
 nAcmRh4OwFBpn4VWiB9qPjcRpM2nSAzM1CIlw7LFBA5zoKYbiI6cn1nhSknb1zXo
 r78tgqT2crOiapqA4lyeguqk1YwhFtC2yX3OE2B3v5+QNC/HtijctO5o+d72FAYH
 MiSiTY3TUzqee9kqtxgheG87k7ksY8Q6qKetpshQtKzOyhq7BigLeUJGkul4XNyC
 RiVBy042i4QrwuMbuylRNmXRtBn1DkZAA98943Bey2zfz777Z9g=
 =IebC
 -----END PGP SIGNATURE-----

Merge 5.4.168 into android11-5.4-lts

Changes in 5.4.168
	KVM: selftests: Make sure kvm_create_max_vcpus test won't hit RLIMIT_NOFILE
	mac80211: mark TX-during-stop for TX in in_reconfig
	mac80211: send ADDBA requests using the tid/queue of the aggregation session
	firmware: arm_scpi: Fix string overflow in SCPI genpd driver
	virtio_ring: Fix querying of maximum DMA mapping size for virtio device
	recordmcount.pl: look for jgnop instruction as well as bcrl on s390
	dm btree remove: fix use after free in rebalance_children()
	audit: improve robustness of the audit queue handling
	iio: adc: stm32: fix a current leak by resetting pcsel before disabling vdda
	nfsd: fix use-after-free due to delegation race
	arm64: dts: rockchip: remove mmc-hs400-enhanced-strobe from rk3399-khadas-edge
	arm64: dts: rockchip: fix rk3399-leez-p710 vcc3v3-lan supply
	arm64: dts: rockchip: fix audio-supply for Rock Pi 4
	mac80211: track only QoS data frames for admission control
	ARM: socfpga: dts: fix qspi node compatible
	clk: Don't parent clks until the parent is fully registered
	selftests: net: Correct ping6 expected rc from 2 to 1
	s390/kexec_file: fix error handling when applying relocations
	sch_cake: do not call cake_destroy() from cake_init()
	inet_diag: use jiffies_delta_to_msecs()
	inet_diag: fix kernel-infoleak for UDP sockets
	selftests: Fix raw socket bind tests with VRF
	selftests: Fix IPv6 address bind tests
	dmaengine: st_fdma: fix MODULE_ALIAS
	selftest/net/forwarding: declare NETIFS p9 p10
	mac80211: agg-tx: refactor sending addba
	mac80211: agg-tx: don't schedule_and_wake_txq() under sta->lock
	mac80211: accept aggregation sessions on 6 GHz
	mac80211: fix lookup when adding AddBA extension element
	net: sched: lock action when translating it to flow_action infra
	flow_offload: return EOPNOTSUPP for the unsupported mpls action type
	rds: memory leak in __rds_conn_create()
	soc/tegra: fuse: Fix bitwise vs. logical OR warning
	igb: Fix removal of unicast MAC filters of VFs
	igbvf: fix double free in `igbvf_probe`
	ixgbe: set X550 MDIO speed before talking to PHY
	netdevsim: Zero-initialize memory for new map's value in function nsim_bpf_map_alloc
	net/packet: rx_owner_map depends on pg_vec
	net: Fix double 0x prefix print in SKB dump
	net/smc: Prevent smc_release() from long blocking
	net: systemport: Add global locking for descriptor lifecycle
	sit: do not call ipip6_dev_free() from sit_init_net()
	USB: gadget: bRequestType is a bitfield, not a enum
	USB: NO_LPM quirk Lenovo USB-C to Ethernet Adapher(RTL8153-04)
	PCI/MSI: Clear PCI_MSIX_FLAGS_MASKALL on error
	PCI/MSI: Mask MSI-X vectors only on success
	usb: xhci: Extend support for runtime power management for AMD's Yellow carp.
	USB: serial: cp210x: fix CP2105 GPIO registration
	USB: serial: option: add Telit FN990 compositions
	timekeeping: Really make sure wall_to_monotonic isn't positive
	libata: if T_LENGTH is zero, dma direction should be DMA_NONE
	drm/amdgpu: correct register access for RLC_JUMP_TABLE_RESTORE
	mac80211: validate extended element ID is present
	mwifiex: Remove unnecessary braces from HostCmd_SET_SEQ_NO_BSS_INFO
	Input: touchscreen - avoid bitwise vs logical OR warning
	ARM: dts: imx6ull-pinfunc: Fix CSI_DATA07__ESAI_TX0 pad name
	xsk: Do not sleep in poll() when need_wakeup set
	media: mxl111sf: change mutex_init() location
	fuse: annotate lock in fuse_reverse_inval_entry()
	ovl: fix warning in ovl_create_real()
	scsi: scsi_debug: Sanity check block descriptor length in resp_mode_select()
	rcu: Mark accesses to rcu_state.n_force_qs
	mac80211: fix regression in SSN handling of addba tx
	net: sched: Fix suspicious RCU usage while accessing tcf_tunnel_info
	Revert "xsk: Do not sleep in poll() when need_wakeup set"
	xen/blkfront: harden blkfront against event channel storms
	xen/netfront: harden netfront against event channel storms
	xen/console: harden hvc_xen against event channel storms
	xen/netback: fix rx queue stall detection
	xen/netback: don't queue unlimited number of packages
	Linux 5.4.168

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I0c9e50b773834cb761a3b371132274d8120a8e5b
commit 3cd0728e7b
Author: Greg Kroah-Hartman
Date:   2021-12-22 10:08:44 +01:00

75 changed files with 547 additions and 273 deletions


@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 4
SUBLEVEL = 167
SUBLEVEL = 168
EXTRAVERSION =
NAME = Kleptomaniac Octopus


@ -82,6 +82,6 @@
#define MX6ULL_PAD_CSI_DATA04__ESAI_TX_FS 0x01F4 0x0480 0x0000 0x9 0x0
#define MX6ULL_PAD_CSI_DATA05__ESAI_TX_CLK 0x01F8 0x0484 0x0000 0x9 0x0
#define MX6ULL_PAD_CSI_DATA06__ESAI_TX5_RX0 0x01FC 0x0488 0x0000 0x9 0x0
#define MX6ULL_PAD_CSI_DATA07__ESAI_T0 0x0200 0x048C 0x0000 0x9 0x0
#define MX6ULL_PAD_CSI_DATA07__ESAI_TX0 0x0200 0x048C 0x0000 0x9 0x0
#endif /* __DTS_IMX6ULL_PINFUNC_H */


@ -12,7 +12,7 @@
flash0: n25q00@0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "n25q00aa";
compatible = "micron,mt25qu02g", "jedec,spi-nor";
reg = <0>;
spi-max-frequency = <100000000>;


@ -119,7 +119,7 @@
flash: flash@0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "n25q256a";
compatible = "micron,n25q256a", "jedec,spi-nor";
reg = <0>;
spi-max-frequency = <100000000>;


@ -124,7 +124,7 @@
flash0: n25q00@0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "n25q00";
compatible = "micron,mt25qu02g", "jedec,spi-nor";
reg = <0>; /* chip select */
spi-max-frequency = <100000000>;


@ -169,7 +169,7 @@
flash: flash@0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "n25q00";
compatible = "micron,mt25qu02g", "jedec,spi-nor";
reg = <0>;
spi-max-frequency = <100000000>;


@ -80,7 +80,7 @@
flash: flash@0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "n25q256a";
compatible = "micron,n25q256a", "jedec,spi-nor";
reg = <0>;
spi-max-frequency = <100000000>;
m25p,fast-read;


@ -116,7 +116,7 @@
flash0: n25q512a@0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "n25q512a";
compatible = "micron,n25q512a", "jedec,spi-nor";
reg = <0>;
spi-max-frequency = <100000000>;


@ -224,7 +224,7 @@
n25q128@0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "n25q128";
compatible = "micron,n25q128", "jedec,spi-nor";
reg = <0>; /* chip select */
spi-max-frequency = <100000000>;
m25p,fast-read;
@ -241,7 +241,7 @@
n25q00@1 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "n25q00";
compatible = "micron,mt25qu02g", "jedec,spi-nor";
reg = <1>; /* chip select */
spi-max-frequency = <100000000>;
m25p,fast-read;


@ -685,7 +685,6 @@
&sdhci {
bus-width = <8>;
mmc-hs400-1_8v;
mmc-hs400-enhanced-strobe;
non-removable;
status = "okay";
};


@ -49,7 +49,7 @@
regulator-boot-on;
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
vim-supply = <&vcc3v3_sys>;
vin-supply = <&vcc3v3_sys>;
};
vcc3v3_sys: vcc3v3-sys {


@ -452,7 +452,7 @@
status = "okay";
bt656-supply = <&vcc_3v0>;
audio-supply = <&vcc_3v0>;
audio-supply = <&vcc1v8_codec>;
sdmmc-supply = <&vcc_sdio>;
gpio1830-supply = <&vcc_3v0>;
};


@ -277,6 +277,7 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
{
Elf_Rela *relas;
int i, r_type;
int ret;
relas = (void *)pi->ehdr + relsec->sh_offset;
@ -311,7 +312,11 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
addr = section->sh_addr + relas[i].r_offset;
r_type = ELF64_R_TYPE(relas[i].r_info);
arch_kexec_do_relocs(r_type, loc, val, addr);
ret = arch_kexec_do_relocs(r_type, loc, val, addr);
if (ret) {
pr_err("Unknown rela relocation: %d\n", r_type);
return -ENOEXEC;
}
}
return 0;
}


@ -3164,8 +3164,19 @@ static unsigned int ata_scsi_pass_thru(struct ata_queued_cmd *qc)
goto invalid_fld;
}
if (ata_is_ncq(tf->protocol) && (cdb[2 + cdb_offset] & 0x3) == 0)
tf->protocol = ATA_PROT_NCQ_NODATA;
if ((cdb[2 + cdb_offset] & 0x3) == 0) {
/*
* When T_LENGTH is zero (No data is transferred), dir should
* be DMA_NONE.
*/
if (scmd->sc_data_direction != DMA_NONE) {
fp = 2 + cdb_offset;
goto invalid_fld;
}
if (ata_is_ncq(tf->protocol))
tf->protocol = ATA_PROT_NCQ_NODATA;
}
/* enable LBA */
tf->flags |= ATA_TFLAG_LBA;


@ -1565,9 +1565,12 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
unsigned long flags;
struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
struct blkfront_info *info = rinfo->dev_info;
unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS);
return IRQ_HANDLED;
}
spin_lock_irqsave(&rinfo->ring_lock, flags);
again:
@ -1583,6 +1586,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
unsigned long id;
unsigned int op;
eoiflag = 0;
RING_COPY_RESPONSE(&rinfo->ring, i, &bret);
id = bret.id;
@ -1698,6 +1703,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
spin_unlock_irqrestore(&rinfo->ring_lock, flags);
xen_irq_lateeoi(irq, eoiflag);
return IRQ_HANDLED;
err:
@ -1705,6 +1712,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
spin_unlock_irqrestore(&rinfo->ring_lock, flags);
/* No EOI in order to avoid further interrupts. */
pr_alert("%s disabled for further use\n", info->gd->disk_name);
return IRQ_HANDLED;
}
@ -1744,8 +1753,8 @@ static int setup_blkring(struct xenbus_device *dev,
if (err)
goto fail;
err = bind_evtchn_to_irqhandler(rinfo->evtchn, blkif_interrupt, 0,
"blkif", rinfo);
err = bind_evtchn_to_irqhandler_lateeoi(rinfo->evtchn, blkif_interrupt,
0, "blkif", rinfo);
if (err <= 0) {
xenbus_dev_fatal(dev, err,
"bind_evtchn_to_irqhandler failed");


@ -3379,6 +3379,14 @@ static int __clk_core_init(struct clk_core *core)
clk_prepare_lock();
/*
* Set hw->core after grabbing the prepare_lock to synchronize with
* callers of clk_core_fill_parent_index() where we treat hw->core
* being NULL as the clk not being registered yet. This is crucial so
* that clks aren't parented until their parent is fully registered.
*/
core->hw->core = core;
ret = clk_pm_runtime_get(core);
if (ret)
goto unlock;
@ -3534,8 +3542,10 @@ static int __clk_core_init(struct clk_core *core)
out:
clk_pm_runtime_put(core);
unlock:
if (ret)
if (ret) {
hlist_del_init(&core->child_node);
core->hw->core = NULL;
}
clk_prepare_unlock();
@ -3781,7 +3791,6 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
core->num_parents = init->num_parents;
core->min_rate = 0;
core->max_rate = ULONG_MAX;
hw->core = core;
ret = clk_core_populate_parent_map(core, init);
if (ret)
@ -3799,7 +3808,7 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
goto fail_create_clk;
}
clk_core_link_consumer(hw->core, hw->clk);
clk_core_link_consumer(core, hw->clk);
ret = __clk_core_init(core);
if (!ret)


@ -873,4 +873,4 @@ MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("STMicroelectronics FDMA engine driver");
MODULE_AUTHOR("Ludovic.barre <Ludovic.barre@st.com>");
MODULE_AUTHOR("Peter Griffin <peter.griffin@linaro.org>");
MODULE_ALIAS("platform: " DRIVER_NAME);
MODULE_ALIAS("platform:" DRIVER_NAME);


@ -16,7 +16,6 @@ struct scpi_pm_domain {
struct generic_pm_domain genpd;
struct scpi_ops *ops;
u32 domain;
char name[30];
};
/*
@ -110,8 +109,13 @@ static int scpi_pm_domain_probe(struct platform_device *pdev)
scpi_pd->domain = i;
scpi_pd->ops = scpi_ops;
sprintf(scpi_pd->name, "%pOFn.%d", np, i);
scpi_pd->genpd.name = scpi_pd->name;
scpi_pd->genpd.name = devm_kasprintf(dev, GFP_KERNEL,
"%pOFn.%d", np, i);
if (!scpi_pd->genpd.name) {
dev_err(dev, "Failed to allocate genpd name:%pOFn.%d\n",
np, i);
continue;
}
scpi_pd->genpd.power_off = scpi_pd_power_off;
scpi_pd->genpd.power_on = scpi_pd_power_on;


@ -2906,8 +2906,8 @@ static void gfx_v9_0_init_pg(struct amdgpu_device *adev)
AMD_PG_SUPPORT_CP |
AMD_PG_SUPPORT_GDS |
AMD_PG_SUPPORT_RLC_SMU_HS)) {
WREG32(mmRLC_JUMP_TABLE_RESTORE,
adev->gfx.rlc.cp_table_gpu_addr >> 8);
WREG32_SOC15(GC, 0, mmRLC_JUMP_TABLE_RESTORE,
adev->gfx.rlc.cp_table_gpu_addr >> 8);
gfx_v9_0_init_gfx_power_gating(adev);
}
}


@ -933,6 +933,7 @@ static int stm32h7_adc_prepare(struct stm32_adc *adc)
static void stm32h7_adc_unprepare(struct stm32_adc *adc)
{
stm32_adc_writel(adc, STM32H7_ADC_PCSEL, 0);
stm32h7_adc_disable(adc);
stm32h7_adc_enter_pwr_down(adc);
}


@ -81,8 +81,8 @@ void touchscreen_parse_properties(struct input_dev *input, bool multitouch,
touchscreen_get_prop_u32(dev, "touchscreen-size-x",
input_abs_get_max(input,
axis) + 1,
&maximum) |
touchscreen_get_prop_u32(dev, "touchscreen-fuzz-x",
&maximum);
data_present |= touchscreen_get_prop_u32(dev, "touchscreen-fuzz-x",
input_abs_get_fuzz(input, axis),
&fuzz);
if (data_present)
@ -95,8 +95,8 @@ void touchscreen_parse_properties(struct input_dev *input, bool multitouch,
touchscreen_get_prop_u32(dev, "touchscreen-size-y",
input_abs_get_max(input,
axis) + 1,
&maximum) |
touchscreen_get_prop_u32(dev, "touchscreen-fuzz-y",
&maximum);
data_present |= touchscreen_get_prop_u32(dev, "touchscreen-fuzz-y",
input_abs_get_fuzz(input, axis),
&fuzz);
if (data_present)
@ -106,11 +106,11 @@ void touchscreen_parse_properties(struct input_dev *input, bool multitouch,
data_present = touchscreen_get_prop_u32(dev,
"touchscreen-max-pressure",
input_abs_get_max(input, axis),
&maximum) |
touchscreen_get_prop_u32(dev,
"touchscreen-fuzz-pressure",
input_abs_get_fuzz(input, axis),
&fuzz);
&maximum);
data_present |= touchscreen_get_prop_u32(dev,
"touchscreen-fuzz-pressure",
input_abs_get_fuzz(input, axis),
&fuzz);
if (data_present)
touchscreen_set_params(input, axis, 0, maximum, fuzz);


@ -423,9 +423,9 @@ static int rebalance_children(struct shadow_spine *s,
memcpy(n, dm_block_data(child),
dm_bm_block_size(dm_tm_get_bm(info->tm)));
dm_tm_unlock(info->tm, child);
dm_tm_dec(info->tm, dm_block_location(child));
dm_tm_unlock(info->tm, child);
return 0;
}


@ -931,8 +931,6 @@ static int mxl111sf_init(struct dvb_usb_device *d)
.len = sizeof(eeprom), .buf = eeprom },
};
mutex_init(&state->msg_lock);
ret = get_chip_info(state);
if (mxl_fail(ret))
pr_err("failed to get chip info during probe");
@ -1074,6 +1072,14 @@ static int mxl111sf_get_stream_config_dvbt(struct dvb_frontend *fe,
return 0;
}
static int mxl111sf_probe(struct dvb_usb_device *dev)
{
struct mxl111sf_state *state = d_to_priv(dev);
mutex_init(&state->msg_lock);
return 0;
}
static struct dvb_usb_device_properties mxl111sf_props_dvbt = {
.driver_name = KBUILD_MODNAME,
.owner = THIS_MODULE,
@ -1083,6 +1089,7 @@ static struct dvb_usb_device_properties mxl111sf_props_dvbt = {
.generic_bulk_ctrl_endpoint = 0x02,
.generic_bulk_ctrl_endpoint_response = 0x81,
.probe = mxl111sf_probe,
.i2c_algo = &mxl111sf_i2c_algo,
.frontend_attach = mxl111sf_frontend_attach_dvbt,
.tuner_attach = mxl111sf_attach_tuner,
@ -1124,6 +1131,7 @@ static struct dvb_usb_device_properties mxl111sf_props_atsc = {
.generic_bulk_ctrl_endpoint = 0x02,
.generic_bulk_ctrl_endpoint_response = 0x81,
.probe = mxl111sf_probe,
.i2c_algo = &mxl111sf_i2c_algo,
.frontend_attach = mxl111sf_frontend_attach_atsc,
.tuner_attach = mxl111sf_attach_tuner,
@ -1165,6 +1173,7 @@ static struct dvb_usb_device_properties mxl111sf_props_mh = {
.generic_bulk_ctrl_endpoint = 0x02,
.generic_bulk_ctrl_endpoint_response = 0x81,
.probe = mxl111sf_probe,
.i2c_algo = &mxl111sf_i2c_algo,
.frontend_attach = mxl111sf_frontend_attach_mh,
.tuner_attach = mxl111sf_attach_tuner,
@ -1233,6 +1242,7 @@ static struct dvb_usb_device_properties mxl111sf_props_atsc_mh = {
.generic_bulk_ctrl_endpoint = 0x02,
.generic_bulk_ctrl_endpoint_response = 0x81,
.probe = mxl111sf_probe,
.i2c_algo = &mxl111sf_i2c_algo,
.frontend_attach = mxl111sf_frontend_attach_atsc_mh,
.tuner_attach = mxl111sf_attach_tuner,
@ -1311,6 +1321,7 @@ static struct dvb_usb_device_properties mxl111sf_props_mercury = {
.generic_bulk_ctrl_endpoint = 0x02,
.generic_bulk_ctrl_endpoint_response = 0x81,
.probe = mxl111sf_probe,
.i2c_algo = &mxl111sf_i2c_algo,
.frontend_attach = mxl111sf_frontend_attach_mercury,
.tuner_attach = mxl111sf_attach_tuner,
@ -1381,6 +1392,7 @@ static struct dvb_usb_device_properties mxl111sf_props_mercury_mh = {
.generic_bulk_ctrl_endpoint = 0x02,
.generic_bulk_ctrl_endpoint_response = 0x81,
.probe = mxl111sf_probe,
.i2c_algo = &mxl111sf_i2c_algo,
.frontend_attach = mxl111sf_frontend_attach_mercury_mh,
.tuner_attach = mxl111sf_attach_tuner,


@ -1277,11 +1277,11 @@ static netdev_tx_t bcm_sysport_xmit(struct sk_buff *skb,
struct bcm_sysport_priv *priv = netdev_priv(dev);
struct device *kdev = &priv->pdev->dev;
struct bcm_sysport_tx_ring *ring;
unsigned long flags, desc_flags;
struct bcm_sysport_cb *cb;
struct netdev_queue *txq;
u32 len_status, addr_lo;
unsigned int skb_len;
unsigned long flags;
dma_addr_t mapping;
u16 queue;
int ret;
@ -1339,8 +1339,10 @@ static netdev_tx_t bcm_sysport_xmit(struct sk_buff *skb,
ring->desc_count--;
/* Ports are latched, so write upper address first */
spin_lock_irqsave(&priv->desc_lock, desc_flags);
tdma_writel(priv, len_status, TDMA_WRITE_PORT_HI(ring->index));
tdma_writel(priv, addr_lo, TDMA_WRITE_PORT_LO(ring->index));
spin_unlock_irqrestore(&priv->desc_lock, desc_flags);
/* Check ring space and update SW control flow */
if (ring->desc_count == 0)
@ -1970,6 +1972,7 @@ static int bcm_sysport_open(struct net_device *dev)
}
/* Initialize both hardware and software ring */
spin_lock_init(&priv->desc_lock);
for (i = 0; i < dev->num_tx_queues; i++) {
ret = bcm_sysport_init_tx_ring(priv, i);
if (ret) {


@ -742,6 +742,7 @@ struct bcm_sysport_priv {
int wol_irq;
/* Transmit rings */
spinlock_t desc_lock;
struct bcm_sysport_tx_ring *tx_rings;
/* Receive queue */


@ -7374,6 +7374,20 @@ static int igb_set_vf_mac_filter(struct igb_adapter *adapter, const int vf,
struct vf_mac_filter *entry = NULL;
int ret = 0;
if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) &&
!vf_data->trusted) {
dev_warn(&pdev->dev,
"VF %d requested MAC filter but is administratively denied\n",
vf);
return -EINVAL;
}
if (!is_valid_ether_addr(addr)) {
dev_warn(&pdev->dev,
"VF %d attempted to set invalid MAC filter\n",
vf);
return -EINVAL;
}
switch (info) {
case E1000_VF_MAC_FILTER_CLR:
/* remove all unicast MAC filters related to the current VF */
@ -7387,20 +7401,6 @@ static int igb_set_vf_mac_filter(struct igb_adapter *adapter, const int vf,
}
break;
case E1000_VF_MAC_FILTER_ADD:
if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) &&
!vf_data->trusted) {
dev_warn(&pdev->dev,
"VF %d requested MAC filter but is administratively denied\n",
vf);
return -EINVAL;
}
if (!is_valid_ether_addr(addr)) {
dev_warn(&pdev->dev,
"VF %d attempted to set invalid MAC filter\n",
vf);
return -EINVAL;
}
/* try to find empty slot in the list */
list_for_each(pos, &adapter->vf_macs.l) {
entry = list_entry(pos, struct vf_mac_filter, l);


@ -2887,6 +2887,7 @@ static int igbvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
return 0;
err_hw_init:
netif_napi_del(&adapter->rx_ring->napi);
kfree(adapter->tx_ring);
kfree(adapter->rx_ring);
err_sw_init:


@ -3405,6 +3405,9 @@ static s32 ixgbe_reset_hw_X550em(struct ixgbe_hw *hw)
/* flush pending Tx transactions */
ixgbe_clear_tx_pending(hw);
/* set MDIO speed before talking to the PHY in case it's the 1st time */
ixgbe_set_mdio_speed(hw);
/* PHY ops must be identified and initialized prior to reset */
status = hw->phy.ops.init(hw);
if (status == IXGBE_ERR_SFP_NOT_SUPPORTED ||


@ -510,6 +510,7 @@ nsim_bpf_map_alloc(struct netdevsim *ns, struct bpf_offloaded_map *offmap)
goto err_free;
key = nmap->entry[i].key;
*key = i;
memset(nmap->entry[i].value, 0, offmap->map.value_size);
}
}


@ -322,9 +322,9 @@ static int mwifiex_dnld_sleep_confirm_cmd(struct mwifiex_adapter *adapter)
adapter->seq_num++;
sleep_cfm_buf->seq_num =
cpu_to_le16((HostCmd_SET_SEQ_NO_BSS_INFO
cpu_to_le16(HostCmd_SET_SEQ_NO_BSS_INFO
(adapter->seq_num, priv->bss_num,
priv->bss_type)));
priv->bss_type));
mwifiex_dbg(adapter, CMD,
"cmd: DNLD_CMD: %#x, act %#x, len %d, seqno %#x\n",


@ -512,10 +512,10 @@ enum mwifiex_channel_flags {
#define RF_ANTENNA_AUTO 0xFFFF
#define HostCmd_SET_SEQ_NO_BSS_INFO(seq, num, type) { \
(((seq) & 0x00ff) | \
(((num) & 0x000f) << 8)) | \
(((type) & 0x000f) << 12); }
#define HostCmd_SET_SEQ_NO_BSS_INFO(seq, num, type) \
((((seq) & 0x00ff) | \
(((num) & 0x000f) << 8)) | \
(((type) & 0x000f) << 12))
#define HostCmd_GET_SEQ_NO(seq) \
((seq) & HostCmd_SEQ_NUM_MASK)


@ -203,6 +203,7 @@ struct xenvif_queue { /* Per-queue data for xenvif */
unsigned int rx_queue_max;
unsigned int rx_queue_len;
unsigned long last_rx_time;
unsigned int rx_slots_needed;
bool stalled;
struct xenvif_copy_state rx_copy;


@ -33,28 +33,36 @@
#include <xen/xen.h>
#include <xen/events.h>
/*
* Update the needed ring page slots for the first SKB queued.
* Note that any call sequence outside the RX thread calling this function
* needs to wake up the RX thread via a call of xenvif_kick_thread()
* afterwards in order to avoid a race with putting the thread to sleep.
*/
static void xenvif_update_needed_slots(struct xenvif_queue *queue,
const struct sk_buff *skb)
{
unsigned int needed = 0;
if (skb) {
needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE);
if (skb_is_gso(skb))
needed++;
if (skb->sw_hash)
needed++;
}
WRITE_ONCE(queue->rx_slots_needed, needed);
}
static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)
{
RING_IDX prod, cons;
struct sk_buff *skb;
int needed;
unsigned long flags;
unsigned int needed;
spin_lock_irqsave(&queue->rx_queue.lock, flags);
skb = skb_peek(&queue->rx_queue);
if (!skb) {
spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
needed = READ_ONCE(queue->rx_slots_needed);
if (!needed)
return false;
}
needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE);
if (skb_is_gso(skb))
needed++;
if (skb->sw_hash)
needed++;
spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
do {
prod = queue->rx.sring->req_prod;
@ -80,13 +88,19 @@ void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb)
spin_lock_irqsave(&queue->rx_queue.lock, flags);
__skb_queue_tail(&queue->rx_queue, skb);
queue->rx_queue_len += skb->len;
if (queue->rx_queue_len > queue->rx_queue_max) {
if (queue->rx_queue_len >= queue->rx_queue_max) {
struct net_device *dev = queue->vif->dev;
netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
kfree_skb(skb);
queue->vif->dev->stats.rx_dropped++;
} else {
if (skb_queue_empty(&queue->rx_queue))
xenvif_update_needed_slots(queue, skb);
__skb_queue_tail(&queue->rx_queue, skb);
queue->rx_queue_len += skb->len;
}
spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
@ -100,6 +114,8 @@ static struct sk_buff *xenvif_rx_dequeue(struct xenvif_queue *queue)
skb = __skb_dequeue(&queue->rx_queue);
if (skb) {
xenvif_update_needed_slots(queue, skb_peek(&queue->rx_queue));
queue->rx_queue_len -= skb->len;
if (queue->rx_queue_len < queue->rx_queue_max) {
struct netdev_queue *txq;
@ -134,6 +150,7 @@ static void xenvif_rx_queue_drop_expired(struct xenvif_queue *queue)
break;
xenvif_rx_dequeue(queue);
kfree_skb(skb);
queue->vif->dev->stats.rx_dropped++;
}
}
@ -474,27 +491,31 @@ void xenvif_rx_action(struct xenvif_queue *queue)
xenvif_rx_copy_flush(queue);
}
static bool xenvif_rx_queue_stalled(struct xenvif_queue *queue)
static RING_IDX xenvif_rx_queue_slots(const struct xenvif_queue *queue)
{
RING_IDX prod, cons;
prod = queue->rx.sring->req_prod;
cons = queue->rx.req_cons;
return prod - cons;
}
static bool xenvif_rx_queue_stalled(const struct xenvif_queue *queue)
{
unsigned int needed = READ_ONCE(queue->rx_slots_needed);
return !queue->stalled &&
prod - cons < 1 &&
xenvif_rx_queue_slots(queue) < needed &&
time_after(jiffies,
queue->last_rx_time + queue->vif->stall_timeout);
}
static bool xenvif_rx_queue_ready(struct xenvif_queue *queue)
{
RING_IDX prod, cons;
unsigned int needed = READ_ONCE(queue->rx_slots_needed);
prod = queue->rx.sring->req_prod;
cons = queue->rx.req_cons;
return queue->stalled && prod - cons >= 1;
return queue->stalled && xenvif_rx_queue_slots(queue) >= needed;
}
bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread)


@ -142,6 +142,9 @@ struct netfront_queue {
struct sk_buff *rx_skbs[NET_RX_RING_SIZE];
grant_ref_t gref_rx_head;
grant_ref_t grant_rx_ref[NET_RX_RING_SIZE];
unsigned int rx_rsp_unconsumed;
spinlock_t rx_cons_lock;
};
struct netfront_info {
@ -364,12 +367,13 @@ static int xennet_open(struct net_device *dev)
return 0;
}
static void xennet_tx_buf_gc(struct netfront_queue *queue)
static bool xennet_tx_buf_gc(struct netfront_queue *queue)
{
RING_IDX cons, prod;
unsigned short id;
struct sk_buff *skb;
bool more_to_do;
bool work_done = false;
const struct device *dev = &queue->info->netdev->dev;
BUG_ON(!netif_carrier_ok(queue->info->netdev));
@ -386,6 +390,8 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
struct xen_netif_tx_response txrsp;
work_done = true;
RING_COPY_RESPONSE(&queue->tx, cons, &txrsp);
if (txrsp.status == XEN_NETIF_RSP_NULL)
continue;
@ -429,11 +435,13 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
xennet_maybe_wake_tx(queue);
return;
return work_done;
err:
queue->info->broken = true;
dev_alert(dev, "Disabled for further use\n");
return work_done;
}
struct xennet_gnttab_make_txreq {
@ -753,6 +761,16 @@ static int xennet_close(struct net_device *dev)
return 0;
}
static void xennet_set_rx_rsp_cons(struct netfront_queue *queue, RING_IDX val)
{
unsigned long flags;
spin_lock_irqsave(&queue->rx_cons_lock, flags);
queue->rx.rsp_cons = val;
queue->rx_rsp_unconsumed = RING_HAS_UNCONSUMED_RESPONSES(&queue->rx);
spin_unlock_irqrestore(&queue->rx_cons_lock, flags);
}
static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb,
grant_ref_t ref)
{
@ -804,7 +822,7 @@ static int xennet_get_extras(struct netfront_queue *queue,
xennet_move_rx_slot(queue, skb, ref);
} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
queue->rx.rsp_cons = cons;
xennet_set_rx_rsp_cons(queue, cons);
return err;
}
@ -884,7 +902,7 @@ static int xennet_get_responses(struct netfront_queue *queue,
}
if (unlikely(err))
queue->rx.rsp_cons = cons + slots;
xennet_set_rx_rsp_cons(queue, cons + slots);
return err;
}
@ -938,7 +956,8 @@ static int xennet_fill_frags(struct netfront_queue *queue,
__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
}
if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) {
queue->rx.rsp_cons = ++cons + skb_queue_len(list);
xennet_set_rx_rsp_cons(queue,
++cons + skb_queue_len(list));
kfree_skb(nskb);
return -ENOENT;
}
@ -951,7 +970,7 @@ static int xennet_fill_frags(struct netfront_queue *queue,
kfree_skb(nskb);
}
queue->rx.rsp_cons = cons;
xennet_set_rx_rsp_cons(queue, cons);
return 0;
}
@ -1072,7 +1091,9 @@ static int xennet_poll(struct napi_struct *napi, int budget)
if (unlikely(xennet_set_skb_gso(skb, gso))) {
__skb_queue_head(&tmpq, skb);
queue->rx.rsp_cons += skb_queue_len(&tmpq);
xennet_set_rx_rsp_cons(queue,
queue->rx.rsp_cons +
skb_queue_len(&tmpq));
goto err;
}
}
@ -1096,7 +1117,8 @@ static int xennet_poll(struct napi_struct *napi, int budget)
__skb_queue_tail(&rxq, skb);
i = ++queue->rx.rsp_cons;
i = queue->rx.rsp_cons + 1;
xennet_set_rx_rsp_cons(queue, i);
work_done++;
}
@ -1258,40 +1280,79 @@ static int xennet_set_features(struct net_device *dev,
return 0;
}
static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
static bool xennet_handle_tx(struct netfront_queue *queue, unsigned int *eoi)
{
struct netfront_queue *queue = dev_id;
unsigned long flags;
if (queue->info->broken)
return IRQ_HANDLED;
if (unlikely(queue->info->broken))
return false;
spin_lock_irqsave(&queue->tx_lock, flags);
xennet_tx_buf_gc(queue);
if (xennet_tx_buf_gc(queue))
*eoi = 0;
spin_unlock_irqrestore(&queue->tx_lock, flags);
return true;
}
static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
{
unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
if (likely(xennet_handle_tx(dev_id, &eoiflag)))
xen_irq_lateeoi(irq, eoiflag);
return IRQ_HANDLED;
}
static bool xennet_handle_rx(struct netfront_queue *queue, unsigned int *eoi)
{
unsigned int work_queued;
unsigned long flags;
if (unlikely(queue->info->broken))
return false;
spin_lock_irqsave(&queue->rx_cons_lock, flags);
work_queued = RING_HAS_UNCONSUMED_RESPONSES(&queue->rx);
if (work_queued > queue->rx_rsp_unconsumed) {
queue->rx_rsp_unconsumed = work_queued;
*eoi = 0;
} else if (unlikely(work_queued < queue->rx_rsp_unconsumed)) {
const struct device *dev = &queue->info->netdev->dev;
spin_unlock_irqrestore(&queue->rx_cons_lock, flags);
dev_alert(dev, "RX producer index going backwards\n");
dev_alert(dev, "Disabled for further use\n");
queue->info->broken = true;
return false;
}
spin_unlock_irqrestore(&queue->rx_cons_lock, flags);
if (likely(netif_carrier_ok(queue->info->netdev) && work_queued))
napi_schedule(&queue->napi);
return true;
}
static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
{
struct netfront_queue *queue = dev_id;
struct net_device *dev = queue->info->netdev;
unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
if (queue->info->broken)
return IRQ_HANDLED;
if (likely(netif_carrier_ok(dev) &&
RING_HAS_UNCONSUMED_RESPONSES(&queue->rx)))
napi_schedule(&queue->napi);
if (likely(xennet_handle_rx(dev_id, &eoiflag)))
xen_irq_lateeoi(irq, eoiflag);
return IRQ_HANDLED;
}
static irqreturn_t xennet_interrupt(int irq, void *dev_id)
{
xennet_tx_interrupt(irq, dev_id);
xennet_rx_interrupt(irq, dev_id);
unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
if (xennet_handle_tx(dev_id, &eoiflag) &&
xennet_handle_rx(dev_id, &eoiflag))
xen_irq_lateeoi(irq, eoiflag);
return IRQ_HANDLED;
}
@ -1525,9 +1586,10 @@ static int setup_netfront_single(struct netfront_queue *queue)
if (err < 0)
goto fail;
err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
xennet_interrupt,
0, queue->info->netdev->name, queue);
err = bind_evtchn_to_irqhandler_lateeoi(queue->tx_evtchn,
xennet_interrupt, 0,
queue->info->netdev->name,
queue);
if (err < 0)
goto bind_fail;
queue->rx_evtchn = queue->tx_evtchn;
@ -1555,18 +1617,18 @@ static int setup_netfront_split(struct netfront_queue *queue)
snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
"%s-tx", queue->name);
err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
xennet_tx_interrupt,
0, queue->tx_irq_name, queue);
err = bind_evtchn_to_irqhandler_lateeoi(queue->tx_evtchn,
xennet_tx_interrupt, 0,
queue->tx_irq_name, queue);
if (err < 0)
goto bind_tx_fail;
queue->tx_irq = err;
snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
"%s-rx", queue->name);
err = bind_evtchn_to_irqhandler(queue->rx_evtchn,
xennet_rx_interrupt,
0, queue->rx_irq_name, queue);
err = bind_evtchn_to_irqhandler_lateeoi(queue->rx_evtchn,
xennet_rx_interrupt, 0,
queue->rx_irq_name, queue);
if (err < 0)
goto bind_rx_fail;
queue->rx_irq = err;
@ -1668,6 +1730,7 @@ static int xennet_init_queue(struct netfront_queue *queue)
spin_lock_init(&queue->tx_lock);
spin_lock_init(&queue->rx_lock);
spin_lock_init(&queue->rx_cons_lock);
timer_setup(&queue->rx_refill_timer, rx_refill_timeout, 0);


@ -826,9 +826,6 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
goto out_disable;
}
/* Ensure that all table entries are masked. */
msix_mask_all(base, tsize);
ret = msix_setup_entries(dev, base, entries, nvec, affd);
if (ret)
goto out_disable;
@ -851,6 +848,16 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
/* Set MSI-X enabled bits and unmask the function */
pci_intx_for_msi(dev, 0);
dev->msix_enabled = 1;
/*
* Ensure that all table entries are masked to prevent
* stale entries from firing in a crash kernel.
*
* Done late to deal with a broken Marvell NVME device
* which takes the MSI-X mask bits into account even
* when MSI-X is disabled, which prevents MSI delivery.
*/
msix_mask_all(base, tsize);
pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0);
pcibios_free_irq(dev);
@ -877,7 +884,7 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
free_msi_irqs(dev);
out_disable:
pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0);
pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL | PCI_MSIX_FLAGS_ENABLE, 0);
return ret;
}


@ -2296,11 +2296,11 @@ static int resp_mode_select(struct scsi_cmnd *scp,
__func__, param_len, res);
md_len = mselect6 ? (arr[0] + 1) : (get_unaligned_be16(arr + 0) + 2);
bd_len = mselect6 ? arr[3] : get_unaligned_be16(arr + 6);
if (md_len > 2) {
off = bd_len + (mselect6 ? 4 : 8);
if (md_len > 2 || off >= res) {
mk_sense_invalid_fld(scp, SDEB_IN_DATA, 0, -1);
return check_condition_result;
}
off = bd_len + (mselect6 ? 4 : 8);
mpage = arr[off] & 0x3f;
ps = !!(arr[off] & 0x80);
if (ps) {


@ -172,7 +172,7 @@ static struct platform_driver tegra_fuse_driver = {
};
builtin_platform_driver(tegra_fuse_driver);
bool __init tegra_fuse_read_spare(unsigned int spare)
u32 __init tegra_fuse_read_spare(unsigned int spare)
{
unsigned int offset = fuse->soc->info->spare + spare * 4;


@ -53,7 +53,7 @@ struct tegra_fuse {
void tegra_init_revision(void);
void tegra_init_apbmisc(void);
bool __init tegra_fuse_read_spare(unsigned int spare);
u32 __init tegra_fuse_read_spare(unsigned int spare);
u32 __init tegra_fuse_read_early(unsigned int offset);
#ifdef CONFIG_ARCH_TEGRA_2x_SOC


@ -37,6 +37,8 @@ struct xencons_info {
struct xenbus_device *xbdev;
struct xencons_interface *intf;
unsigned int evtchn;
XENCONS_RING_IDX out_cons;
unsigned int out_cons_same;
struct hvc_struct *hvc;
int irq;
int vtermno;
@ -138,6 +140,8 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
XENCONS_RING_IDX cons, prod;
int recv = 0;
struct xencons_info *xencons = vtermno_to_xencons(vtermno);
unsigned int eoiflag = 0;
if (xencons == NULL)
return -EINVAL;
intf = xencons->intf;
@ -157,7 +161,27 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
mb(); /* read ring before consuming */
intf->in_cons = cons;
notify_daemon(xencons);
/*
* When to mark interrupt having been spurious:
* - there was no new data to be read, and
* - the backend did not consume some output bytes, and
* - the previous round with no read data didn't see consumed bytes
* (we might have a race with an interrupt being in flight while
* updating xencons->out_cons, so account for that by allowing one
* round without any visible reason)
*/
if (intf->out_cons != xencons->out_cons) {
xencons->out_cons = intf->out_cons;
xencons->out_cons_same = 0;
}
if (recv) {
notify_daemon(xencons);
} else if (xencons->out_cons_same++ > 1) {
eoiflag = XEN_EOI_FLAG_SPURIOUS;
}
xen_irq_lateeoi(xencons->irq, eoiflag);
return recv;
}
@ -386,7 +410,7 @@ static int xencons_connect_backend(struct xenbus_device *dev,
if (ret)
return ret;
info->evtchn = evtchn;
irq = bind_evtchn_to_irq(evtchn);
irq = bind_interdomain_evtchn_to_irq_lateeoi(dev->otherend_id, evtchn);
if (irq < 0)
return irq;
info->irq = irq;
@ -550,7 +574,7 @@ static int __init xen_hvc_init(void)
return r;
info = vtermno_to_xencons(HVC_COOKIE);
info->irq = bind_evtchn_to_irq(info->evtchn);
info->irq = bind_evtchn_to_irq_lateeoi(info->evtchn);
}
if (info->irq < 0)
info->irq = 0; /* NO_IRQ */


@ -435,6 +435,9 @@ static const struct usb_device_id usb_quirk_list[] = {
{ USB_DEVICE(0x1532, 0x0116), .driver_info =
USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL },
/* Lenovo USB-C to Ethernet Adapter RTL8153-04 */
{ USB_DEVICE(0x17ef, 0x720c), .driver_info = USB_QUIRK_NO_LPM },
/* Lenovo Powered USB-C Travel Hub (4X90S92381, RTL8153 GigE) */
{ USB_DEVICE(0x17ef, 0x721e), .driver_info = USB_QUIRK_NO_LPM },


@ -1649,14 +1649,14 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
u8 endp;
if (w_length > USB_COMP_EP0_BUFSIZ) {
if (ctrl->bRequestType == USB_DIR_OUT) {
goto done;
} else {
if (ctrl->bRequestType & USB_DIR_IN) {
/* Cast away the const, we are going to overwrite on purpose. */
__le16 *temp = (__le16 *)&ctrl->wLength;
*temp = cpu_to_le16(USB_COMP_EP0_BUFSIZ);
w_length = USB_COMP_EP0_BUFSIZ;
} else {
goto done;
}
}


@ -346,14 +346,14 @@ static int dbgp_setup(struct usb_gadget *gadget,
u16 len = 0;
if (length > DBGP_REQ_LEN) {
if (ctrl->bRequestType == USB_DIR_OUT) {
return err;
} else {
if (ctrl->bRequestType & USB_DIR_IN) {
/* Cast away the const, we are going to overwrite on purpose. */
__le16 *temp = (__le16 *)&ctrl->wLength;
*temp = cpu_to_le16(DBGP_REQ_LEN);
length = DBGP_REQ_LEN;
} else {
return err;
}
}


@ -1336,14 +1336,14 @@ gadgetfs_setup (struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
u16 w_length = le16_to_cpu(ctrl->wLength);
if (w_length > RBUF_SIZE) {
if (ctrl->bRequestType == USB_DIR_OUT) {
return value;
} else {
if (ctrl->bRequestType & USB_DIR_IN) {
/* Cast away the const, we are going to overwrite on purpose. */
__le16 *temp = (__le16 *)&ctrl->wLength;
*temp = cpu_to_le16(RBUF_SIZE);
w_length = RBUF_SIZE;
} else {
return value;
}
}


@ -66,6 +66,8 @@
#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4 0x161e
#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5 0x15d6
#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6 0x15d7
#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_7 0x161c
#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_8 0x161f
#define PCI_DEVICE_ID_ASMEDIA_1042_XHCI 0x1042
#define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI 0x1142
@ -302,7 +304,9 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_3 ||
pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4 ||
pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5 ||
pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6))
pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6 ||
pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_7 ||
pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_8))
xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
if (xhci->quirks & XHCI_RESET_ON_RESUME)


@ -1552,6 +1552,8 @@ static int cp2105_gpioconf_init(struct usb_serial *serial)
/* 2 banks of GPIO - One for the pins taken from each serial port */
if (intf_num == 0) {
priv->gc.ngpio = 2;
if (mode.eci == CP210X_PIN_MODE_MODEM) {
/* mark all GPIOs of this interface as reserved */
priv->gpio_altfunc = 0xff;
@ -1562,8 +1564,9 @@ static int cp2105_gpioconf_init(struct usb_serial *serial)
priv->gpio_pushpull = (u8)((le16_to_cpu(config.gpio_mode) &
CP210X_ECI_GPIO_MODE_MASK) >>
CP210X_ECI_GPIO_MODE_OFFSET);
priv->gc.ngpio = 2;
} else if (intf_num == 1) {
priv->gc.ngpio = 3;
if (mode.sci == CP210X_PIN_MODE_MODEM) {
/* mark all GPIOs of this interface as reserved */
priv->gpio_altfunc = 0xff;
@ -1574,7 +1577,6 @@ static int cp2105_gpioconf_init(struct usb_serial *serial)
priv->gpio_pushpull = (u8)((le16_to_cpu(config.gpio_mode) &
CP210X_SCI_GPIO_MODE_MASK) >>
CP210X_SCI_GPIO_MODE_OFFSET);
priv->gc.ngpio = 3;
} else {
return -ENODEV;
}


@ -1219,6 +1219,14 @@ static const struct usb_device_id option_ids[] = {
.driver_info = NCTRL(2) | RSVD(3) },
{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1063, 0xff), /* Telit LN920 (ECM) */
.driver_info = NCTRL(0) | RSVD(1) },
{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1070, 0xff), /* Telit FN990 (rmnet) */
.driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1071, 0xff), /* Telit FN990 (MBIM) */
.driver_info = NCTRL(0) | RSVD(1) },
{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1072, 0xff), /* Telit FN990 (RNDIS) */
.driver_info = NCTRL(2) | RSVD(3) },
{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1073, 0xff), /* Telit FN990 (ECM) */
.driver_info = NCTRL(0) | RSVD(1) },
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
.driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),


@ -263,7 +263,7 @@ size_t virtio_max_dma_size(struct virtio_device *vdev)
size_t max_segment_size = SIZE_MAX;
if (vring_use_dma_api(vdev))
max_segment_size = dma_max_mapping_size(&vdev->dev);
max_segment_size = dma_max_mapping_size(vdev->dev.parent);
return max_segment_size;
}


@ -1069,7 +1069,7 @@ int fuse_reverse_inval_entry(struct super_block *sb, u64 parent_nodeid,
if (!parent)
return -ENOENT;
inode_lock(parent);
inode_lock_nested(parent, I_MUTEX_PARENT);
if (!S_ISDIR(parent->i_mode))
goto unlock;


@ -1041,6 +1041,11 @@ hash_delegation_locked(struct nfs4_delegation *dp, struct nfs4_file *fp)
return 0;
}
static bool delegation_hashed(struct nfs4_delegation *dp)
{
return !(list_empty(&dp->dl_perfile));
}
static bool
unhash_delegation_locked(struct nfs4_delegation *dp)
{
@ -1048,7 +1053,7 @@ unhash_delegation_locked(struct nfs4_delegation *dp)
lockdep_assert_held(&state_lock);
if (list_empty(&dp->dl_perfile))
if (!delegation_hashed(dp))
return false;
dp->dl_stid.sc_type = NFS4_CLOSED_DELEG_STID;
@ -4406,7 +4411,7 @@ static void nfsd4_cb_recall_prepare(struct nfsd4_callback *cb)
* queued for a lease break. Don't queue it again.
*/
spin_lock(&state_lock);
if (dp->dl_time == 0) {
if (delegation_hashed(dp) && dp->dl_time == 0) {
dp->dl_time = get_seconds();
list_add_tail(&dp->dl_recall_lru, &nn->del_recall_lru);
}


@ -113,8 +113,7 @@ int ovl_cleanup_and_whiteout(struct dentry *workdir, struct inode *dir,
goto out;
}
static int ovl_mkdir_real(struct inode *dir, struct dentry **newdentry,
umode_t mode)
int ovl_mkdir_real(struct inode *dir, struct dentry **newdentry, umode_t mode)
{
int err;
struct dentry *d, *dentry = *newdentry;


@ -418,6 +418,7 @@ struct ovl_cattr {
#define OVL_CATTR(m) (&(struct ovl_cattr) { .mode = (m) })
int ovl_mkdir_real(struct inode *dir, struct dentry **newdentry, umode_t mode);
struct dentry *ovl_create_real(struct inode *dir, struct dentry *newdentry,
struct ovl_cattr *attr);
int ovl_cleanup(struct inode *dir, struct dentry *dentry);


@ -671,10 +671,14 @@ static struct dentry *ovl_workdir_create(struct ovl_fs *ofs,
goto retry;
}
work = ovl_create_real(dir, work, OVL_CATTR(attr.ia_mode));
err = PTR_ERR(work);
if (IS_ERR(work))
goto out_err;
err = ovl_mkdir_real(dir, &work, attr.ia_mode);
if (err)
goto out_dput;
/* Weird filesystem returning with hashed negative (kernfs)? */
err = -EINVAL;
if (d_really_is_negative(work))
goto out_dput;
/*
* Try to remove POSIX ACL xattrs from workdir. We are good if:


@ -52,7 +52,10 @@ static inline struct ip_tunnel_info *tcf_tunnel_info(const struct tc_action *a)
{
#ifdef CONFIG_NET_CLS_ACT
struct tcf_tunnel_key *t = to_tunnel_key(a);
struct tcf_tunnel_key_params *params = rtnl_dereference(t->params);
struct tcf_tunnel_key_params *params;
params = rcu_dereference_protected(t->params,
lockdep_is_held(&a->tcfa_lock));
return &params->tcft_enc_metadata->u.tun_info;
#else
@ -69,7 +72,7 @@ tcf_tunnel_info_copy(const struct tc_action *a)
if (tun) {
size_t tun_size = sizeof(*tun) + tun->options_len;
struct ip_tunnel_info *tun_copy = kmemdup(tun, tun_size,
GFP_KERNEL);
GFP_ATOMIC);
return tun_copy;
}


@ -712,7 +712,7 @@ static int kauditd_send_queue(struct sock *sk, u32 portid,
{
int rc = 0;
struct sk_buff *skb;
static unsigned int failed = 0;
unsigned int failed = 0;
/* NOTE: kauditd_thread takes care of all our locking, we just use
* the netlink info passed to us (e.g. sk and portid) */
@ -729,32 +729,30 @@ static int kauditd_send_queue(struct sock *sk, u32 portid,
continue;
}
retry:
/* grab an extra skb reference in case of error */
skb_get(skb);
rc = netlink_unicast(sk, skb, portid, 0);
if (rc < 0) {
/* fatal failure for our queue flush attempt? */
/* send failed - try a few times unless fatal error */
if (++failed >= retry_limit ||
rc == -ECONNREFUSED || rc == -EPERM) {
/* yes - error processing for the queue */
sk = NULL;
if (err_hook)
(*err_hook)(skb);
if (!skb_hook)
goto out;
/* keep processing with the skb_hook */
if (rc == -EAGAIN)
rc = 0;
/* continue to drain the queue */
continue;
} else
/* no - requeue to preserve ordering */
skb_queue_head(queue, skb);
goto retry;
} else {
/* it worked - drop the extra reference and continue */
/* skb sent - drop the extra reference and continue */
consume_skb(skb);
failed = 0;
}
}
out:
return (rc >= 0 ? 0 : rc);
}
@ -1557,7 +1555,8 @@ static int __net_init audit_net_init(struct net *net)
audit_panic("cannot initialize netlink socket in namespace");
return -ENOMEM;
}
aunet->sk->sk_sndtimeo = MAX_SCHEDULE_TIMEOUT;
/* limit the timeout in case auditd is blocked/stopped */
aunet->sk->sk_sndtimeo = HZ / 10;
return 0;
}


@ -1604,7 +1604,7 @@ static void rcu_gp_fqs(bool first_time)
struct rcu_node *rnp = rcu_get_root();
WRITE_ONCE(rcu_state.gp_activity, jiffies);
rcu_state.n_force_qs++;
WRITE_ONCE(rcu_state.n_force_qs, rcu_state.n_force_qs + 1);
if (first_time) {
/* Collect dyntick-idle snapshots. */
force_qs_rnp(dyntick_save_progress_counter);
@ -2209,7 +2209,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
/* Reset ->qlen_last_fqs_check trigger if enough CBs have drained. */
if (count == 0 && rdp->qlen_last_fqs_check != 0) {
rdp->qlen_last_fqs_check = 0;
rdp->n_force_qs_snap = rcu_state.n_force_qs;
rdp->n_force_qs_snap = READ_ONCE(rcu_state.n_force_qs);
} else if (count < rdp->qlen_last_fqs_check - qhimark)
rdp->qlen_last_fqs_check = count;
@ -2537,10 +2537,10 @@ static void __call_rcu_core(struct rcu_data *rdp, struct rcu_head *head,
} else {
/* Give the grace period a kick. */
rdp->blimit = DEFAULT_MAX_RCU_BLIMIT;
if (rcu_state.n_force_qs == rdp->n_force_qs_snap &&
if (READ_ONCE(rcu_state.n_force_qs) == rdp->n_force_qs_snap &&
rcu_segcblist_first_pend_cb(&rdp->cblist) != head)
rcu_force_quiescent_state();
rdp->n_force_qs_snap = rcu_state.n_force_qs;
rdp->n_force_qs_snap = READ_ONCE(rcu_state.n_force_qs);
rdp->qlen_last_fqs_check = rcu_segcblist_n_cbs(&rdp->cblist);
}
}
@ -3031,7 +3031,7 @@ int rcutree_prepare_cpu(unsigned int cpu)
/* Set up local state, ensuring consistent view of global state. */
raw_spin_lock_irqsave_rcu_node(rnp, flags);
rdp->qlen_last_fqs_check = 0;
rdp->n_force_qs_snap = rcu_state.n_force_qs;
rdp->n_force_qs_snap = READ_ONCE(rcu_state.n_force_qs);
rdp->blimit = blimit;
if (rcu_segcblist_empty(&rdp->cblist) && /* No early-boot CBs? */
!rcu_segcblist_is_offloaded(&rdp->cblist))


@ -1236,8 +1236,7 @@ int do_settimeofday64(const struct timespec64 *ts)
timekeeping_forward_now(tk);
xt = tk_xtime(tk);
ts_delta.tv_sec = ts->tv_sec - xt.tv_sec;
ts_delta.tv_nsec = ts->tv_nsec - xt.tv_nsec;
ts_delta = timespec64_sub(*ts, xt);
if (timespec64_compare(&tk->wall_to_monotonic, &ts_delta) > 0) {
ret = -EINVAL;


@ -770,7 +770,7 @@ void skb_dump(const char *level, const struct sk_buff *skb, bool full_pkt)
ntohs(skb->protocol), skb->pkt_type, skb->skb_iif);
if (dev)
printk("%sdev name=%s feat=0x%pNF\n",
printk("%sdev name=%s feat=%pNF\n",
level, dev->name, &dev->features);
if (sk)
printk("%ssk family=%hu type=%u proto=%u\n",


@ -200,6 +200,7 @@ int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk,
r->idiag_state = sk->sk_state;
r->idiag_timer = 0;
r->idiag_retrans = 0;
r->idiag_expires = 0;
if (inet_diag_msg_attrs_fill(sk, skb, r, ext, user_ns, net_admin))
goto errout;
@ -240,20 +241,17 @@ int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk,
r->idiag_timer = 1;
r->idiag_retrans = icsk->icsk_retransmits;
r->idiag_expires =
jiffies_to_msecs(icsk->icsk_timeout - jiffies);
jiffies_delta_to_msecs(icsk->icsk_timeout - jiffies);
} else if (icsk->icsk_pending == ICSK_TIME_PROBE0) {
r->idiag_timer = 4;
r->idiag_retrans = icsk->icsk_probes_out;
r->idiag_expires =
jiffies_to_msecs(icsk->icsk_timeout - jiffies);
jiffies_delta_to_msecs(icsk->icsk_timeout - jiffies);
} else if (timer_pending(&sk->sk_timer)) {
r->idiag_timer = 2;
r->idiag_retrans = icsk->icsk_probes_out;
r->idiag_expires =
jiffies_to_msecs(sk->sk_timer.expires - jiffies);
} else {
r->idiag_timer = 0;
r->idiag_expires = 0;
jiffies_delta_to_msecs(sk->sk_timer.expires - jiffies);
}
if ((ext & (1 << (INET_DIAG_INFO - 1))) && handler->idiag_info_size) {
@ -338,16 +336,13 @@ static int inet_twsk_diag_fill(struct sock *sk,
r = nlmsg_data(nlh);
BUG_ON(tw->tw_state != TCP_TIME_WAIT);
tmo = tw->tw_timer.expires - jiffies;
if (tmo < 0)
tmo = 0;
inet_diag_msg_common_fill(r, sk);
r->idiag_retrans = 0;
r->idiag_state = tw->tw_substate;
r->idiag_timer = 3;
r->idiag_expires = jiffies_to_msecs(tmo);
tmo = tw->tw_timer.expires - jiffies;
r->idiag_expires = jiffies_delta_to_msecs(tmo);
r->idiag_rqueue = 0;
r->idiag_wqueue = 0;
r->idiag_uid = 0;
@ -381,7 +376,7 @@ static int inet_req_diag_fill(struct sock *sk, struct sk_buff *skb,
offsetof(struct sock, sk_cookie));
tmo = inet_reqsk(sk)->rsk_timer.expires - jiffies;
r->idiag_expires = (tmo >= 0) ? jiffies_to_msecs(tmo) : 0;
r->idiag_expires = jiffies_delta_to_msecs(tmo);
r->idiag_rqueue = 0;
r->idiag_wqueue = 0;
r->idiag_uid = 0;


@ -1876,7 +1876,6 @@ static int __net_init sit_init_net(struct net *net)
return 0;
err_reg_dev:
ipip6_dev_free(sitn->fb_tunnel_dev);
free_netdev(sitn->fb_tunnel_dev);
err_alloc_dev:
return err;


@ -9,7 +9,7 @@
* Copyright 2007, Michael Wu <flamingice@sourmilk.net>
* Copyright 2007-2010, Intel Corporation
* Copyright(c) 2015-2017 Intel Deutschland GmbH
* Copyright (C) 2018 Intel Corporation
* Copyright (C) 2018-2021 Intel Corporation
*/
/**
@ -191,7 +191,8 @@ static void ieee80211_add_addbaext(struct ieee80211_sub_if_data *sdata,
sband = ieee80211_get_sband(sdata);
if (!sband)
return;
he_cap = ieee80211_get_he_iftype_cap(sband, sdata->vif.type);
he_cap = ieee80211_get_he_iftype_cap(sband,
ieee80211_vif_type_p2p(&sdata->vif));
if (!he_cap)
return;
@ -292,7 +293,8 @@ void ___ieee80211_start_rx_ba_session(struct sta_info *sta,
goto end;
}
if (!sta->sta.ht_cap.ht_supported) {
if (!sta->sta.ht_cap.ht_supported &&
sta->sdata->vif.bss_conf.chandef.chan->band != NL80211_BAND_6GHZ) {
ht_dbg(sta->sdata,
"STA %pM erroneously requests BA session on tid %d w/o QoS\n",
sta->sta.addr, tid);


@ -9,7 +9,7 @@
* Copyright 2007, Michael Wu <flamingice@sourmilk.net>
* Copyright 2007-2010, Intel Corporation
* Copyright(c) 2015-2017 Intel Deutschland GmbH
* Copyright (C) 2018 - 2019 Intel Corporation
* Copyright (C) 2018 - 2021 Intel Corporation
*/
#include <linux/ieee80211.h>
@ -106,7 +106,7 @@ static void ieee80211_send_addba_request(struct ieee80211_sub_if_data *sdata,
mgmt->u.action.u.addba_req.start_seq_num =
cpu_to_le16(start_seq_num << 4);
ieee80211_tx_skb(sdata, skb);
ieee80211_tx_skb_tid(sdata, skb, tid);
}
void ieee80211_send_bar(struct ieee80211_vif *vif, u8 *ra, u16 tid, u16 ssn)
@ -213,6 +213,8 @@ ieee80211_agg_start_txq(struct sta_info *sta, int tid, bool enable)
struct ieee80211_txq *txq = sta->sta.txq[tid];
struct txq_info *txqi;
lockdep_assert_held(&sta->ampdu_mlme.mtx);
if (!txq)
return;
@ -290,7 +292,6 @@ static void ieee80211_remove_tid_tx(struct sta_info *sta, int tid)
ieee80211_assign_tid_tx(sta, tid, NULL);
ieee80211_agg_splice_finish(sta->sdata, tid);
ieee80211_agg_start_txq(sta, tid, false);
kfree_rcu(tid_tx, rcu_head);
}
@ -448,59 +449,14 @@ static void sta_addba_resp_timer_expired(struct timer_list *t)
ieee80211_stop_tx_ba_session(&sta->sta, tid);
}
void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid)
static void ieee80211_send_addba_with_timeout(struct sta_info *sta,
struct tid_ampdu_tx *tid_tx)
{
struct tid_ampdu_tx *tid_tx;
struct ieee80211_local *local = sta->local;
struct ieee80211_sub_if_data *sdata = sta->sdata;
struct ieee80211_ampdu_params params = {
.sta = &sta->sta,
.action = IEEE80211_AMPDU_TX_START,
.tid = tid,
.buf_size = 0,
.amsdu = false,
.timeout = 0,
};
int ret;
struct ieee80211_local *local = sta->local;
u8 tid = tid_tx->tid;
u16 buf_size;
tid_tx = rcu_dereference_protected_tid_tx(sta, tid);
/*
* Start queuing up packets for this aggregation session.
* We're going to release them once the driver is OK with
* that.
*/
clear_bit(HT_AGG_STATE_WANT_START, &tid_tx->state);
ieee80211_agg_stop_txq(sta, tid);
/*
* Make sure no packets are being processed. This ensures that
* we have a valid starting sequence number and that in-flight
* packets have been flushed out and no packets for this TID
* will go into the driver during the ampdu_action call.
*/
synchronize_net();
params.ssn = sta->tid_seq[tid] >> 4;
ret = drv_ampdu_action(local, sdata, &params);
if (ret) {
ht_dbg(sdata,
"BA request denied - HW unavailable for %pM tid %d\n",
sta->sta.addr, tid);
spin_lock_bh(&sta->lock);
ieee80211_agg_splice_packets(sdata, tid_tx, tid);
ieee80211_assign_tid_tx(sta, tid, NULL);
ieee80211_agg_splice_finish(sdata, tid);
spin_unlock_bh(&sta->lock);
ieee80211_agg_start_txq(sta, tid, false);
kfree_rcu(tid_tx, rcu_head);
return;
}
/* activate the timer for the recipient's addBA response */
mod_timer(&tid_tx->addba_resp_timer, jiffies + ADDBA_RESP_INTERVAL);
ht_dbg(sdata, "activated addBA response timer on %pM tid %d\n",
@ -525,10 +481,66 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid)
/* send AddBA request */
ieee80211_send_addba_request(sdata, sta->sta.addr, tid,
tid_tx->dialog_token, params.ssn,
tid_tx->dialog_token, tid_tx->ssn,
buf_size, tid_tx->timeout);
}
void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid)
{
struct tid_ampdu_tx *tid_tx;
struct ieee80211_local *local = sta->local;
struct ieee80211_sub_if_data *sdata = sta->sdata;
struct ieee80211_ampdu_params params = {
.sta = &sta->sta,
.action = IEEE80211_AMPDU_TX_START,
.tid = tid,
.buf_size = 0,
.amsdu = false,
.timeout = 0,
};
int ret;
tid_tx = rcu_dereference_protected_tid_tx(sta, tid);
/*
* Start queuing up packets for this aggregation session.
* We're going to release them once the driver is OK with
* that.
*/
clear_bit(HT_AGG_STATE_WANT_START, &tid_tx->state);
ieee80211_agg_stop_txq(sta, tid);
/*
* Make sure no packets are being processed. This ensures that
* we have a valid starting sequence number and that in-flight
* packets have been flushed out and no packets for this TID
* will go into the driver during the ampdu_action call.
*/
synchronize_net();
params.ssn = sta->tid_seq[tid] >> 4;
ret = drv_ampdu_action(local, sdata, &params);
tid_tx->ssn = params.ssn;
if (ret) {
ht_dbg(sdata,
"BA request denied - HW unavailable for %pM tid %d\n",
sta->sta.addr, tid);
spin_lock_bh(&sta->lock);
ieee80211_agg_splice_packets(sdata, tid_tx, tid);
ieee80211_assign_tid_tx(sta, tid, NULL);
ieee80211_agg_splice_finish(sdata, tid);
spin_unlock_bh(&sta->lock);
ieee80211_agg_start_txq(sta, tid, false);
kfree_rcu(tid_tx, rcu_head);
return;
}
ieee80211_send_addba_with_timeout(sta, tid_tx);
}
/*
* After accepting the AddBA Response we activated a timer,
* resetting it after each frame that we send.
@ -571,7 +583,8 @@ int ieee80211_start_tx_ba_session(struct ieee80211_sta *pubsta, u16 tid,
"Requested to start BA session on reserved tid=%d", tid))
return -EINVAL;
if (!pubsta->ht_cap.ht_supported)
if (!pubsta->ht_cap.ht_supported &&
sta->sdata->vif.bss_conf.chandef.chan->band != NL80211_BAND_6GHZ)
return -EINVAL;
if (WARN_ON_ONCE(!local->ops->ampdu_action))
@ -860,6 +873,7 @@ void ieee80211_stop_tx_ba_cb(struct sta_info *sta, int tid,
{
struct ieee80211_sub_if_data *sdata = sta->sdata;
bool send_delba = false;
bool start_txq = false;
ht_dbg(sdata, "Stopping Tx BA session for %pM tid %d\n",
sta->sta.addr, tid);
@ -877,10 +891,14 @@ void ieee80211_stop_tx_ba_cb(struct sta_info *sta, int tid,
send_delba = true;
ieee80211_remove_tid_tx(sta, tid);
start_txq = true;
unlock_sta:
spin_unlock_bh(&sta->lock);
if (start_txq)
ieee80211_agg_start_txq(sta, tid, false);
if (send_delba)
ieee80211_send_delba(sdata, sta->sta.addr, tid,
WLAN_BACK_INITIATOR, WLAN_REASON_QSTA_NOT_USE);


@ -1202,8 +1202,11 @@ static inline void drv_wake_tx_queue(struct ieee80211_local *local,
{
struct ieee80211_sub_if_data *sdata = vif_to_sdata(txq->txq.vif);
if (local->in_reconfig)
/* In reconfig don't transmit now, but mark for waking later */
if (local->in_reconfig) {
set_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txq->flags);
return;
}
if (!check_sdata_in_driver(sdata))
return;
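
During reconfig the queue is not woken directly; a flag is set and the wake is replayed once reconfiguration completes. A rough self-contained analogy of that defer-and-replay pattern (illustrative names, not the mac80211 API):

#include <stdbool.h>
#include <stdio.h>

struct txq { bool stop_netif_tx; };	/* ~ IEEE80211_TXQ_STOP_NETIF_TX */

static bool in_reconfig;

static void wake_tx_queue(struct txq *q)
{
	if (in_reconfig) {
		q->stop_netif_tx = true;	/* mark for waking later */
		return;
	}
	printf("driver wake\n");
}

static void reconfig_complete(struct txq *q)
{
	in_reconfig = false;
	if (q->stop_netif_tx) {
		q->stop_netif_tx = false;
		wake_tx_queue(q);	/* replay the deferred wake */
	}
}

int main(void)
{
	struct txq q = { false };

	in_reconfig = true;
	wake_tx_queue(&q);	/* deferred, nothing reaches the driver */
	reconfig_complete(&q);	/* now the driver is actually woken */
	return 0;
}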


@@ -2421,11 +2421,18 @@ static void ieee80211_sta_tx_wmm_ac_notify(struct ieee80211_sub_if_data *sdata,
u16 tx_time)
{
struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
u16 tid = ieee80211_get_tid(hdr);
int ac = ieee80211_ac_from_tid(tid);
struct ieee80211_sta_tx_tspec *tx_tspec = &ifmgd->tx_tspec[ac];
u16 tid;
int ac;
struct ieee80211_sta_tx_tspec *tx_tspec;
unsigned long now = jiffies;
if (!ieee80211_is_data_qos(hdr->frame_control))
return;
tid = ieee80211_get_tid(hdr);
ac = ieee80211_ac_from_tid(tid);
tx_tspec = &ifmgd->tx_tspec[ac];
if (likely(!tx_tspec->admitted_time))
return;
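
ieee80211_get_tid() reads the QoS control field, which only QoS data frames carry, so the lookup now happens after the ieee80211_is_data_qos() check. The ordering the fix enforces, as a hedged sketch with toy types (the AC mapping below is illustrative, not the 802.11 table):

#include <stdbool.h>
#include <stdio.h>

struct hdr { bool qos_data; unsigned char tid; };

static void account_tx_time(const struct hdr *h, unsigned int tx_time)
{
	unsigned int tid, ac;

	if (!h->qos_data)
		return;		/* no QoS control field, nothing to charge */

	tid = h->tid & 0x7;	/* derive the TID only for QoS data frames */
	ac = tid / 2;		/* toy TID-to-AC mapping */
	printf("tid %u (ac %u) charged %u usec\n", tid, ac, tx_time);
}

int main(void)
{
	struct hdr mgmt = { .qos_data = false, .tid = 0 };
	struct hdr data = { .qos_data = true, .tid = 5 };

	account_tx_time(&mgmt, 100);	/* ignored: not QoS data */
	account_tx_time(&data, 100);
	return 0;
}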


@@ -180,6 +180,7 @@ struct tid_ampdu_tx {
u8 stop_initiator;
bool tx_stop;
u16 buf_size;
u16 ssn;
u16 failed_bar_ssn;
bool bar_pending;


@@ -1227,6 +1227,8 @@ _ieee802_11_parse_elems_crc(const u8 *start, size_t len, bool action,
elems->max_idle_period_ie = (void *)pos;
break;
case WLAN_EID_EXTENSION:
if (!elen)
break;
if (pos[0] == WLAN_EID_EXT_HE_MU_EDCA &&
elen >= (sizeof(*elems->mu_edca_param_set) + 1)) {
elems->mu_edca_param_set = (void *)&pos[1];
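
A zero-length extension element has no extension ID byte, so elen must be checked before pos[0] is read. The same guard in a self-contained TLV walk (a sketch, not the mac80211 parser):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define EID_EXTENSION 255

static void parse_elems(const uint8_t *buf, size_t len)
{
	while (len >= 2) {
		uint8_t id = buf[0], elen = buf[1];
		const uint8_t *pos = buf + 2;

		if (elen > len - 2)
			break;			/* truncated element */
		if (id == EID_EXTENSION) {
			if (!elen)		/* no room for the ext ID */
				goto next;
			printf("ext id %u, %u payload bytes\n",
			       pos[0], elen - 1);
		}
next:
		buf += 2 + elen;
		len -= 2 + elen;
	}
}

int main(void)
{
	const uint8_t ies[] = { EID_EXTENSION, 0,	/* empty: skipped */
				EID_EXTENSION, 2, 38, 7 };

	parse_elems(ies, sizeof(ies));
	return 0;
}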


@@ -4453,9 +4453,10 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
}
out_free_pg_vec:
bitmap_free(rx_owner_map);
if (pg_vec)
if (pg_vec) {
bitmap_free(rx_owner_map);
free_pg_vec(pg_vec, order, req->tp_block_nr);
}
out:
return err;
}
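
rx_owner_map is swapped in and out together with pg_vec, so it may only be freed on the path that also frees pg_vec; freeing it unconditionally could free a map the ring still uses. The ownership rule in miniature (hedged userspace sketch, illustrative names):

#include <stdlib.h>

struct ring { void *pg_vec; unsigned long *owner_map; };

static int set_ring(struct ring *r, int fail)
{
	void *pg_vec = NULL;
	unsigned long *owner_map = NULL;
	int err = -1;

	pg_vec = malloc(64);
	if (!pg_vec)
		goto out_free;
	owner_map = calloc(1, sizeof(*owner_map));
	if (!owner_map || fail)
		goto out_free;

	/* commit: hand both to the ring, take back the old pair */
	void *old_vec = r->pg_vec;
	unsigned long *old_map = r->owner_map;
	r->pg_vec = pg_vec;
	r->owner_map = owner_map;
	pg_vec = old_vec;
	owner_map = old_map;
	err = 0;

out_free:
	if (pg_vec) {		/* owner_map is freed only with pg_vec */
		free(owner_map);
		free(pg_vec);
	}
	return err;
}

int main(void)
{
	struct ring r = { 0 };

	set_ring(&r, 0);	/* success: the ring now owns both */
	set_ring(&r, 1);	/* failure: frees only the local pair */
	free(r.owner_map);
	free(r.pg_vec);
	return 0;
}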


@@ -253,6 +253,7 @@ static struct rds_connection *__rds_conn_create(struct net *net,
* should end up here, but if it
* does, reset/destroy the connection.
*/
kfree(conn->c_path);
kmem_cache_free(rds_conn_slab, conn);
conn = ERR_PTR(-EOPNOTSUPP);
goto out;
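
The leak fix frees the c_path member allocated earlier on the same construction path before the containing object goes back to its cache. The shape of that error path, reduced to a sketch:

#include <stdio.h>
#include <stdlib.h>

struct conn { void *c_path; };

static struct conn *conn_create(int fail)
{
	struct conn *c = malloc(sizeof(*c));

	if (!c)
		return NULL;
	c->c_path = calloc(1, 128);
	if (!c->c_path)
		goto err_conn;
	if (fail)		/* e.g. a racing creation wins */
		goto err_path;
	return c;

err_path:
	free(c->c_path);	/* the fix: release the member too */
err_conn:
	free(c);
	return NULL;
}

int main(void)
{
	struct conn *c = conn_create(0);

	if (c) {
		free(c->c_path);
		free(c);
	}
	if (!conn_create(1))
		puts("error path cleaned up without leaking c_path");
	return 0;
}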


@@ -265,14 +265,12 @@ tcf_sample_get_group(const struct tc_action *a,
struct tcf_sample *s = to_sample(a);
struct psample_group *group;
spin_lock_bh(&s->tcf_lock);
group = rcu_dereference_protected(s->psample_group,
lockdep_is_held(&s->tcf_lock));
if (group) {
psample_group_take(group);
*destructor = tcf_psample_group_put;
}
spin_unlock_bh(&s->tcf_lock);
return group;
}
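
Taking the psample group reference while tcf_lock is held closes the window in which the pointer could be cleared and the group freed between lookup and refcount bump. In outline (hedged pthread sketch):

#include <pthread.h>
#include <stdio.h>

struct group { int refcnt; };

static pthread_mutex_t tcf_lock = PTHREAD_MUTEX_INITIALIZER;
static struct group the_group = { .refcnt = 1 };
static struct group *psample_group = &the_group;

static struct group *sample_get_group(void)
{
	struct group *g;

	pthread_mutex_lock(&tcf_lock);
	g = psample_group;	/* pointer is only stable under the lock */
	if (g)
		g->refcnt++;	/* pin it before the lock is dropped */
	pthread_mutex_unlock(&tcf_lock);

	return g;
}

int main(void)
{
	struct group *g = sample_get_group();

	printf("refcnt now %d\n", g ? g->refcnt : 0);
	return 0;
}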


@@ -3436,7 +3436,7 @@ static void tcf_sample_get_group(struct flow_action_entry *entry,
int tc_setup_flow_action(struct flow_action *flow_action,
const struct tcf_exts *exts, bool rtnl_held)
{
const struct tc_action *act;
struct tc_action *act;
int i, j, k, err = 0;
if (!exts)
@@ -3450,6 +3450,7 @@ int tc_setup_flow_action(struct flow_action *flow_action,
struct flow_action_entry *entry;
entry = &flow_action->entries[j];
spin_lock_bh(&act->tcfa_lock);
if (is_tcf_gact_ok(act)) {
entry->id = FLOW_ACTION_ACCEPT;
} else if (is_tcf_gact_shot(act)) {
@@ -3490,13 +3491,13 @@ int tc_setup_flow_action(struct flow_action *flow_action,
break;
default:
err = -EOPNOTSUPP;
goto err_out;
goto err_out_locked;
}
} else if (is_tcf_tunnel_set(act)) {
entry->id = FLOW_ACTION_TUNNEL_ENCAP;
err = tcf_tunnel_encap_get_tunnel(entry, act);
if (err)
goto err_out;
goto err_out_locked;
} else if (is_tcf_tunnel_release(act)) {
entry->id = FLOW_ACTION_TUNNEL_DECAP;
} else if (is_tcf_pedit(act)) {
@@ -3510,7 +3511,7 @@ int tc_setup_flow_action(struct flow_action *flow_action,
break;
default:
err = -EOPNOTSUPP;
goto err_out;
goto err_out_locked;
}
entry->mangle.htype = tcf_pedit_htype(act, k);
entry->mangle.mask = tcf_pedit_mask(act, k);
@@ -3561,15 +3562,17 @@ int tc_setup_flow_action(struct flow_action *flow_action,
entry->mpls_mangle.ttl = tcf_mpls_ttl(act);
break;
default:
goto err_out;
err = -EOPNOTSUPP;
goto err_out_locked;
}
} else if (is_tcf_skbedit_ptype(act)) {
entry->id = FLOW_ACTION_PTYPE;
entry->ptype = tcf_skbedit_ptype(act);
} else {
err = -EOPNOTSUPP;
goto err_out;
goto err_out_locked;
}
spin_unlock_bh(&act->tcfa_lock);
if (!is_tcf_pedit(act))
j++;
@@ -3583,6 +3586,9 @@ int tc_setup_flow_action(struct flow_action *flow_action,
tc_cleanup_flow_action(flow_action);
return err;
err_out_locked:
spin_unlock_bh(&act->tcfa_lock);
goto err_out;
}
EXPORT_SYMBOL(tc_setup_flow_action);
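
With the action now translated under act->tcfa_lock, every error exit inside the loop has to drop that lock first, which the new err_out_locked label centralizes. The idiom in standalone form (a sketch, not the tc code):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t tcfa_lock = PTHREAD_MUTEX_INITIALIZER;

static int translate_action(int supported)
{
	int err = 0;

	pthread_mutex_lock(&tcfa_lock);
	if (!supported) {
		err = -95;		/* EOPNOTSUPP */
		goto err_out_locked;	/* never leave holding the lock */
	}
	pthread_mutex_unlock(&tcfa_lock);
	return 0;

err_out_locked:
	pthread_mutex_unlock(&tcfa_lock);
	/* the shared cleanup path would run here */
	return err;
}

int main(void)
{
	printf("supported=%d unsupported=%d\n",
	       translate_action(1), translate_action(0));
	return 0;
}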


@@ -2724,7 +2724,7 @@ static int cake_init(struct Qdisc *sch, struct nlattr *opt,
q->tins = kvcalloc(CAKE_MAX_TINS, sizeof(struct cake_tin_data),
GFP_KERNEL);
if (!q->tins)
goto nomem;
return -ENOMEM;
for (i = 0; i < CAKE_MAX_TINS; i++) {
struct cake_tin_data *b = q->tins + i;
@@ -2754,10 +2754,6 @@ static int cake_init(struct Qdisc *sch, struct nlattr *opt,
q->min_netlen = ~0;
q->min_adjlen = ~0;
return 0;
nomem:
cake_destroy(sch);
return -ENOMEM;
}
static int cake_dump(struct Qdisc *sch, struct sk_buff *skb)
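
A qdisc ->init() must not call ->destroy() on failure: the qdisc core invokes ->destroy() itself when ->init() returns an error, so cleaning up from both places tears the structure down twice. The contract, reduced to a sketch with illustrative names:

#include <stdlib.h>

struct qdisc { void *tins; };

/* the framework calls this exactly once, even after a failed init */
static void q_destroy(struct qdisc *q)
{
	free(q->tins);
	q->tins = NULL;
}

static int q_init(struct qdisc *q)
{
	q->tins = calloc(8, 64);
	if (!q->tins)
		return -12;	/* ENOMEM: report only, never self-destroy */
	return 0;
}

int main(void)
{
	struct qdisc q = { 0 };
	int err = q_init(&q);

	q_destroy(&q);		/* runs once, whether init failed or not */
	return err ? 1 : 0;
}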


@@ -183,7 +183,9 @@ static int smc_release(struct socket *sock)
/* cleanup for a dangling non-blocking connect */
if (smc->connect_nonblock && sk->sk_state == SMC_INIT)
tcp_abort(smc->clcsock->sk, ECONNABORTED);
flush_work(&smc->connect_work);
if (cancel_work_sync(&smc->connect_work))
sock_put(&smc->sk); /* sock_hold in smc_connect for passive closing */
if (sk->sk_state == SMC_LISTEN)
/* smc_close_non_accepted() is called and acquires
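
flush_work() waits for a running handler but leaves a still-queued item pending, which can stall smc_release() for the length of the connect timeout; cancel_work_sync() removes it, and when it does, the caller owns the reference the work would have dropped. The refcount half of that, as a toy model (not the smc code):

#include <stdbool.h>
#include <stdio.h>

struct sock { int refcnt; bool work_pending; };

/* stand-in for cancel_work_sync(): true if the item was still queued */
static bool cancel_work_sync_sim(struct sock *s)
{
	bool was_pending = s->work_pending;

	s->work_pending = false;
	return was_pending;
}

static void release(struct sock *s)
{
	if (cancel_work_sync_sim(s))
		s->refcnt--;	/* drop the ref the cancelled work held */
	printf("refcnt now %d\n", s->refcnt);
}

int main(void)
{
	struct sock s = { .refcnt = 2, .work_pending = true };

	release(&s);	/* the work never ran, so its reference is put here */
	return 0;
}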


@@ -252,7 +252,7 @@ if ($arch eq "x86_64") {
} elsif ($arch eq "s390" && $bits == 64) {
if ($cc =~ /-DCC_USING_HOTPATCH/) {
$mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*c0 04 00 00 00 00\\s*brcl\\s*0,[0-9a-f]+ <([^\+]*)>\$";
$mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*c0 04 00 00 00 00\\s*(bcrl\\s*0,|jgnop\\s*)[0-9a-f]+ <([^\+]*)>\$";
$mcount_adjust = 0;
} else {
$mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*R_390_(PC|PLT)32DBL\\s+_mcount\\+0x2\$";
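
Newer binutils disassembles the s390 hotpatch nop as "jgnop" rather than "bcrl 0,", so the matcher accepts either spelling. The same alternation exercised with POSIX regex in C (a sketch; the real Perl regex also anchors on the raw instruction bytes and captures the symbol name):

#include <regex.h>
#include <stdio.h>

int main(void)
{
	/* either disassembly of the "c0 04 00 00 00 00" hotpatch nop */
	const char *lines[] = { "bcrl 0,3f0 <foo>", "jgnop 3f0 <foo>" };
	regex_t re;

	if (regcomp(&re, "^(bcrl[ \t]*0,|jgnop[ \t]*)[0-9a-f]+ <",
		    REG_EXTENDED))
		return 1;
	for (int i = 0; i < 2; i++)
		printf("%-18s %s\n", lines[i],
		       regexec(&re, lines[i], 0, NULL, 0) ? "no match"
							  : "match");
	regfree(&re);
	return 0;
}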


@@ -12,6 +12,7 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>
#include "test_util.h"
@@ -43,10 +44,39 @@ int main(int argc, char *argv[])
{
int kvm_max_vcpu_id = kvm_check_cap(KVM_CAP_MAX_VCPU_ID);
int kvm_max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS);
/*
* Number of file descriptors required: KVM_CAP_MAX_VCPUS for vCPU fds +
* an arbitrary number for everything else.
*/
int nr_fds_wanted = kvm_max_vcpus + 100;
struct rlimit rl;
printf("KVM_CAP_MAX_VCPU_ID: %d\n", kvm_max_vcpu_id);
printf("KVM_CAP_MAX_VCPUS: %d\n", kvm_max_vcpus);
/*
* Check that we're allowed to open nr_fds_wanted file descriptors and
* try raising the limits if needed.
*/
TEST_ASSERT(!getrlimit(RLIMIT_NOFILE, &rl), "getrlimit() failed!");
if (rl.rlim_cur < nr_fds_wanted) {
rl.rlim_cur = nr_fds_wanted;
if (rl.rlim_max < nr_fds_wanted) {
int old_rlim_max = rl.rlim_max;
rl.rlim_max = nr_fds_wanted;
int r = setrlimit(RLIMIT_NOFILE, &rl);
if (r < 0) {
printf("RLIMIT_NOFILE hard limit is too low (%d, wanted %d)\n",
old_rlim_max, nr_fds_wanted);
exit(KSFT_SKIP);
}
} else {
TEST_ASSERT(!setrlimit(RLIMIT_NOFILE, &rl), "setrlimit() failed!");
}
}
/*
* Upstream KVM prior to 4.8 does not support KVM_CAP_MAX_VCPU_ID.
* Userspace is supposed to use KVM_CAP_MAX_VCPUS as the maximum ID


@@ -1491,8 +1491,9 @@ ipv4_addr_bind_vrf()
for a in ${NSA_IP} ${VRF_IP}
do
log_start
show_hint "Socket not bound to VRF, but address is in VRF"
run_cmd nettest -s -R -P icmp -l ${a} -b
log_test_addr ${a} $? 0 "Raw socket bind to local address"
log_test_addr ${a} $? 1 "Raw socket bind to local address"
log_start
run_cmd nettest -s -R -P icmp -l ${a} -d ${NSA_DEV} -b
@@ -1884,7 +1885,7 @@ ipv6_ping_vrf()
log_start
show_hint "Fails since VRF device does not support linklocal or multicast"
run_cmd ${ping6} -c1 -w1 ${a}
log_test_addr ${a} $? 2 "ping out, VRF bind"
log_test_addr ${a} $? 1 "ping out, VRF bind"
done
for a in ${NSB_IP6} ${NSB_LO_IP6} ${NSB_LINKIP6}%${NSA_DEV} ${MCAST}%${NSA_DEV}
@@ -2890,11 +2891,14 @@ ipv6_addr_bind_novrf()
run_cmd nettest -6 -s -l ${a} -d ${NSA_DEV} -t1 -b
log_test_addr ${a} $? 0 "TCP socket bind to local address after device bind"
# Sadly, the kernel allows binding a socket to a device and then
# binding to an address not on the device. So this test passes
# when it really should not
a=${NSA_LO_IP6}
log_start
show_hint "Should fail with 'Cannot assign requested address'"
run_cmd nettest -6 -s -l ${a} -d ${NSA_DEV} -t1 -b
log_test_addr ${a} $? 1 "TCP socket bind to out of scope local address"
show_hint "Technically should fail since address is not on device but kernel allows"
run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b
log_test_addr ${a} $? 0 "TCP socket bind to out of scope local address"
}
ipv6_addr_bind_vrf()
@@ -2935,10 +2939,15 @@ ipv6_addr_bind_vrf()
run_cmd nettest -6 -s -l ${a} -d ${NSA_DEV} -t1 -b
log_test_addr ${a} $? 0 "TCP socket bind to local address with device bind"
# Sadly, the kernel allows binding a socket to a device and then
# binding to an address not on the device. The only restriction
# is that the address is valid in the L3 domain. So this test
# passes when it really should not
a=${VRF_IP6}
log_start
run_cmd nettest -6 -s -l ${a} -d ${NSA_DEV} -t1 -b
log_test_addr ${a} $? 1 "TCP socket bind to VRF address with device bind"
show_hint "Technically should fail since address is not on device but kernel allows"
run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b
log_test_addr ${a} $? 0 "TCP socket bind to VRF address with device bind"
a=${NSA_LO_IP6}
log_start


@@ -13,6 +13,8 @@ NETIFS[p5]=veth4
NETIFS[p6]=veth5
NETIFS[p7]=veth6
NETIFS[p8]=veth7
NETIFS[p9]=veth8
NETIFS[p10]=veth9
##############################################################################
# Defines