This is the 5.4.112 stable release

-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmB2je0ACgkQONu9yGCS
 aT5LSQ//RbX6sC5N9hmM6XdixRqDXF0YZG6ADrZ24tEIUAvjXZa9rOFGlKyS2JAV
 6KkqRfkrYK2lhyP0lGSkmWPQGoyocxV/6jLcA4XyTqetzxYRkYyW1jiEz7KCTp0+
 AMwqazbMAlaTOTxbNk0TqTsLDrSAE1a5mX9XjPCqjFm1yVjc7gNxxXwKhX01u4LD
 bTw+vMaMtf9MW8sfV1vU9HOcH0BFwp9Sr0/AFb05u8F4BH9MS0XGa6c2bG1o1qQM
 bF7g1aZIcVgn0Jr8WrpsF/7tTUyy3l+XXBvyFNRYvqAnrdUrTDn2ItAPq3W5hqTu
 Y0fdcbAtmmnrHcDeGUD+kuaCTvQGSy+qgZAFvQRkzCmweyY+rvqLEJhO7sBpjqCv
 MszRkYvA0Ji4JaWUWxVlHbmbdIBQ8Jvo9ZMM7shAKq66a26De1W5CIJXTnZXJSij
 dALJowoEKJ2i7V63AoJSzEOlBDYoBUY8xbVzDEjdfBTbj2Gb+cVWRRTsGDKZeuqs
 933fPTRMBOc2q36q6PVpUcpaRLktAFvc33FYdSK8M3/aN22ISQ1QbXqm47sXyQbk
 pHUqRFUJdvjVtQltYIiBQ/GgKY3+TQw9FtRjoSCuZuEeYjE8p004Wq/rWWIv+5mm
 jwY5gfsXKjQcP/Pcxl15kcmNQ4axkC/Jzln99xFScatXV6Ksqh0=
 =sCGS
 -----END PGP SIGNATURE-----

Merge 5.4.112 into android11-5.4-lts

Changes in 5.4.112
	counter: stm32-timer-cnt: fix ceiling miss-alignment with reload register
	ALSA: aloop: Fix initialization of controls
	ALSA: hda/realtek: Fix speaker amp setup on Acer Aspire E1
	ASoC: intel: atom: Stop advertising non working S24LE support
	nfc: fix refcount leak in llcp_sock_bind()
	nfc: fix refcount leak in llcp_sock_connect()
	nfc: fix memory leak in llcp_sock_connect()
	nfc: Avoid endless loops caused by repeated llcp_sock_connect()
	xen/evtchn: Change irq_info lock to raw_spinlock_t
	net: ipv6: check for validity before dereferencing cfg->fc_nlinfo.nlh
	net: dsa: lantiq_gswip: Let GSWIP automatically set the xMII clock
	drm/i915: Fix invalid access to ACPI _DSM objects
	gcov: re-fix clang-11+ support
	ia64: fix user_stack_pointer() for ptrace()
	nds32: flush_dcache_page: use page_mapping_file to avoid races with swapoff
	ocfs2: fix deadlock between setattr and dio_end_io_write
	fs: direct-io: fix missing sdio->boundary
	parisc: parisc-agp requires SBA IOMMU driver
	parisc: avoid a warning on u8 cast for cmpxchg on u8 pointers
	ARM: dts: turris-omnia: configure LED[2]/INTn pin as interrupt pin
	batman-adv: initialize "struct batadv_tvlv_tt_vlan_data"->reserved field
	ice: Increase control queue timeout
	ice: Fix for dereference of NULL pointer
	ice: Cleanup fltr list in case of allocation issues
	net: hso: fix null-ptr-deref during tty device unregistration
	ethernet/netronome/nfp: Fix a use after free in nfp_bpf_ctrl_msg_rx
	bpf, sockmap: Fix sk->prot unhash op reset
	net: ensure mac header is set in virtio_net_hdr_to_skb()
	i40e: Fix sparse warning: missing error code 'err'
	i40e: Fix sparse error: 'vsi->netdev' could be null
	net: sched: sch_teql: fix null-pointer dereference
	mac80211: fix TXQ AC confusion
	net: hsr: Reset MAC header for Tx path
	net-ipv6: bugfix - raw & sctp - switch to ipv6_can_nonlocal_bind()
	net: let skb_orphan_partial wake-up waiters.
	usbip: add sysfs_lock to synchronize sysfs code paths
	usbip: stub-dev synchronize sysfs code paths
	usbip: vudc synchronize sysfs code paths
	usbip: synchronize event handler with sysfs code paths
	i2c: turn recovery error on init to debug
	virtio_net: Add XDP meta data support
	net: dsa: lantiq_gswip: Don't use PHY auto polling
	net: dsa: lantiq_gswip: Configure all remaining GSWIP_MII_CFG bits
	xfrm: interface: fix ipv4 pmtu check to honor ip header df
	regulator: bd9571mwv: Fix AVS and DVFS voltage range
	net: xfrm: Localize sequence counter per network namespace
	esp: delete NETIF_F_SCTP_CRC bit from features for esp offload
	ASoC: SOF: Intel: hda: remove unnecessary parentheses
	ASoC: SOF: Intel: HDA: fix core status verification
	ASoC: wm8960: Fix wrong bclk and lrclk with pll enabled for some chips
	xfrm: Fix NULL pointer dereference on policy lookup
	i40e: Added Asym_Pause to supported link modes
	i40e: Fix kernel oops when i40e driver removes VF's
	hostfs: Use kasprintf() instead of fixed buffer formatting
	hostfs: fix memory handling in follow_link()
	amd-xgbe: Update DMA coherency values
	sch_red: fix off-by-one checks in red_check_params()
	arm64: dts: imx8mm/q: Fix pad control of SD1_DATA0
	can: bcm/raw: fix msg_namelen values depending on CAN_REQUIRED_SIZE
	gianfar: Handle error code at MAC address change
	cxgb4: avoid collecting SGE_QBASE regs during traffic
	net:tipc: Fix a double free in tipc_sk_mcast_rcv
	ARM: dts: imx6: pbab01: Set vmmc supply for both SD interfaces
	net/ncsi: Avoid channel_monitor hrtimer deadlock
	nfp: flower: ignore duplicate merge hints from FW
	net: phy: broadcom: Only advertise EEE for supported modes
	ASoC: sunxi: sun4i-codec: fill ASoC card owner
	net/mlx5e: Fix ethtool indication of connector type
	net/mlx5: Don't request more than supported EQs
	net/rds: Fix a use after free in rds_message_map_pages
	soc/fsl: qbman: fix conflicting alignment attributes
	i40e: Fix display statistics for veb_tc
	drm/msm: Set drvdata to NULL when msm_drm_init() fails
	net: udp: Add support for getsockopt(..., ..., UDP_GRO, ..., ...);
	scsi: ufs: Fix irq return code
	scsi: ufs: Avoid busy-waiting by eliminating tag conflicts
	scsi: ufs: Use blk_{get,put}_request() to allocate and free TMFs
	scsi: ufs: core: Fix task management request completion timeout
	scsi: ufs: core: Fix wrong Task Tag used in task management request UPIUs
	net: macb: restore cmp registers on resume path
	clk: fix invalid usage of list cursor in register
	clk: fix invalid usage of list cursor in unregister
	workqueue: Move the position of debug_work_activate() in __queue_work()
	s390/cpcmd: fix inline assembly register clobbering
	perf inject: Fix repipe usage
	net: openvswitch: conntrack: simplify the return expression of ovs_ct_limit_get_default_limit()
	openvswitch: fix send of uninitialized stack memory in ct limit reply
	net: hns3: clear VF down state bit before request link status
	net/mlx5: Fix placement of log_max_flow_counter
	net/mlx5: Fix PBMC register mapping
	RDMA/cxgb4: check for ipv6 address properly while destroying listener
	RDMA/addr: Be strict with gid size
	RAS/CEC: Correct ce_add_elem()'s returned values
	clk: socfpga: fix iomem pointer cast on 64-bit
	dt-bindings: net: ethernet-controller: fix typo in NVMEM
	net: sched: bump refcount for new action in ACT replace mode
	cfg80211: remove WARN_ON() in cfg80211_sme_connect
	net: tun: set tun->dev->addr_len during TUNSETLINK processing
	drivers: net: fix memory leak in atusb_probe
	drivers: net: fix memory leak in peak_usb_create_dev
	net: mac802154: Fix general protection fault
	net: ieee802154: nl-mac: fix check on panid
	net: ieee802154: fix nl802154 del llsec key
	net: ieee802154: fix nl802154 del llsec dev
	net: ieee802154: fix nl802154 add llsec key
	net: ieee802154: fix nl802154 del llsec devkey
	net: ieee802154: forbid monitor for set llsec params
	net: ieee802154: forbid monitor for del llsec seclevel
	net: ieee802154: stop dump llsec params for monitors
	Revert "cifs: Set CIFS_MOUNT_USE_PREFIX_PATH flag on setting cifs_sb->prepath."
	Linux 5.4.112

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I6849a183d86323395041645f332c33bd4f3a7e8c

commit f6865f9c47
Greg Kroah-Hartman, 2021-04-14 12:07:53 +02:00
103 changed files with 854 additions and 321 deletions

@ -51,7 +51,7 @@ properties:
description: description:
Reference to an nvmem node for the MAC address Reference to an nvmem node for the MAC address
nvmem-cells-names: nvmem-cell-names:
const: mac-address const: mac-address
phy-connection-type: phy-connection-type:

View File

@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0 # SPDX-License-Identifier: GPL-2.0
VERSION = 5 VERSION = 5
PATCHLEVEL = 4 PATCHLEVEL = 4
SUBLEVEL = 111 SUBLEVEL = 112
EXTRAVERSION = EXTRAVERSION =
NAME = Kleptomaniac Octopus NAME = Kleptomaniac Octopus

View File

@ -236,6 +236,7 @@
status = "okay"; status = "okay";
compatible = "ethernet-phy-id0141.0DD1", "ethernet-phy-ieee802.3-c22"; compatible = "ethernet-phy-id0141.0DD1", "ethernet-phy-ieee802.3-c22";
reg = <1>; reg = <1>;
marvell,reg-init = <3 18 0 0x4985>;
/* irq is connected to &pcawan pin 7 */ /* irq is connected to &pcawan pin 7 */
}; };

View File

@ -432,6 +432,7 @@
pinctrl-0 = <&pinctrl_usdhc2>; pinctrl-0 = <&pinctrl_usdhc2>;
cd-gpios = <&gpio1 4 GPIO_ACTIVE_LOW>; cd-gpios = <&gpio1 4 GPIO_ACTIVE_LOW>;
wp-gpios = <&gpio1 2 GPIO_ACTIVE_HIGH>; wp-gpios = <&gpio1 2 GPIO_ACTIVE_HIGH>;
vmmc-supply = <&vdd_sd1_reg>;
status = "disabled"; status = "disabled";
}; };
@ -441,5 +442,6 @@
&pinctrl_usdhc3_cdwp>; &pinctrl_usdhc3_cdwp>;
cd-gpios = <&gpio1 27 GPIO_ACTIVE_LOW>; cd-gpios = <&gpio1 27 GPIO_ACTIVE_LOW>;
wp-gpios = <&gpio1 29 GPIO_ACTIVE_HIGH>; wp-gpios = <&gpio1 29 GPIO_ACTIVE_HIGH>;
vmmc-supply = <&vdd_sd0_reg>;
status = "disabled"; status = "disabled";
}; };

View File

@ -124,7 +124,7 @@
#define MX8MM_IOMUXC_SD1_CMD_USDHC1_CMD 0x0A4 0x30C 0x000 0x0 0x0 #define MX8MM_IOMUXC_SD1_CMD_USDHC1_CMD 0x0A4 0x30C 0x000 0x0 0x0
#define MX8MM_IOMUXC_SD1_CMD_GPIO2_IO1 0x0A4 0x30C 0x000 0x5 0x0 #define MX8MM_IOMUXC_SD1_CMD_GPIO2_IO1 0x0A4 0x30C 0x000 0x5 0x0
#define MX8MM_IOMUXC_SD1_DATA0_USDHC1_DATA0 0x0A8 0x310 0x000 0x0 0x0 #define MX8MM_IOMUXC_SD1_DATA0_USDHC1_DATA0 0x0A8 0x310 0x000 0x0 0x0
#define MX8MM_IOMUXC_SD1_DATA0_GPIO2_IO2 0x0A8 0x31 0x000 0x5 0x0 #define MX8MM_IOMUXC_SD1_DATA0_GPIO2_IO2 0x0A8 0x310 0x000 0x5 0x0
#define MX8MM_IOMUXC_SD1_DATA1_USDHC1_DATA1 0x0AC 0x314 0x000 0x0 0x0 #define MX8MM_IOMUXC_SD1_DATA1_USDHC1_DATA1 0x0AC 0x314 0x000 0x0 0x0
#define MX8MM_IOMUXC_SD1_DATA1_GPIO2_IO3 0x0AC 0x314 0x000 0x5 0x0 #define MX8MM_IOMUXC_SD1_DATA1_GPIO2_IO3 0x0AC 0x314 0x000 0x5 0x0
#define MX8MM_IOMUXC_SD1_DATA2_USDHC1_DATA2 0x0B0 0x318 0x000 0x0 0x0 #define MX8MM_IOMUXC_SD1_DATA2_USDHC1_DATA2 0x0B0 0x318 0x000 0x0 0x0

View File

@ -130,7 +130,7 @@
#define MX8MQ_IOMUXC_SD1_CMD_USDHC1_CMD 0x0A4 0x30C 0x000 0x0 0x0 #define MX8MQ_IOMUXC_SD1_CMD_USDHC1_CMD 0x0A4 0x30C 0x000 0x0 0x0
#define MX8MQ_IOMUXC_SD1_CMD_GPIO2_IO1 0x0A4 0x30C 0x000 0x5 0x0 #define MX8MQ_IOMUXC_SD1_CMD_GPIO2_IO1 0x0A4 0x30C 0x000 0x5 0x0
#define MX8MQ_IOMUXC_SD1_DATA0_USDHC1_DATA0 0x0A8 0x310 0x000 0x0 0x0 #define MX8MQ_IOMUXC_SD1_DATA0_USDHC1_DATA0 0x0A8 0x310 0x000 0x0 0x0
#define MX8MQ_IOMUXC_SD1_DATA0_GPIO2_IO2 0x0A8 0x31 0x000 0x5 0x0 #define MX8MQ_IOMUXC_SD1_DATA0_GPIO2_IO2 0x0A8 0x310 0x000 0x5 0x0
#define MX8MQ_IOMUXC_SD1_DATA1_USDHC1_DATA1 0x0AC 0x314 0x000 0x0 0x0 #define MX8MQ_IOMUXC_SD1_DATA1_USDHC1_DATA1 0x0AC 0x314 0x000 0x0 0x0
#define MX8MQ_IOMUXC_SD1_DATA1_GPIO2_IO3 0x0AC 0x314 0x000 0x5 0x0 #define MX8MQ_IOMUXC_SD1_DATA1_GPIO2_IO3 0x0AC 0x314 0x000 0x5 0x0
#define MX8MQ_IOMUXC_SD1_DATA2_USDHC1_DATA2 0x0B0 0x318 0x000 0x0 0x0 #define MX8MQ_IOMUXC_SD1_DATA2_USDHC1_DATA2 0x0B0 0x318 0x000 0x0 0x0

View File

@ -54,8 +54,7 @@
static inline unsigned long user_stack_pointer(struct pt_regs *regs) static inline unsigned long user_stack_pointer(struct pt_regs *regs)
{ {
/* FIXME: should this be bspstore + nr_dirty regs? */ return regs->r12;
return regs->ar_bspstore;
} }
static inline int is_syscall_success(struct pt_regs *regs) static inline int is_syscall_success(struct pt_regs *regs)
@ -79,11 +78,6 @@ static inline long regs_return_value(struct pt_regs *regs)
unsigned long __ip = instruction_pointer(regs); \ unsigned long __ip = instruction_pointer(regs); \
(__ip & ~3UL) + ((__ip & 3UL) << 2); \ (__ip & ~3UL) + ((__ip & 3UL) << 2); \
}) })
/*
* Why not default? Because user_stack_pointer() on ia64 gives register
* stack backing store instead...
*/
#define current_user_stack_pointer() (current_pt_regs()->r12)
/* given a pointer to a task_struct, return the user's pt_regs */ /* given a pointer to a task_struct, return the user's pt_regs */
# define task_pt_regs(t) (((struct pt_regs *) ((char *) (t) + IA64_STK_OFFSET)) - 1) # define task_pt_regs(t) (((struct pt_regs *) ((char *) (t) + IA64_STK_OFFSET)) - 1)

View File

@ -239,7 +239,7 @@ void flush_dcache_page(struct page *page)
{ {
struct address_space *mapping; struct address_space *mapping;
mapping = page_mapping(page); mapping = page_mapping_file(page);
if (mapping && !mapping_mapped(mapping)) if (mapping && !mapping_mapped(mapping))
set_bit(PG_dcache_dirty, &page->flags); set_bit(PG_dcache_dirty, &page->flags);
else { else {

View File

@ -72,7 +72,7 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new_, int size)
#endif #endif
case 4: return __cmpxchg_u32((unsigned int *)ptr, case 4: return __cmpxchg_u32((unsigned int *)ptr,
(unsigned int)old, (unsigned int)new_); (unsigned int)old, (unsigned int)new_);
case 1: return __cmpxchg_u8((u8 *)ptr, (u8)old, (u8)new_); case 1: return __cmpxchg_u8((u8 *)ptr, old & 0xff, new_ & 0xff);
} }
__cmpxchg_called_with_bad_pointer(); __cmpxchg_called_with_bad_pointer();
return old; return old;

View File

@ -37,10 +37,12 @@ static int diag8_noresponse(int cmdlen)
static int diag8_response(int cmdlen, char *response, int *rlen) static int diag8_response(int cmdlen, char *response, int *rlen)
{ {
unsigned long _cmdlen = cmdlen | 0x40000000L;
unsigned long _rlen = *rlen;
register unsigned long reg2 asm ("2") = (addr_t) cpcmd_buf; register unsigned long reg2 asm ("2") = (addr_t) cpcmd_buf;
register unsigned long reg3 asm ("3") = (addr_t) response; register unsigned long reg3 asm ("3") = (addr_t) response;
register unsigned long reg4 asm ("4") = cmdlen | 0x40000000L; register unsigned long reg4 asm ("4") = _cmdlen;
register unsigned long reg5 asm ("5") = *rlen; register unsigned long reg5 asm ("5") = _rlen;
asm volatile( asm volatile(
" diag %2,%0,0x8\n" " diag %2,%0,0x8\n"

View File

@ -125,7 +125,7 @@ config AGP_HP_ZX1
config AGP_PARISC config AGP_PARISC
tristate "HP Quicksilver AGP support" tristate "HP Quicksilver AGP support"
depends on AGP && PARISC && 64BIT depends on AGP && PARISC && 64BIT && IOMMU_SBA
help help
This option gives you AGP GART support for the HP Quicksilver This option gives you AGP GART support for the HP Quicksilver
AGP bus adapter on HP PA-RISC machines (Ok, just on the C8000 AGP bus adapter on HP PA-RISC machines (Ok, just on the C8000

View File

@ -4233,20 +4233,19 @@ int clk_notifier_register(struct clk *clk, struct notifier_block *nb)
/* search the list of notifiers for this clk */ /* search the list of notifiers for this clk */
list_for_each_entry(cn, &clk_notifier_list, node) list_for_each_entry(cn, &clk_notifier_list, node)
if (cn->clk == clk) if (cn->clk == clk)
break; goto found;
/* if clk wasn't in the notifier list, allocate new clk_notifier */ /* if clk wasn't in the notifier list, allocate new clk_notifier */
if (cn->clk != clk) { cn = kzalloc(sizeof(*cn), GFP_KERNEL);
cn = kzalloc(sizeof(*cn), GFP_KERNEL); if (!cn)
if (!cn) goto out;
goto out;
cn->clk = clk; cn->clk = clk;
srcu_init_notifier_head(&cn->notifier_head); srcu_init_notifier_head(&cn->notifier_head);
list_add(&cn->node, &clk_notifier_list); list_add(&cn->node, &clk_notifier_list);
}
found:
ret = srcu_notifier_chain_register(&cn->notifier_head, nb); ret = srcu_notifier_chain_register(&cn->notifier_head, nb);
clk->core->notifier_count++; clk->core->notifier_count++;
@ -4271,32 +4270,28 @@ EXPORT_SYMBOL_GPL(clk_notifier_register);
*/ */
int clk_notifier_unregister(struct clk *clk, struct notifier_block *nb) int clk_notifier_unregister(struct clk *clk, struct notifier_block *nb)
{ {
struct clk_notifier *cn = NULL; struct clk_notifier *cn;
int ret = -EINVAL; int ret = -ENOENT;
if (!clk || !nb) if (!clk || !nb)
return -EINVAL; return -EINVAL;
clk_prepare_lock(); clk_prepare_lock();
list_for_each_entry(cn, &clk_notifier_list, node) list_for_each_entry(cn, &clk_notifier_list, node) {
if (cn->clk == clk) if (cn->clk == clk) {
ret = srcu_notifier_chain_unregister(&cn->notifier_head, nb);
clk->core->notifier_count--;
/* XXX the notifier code should handle this better */
if (!cn->notifier_head.head) {
srcu_cleanup_notifier_head(&cn->notifier_head);
list_del(&cn->node);
kfree(cn);
}
break; break;
if (cn->clk == clk) {
ret = srcu_notifier_chain_unregister(&cn->notifier_head, nb);
clk->core->notifier_count--;
/* XXX the notifier code should handle this better */
if (!cn->notifier_head.head) {
srcu_cleanup_notifier_head(&cn->notifier_head);
list_del(&cn->node);
kfree(cn);
} }
} else {
ret = -ENOENT;
} }
clk_prepare_unlock(); clk_prepare_unlock();

View File

@ -99,7 +99,7 @@ static unsigned long socfpga_clk_recalc_rate(struct clk_hw *hwclk,
val = readl(socfpgaclk->div_reg) >> socfpgaclk->shift; val = readl(socfpgaclk->div_reg) >> socfpgaclk->shift;
val &= GENMASK(socfpgaclk->width - 1, 0); val &= GENMASK(socfpgaclk->width - 1, 0);
/* Check for GPIO_DB_CLK by its offset */ /* Check for GPIO_DB_CLK by its offset */
if ((int) socfpgaclk->div_reg & SOCFPGA_GPIO_DB_CLK_OFFSET) if ((uintptr_t) socfpgaclk->div_reg & SOCFPGA_GPIO_DB_CLK_OFFSET)
div = val + 1; div = val + 1;
else else
div = (1 << val); div = (1 << val);

View File

@ -24,7 +24,6 @@ struct stm32_timer_cnt {
struct counter_device counter; struct counter_device counter;
struct regmap *regmap; struct regmap *regmap;
struct clk *clk; struct clk *clk;
u32 ceiling;
u32 max_arr; u32 max_arr;
}; };
@ -67,14 +66,15 @@ static int stm32_count_write(struct counter_device *counter,
struct counter_count_write_value *val) struct counter_count_write_value *val)
{ {
struct stm32_timer_cnt *const priv = counter->priv; struct stm32_timer_cnt *const priv = counter->priv;
u32 cnt; u32 cnt, ceiling;
int err; int err;
err = counter_count_write_value_get(&cnt, COUNTER_COUNT_POSITION, val); err = counter_count_write_value_get(&cnt, COUNTER_COUNT_POSITION, val);
if (err) if (err)
return err; return err;
if (cnt > priv->ceiling) regmap_read(priv->regmap, TIM_ARR, &ceiling);
if (cnt > ceiling)
return -EINVAL; return -EINVAL;
return regmap_write(priv->regmap, TIM_CNT, cnt); return regmap_write(priv->regmap, TIM_CNT, cnt);
@ -136,10 +136,6 @@ static int stm32_count_function_set(struct counter_device *counter,
regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN, 0); regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN, 0);
/* TIMx_ARR register shouldn't be buffered (ARPE=0) */
regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_ARPE, 0);
regmap_write(priv->regmap, TIM_ARR, priv->ceiling);
regmap_update_bits(priv->regmap, TIM_SMCR, TIM_SMCR_SMS, sms); regmap_update_bits(priv->regmap, TIM_SMCR, TIM_SMCR_SMS, sms);
/* Make sure that registers are updated */ /* Make sure that registers are updated */
@ -197,7 +193,6 @@ static ssize_t stm32_count_ceiling_write(struct counter_device *counter,
regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_ARPE, 0); regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_ARPE, 0);
regmap_write(priv->regmap, TIM_ARR, ceiling); regmap_write(priv->regmap, TIM_ARR, ceiling);
priv->ceiling = ceiling;
return len; return len;
} }
@ -369,7 +364,6 @@ static int stm32_timer_cnt_probe(struct platform_device *pdev)
priv->regmap = ddata->regmap; priv->regmap = ddata->regmap;
priv->clk = ddata->clk; priv->clk = ddata->clk;
priv->ceiling = ddata->max_arr;
priv->max_arr = ddata->max_arr; priv->max_arr = ddata->max_arr;
priv->counter.name = dev_name(dev); priv->counter.name = dev_name(dev);

View File

@ -83,13 +83,31 @@ static void intel_dsm_platform_mux_info(acpi_handle dhandle)
return; return;
} }
if (!pkg->package.count) {
DRM_DEBUG_DRIVER("no connection in _DSM\n");
return;
}
connector_count = &pkg->package.elements[0]; connector_count = &pkg->package.elements[0];
DRM_DEBUG_DRIVER("MUX info connectors: %lld\n", DRM_DEBUG_DRIVER("MUX info connectors: %lld\n",
(unsigned long long)connector_count->integer.value); (unsigned long long)connector_count->integer.value);
for (i = 1; i < pkg->package.count; i++) { for (i = 1; i < pkg->package.count; i++) {
union acpi_object *obj = &pkg->package.elements[i]; union acpi_object *obj = &pkg->package.elements[i];
union acpi_object *connector_id = &obj->package.elements[0]; union acpi_object *connector_id;
union acpi_object *info = &obj->package.elements[1]; union acpi_object *info;
if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count < 2) {
DRM_DEBUG_DRIVER("Invalid object for MUX #%d\n", i);
continue;
}
connector_id = &obj->package.elements[0];
info = &obj->package.elements[1];
if (info->type != ACPI_TYPE_BUFFER || info->buffer.length < 4) {
DRM_DEBUG_DRIVER("Invalid info for MUX obj #%d\n", i);
continue;
}
DRM_DEBUG_DRIVER("Connector id: 0x%016llx\n", DRM_DEBUG_DRIVER("Connector id: 0x%016llx\n",
(unsigned long long)connector_id->integer.value); (unsigned long long)connector_id->integer.value);
DRM_DEBUG_DRIVER(" port id: %s\n", DRM_DEBUG_DRIVER(" port id: %s\n",

View File

@ -567,6 +567,7 @@ static int msm_drm_init(struct device *dev, struct drm_driver *drv)
kfree(priv); kfree(priv);
err_put_drm_dev: err_put_drm_dev:
drm_dev_put(ddev); drm_dev_put(ddev);
platform_set_drvdata(pdev, NULL);
return ret; return ret;
} }

View File

@ -254,13 +254,14 @@ EXPORT_SYMBOL_GPL(i2c_recover_bus);
static void i2c_init_recovery(struct i2c_adapter *adap) static void i2c_init_recovery(struct i2c_adapter *adap)
{ {
struct i2c_bus_recovery_info *bri = adap->bus_recovery_info; struct i2c_bus_recovery_info *bri = adap->bus_recovery_info;
char *err_str; char *err_str, *err_level = KERN_ERR;
if (!bri) if (!bri)
return; return;
if (!bri->recover_bus) { if (!bri->recover_bus) {
err_str = "no recover_bus() found"; err_str = "no suitable method provided";
err_level = KERN_DEBUG;
goto err; goto err;
} }
@ -290,7 +291,7 @@ static void i2c_init_recovery(struct i2c_adapter *adap)
return; return;
err: err:
dev_err(&adap->dev, "Not using recovery: %s\n", err_str); dev_printk(err_level, &adap->dev, "Not using recovery: %s\n", err_str);
adap->bus_recovery_info = NULL; adap->bus_recovery_info = NULL;
} }

View File

@ -76,7 +76,9 @@ static struct workqueue_struct *addr_wq;
static const struct nla_policy ib_nl_addr_policy[LS_NLA_TYPE_MAX] = { static const struct nla_policy ib_nl_addr_policy[LS_NLA_TYPE_MAX] = {
[LS_NLA_TYPE_DGID] = {.type = NLA_BINARY, [LS_NLA_TYPE_DGID] = {.type = NLA_BINARY,
.len = sizeof(struct rdma_nla_ls_gid)}, .len = sizeof(struct rdma_nla_ls_gid),
.validation_type = NLA_VALIDATE_MIN,
.min = sizeof(struct rdma_nla_ls_gid)},
}; };
static inline bool ib_nl_is_good_ip_resp(const struct nlmsghdr *nlh) static inline bool ib_nl_is_good_ip_resp(const struct nlmsghdr *nlh)

View File

@ -3616,7 +3616,8 @@ int c4iw_destroy_listen(struct iw_cm_id *cm_id)
c4iw_init_wr_wait(ep->com.wr_waitp); c4iw_init_wr_wait(ep->com.wr_waitp);
err = cxgb4_remove_server( err = cxgb4_remove_server(
ep->com.dev->rdev.lldi.ports[0], ep->stid, ep->com.dev->rdev.lldi.ports[0], ep->stid,
ep->com.dev->rdev.lldi.rxq_ids[0], true); ep->com.dev->rdev.lldi.rxq_ids[0],
ep->com.local_addr.ss_family == AF_INET6);
if (err) if (err)
goto done; goto done;
err = c4iw_wait_for_reply(&ep->com.dev->rdev, ep->com.wr_waitp, err = c4iw_wait_for_reply(&ep->com.dev->rdev, ep->com.wr_waitp,

View File

@ -856,7 +856,7 @@ static int peak_usb_create_dev(const struct peak_usb_adapter *peak_usb_adapter,
if (dev->adapter->dev_set_bus) { if (dev->adapter->dev_set_bus) {
err = dev->adapter->dev_set_bus(dev, 0); err = dev->adapter->dev_set_bus(dev, 0);
if (err) if (err)
goto lbl_unregister_candev; goto adap_dev_free;
} }
/* get device number early */ /* get device number early */
@ -868,6 +868,10 @@ static int peak_usb_create_dev(const struct peak_usb_adapter *peak_usb_adapter,
return 0; return 0;
adap_dev_free:
if (dev->adapter->dev_free)
dev->adapter->dev_free(dev);
lbl_unregister_candev: lbl_unregister_candev:
unregister_candev(netdev); unregister_candev(netdev);

View File

@ -93,8 +93,12 @@
/* GSWIP MII Registers */ /* GSWIP MII Registers */
#define GSWIP_MII_CFGp(p) (0x2 * (p)) #define GSWIP_MII_CFGp(p) (0x2 * (p))
#define GSWIP_MII_CFG_RESET BIT(15)
#define GSWIP_MII_CFG_EN BIT(14) #define GSWIP_MII_CFG_EN BIT(14)
#define GSWIP_MII_CFG_ISOLATE BIT(13)
#define GSWIP_MII_CFG_LDCLKDIS BIT(12) #define GSWIP_MII_CFG_LDCLKDIS BIT(12)
#define GSWIP_MII_CFG_RGMII_IBS BIT(8)
#define GSWIP_MII_CFG_RMII_CLK BIT(7)
#define GSWIP_MII_CFG_MODE_MIIP 0x0 #define GSWIP_MII_CFG_MODE_MIIP 0x0
#define GSWIP_MII_CFG_MODE_MIIM 0x1 #define GSWIP_MII_CFG_MODE_MIIM 0x1
#define GSWIP_MII_CFG_MODE_RMIIP 0x2 #define GSWIP_MII_CFG_MODE_RMIIP 0x2
@ -190,6 +194,23 @@
#define GSWIP_PCE_DEFPVID(p) (0x486 + ((p) * 0xA)) #define GSWIP_PCE_DEFPVID(p) (0x486 + ((p) * 0xA))
#define GSWIP_MAC_FLEN 0x8C5 #define GSWIP_MAC_FLEN 0x8C5
#define GSWIP_MAC_CTRL_0p(p) (0x903 + ((p) * 0xC))
#define GSWIP_MAC_CTRL_0_PADEN BIT(8)
#define GSWIP_MAC_CTRL_0_FCS_EN BIT(7)
#define GSWIP_MAC_CTRL_0_FCON_MASK 0x0070
#define GSWIP_MAC_CTRL_0_FCON_AUTO 0x0000
#define GSWIP_MAC_CTRL_0_FCON_RX 0x0010
#define GSWIP_MAC_CTRL_0_FCON_TX 0x0020
#define GSWIP_MAC_CTRL_0_FCON_RXTX 0x0030
#define GSWIP_MAC_CTRL_0_FCON_NONE 0x0040
#define GSWIP_MAC_CTRL_0_FDUP_MASK 0x000C
#define GSWIP_MAC_CTRL_0_FDUP_AUTO 0x0000
#define GSWIP_MAC_CTRL_0_FDUP_EN 0x0004
#define GSWIP_MAC_CTRL_0_FDUP_DIS 0x000C
#define GSWIP_MAC_CTRL_0_GMII_MASK 0x0003
#define GSWIP_MAC_CTRL_0_GMII_AUTO 0x0000
#define GSWIP_MAC_CTRL_0_GMII_MII 0x0001
#define GSWIP_MAC_CTRL_0_GMII_RGMII 0x0002
#define GSWIP_MAC_CTRL_2p(p) (0x905 + ((p) * 0xC)) #define GSWIP_MAC_CTRL_2p(p) (0x905 + ((p) * 0xC))
#define GSWIP_MAC_CTRL_2_MLEN BIT(3) /* Maximum Untagged Frame Lnegth */ #define GSWIP_MAC_CTRL_2_MLEN BIT(3) /* Maximum Untagged Frame Lnegth */
@ -653,16 +674,13 @@ static int gswip_port_enable(struct dsa_switch *ds, int port,
GSWIP_SDMA_PCTRLp(port)); GSWIP_SDMA_PCTRLp(port));
if (!dsa_is_cpu_port(ds, port)) { if (!dsa_is_cpu_port(ds, port)) {
u32 macconf = GSWIP_MDIO_PHY_LINK_AUTO | u32 mdio_phy = 0;
GSWIP_MDIO_PHY_SPEED_AUTO |
GSWIP_MDIO_PHY_FDUP_AUTO |
GSWIP_MDIO_PHY_FCONTX_AUTO |
GSWIP_MDIO_PHY_FCONRX_AUTO |
(phydev->mdio.addr & GSWIP_MDIO_PHY_ADDR_MASK);
gswip_mdio_w(priv, macconf, GSWIP_MDIO_PHYp(port)); if (phydev)
/* Activate MDIO auto polling */ mdio_phy = phydev->mdio.addr & GSWIP_MDIO_PHY_ADDR_MASK;
gswip_mdio_mask(priv, 0, BIT(port), GSWIP_MDIO_MDC_CFG0);
gswip_mdio_mask(priv, GSWIP_MDIO_PHY_ADDR_MASK, mdio_phy,
GSWIP_MDIO_PHYp(port));
} }
return 0; return 0;
@ -675,14 +693,6 @@ static void gswip_port_disable(struct dsa_switch *ds, int port)
if (!dsa_is_user_port(ds, port)) if (!dsa_is_user_port(ds, port))
return; return;
if (!dsa_is_cpu_port(ds, port)) {
gswip_mdio_mask(priv, GSWIP_MDIO_PHY_LINK_DOWN,
GSWIP_MDIO_PHY_LINK_MASK,
GSWIP_MDIO_PHYp(port));
/* Deactivate MDIO auto polling */
gswip_mdio_mask(priv, BIT(port), 0, GSWIP_MDIO_MDC_CFG0);
}
gswip_switch_mask(priv, GSWIP_FDMA_PCTRL_EN, 0, gswip_switch_mask(priv, GSWIP_FDMA_PCTRL_EN, 0,
GSWIP_FDMA_PCTRLp(port)); GSWIP_FDMA_PCTRLp(port));
gswip_switch_mask(priv, GSWIP_SDMA_PCTRL_EN, 0, gswip_switch_mask(priv, GSWIP_SDMA_PCTRL_EN, 0,
@ -790,14 +800,32 @@ static int gswip_setup(struct dsa_switch *ds)
gswip_switch_w(priv, BIT(cpu_port), GSWIP_PCE_PMAP2); gswip_switch_w(priv, BIT(cpu_port), GSWIP_PCE_PMAP2);
gswip_switch_w(priv, BIT(cpu_port), GSWIP_PCE_PMAP3); gswip_switch_w(priv, BIT(cpu_port), GSWIP_PCE_PMAP3);
/* disable PHY auto polling */ /* Deactivate MDIO PHY auto polling. Some PHYs as the AR8030 have an
* interoperability problem with this auto polling mechanism because
* their status registers think that the link is in a different state
* than it actually is. For the AR8030 it has the BMSR_ESTATEN bit set
* as well as ESTATUS_1000_TFULL and ESTATUS_1000_XFULL. This makes the
* auto polling state machine consider the link being negotiated with
* 1Gbit/s. Since the PHY itself is a Fast Ethernet RMII PHY this leads
* to the switch port being completely dead (RX and TX are both not
* working).
* Also with various other PHY / port combinations (PHY11G GPHY, PHY22F
* GPHY, external RGMII PEF7071/7072) any traffic would stop. Sometimes
* it would work fine for a few minutes to hours and then stop, on
* other device it would no traffic could be sent or received at all.
* Testing shows that when PHY auto polling is disabled these problems
* go away.
*/
gswip_mdio_w(priv, 0x0, GSWIP_MDIO_MDC_CFG0); gswip_mdio_w(priv, 0x0, GSWIP_MDIO_MDC_CFG0);
/* Configure the MDIO Clock 2.5 MHz */ /* Configure the MDIO Clock 2.5 MHz */
gswip_mdio_mask(priv, 0xff, 0x09, GSWIP_MDIO_MDC_CFG1); gswip_mdio_mask(priv, 0xff, 0x09, GSWIP_MDIO_MDC_CFG1);
/* Disable the xMII link */ /* Disable the xMII interface and clear it's isolation bit */
for (i = 0; i < priv->hw_info->max_ports; i++) for (i = 0; i < priv->hw_info->max_ports; i++)
gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, i); gswip_mii_mask_cfg(priv,
GSWIP_MII_CFG_EN | GSWIP_MII_CFG_ISOLATE,
0, i);
/* enable special tag insertion on cpu port */ /* enable special tag insertion on cpu port */
gswip_switch_mask(priv, 0, GSWIP_FDMA_PCTRL_STEN, gswip_switch_mask(priv, 0, GSWIP_FDMA_PCTRL_STEN,
@ -1447,6 +1475,112 @@ static void gswip_phylink_validate(struct dsa_switch *ds, int port,
return; return;
} }
static void gswip_port_set_link(struct gswip_priv *priv, int port, bool link)
{
u32 mdio_phy;
if (link)
mdio_phy = GSWIP_MDIO_PHY_LINK_UP;
else
mdio_phy = GSWIP_MDIO_PHY_LINK_DOWN;
gswip_mdio_mask(priv, GSWIP_MDIO_PHY_LINK_MASK, mdio_phy,
GSWIP_MDIO_PHYp(port));
}
static void gswip_port_set_speed(struct gswip_priv *priv, int port, int speed,
phy_interface_t interface)
{
u32 mdio_phy = 0, mii_cfg = 0, mac_ctrl_0 = 0;
switch (speed) {
case SPEED_10:
mdio_phy = GSWIP_MDIO_PHY_SPEED_M10;
if (interface == PHY_INTERFACE_MODE_RMII)
mii_cfg = GSWIP_MII_CFG_RATE_M50;
else
mii_cfg = GSWIP_MII_CFG_RATE_M2P5;
mac_ctrl_0 = GSWIP_MAC_CTRL_0_GMII_MII;
break;
case SPEED_100:
mdio_phy = GSWIP_MDIO_PHY_SPEED_M100;
if (interface == PHY_INTERFACE_MODE_RMII)
mii_cfg = GSWIP_MII_CFG_RATE_M50;
else
mii_cfg = GSWIP_MII_CFG_RATE_M25;
mac_ctrl_0 = GSWIP_MAC_CTRL_0_GMII_MII;
break;
case SPEED_1000:
mdio_phy = GSWIP_MDIO_PHY_SPEED_G1;
mii_cfg = GSWIP_MII_CFG_RATE_M125;
mac_ctrl_0 = GSWIP_MAC_CTRL_0_GMII_RGMII;
break;
}
gswip_mdio_mask(priv, GSWIP_MDIO_PHY_SPEED_MASK, mdio_phy,
GSWIP_MDIO_PHYp(port));
gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_RATE_MASK, mii_cfg, port);
gswip_switch_mask(priv, GSWIP_MAC_CTRL_0_GMII_MASK, mac_ctrl_0,
GSWIP_MAC_CTRL_0p(port));
}
static void gswip_port_set_duplex(struct gswip_priv *priv, int port, int duplex)
{
u32 mac_ctrl_0, mdio_phy;
if (duplex == DUPLEX_FULL) {
mac_ctrl_0 = GSWIP_MAC_CTRL_0_FDUP_EN;
mdio_phy = GSWIP_MDIO_PHY_FDUP_EN;
} else {
mac_ctrl_0 = GSWIP_MAC_CTRL_0_FDUP_DIS;
mdio_phy = GSWIP_MDIO_PHY_FDUP_DIS;
}
gswip_switch_mask(priv, GSWIP_MAC_CTRL_0_FDUP_MASK, mac_ctrl_0,
GSWIP_MAC_CTRL_0p(port));
gswip_mdio_mask(priv, GSWIP_MDIO_PHY_FDUP_MASK, mdio_phy,
GSWIP_MDIO_PHYp(port));
}
static void gswip_port_set_pause(struct gswip_priv *priv, int port,
bool tx_pause, bool rx_pause)
{
u32 mac_ctrl_0, mdio_phy;
if (tx_pause && rx_pause) {
mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_RXTX;
mdio_phy = GSWIP_MDIO_PHY_FCONTX_EN |
GSWIP_MDIO_PHY_FCONRX_EN;
} else if (tx_pause) {
mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_TX;
mdio_phy = GSWIP_MDIO_PHY_FCONTX_EN |
GSWIP_MDIO_PHY_FCONRX_DIS;
} else if (rx_pause) {
mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_RX;
mdio_phy = GSWIP_MDIO_PHY_FCONTX_DIS |
GSWIP_MDIO_PHY_FCONRX_EN;
} else {
mac_ctrl_0 = GSWIP_MAC_CTRL_0_FCON_NONE;
mdio_phy = GSWIP_MDIO_PHY_FCONTX_DIS |
GSWIP_MDIO_PHY_FCONRX_DIS;
}
gswip_switch_mask(priv, GSWIP_MAC_CTRL_0_FCON_MASK,
mac_ctrl_0, GSWIP_MAC_CTRL_0p(port));
gswip_mdio_mask(priv,
GSWIP_MDIO_PHY_FCONTX_MASK |
GSWIP_MDIO_PHY_FCONRX_MASK,
mdio_phy, GSWIP_MDIO_PHYp(port));
}
static void gswip_phylink_mac_config(struct dsa_switch *ds, int port, static void gswip_phylink_mac_config(struct dsa_switch *ds, int port,
unsigned int mode, unsigned int mode,
const struct phylink_link_state *state) const struct phylink_link_state *state)
@ -1466,6 +1600,9 @@ static void gswip_phylink_mac_config(struct dsa_switch *ds, int port,
break; break;
case PHY_INTERFACE_MODE_RMII: case PHY_INTERFACE_MODE_RMII:
miicfg |= GSWIP_MII_CFG_MODE_RMIIM; miicfg |= GSWIP_MII_CFG_MODE_RMIIM;
/* Configure the RMII clock as output: */
miicfg |= GSWIP_MII_CFG_RMII_CLK;
break; break;
case PHY_INTERFACE_MODE_RGMII: case PHY_INTERFACE_MODE_RGMII:
case PHY_INTERFACE_MODE_RGMII_ID: case PHY_INTERFACE_MODE_RGMII_ID:
@ -1478,7 +1615,16 @@ static void gswip_phylink_mac_config(struct dsa_switch *ds, int port,
"Unsupported interface: %d\n", state->interface); "Unsupported interface: %d\n", state->interface);
return; return;
} }
gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_MODE_MASK, miicfg, port);
gswip_mii_mask_cfg(priv,
GSWIP_MII_CFG_MODE_MASK | GSWIP_MII_CFG_RMII_CLK |
GSWIP_MII_CFG_RGMII_IBS | GSWIP_MII_CFG_LDCLKDIS,
miicfg, port);
gswip_port_set_speed(priv, port, state->speed, state->interface);
gswip_port_set_duplex(priv, port, state->duplex);
gswip_port_set_pause(priv, port, !!(state->pause & MLO_PAUSE_TX),
!!(state->pause & MLO_PAUSE_RX));
switch (state->interface) { switch (state->interface) {
case PHY_INTERFACE_MODE_RGMII_ID: case PHY_INTERFACE_MODE_RGMII_ID:
@ -1503,6 +1649,9 @@ static void gswip_phylink_mac_link_down(struct dsa_switch *ds, int port,
struct gswip_priv *priv = ds->priv; struct gswip_priv *priv = ds->priv;
gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, port); gswip_mii_mask_cfg(priv, GSWIP_MII_CFG_EN, 0, port);
if (!dsa_is_cpu_port(ds, port))
gswip_port_set_link(priv, port, false);
} }
static void gswip_phylink_mac_link_up(struct dsa_switch *ds, int port, static void gswip_phylink_mac_link_up(struct dsa_switch *ds, int port,
@ -1512,6 +1661,9 @@ static void gswip_phylink_mac_link_up(struct dsa_switch *ds, int port,
{ {
struct gswip_priv *priv = ds->priv; struct gswip_priv *priv = ds->priv;
if (!dsa_is_cpu_port(ds, port))
gswip_port_set_link(priv, port, true);
gswip_mii_mask_cfg(priv, 0, GSWIP_MII_CFG_EN, port); gswip_mii_mask_cfg(priv, 0, GSWIP_MII_CFG_EN, port);
} }

View File

@ -181,9 +181,9 @@
#define XGBE_DMA_SYS_AWCR 0x30303030 #define XGBE_DMA_SYS_AWCR 0x30303030
/* DMA cache settings - PCI device */ /* DMA cache settings - PCI device */
#define XGBE_DMA_PCI_ARCR 0x00000003 #define XGBE_DMA_PCI_ARCR 0x000f0f0f
#define XGBE_DMA_PCI_AWCR 0x13131313 #define XGBE_DMA_PCI_AWCR 0x0f0f0f0f
#define XGBE_DMA_PCI_AWARCR 0x00000313 #define XGBE_DMA_PCI_AWARCR 0x00000f0f
/* DMA channel interrupt modes */ /* DMA channel interrupt modes */
#define XGBE_IRQ_MODE_EDGE 0 #define XGBE_IRQ_MODE_EDGE 0

View File

@ -2915,6 +2915,9 @@ static void gem_prog_cmp_regs(struct macb *bp, struct ethtool_rx_flow_spec *fs)
bool cmp_b = false; bool cmp_b = false;
bool cmp_c = false; bool cmp_c = false;
if (!macb_is_gem(bp))
return;
tp4sp_v = &(fs->h_u.tcp_ip4_spec); tp4sp_v = &(fs->h_u.tcp_ip4_spec);
tp4sp_m = &(fs->m_u.tcp_ip4_spec); tp4sp_m = &(fs->m_u.tcp_ip4_spec);
@ -3286,6 +3289,7 @@ static void macb_restore_features(struct macb *bp)
{ {
struct net_device *netdev = bp->dev; struct net_device *netdev = bp->dev;
netdev_features_t features = netdev->features; netdev_features_t features = netdev->features;
struct ethtool_rx_fs_item *item;
/* TX checksum offload */ /* TX checksum offload */
macb_set_txcsum_feature(bp, features); macb_set_txcsum_feature(bp, features);
@ -3294,6 +3298,9 @@ static void macb_restore_features(struct macb *bp)
macb_set_rxcsum_feature(bp, features); macb_set_rxcsum_feature(bp, features);
/* RX Flow Filters */ /* RX Flow Filters */
list_for_each_entry(item, &bp->rx_fs_list.list, list)
gem_prog_cmp_regs(bp, &item->fs);
macb_set_rxflow_feature(bp, features); macb_set_rxflow_feature(bp, features);
} }

View File

@ -1393,11 +1393,25 @@ int cudbg_collect_sge_indirect(struct cudbg_init *pdbg_init,
struct cudbg_buffer temp_buff = { 0 }; struct cudbg_buffer temp_buff = { 0 };
struct sge_qbase_reg_field *sge_qbase; struct sge_qbase_reg_field *sge_qbase;
struct ireg_buf *ch_sge_dbg; struct ireg_buf *ch_sge_dbg;
u8 padap_running = 0;
int i, rc; int i, rc;
u32 size;
rc = cudbg_get_buff(pdbg_init, dbg_buff, /* Accessing SGE_QBASE_MAP[0-3] and SGE_QBASE_INDEX regs can
sizeof(*ch_sge_dbg) * 2 + sizeof(*sge_qbase), * lead to SGE missing doorbells under heavy traffic. So, only
&temp_buff); * collect them when adapter is idle.
*/
for_each_port(padap, i) {
padap_running = netif_running(padap->port[i]);
if (padap_running)
break;
}
size = sizeof(*ch_sge_dbg) * 2;
if (!padap_running)
size += sizeof(*sge_qbase);
rc = cudbg_get_buff(pdbg_init, dbg_buff, size, &temp_buff);
if (rc) if (rc)
return rc; return rc;
@ -1419,7 +1433,8 @@ int cudbg_collect_sge_indirect(struct cudbg_init *pdbg_init,
ch_sge_dbg++; ch_sge_dbg++;
} }
if (CHELSIO_CHIP_VERSION(padap->params.chip) > CHELSIO_T5) { if (CHELSIO_CHIP_VERSION(padap->params.chip) > CHELSIO_T5 &&
!padap_running) {
sge_qbase = (struct sge_qbase_reg_field *)ch_sge_dbg; sge_qbase = (struct sge_qbase_reg_field *)ch_sge_dbg;
/* 1 addr reg SGE_QBASE_INDEX and 4 data reg /* 1 addr reg SGE_QBASE_INDEX and 4 data reg
* SGE_QBASE_MAP[0-3] * SGE_QBASE_MAP[0-3]

View File

@ -2093,7 +2093,8 @@ void t4_get_regs(struct adapter *adap, void *buf, size_t buf_size)
0x1190, 0x1194, 0x1190, 0x1194,
0x11a0, 0x11a4, 0x11a0, 0x11a4,
0x11b0, 0x11b4, 0x11b0, 0x11b4,
0x11fc, 0x1274, 0x11fc, 0x123c,
0x1254, 0x1274,
0x1280, 0x133c, 0x1280, 0x133c,
0x1800, 0x18fc, 0x1800, 0x18fc,
0x3000, 0x302c, 0x3000, 0x302c,

View File

@ -366,7 +366,11 @@ static void gfar_set_mac_for_addr(struct net_device *dev, int num,
static int gfar_set_mac_addr(struct net_device *dev, void *p) static int gfar_set_mac_addr(struct net_device *dev, void *p)
{ {
eth_mac_addr(dev, p); int ret;
ret = eth_mac_addr(dev, p);
if (ret)
return ret;
gfar_set_mac_for_addr(dev, 0, dev->dev_addr); gfar_set_mac_for_addr(dev, 0, dev->dev_addr);

View File

@ -2140,14 +2140,14 @@ static int hclgevf_ae_start(struct hnae3_handle *handle)
{ {
struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
clear_bit(HCLGEVF_STATE_DOWN, &hdev->state);
hclgevf_reset_tqp_stats(handle); hclgevf_reset_tqp_stats(handle);
hclgevf_request_link_info(hdev); hclgevf_request_link_info(hdev);
hclgevf_update_link_mode(hdev); hclgevf_update_link_mode(hdev);
clear_bit(HCLGEVF_STATE_DOWN, &hdev->state);
return 0; return 0;
} }

View File

@ -152,6 +152,7 @@ enum i40e_state_t {
__I40E_VIRTCHNL_OP_PENDING, __I40E_VIRTCHNL_OP_PENDING,
__I40E_RECOVERY_MODE, __I40E_RECOVERY_MODE,
__I40E_VF_RESETS_DISABLED, /* disable resets during i40e_remove */ __I40E_VF_RESETS_DISABLED, /* disable resets during i40e_remove */
__I40E_VFS_RELEASING,
/* This must be last as it determines the size of the BITMAP */ /* This must be last as it determines the size of the BITMAP */
__I40E_STATE_SIZE__, __I40E_STATE_SIZE__,
}; };

View File

@ -232,6 +232,8 @@ static void __i40e_add_stat_strings(u8 **p, const struct i40e_stats stats[],
I40E_STAT(struct i40e_vsi, _name, _stat) I40E_STAT(struct i40e_vsi, _name, _stat)
#define I40E_VEB_STAT(_name, _stat) \ #define I40E_VEB_STAT(_name, _stat) \
I40E_STAT(struct i40e_veb, _name, _stat) I40E_STAT(struct i40e_veb, _name, _stat)
#define I40E_VEB_TC_STAT(_name, _stat) \
I40E_STAT(struct i40e_cp_veb_tc_stats, _name, _stat)
#define I40E_PFC_STAT(_name, _stat) \ #define I40E_PFC_STAT(_name, _stat) \
I40E_STAT(struct i40e_pfc_stats, _name, _stat) I40E_STAT(struct i40e_pfc_stats, _name, _stat)
#define I40E_QUEUE_STAT(_name, _stat) \ #define I40E_QUEUE_STAT(_name, _stat) \
@ -266,11 +268,18 @@ static const struct i40e_stats i40e_gstrings_veb_stats[] = {
I40E_VEB_STAT("veb.rx_unknown_protocol", stats.rx_unknown_protocol), I40E_VEB_STAT("veb.rx_unknown_protocol", stats.rx_unknown_protocol),
}; };
struct i40e_cp_veb_tc_stats {
u64 tc_rx_packets;
u64 tc_rx_bytes;
u64 tc_tx_packets;
u64 tc_tx_bytes;
};
static const struct i40e_stats i40e_gstrings_veb_tc_stats[] = { static const struct i40e_stats i40e_gstrings_veb_tc_stats[] = {
I40E_VEB_STAT("veb.tc_%u_tx_packets", tc_stats.tc_tx_packets), I40E_VEB_TC_STAT("veb.tc_%u_tx_packets", tc_tx_packets),
I40E_VEB_STAT("veb.tc_%u_tx_bytes", tc_stats.tc_tx_bytes), I40E_VEB_TC_STAT("veb.tc_%u_tx_bytes", tc_tx_bytes),
I40E_VEB_STAT("veb.tc_%u_rx_packets", tc_stats.tc_rx_packets), I40E_VEB_TC_STAT("veb.tc_%u_rx_packets", tc_rx_packets),
I40E_VEB_STAT("veb.tc_%u_rx_bytes", tc_stats.tc_rx_bytes), I40E_VEB_TC_STAT("veb.tc_%u_rx_bytes", tc_rx_bytes),
}; };
static const struct i40e_stats i40e_gstrings_misc_stats[] = { static const struct i40e_stats i40e_gstrings_misc_stats[] = {
@ -1098,6 +1107,7 @@ static int i40e_get_link_ksettings(struct net_device *netdev,
/* Set flow control settings */ /* Set flow control settings */
ethtool_link_ksettings_add_link_mode(ks, supported, Pause); ethtool_link_ksettings_add_link_mode(ks, supported, Pause);
ethtool_link_ksettings_add_link_mode(ks, supported, Asym_Pause);
switch (hw->fc.requested_mode) { switch (hw->fc.requested_mode) {
case I40E_FC_FULL: case I40E_FC_FULL:
@ -2212,6 +2222,29 @@ static int i40e_get_sset_count(struct net_device *netdev, int sset)
} }
} }
/**
* i40e_get_veb_tc_stats - copy VEB TC statistics to formatted structure
* @tc: the TC statistics in VEB structure (veb->tc_stats)
* @i: the index of traffic class in (veb->tc_stats) structure to copy
*
* Copy VEB TC statistics from structure of arrays (veb->tc_stats) to
* one dimensional structure i40e_cp_veb_tc_stats.
* Produce formatted i40e_cp_veb_tc_stats structure of the VEB TC
* statistics for the given TC.
**/
static struct i40e_cp_veb_tc_stats
i40e_get_veb_tc_stats(struct i40e_veb_tc_stats *tc, unsigned int i)
{
struct i40e_cp_veb_tc_stats veb_tc = {
.tc_rx_packets = tc->tc_rx_packets[i],
.tc_rx_bytes = tc->tc_rx_bytes[i],
.tc_tx_packets = tc->tc_tx_packets[i],
.tc_tx_bytes = tc->tc_tx_bytes[i],
};
return veb_tc;
}
/** /**
* i40e_get_pfc_stats - copy HW PFC statistics to formatted structure * i40e_get_pfc_stats - copy HW PFC statistics to formatted structure
* @pf: the PF device structure * @pf: the PF device structure
@ -2296,8 +2329,16 @@ static void i40e_get_ethtool_stats(struct net_device *netdev,
i40e_gstrings_veb_stats); i40e_gstrings_veb_stats);
for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++)
i40e_add_ethtool_stats(&data, veb_stats ? veb : NULL, if (veb_stats) {
i40e_gstrings_veb_tc_stats); struct i40e_cp_veb_tc_stats veb_tc =
i40e_get_veb_tc_stats(&veb->tc_stats, i);
i40e_add_ethtool_stats(&data, &veb_tc,
i40e_gstrings_veb_tc_stats);
} else {
i40e_add_ethtool_stats(&data, NULL,
i40e_gstrings_veb_tc_stats);
}
i40e_add_ethtool_stats(&data, pf, i40e_gstrings_stats); i40e_add_ethtool_stats(&data, pf, i40e_gstrings_stats);

View File

@ -2547,8 +2547,7 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi)
i40e_stat_str(hw, aq_ret), i40e_stat_str(hw, aq_ret),
i40e_aq_str(hw, hw->aq.asq_last_status)); i40e_aq_str(hw, hw->aq.asq_last_status));
} else { } else {
dev_info(&pf->pdev->dev, "%s is %s allmulti mode.\n", dev_info(&pf->pdev->dev, "%s allmulti mode.\n",
vsi->netdev->name,
cur_multipromisc ? "entering" : "leaving"); cur_multipromisc ? "entering" : "leaving");
} }
} }
@ -14701,12 +14700,16 @@ static int i40e_init_recovery_mode(struct i40e_pf *pf, struct i40e_hw *hw)
* in order to register the netdev * in order to register the netdev
*/ */
v_idx = i40e_vsi_mem_alloc(pf, I40E_VSI_MAIN); v_idx = i40e_vsi_mem_alloc(pf, I40E_VSI_MAIN);
if (v_idx < 0) if (v_idx < 0) {
err = v_idx;
goto err_switch_setup; goto err_switch_setup;
}
pf->lan_vsi = v_idx; pf->lan_vsi = v_idx;
vsi = pf->vsi[v_idx]; vsi = pf->vsi[v_idx];
if (!vsi) if (!vsi) {
err = -EFAULT;
goto err_switch_setup; goto err_switch_setup;
}
vsi->alloc_queue_pairs = 1; vsi->alloc_queue_pairs = 1;
err = i40e_config_netdev(vsi); err = i40e_config_netdev(vsi);
if (err) if (err)

View File

@ -137,6 +137,7 @@ void i40e_vc_notify_vf_reset(struct i40e_vf *vf)
**/ **/
static inline void i40e_vc_disable_vf(struct i40e_vf *vf) static inline void i40e_vc_disable_vf(struct i40e_vf *vf)
{ {
struct i40e_pf *pf = vf->pf;
int i; int i;
i40e_vc_notify_vf_reset(vf); i40e_vc_notify_vf_reset(vf);
@ -147,6 +148,11 @@ static inline void i40e_vc_disable_vf(struct i40e_vf *vf)
* ensure a reset. * ensure a reset.
*/ */
for (i = 0; i < 20; i++) { for (i = 0; i < 20; i++) {
/* If PF is in VFs releasing state reset VF is impossible,
* so leave it.
*/
if (test_bit(__I40E_VFS_RELEASING, pf->state))
return;
if (i40e_reset_vf(vf, false)) if (i40e_reset_vf(vf, false))
return; return;
usleep_range(10000, 20000); usleep_range(10000, 20000);
@ -1506,6 +1512,8 @@ void i40e_free_vfs(struct i40e_pf *pf)
if (!pf->vf) if (!pf->vf)
return; return;
set_bit(__I40E_VFS_RELEASING, pf->state);
while (test_and_set_bit(__I40E_VF_DISABLE, pf->state)) while (test_and_set_bit(__I40E_VF_DISABLE, pf->state))
usleep_range(1000, 2000); usleep_range(1000, 2000);
@ -1563,6 +1571,7 @@ void i40e_free_vfs(struct i40e_pf *pf)
} }
} }
clear_bit(__I40E_VF_DISABLE, pf->state); clear_bit(__I40E_VF_DISABLE, pf->state);
clear_bit(__I40E_VFS_RELEASING, pf->state);
} }
#ifdef CONFIG_PCI_IOV #ifdef CONFIG_PCI_IOV

View File

@ -31,8 +31,8 @@ enum ice_ctl_q {
ICE_CTL_Q_MAILBOX, ICE_CTL_Q_MAILBOX,
}; };
/* Control Queue timeout settings - max delay 250ms */ /* Control Queue timeout settings - max delay 1s */
#define ICE_CTL_Q_SQ_CMD_TIMEOUT 2500 /* Count 2500 times */ #define ICE_CTL_Q_SQ_CMD_TIMEOUT 10000 /* Count 10000 times */
#define ICE_CTL_Q_SQ_CMD_USEC 100 /* Check every 100usec */ #define ICE_CTL_Q_SQ_CMD_USEC 100 /* Check every 100usec */
struct ice_ctl_q_ring { struct ice_ctl_q_ring {

View File

@ -1279,6 +1279,9 @@ ice_add_update_vsi_list(struct ice_hw *hw,
ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2, ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2,
vsi_list_id); vsi_list_id);
if (!m_entry->vsi_list_info)
return ICE_ERR_NO_MEMORY;
/* If this entry was large action then the large action needs /* If this entry was large action then the large action needs
* to be updated to point to FWD to VSI list * to be updated to point to FWD to VSI list
*/ */
@ -2266,6 +2269,7 @@ ice_vsi_uses_fltr(struct ice_fltr_mgmt_list_entry *fm_entry, u16 vsi_handle)
return ((fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI && return ((fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI &&
fm_entry->fltr_info.vsi_handle == vsi_handle) || fm_entry->fltr_info.vsi_handle == vsi_handle) ||
(fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI_LIST && (fm_entry->fltr_info.fltr_act == ICE_FWD_TO_VSI_LIST &&
fm_entry->vsi_list_info &&
(test_bit(vsi_handle, fm_entry->vsi_list_info->vsi_map)))); (test_bit(vsi_handle, fm_entry->vsi_list_info->vsi_map))));
} }
@ -2338,14 +2342,12 @@ ice_add_to_vsi_fltr_list(struct ice_hw *hw, u16 vsi_handle,
return ICE_ERR_PARAM; return ICE_ERR_PARAM;
list_for_each_entry(fm_entry, lkup_list_head, list_entry) { list_for_each_entry(fm_entry, lkup_list_head, list_entry) {
struct ice_fltr_info *fi; if (!ice_vsi_uses_fltr(fm_entry, vsi_handle))
fi = &fm_entry->fltr_info;
if (!fi || !ice_vsi_uses_fltr(fm_entry, vsi_handle))
continue; continue;
status = ice_add_entry_to_vsi_fltr_list(hw, vsi_handle, status = ice_add_entry_to_vsi_fltr_list(hw, vsi_handle,
vsi_list_head, fi); vsi_list_head,
&fm_entry->fltr_info);
if (status) if (status)
return status; return status;
} }
@ -2663,7 +2665,7 @@ ice_remove_vsi_lkup_fltr(struct ice_hw *hw, u16 vsi_handle,
&remove_list_head); &remove_list_head);
mutex_unlock(rule_lock); mutex_unlock(rule_lock);
if (status) if (status)
return; goto free_fltr_list;
switch (lkup) { switch (lkup) {
case ICE_SW_LKUP_MAC: case ICE_SW_LKUP_MAC:
@ -2686,6 +2688,7 @@ ice_remove_vsi_lkup_fltr(struct ice_hw *hw, u16 vsi_handle,
break; break;
} }
free_fltr_list:
list_for_each_entry_safe(fm_entry, tmp, &remove_list_head, list_entry) { list_for_each_entry_safe(fm_entry, tmp, &remove_list_head, list_entry) {
list_del(&fm_entry->list_entry); list_del(&fm_entry->list_entry);
devm_kfree(ice_hw_to_dev(hw), fm_entry); devm_kfree(ice_hw_to_dev(hw), fm_entry);

View File

@ -700,11 +700,11 @@ static int get_fec_supported_advertised(struct mlx5_core_dev *dev,
return 0; return 0;
} }
static void ptys2ethtool_supported_advertised_port(struct ethtool_link_ksettings *link_ksettings, static void ptys2ethtool_supported_advertised_port(struct mlx5_core_dev *mdev,
u32 eth_proto_cap, struct ethtool_link_ksettings *link_ksettings,
u8 connector_type, bool ext) u32 eth_proto_cap, u8 connector_type)
{ {
if ((!connector_type && !ext) || connector_type >= MLX5E_CONNECTOR_TYPE_NUMBER) { if (!MLX5_CAP_PCAM_FEATURE(mdev, ptys_connector_type)) {
if (eth_proto_cap & (MLX5E_PROT_MASK(MLX5E_10GBASE_CR) if (eth_proto_cap & (MLX5E_PROT_MASK(MLX5E_10GBASE_CR)
| MLX5E_PROT_MASK(MLX5E_10GBASE_SR) | MLX5E_PROT_MASK(MLX5E_10GBASE_SR)
| MLX5E_PROT_MASK(MLX5E_40GBASE_CR4) | MLX5E_PROT_MASK(MLX5E_40GBASE_CR4)
@ -836,9 +836,9 @@ static int ptys2connector_type[MLX5E_CONNECTOR_TYPE_NUMBER] = {
[MLX5E_PORT_OTHER] = PORT_OTHER, [MLX5E_PORT_OTHER] = PORT_OTHER,
}; };
static u8 get_connector_port(u32 eth_proto, u8 connector_type, bool ext) static u8 get_connector_port(struct mlx5_core_dev *mdev, u32 eth_proto, u8 connector_type)
{ {
if ((connector_type || ext) && connector_type < MLX5E_CONNECTOR_TYPE_NUMBER) if (MLX5_CAP_PCAM_FEATURE(mdev, ptys_connector_type))
return ptys2connector_type[connector_type]; return ptys2connector_type[connector_type];
if (eth_proto & if (eth_proto &
@ -937,11 +937,11 @@ int mlx5e_ethtool_get_link_ksettings(struct mlx5e_priv *priv,
link_ksettings); link_ksettings);
eth_proto_oper = eth_proto_oper ? eth_proto_oper : eth_proto_cap; eth_proto_oper = eth_proto_oper ? eth_proto_oper : eth_proto_cap;
connector_type = connector_type < MLX5E_CONNECTOR_TYPE_NUMBER ?
link_ksettings->base.port = get_connector_port(eth_proto_oper, connector_type : MLX5E_PORT_UNKNOWN;
connector_type, ext); link_ksettings->base.port = get_connector_port(mdev, eth_proto_oper, connector_type);
ptys2ethtool_supported_advertised_port(link_ksettings, eth_proto_admin, ptys2ethtool_supported_advertised_port(mdev, link_ksettings, eth_proto_admin,
connector_type, ext); connector_type);
get_lp_advertising(mdev, eth_proto_lp, link_ksettings); get_lp_advertising(mdev, eth_proto_lp, link_ksettings);
if (an_status == MLX5_AN_COMPLETE) if (an_status == MLX5_AN_COMPLETE)

View File

@ -926,13 +926,24 @@ void mlx5_core_eq_free_irqs(struct mlx5_core_dev *dev)
mutex_unlock(&table->lock); mutex_unlock(&table->lock);
} }
#ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING
#define MLX5_MAX_ASYNC_EQS 4
#else
#define MLX5_MAX_ASYNC_EQS 3
#endif
int mlx5_eq_table_create(struct mlx5_core_dev *dev) int mlx5_eq_table_create(struct mlx5_core_dev *dev)
{ {
struct mlx5_eq_table *eq_table = dev->priv.eq_table; struct mlx5_eq_table *eq_table = dev->priv.eq_table;
int num_eqs = MLX5_CAP_GEN(dev, max_num_eqs) ?
MLX5_CAP_GEN(dev, max_num_eqs) :
1 << MLX5_CAP_GEN(dev, log_max_eq);
int err; int err;
eq_table->num_comp_eqs = eq_table->num_comp_eqs =
mlx5_irq_get_num_comp(eq_table->irq_table); min_t(int,
mlx5_irq_get_num_comp(eq_table->irq_table),
num_eqs - MLX5_MAX_ASYNC_EQS);
err = create_async_eqs(dev); err = create_async_eqs(dev);
if (err) { if (err) {

View File

@ -454,6 +454,7 @@ void nfp_bpf_ctrl_msg_rx(struct nfp_app *app, struct sk_buff *skb)
dev_consume_skb_any(skb); dev_consume_skb_any(skb);
else else
dev_kfree_skb_any(skb); dev_kfree_skb_any(skb);
return;
} }
nfp_ccm_rx(&bpf->ccm, skb); nfp_ccm_rx(&bpf->ccm, skb);

View File

@ -164,6 +164,7 @@ struct nfp_fl_internal_ports {
* @qos_rate_limiters: Current active qos rate limiters * @qos_rate_limiters: Current active qos rate limiters
* @qos_stats_lock: Lock on qos stats updates * @qos_stats_lock: Lock on qos stats updates
* @pre_tun_rule_cnt: Number of pre-tunnel rules offloaded * @pre_tun_rule_cnt: Number of pre-tunnel rules offloaded
* @merge_table: Hash table to store merged flows
*/ */
struct nfp_flower_priv { struct nfp_flower_priv {
struct nfp_app *app; struct nfp_app *app;
@ -196,6 +197,7 @@ struct nfp_flower_priv {
unsigned int qos_rate_limiters; unsigned int qos_rate_limiters;
spinlock_t qos_stats_lock; /* Protect the qos stats */ spinlock_t qos_stats_lock; /* Protect the qos stats */
int pre_tun_rule_cnt; int pre_tun_rule_cnt;
struct rhashtable merge_table;
}; };
/** /**
@ -310,6 +312,12 @@ struct nfp_fl_payload_link {
}; };
extern const struct rhashtable_params nfp_flower_table_params; extern const struct rhashtable_params nfp_flower_table_params;
extern const struct rhashtable_params merge_table_params;
struct nfp_merge_info {
u64 parent_ctx;
struct rhash_head ht_node;
};
struct nfp_fl_stats_frame { struct nfp_fl_stats_frame {
__be32 stats_con_id; __be32 stats_con_id;

View File

@ -490,6 +490,12 @@ const struct rhashtable_params nfp_flower_table_params = {
.automatic_shrinking = true, .automatic_shrinking = true,
}; };
const struct rhashtable_params merge_table_params = {
.key_offset = offsetof(struct nfp_merge_info, parent_ctx),
.head_offset = offsetof(struct nfp_merge_info, ht_node),
.key_len = sizeof(u64),
};
int nfp_flower_metadata_init(struct nfp_app *app, u64 host_ctx_count, int nfp_flower_metadata_init(struct nfp_app *app, u64 host_ctx_count,
unsigned int host_num_mems) unsigned int host_num_mems)
{ {
@ -506,6 +512,10 @@ int nfp_flower_metadata_init(struct nfp_app *app, u64 host_ctx_count,
if (err) if (err)
goto err_free_flow_table; goto err_free_flow_table;
err = rhashtable_init(&priv->merge_table, &merge_table_params);
if (err)
goto err_free_stats_ctx_table;
get_random_bytes(&priv->mask_id_seed, sizeof(priv->mask_id_seed)); get_random_bytes(&priv->mask_id_seed, sizeof(priv->mask_id_seed));
/* Init ring buffer and unallocated mask_ids. */ /* Init ring buffer and unallocated mask_ids. */
@ -513,7 +523,7 @@ int nfp_flower_metadata_init(struct nfp_app *app, u64 host_ctx_count,
kmalloc_array(NFP_FLOWER_MASK_ENTRY_RS, kmalloc_array(NFP_FLOWER_MASK_ENTRY_RS,
NFP_FLOWER_MASK_ELEMENT_RS, GFP_KERNEL); NFP_FLOWER_MASK_ELEMENT_RS, GFP_KERNEL);
if (!priv->mask_ids.mask_id_free_list.buf) if (!priv->mask_ids.mask_id_free_list.buf)
goto err_free_stats_ctx_table; goto err_free_merge_table;
priv->mask_ids.init_unallocated = NFP_FLOWER_MASK_ENTRY_RS - 1; priv->mask_ids.init_unallocated = NFP_FLOWER_MASK_ENTRY_RS - 1;
@ -550,6 +560,8 @@ int nfp_flower_metadata_init(struct nfp_app *app, u64 host_ctx_count,
kfree(priv->mask_ids.last_used); kfree(priv->mask_ids.last_used);
err_free_mask_id: err_free_mask_id:
kfree(priv->mask_ids.mask_id_free_list.buf); kfree(priv->mask_ids.mask_id_free_list.buf);
err_free_merge_table:
rhashtable_destroy(&priv->merge_table);
err_free_stats_ctx_table: err_free_stats_ctx_table:
rhashtable_destroy(&priv->stats_ctx_table); rhashtable_destroy(&priv->stats_ctx_table);
err_free_flow_table: err_free_flow_table:
@ -568,6 +580,8 @@ void nfp_flower_metadata_cleanup(struct nfp_app *app)
nfp_check_rhashtable_empty, NULL); nfp_check_rhashtable_empty, NULL);
rhashtable_free_and_destroy(&priv->stats_ctx_table, rhashtable_free_and_destroy(&priv->stats_ctx_table,
nfp_check_rhashtable_empty, NULL); nfp_check_rhashtable_empty, NULL);
rhashtable_free_and_destroy(&priv->merge_table,
nfp_check_rhashtable_empty, NULL);
kvfree(priv->stats); kvfree(priv->stats);
kfree(priv->mask_ids.mask_id_free_list.buf); kfree(priv->mask_ids.mask_id_free_list.buf);
kfree(priv->mask_ids.last_used); kfree(priv->mask_ids.last_used);

View File

@@ -923,6 +923,8 @@ int nfp_flower_merge_offloaded_flows(struct nfp_app *app,
 struct netlink_ext_ack *extack = NULL;
 struct nfp_fl_payload *merge_flow;
 struct nfp_fl_key_ls merge_key_ls;
+struct nfp_merge_info *merge_info;
+u64 parent_ctx = 0;
 int err;
 ASSERT_RTNL();
@@ -933,6 +935,15 @@ int nfp_flower_merge_offloaded_flows(struct nfp_app *app,
 nfp_flower_is_merge_flow(sub_flow2))
 return -EINVAL;
+/* check if the two flows are already merged */
+parent_ctx = (u64)(be32_to_cpu(sub_flow1->meta.host_ctx_id)) << 32;
+parent_ctx |= (u64)(be32_to_cpu(sub_flow2->meta.host_ctx_id));
+if (rhashtable_lookup_fast(&priv->merge_table,
+&parent_ctx, merge_table_params)) {
+nfp_flower_cmsg_warn(app, "The two flows are already merged.\n");
+return 0;
+}
 err = nfp_flower_can_merge(sub_flow1, sub_flow2);
 if (err)
 return err;
@@ -974,16 +985,33 @@ int nfp_flower_merge_offloaded_flows(struct nfp_app *app,
 if (err)
 goto err_release_metadata;
+merge_info = kmalloc(sizeof(*merge_info), GFP_KERNEL);
+if (!merge_info) {
+err = -ENOMEM;
+goto err_remove_rhash;
+}
+merge_info->parent_ctx = parent_ctx;
+err = rhashtable_insert_fast(&priv->merge_table, &merge_info->ht_node,
+merge_table_params);
+if (err)
+goto err_destroy_merge_info;
 err = nfp_flower_xmit_flow(app, merge_flow,
 NFP_FLOWER_CMSG_TYPE_FLOW_MOD);
 if (err)
-goto err_remove_rhash;
+goto err_remove_merge_info;
 merge_flow->in_hw = true;
 sub_flow1->in_hw = false;
 return 0;
+err_remove_merge_info:
+WARN_ON_ONCE(rhashtable_remove_fast(&priv->merge_table,
+&merge_info->ht_node,
+merge_table_params));
+err_destroy_merge_info:
+kfree(merge_info);
 err_remove_rhash:
 WARN_ON_ONCE(rhashtable_remove_fast(&priv->flow_table,
 &merge_flow->fl_node,
@@ -1211,7 +1239,9 @@ nfp_flower_remove_merge_flow(struct nfp_app *app,
 {
 struct nfp_flower_priv *priv = app->priv;
 struct nfp_fl_payload_link *link, *temp;
+struct nfp_merge_info *merge_info;
 struct nfp_fl_payload *origin;
+u64 parent_ctx = 0;
 bool mod = false;
 int err;
@@ -1248,8 +1278,22 @@ nfp_flower_remove_merge_flow(struct nfp_app *app,
 err_free_links:
 /* Clean any links connected with the merged flow. */
 list_for_each_entry_safe(link, temp, &merge_flow->linked_flows,
-merge_flow.list)
+merge_flow.list) {
+u32 ctx_id = be32_to_cpu(link->sub_flow.flow->meta.host_ctx_id);
+parent_ctx = (parent_ctx << 32) | (u64)(ctx_id);
 nfp_flower_unlink_flow(link);
+}
+merge_info = rhashtable_lookup_fast(&priv->merge_table,
+&parent_ctx,
+merge_table_params);
+if (merge_info) {
+WARN_ON_ONCE(rhashtable_remove_fast(&priv->merge_table,
+&merge_info->ht_node,
+merge_table_params));
+kfree(merge_info);
+}
 kfree(merge_flow->action_data);
 kfree(merge_flow->mask_data);
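Note: the merge-table key used in both hunks above is nothing more than the two sub-flows' 32-bit host context IDs packed into one u64. A minimal stand-alone sketch of that packing (hypothetical ID values, not kernel code):

#include <stdint.h>
#include <stdio.h>

/* Pack two 32-bit host context IDs into the 64-bit merge-table key,
 * mirroring the parent_ctx computation in the hunks above. */
static uint64_t pack_parent_ctx(uint32_t ctx1, uint32_t ctx2)
{
	return ((uint64_t)ctx1 << 32) | (uint64_t)ctx2;
}

int main(void)
{
	uint32_t ctx1 = 0x11, ctx2 = 0x22;	/* hypothetical host ctx IDs */

	/* prints parent_ctx = 0x0000001100000022 */
	printf("parent_ctx = 0x%016llx\n",
	       (unsigned long long)pack_parent_ctx(ctx1, ctx2));
	return 0;
}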


@@ -365,6 +365,7 @@ static int atusb_alloc_urbs(struct atusb *atusb, int n)
 return -ENOMEM;
 }
 usb_anchor_urb(urb, &atusb->idle_urbs);
+usb_free_urb(urb);
 n--;
 }
 return 0;


@@ -190,7 +190,7 @@ EXPORT_SYMBOL_GPL(bcm_phy_enable_apd);
 int bcm_phy_set_eee(struct phy_device *phydev, bool enable)
 {
-int val;
+int val, mask = 0;
 /* Enable EEE at PHY level */
 val = phy_read_mmd(phydev, MDIO_MMD_AN, BRCM_CL45VEN_EEE_CONTROL);
@@ -209,10 +209,17 @@ int bcm_phy_set_eee(struct phy_device *phydev, bool enable)
 if (val < 0)
 return val;
+if (linkmode_test_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT,
+phydev->supported))
+mask |= MDIO_EEE_1000T;
+if (linkmode_test_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT,
+phydev->supported))
+mask |= MDIO_EEE_100TX;
 if (enable)
-val |= (MDIO_EEE_100TX | MDIO_EEE_1000T);
+val |= mask;
 else
-val &= ~(MDIO_EEE_100TX | MDIO_EEE_1000T);
+val &= ~mask;
 phy_write_mmd(phydev, MDIO_MMD_AN, BCM_CL45VEN_EEE_ADV, (u32)val);


@@ -68,6 +68,14 @@
 #include <linux/bpf.h>
 #include <linux/bpf_trace.h>
 #include <linux/mutex.h>
+#include <linux/ieee802154.h>
+#include <linux/if_ltalk.h>
+#include <uapi/linux/if_fddi.h>
+#include <uapi/linux/if_hippi.h>
+#include <uapi/linux/if_fc.h>
+#include <net/ax25.h>
+#include <net/rose.h>
+#include <net/6lowpan.h>
 #include <linux/uaccess.h>
 #include <linux/proc_fs.h>
@@ -3043,6 +3051,45 @@ static int tun_set_ebpf(struct tun_struct *tun, struct tun_prog **prog_p,
 return __tun_set_ebpf(tun, prog_p, prog);
 }
+/* Return correct value for tun->dev->addr_len based on tun->dev->type. */
+static unsigned char tun_get_addr_len(unsigned short type)
+{
+switch (type) {
+case ARPHRD_IP6GRE:
+case ARPHRD_TUNNEL6:
+return sizeof(struct in6_addr);
+case ARPHRD_IPGRE:
+case ARPHRD_TUNNEL:
+case ARPHRD_SIT:
+return 4;
+case ARPHRD_ETHER:
+return ETH_ALEN;
+case ARPHRD_IEEE802154:
+case ARPHRD_IEEE802154_MONITOR:
+return IEEE802154_EXTENDED_ADDR_LEN;
+case ARPHRD_PHONET_PIPE:
+case ARPHRD_PPP:
+case ARPHRD_NONE:
+return 0;
+case ARPHRD_6LOWPAN:
+return EUI64_ADDR_LEN;
+case ARPHRD_FDDI:
+return FDDI_K_ALEN;
+case ARPHRD_HIPPI:
+return HIPPI_ALEN;
+case ARPHRD_IEEE802:
+return FC_ALEN;
+case ARPHRD_ROSE:
+return ROSE_ADDR_LEN;
+case ARPHRD_NETROM:
+return AX25_ADDR_LEN;
+case ARPHRD_LOCALTLK:
+return LTALK_ALEN;
+default:
+return 0;
+}
+}
 static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
 unsigned long arg, int ifreq_len)
 {
@@ -3198,6 +3245,7 @@ static long __tun_chr_ioctl(struct file *file, unsigned int cmd,
 ret = -EBUSY;
 } else {
 tun->dev->type = (int) arg;
+tun->dev->addr_len = tun_get_addr_len(tun->dev->type);
 tun_debug(KERN_INFO, tun, "linktype set to %d\n",
 tun->dev->type);
 ret = 0;
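Note: the addr_len update above is applied on the TUNSETLINK ioctl path shown in the last hunk. A minimal userspace sketch of exercising that path (assumes CAP_NET_ADMIN and a freshly created, still-down device; error handling trimmed):

#include <fcntl.h>
#include <net/if.h>
#include <net/if_arp.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/if_tun.h>

int main(void)
{
	struct ifreq ifr;
	int fd = open("/dev/net/tun", O_RDWR);

	if (fd < 0)
		return 1;

	memset(&ifr, 0, sizeof(ifr));
	ifr.ifr_flags = IFF_TUN | IFF_NO_PI;	/* kernel picks a tun%d name */
	if (ioctl(fd, TUNSETIFF, &ifr) < 0)
		return 1;

	/* change the link type; the fix makes dev->addr_len follow it */
	if (ioctl(fd, TUNSETLINK, ARPHRD_ROSE) < 0)
		perror("TUNSETLINK");

	close(fd);
	return 0;
}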


@@ -611,7 +611,7 @@ static struct hso_serial *get_serial_by_index(unsigned index)
 return serial;
 }
-static int get_free_serial_index(void)
+static int obtain_minor(struct hso_serial *serial)
 {
 int index;
 unsigned long flags;
@@ -619,8 +619,10 @@ static int get_free_serial_index(void)
 spin_lock_irqsave(&serial_table_lock, flags);
 for (index = 0; index < HSO_SERIAL_TTY_MINORS; index++) {
 if (serial_table[index] == NULL) {
+serial_table[index] = serial->parent;
+serial->minor = index;
 spin_unlock_irqrestore(&serial_table_lock, flags);
-return index;
+return 0;
 }
 }
 spin_unlock_irqrestore(&serial_table_lock, flags);
@@ -629,15 +631,12 @@ static int get_free_serial_index(void)
 return -1;
 }
-static void set_serial_by_index(unsigned index, struct hso_serial *serial)
+static void release_minor(struct hso_serial *serial)
 {
 unsigned long flags;
 spin_lock_irqsave(&serial_table_lock, flags);
-if (serial)
-serial_table[index] = serial->parent;
-else
-serial_table[index] = NULL;
+serial_table[serial->minor] = NULL;
 spin_unlock_irqrestore(&serial_table_lock, flags);
 }
@@ -2230,6 +2229,7 @@ static int hso_stop_serial_device(struct hso_device *hso_dev)
 static void hso_serial_tty_unregister(struct hso_serial *serial)
 {
 tty_unregister_device(tty_drv, serial->minor);
+release_minor(serial);
 }
 static void hso_serial_common_free(struct hso_serial *serial)
@@ -2253,24 +2253,22 @@ static void hso_serial_common_free(struct hso_serial *serial)
 static int hso_serial_common_create(struct hso_serial *serial, int num_urbs,
 int rx_size, int tx_size)
 {
-int minor;
 int i;
 tty_port_init(&serial->port);
-minor = get_free_serial_index();
-if (minor < 0)
+if (obtain_minor(serial))
 goto exit2;
 /* register our minor number */
 serial->parent->dev = tty_port_register_device_attr(&serial->port,
-tty_drv, minor, &serial->parent->interface->dev,
+tty_drv, serial->minor, &serial->parent->interface->dev,
 serial->parent, hso_serial_dev_groups);
-if (IS_ERR(serial->parent->dev))
+if (IS_ERR(serial->parent->dev)) {
+release_minor(serial);
 goto exit2;
+}
-/* fill in specific data for later use */
-serial->minor = minor;
 serial->magic = HSO_SERIAL_MAGIC;
 spin_lock_init(&serial->serial_lock);
 serial->num_rx_urbs = num_urbs;
@@ -2668,9 +2666,6 @@ static struct hso_device *hso_create_bulk_serial_device(
 serial->write_data = hso_std_serial_write_data;
-/* and record this serial */
-set_serial_by_index(serial->minor, serial);
 /* setup the proc dirs and files if needed */
 hso_log_port(hso_dev);
@@ -2727,9 +2722,6 @@ struct hso_device *hso_create_mux_serial_device(struct usb_interface *interface,
 serial->shared_int->ref_count++;
 mutex_unlock(&serial->shared_int->shared_int_lock);
-/* and record this serial */
-set_serial_by_index(serial->minor, serial);
 /* setup the proc dirs and files if needed */
 hso_log_port(hso_dev);
@@ -3114,7 +3106,6 @@ static void hso_free_interface(struct usb_interface *interface)
 cancel_work_sync(&serial_table[i]->async_get_intf);
 hso_serial_tty_unregister(serial);
 kref_put(&serial_table[i]->ref, hso_serial_ref_free);
-set_serial_by_index(i, NULL);
 }
 }


@@ -376,7 +376,7 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 struct receive_queue *rq,
 struct page *page, unsigned int offset,
 unsigned int len, unsigned int truesize,
-bool hdr_valid)
+bool hdr_valid, unsigned int metasize)
 {
 struct sk_buff *skb;
 struct virtio_net_hdr_mrg_rxbuf *hdr;
@@ -398,6 +398,7 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 else
 hdr_padded_len = sizeof(struct padded_vnet_hdr);
+/* hdr_valid means no XDP, so we can copy the vnet header */
 if (hdr_valid)
 memcpy(hdr, p, hdr_len);
@@ -410,6 +411,11 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 copy = skb_tailroom(skb);
 skb_put_data(skb, p, copy);
+if (metasize) {
+__skb_pull(skb, metasize);
+skb_metadata_set(skb, metasize);
+}
 len -= copy;
 offset += copy;
@@ -455,10 +461,6 @@ static int __virtnet_xdp_xmit_one(struct virtnet_info *vi,
 struct virtio_net_hdr_mrg_rxbuf *hdr;
 int err;
-/* virtqueue want to use data area in-front of packet */
-if (unlikely(xdpf->metasize > 0))
-return -EOPNOTSUPP;
 if (unlikely(xdpf->headroom < vi->hdr_len))
 return -EOVERFLOW;
@@ -649,6 +651,7 @@ static struct sk_buff *receive_small(struct net_device *dev,
 unsigned int delta = 0;
 struct page *xdp_page;
 int err;
+unsigned int metasize = 0;
 len -= vi->hdr_len;
 stats->bytes += len;
@@ -688,8 +691,8 @@ static struct sk_buff *receive_small(struct net_device *dev,
 xdp.data_hard_start = buf + VIRTNET_RX_PAD + vi->hdr_len;
 xdp.data = xdp.data_hard_start + xdp_headroom;
-xdp_set_data_meta_invalid(&xdp);
 xdp.data_end = xdp.data + len;
+xdp.data_meta = xdp.data;
 xdp.rxq = &rq->xdp_rxq;
 orig_data = xdp.data;
 act = bpf_prog_run_xdp(xdp_prog, &xdp);
@@ -700,6 +703,7 @@ static struct sk_buff *receive_small(struct net_device *dev,
 /* Recalculate length in case bpf program changed it */
 delta = orig_data - xdp.data;
 len = xdp.data_end - xdp.data;
+metasize = xdp.data - xdp.data_meta;
 break;
 case XDP_TX:
 stats->xdp_tx++;
@@ -745,6 +749,9 @@ static struct sk_buff *receive_small(struct net_device *dev,
 memcpy(skb_vnet_hdr(skb), buf, vi->hdr_len);
 } /* keep zeroed vnet hdr since packet was changed by bpf */
+if (metasize)
+skb_metadata_set(skb, metasize);
 err:
 return skb;
@@ -765,8 +772,8 @@ static struct sk_buff *receive_big(struct net_device *dev,
 struct virtnet_rq_stats *stats)
 {
 struct page *page = buf;
-struct sk_buff *skb = page_to_skb(vi, rq, page, 0, len,
-PAGE_SIZE, true);
+struct sk_buff *skb =
+page_to_skb(vi, rq, page, 0, len, PAGE_SIZE, true, 0);
 stats->bytes += len - vi->hdr_len;
 if (unlikely(!skb))
@@ -798,6 +805,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 unsigned int truesize;
 unsigned int headroom = mergeable_ctx_to_headroom(ctx);
 int err;
+unsigned int metasize = 0;
 head_skb = NULL;
 stats->bytes += len - vi->hdr_len;
@@ -844,8 +852,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 data = page_address(xdp_page) + offset;
 xdp.data_hard_start = data - VIRTIO_XDP_HEADROOM + vi->hdr_len;
 xdp.data = data + vi->hdr_len;
-xdp_set_data_meta_invalid(&xdp);
 xdp.data_end = xdp.data + (len - vi->hdr_len);
+xdp.data_meta = xdp.data;
 xdp.rxq = &rq->xdp_rxq;
 act = bpf_prog_run_xdp(xdp_prog, &xdp);
@@ -853,24 +861,27 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 switch (act) {
 case XDP_PASS:
-/* recalculate offset to account for any header
- * adjustments. Note other cases do not build an
- * skb and avoid using offset
- */
-offset = xdp.data -
-page_address(xdp_page) - vi->hdr_len;
-/* recalculate len if xdp.data or xdp.data_end were
- * adjusted
- */
-len = xdp.data_end - xdp.data + vi->hdr_len;
+metasize = xdp.data - xdp.data_meta;
+/* recalculate offset to account for any header
+ * adjustments and minus the metasize to copy the
+ * metadata in page_to_skb(). Note other cases do not
+ * build an skb and avoid using offset
+ */
+offset = xdp.data - page_address(xdp_page) -
+vi->hdr_len - metasize;
+/* recalculate len if xdp.data, xdp.data_end or
+ * xdp.data_meta were adjusted
+ */
+len = xdp.data_end - xdp.data + vi->hdr_len + metasize;
 /* We can only create skb based on xdp_page. */
 if (unlikely(xdp_page != page)) {
 rcu_read_unlock();
 put_page(page);
-head_skb = page_to_skb(vi, rq, xdp_page,
-offset, len,
-PAGE_SIZE, false);
+head_skb = page_to_skb(vi, rq, xdp_page, offset,
+len, PAGE_SIZE, false,
+metasize);
 return head_skb;
 }
 break;
@@ -926,7 +937,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 goto err_skb;
 }
-head_skb = page_to_skb(vi, rq, page, offset, len, truesize, !xdp_prog);
+head_skb = page_to_skb(vi, rq, page, offset, len, truesize, !xdp_prog,
+metasize);
 curr_skb = head_skb;
 if (unlikely(!curr_skb))
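Note: with xdp.data_meta now initialised instead of being marked invalid, metadata that an XDP program places in front of the packet is preserved and handed to the stack via skb_metadata_set(). A minimal sketch of such a program (assumes libbpf's bpf_helpers.h; the stored value is purely illustrative):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_store_meta(struct xdp_md *ctx)
{
	__u32 *meta;
	void *data;

	/* reserve 4 bytes of metadata in front of the packet */
	if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(*meta)))
		return XDP_PASS;

	data = (void *)(long)ctx->data;
	meta = (void *)(long)ctx->data_meta;
	if ((void *)(meta + 1) > data)	/* bounds check for the verifier */
		return XDP_PASS;

	*meta = 0xcafe;			/* illustrative value */
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

The metadata then stays attached to the skb, so it can be read later by, e.g., a TC BPF classifier on the same device.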


@@ -309,11 +309,20 @@ static bool sanity_check(struct ce_array *ca)
 return ret;
 }
+/**
+ * cec_add_elem - Add an element to the CEC array.
+ * @pfn: page frame number to insert
+ *
+ * Return values:
+ * - <0: on error
+ * - 0: on success
+ * - >0: when the inserted pfn was offlined
+ */
 int cec_add_elem(u64 pfn)
 {
 struct ce_array *ca = &ce_arr;
+int count, err, ret = 0;
 unsigned int to = 0;
-int count, ret = 0;
 /*
 * We can be called very early on the identify_cpu() path where we are
@@ -330,8 +339,8 @@ int cec_add_elem(u64 pfn)
 if (ca->n == MAX_ELEMS)
 WARN_ON(!del_lru_elem_unlocked(ca));
-ret = find_elem(ca, pfn, &to);
-if (ret < 0) {
+err = find_elem(ca, pfn, &to);
+if (err < 0) {
 /*
 * Shift range [to-end] to make room for one more element.
 */


@@ -124,7 +124,7 @@ static const struct regulator_ops vid_ops = {
 static const struct regulator_desc regulators[] = {
 BD9571MWV_REG("VD09", "vd09", VD09, avs_ops, 0, 0x7f,
-0x80, 600000, 10000, 0x3c),
+0x6f, 600000, 10000, 0x3c),
 BD9571MWV_REG("VD18", "vd18", VD18, vid_ops, BD9571MWV_VD18_VID, 0xf,
 16, 1625000, 25000, 0),
 BD9571MWV_REG("VD25", "vd25", VD25, vid_ops, BD9571MWV_VD25_VID, 0xf,
@@ -133,7 +133,7 @@ static const struct regulator_desc regulators[] = {
 11, 2800000, 100000, 0),
 BD9571MWV_REG("DVFS", "dvfs", DVFS, reg_ops,
 BD9571MWV_DVFS_MONIVDAC, 0x7f,
-0x80, 600000, 10000, 0x3c),
+0x6f, 600000, 10000, 0x3c),
 };
 #ifdef CONFIG_PM_SLEEP


@@ -186,7 +186,7 @@ struct qm_eqcr_entry {
 __be32 tag;
 struct qm_fd fd;
 u8 __reserved3[32];
-} __packed;
+} __packed __aligned(8);
 #define QM_EQCR_VERB_VBIT 0x80
 #define QM_EQCR_VERB_CMD_MASK 0x61 /* but only one value; */
 #define QM_EQCR_VERB_CMD_ENQUEUE 0x01


@@ -63,6 +63,7 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
 dev_info(dev, "stub up\n");
+mutex_lock(&sdev->ud.sysfs_lock);
 spin_lock_irq(&sdev->ud.lock);
 if (sdev->ud.status != SDEV_ST_AVAILABLE) {
@@ -87,13 +88,13 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
 tcp_rx = kthread_create(stub_rx_loop, &sdev->ud, "stub_rx");
 if (IS_ERR(tcp_rx)) {
 sockfd_put(socket);
-return -EINVAL;
+goto unlock_mutex;
 }
 tcp_tx = kthread_create(stub_tx_loop, &sdev->ud, "stub_tx");
 if (IS_ERR(tcp_tx)) {
 kthread_stop(tcp_rx);
 sockfd_put(socket);
-return -EINVAL;
+goto unlock_mutex;
 }
 /* get task structs now */
@@ -112,6 +113,8 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
 wake_up_process(sdev->ud.tcp_rx);
 wake_up_process(sdev->ud.tcp_tx);
+mutex_unlock(&sdev->ud.sysfs_lock);
 } else {
 dev_info(dev, "stub down\n");
@@ -122,6 +125,7 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
 spin_unlock_irq(&sdev->ud.lock);
 usbip_event_add(&sdev->ud, SDEV_EVENT_DOWN);
+mutex_unlock(&sdev->ud.sysfs_lock);
 }
 return count;
@@ -130,6 +134,8 @@ static ssize_t usbip_sockfd_store(struct device *dev, struct device_attribute *a
 sockfd_put(socket);
 err:
 spin_unlock_irq(&sdev->ud.lock);
+unlock_mutex:
+mutex_unlock(&sdev->ud.sysfs_lock);
 return -EINVAL;
 }
 static DEVICE_ATTR_WO(usbip_sockfd);
@@ -270,6 +276,7 @@ static struct stub_device *stub_device_alloc(struct usb_device *udev)
 sdev->ud.side = USBIP_STUB;
 sdev->ud.status = SDEV_ST_AVAILABLE;
 spin_lock_init(&sdev->ud.lock);
+mutex_init(&sdev->ud.sysfs_lock);
 sdev->ud.tcp_socket = NULL;
 sdev->ud.sockfd = -1;


@@ -263,6 +263,9 @@ struct usbip_device {
 /* lock for status */
 spinlock_t lock;
+/* mutex for synchronizing sysfs store paths */
+struct mutex sysfs_lock;
 int sockfd;
 struct socket *tcp_socket;


@@ -70,6 +70,7 @@ static void event_handler(struct work_struct *work)
 while ((ud = get_event()) != NULL) {
 usbip_dbg_eh("pending event %lx\n", ud->event);
+mutex_lock(&ud->sysfs_lock);
 /*
 * NOTE: shutdown must come first.
 * Shutdown the device.
@@ -90,6 +91,7 @@ static void event_handler(struct work_struct *work)
 ud->eh_ops.unusable(ud);
 unset_event(ud, USBIP_EH_UNUSABLE);
 }
+mutex_unlock(&ud->sysfs_lock);
 wake_up(&ud->eh_waitq);
 }
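Note: the usbip hunks in this series all apply one pattern: every sysfs store path and the event handler now serialize on the new ud->sysfs_lock, so an attach/detach store cannot race an in-flight shutdown. A rough userspace sketch of that shape with pthreads (names are stand-ins, not the driver's API):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t sysfs_lock = PTHREAD_MUTEX_INITIALIZER;
static int status;	/* stands in for ud->status */

/* stands in for usbip_sockfd_store()/attach_store() */
static void sysfs_store(int new_status)
{
	pthread_mutex_lock(&sysfs_lock);
	status = new_status;		/* set up or tear down the connection */
	pthread_mutex_unlock(&sysfs_lock);
}

/* stands in for event_handler(): shutdown runs with the same lock held */
static void *event_handler(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&sysfs_lock);
	printf("handling event, status=%d\n", status);
	pthread_mutex_unlock(&sysfs_lock);
	return NULL;
}

int main(void)
{
	pthread_t eh;

	pthread_create(&eh, NULL, event_handler, NULL);
	sysfs_store(1);			/* cannot interleave with the handler */
	pthread_join(eh, NULL);
	return 0;
}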


@@ -1096,6 +1096,7 @@ static void vhci_device_init(struct vhci_device *vdev)
 vdev->ud.side = USBIP_VHCI;
 vdev->ud.status = VDEV_ST_NULL;
 spin_lock_init(&vdev->ud.lock);
+mutex_init(&vdev->ud.sysfs_lock);
 INIT_LIST_HEAD(&vdev->priv_rx);
 INIT_LIST_HEAD(&vdev->priv_tx);


@@ -185,6 +185,8 @@ static int vhci_port_disconnect(struct vhci_hcd *vhci_hcd, __u32 rhport)
 usbip_dbg_vhci_sysfs("enter\n");
+mutex_lock(&vdev->ud.sysfs_lock);
 /* lock */
 spin_lock_irqsave(&vhci->lock, flags);
 spin_lock(&vdev->ud.lock);
@@ -195,6 +197,7 @@ static int vhci_port_disconnect(struct vhci_hcd *vhci_hcd, __u32 rhport)
 /* unlock */
 spin_unlock(&vdev->ud.lock);
 spin_unlock_irqrestore(&vhci->lock, flags);
+mutex_unlock(&vdev->ud.sysfs_lock);
 return -EINVAL;
 }
@@ -205,6 +208,8 @@ static int vhci_port_disconnect(struct vhci_hcd *vhci_hcd, __u32 rhport)
 usbip_event_add(&vdev->ud, VDEV_EVENT_DOWN);
+mutex_unlock(&vdev->ud.sysfs_lock);
 return 0;
 }
@@ -349,30 +354,36 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
 else
 vdev = &vhci->vhci_hcd_hs->vdev[rhport];
+mutex_lock(&vdev->ud.sysfs_lock);
 /* Extract socket from fd. */
 socket = sockfd_lookup(sockfd, &err);
 if (!socket) {
 dev_err(dev, "failed to lookup sock");
-return -EINVAL;
+err = -EINVAL;
+goto unlock_mutex;
 }
 if (socket->type != SOCK_STREAM) {
 dev_err(dev, "Expecting SOCK_STREAM - found %d",
 socket->type);
 sockfd_put(socket);
-return -EINVAL;
+err = -EINVAL;
+goto unlock_mutex;
 }
 /* create threads before locking */
 tcp_rx = kthread_create(vhci_rx_loop, &vdev->ud, "vhci_rx");
 if (IS_ERR(tcp_rx)) {
 sockfd_put(socket);
-return -EINVAL;
+err = -EINVAL;
+goto unlock_mutex;
 }
 tcp_tx = kthread_create(vhci_tx_loop, &vdev->ud, "vhci_tx");
 if (IS_ERR(tcp_tx)) {
 kthread_stop(tcp_rx);
 sockfd_put(socket);
-return -EINVAL;
+err = -EINVAL;
+goto unlock_mutex;
 }
 /* get task structs now */
@@ -397,7 +408,8 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
 * Will be retried from userspace
 * if there's another free port.
 */
-return -EBUSY;
+err = -EBUSY;
+goto unlock_mutex;
 }
 dev_info(dev, "pdev(%u) rhport(%u) sockfd(%d)\n",
@@ -422,7 +434,15 @@ static ssize_t attach_store(struct device *dev, struct device_attribute *attr,
 rh_port_connect(vdev, speed);
+dev_info(dev, "Device attached\n");
+mutex_unlock(&vdev->ud.sysfs_lock);
 return count;
+unlock_mutex:
+mutex_unlock(&vdev->ud.sysfs_lock);
+return err;
 }
 static DEVICE_ATTR_WO(attach);


@@ -572,6 +572,7 @@ static int init_vudc_hw(struct vudc *udc)
 init_waitqueue_head(&udc->tx_waitq);
 spin_lock_init(&ud->lock);
+mutex_init(&ud->sysfs_lock);
 ud->status = SDEV_ST_AVAILABLE;
 ud->side = USBIP_VUDC;


@@ -112,6 +112,7 @@ static ssize_t usbip_sockfd_store(struct device *dev,
 dev_err(dev, "no device");
 return -ENODEV;
 }
+mutex_lock(&udc->ud.sysfs_lock);
 spin_lock_irqsave(&udc->lock, flags);
 /* Don't export what we don't have */
 if (!udc->driver || !udc->pullup) {
@@ -187,6 +188,8 @@ static ssize_t usbip_sockfd_store(struct device *dev,
 wake_up_process(udc->ud.tcp_rx);
 wake_up_process(udc->ud.tcp_tx);
+mutex_unlock(&udc->ud.sysfs_lock);
 return count;
 } else {
@@ -207,6 +210,7 @@ static ssize_t usbip_sockfd_store(struct device *dev,
 }
 spin_unlock_irqrestore(&udc->lock, flags);
+mutex_unlock(&udc->ud.sysfs_lock);
 return count;
@@ -216,6 +220,7 @@ static ssize_t usbip_sockfd_store(struct device *dev,
 spin_unlock_irq(&udc->ud.lock);
 unlock:
 spin_unlock_irqrestore(&udc->lock, flags);
+mutex_unlock(&udc->ud.sysfs_lock);
 return ret;
 }


@@ -222,7 +222,7 @@ static int xen_irq_info_common_setup(struct irq_info *info,
 info->evtchn = evtchn;
 info->cpu = cpu;
 info->mask_reason = EVT_MASK_REASON_EXPLICIT;
-spin_lock_init(&info->lock);
+raw_spin_lock_init(&info->lock);
 ret = set_evtchn_to_irq(evtchn, irq);
 if (ret < 0)
@@ -374,28 +374,28 @@ static void do_mask(struct irq_info *info, u8 reason)
 {
 unsigned long flags;
-spin_lock_irqsave(&info->lock, flags);
+raw_spin_lock_irqsave(&info->lock, flags);
 if (!info->mask_reason)
 mask_evtchn(info->evtchn);
 info->mask_reason |= reason;
-spin_unlock_irqrestore(&info->lock, flags);
+raw_spin_unlock_irqrestore(&info->lock, flags);
 }
 static void do_unmask(struct irq_info *info, u8 reason)
 {
 unsigned long flags;
-spin_lock_irqsave(&info->lock, flags);
+raw_spin_lock_irqsave(&info->lock, flags);
 info->mask_reason &= ~reason;
 if (!info->mask_reason)
 unmask_evtchn(info->evtchn);
-spin_unlock_irqrestore(&info->lock, flags);
+raw_spin_unlock_irqrestore(&info->lock, flags);
 }
 #ifdef CONFIG_X86


@@ -45,7 +45,7 @@ struct irq_info {
 unsigned short eoi_cpu; /* EOI must happen on this cpu */
 unsigned int irq_epoch; /* If eoi_cpu valid: irq_epoch of event */
 u64 eoi_time; /* Time in jiffies when to EOI. */
-spinlock_t lock;
+raw_spinlock_t lock;
 union {
 unsigned short virq;


@@ -4198,7 +4198,6 @@ int cifs_setup_cifs_sb(struct smb_vol *pvolume_info,
 cifs_sb->prepath = kstrdup(pvolume_info->prepath, GFP_KERNEL);
 if (cifs_sb->prepath == NULL)
 return -ENOMEM;
-cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_USE_PREFIX_PATH;
 }
 return 0;


@@ -861,6 +861,7 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
 struct buffer_head *map_bh)
 {
 int ret = 0;
+int boundary = sdio->boundary; /* dio_send_cur_page may clear it */
 if (dio->op == REQ_OP_WRITE) {
 /*
@@ -899,10 +900,10 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
 sdio->cur_page_fs_offset = sdio->block_in_file << sdio->blkbits;
 out:
 /*
-* If sdio->boundary then we want to schedule the IO now to
+* If boundary then we want to schedule the IO now to
 * avoid metadata seeks.
 */
-if (sdio->boundary) {
+if (boundary) {
 ret = dio_send_cur_page(dio, sdio, map_bh);
 if (sdio->bio)
 dio_bio_submit(dio, sdio);


@@ -139,10 +139,10 @@ static char *inode_name(struct inode *ino)
 static char *follow_link(char *link)
 {
-int len, n;
 char *name, *resolved, *end;
+int n;
-name = __getname();
+name = kmalloc(PATH_MAX, GFP_KERNEL);
 if (!name) {
 n = -ENOMEM;
 goto out_free;
@@ -164,21 +164,18 @@ static char *follow_link(char *link)
 return name;
 *(end + 1) = '\0';
-len = strlen(link) + strlen(name) + 1;
-resolved = kmalloc(len, GFP_KERNEL);
+resolved = kasprintf(GFP_KERNEL, "%s%s", link, name);
 if (resolved == NULL) {
 n = -ENOMEM;
 goto out_free;
 }
-sprintf(resolved, "%s%s", link, name);
-__putname(name);
+kfree(name);
 kfree(link);
 return resolved;
 out_free:
-__putname(name);
+kfree(name);
 return ERR_PTR(n);
 }
@@ -918,18 +915,16 @@ static int hostfs_fill_sb_common(struct super_block *sb, void *d, int silent)
 sb->s_d_op = &simple_dentry_operations;
 sb->s_maxbytes = MAX_LFS_FILESIZE;
-/* NULL is printed as <NULL> by sprintf: avoid that. */
+/* NULL is printed as '(null)' by printf(): avoid that. */
 if (req_root == NULL)
 req_root = "";
 err = -ENOMEM;
 sb->s_fs_info = host_root_path =
-kmalloc(strlen(root_ino) + strlen(req_root) + 2, GFP_KERNEL);
+kasprintf(GFP_KERNEL, "%s/%s", root_ino, req_root);
 if (host_root_path == NULL)
 goto out;
-sprintf(host_root_path, "%s/%s", root_ino, req_root);
 root_inode = new_inode(sb);
 if (!root_inode)
 goto out;
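Note: both hostfs hunks replace a hand-rolled strlen() + kmalloc() + sprintf() sequence with kasprintf(), which sizes, allocates and formats in one call. A userspace equivalent using asprintf(3), with illustrative strings:

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	const char *link = "/host", *name = "/etc/hostname";	/* illustrative */
	char *resolved;

	/* one call allocates and formats, like kasprintf() in the kernel */
	if (asprintf(&resolved, "%s%s", link, name) < 0)
		return 1;

	puts(resolved);
	free(resolved);
	return 0;
}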


@@ -2304,7 +2304,7 @@ static int ocfs2_dio_end_io_write(struct inode *inode,
 struct ocfs2_alloc_context *meta_ac = NULL;
 handle_t *handle = NULL;
 loff_t end = offset + bytes;
-int ret = 0, credits = 0, locked = 0;
+int ret = 0, credits = 0;
 ocfs2_init_dealloc_ctxt(&dealloc);
@@ -2315,13 +2315,6 @@ static int ocfs2_dio_end_io_write(struct inode *inode,
 !dwc->dw_orphaned)
 goto out;
-/* ocfs2_file_write_iter will get i_mutex, so we need not lock if we
- * are in that context. */
-if (dwc->dw_writer_pid != task_pid_nr(current)) {
-inode_lock(inode);
-locked = 1;
-}
 ret = ocfs2_inode_lock(inode, &di_bh, 1);
 if (ret < 0) {
 mlog_errno(ret);
@@ -2402,8 +2395,6 @@ static int ocfs2_dio_end_io_write(struct inode *inode,
 if (meta_ac)
 ocfs2_free_alloc_context(meta_ac);
 ocfs2_run_deallocs(osb, &dealloc);
-if (locked)
-inode_unlock(inode);
 ocfs2_dio_free_write_ctx(inode, dwc);
 return ret;


@@ -1244,22 +1244,24 @@ int ocfs2_setattr(struct dentry *dentry, struct iattr *attr)
 goto bail_unlock;
 }
 }
+down_write(&OCFS2_I(inode)->ip_alloc_sem);
 handle = ocfs2_start_trans(osb, OCFS2_INODE_UPDATE_CREDITS +
 2 * ocfs2_quota_trans_credits(sb));
 if (IS_ERR(handle)) {
 status = PTR_ERR(handle);
 mlog_errno(status);
-goto bail_unlock;
+goto bail_unlock_alloc;
 }
 status = __dquot_transfer(inode, transfer_to);
 if (status < 0)
 goto bail_commit;
 } else {
+down_write(&OCFS2_I(inode)->ip_alloc_sem);
 handle = ocfs2_start_trans(osb, OCFS2_INODE_UPDATE_CREDITS);
 if (IS_ERR(handle)) {
 status = PTR_ERR(handle);
 mlog_errno(status);
-goto bail_unlock;
+goto bail_unlock_alloc;
 }
 }
@@ -1272,6 +1274,8 @@ int ocfs2_setattr(struct dentry *dentry, struct iattr *attr)
 bail_commit:
 ocfs2_commit_trans(osb, handle);
+bail_unlock_alloc:
+up_write(&OCFS2_I(inode)->ip_alloc_sem);
 bail_unlock:
 if (status && inode_locked) {
 ocfs2_inode_unlock_tracker(inode, 1, &oh, had_lock);


@@ -415,11 +415,11 @@ struct mlx5_ifc_flow_table_prop_layout_bits {
 u8 reserved_at_60[0x18];
 u8 log_max_ft_num[0x8];
-u8 reserved_at_80[0x18];
+u8 reserved_at_80[0x10];
+u8 log_max_flow_counter[0x8];
 u8 log_max_destination[0x8];
-u8 log_max_flow_counter[0x8];
-u8 reserved_at_a8[0x10];
+u8 reserved_at_a0[0x18];
 u8 log_max_flow[0x8];
 u8 reserved_at_c0[0x40];
@@ -9669,7 +9669,7 @@ struct mlx5_ifc_pbmc_reg_bits {
 struct mlx5_ifc_bufferx_reg_bits buffer[10];
-u8 reserved_at_2e0[0x40];
+u8 reserved_at_2e0[0x80];
 };
 struct mlx5_ifc_qtct_reg_bits {


@@ -355,13 +355,17 @@ static inline void sk_psock_update_proto(struct sock *sk,
 static inline void sk_psock_restore_proto(struct sock *sk,
 struct sk_psock *psock)
 {
-sk->sk_prot->unhash = psock->saved_unhash;
 if (psock->sk_proto) {
 struct inet_connection_sock *icsk = inet_csk(sk);
 bool has_ulp = !!icsk->icsk_ulp_data;
 if (has_ulp) {
+/* TLS does not have an unhash proto in SW cases, but we need
+ * to ensure we stop using the sock_map unhash routine because
+ * the associated psock is being removed. So use the original
+ * unhash handler.
+ */
+WRITE_ONCE(sk->sk_prot->unhash, psock->saved_unhash);
 tcp_update_ulp(sk, psock->sk_proto,
 psock->saved_write_space);
 } else {


@@ -62,6 +62,8 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
 return -EINVAL;
 }
+skb_reset_mac_header(skb);
 if (hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
 u16 start = __virtio16_to_cpu(little_endian, hdr->csum_start);
 u16 off = __virtio16_to_cpu(little_endian, hdr->csum_offset);


@@ -72,7 +72,9 @@ struct netns_xfrm {
 #if IS_ENABLED(CONFIG_IPV6)
 struct dst_ops xfrm6_dst_ops;
 #endif
 spinlock_t xfrm_state_lock;
+seqcount_t xfrm_state_hash_generation;
 spinlock_t xfrm_policy_lock;
 struct mutex xfrm_cfg_mutex;
 };


@@ -171,9 +171,9 @@ static inline void red_set_vars(struct red_vars *v)
 static inline bool red_check_params(u32 qth_min, u32 qth_max, u8 Wlog,
 u8 Scell_log, u8 *stab)
 {
-if (fls(qth_min) + Wlog > 32)
+if (fls(qth_min) + Wlog >= 32)
 return false;
-if (fls(qth_max) + Wlog > 32)
+if (fls(qth_max) + Wlog >= 32)
 return false;
 if (Scell_log >= 32)
 return false;
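Note: the RED thresholds are used in Wlog fixed point, i.e. effectively as qth << Wlog inside a 32-bit word. Rejecting fls(qth) + Wlog >= 32 (instead of > 32) guarantees the scaled threshold stays below 2^31; with the old check, qth = 0x80000 with Wlog = 12 was still accepted even though the shifted value already reaches bit 31. A small userspace sketch of the tightened check, with fls() replaced by a portable loop:

#include <stdint.h>
#include <stdio.h>

/* portable stand-in for the kernel's fls(): position of highest set bit */
static int fls_u32(uint32_t x)
{
	int r = 0;

	while (x) {
		x >>= 1;
		r++;
	}
	return r;
}

/* mirrors the tightened test: reject fls(qth) + Wlog >= 32 */
static int red_qth_ok(uint32_t qth, uint8_t Wlog)
{
	return fls_u32(qth) + Wlog < 32;
}

int main(void)
{
	uint32_t qth = 0x80000;	/* fls = 20 */
	uint8_t Wlog = 12;	/* 20 + 12 == 32: old check passed, new one rejects */

	printf("accepted=%d, qth << Wlog = 0x%llx\n",
	       red_qth_ok(qth, Wlog),
	       (unsigned long long)qth << Wlog);
	return 0;
}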


@@ -2163,6 +2163,15 @@ static inline void skb_set_owner_r(struct sk_buff *skb, struct sock *sk)
 sk_mem_charge(sk, skb->truesize);
 }
+static inline void skb_set_owner_sk_safe(struct sk_buff *skb, struct sock *sk)
+{
+if (sk && refcount_inc_not_zero(&sk->sk_refcnt)) {
+skb_orphan(skb);
+skb->destructor = sock_efree;
+skb->sk = sk;
+}
+}
 void sk_reset_timer(struct sock *sk, struct timer_list *timer,
 unsigned long expires);


@@ -1098,7 +1098,7 @@ static inline int __xfrm_policy_check2(struct sock *sk, int dir,
 return __xfrm_policy_check(sk, ndir, skb, family);
 return (!net->xfrm.policy_count[dir] && !secpath_exists(skb)) ||
-(skb_dst(skb)->flags & DST_NOPOLICY) ||
+(skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) ||
 __xfrm_policy_check(sk, ndir, skb, family);
 }


@@ -70,7 +70,9 @@ struct gcov_fn_info {
 u32 ident;
 u32 checksum;
+#if CONFIG_CLANG_VERSION < 110000
 u8 use_extra_checksum;
+#endif
 u32 cfg_checksum;
 u32 num_counters;
@@ -145,10 +147,8 @@ void llvm_gcda_emit_function(u32 ident, const char *function_name,
 list_add_tail(&info->head, &current_info->functions);
 }
-EXPORT_SYMBOL(llvm_gcda_emit_function);
 #else
-void llvm_gcda_emit_function(u32 ident, u32 func_checksum,
-u8 use_extra_checksum, u32 cfg_checksum)
+void llvm_gcda_emit_function(u32 ident, u32 func_checksum, u32 cfg_checksum)
 {
 struct gcov_fn_info *info = kzalloc(sizeof(*info), GFP_KERNEL);
@@ -158,12 +158,11 @@ void llvm_gcda_emit_function(u32 ident, u32 func_checksum,
 INIT_LIST_HEAD(&info->head);
 info->ident = ident;
 info->checksum = func_checksum;
-info->use_extra_checksum = use_extra_checksum;
 info->cfg_checksum = cfg_checksum;
 list_add_tail(&info->head, &current_info->functions);
 }
-EXPORT_SYMBOL(llvm_gcda_emit_function);
 #endif
+EXPORT_SYMBOL(llvm_gcda_emit_function);
 void llvm_gcda_emit_arcs(u32 num_counters, u64 *counters)
 {
@@ -293,11 +292,16 @@ int gcov_info_is_compatible(struct gcov_info *info1, struct gcov_info *info2)
 !list_is_last(&fn_ptr2->head, &info2->functions)) {
 if (fn_ptr1->checksum != fn_ptr2->checksum)
 return false;
+#if CONFIG_CLANG_VERSION < 110000
 if (fn_ptr1->use_extra_checksum != fn_ptr2->use_extra_checksum)
 return false;
 if (fn_ptr1->use_extra_checksum &&
 fn_ptr1->cfg_checksum != fn_ptr2->cfg_checksum)
 return false;
+#else
+if (fn_ptr1->cfg_checksum != fn_ptr2->cfg_checksum)
+return false;
+#endif
 fn_ptr1 = list_next_entry(fn_ptr1, head);
 fn_ptr2 = list_next_entry(fn_ptr2, head);
 }
@@ -529,17 +533,22 @@ static size_t convert_to_gcda(char *buffer, struct gcov_info *info)
 list_for_each_entry(fi_ptr, &info->functions, head) {
 u32 i;
-u32 len = 2;
-if (fi_ptr->use_extra_checksum)
-len++;
 pos += store_gcov_u32(buffer, pos, GCOV_TAG_FUNCTION);
-pos += store_gcov_u32(buffer, pos, len);
+#if CONFIG_CLANG_VERSION < 110000
+pos += store_gcov_u32(buffer, pos,
+fi_ptr->use_extra_checksum ? 3 : 2);
+#else
+pos += store_gcov_u32(buffer, pos, 3);
+#endif
 pos += store_gcov_u32(buffer, pos, fi_ptr->ident);
 pos += store_gcov_u32(buffer, pos, fi_ptr->checksum);
+#if CONFIG_CLANG_VERSION < 110000
 if (fi_ptr->use_extra_checksum)
 pos += store_gcov_u32(buffer, pos, fi_ptr->cfg_checksum);
+#else
+pos += store_gcov_u32(buffer, pos, fi_ptr->cfg_checksum);
+#endif
 pos += store_gcov_u32(buffer, pos, GCOV_TAG_COUNTER_BASE);
 pos += store_gcov_u32(buffer, pos, fi_ptr->num_counters * 2);


@@ -1415,7 +1415,6 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
 */
 lockdep_assert_irqs_disabled();
-debug_work_activate(work);
 /* if draining, only works from the same workqueue are allowed */
 if (unlikely(wq->flags & __WQ_DRAINING) &&
@@ -1497,6 +1496,7 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
 worklist = &pwq->delayed_works;
 }
+debug_work_activate(work);
 insert_work(pwq, work, worklist, work_flags);
 out:


@@ -891,6 +891,7 @@ batadv_tt_prepare_tvlv_global_data(struct batadv_orig_node *orig_node,
 hlist_for_each_entry_rcu(vlan, &orig_node->vlan_list, list) {
 tt_vlan->vid = htons(vlan->vid);
 tt_vlan->crc = htonl(vlan->tt.crc);
+tt_vlan->reserved = 0;
 tt_vlan++;
 }
@@ -974,6 +975,7 @@ batadv_tt_prepare_tvlv_local_data(struct batadv_priv *bat_priv,
 tt_vlan->vid = htons(vlan->vid);
 tt_vlan->crc = htonl(vlan->tt.crc);
+tt_vlan->reserved = 0;
 tt_vlan++;
 }


@@ -88,6 +88,8 @@ MODULE_LICENSE("Dual BSD/GPL");
 MODULE_AUTHOR("Oliver Hartkopp <oliver.hartkopp@volkswagen.de>");
 MODULE_ALIAS("can-proto-2");
+#define BCM_MIN_NAMELEN CAN_REQUIRED_SIZE(struct sockaddr_can, can_ifindex)
 /*
 * easy access to the first 64 bit of can(fd)_frame payload. cp->data is
 * 64 bit aligned so the offset has to be multiples of 8 which is ensured
@@ -1294,7 +1296,7 @@ static int bcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
 /* no bound device as default => check msg_name */
 DECLARE_SOCKADDR(struct sockaddr_can *, addr, msg->msg_name);
-if (msg->msg_namelen < CAN_REQUIRED_SIZE(*addr, can_ifindex))
+if (msg->msg_namelen < BCM_MIN_NAMELEN)
 return -EINVAL;
 if (addr->can_family != AF_CAN)
@@ -1536,7 +1538,7 @@ static int bcm_connect(struct socket *sock, struct sockaddr *uaddr, int len,
 struct net *net = sock_net(sk);
 int ret = 0;
-if (len < CAN_REQUIRED_SIZE(*addr, can_ifindex))
+if (len < BCM_MIN_NAMELEN)
 return -EINVAL;
 lock_sock(sk);
@@ -1618,8 +1620,8 @@ static int bcm_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
 sock_recv_ts_and_drops(msg, sk, skb);
 if (msg->msg_name) {
-__sockaddr_check_size(sizeof(struct sockaddr_can));
-msg->msg_namelen = sizeof(struct sockaddr_can);
+__sockaddr_check_size(BCM_MIN_NAMELEN);
+msg->msg_namelen = BCM_MIN_NAMELEN;
 memcpy(msg->msg_name, skb->cb, msg->msg_namelen);
 }
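Note: BCM_MIN_NAMELEN above (and RAW_MIN_NAMELEN in the next file) expand CAN_REQUIRED_SIZE(struct sockaddr_can, can_ifindex), i.e. the offset of can_ifindex plus its size, so callers only have to supply the sockaddr up to and including that member rather than the whole padded structure. A userspace sketch of the same computation on a simplified stand-in struct (layout illustrative, not the real UAPI definition):

#include <stddef.h>
#include <stdio.h>

/* simplified stand-in for struct sockaddr_can (illustrative layout only) */
struct mock_sockaddr_can {
	unsigned short can_family;
	int can_ifindex;
	union {
		struct { unsigned int rx_id, tx_id; } tp;
	} can_addr;
};

/* same shape as the kernel's CAN_REQUIRED_SIZE() macro */
#define REQUIRED_SIZE(type, member) \
	(offsetof(type, member) + sizeof(((type *)0)->member))

int main(void)
{
	printf("min namelen = %zu, full sockaddr = %zu\n",
	       REQUIRED_SIZE(struct mock_sockaddr_can, can_ifindex),
	       sizeof(struct mock_sockaddr_can));
	return 0;
}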


@@ -62,6 +62,8 @@ MODULE_LICENSE("Dual BSD/GPL");
 MODULE_AUTHOR("Urs Thuermann <urs.thuermann@volkswagen.de>");
 MODULE_ALIAS("can-proto-1");
+#define RAW_MIN_NAMELEN CAN_REQUIRED_SIZE(struct sockaddr_can, can_ifindex)
 #define MASK_ALL 0
 /* A raw socket has a list of can_filters attached to it, each receiving
@@ -396,7 +398,7 @@ static int raw_bind(struct socket *sock, struct sockaddr *uaddr, int len)
 int err = 0;
 int notify_enetdown = 0;
-if (len < CAN_REQUIRED_SIZE(*addr, can_ifindex))
+if (len < RAW_MIN_NAMELEN)
 return -EINVAL;
 if (addr->can_family != AF_CAN)
 return -EINVAL;
@@ -477,11 +479,11 @@ static int raw_getname(struct socket *sock, struct sockaddr *uaddr,
 if (peer)
 return -EOPNOTSUPP;
-memset(addr, 0, sizeof(*addr));
+memset(addr, 0, RAW_MIN_NAMELEN);
 addr->can_family = AF_CAN;
 addr->can_ifindex = ro->ifindex;
-return sizeof(*addr);
+return RAW_MIN_NAMELEN;
 }
 static int raw_setsockopt(struct socket *sock, int level, int optname,
@@ -733,7 +735,7 @@ static int raw_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
 if (msg->msg_name) {
 DECLARE_SOCKADDR(struct sockaddr_can *, addr, msg->msg_name);
-if (msg->msg_namelen < CAN_REQUIRED_SIZE(*addr, can_ifindex))
+if (msg->msg_namelen < RAW_MIN_NAMELEN)
 return -EINVAL;
 if (addr->can_family != AF_CAN)
@@ -822,8 +824,8 @@ static int raw_recvmsg(struct socket *sock, struct msghdr *msg, size_t size,
 sock_recv_ts_and_drops(msg, sk, skb);
 if (msg->msg_name) {
-__sockaddr_check_size(sizeof(struct sockaddr_can));
-msg->msg_namelen = sizeof(struct sockaddr_can);
+__sockaddr_check_size(RAW_MIN_NAMELEN);
+msg->msg_namelen = RAW_MIN_NAMELEN;
 memcpy(msg->msg_name, skb->cb, msg->msg_namelen);
 }


@@ -2031,16 +2031,10 @@ void skb_orphan_partial(struct sk_buff *skb)
 if (skb_is_tcp_pure_ack(skb))
 return;
-if (can_skb_orphan_partial(skb)) {
-struct sock *sk = skb->sk;
-if (refcount_inc_not_zero(&sk->sk_refcnt)) {
-WARN_ON(refcount_sub_and_test(skb->truesize, &sk->sk_wmem_alloc));
-skb->destructor = sock_efree;
-}
-} else {
+if (can_skb_orphan_partial(skb))
+skb_set_owner_sk_safe(skb, skb->sk);
+else
 skb_orphan(skb);
-}
 }
 EXPORT_SYMBOL(skb_orphan_partial);


@@ -229,6 +229,7 @@ static int hsr_dev_xmit(struct sk_buff *skb, struct net_device *dev)
 master = hsr_port_get_hsr(hsr, HSR_PT_MASTER);
 if (master) {
 skb->dev = master->dev;
+skb_reset_mac_header(skb);
 hsr_forward_skb(skb, master);
 } else {
 atomic_long_inc(&dev->tx_dropped);


@@ -349,12 +349,6 @@ void hsr_forward_skb(struct sk_buff *skb, struct hsr_port *port)
 {
 struct hsr_frame_info frame;
-if (skb_mac_header(skb) != skb->data) {
-WARN_ONCE(1, "%s:%d: Malformed frame (port_src %s)\n",
-__FILE__, __LINE__, port->dev->name);
-goto out_drop;
-}
 if (hsr_fill_frame_info(&frame, skb, port) < 0)
 goto out_drop;
 hsr_register_frame_in(frame.node_src, port, frame.sequence_nr);


@@ -551,9 +551,7 @@ ieee802154_llsec_parse_key_id(struct genl_info *info,
 desc->mode = nla_get_u8(info->attrs[IEEE802154_ATTR_LLSEC_KEY_MODE]);
 if (desc->mode == IEEE802154_SCF_KEY_IMPLICIT) {
-if (!info->attrs[IEEE802154_ATTR_PAN_ID] &&
-!(info->attrs[IEEE802154_ATTR_SHORT_ADDR] ||
-info->attrs[IEEE802154_ATTR_HW_ADDR]))
+if (!info->attrs[IEEE802154_ATTR_PAN_ID])
 return -EINVAL;
 desc->device_addr.pan_id = nla_get_shortaddr(info->attrs[IEEE802154_ATTR_PAN_ID]);
@@ -562,6 +560,9 @@ ieee802154_llsec_parse_key_id(struct genl_info *info,
 desc->device_addr.mode = IEEE802154_ADDR_SHORT;
 desc->device_addr.short_addr = nla_get_shortaddr(info->attrs[IEEE802154_ATTR_SHORT_ADDR]);
 } else {
+if (!info->attrs[IEEE802154_ATTR_HW_ADDR])
+return -EINVAL;
 desc->device_addr.mode = IEEE802154_ADDR_LONG;
 desc->device_addr.extended_addr = nla_get_hwaddr(info->attrs[IEEE802154_ATTR_HW_ADDR]);
 }


@@ -836,8 +836,13 @@ nl802154_send_iface(struct sk_buff *msg, u32 portid, u32 seq, int flags,
 goto nla_put_failure;
 #ifdef CONFIG_IEEE802154_NL802154_EXPERIMENTAL
+if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+goto out;
 if (nl802154_get_llsec_params(msg, rdev, wpan_dev) < 0)
 goto nla_put_failure;
+out:
 #endif /* CONFIG_IEEE802154_NL802154_EXPERIMENTAL */
 genlmsg_end(msg, hdr);
@@ -1400,6 +1405,9 @@ static int nl802154_set_llsec_params(struct sk_buff *skb,
 u32 changed = 0;
 int ret;
+if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+return -EOPNOTSUPP;
 if (info->attrs[NL802154_ATTR_SEC_ENABLED]) {
 u8 enabled;
@@ -1560,7 +1568,8 @@ static int nl802154_add_llsec_key(struct sk_buff *skb, struct genl_info *info)
 struct ieee802154_llsec_key_id id = { };
 u32 commands[NL802154_CMD_FRAME_NR_IDS / 32] = { };
-if (nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack))
+if (!info->attrs[NL802154_ATTR_SEC_KEY] ||
+nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack))
 return -EINVAL;
 if (!attrs[NL802154_KEY_ATTR_USAGE_FRAMES] ||
@@ -1608,7 +1617,8 @@ static int nl802154_del_llsec_key(struct sk_buff *skb, struct genl_info *info)
 struct nlattr *attrs[NL802154_KEY_ATTR_MAX + 1];
 struct ieee802154_llsec_key_id id;
-if (nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack))
+if (!info->attrs[NL802154_ATTR_SEC_KEY] ||
+nla_parse_nested_deprecated(attrs, NL802154_KEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_KEY], nl802154_key_policy, info->extack))
 return -EINVAL;
 if (ieee802154_llsec_parse_key_id(attrs[NL802154_KEY_ATTR_ID], &id) < 0)
@@ -1773,7 +1783,8 @@ static int nl802154_del_llsec_dev(struct sk_buff *skb, struct genl_info *info)
 struct nlattr *attrs[NL802154_DEV_ATTR_MAX + 1];
 __le64 extended_addr;
-if (nla_parse_nested_deprecated(attrs, NL802154_DEV_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVICE], nl802154_dev_policy, info->extack))
+if (!info->attrs[NL802154_ATTR_SEC_DEVICE] ||
+nla_parse_nested_deprecated(attrs, NL802154_DEV_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVICE], nl802154_dev_policy, info->extack))
 return -EINVAL;
 if (!attrs[NL802154_DEV_ATTR_EXTENDED_ADDR])
@@ -1929,7 +1940,8 @@ static int nl802154_del_llsec_devkey(struct sk_buff *skb, struct genl_info *info
 struct ieee802154_llsec_device_key key;
 __le64 extended_addr;
-if (nla_parse_nested_deprecated(attrs, NL802154_DEVKEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVKEY], nl802154_devkey_policy, info->extack))
+if (!info->attrs[NL802154_ATTR_SEC_DEVKEY] ||
+nla_parse_nested_deprecated(attrs, NL802154_DEVKEY_ATTR_MAX, info->attrs[NL802154_ATTR_SEC_DEVKEY], nl802154_devkey_policy, info->extack))
 return -EINVAL;
 if (!attrs[NL802154_DEVKEY_ATTR_EXTENDED_ADDR])
@@ -2101,6 +2113,9 @@ static int nl802154_del_llsec_seclevel(struct sk_buff *skb,
 struct wpan_dev *wpan_dev = dev->ieee802154_ptr;
 struct ieee802154_llsec_seclevel sl;
+if (wpan_dev->iftype == NL802154_IFTYPE_MONITOR)
+return -EOPNOTSUPP;
 if (!info->attrs[NL802154_ATTR_SEC_LEVEL] ||
 llsec_parse_seclevel(info->attrs[NL802154_ATTR_SEC_LEVEL],
 &sl) < 0)


@@ -177,10 +177,12 @@ static struct sk_buff *esp4_gso_segment(struct sk_buff *skb,
 if ((!(skb->dev->gso_partial_features & NETIF_F_HW_ESP) &&
 !(features & NETIF_F_HW_ESP)) || x->xso.dev != skb->dev)
-esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK);
+esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK |
+NETIF_F_SCTP_CRC);
 else if (!(features & NETIF_F_HW_ESP_TX_CSUM) &&
 !(skb->dev->gso_partial_features & NETIF_F_HW_ESP_TX_CSUM))
-esp_features = features & ~NETIF_F_CSUM_MASK;
+esp_features = features & ~(NETIF_F_CSUM_MASK |
+NETIF_F_SCTP_CRC);
 xo->flags |= XFRM_GSO_SEGMENT;


@@ -2692,6 +2692,10 @@ int udp_lib_getsockopt(struct sock *sk, int level, int optname,
 val = up->gso_size;
 break;
+case UDP_GRO:
+val = up->gro_enabled;
+break;
 /* The following two cannot be changed on UDP sockets, the return is
 * always 0 (which corresponds to the full checksum coverage of UDP). */
 case UDPLITE_SEND_CSCOV:


@@ -210,9 +210,11 @@ static struct sk_buff *esp6_gso_segment(struct sk_buff *skb,
 	skb->encap_hdr_csum = 1;
 
 	if (!(features & NETIF_F_HW_ESP) || x->xso.dev != skb->dev)
-		esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK);
+		esp_features = features & ~(NETIF_F_SG | NETIF_F_CSUM_MASK |
+					    NETIF_F_SCTP_CRC);
 	else if (!(features & NETIF_F_HW_ESP_TX_CSUM))
-		esp_features = features & ~NETIF_F_CSUM_MASK;
+		esp_features = features & ~(NETIF_F_CSUM_MASK |
+					    NETIF_F_SCTP_CRC);
 
 	xo->flags |= XFRM_GSO_SEGMENT;


@@ -298,7 +298,7 @@ static int rawv6_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
 			 */
 			v4addr = LOOPBACK4_IPV6;
 			if (!(addr_type & IPV6_ADDR_MULTICAST) &&
-			    !sock_net(sk)->ipv6.sysctl.ip_nonlocal_bind) {
+			    !ipv6_can_nonlocal_bind(sock_net(sk), inet)) {
 				err = -EADDRNOTAVAIL;
 				if (!ipv6_chk_addr(sock_net(sk), &addr->sin6_addr,
 						   dev, 0)) {


@@ -5160,9 +5160,11 @@ static int ip6_route_multipath_add(struct fib6_config *cfg,
		 * nexthops have been replaced by first new, the rest should
		 * be added to it.
		 */
-		cfg->fc_nlinfo.nlh->nlmsg_flags &= ~(NLM_F_EXCL |
-						     NLM_F_REPLACE);
-		cfg->fc_nlinfo.nlh->nlmsg_flags |= NLM_F_CREATE;
+		if (cfg->fc_nlinfo.nlh) {
+			cfg->fc_nlinfo.nlh->nlmsg_flags &= ~(NLM_F_EXCL |
+							     NLM_F_REPLACE);
+			cfg->fc_nlinfo.nlh->nlmsg_flags |= NLM_F_CREATE;
+		}
 		nhn++;
 	}


@@ -3582,7 +3582,7 @@ struct sk_buff *ieee80211_tx_dequeue(struct ieee80211_hw *hw,
 	    test_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txqi->flags))
 		goto out;
 
-	if (vif->txqs_stopped[ieee80211_ac_from_tid(txq->tid)]) {
+	if (vif->txqs_stopped[txq->ac]) {
 		set_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txqi->flags);
 		goto out;
 	}


@@ -152,7 +152,7 @@ llsec_key_alloc(const struct ieee802154_llsec_key *template)
 	crypto_free_sync_skcipher(key->tfm0);
 err_tfm:
 	for (i = 0; i < ARRAY_SIZE(key->tfm); i++)
-		if (key->tfm[i])
+		if (!IS_ERR_OR_NULL(key->tfm[i]))
 			crypto_free_aead(key->tfm[i]);
 
 	kzfree(key);


@@ -103,13 +103,20 @@ static void ncsi_channel_monitor(struct timer_list *t)
 	monitor_state = nc->monitor.state;
 	spin_unlock_irqrestore(&nc->lock, flags);
 
-	if (!enabled || chained) {
-		ncsi_stop_channel_monitor(nc);
-		return;
-	}
+	if (!enabled)
+		return;	/* expected race disabling timer */
+	if (WARN_ON_ONCE(chained))
+		goto bad_state;
+
 	if (state != NCSI_CHANNEL_INACTIVE &&
 	    state != NCSI_CHANNEL_ACTIVE) {
-		ncsi_stop_channel_monitor(nc);
+bad_state:
+		netdev_warn(ndp->ndev.dev,
+			    "Bad NCSI monitor state channel %d 0x%x %s queue\n",
+			    nc->id, state, chained ? "on" : "off");
+		spin_lock_irqsave(&nc->lock, flags);
+		nc->monitor.enabled = false;
+		spin_unlock_irqrestore(&nc->lock, flags);
 		return;
 	}
@@ -134,10 +141,9 @@ static void ncsi_channel_monitor(struct timer_list *t)
 		ncsi_report_link(ndp, true);
 		ndp->flags |= NCSI_DEV_RESHUFFLE;
 
-		ncsi_stop_channel_monitor(nc);
-
 		ncm = &nc->modes[NCSI_MODE_LINK];
 		spin_lock_irqsave(&nc->lock, flags);
+		nc->monitor.enabled = false;
 		nc->state = NCSI_CHANNEL_INVISIBLE;
 		ncm->data[2] &= ~0x1;
 		spin_unlock_irqrestore(&nc->lock, flags);


@@ -108,11 +108,13 @@ static int llcp_sock_bind(struct socket *sock, struct sockaddr *addr, int alen)
 					  llcp_sock->service_name_len,
 					  GFP_KERNEL);
 	if (!llcp_sock->service_name) {
+		nfc_llcp_local_put(llcp_sock->local);
 		ret = -ENOMEM;
 		goto put_dev;
 	}
 	llcp_sock->ssap = nfc_llcp_get_sdp_ssap(local, llcp_sock);
 	if (llcp_sock->ssap == LLCP_SAP_MAX) {
+		nfc_llcp_local_put(llcp_sock->local);
 		kfree(llcp_sock->service_name);
 		llcp_sock->service_name = NULL;
 		ret = -EADDRINUSE;
@@ -671,6 +673,10 @@ static int llcp_sock_connect(struct socket *sock, struct sockaddr *_addr,
 		ret = -EISCONN;
 		goto error;
 	}
+	if (sk->sk_state == LLCP_CONNECTING) {
+		ret = -EINPROGRESS;
+		goto error;
+	}
 
 	dev = nfc_get_device(addr->dev_idx);
 	if (dev == NULL) {
@@ -702,6 +708,7 @@ static int llcp_sock_connect(struct socket *sock, struct sockaddr *_addr,
 	llcp_sock->local = nfc_llcp_local_get(local);
 	llcp_sock->ssap = nfc_llcp_get_local_ssap(local);
 	if (llcp_sock->ssap == LLCP_SAP_MAX) {
+		nfc_llcp_local_put(llcp_sock->local);
 		ret = -ENOMEM;
 		goto put_dev;
 	}
@@ -743,9 +750,12 @@ static int llcp_sock_connect(struct socket *sock, struct sockaddr *_addr,
 
 sock_unlink:
 	nfc_llcp_sock_unlink(&local->connecting_sockets, sk);
+	kfree(llcp_sock->service_name);
+	llcp_sock->service_name = NULL;
 
 sock_llcp_release:
 	nfc_llcp_put_ssap(local, llcp_sock->ssap);
+	nfc_llcp_local_put(llcp_sock->local);
 
 put_dev:
 	nfc_put_device(dev);


@@ -2019,16 +2019,12 @@ static int ovs_ct_limit_del_zone_limit(struct nlattr *nla_zone_limit,
 static int ovs_ct_limit_get_default_limit(struct ovs_ct_limit_info *info,
 					  struct sk_buff *reply)
 {
-	struct ovs_zone_limit zone_limit;
-	int err;
-
-	zone_limit.zone_id = OVS_ZONE_LIMIT_DEFAULT_ZONE;
-	zone_limit.limit = info->default_limit;
-	err = nla_put_nohdr(reply, sizeof(zone_limit), &zone_limit);
-	if (err)
-		return err;
-
-	return 0;
+	struct ovs_zone_limit zone_limit = {
+		.zone_id = OVS_ZONE_LIMIT_DEFAULT_ZONE,
+		.limit = info->default_limit,
+	};
+
+	return nla_put_nohdr(reply, sizeof(zone_limit), &zone_limit);
 }
 
 static int __ovs_ct_limit_get_zone_limit(struct net *net,


@@ -347,8 +347,9 @@ struct rds_message *rds_message_map_pages(unsigned long *page_addrs, unsigned in
 	rm->data.op_nents = DIV_ROUND_UP(total_len, PAGE_SIZE);
 	rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs);
 	if (IS_ERR(rm->data.op_sg)) {
+		void *err = ERR_CAST(rm->data.op_sg);
 		rds_message_put(rm);
-		return ERR_CAST(rm->data.op_sg);
+		return err;
 	}
 
 	for (i = 0; i < rm->data.op_nents; ++i) {


@@ -935,6 +935,9 @@ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp,
 	if (err != ACT_P_CREATED)
 		module_put(a_o->owner);
 
+	if (!bind && ovr && err == ACT_P_CREATED)
+		refcount_set(&a->tcfa_refcnt, 2);
+
 	return a;
 
 err_mod:


@@ -134,6 +134,9 @@ teql_destroy(struct Qdisc *sch)
 	struct teql_sched_data *dat = qdisc_priv(sch);
 	struct teql_master *master = dat->m;
 
+	if (!master)
+		return;
+
 	prev = master->slaves;
 	if (prev) {
 		do {


@@ -643,8 +643,8 @@ static int sctp_v6_available(union sctp_addr *addr, struct sctp_sock *sp)
 	if (!(type & IPV6_ADDR_UNICAST))
 		return 0;
 
-	return sp->inet.freebind || net->ipv6.sysctl.ip_nonlocal_bind ||
+	return ipv6_can_nonlocal_bind(net, &sp->inet) ||
 		ipv6_chk_addr(net, in6, NULL, 0);
 }
 
 /* This function checks if the address is a valid address to be used for
@@ -933,8 +933,7 @@ static int sctp_inet6_bind_verify(struct sctp_sock *opt, union sctp_addr *addr)
 			net = sock_net(&opt->inet.sk);
 			rcu_read_lock();
 			dev = dev_get_by_index_rcu(net, addr->v6.sin6_scope_id);
-			if (!dev || !(opt->inet.freebind ||
-				      net->ipv6.sysctl.ip_nonlocal_bind ||
+			if (!dev || !(ipv6_can_nonlocal_bind(net, &opt->inet) ||
 				      ipv6_chk_addr(net, &addr->v6.sin6_addr,
 						    dev, 0))) {
 				rcu_read_unlock();


@@ -1210,7 +1210,7 @@ void tipc_sk_mcast_rcv(struct net *net, struct sk_buff_head *arrvq,
 		spin_lock_bh(&inputq->lock);
 		if (skb_peek(arrvq) == skb) {
 			skb_queue_splice_tail_init(&tmpq, inputq);
-			kfree_skb(__skb_dequeue(arrvq));
+			__skb_dequeue(arrvq);
 		}
 		spin_unlock_bh(&inputq->lock);
 		__skb_queue_purge(&tmpq);


@@ -530,7 +530,7 @@ static int cfg80211_sme_connect(struct wireless_dev *wdev,
 		cfg80211_sme_free(wdev);
 	}
 
-	if (WARN_ON(wdev->conn))
+	if (wdev->conn)
 		return -EINPROGRESS;
 
 	wdev->conn = kzalloc(sizeof(*wdev->conn), GFP_KERNEL);


@@ -302,6 +302,8 @@ xfrmi_xmit2(struct sk_buff *skb, struct net_device *dev, struct flowi *fl)
 			icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
 		} else {
+			if (!(ip_hdr(skb)->frag_off & htons(IP_DF)))
+				goto xmit;
 			icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
 				      htonl(mtu));
 		}
@@ -310,6 +312,7 @@ xfrmi_xmit2(struct sk_buff *skb, struct net_device *dev, struct flowi *fl)
 		return -EMSGSIZE;
 	}
 
+xmit:
 	xfrmi_scrub_packet(skb, !net_eq(xi->net, dev_net(dev)));
 	skb_dst_set(skb, dst);
 	skb->dev = tdev;


@@ -44,7 +44,6 @@ static void xfrm_state_gc_task(struct work_struct *work);
  */
 
 static unsigned int xfrm_state_hashmax __read_mostly = 1 * 1024 * 1024;
-static __read_mostly seqcount_t xfrm_state_hash_generation = SEQCNT_ZERO(xfrm_state_hash_generation);
 
 static struct kmem_cache *xfrm_state_cache __ro_after_init;
 static DECLARE_WORK(xfrm_state_gc_work, xfrm_state_gc_task);
@@ -140,7 +139,7 @@ static void xfrm_hash_resize(struct work_struct *work)
 	}
 
 	spin_lock_bh(&net->xfrm.xfrm_state_lock);
-	write_seqcount_begin(&xfrm_state_hash_generation);
+	write_seqcount_begin(&net->xfrm.xfrm_state_hash_generation);
 
 	nhashmask = (nsize / sizeof(struct hlist_head)) - 1U;
 	odst = xfrm_state_deref_prot(net->xfrm.state_bydst, net);
@@ -156,7 +155,7 @@ static void xfrm_hash_resize(struct work_struct *work)
 	rcu_assign_pointer(net->xfrm.state_byspi, nspi);
 	net->xfrm.state_hmask = nhashmask;
 
-	write_seqcount_end(&xfrm_state_hash_generation);
+	write_seqcount_end(&net->xfrm.xfrm_state_hash_generation);
 	spin_unlock_bh(&net->xfrm.xfrm_state_lock);
 
 	osize = (ohashmask + 1) * sizeof(struct hlist_head);
@@ -1058,7 +1057,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
 
 	to_put = NULL;
 
-	sequence = read_seqcount_begin(&xfrm_state_hash_generation);
+	sequence = read_seqcount_begin(&net->xfrm.xfrm_state_hash_generation);
 
 	rcu_read_lock();
 	h = xfrm_dst_hash(net, daddr, saddr, tmpl->reqid, encap_family);
@@ -1171,7 +1170,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
 	if (to_put)
 		xfrm_state_put(to_put);
 
-	if (read_seqcount_retry(&xfrm_state_hash_generation, sequence)) {
+	if (read_seqcount_retry(&net->xfrm.xfrm_state_hash_generation, sequence)) {
 		*err = -EAGAIN;
 		if (x) {
 			xfrm_state_put(x);
@@ -2662,6 +2661,7 @@ int __net_init xfrm_state_init(struct net *net)
 	net->xfrm.state_num = 0;
 	INIT_WORK(&net->xfrm.state_hash_work, xfrm_hash_resize);
 	spin_lock_init(&net->xfrm.xfrm_state_lock);
+	seqcount_init(&net->xfrm.xfrm_state_hash_generation);
 	return 0;
 
 out_byspi:


@@ -1035,6 +1035,14 @@ static int loopback_mixer_new(struct loopback *loopback, int notify)
 					return -ENOMEM;
 				kctl->id.device = dev;
 				kctl->id.subdevice = substr;
+
+				/* Add the control before copying the id so that
+				 * the numid field of the id is set in the copy.
+				 */
+				err = snd_ctl_add(card, kctl);
+				if (err < 0)
+					return err;
+
 				switch (idx) {
 				case ACTIVE_IDX:
 					setup->active_id = kctl->id;
@@ -1051,9 +1059,6 @@ static int loopback_mixer_new(struct loopback *loopback, int notify)
 				default:
 					break;
 				}
-				err = snd_ctl_add(card, kctl);
-				if (err < 0)
-					return err;
 			}
 		}
 	}


@@ -3917,6 +3917,15 @@ static void alc271_fixup_dmic(struct hda_codec *codec,
 		snd_hda_sequence_write(codec, verbs);
 }
 
+/* Fix the speaker amp after resume, etc */
+static void alc269vb_fixup_aspire_e1_coef(struct hda_codec *codec,
+					  const struct hda_fixup *fix,
+					  int action)
+{
+	if (action == HDA_FIXUP_ACT_INIT)
+		alc_update_coef_idx(codec, 0x0d, 0x6000, 0x6000);
+}
+
 static void alc269_fixup_pcm_44k(struct hda_codec *codec,
 				 const struct hda_fixup *fix, int action)
 {
@@ -6220,6 +6229,7 @@ enum {
 	ALC283_FIXUP_HEADSET_MIC,
 	ALC255_FIXUP_MIC_MUTE_LED,
 	ALC282_FIXUP_ASPIRE_V5_PINS,
+	ALC269VB_FIXUP_ASPIRE_E1_COEF,
 	ALC280_FIXUP_HP_GPIO4,
 	ALC286_FIXUP_HP_GPIO_LED,
 	ALC280_FIXUP_HP_GPIO2_MIC_HOTKEY,
@@ -6890,6 +6900,10 @@ static const struct hda_fixup alc269_fixups[] = {
 			{ },
 		},
 	},
+	[ALC269VB_FIXUP_ASPIRE_E1_COEF] = {
+		.type = HDA_FIXUP_FUNC,
+		.v.func = alc269vb_fixup_aspire_e1_coef,
+	},
 	[ALC280_FIXUP_HP_GPIO4] = {
 		.type = HDA_FIXUP_FUNC,
 		.v.func = alc280_fixup_hp_gpio4,
@@ -7764,6 +7778,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
 	SND_PCI_QUIRK(0x1025, 0x0762, "Acer Aspire E1-472", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572),
 	SND_PCI_QUIRK(0x1025, 0x0775, "Acer Aspire E1-572", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572),
 	SND_PCI_QUIRK(0x1025, 0x079b, "Acer Aspire V5-573G", ALC282_FIXUP_ASPIRE_V5_PINS),
+	SND_PCI_QUIRK(0x1025, 0x0840, "Acer Aspire E1", ALC269VB_FIXUP_ASPIRE_E1_COEF),
 	SND_PCI_QUIRK(0x1025, 0x101c, "Acer Veriton N2510G", ALC269_FIXUP_LIFEBOOK),
 	SND_PCI_QUIRK(0x1025, 0x102b, "Acer Aspire C24-860", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
 	SND_PCI_QUIRK(0x1025, 0x1065, "Acer Aspire C20-820", ALC269VC_FIXUP_ACER_HEADSET_MIC),
@@ -8240,6 +8255,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
 	{.id = ALC283_FIXUP_HEADSET_MIC, .name = "alc283-headset"},
 	{.id = ALC255_FIXUP_MIC_MUTE_LED, .name = "alc255-dell-mute"},
 	{.id = ALC282_FIXUP_ASPIRE_V5_PINS, .name = "aspire-v5"},
+	{.id = ALC269VB_FIXUP_ASPIRE_E1_COEF, .name = "aspire-e1-coef"},
 	{.id = ALC280_FIXUP_HP_GPIO4, .name = "hp-gpio4"},
 	{.id = ALC286_FIXUP_HP_GPIO_LED, .name = "hp-gpio-led"},
 	{.id = ALC280_FIXUP_HP_GPIO2_MIC_HOTKEY, .name = "hp-gpio2-hotkey"},


@@ -707,7 +707,13 @@ int wm8960_configure_pll(struct snd_soc_component *component, int freq_in,
 	best_freq_out = -EINVAL;
 	*sysclk_idx = *dac_idx = *bclk_idx = -1;
 
-	for (i = 0; i < ARRAY_SIZE(sysclk_divs); ++i) {
+	/*
+	 * From Datasheet, the PLL performs best when f2 is between
+	 * 90MHz and 100MHz, the desired sysclk output is 11.2896MHz
+	 * or 12.288MHz, then sysclkdiv = 2 is the best choice.
+	 * So search sysclk_divs from 2 to 1 other than from 1 to 2.
+	 */
+	for (i = ARRAY_SIZE(sysclk_divs) - 1; i >= 0; --i) {
 		if (sysclk_divs[i] == -1)
 			continue;
 		for (j = 0; j < ARRAY_SIZE(dac_divs); ++j) {


@@ -500,14 +500,14 @@ static struct snd_soc_dai_driver sst_platform_dai[] = {
 		.channels_min = SST_STEREO,
 		.channels_max = SST_STEREO,
 		.rates = SNDRV_PCM_RATE_44100|SNDRV_PCM_RATE_48000,
-		.formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE,
+		.formats = SNDRV_PCM_FMTBIT_S16_LE,
 	},
 	.capture = {
 		.stream_name = "Headset Capture",
 		.channels_min = 1,
 		.channels_max = 2,
 		.rates = SNDRV_PCM_RATE_44100|SNDRV_PCM_RATE_48000,
-		.formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE,
+		.formats = SNDRV_PCM_FMTBIT_S16_LE,
 	},
 },
 {
@@ -518,7 +518,7 @@ static struct snd_soc_dai_driver sst_platform_dai[] = {
 		.channels_min = SST_STEREO,
 		.channels_max = SST_STEREO,
 		.rates = SNDRV_PCM_RATE_44100|SNDRV_PCM_RATE_48000,
-		.formats = SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE,
+		.formats = SNDRV_PCM_FMTBIT_S16_LE,
 	},
 },
 {

Some files were not shown because too many files have changed in this diff.