Networking fixes for 5.11-rc4, including fixes from can and netfilter.

Current release - regressions:

 - fix feature enforcement to allow NETIF_F_HW_TLS_TX if IP_CSUM && IPV6_CSUM

 - dcb: accept RTM_GETDCB messages carrying set-like DCB commands if user is admin for backward-compatibility

 - selftests/tls: fix selftests build after adding ChaCha20-Poly1305

Current release - always broken:

 - ppp: fix refcount underflow on channel unbridge

 - bnxt_en: clear DEFRAG flag in firmware message when retry flashing

 - smc: fix out of bound access in the new netlink interface

Previous releases - regressions:

 - fix use-after-free with UDP GRO by frags

 - mptcp: better msk-level shutdown

 - rndis_host: set proper input size for OID_GEN_PHYSICAL_MEDIUM request

 - i40e: xsk: fix potential NULL pointer dereferencing

Previous releases - always broken:

 - skb frag: kmap_atomic fixes

 - avoid 32 x truesize under-estimation for tiny skbs

 - fix issues around register_netdevice() failures

 - udp: prevent reuseport_select_sock from reading uninitialized socks

 - dsa: unbind all switches from tree when DSA master unbinds

 - dsa: clear devlink port type before unregistering slave netdevs

 - can: isotp: isotp_getname(): fix kernel information leak

 - mlxsw: core: Thermal control fixes

 - ipv6: validate GSO SKB against MTU before finish IPv6 processing

 - stmmac: use __napi_schedule() for PREEMPT_RT

 - net: mvpp2: remove Pause and Asym_Pause support

Misc:

 - remove from MAINTAINERS folks who had been inactive for >5yrs

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEE6jPA+I1ugmIBA4hXMUZtbf5SIrsFAmAAnyYACgkQMUZtbf5S
IrsdmhAAotkTNVS1zEsvwIirI9KUKKMXvNvscpO0+HJgsQHVnCGkfrj0BQmqQR21
D9njJIkGRiIANRO/Y/3wVCew55a0bxLmyE3JaU6krGLpvcNUFX6+fvuuzFSiWtKu
1c/AaXFIDTa8uVtXP/Ve8DfxKZmh3YPX5pNtk3fS6OlymbUfu8pOEPY5k69/Nlmr
QwbGZO0Q5Ab18rmPztgWpcZi8wLbpZYbrIR2E45u3k+LnXG3UUVYeYTC9Hi89wkz
8YiS0PIs6GmWeSWnWK9TWXFSaxV8ttABsFxpbmzWW6oqkaviGjLfPg7kYYRgPu08
nCyYx7LN58shQ8FTfZm1yBpJ1fbPV/5RIMZKQ6Fg4cICgCab63E4N6xxoA9mLNu9
hP/qgeynQ2w1FbPw5yQVbDCVmcyfPb5V4WC1OccHQdgaAzz2SFPxvsUTOoBRxY8m
DmZDHjBi2ZXB3/PSkwWmIsW9PuPq6de8xgHIQtjrCeduvVvmOYkrcdfkMxTx9HC0
LH2a5x9VCL/cf/Y/tQ2TZSntweSq8MhlRV9vOIO1FOqiviYHlnD8+EuIBMe8To14
XRIDMl92lpY5xjJpKdRhZ7Yh4CNMk199yFf5bt3xSlM4A3ALUlwqRKES6I2MZiiF
0Yvxsr2qVShaHx6XpmBAimaUXxTmmUV7X1hf19EEzzmTdiMjad4=
=e8t6
-----END PGP SIGNATURE-----

Merge tag 'net-5.11-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "We have a few fixes for long standing issues, in particular Eric's
  fix to not underestimate the skb sizes, and my fix for brokenness of
  register_netdevice() error path. They may uncover other bugs so we
  will keep an eye on them. Also included are Willem's fixes for
  kmap(_atomic).

  Looking at the "current release" fixes, it seems we are about one rc
  behind a normal cycle. We've previously seen an uptick of "people had
  run their test suites" / "humans actually tried to use new features"
  fixes between rc2 and rc3.

  Summary:

  Current release - regressions:

   - fix feature enforcement to allow NETIF_F_HW_TLS_TX if IP_CSUM &&
     IPV6_CSUM

   - dcb: accept RTM_GETDCB messages carrying set-like DCB commands if
     user is admin for backward-compatibility

   - selftests/tls: fix selftests build after adding ChaCha20-Poly1305

  Current release - always broken:

   - ppp: fix refcount underflow on channel unbridge

   - bnxt_en: clear DEFRAG flag in firmware message when retry flashing

   - smc: fix out of bound access in the new netlink interface

  Previous releases - regressions:

   - fix use-after-free with UDP GRO by frags

   - mptcp: better msk-level shutdown

   - rndis_host: set proper input size for OID_GEN_PHYSICAL_MEDIUM
     request

   - i40e: xsk: fix potential NULL pointer dereferencing

  Previous releases - always broken:

   - skb frag: kmap_atomic fixes

   - avoid 32 x truesize under-estimation for tiny skbs

   - fix issues around register_netdevice() failures

   - udp: prevent reuseport_select_sock from reading uninitialized socks

   - dsa: unbind all switches from tree when DSA master unbinds

   - dsa: clear devlink port type before unregistering slave netdevs

   - can: isotp: isotp_getname(): fix kernel information leak

   - mlxsw: core: Thermal control fixes

   - ipv6: validate GSO SKB against MTU before finish IPv6 processing

   - stmmac: use __napi_schedule() for PREEMPT_RT

   - net: mvpp2: remove Pause and Asym_Pause support

  Misc:

   - remove from MAINTAINERS folks who had been inactive for >5yrs"

* tag 'net-5.11-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (58 commits)
  mptcp: fix locking in mptcp_disconnect()
  net: Allow NETIF_F_HW_TLS_TX if IP_CSUM && IPV6_CSUM
  MAINTAINERS: dccp: move Gerrit Renker to CREDITS
  MAINTAINERS: ipvs: move Wensong Zhang to CREDITS
  MAINTAINERS: tls: move Aviad to CREDITS
  MAINTAINERS: ena: remove Zorik Machulsky from reviewers
  MAINTAINERS: vrf: move Shrijeet to CREDITS
  MAINTAINERS: net: move Alexey Kuznetsov to CREDITS
  MAINTAINERS: altx: move Jay Cliburn to CREDITS
  net: avoid 32 x truesize under-estimation for tiny skbs
  nt: usb: USB_RTL8153_ECM should not default to y
  net: stmmac: fix taprio configuration when base_time is in the past
  net: stmmac: fix taprio schedule configuration
  net: tip: fix a couple kernel-doc markups
  net: sit: unregister_netdevice on newlink's error path
  net: stmmac: Fixed mtu channged by cache aligned
  cxgb4/chtls: Fix tid stuck due to wrong update of qid
  i40e: fix potential NULL pointer dereferencing
  net: stmmac: use __napi_schedule() for PREEMPT_RT
  can: mcp251xfd: mcp251xfd_handle_rxif_one(): fix wrong NULL pointer check
  ...
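A note on the register_netdevice() error-path fixes called out above: the "struct net_device lifetime rules" documentation added in the diff below states that needs_free_netdev only takes effect after a successful registration, so when register_netdev()/register_netdevice() fails the driver itself must call free_netdev(). The following is a minimal driver-side sketch of that convention, not code from this merge; my_priv, my_setup_hw() and my_teardown_hw() are hypothetical placeholders.

    #include <linux/etherdevice.h>
    #include <linux/netdevice.h>

    struct my_priv {
            int dummy;                              /* stand-in private state */
    };

    static int my_setup_hw(struct net_device *dev) { return 0; }   /* stub */
    static void my_teardown_hw(struct net_device *dev) { }         /* stub */

    static int my_driver_probe(void)
    {
            struct net_device *dev;
            int err;

            dev = alloc_etherdev(sizeof(struct my_priv));
            if (!dev)
                    return -ENOMEM;

            err = my_setup_hw(dev);                 /* all setup before registering */
            if (err)
                    goto err_free;

            err = register_netdev(dev);
            if (err)
                    goto err_undo_hw;               /* device never became visible */

            return 0;                               /* later teardown starts with unregister_netdev() */

    err_undo_hw:
            my_teardown_hw(dev);
    err_free:
            free_netdev(dev);                       /* on registration failure the driver frees it */
            return err;
    }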
@@ -710,6 +710,10 @@ S: Las Cuevas 2385 - Bo Guemes
 S: Las Heras, Mendoza CP 5539
 S: Argentina

+N: Jay Cliburn
+E: jcliburn@gmail.com
+D: ATLX Ethernet drivers
+
 N: Steven P. Cole
 E: scole@lanl.gov
 E: elenstev@mesatop.com
@@ -1284,6 +1288,10 @@ D: Major kbuild rework during the 2.5 cycle
 D: ISDN Maintainer
 S: USA

+N: Gerrit Renker
+E: gerrit@erg.abdn.ac.uk
+D: DCCP protocol support.
+
 N: Philip Gladstone
 E: philip@gladstonefamily.net
 D: Kernel / timekeeping stuff
@@ -2138,6 +2146,10 @@ E: seasons@falcon.sch.bme.hu
 E: seasons@makosteszta.sote.hu
 D: Original author of software suspend

+N: Alexey Kuznetsov
+E: kuznet@ms2.inr.ac.ru
+D: Author and maintainer of large parts of the networking stack
+
 N: Jaroslav Kysela
 E: perex@perex.cz
 W: https://www.perex.cz
@@ -2696,6 +2708,10 @@ N: Wolfgang Muees
 E: wolfgang@iksw-muees.de
 D: Auerswald USB driver

+N: Shrijeet Mukherjee
+E: shrijeet@gmail.com
+D: Network routing domains (VRF).
+
 N: Paul Mundt
 E: paul.mundt@gmail.com
 D: SuperH maintainer
@@ -4110,6 +4126,10 @@ S: B-1206 Jingmao Guojigongyu
 S: 16 Baliqiao Nanjie, Beijing 101100
 S: People's Repulic of China

+N: Aviad Yehezkel
+E: aviadye@nvidia.com
+D: Kernel TLS implementation and offload support.
+
 N: Victor Yodaiken
 E: yodaiken@fsmlabs.com
 D: RTLinux (RealTime Linux)
@@ -4167,6 +4187,10 @@ S: 1507 145th Place SE #B5
 S: Bellevue, Washington 98007
 S: USA

+N: Wensong Zhang
+E: wensong@linux-vs.org
+D: IP virtual server (IPVS).
+
 N: Haojian Zhuang
 E: haojian.zhuang@gmail.com
 D: MMP support
@@ -163,6 +163,7 @@ allOf:
 enum:
 - renesas,etheravb-r8a774a1
 - renesas,etheravb-r8a774b1
+- renesas,etheravb-r8a774e1
 - renesas,etheravb-r8a7795
 - renesas,etheravb-r8a7796
 - renesas,etheravb-r8a77961
@@ -161,7 +161,8 @@ properties:
 * snps,route-dcbcp, DCB Control Packets
 * snps,route-up, Untagged Packets
 * snps,route-multi-broad, Multicast & Broadcast Packets
-* snps,priority, RX queue priority (Range 0x0 to 0xF)
+* snps,priority, bitmask of the tagged frames priorities assigned to
+  the queue

 snps,mtl-tx-config:
 $ref: /schemas/types.yaml#/definitions/phandle
@@ -188,7 +189,10 @@ properties:
 * snps,idle_slope, unlock on WoL
 * snps,high_credit, max write outstanding req. limit
 * snps,low_credit, max read outstanding req. limit
-* snps,priority, TX queue priority (Range 0x0 to 0xF)
+* snps,priority, bitmask of the priorities assigned to the queue.
+  When a PFC frame is received with priorities matching the bitmask,
+  the queue is blocked from transmitting for the pause time specified
+  in the PFC frame.

 snps,reset-gpio:
 deprecated: true
@@ -10,18 +10,177 @@ Introduction
 The following is a random collection of documentation regarding
 network devices.

-struct net_device allocation rules
-==================================
+struct net_device lifetime rules
+================================
 Network device structures need to persist even after module is unloaded and
 must be allocated with alloc_netdev_mqs() and friends.
 If device has registered successfully, it will be freed on last use
-by free_netdev(). This is required to handle the pathologic case cleanly
-(example: rmmod mydriver </sys/class/net/myeth/mtu )
+by free_netdev(). This is required to handle the pathological case cleanly
+(example: ``rmmod mydriver </sys/class/net/myeth/mtu``)

-alloc_netdev_mqs()/alloc_netdev() reserve extra space for driver
+alloc_netdev_mqs() / alloc_netdev() reserve extra space for driver
 private data which gets freed when the network device is freed. If
 separately allocated data is attached to the network device
-(netdev_priv(dev)) then it is up to the module exit handler to free that.
+(netdev_priv()) then it is up to the module exit handler to free that.

+There are two groups of APIs for registering struct net_device.
+First group can be used in normal contexts where ``rtnl_lock`` is not already
+held: register_netdev(), unregister_netdev().
+Second group can be used when ``rtnl_lock`` is already held:
+register_netdevice(), unregister_netdevice(), free_netdevice().
+
+Simple drivers
+--------------
+
+Most drivers (especially device drivers) handle lifetime of struct net_device
+in context where ``rtnl_lock`` is not held (e.g. driver probe and remove paths).
+
+In that case the struct net_device registration is done using
+the register_netdev(), and unregister_netdev() functions:
+
+.. code-block:: c
+
+  int probe()
+  {
+    struct my_device_priv *priv;
+    int err;
+
+    dev = alloc_netdev_mqs(...);
+    if (!dev)
+      return -ENOMEM;
+    priv = netdev_priv(dev);
+
+    /* ... do all device setup before calling register_netdev() ...
+     */
+
+    err = register_netdev(dev);
+    if (err)
+      goto err_undo;
+
+    /* net_device is visible to the user! */
+
+  err_undo:
+    /* ... undo the device setup ... */
+    free_netdev(dev);
+    return err;
+  }
+
+  void remove()
+  {
+    unregister_netdev(dev);
+    free_netdev(dev);
+  }
+
+Note that after calling register_netdev() the device is visible in the system.
+Users can open it and start sending / receiving traffic immediately,
+or run any other callback, so all initialization must be done prior to
+registration.
+
+unregister_netdev() closes the device and waits for all users to be done
+with it. The memory of struct net_device itself may still be referenced
+by sysfs but all operations on that device will fail.
+
+free_netdev() can be called after unregister_netdev() returns on when
+register_netdev() failed.
+
+Device management under RTNL
+----------------------------
+
+Registering struct net_device while in context which already holds
+the ``rtnl_lock`` requires extra care. In those scenarios most drivers
+will want to make use of struct net_device's ``needs_free_netdev``
+and ``priv_destructor`` members for freeing of state.
+
+Example flow of netdev handling under ``rtnl_lock``:
+
+.. code-block:: c
+
+  static void my_setup(struct net_device *dev)
+  {
+    dev->needs_free_netdev = true;
+  }
+
+  static void my_destructor(struct net_device *dev)
+  {
+    some_obj_destroy(priv->obj);
+    some_uninit(priv);
+  }
+
+  int create_link()
+  {
+    struct my_device_priv *priv;
+    int err;
+
+    ASSERT_RTNL();
+
+    dev = alloc_netdev(sizeof(*priv), "net%d", NET_NAME_UNKNOWN, my_setup);
+    if (!dev)
+      return -ENOMEM;
+    priv = netdev_priv(dev);
+
+    /* Implicit constructor */
+    err = some_init(priv);
+    if (err)
+      goto err_free_dev;
+
+    priv->obj = some_obj_create();
+    if (!priv->obj) {
+      err = -ENOMEM;
+      goto err_some_uninit;
+    }
+    /* End of constructor, set the destructor: */
+    dev->priv_destructor = my_destructor;
+
+    err = register_netdevice(dev);
+    if (err)
+      /* register_netdevice() calls destructor on failure */
+      goto err_free_dev;
+
+    /* If anything fails now unregister_netdevice() (or unregister_netdev())
+     * will take care of calling my_destructor and free_netdev().
+     */
+
+    return 0;
+
+  err_some_uninit:
+    some_uninit(priv);
+  err_free_dev:
+    free_netdev(dev);
+    return err;
+  }
+
+If struct net_device.priv_destructor is set it will be called by the core
+some time after unregister_netdevice(), it will also be called if
+register_netdevice() fails. The callback may be invoked with or without
+``rtnl_lock`` held.
+
+There is no explicit constructor callback, driver "constructs" the private
+netdev state after allocating it and before registration.
+
+Setting struct net_device.needs_free_netdev makes core call free_netdevice()
+automatically after unregister_netdevice() when all references to the device
+are gone. It only takes effect after a successful call to register_netdevice()
+so if register_netdevice() fails driver is responsible for calling
+free_netdev().
+
+free_netdev() is safe to call on error paths right after unregister_netdevice()
+or when register_netdevice() fails. Parts of netdev (de)registration process
+happen after ``rtnl_lock`` is released, therefore in those cases free_netdev()
+will defer some of the processing until ``rtnl_lock`` is released.
+
+Devices spawned from struct rtnl_link_ops should never free the
+struct net_device directly.
+
+.ndo_init and .ndo_uninit
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``.ndo_init`` and ``.ndo_uninit`` callbacks are called during net_device
+registration and de-registration, under ``rtnl_lock``. Drivers can use
+those e.g. when parts of their init process need to run under ``rtnl_lock``.
+
+``.ndo_init`` runs before device is visible in the system, ``.ndo_uninit``
+runs during de-registering after device is closed but other subsystems
+may still have outstanding references to the netdevice.
+
 MTU
 ===
@@ -530,7 +530,7 @@ TLS device feature flags only control adding of new TLS connection
 offloads, old connections will remain active after flags are cleared.

 TLS encryption cannot be offloaded to devices without checksum calculation
-offload. Hence, TLS TX device feature flag requires NETIF_F_HW_CSUM being set.
+offload. Hence, TLS TX device feature flag requires TX csum offload being set.
 Disabling the latter implies clearing the former. Disabling TX checksum offload
 should not affect old connections, and drivers should make sure checksum
 calculation does not break for them.
@@ -820,7 +820,6 @@ M: Netanel Belgazal <netanel@amazon.com>
 M: Arthur Kiyanovski <akiyano@amazon.com>
 R: Guy Tzalik <gtzalik@amazon.com>
 R: Saeed Bishara <saeedb@amazon.com>
-R: Zorik Machulsky <zorik@amazon.com>
 L: netdev@vger.kernel.org
 S: Supported
 F: Documentation/networking/device_drivers/ethernet/amazon/ena.rst
@@ -2942,7 +2941,6 @@ S: Maintained
 F: drivers/hwmon/asus_atk0110.c

 ATLX ETHERNET DRIVERS
-M: Jay Cliburn <jcliburn@gmail.com>
 M: Chris Snook <chris.snook@gmail.com>
 L: netdev@vger.kernel.org
 S: Maintained
@@ -4922,9 +4920,8 @@ F: Documentation/scsi/dc395x.rst
 F: drivers/scsi/dc395x.*

 DCCP PROTOCOL
-M: Gerrit Renker <gerrit@erg.abdn.ac.uk>
 L: dccp@vger.kernel.org
-S: Maintained
+S: Orphan
 W: http://www.linuxfoundation.org/collaborate/workgroups/networking/dccp
 F: include/linux/dccp.h
 F: include/linux/tfrc.h
@@ -9326,7 +9323,6 @@ W: http://www.adaptec.com/
 F: drivers/scsi/ips*

 IPVS
-M: Wensong Zhang <wensong@linux-vs.org>
 M: Simon Horman <horms@verge.net.au>
 M: Julian Anastasov <ja@ssi.bg>
 L: netdev@vger.kernel.org
@@ -12416,7 +12412,6 @@ F: tools/testing/selftests/net/ipsec.c

 NETWORKING [IPv4/IPv6]
 M: "David S. Miller" <davem@davemloft.net>
-M: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
 M: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
 L: netdev@vger.kernel.org
 S: Maintained
@@ -12473,7 +12468,6 @@ F: net/ipv6/tcp*.c

 NETWORKING [TLS]
 M: Boris Pismenny <borisp@nvidia.com>
-M: Aviad Yehezkel <aviadye@nvidia.com>
 M: John Fastabend <john.fastabend@gmail.com>
 M: Daniel Borkmann <daniel@iogearbox.net>
 M: Jakub Kicinski <kuba@kernel.org>
@@ -19071,7 +19065,6 @@ K: regulator_get_optional

 VRF
 M: David Ahern <dsahern@kernel.org>
-M: Shrijeet Mukherjee <shrijeet@gmail.com>
 L: netdev@vger.kernel.org
 S: Maintained
 F: Documentation/networking/vrf.rst
@@ -1491,7 +1491,7 @@ mcp251xfd_handle_rxif_one(struct mcp251xfd_priv *priv,
 else
 skb = alloc_can_skb(priv->ndev, (struct can_frame **)&cfd);

-if (!cfd) {
+if (!skb) {
 stats->rx_dropped++;
 return 0;
 }
@@ -2532,7 +2532,7 @@ int bnxt_flash_package_from_fw_obj(struct net_device *dev, const struct firmware

 if (rc && ((struct hwrm_err_output *)&resp)->cmd_err ==
 NVM_INSTALL_UPDATE_CMD_ERR_CODE_FRAG_ERR) {
-install.flags |=
+install.flags =
 cpu_to_le16(NVM_INSTALL_UPDATE_REQ_FLAGS_ALLOWED_TO_DEFRAG);

 rc = _hwrm_send_message_silent(bp, &install,
@@ -2546,6 +2546,7 @@ int bnxt_flash_package_from_fw_obj(struct net_device *dev, const struct firmware
 * UPDATE directory and try the flash again
 */
 defrag_attempted = true;
+install.flags = 0;
 rc = __bnxt_flash_nvram(bp->dev,
 BNX_DIR_TYPE_UPDATE,
 BNX_DIR_ORDINAL_FIRST,
@@ -222,8 +222,12 @@ int bnxt_get_ulp_msix_base(struct bnxt *bp)

 int bnxt_get_ulp_stat_ctxs(struct bnxt *bp)
 {
-if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP))
-return BNXT_MIN_ROCE_STAT_CTXS;
+if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) {
+struct bnxt_en_dev *edev = bp->edev;
+
+if (edev->ulp_tbl[BNXT_ROCE_ULP].msix_requested)
+return BNXT_MIN_ROCE_STAT_CTXS;
+}

 return 0;
 }
@@ -40,6 +40,13 @@
 #define TCB_L2T_IX_M 0xfffULL
 #define TCB_L2T_IX_V(x) ((x) << TCB_L2T_IX_S)

+#define TCB_T_FLAGS_W 1
+#define TCB_T_FLAGS_S 0
+#define TCB_T_FLAGS_M 0xffffffffffffffffULL
+#define TCB_T_FLAGS_V(x) ((__u64)(x) << TCB_T_FLAGS_S)
+
+#define TCB_FIELD_COOKIE_TFLAG 1
+
 #define TCB_SMAC_SEL_W 0
 #define TCB_SMAC_SEL_S 24
 #define TCB_SMAC_SEL_M 0xffULL
@@ -575,7 +575,11 @@ int send_tx_flowc_wr(struct sock *sk, int compl,
 void chtls_tcp_push(struct sock *sk, int flags);
 int chtls_push_frames(struct chtls_sock *csk, int comp);
 int chtls_set_tcb_tflag(struct sock *sk, unsigned int bit_pos, int val);
+void chtls_set_tcb_field_rpl_skb(struct sock *sk, u16 word,
+u64 mask, u64 val, u8 cookie,
+int through_l2t);
 int chtls_setkey(struct chtls_sock *csk, u32 keylen, u32 mode, int cipher_type);
+void chtls_set_quiesce_ctrl(struct sock *sk, int val);
 void skb_entail(struct sock *sk, struct sk_buff *skb, int flags);
 unsigned int keyid_to_addr(int start_addr, int keyid);
 void free_tls_keyid(struct sock *sk);
@@ -32,6 +32,7 @@
 #include "chtls.h"
 #include "chtls_cm.h"
 #include "clip_tbl.h"
+#include "t4_tcb.h"

 /*
 * State transitions and actions for close. Note that if we are in SYN_SENT
@@ -267,7 +268,9 @@ static void chtls_send_reset(struct sock *sk, int mode, struct sk_buff *skb)
 if (sk->sk_state != TCP_SYN_RECV)
 chtls_send_abort(sk, mode, skb);
 else
-goto out;
+chtls_set_tcb_field_rpl_skb(sk, TCB_T_FLAGS_W,
+TCB_T_FLAGS_V(TCB_T_FLAGS_M), 0,
+TCB_FIELD_COOKIE_TFLAG, 1);

 return;
 out:
@@ -1949,6 +1952,8 @@ static void chtls_close_con_rpl(struct sock *sk, struct sk_buff *skb)
 else if (tcp_sk(sk)->linger2 < 0 &&
 !csk_flag_nochk(csk, CSK_ABORT_SHUTDOWN))
 chtls_abort_conn(sk, skb);
+else if (csk_flag_nochk(csk, CSK_TX_DATA_SENT))
+chtls_set_quiesce_ctrl(sk, 0);
 break;
 default:
 pr_info("close_con_rpl in bad state %d\n", sk->sk_state);
@@ -2292,6 +2297,28 @@ static int chtls_wr_ack(struct chtls_dev *cdev, struct sk_buff *skb)
 return 0;
 }

+static int chtls_set_tcb_rpl(struct chtls_dev *cdev, struct sk_buff *skb)
+{
+struct cpl_set_tcb_rpl *rpl = cplhdr(skb) + RSS_HDR;
+unsigned int hwtid = GET_TID(rpl);
+struct sock *sk;
+
+sk = lookup_tid(cdev->tids, hwtid);
+
+/* return EINVAL if socket doesn't exist */
+if (!sk)
+return -EINVAL;
+
+/* Reusing the skb as size of cpl_set_tcb_field structure
+* is greater than cpl_abort_req
+*/
+if (TCB_COOKIE_G(rpl->cookie) == TCB_FIELD_COOKIE_TFLAG)
+chtls_send_abort(sk, CPL_ABORT_SEND_RST, NULL);
+
+kfree_skb(skb);
+return 0;
+}
+
 chtls_handler_func chtls_handlers[NUM_CPL_CMDS] = {
 [CPL_PASS_OPEN_RPL] = chtls_pass_open_rpl,
 [CPL_CLOSE_LISTSRV_RPL] = chtls_close_listsrv_rpl,
@@ -2304,5 +2331,6 @@ chtls_handler_func chtls_handlers[NUM_CPL_CMDS] = {
 [CPL_CLOSE_CON_RPL] = chtls_conn_cpl,
 [CPL_ABORT_REQ_RSS] = chtls_conn_cpl,
 [CPL_ABORT_RPL_RSS] = chtls_conn_cpl,
 [CPL_FW4_ACK] = chtls_wr_ack,
+[CPL_SET_TCB_RPL] = chtls_set_tcb_rpl,
 };
@@ -88,6 +88,24 @@ static int chtls_set_tcb_field(struct sock *sk, u16 word, u64 mask, u64 val)
 return ret < 0 ? ret : 0;
 }

+void chtls_set_tcb_field_rpl_skb(struct sock *sk, u16 word,
+u64 mask, u64 val, u8 cookie,
+int through_l2t)
+{
+struct sk_buff *skb;
+unsigned int wrlen;
+
+wrlen = sizeof(struct cpl_set_tcb_field) + sizeof(struct ulptx_idata);
+wrlen = roundup(wrlen, 16);
+
+skb = alloc_skb(wrlen, GFP_KERNEL | __GFP_NOFAIL);
+if (!skb)
+return;
+
+__set_tcb_field(sk, skb, word, mask, val, cookie, 0);
+send_or_defer(sk, tcp_sk(sk), skb, through_l2t);
+}
+
 /*
 * Set one of the t_flags bits in the TCB.
 */
@@ -113,6 +131,29 @@ static int chtls_set_tcb_quiesce(struct sock *sk, int val)
 TF_RX_QUIESCE_V(val));
 }

+void chtls_set_quiesce_ctrl(struct sock *sk, int val)
+{
+struct chtls_sock *csk;
+struct sk_buff *skb;
+unsigned int wrlen;
+int ret;
+
+wrlen = sizeof(struct cpl_set_tcb_field) + sizeof(struct ulptx_idata);
+wrlen = roundup(wrlen, 16);
+
+skb = alloc_skb(wrlen, GFP_ATOMIC);
+if (!skb)
+return;
+
+csk = rcu_dereference_sk_user_data(sk);
+
+__set_tcb_field(sk, skb, 1, TF_RX_QUIESCE_V(1), 0, 0, 1);
+set_wr_txq(skb, CPL_PRIORITY_CONTROL, csk->port_id);
+ret = cxgb4_ofld_send(csk->egress_dev, skb);
+if (ret < 0)
+kfree_skb(skb);
+}
+
 /* TLS Key bitmap processing */
 int chtls_init_kmap(struct chtls_dev *cdev, struct cxgb4_lld_info *lldi)
 {
@@ -348,12 +348,12 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
 * SBP is *not* set in PRT_SBPVSI (default not set).
 */
 skb = i40e_construct_skb_zc(rx_ring, *bi);
-*bi = NULL;
 if (!skb) {
 rx_ring->rx_stats.alloc_buff_failed++;
 break;
 }

+*bi = NULL;
 cleaned_count++;
 i40e_inc_ntc(rx_ring);

@@ -5882,8 +5882,6 @@ static void mvpp2_phylink_validate(struct phylink_config *config,

 phylink_set(mask, Autoneg);
 phylink_set_port_modes(mask);
-phylink_set(mask, Pause);
-phylink_set(mask, Asym_Pause);

 switch (state->interface) {
 case PHY_INTERFACE_MODE_10GBASER:
@@ -19,7 +19,7 @@
 #define MLXSW_THERMAL_ASIC_TEMP_NORM 75000 /* 75C */
 #define MLXSW_THERMAL_ASIC_TEMP_HIGH 85000 /* 85C */
 #define MLXSW_THERMAL_ASIC_TEMP_HOT 105000 /* 105C */
-#define MLXSW_THERMAL_ASIC_TEMP_CRIT 110000 /* 110C */
+#define MLXSW_THERMAL_ASIC_TEMP_CRIT 140000 /* 140C */
 #define MLXSW_THERMAL_HYSTERESIS_TEMP 5000 /* 5C */
 #define MLXSW_THERMAL_MODULE_TEMP_SHIFT (MLXSW_THERMAL_HYSTERESIS_TEMP * 2)
 #define MLXSW_THERMAL_ZONE_MAX_NAME 16
@@ -176,6 +176,12 @@ mlxsw_thermal_module_trips_update(struct device *dev, struct mlxsw_core *core,
 if (err)
 return err;

+if (crit_temp > emerg_temp) {
+dev_warn(dev, "%s : Critical threshold %d is above emergency threshold %d\n",
+tz->tzdev->type, crit_temp, emerg_temp);
+return 0;
+}
+
 /* According to the system thermal requirements, the thermal zones are
 * defined with four trip points. The critical and emergency
 * temperature thresholds, provided by QSFP module are set as "active"
@@ -190,11 +196,8 @@ mlxsw_thermal_module_trips_update(struct device *dev, struct mlxsw_core *core,
 tz->trips[MLXSW_THERMAL_TEMP_TRIP_NORM].temp = crit_temp;
 tz->trips[MLXSW_THERMAL_TEMP_TRIP_HIGH].temp = crit_temp;
 tz->trips[MLXSW_THERMAL_TEMP_TRIP_HOT].temp = emerg_temp;
-if (emerg_temp > crit_temp)
-tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp = emerg_temp +
-MLXSW_THERMAL_MODULE_TEMP_SHIFT;
-else
-tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp = emerg_temp;
+tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp = emerg_temp +
+MLXSW_THERMAL_MODULE_TEMP_SHIFT;

 return 0;
 }
@@ -564,11 +564,6 @@ static const struct net_device_ops netxen_netdev_ops = {
 .ndo_set_features = netxen_set_features,
 };

-static inline bool netxen_function_zero(struct pci_dev *pdev)
-{
-return (PCI_FUNC(pdev->devfn) == 0) ? true : false;
-}
-
 static inline void netxen_set_interrupt_mode(struct netxen_adapter *adapter,
 u32 mode)
 {
@@ -664,7 +659,7 @@ static int netxen_setup_intr(struct netxen_adapter *adapter)
 netxen_initialize_interrupt_registers(adapter);
 netxen_set_msix_bit(pdev, 0);

-if (netxen_function_zero(pdev)) {
+if (adapter->portnum == 0) {
 if (!netxen_setup_msi_interrupts(adapter, num_msix))
 netxen_set_interrupt_mode(adapter, NETXEN_MSI_MODE);
 else
@@ -568,68 +568,24 @@ static int dwmac5_est_write(void __iomem *ioaddr, u32 reg, u32 val, bool gcl)
 int dwmac5_est_configure(void __iomem *ioaddr, struct stmmac_est *cfg,
 unsigned int ptp_rate)
 {
-u32 speed, total_offset, offset, ctrl, ctr_low;
-u32 extcfg = readl(ioaddr + GMAC_EXT_CONFIG);
-u32 mac_cfg = readl(ioaddr + GMAC_CONFIG);
 int i, ret = 0x0;
-u64 total_ctr;
+u32 ctrl;

-if (extcfg & GMAC_CONFIG_EIPG_EN) {
-offset = (extcfg & GMAC_CONFIG_EIPG) >> GMAC_CONFIG_EIPG_SHIFT;
-offset = 104 + (offset * 8);
-} else {
-offset = (mac_cfg & GMAC_CONFIG_IPG) >> GMAC_CONFIG_IPG_SHIFT;
-offset = 96 - (offset * 8);
-}
-
-speed = mac_cfg & (GMAC_CONFIG_PS | GMAC_CONFIG_FES);
-speed = speed >> GMAC_CONFIG_FES_SHIFT;
-
-switch (speed) {
-case 0x0:
-offset = offset * 1000; /* 1G */
-break;
-case 0x1:
-offset = offset * 400; /* 2.5G */
-break;
-case 0x2:
-offset = offset * 100000; /* 10M */
-break;
-case 0x3:
-offset = offset * 10000; /* 100M */
-break;
-default:
-return -EINVAL;
-}
-
-offset = offset / 1000;
-
 ret |= dwmac5_est_write(ioaddr, BTR_LOW, cfg->btr[0], false);
 ret |= dwmac5_est_write(ioaddr, BTR_HIGH, cfg->btr[1], false);
 ret |= dwmac5_est_write(ioaddr, TER, cfg->ter, false);
 ret |= dwmac5_est_write(ioaddr, LLR, cfg->gcl_size, false);
+ret |= dwmac5_est_write(ioaddr, CTR_LOW, cfg->ctr[0], false);
+ret |= dwmac5_est_write(ioaddr, CTR_HIGH, cfg->ctr[1], false);
 if (ret)
 return ret;

-total_offset = 0;
 for (i = 0; i < cfg->gcl_size; i++) {
-ret = dwmac5_est_write(ioaddr, i, cfg->gcl[i] + offset, true);
+ret = dwmac5_est_write(ioaddr, i, cfg->gcl[i], true);
 if (ret)
 return ret;
-
-total_offset += offset;
 }

-total_ctr = cfg->ctr[0] + cfg->ctr[1] * 1000000000ULL;
-total_ctr += total_offset;
-
-ctr_low = do_div(total_ctr, 1000000000);
-
-ret |= dwmac5_est_write(ioaddr, CTR_LOW, ctr_low, false);
-ret |= dwmac5_est_write(ioaddr, CTR_HIGH, total_ctr, false);
-if (ret)
-return ret;
-
 ctrl = readl(ioaddr + MTL_EST_CONTROL);
 ctrl &= ~PTOV;
 ctrl |= ((1000000000 / ptp_rate) * 6) << PTOV_SHIFT;
|
|||||||
spin_lock_irqsave(&ch->lock, flags);
|
spin_lock_irqsave(&ch->lock, flags);
|
||||||
stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 1, 0);
|
stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 1, 0);
|
||||||
spin_unlock_irqrestore(&ch->lock, flags);
|
spin_unlock_irqrestore(&ch->lock, flags);
|
||||||
__napi_schedule_irqoff(&ch->rx_napi);
|
__napi_schedule(&ch->rx_napi);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -2193,7 +2193,7 @@ static int stmmac_napi_check(struct stmmac_priv *priv, u32 chan)
|
|||||||
spin_lock_irqsave(&ch->lock, flags);
|
spin_lock_irqsave(&ch->lock, flags);
|
||||||
stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 0, 1);
|
stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 0, 1);
|
||||||
spin_unlock_irqrestore(&ch->lock, flags);
|
spin_unlock_irqrestore(&ch->lock, flags);
|
||||||
__napi_schedule_irqoff(&ch->tx_napi);
|
__napi_schedule(&ch->tx_napi);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -4026,6 +4026,7 @@ static int stmmac_change_mtu(struct net_device *dev, int new_mtu)
|
|||||||
{
|
{
|
||||||
struct stmmac_priv *priv = netdev_priv(dev);
|
struct stmmac_priv *priv = netdev_priv(dev);
|
||||||
int txfifosz = priv->plat->tx_fifo_size;
|
int txfifosz = priv->plat->tx_fifo_size;
|
||||||
|
const int mtu = new_mtu;
|
||||||
|
|
||||||
if (txfifosz == 0)
|
if (txfifosz == 0)
|
||||||
txfifosz = priv->dma_cap.tx_fifo_size;
|
txfifosz = priv->dma_cap.tx_fifo_size;
|
||||||
@ -4043,7 +4044,7 @@ static int stmmac_change_mtu(struct net_device *dev, int new_mtu)
|
|||||||
if ((txfifosz < new_mtu) || (new_mtu > BUF_SIZE_16KiB))
|
if ((txfifosz < new_mtu) || (new_mtu > BUF_SIZE_16KiB))
|
||||||
return -EINVAL;
|
return -EINVAL;
|
||||||
|
|
||||||
dev->mtu = new_mtu;
|
dev->mtu = mtu;
|
||||||
|
|
||||||
netdev_update_features(dev);
|
netdev_update_features(dev);
|
||||||
|
|
||||||
|
@@ -599,7 +599,8 @@ static int tc_setup_taprio(struct stmmac_priv *priv,
 {
 u32 size, wid = priv->dma_cap.estwid, dep = priv->dma_cap.estdep;
 struct plat_stmmacenet_data *plat = priv->plat;
-struct timespec64 time;
+struct timespec64 time, current_time;
+ktime_t current_time_ns;
 bool fpe = false;
 int i, ret = 0;
 u64 ctr;
@@ -694,7 +695,22 @@ static int tc_setup_taprio(struct stmmac_priv *priv,
 }

 /* Adjust for real system time */
-time = ktime_to_timespec64(qopt->base_time);
+priv->ptp_clock_ops.gettime64(&priv->ptp_clock_ops, &current_time);
+current_time_ns = timespec64_to_ktime(current_time);
+if (ktime_after(qopt->base_time, current_time_ns)) {
+time = ktime_to_timespec64(qopt->base_time);
+} else {
+ktime_t base_time;
+s64 n;
+
+n = div64_s64(ktime_sub_ns(current_time_ns, qopt->base_time),
+qopt->cycle_time);
+base_time = ktime_add_ns(qopt->base_time,
+(n + 1) * qopt->cycle_time);
+
+time = ktime_to_timespec64(base_time);
+}
+
 priv->plat->est->btr[0] = (u32)time.tv_nsec;
 priv->plat->est->btr[1] = (u32)time.tv_sec;

@@ -216,6 +216,7 @@ int ipa_modem_start(struct ipa *ipa)
 ipa->name_map[IPA_ENDPOINT_AP_MODEM_TX]->netdev = netdev;
 ipa->name_map[IPA_ENDPOINT_AP_MODEM_RX]->netdev = netdev;

+SET_NETDEV_DEV(netdev, &ipa->pdev->dev);
 priv = netdev_priv(netdev);
 priv->ipa = ipa;

@@ -317,7 +317,8 @@ static int smsc_phy_probe(struct phy_device *phydev)
 /* Make clk optional to keep DTB backward compatibility. */
 priv->refclk = clk_get_optional(dev, NULL);
 if (IS_ERR(priv->refclk))
-dev_err_probe(dev, PTR_ERR(priv->refclk), "Failed to request clock\n");
+return dev_err_probe(dev, PTR_ERR(priv->refclk),
+"Failed to request clock\n");

 ret = clk_prepare_enable(priv->refclk);
 if (ret)
@@ -623,6 +623,7 @@ static int ppp_bridge_channels(struct channel *pch, struct channel *pchb)
 write_unlock_bh(&pch->upl);
 return -EALREADY;
 }
+refcount_inc(&pchb->file.refcnt);
 rcu_assign_pointer(pch->bridge, pchb);
 write_unlock_bh(&pch->upl);

@@ -632,19 +633,24 @@ static int ppp_bridge_channels(struct channel *pch, struct channel *pchb)
 write_unlock_bh(&pchb->upl);
 goto err_unset;
 }
+refcount_inc(&pch->file.refcnt);
 rcu_assign_pointer(pchb->bridge, pch);
 write_unlock_bh(&pchb->upl);

-refcount_inc(&pch->file.refcnt);
-refcount_inc(&pchb->file.refcnt);
-
 return 0;

 err_unset:
 write_lock_bh(&pch->upl);
+/* Re-read pch->bridge with upl held in case it was modified concurrently */
+pchb = rcu_dereference_protected(pch->bridge, lockdep_is_held(&pch->upl));
 RCU_INIT_POINTER(pch->bridge, NULL);
 write_unlock_bh(&pch->upl);
 synchronize_rcu();
+
+if (pchb)
+if (refcount_dec_and_test(&pchb->file.refcnt))
+ppp_destroy_channel(pchb);
+
 return -EALREADY;
 }

@@ -631,7 +631,6 @@ config USB_NET_AQC111
 config USB_RTL8153_ECM
 tristate "RTL8153 ECM support"
 depends on USB_NET_CDCETHER && (USB_RTL8152 || USB_RTL8152=n)
-default y
 help
 This option supports ECM mode for RTL8153 ethernet adapter, when
 CONFIG_USB_RTL8152 is not set, or the RTL8153 device is not
@@ -793,6 +793,13 @@ static const struct usb_device_id products[] = {
 .driver_info = 0,
 },

+/* Lenovo Powered USB-C Travel Hub (4X90S92381, based on Realtek RTL8153) */
+{
+USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x721e, USB_CLASS_COMM,
+USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
+.driver_info = 0,
+},
+
 /* ThinkPad USB-C Dock Gen 2 (based on Realtek RTL8153) */
 {
 USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0xa387, USB_CLASS_COMM,
@@ -6877,6 +6877,7 @@ static const struct usb_device_id rtl8152_table[] = {
 {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7205)},
 {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x720c)},
 {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7214)},
+{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x721e)},
 {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0xa387)},
 {REALTEK_USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041)},
 {REALTEK_USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff)},
@@ -122,12 +122,20 @@ static const struct driver_info r8153_info = {
 };

 static const struct usb_device_id products[] = {
+/* Realtek RTL8153 Based USB 3.0 Ethernet Adapters */
 {
 USB_DEVICE_AND_INTERFACE_INFO(VENDOR_ID_REALTEK, 0x8153, USB_CLASS_COMM,
 USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
 .driver_info = (unsigned long)&r8153_info,
 },

+/* Lenovo Powered USB-C Travel Hub (4X90S92381, based on Realtek RTL8153) */
+{
+USB_DEVICE_AND_INTERFACE_INFO(VENDOR_ID_LENOVO, 0x721e, USB_CLASS_COMM,
+USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
+.driver_info = (unsigned long)&r8153_info,
+},
+
 { }, /* END */
 };
 MODULE_DEVICE_TABLE(usb, products);
@@ -387,7 +387,7 @@ generic_rndis_bind(struct usbnet *dev, struct usb_interface *intf, int flags)
 reply_len = sizeof *phym;
 retval = rndis_query(dev, intf, u.buf,
 RNDIS_OID_GEN_PHYSICAL_MEDIUM,
-0, (void **) &phym, &reply_len);
+reply_len, (void **)&phym, &reply_len);
 if (retval != 0 || !phym) {
 /* OID is optional so don't fail here. */
 phym_unspec = cpu_to_le32(RNDIS_PHYSICAL_MEDIUM_UNSPECIFIED);
@@ -366,7 +366,7 @@ static inline void skb_frag_size_sub(skb_frag_t *frag, int delta)
 static inline bool skb_frag_must_loop(struct page *p)
 {
 #if defined(CONFIG_HIGHMEM)
-if (PageHighMem(p))
+if (IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP) || PageHighMem(p))
 return true;
 #endif
 return false;
@@ -1203,6 +1203,7 @@ struct skb_seq_state {
 struct sk_buff *root_skb;
 struct sk_buff *cur_skb;
 __u8 *frag_data;
+__u32 frag_off;
 };

 void skb_prepare_seq_read(struct sk_buff *skb, unsigned int from,
@@ -284,9 +284,7 @@ static int register_vlan_device(struct net_device *real_dev, u16 vlan_id)
 return 0;

 out_free_newdev:
-if (new_dev->reg_state == NETREG_UNINITIALIZED ||
-new_dev->reg_state == NETREG_UNREGISTERED)
-free_netdev(new_dev);
+free_netdev(new_dev);
 return err;
 }

@@ -1155,6 +1155,7 @@ static int isotp_getname(struct socket *sock, struct sockaddr *uaddr, int peer)
 if (peer)
 return -EOPNOTSUPP;

+memset(addr, 0, sizeof(*addr));
 addr->can_family = AF_CAN;
 addr->can_ifindex = so->ifindex;
 addr->can_addr.tp.rx_id = so->rxid;
@@ -9661,9 +9661,15 @@ static netdev_features_t netdev_fix_features(struct net_device *dev,
 }
 }

-if ((features & NETIF_F_HW_TLS_TX) && !(features & NETIF_F_HW_CSUM)) {
-netdev_dbg(dev, "Dropping TLS TX HW offload feature since no CSUM feature.\n");
-features &= ~NETIF_F_HW_TLS_TX;
+if (features & NETIF_F_HW_TLS_TX) {
+bool ip_csum = (features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) ==
+(NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM);
+bool hw_csum = features & NETIF_F_HW_CSUM;
+
+if (!ip_csum && !hw_csum) {
+netdev_dbg(dev, "Dropping TLS TX HW offload feature since no CSUM feature.\n");
+features &= ~NETIF_F_HW_TLS_TX;
+}
 }

 return features;
@@ -10077,17 +10083,11 @@ int register_netdevice(struct net_device *dev)
 ret = call_netdevice_notifiers(NETDEV_REGISTER, dev);
 ret = notifier_to_errno(ret);
 if (ret) {
+/* Expect explicit free_netdev() on failure */
+dev->needs_free_netdev = false;
 rollback_registered(dev);
-rcu_barrier();
-
-dev->reg_state = NETREG_UNREGISTERED;
-/* We should put the kobject that hold in
-* netdev_unregister_kobject(), otherwise
-* the net device cannot be freed when
-* driver calls free_netdev(), because the
-* kobject is being hold.
-*/
-kobject_put(&dev->dev.kobj);
+net_set_todo(dev);
+goto out;
 }
 /*
 * Prevent userspace races by waiting until the network
@@ -10631,6 +10631,17 @@ void free_netdev(struct net_device *dev)
 struct napi_struct *p, *n;

 might_sleep();
+
+/* When called immediately after register_netdevice() failed the unwind
+* handling may still be dismantling the device. Handle that case by
+* deferring the free.
+*/
+if (dev->reg_state == NETREG_UNREGISTERING) {
+ASSERT_RTNL();
+dev->needs_free_netdev = true;
+return;
+}
+
 netif_free_tx_queues(dev);
 netif_free_rx_queues(dev);

@@ -3439,26 +3439,15 @@ static int __rtnl_newlink(struct sk_buff *skb, struct nlmsghdr *nlh,

 dev->ifindex = ifm->ifi_index;

-if (ops->newlink) {
+if (ops->newlink)
 err = ops->newlink(link_net ? : net, dev, tb, data, extack);
-/* Drivers should call free_netdev() in ->destructor
-* and unregister it on failure after registration
-* so that device could be finally freed in rtnl_unlock.
-*/
-if (err < 0) {
-/* If device is not registered at all, free it now */
-if (dev->reg_state == NETREG_UNINITIALIZED ||
-dev->reg_state == NETREG_UNREGISTERED)
-free_netdev(dev);
-goto out;
-}
-} else {
+else
 err = register_netdevice(dev);
 if (err < 0) {
 free_netdev(dev);
 goto out;
 }
-}

 err = rtnl_configure_link(dev, ifm);
 if (err < 0)
 goto out_unregister;
@@ -501,13 +501,17 @@ EXPORT_SYMBOL(__netdev_alloc_skb);
 struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
 				 gfp_t gfp_mask)
 {
-	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
+	struct napi_alloc_cache *nc;
 	struct sk_buff *skb;
 	void *data;
 
 	len += NET_SKB_PAD + NET_IP_ALIGN;
 
-	if ((len > SKB_WITH_OVERHEAD(PAGE_SIZE)) ||
+	/* If requested length is either too small or too big,
+	 * we use kmalloc() for skb->head allocation.
+	 */
+	if (len <= SKB_WITH_OVERHEAD(1024) ||
+	    len > SKB_WITH_OVERHEAD(PAGE_SIZE) ||
 	    (gfp_mask & (__GFP_DIRECT_RECLAIM | GFP_DMA))) {
 		skb = __alloc_skb(len, gfp_mask, SKB_ALLOC_RX, NUMA_NO_NODE);
 		if (!skb)
@@ -515,6 +519,7 @@ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
 		goto skb_success;
 	}
 
+	nc = this_cpu_ptr(&napi_alloc_cache);
 	len += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
 	len = SKB_DATA_ALIGN(len);
 
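This is the fix for the "32 x truesize under-estimation" mentioned in the summary: heads smaller than SKB_WITH_OVERHEAD(1024) no longer come from the per-CPU page-fragment cache, where the few hundred bytes charged to skb->truesize could end up pinning a whole (possibly order-3) backing page, but from kmalloc, where truesize matches the slab object. A rough userspace illustration of the accounting gap; the constants are simplified assumptions, not the exact kernel values:

	#include <stdio.h>

	int main(void)
	{
		unsigned int frag_sz  = 1024;   /* roughly what a tiny skb consumed/charged */
		unsigned int page_sz  = 32768;  /* order-3 page backing the frag cache */
		unsigned int per_page = page_sz / frag_sz;

		/* One lingering tiny skb can keep the whole compound page alive: */
		printf("charged per skb : %u bytes\n", frag_sz);
		printf("possibly pinned : %u bytes (~%ux under-estimate)\n",
		       page_sz, per_page);
		return 0;
	}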
@@ -3442,6 +3447,7 @@ void skb_prepare_seq_read(struct sk_buff *skb, unsigned int from,
 	st->root_skb = st->cur_skb = skb;
 	st->frag_idx = st->stepped_offset = 0;
 	st->frag_data = NULL;
+	st->frag_off = 0;
 }
 EXPORT_SYMBOL(skb_prepare_seq_read);
 
@@ -3496,14 +3502,27 @@ unsigned int skb_seq_read(unsigned int consumed, const u8 **data,
 	st->stepped_offset += skb_headlen(st->cur_skb);
 
 	while (st->frag_idx < skb_shinfo(st->cur_skb)->nr_frags) {
-		frag = &skb_shinfo(st->cur_skb)->frags[st->frag_idx];
-		block_limit = skb_frag_size(frag) + st->stepped_offset;
+		unsigned int pg_idx, pg_off, pg_sz;
+
+		frag = &skb_shinfo(st->cur_skb)->frags[st->frag_idx];
+
+		pg_idx = 0;
+		pg_off = skb_frag_off(frag);
+		pg_sz = skb_frag_size(frag);
+
+		if (skb_frag_must_loop(skb_frag_page(frag))) {
+			pg_idx = (pg_off + st->frag_off) >> PAGE_SHIFT;
+			pg_off = offset_in_page(pg_off + st->frag_off);
+			pg_sz = min_t(unsigned int, pg_sz - st->frag_off,
+				      PAGE_SIZE - pg_off);
+		}
 
+		block_limit = pg_sz + st->stepped_offset;
 		if (abs_offset < block_limit) {
 			if (!st->frag_data)
-				st->frag_data = kmap_atomic(skb_frag_page(frag));
+				st->frag_data = kmap_atomic(skb_frag_page(frag) + pg_idx);
 
-			*data = (u8 *) st->frag_data + skb_frag_off(frag) +
+			*data = (u8 *)st->frag_data + pg_off +
 				(abs_offset - st->stepped_offset);
 
 			return block_limit - abs_offset;
@@ -3514,8 +3533,12 @@ unsigned int skb_seq_read(unsigned int consumed, const u8 **data,
 			st->frag_data = NULL;
 		}
 
-		st->frag_idx++;
-		st->stepped_offset += skb_frag_size(frag);
+		st->stepped_offset += pg_sz;
+		st->frag_off += pg_sz;
+		if (st->frag_off == skb_frag_size(frag)) {
+			st->frag_off = 0;
+			st->frag_idx++;
+		}
 	}
 
 	if (st->frag_data) {
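These skb_seq_read() hunks belong to the kmap_atomic fixes: kmap_atomic() maps exactly one page, so a frag backed by a compound highmem page has to be walked one page at a time, which is what the new pg_idx/pg_off/pg_sz stepping does. A standalone sketch of the same per-page arithmetic (the names and the fixed 4K page size are assumptions made for the example):

	#include <stdio.h>

	#define PG_SHIFT 12
	#define PG_SIZE  (1u << PG_SHIFT)

	static void walk_range(unsigned int off, unsigned int size)
	{
		unsigned int done = 0;

		while (done < size) {
			unsigned int pg_idx = (off + done) >> PG_SHIFT;
			unsigned int pg_off = (off + done) & (PG_SIZE - 1);
			unsigned int chunk  = size - done;

			if (chunk > PG_SIZE - pg_off)	/* clamp to the page boundary */
				chunk = PG_SIZE - pg_off;
			printf("map page %u at offset %u for %u bytes\n",
			       pg_idx, pg_off, chunk);
			done += chunk;
		}
	}

	int main(void)
	{
		walk_range(4000, 9000);	/* touches four pages */
		return 0;
	}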
@@ -3655,7 +3678,8 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
 	unsigned int delta_truesize = 0;
 	unsigned int delta_len = 0;
 	struct sk_buff *tail = NULL;
-	struct sk_buff *nskb;
+	struct sk_buff *nskb, *tmp;
+	int err;
 
 	skb_push(skb, -skb_network_offset(skb) + offset);
 
@@ -3665,11 +3689,28 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
 		nskb = list_skb;
 		list_skb = list_skb->next;
 
+		err = 0;
+		if (skb_shared(nskb)) {
+			tmp = skb_clone(nskb, GFP_ATOMIC);
+			if (tmp) {
+				consume_skb(nskb);
+				nskb = tmp;
+				err = skb_unclone(nskb, GFP_ATOMIC);
+			} else {
+				err = -ENOMEM;
+			}
+		}
+
 		if (!tail)
 			skb->next = nskb;
 		else
 			tail->next = nskb;
 
+		if (unlikely(err)) {
+			nskb->next = list_skb;
+			goto err_linearize;
+		}
+
 		tail = nskb;
 
 		delta_len += nskb->len;
@@ -293,7 +293,7 @@ struct sock *reuseport_select_sock(struct sock *sk,
 		i = j = reciprocal_scale(hash, socks);
 		while (reuse->socks[i]->sk_state == TCP_ESTABLISHED) {
 			i++;
-			if (i >= reuse->num_socks)
+			if (i >= socks)
 				i = 0;
 			if (i == j)
 				goto out;
@@ -1765,7 +1765,7 @@ static int dcb_doit(struct sk_buff *skb, struct nlmsghdr *nlh,
 	fn = &reply_funcs[dcb->cmd];
 	if (!fn->cb)
 		return -EOPNOTSUPP;
-	if (fn->type != nlh->nlmsg_type)
+	if (fn->type == RTM_SETDCB && !netlink_capable(skb, CAP_NET_ADMIN))
 		return -EPERM;
 
 	if (!tb[DCB_ATTR_IFNAME])
@@ -353,9 +353,13 @@ static int dsa_port_devlink_setup(struct dsa_port *dp)
 
 static void dsa_port_teardown(struct dsa_port *dp)
 {
+	struct devlink_port *dlp = &dp->devlink_port;
+
 	if (!dp->setup)
 		return;
 
+	devlink_port_type_clear(dlp);
+
 	switch (dp->type) {
 	case DSA_PORT_TYPE_UNUSED:
 		break;
@@ -309,8 +309,18 @@ static struct lock_class_key dsa_master_addr_list_lock_key;
 int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp)
 {
 	int mtu = ETH_DATA_LEN + cpu_dp->tag_ops->overhead;
+	struct dsa_switch *ds = cpu_dp->ds;
+	struct device_link *consumer_link;
 	int ret;
 
+	/* The DSA master must use SET_NETDEV_DEV for this to work. */
+	consumer_link = device_link_add(ds->dev, dev->dev.parent,
+					DL_FLAG_AUTOREMOVE_CONSUMER);
+	if (!consumer_link)
+		netdev_err(dev,
+			   "Failed to create a device link to DSA switch %s\n",
+			   dev_name(ds->dev));
+
 	rtnl_lock();
 	ret = dev_set_mtu(dev, mtu);
 	rtnl_unlock();
@@ -443,7 +443,6 @@ static int esp_output_encap(struct xfrm_state *x, struct sk_buff *skb,
 int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *esp)
 {
 	u8 *tail;
-	u8 *vaddr;
 	int nfrags;
 	int esph_offset;
 	struct page *page;
@@ -485,14 +484,10 @@ int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *
 		page = pfrag->page;
 		get_page(page);
 
-		vaddr = kmap_atomic(page);
-
-		tail = vaddr + pfrag->offset;
+		tail = page_address(page) + pfrag->offset;
 
 		esp_output_fill_trailer(tail, esp->tfclen, esp->plen, esp->proto);
 
-		kunmap_atomic(vaddr);
-
 		nfrags = skb_shinfo(skb)->nr_frags;
 
 		__skb_fill_page_desc(skb, nfrags, page, pfrag->offset,
@@ -478,7 +478,6 @@ static int esp6_output_encap(struct xfrm_state *x, struct sk_buff *skb,
 int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *esp)
 {
 	u8 *tail;
-	u8 *vaddr;
 	int nfrags;
 	int esph_offset;
 	struct page *page;
@@ -519,14 +518,10 @@ int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info
 		page = pfrag->page;
 		get_page(page);
 
-		vaddr = kmap_atomic(page);
-
-		tail = vaddr + pfrag->offset;
+		tail = page_address(page) + pfrag->offset;
 
 		esp_output_fill_trailer(tail, esp->tfclen, esp->plen, esp->proto);
 
-		kunmap_atomic(vaddr);
-
 		nfrags = skb_shinfo(skb)->nr_frags;
 
 		__skb_fill_page_desc(skb, nfrags, page, pfrag->offset,
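The esp4/esp6 hunks drop the temporary mapping entirely; the change relies on the trailer page coming from the socket page-frag allocator, which does not hand out highmem pages, so page_address() is always valid there (kmap_atomic() would in any case only have mapped the first page of a compound allocation). A small kernel-style sketch of the two idioms, purely illustrative and not taken from this patch:

	#include <linux/highmem.h>
	#include <linux/mm.h>
	#include <linux/string.h>

	/* A page that may live in highmem needs a temporary mapping. */
	static void copy_to_maybe_highmem(struct page *page, unsigned int off,
					  const void *src, size_t len)
	{
		void *vaddr = kmap_atomic(page);

		memcpy(vaddr + off, src, len);
		kunmap_atomic(vaddr);
	}

	/* A page known to be lowmem already has a permanent kernel mapping. */
	static void copy_to_lowmem(struct page *page, unsigned int off,
				   const void *src, size_t len)
	{
		memcpy(page_address(page) + off, src, len);
	}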
@@ -125,8 +125,43 @@ static int ip6_finish_output2(struct net *net, struct sock *sk, struct sk_buff *
 		return -EINVAL;
 }
 
+static int
+ip6_finish_output_gso_slowpath_drop(struct net *net, struct sock *sk,
+				    struct sk_buff *skb, unsigned int mtu)
+{
+	struct sk_buff *segs, *nskb;
+	netdev_features_t features;
+	int ret = 0;
+
+	/* Please see corresponding comment in ip_finish_output_gso
+	 * describing the cases where GSO segment length exceeds the
+	 * egress MTU.
+	 */
+	features = netif_skb_features(skb);
+	segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
+	if (IS_ERR_OR_NULL(segs)) {
+		kfree_skb(skb);
+		return -ENOMEM;
+	}
+
+	consume_skb(skb);
+
+	skb_list_walk_safe(segs, segs, nskb) {
+		int err;
+
+		skb_mark_not_on_list(segs);
+		err = ip6_fragment(net, sk, segs, ip6_finish_output2);
+		if (err && ret == 0)
+			ret = err;
+	}
+
+	return ret;
+}
+
 static int __ip6_finish_output(struct net *net, struct sock *sk, struct sk_buff *skb)
 {
+	unsigned int mtu;
+
 #if defined(CONFIG_NETFILTER) && defined(CONFIG_XFRM)
 	/* Policy lookup after SNAT yielded a new policy */
 	if (skb_dst(skb)->xfrm) {
@@ -135,7 +170,11 @@ static int __ip6_finish_output(struct net *net, struct sock *sk, struct sk_buff
 	}
 #endif
 
-	if ((skb->len > ip6_skb_dst_mtu(skb) && !skb_is_gso(skb)) ||
+	mtu = ip6_skb_dst_mtu(skb);
+	if (skb_is_gso(skb) && !skb_gso_validate_network_len(skb, mtu))
+		return ip6_finish_output_gso_slowpath_drop(net, sk, skb, mtu);
+
+	if ((skb->len > mtu && !skb_is_gso(skb)) ||
 	    dst_allfrag(skb_dst(skb)) ||
 	    (IP6CB(skb)->frag_max_size && skb->len > IP6CB(skb)->frag_max_size))
 		return ip6_fragment(net, sk, skb, ip6_finish_output2);
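The new test asks whether every GSO segment of the skb would still fit the route MTU at the network layer; only then is the skb handed on as-is, otherwise it is segmented in software and each segment goes through ip6_fragment(). A much-simplified userspace illustration of that length check (the real helper is skb_gso_validate_network_len() and covers more cases, e.g. UDP/SCTP GSO):

	#include <stdbool.h>
	#include <stdio.h>

	/* A segment carries the network + transport headers plus gso_size
	 * bytes of payload; it fits if that total is within the MTU.
	 */
	static bool gso_segment_fits(unsigned int hdr_len, unsigned int gso_size,
				     unsigned int mtu)
	{
		return hdr_len + gso_size <= mtu;
	}

	int main(void)
	{
		/* e.g. IPv6 (40) + TCP with timestamps (32), 1400-byte MSS */
		printf("1500 MTU: %s\n", gso_segment_fits(72, 1400, 1500) ? "ok" : "segment");
		printf("1280 MTU: %s\n", gso_segment_fits(72, 1400, 1280) ? "ok" : "segment");
		return 0;
	}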
@@ -1645,8 +1645,11 @@ static int ipip6_newlink(struct net *src_net, struct net_device *dev,
 	}
 
 #ifdef CONFIG_IPV6_SIT_6RD
-	if (ipip6_netlink_6rd_parms(data, &ip6rd))
+	if (ipip6_netlink_6rd_parms(data, &ip6rd)) {
 		err = ipip6_tunnel_update_6rd(nt, &ip6rd);
+		if (err < 0)
+			unregister_netdevice_queue(dev, NULL);
+	}
 #endif
 
 	return err;
@@ -427,7 +427,7 @@ static bool mptcp_subflow_active(struct mptcp_subflow_context *subflow)
 static bool tcp_can_send_ack(const struct sock *ssk)
 {
 	return !((1 << inet_sk_state_load(ssk)) &
-		 (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_TIME_WAIT | TCPF_CLOSE));
+		 (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_TIME_WAIT | TCPF_CLOSE | TCPF_LISTEN));
 }
 
 static void mptcp_send_ack(struct mptcp_sock *msk)
@@ -2642,11 +2642,17 @@ static void mptcp_copy_inaddrs(struct sock *msk, const struct sock *ssk)
 
 static int mptcp_disconnect(struct sock *sk, int flags)
 {
-	/* Should never be called.
-	 * inet_stream_connect() calls ->disconnect, but that
-	 * refers to the subflow socket, not the mptcp one.
-	 */
-	WARN_ON_ONCE(1);
+	struct mptcp_subflow_context *subflow;
+	struct mptcp_sock *msk = mptcp_sk(sk);
+
+	__mptcp_flush_join_list(msk);
+	mptcp_for_each_subflow(msk, subflow) {
+		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+
+		lock_sock(ssk);
+		tcp_disconnect(ssk, flags);
+		release_sock(ssk);
+	}
 	return 0;
 }
 
@@ -3089,6 +3095,14 @@ bool mptcp_finish_join(struct sock *ssk)
 	return true;
 }
 
+static void mptcp_shutdown(struct sock *sk, int how)
+{
+	pr_debug("sk=%p, how=%d", sk, how);
+
+	if ((how & SEND_SHUTDOWN) && mptcp_close_state(sk))
+		__mptcp_wr_shutdown(sk);
+}
+
 static struct proto mptcp_prot = {
 	.name		= "MPTCP",
 	.owner		= THIS_MODULE,
@@ -3098,7 +3112,7 @@ static struct proto mptcp_prot = {
 	.accept		= mptcp_accept,
 	.setsockopt	= mptcp_setsockopt,
 	.getsockopt	= mptcp_getsockopt,
-	.shutdown	= tcp_shutdown,
+	.shutdown	= mptcp_shutdown,
 	.destroy	= mptcp_destroy,
 	.sendmsg	= mptcp_sendmsg,
 	.recvmsg	= mptcp_recvmsg,
@@ -3344,43 +3358,6 @@ static __poll_t mptcp_poll(struct file *file, struct socket *sock,
 	return mask;
 }
 
-static int mptcp_shutdown(struct socket *sock, int how)
-{
-	struct mptcp_sock *msk = mptcp_sk(sock->sk);
-	struct sock *sk = sock->sk;
-	int ret = 0;
-
-	pr_debug("sk=%p, how=%d", msk, how);
-
-	lock_sock(sk);
-
-	how++;
-	if ((how & ~SHUTDOWN_MASK) || !how) {
-		ret = -EINVAL;
-		goto out_unlock;
-	}
-
-	if (sock->state == SS_CONNECTING) {
-		if ((1 << sk->sk_state) &
-		    (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_CLOSE))
-			sock->state = SS_DISCONNECTING;
-		else
-			sock->state = SS_CONNECTED;
-	}
-
-	sk->sk_shutdown |= how;
-	if ((how & SEND_SHUTDOWN) && mptcp_close_state(sk))
-		__mptcp_wr_shutdown(sk);
-
-	/* Wake up anyone sleeping in poll. */
-	sk->sk_state_change(sk);
-
-out_unlock:
-	release_sock(sk);
-
-	return ret;
-}
-
 static const struct proto_ops mptcp_stream_ops = {
 	.family		= PF_INET,
 	.owner		= THIS_MODULE,
@@ -3394,7 +3371,7 @@ static const struct proto_ops mptcp_stream_ops = {
 	.ioctl		   = inet_ioctl,
 	.gettstamp	   = sock_gettstamp,
 	.listen		   = mptcp_listen,
-	.shutdown	   = mptcp_shutdown,
+	.shutdown	   = inet_shutdown,
 	.setsockopt	   = sock_common_setsockopt,
 	.getsockopt	   = sock_common_getsockopt,
 	.sendmsg	   = inet_sendmsg,
@@ -3444,7 +3421,7 @@ static const struct proto_ops mptcp_v6_stream_ops = {
 	.ioctl		   = inet6_ioctl,
 	.gettstamp	   = sock_gettstamp,
 	.listen		   = mptcp_listen,
-	.shutdown	   = mptcp_shutdown,
+	.shutdown	   = inet_shutdown,
 	.setsockopt	   = sock_common_setsockopt,
 	.getsockopt	   = sock_common_getsockopt,
 	.sendmsg	   = inet6_sendmsg,
@@ -523,6 +523,9 @@ nf_conntrack_hash_sysctl(struct ctl_table *table, int write,
 {
 	int ret;
 
+	/* module_param hashsize could have changed value */
+	nf_conntrack_htable_size_user = nf_conntrack_htable_size;
+
 	ret = proc_dointvec(table, write, buffer, lenp, ppos);
 	if (ret < 0 || !write)
 		return ret;
@@ -1174,6 +1174,7 @@ static int __init nf_nat_init(void)
 	ret = register_pernet_subsys(&nat_net_ops);
 	if (ret < 0) {
 		nf_ct_extend_unregister(&nat_extend);
+		kvfree(nf_nat_bysource);
 		return ret;
 	}
 
@@ -430,7 +430,7 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb)
 		return;
 	}
 
-	if (call->state == RXRPC_CALL_SERVER_RECV_REQUEST) {
+	if (state == RXRPC_CALL_SERVER_RECV_REQUEST) {
 		unsigned long timo = READ_ONCE(call->next_req_timo);
 		unsigned long now, expect_req_by;
 
@@ -598,7 +598,7 @@ static long rxrpc_read(const struct key *key,
 		default: /* we have a ticket we can't encode */
 			pr_err("Unsupported key token type (%u)\n",
 			       token->security_index);
-			continue;
+			return -ENOPKG;
 		}
 
 		_debug("token[%u]: toksize=%u", ntoks, toksize);
@@ -674,7 +674,9 @@ static long rxrpc_read(const struct key *key,
 			break;
 
 		default:
-			break;
+			pr_err("Unsupported key token type (%u)\n",
+			       token->security_index);
+			return -ENOPKG;
 		}
 
 		ASSERTCMP((unsigned long)xdr - (unsigned long)oldxdr, ==,
@@ -246,7 +246,8 @@ int smc_nl_get_sys_info(struct sk_buff *skb, struct netlink_callback *cb)
 		goto errattr;
 	smc_clc_get_hostname(&host);
 	if (host) {
-		snprintf(hostname, sizeof(hostname), "%s", host);
+		memcpy(hostname, host, SMC_MAX_HOSTNAME_LEN);
+		hostname[SMC_MAX_HOSTNAME_LEN] = 0;
 		if (nla_put_string(skb, SMC_NLA_SYS_LOCAL_HOST, hostname))
 			goto errattr;
 	}
@@ -257,7 +258,8 @@ int smc_nl_get_sys_info(struct sk_buff *skb, struct netlink_callback *cb)
 	smc_ism_get_system_eid(smcd_dev, &seid);
 	mutex_unlock(&smcd_dev_list.mutex);
 	if (seid && smc_ism_is_v2_capable()) {
-		snprintf(smc_seid, sizeof(smc_seid), "%s", seid);
+		memcpy(smc_seid, seid, SMC_MAX_EID_LEN);
+		smc_seid[SMC_MAX_EID_LEN] = 0;
 		if (nla_put_string(skb, SMC_NLA_SYS_SEID, smc_seid))
 			goto errattr;
 	}
@@ -295,7 +297,8 @@ static int smc_nl_fill_lgr(struct smc_link_group *lgr,
 		goto errattr;
 	if (nla_put_u8(skb, SMC_NLA_LGR_R_VLAN_ID, lgr->vlan_id))
 		goto errattr;
-	snprintf(smc_target, sizeof(smc_target), "%s", lgr->pnet_id);
+	memcpy(smc_target, lgr->pnet_id, SMC_MAX_PNETID_LEN);
+	smc_target[SMC_MAX_PNETID_LEN] = 0;
 	if (nla_put_string(skb, SMC_NLA_LGR_R_PNETID, smc_target))
 		goto errattr;
 
@@ -312,7 +315,7 @@ static int smc_nl_fill_lgr_link(struct smc_link_group *lgr,
 				struct sk_buff *skb,
 				struct netlink_callback *cb)
 {
-	char smc_ibname[IB_DEVICE_NAME_MAX + 1];
+	char smc_ibname[IB_DEVICE_NAME_MAX];
 	u8 smc_gid_target[41];
 	struct nlattr *attrs;
 	u32 link_uid = 0;
@@ -461,7 +464,8 @@ static int smc_nl_fill_smcd_lgr(struct smc_link_group *lgr,
 		goto errattr;
 	if (nla_put_u32(skb, SMC_NLA_LGR_D_CHID, smc_ism_get_chid(lgr->smcd)))
 		goto errattr;
-	snprintf(smc_pnet, sizeof(smc_pnet), "%s", lgr->smcd->pnetid);
+	memcpy(smc_pnet, lgr->smcd->pnetid, SMC_MAX_PNETID_LEN);
+	smc_pnet[SMC_MAX_PNETID_LEN] = 0;
 	if (nla_put_string(skb, SMC_NLA_LGR_D_PNETID, smc_pnet))
 		goto errattr;
 
@@ -474,10 +478,12 @@ static int smc_nl_fill_smcd_lgr(struct smc_link_group *lgr,
 		goto errv2attr;
 	if (nla_put_u8(skb, SMC_NLA_LGR_V2_OS, lgr->peer_os))
 		goto errv2attr;
-	snprintf(smc_host, sizeof(smc_host), "%s", lgr->peer_hostname);
+	memcpy(smc_host, lgr->peer_hostname, SMC_MAX_HOSTNAME_LEN);
+	smc_host[SMC_MAX_HOSTNAME_LEN] = 0;
 	if (nla_put_string(skb, SMC_NLA_LGR_V2_PEER_HOST, smc_host))
 		goto errv2attr;
-	snprintf(smc_eid, sizeof(smc_eid), "%s", lgr->negotiated_eid);
+	memcpy(smc_eid, lgr->negotiated_eid, SMC_MAX_EID_LEN);
+	smc_eid[SMC_MAX_EID_LEN] = 0;
 	if (nla_put_string(skb, SMC_NLA_LGR_V2_NEG_EID, smc_eid))
 		goto errv2attr;
 
@@ -371,8 +371,8 @@ static int smc_nl_handle_dev_port(struct sk_buff *skb,
 	if (nla_put_u8(skb, SMC_NLA_DEV_PORT_PNET_USR,
 		       smcibdev->pnetid_by_user[port]))
 		goto errattr;
-	snprintf(smc_pnet, sizeof(smc_pnet), "%s",
-		 (char *)&smcibdev->pnetid[port]);
+	memcpy(smc_pnet, &smcibdev->pnetid[port], SMC_MAX_PNETID_LEN);
+	smc_pnet[SMC_MAX_PNETID_LEN] = 0;
 	if (nla_put_string(skb, SMC_NLA_DEV_PORT_PNETID, smc_pnet))
 		goto errattr;
 	if (nla_put_u32(skb, SMC_NLA_DEV_PORT_NETDEV,
@@ -414,7 +414,7 @@ static int smc_nl_handle_smcr_dev(struct smc_ib_device *smcibdev,
 				  struct sk_buff *skb,
 				  struct netlink_callback *cb)
 {
-	char smc_ibname[IB_DEVICE_NAME_MAX + 1];
+	char smc_ibname[IB_DEVICE_NAME_MAX];
 	struct smc_pci_dev smc_pci_dev;
 	struct pci_dev *pci_dev;
 	unsigned char is_crit;
@@ -250,7 +250,8 @@ static int smc_nl_handle_smcd_dev(struct smcd_dev *smcd,
 		goto errattr;
 	if (nla_put_u8(skb, SMC_NLA_DEV_PORT_PNET_USR, smcd->pnetid_by_user))
 		goto errportattr;
-	snprintf(smc_pnet, sizeof(smc_pnet), "%s", smcd->pnetid);
+	memcpy(smc_pnet, smcd->pnetid, SMC_MAX_PNETID_LEN);
+	smc_pnet[SMC_MAX_PNETID_LEN] = 0;
 	if (nla_put_string(skb, SMC_NLA_DEV_PORT_PNETID, smc_pnet))
 		goto errportattr;
 
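All of the SMC hunks above share one issue: the pnetid, hostname and EID buffers are fixed-size identifiers that are not guaranteed to be NUL-terminated, so formatting them with snprintf("%s", ...) could read past the end of the source; the fix copies exactly the defined maximum and terminates explicitly. A userspace sketch of the same bounded-copy pattern (the helper name and ID_LEN are made up for the example):

	#include <stdio.h>
	#include <string.h>

	#define ID_LEN 16	/* fixed-size, possibly unterminated identifier */

	static void copy_id(char *dst, const char *src)
	{
		/* never scan src for a terminator it may not have */
		memcpy(dst, src, ID_LEN);
		dst[ID_LEN] = '\0';	/* dst must hold ID_LEN + 1 bytes */
	}

	int main(void)
	{
		char raw[ID_LEN] = { 'E', 'X', 'A', 'M', 'P', 'L', 'E', '.',
				     'P', 'N', 'E', 'T', 'I', 'D', '.', '1' };
		char out[ID_LEN + 1];

		copy_id(out, raw);
		printf("%s\n", out);
		return 0;
	}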
@@ -1030,7 +1030,6 @@ void tipc_link_reset(struct tipc_link *l)
 int tipc_link_xmit(struct tipc_link *l, struct sk_buff_head *list,
 		   struct sk_buff_head *xmitq)
 {
-	struct tipc_msg *hdr = buf_msg(skb_peek(list));
 	struct sk_buff_head *backlogq = &l->backlogq;
 	struct sk_buff_head *transmq = &l->transmq;
 	struct sk_buff *skb, *_skb;
@@ -1038,13 +1037,18 @@ int tipc_link_xmit(struct tipc_link *l, struct sk_buff_head *list,
 	u16 ack = l->rcv_nxt - 1;
 	u16 seqno = l->snd_nxt;
 	int pkt_cnt = skb_queue_len(list);
-	int imp = msg_importance(hdr);
 	unsigned int mss = tipc_link_mss(l);
 	unsigned int cwin = l->window;
 	unsigned int mtu = l->mtu;
+	struct tipc_msg *hdr;
 	bool new_bundle;
 	int rc = 0;
+	int imp;
+
+	if (pkt_cnt <= 0)
+		return 0;
 
+	hdr = buf_msg(skb_peek(list));
 	if (unlikely(msg_size(hdr) > mtu)) {
 		pr_warn("Too large msg, purging xmit list %d %d %d %d %d!\n",
 			skb_queue_len(list), msg_user(hdr),
@@ -1053,6 +1057,7 @@ int tipc_link_xmit(struct tipc_link *l, struct sk_buff_head *list,
 		return -EMSGSIZE;
 	}
 
+	imp = msg_importance(hdr);
 	/* Allow oversubscription of one data msg per source at congestion */
 	if (unlikely(l->backlog[imp].len >= l->backlog[imp].limit)) {
 		if (imp == TIPC_SYSTEM_IMPORTANCE) {
@@ -2539,7 +2544,7 @@ void tipc_link_set_queue_limits(struct tipc_link *l, u32 min_win, u32 max_win)
 }
 
 /**
- * link_reset_stats - reset link statistics
+ * tipc_link_reset_stats - reset link statistics
  * @l: pointer to link
  */
 void tipc_link_reset_stats(struct tipc_link *l)
@@ -1665,7 +1665,7 @@ static void tipc_lxc_xmit(struct net *peer_net, struct sk_buff_head *list)
 }
 
 /**
- * tipc_node_xmit() is the general link level function for message sending
+ * tipc_node_xmit() - general link level function for message sending
  * @net: the applicable net namespace
  * @list: chain of buffers containing message
 * @dnode: address of destination node
@@ -103,8 +103,8 @@ FIXTURE(tls)
 
 FIXTURE_VARIANT(tls)
 {
-	u16 tls_version;
-	u16 cipher_type;
+	uint16_t tls_version;
+	uint16_t cipher_type;
 };
 
 FIXTURE_VARIANT_ADD(tls, 12_gcm)
@@ -94,7 +94,13 @@ check_for_helper()
 	local message=$2
 	local port=$3
 
-	ip netns exec ${netns} conntrack -L -p tcp --dport $port 2> /dev/null |grep -q 'helper=ftp'
+	if echo $message |grep -q 'ipv6';then
+		local family="ipv6"
+	else
+		local family="ipv4"
+	fi
+
+	ip netns exec ${netns} conntrack -L -f $family -p tcp --dport $port 2> /dev/null |grep -q 'helper=ftp'
 	if [ $? -ne 0 ] ; then
 		echo "FAIL: ${netns} did not show attached helper $message" 1>&2
 		ret=1
@@ -111,8 +117,8 @@ test_helper()
 
 	sleep 3 | ip netns exec ${ns2} nc -w 2 -l -p $port > /dev/null &
 
-	sleep 1
 	sleep 1 | ip netns exec ${ns1} nc -w 2 10.0.1.2 $port > /dev/null &
+	sleep 1
 
 	check_for_helper "$ns1" "ip $msg" $port
 	check_for_helper "$ns2" "ip $msg" $port
@@ -128,8 +134,8 @@ test_helper()
 
 	sleep 3 | ip netns exec ${ns2} nc -w 2 -6 -l -p $port > /dev/null &
 
-	sleep 1
 	sleep 1 | ip netns exec ${ns1} nc -w 2 -6 dead:1::2 $port > /dev/null &
+	sleep 1
 
 	check_for_helper "$ns1" "ipv6 $msg" $port
 	check_for_helper "$ns2" "ipv6 $msg" $port