This is the 5.10.146 stable release

-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmM0D5YACgkQONu9yGCS
 aT60zQ//azKm1LwkEJrXhq9W8RH0qFooR5ktMtD77mX7jznl6QrebRycyD0lj67H
 QqkSWLKWocMiGNjCBHA4LS/OXVoMvjfWvdha1ExHO/1fqkM6MVqfy8+z8Tngzky/
 iTfaOjA6BSiQNnAyC+LPtJb5dCnvFYHL78+vZ3Kr6xHhX/MBCoTL+pP5bBp82ES+
 4N5mirDlLgLxI2d2KCfpwVkaRC+Ylsz5/PLkvzYpXz7RnXLL7PAu/tbHvJpM9qqj
 lONQU3av0utXPLzV8FdeejspFdTacG+V9d1AAfXivYQTBI5dyaUEPoR6qkZ4WgsN
 zZ6huMi/7Q0uL9QxGvvSqpEMPeq7hikanqFAZsfgNtXLZQM2Th8GyaqhVKtBN31n
 75z4dMrV5Whb0K6fo4yOZAzPL/safwHtqtEIsZsgpjCnUKgl0YWyRlmrjQyOdTcI
 2DY/wTwf+f+D/U0CNfYd0xrmlDMsRgUQ3pjtT98kLHk0K8VPRySlSvkk9YW0qsLf
 4Hc8DCIiVa5lB5Rl8nGTUq0iIl9t17lpfy1Iboibhxay1IUMLBYdRNQ/bnOD2Y0W
 ZYimIghn6x0KuvqiQkktzMqtRdlzIhvnu3ytOWBL7hNnVlGaa4kEY8zr0Ia5zwMP
 XKA18+ip/qV9qENnrjck/sh69itVR2q2qWa/BlV3cYnQsyTu62Y=
 =dY1i
 -----END PGP SIGNATURE-----

Merge 5.10.146 into android12-5.10-lts

Changes in 5.10.146
	drm/amdgpu: move nbio sdma_doorbell_range() into sdma code for vega
	drm/amdgpu: indirect register access for nv12 sriov
	drm/amdgpu: Separate vf2pf work item init from virt data exchange
	drm/amdgpu: make sure to init common IP before gmc
	usb: typec: intel_pmc_mux: Update IOM port status offset for AlderLake
	usb: typec: intel_pmc_mux: Add new ACPI ID for Meteor Lake IOM device
	usb: dwc3: gadget: Avoid starting DWC3 gadget during UDC unbind
	usb: dwc3: Issue core soft reset before enabling run/stop
	usb: dwc3: gadget: Prevent repeat pullup()
	usb: dwc3: gadget: Refactor pullup()
	usb: dwc3: gadget: Don't modify GEVNTCOUNT in pullup()
	usb: dwc3: gadget: Avoid duplicate requests to enable Run/Stop
	usb: xhci-mtk: get the microframe boundary for ESIT
	usb: xhci-mtk: add only one extra CS for FS/LS INTR
	usb: xhci-mtk: use @sch_tt to check whether need do TT schedule
	usb: xhci-mtk: add a function to (un)load bandwidth info
	usb: xhci-mtk: add some schedule error number
	usb: xhci-mtk: allow multiple Start-Split in a microframe
	usb: xhci-mtk: relax TT periodic bandwidth allocation
	mmc: core: Fix inconsistent sd3_bus_mode at UHS-I SD voltage switch failure
	serial: atmel: remove redundant assignment in rs485_config
	tty: serial: atmel: Preserve previous USART mode if RS485 disabled
	usb: add quirks for Lenovo OneLink+ Dock
	usb: gadget: udc-xilinx: replace memcpy with memcpy_toio
	usb: cdns3: fix incorrect handling TRB_SMM flag for ISOC transfer
	usb: cdns3: fix issue with rearming ISO OUT endpoint
	Revert "usb: add quirks for Lenovo OneLink+ Dock"
	vfio/type1: Change success value of vaddr_get_pfn()
	vfio/type1: Prepare for batched pinning with struct vfio_batch
	vfio/type1: Unpin zero pages
	Revert "usb: gadget: udc-xilinx: replace memcpy with memcpy_toio"
	arm64: Restrict ARM64_BTI_KERNEL to clang 12.0.0 and newer
	arm64/bti: Disable in kernel BTI when cross section thunks are broken
	USB: core: Fix RST error in hub.c
	USB: serial: option: add Quectel BG95 0x0203 composition
	USB: serial: option: add Quectel RM520N
	ALSA: hda/tegra: set depop delay for tegra
	ALSA: hda: add Intel 5 Series / 3400 PCI DID
	ALSA: hda/realtek: Add quirk for Huawei WRT-WX9
	ALSA: hda/realtek: Enable 4-speaker output Dell Precision 5570 laptop
	ALSA: hda/realtek: Re-arrange quirk table entries
	ALSA: hda/realtek: Add pincfg for ASUS G513 HP jack
	ALSA: hda/realtek: Add pincfg for ASUS G533Z HP jack
	ALSA: hda/realtek: Add quirk for ASUS GA503R laptop
	ALSA: hda/realtek: Enable 4-speaker output Dell Precision 5530 laptop
	iommu/vt-d: Check correct capability for sagaw determination
	media: flexcop-usb: fix endpoint type check
	efi: x86: Wipe setup_data on pure EFI boot
	efi: libstub: check Shim mode using MokSBStateRT
	wifi: mt76: fix reading current per-tid starting sequence number for aggregation
	gpio: mockup: fix NULL pointer dereference when removing debugfs
	gpiolib: cdev: Set lineevent_state::irq after IRQ register successfully
	riscv: fix a nasty sigreturn bug...
	can: flexcan: flexcan_mailbox_read() fix return value for drop = true
	mm/slub: fix to return errno if kmalloc() fails
	KVM: SEV: add cache flush to solve SEV cache incoherency issues
	interconnect: qcom: icc-rpmh: Add BCMs to commit list in pre_aggregate
	xfs: fix up non-directory creation in SGID directories
	xfs: reorder iunlink remove operation in xfs_ifree
	xfs: validate inode fork size against fork format
	arm64: dts: rockchip: Pull up wlan wake# on Gru-Bob
	drm/mediatek: dsi: Add atomic {destroy,duplicate}_state, reset callbacks
	arm64: dts: rockchip: Set RK3399-Gru PCLK_EDP to 24 MHz
	dmaengine: ti: k3-udma-private: Fix refcount leak bug in of_xudma_dev_get()
	arm64: dts: rockchip: Remove 'enable-active-low' from rk3399-puma
	netfilter: nf_conntrack_sip: fix ct_sip_walk_headers
	netfilter: nf_conntrack_irc: Tighten matching on DCC message
	netfilter: nfnetlink_osf: fix possible bogus match in nf_osf_find()
	iavf: Fix cached head and tail value for iavf_get_tx_pending
	ipvlan: Fix out-of-bound bugs caused by unset skb->mac_header
	net: let flow have same hash in two directions
	net: core: fix flow symmetric hash
	net: phy: aquantia: wait for the suspend/resume operations to finish
	scsi: mpt3sas: Force PCIe scatterlist allocations to be within same 4 GB region
	scsi: mpt3sas: Fix return value check of dma_get_required_mask()
	net: bonding: Share lacpdu_mcast_addr definition
	net: bonding: Unsync device addresses on ndo_stop
	net: team: Unsync device addresses on ndo_stop
	drm/panel: simple: Fix innolux_g121i1_l01 bus_format
	MIPS: lantiq: export clk_get_io() for lantiq_wdt.ko
	MIPS: Loongson32: Fix PHY-mode being left unspecified
	iavf: Fix bad page state
	iavf: Fix set max MTU size with port VLAN and jumbo frames
	i40e: Fix VF set max MTU size
	i40e: Fix set max_tx_rate when it is lower than 1 Mbps
	sfc: fix TX channel offset when using legacy interrupts
	sfc: fix null pointer dereference in efx_hard_start_xmit
	drm/hisilicon/hibmc: Allow to be built if COMPILE_TEST is enabled
	drm/hisilicon: Add depends on MMU
	of: mdio: Add of_node_put() when breaking out of for_each_xx
	net: ipa: fix assumptions about DMA address size
	net: ipa: fix table alignment requirement
	net: ipa: avoid 64-bit modulus
	net: ipa: DMA addresses are nicely aligned
	net: ipa: kill IPA_TABLE_ENTRY_SIZE
	net: ipa: properly limit modem routing table use
	wireguard: ratelimiter: disable timings test by default
	wireguard: netlink: avoid variable-sized memcpy on sockaddr
	net: enetc: move enetc_set_psfp() out of the common enetc_set_features()
	net: socket: remove register_gifconf
	net/sched: taprio: avoid disabling offload when it was never enabled
	net/sched: taprio: make qdisc_leaf() see the per-netdev-queue pfifo child qdiscs
	netfilter: nf_tables: fix nft_counters_enabled underflow at nf_tables_addchain()
	netfilter: nf_tables: fix percpu memory leak at nf_tables_addchain()
	netfilter: ebtables: fix memory leak when blob is malformed
	can: gs_usb: gs_can_open(): fix race dev->can.state condition
	perf jit: Include program header in ELF files
	perf kcore_copy: Do not check /proc/modules is unchanged
	drm/mediatek: dsi: Move mtk_dsi_stop() call back to mtk_dsi_poweroff()
	net/smc: Stop the CLC flow if no link to map buffers on
	net: sunhme: Fix packet reception for len < RX_COPY_THRESHOLD
	net: sched: fix possible refcount leak in tc_new_tfilter()
	selftests: forwarding: add shebang for sch_red.sh
	drm/amd/amdgpu: fixing read wrong pf2vf data in SRIOV
	serial: Create uart_xmit_advance()
	serial: tegra: Use uart_xmit_advance(), fixes icount.tx accounting
	serial: tegra-tcu: Use uart_xmit_advance(), fixes icount.tx accounting
	s390/dasd: fix Oops in dasd_alias_get_start_dev due to missing pavgroup
	usb: xhci-mtk: fix issue of out-of-bounds array access
	vfio/type1: fix vaddr_get_pfns() return in vfio_pin_page_external()
	drm/amdgpu: Fix check for RAS support
	cifs: use discard iterator to discard unneeded network data more efficiently
	cifs: always initialize struct msghdr smb_msg completely
	Drivers: hv: Never allocate anything besides framebuffer from framebuffer memory region
	drm/gma500: Fix BUG: sleeping function called from invalid context errors
	drm/amdgpu: use dirty framebuffer helper
	drm/amd/display: Limit user regamma to a valid value
	drm/amd/display: Mark dml30's UseMinimumDCFCLK() as noinline for stack usage
	drm/rockchip: Fix return type of cdn_dp_connector_mode_valid
	workqueue: don't skip lockdep work dependency in cancel_work_sync()
	i2c: imx: If pm_runtime_get_sync() returned 1 device access is possible
	i2c: mlxbf: incorrect base address passed during io write
	i2c: mlxbf: prevent stack overflow in mlxbf_i2c_smbus_start_transaction()
	i2c: mlxbf: Fix frequency calculation
	devdax: Fix soft-reservation memory description
	ext4: fix bug in extents parsing when eh_entries == 0 and eh_depth > 0
	ext4: limit the number of retries after discarding preallocations blocks
	ext4: make directory inode spreading reflect flexbg size
	Linux 5.10.146

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I45edad7e4191aad7a85278b43fa9909a6253643f
commit 1d17080edb (Greg Kroah-Hartman, 2022-09-28 12:19:27 +02:00)
125 changed files with 1109 additions and 667 deletions


@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 10
SUBLEVEL = 145
SUBLEVEL = 146
EXTRAVERSION =
NAME = Dare mighty things


@ -1715,7 +1715,8 @@ config ARM64_BTI_KERNEL
depends on CC_HAS_BRANCH_PROT_PAC_RET_BTI
# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94697
depends on !CC_IS_GCC || GCC_VERSION >= 100100
depends on !(CC_IS_CLANG && GCOV_KERNEL)
# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106671
depends on !CC_IS_GCC
# https://bugs.llvm.org/show_bug.cgi?id=46258
depends on !CFI_CLANG || CLANG_VERSION >= 120000
depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_REGS)


@ -87,3 +87,8 @@ h1_int_od_l: h1-int-od-l {
};
};
};
&wlan_host_wake_l {
/* Kevin has an external pull up, but Bob does not. */
rockchip,pins = <0 RK_PB0 RK_FUNC_GPIO &pcfg_pull_up>;
};


@ -237,6 +237,14 @@ &cdn_dp {
&edp {
status = "okay";
/*
* eDP PHY/clk don't sync reliably at anything other than 24 MHz. Only
* set this here, because rk3399-gru.dtsi ensures we can generate this
* off GPLL=600MHz, whereas some other RK3399 boards may not.
*/
assigned-clocks = <&cru PCLK_EDP>;
assigned-clock-rates = <24000000>;
ports {
edp_out: port@1 {
reg = <1>;
@ -395,6 +403,7 @@ wifi_perst_l: wifi-perst-l {
};
wlan_host_wake_l: wlan-host-wake-l {
/* Kevin has an external pull up, but Bob does not */
rockchip,pins = <0 RK_PB0 RK_FUNC_GPIO &pcfg_pull_none>;
};
};


@ -102,7 +102,6 @@ vcc3v3_sys: vcc3v3-sys {
vcc5v0_host: vcc5v0-host-regulator {
compatible = "regulator-fixed";
gpio = <&gpio4 RK_PA3 GPIO_ACTIVE_LOW>;
enable-active-low;
pinctrl-names = "default";
pinctrl-0 = <&vcc5v0_host_en>;
regulator-name = "vcc5v0_host";


@ -50,6 +50,7 @@ struct clk *clk_get_io(void)
{
return &cpu_clk_generic[2];
}
EXPORT_SYMBOL_GPL(clk_get_io);
struct clk *clk_get_ppe(void)
{


@ -98,7 +98,7 @@ int ls1x_eth_mux_init(struct platform_device *pdev, void *priv)
if (plat_dat->bus_id) {
__raw_writel(__raw_readl(LS1X_MUX_CTRL0) | GMAC1_USE_UART1 |
GMAC1_USE_UART0, LS1X_MUX_CTRL0);
switch (plat_dat->interface) {
switch (plat_dat->phy_interface) {
case PHY_INTERFACE_MODE_RGMII:
val &= ~(GMAC1_USE_TXCLK | GMAC1_USE_PWM23);
break;
@ -107,12 +107,12 @@ int ls1x_eth_mux_init(struct platform_device *pdev, void *priv)
break;
default:
pr_err("unsupported mii mode %d\n",
plat_dat->interface);
plat_dat->phy_interface);
return -ENOTSUPP;
}
val &= ~GMAC1_SHUT;
} else {
switch (plat_dat->interface) {
switch (plat_dat->phy_interface) {
case PHY_INTERFACE_MODE_RGMII:
val &= ~(GMAC0_USE_TXCLK | GMAC0_USE_PWM01);
break;
@ -121,7 +121,7 @@ int ls1x_eth_mux_init(struct platform_device *pdev, void *priv)
break;
default:
pr_err("unsupported mii mode %d\n",
plat_dat->interface);
plat_dat->phy_interface);
return -ENOTSUPP;
}
val &= ~GMAC0_SHUT;
@ -131,7 +131,7 @@ int ls1x_eth_mux_init(struct platform_device *pdev, void *priv)
plat_dat = dev_get_platdata(&pdev->dev);
val &= ~PHY_INTF_SELI;
if (plat_dat->interface == PHY_INTERFACE_MODE_RMII)
if (plat_dat->phy_interface == PHY_INTERFACE_MODE_RMII)
val |= 0x4 << PHY_INTF_SELI_SHIFT;
__raw_writel(val, LS1X_MUX_CTRL1);
@ -146,9 +146,9 @@ static struct plat_stmmacenet_data ls1x_eth0_pdata = {
.bus_id = 0,
.phy_addr = -1,
#if defined(CONFIG_LOONGSON1_LS1B)
.interface = PHY_INTERFACE_MODE_MII,
.phy_interface = PHY_INTERFACE_MODE_MII,
#elif defined(CONFIG_LOONGSON1_LS1C)
.interface = PHY_INTERFACE_MODE_RMII,
.phy_interface = PHY_INTERFACE_MODE_RMII,
#endif
.mdio_bus_data = &ls1x_mdio_bus_data,
.dma_cfg = &ls1x_eth_dma_cfg,
@ -186,7 +186,7 @@ struct platform_device ls1x_eth0_pdev = {
static struct plat_stmmacenet_data ls1x_eth1_pdata = {
.bus_id = 1,
.phy_addr = -1,
.interface = PHY_INTERFACE_MODE_MII,
.phy_interface = PHY_INTERFACE_MODE_MII,
.mdio_bus_data = &ls1x_mdio_bus_data,
.dma_cfg = &ls1x_eth_dma_cfg,
.has_gmac = 1,


@ -121,6 +121,8 @@ SYSCALL_DEFINE0(rt_sigreturn)
if (restore_altstack(&frame->uc.uc_stack))
goto badframe;
regs->cause = -1UL;
return regs->a0;
badframe:


@ -1275,6 +1275,7 @@ struct kvm_x86_ops {
int (*mem_enc_op)(struct kvm *kvm, void __user *argp);
int (*mem_enc_reg_region)(struct kvm *kvm, struct kvm_enc_region *argp);
int (*mem_enc_unreg_region)(struct kvm *kvm, struct kvm_enc_region *argp);
void (*guest_memory_reclaimed)(struct kvm *kvm);
int (*get_msr_feature)(struct kvm_msr_entry *entry);


@ -1177,6 +1177,14 @@ void sev_hardware_teardown(void)
sev_flush_asids();
}
void sev_guest_memory_reclaimed(struct kvm *kvm)
{
if (!sev_guest(kvm))
return;
wbinvd_on_all_cpus();
}
void pre_sev_run(struct vcpu_svm *svm, int cpu)
{
struct svm_cpu_data *sd = per_cpu(svm_data, cpu);


@ -4325,6 +4325,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.mem_enc_op = svm_mem_enc_op,
.mem_enc_reg_region = svm_register_enc_region,
.mem_enc_unreg_region = svm_unregister_enc_region,
.guest_memory_reclaimed = sev_guest_memory_reclaimed,
.can_emulate_instruction = svm_can_emulate_instruction,


@ -491,6 +491,8 @@ int svm_register_enc_region(struct kvm *kvm,
struct kvm_enc_region *range);
int svm_unregister_enc_region(struct kvm *kvm,
struct kvm_enc_region *range);
void sev_guest_memory_reclaimed(struct kvm *kvm);
void pre_sev_run(struct vcpu_svm *svm, int cpu);
int __init sev_hardware_setup(void);
void sev_hardware_teardown(void);


@ -8875,6 +8875,12 @@ void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
}
void kvm_arch_guest_memory_reclaimed(struct kvm *kvm)
{
if (kvm_x86_ops.guest_memory_reclaimed)
kvm_x86_ops.guest_memory_reclaimed(kvm);
}
void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
{
if (!lapic_in_kernel(vcpu))


@ -15,6 +15,7 @@ void hmem_register_device(int target_nid, struct resource *r)
.start = r->start,
.end = r->end,
.flags = IORESOURCE_MEM,
.desc = IORES_DESC_SOFT_RESERVED,
};
struct platform_device *pdev;
struct memregion_info info;


@ -31,14 +31,14 @@ struct udma_dev *of_xudma_dev_get(struct device_node *np, const char *property)
}
pdev = of_find_device_by_node(udma_node);
if (np != udma_node)
of_node_put(udma_node);
if (!pdev) {
pr_debug("UDMA device not found\n");
return ERR_PTR(-EPROBE_DEFER);
}
if (np != udma_node)
of_node_put(udma_node);
ud = platform_get_drvdata(pdev);
if (!ud) {
pr_debug("UDMA has not been probed\n");


@ -19,7 +19,7 @@ static const efi_char16_t efi_SetupMode_name[] = L"SetupMode";
/* SHIM variables */
static const efi_guid_t shim_guid = EFI_SHIM_LOCK_GUID;
static const efi_char16_t shim_MokSBState_name[] = L"MokSBState";
static const efi_char16_t shim_MokSBState_name[] = L"MokSBStateRT";
/*
* Determine whether we're in secure boot mode.
@ -53,8 +53,8 @@ enum efi_secureboot_mode efi_get_secureboot(void)
/*
* See if a user has put the shim into insecure mode. If so, and if the
* variable doesn't have the runtime attribute set, we might as well
* honor that.
* variable doesn't have the non-volatile attribute set, we might as
* well honor that.
*/
size = sizeof(moksbstate);
status = get_efi_var(shim_MokSBState_name, &shim_guid,
@ -63,7 +63,7 @@ enum efi_secureboot_mode efi_get_secureboot(void)
/* If it fails, we don't care why. Default to secure */
if (status != EFI_SUCCESS)
goto secure_boot_enabled;
if (!(attr & EFI_VARIABLE_RUNTIME_ACCESS) && moksbstate == 1)
if (!(attr & EFI_VARIABLE_NON_VOLATILE) && moksbstate == 1)
return efi_secureboot_mode_disabled;
secure_boot_enabled:


@ -414,6 +414,13 @@ efi_status_t __efiapi efi_pe_entry(efi_handle_t handle,
hdr->ramdisk_image = 0;
hdr->ramdisk_size = 0;
/*
* Disregard any setup data that was provided by the bootloader:
* setup_data could be pointing anywhere, and we have no way of
* authenticating or validating the payload.
*/
hdr->setup_data = 0;
efi_stub_entry(handle, sys_table_arg, boot_params);
/* not reached */


@ -604,9 +604,9 @@ static int __init gpio_mockup_init(void)
static void __exit gpio_mockup_exit(void)
{
gpio_mockup_unregister_pdevs();
debugfs_remove_recursive(gpio_mockup_dbg_dir);
platform_driver_unregister(&gpio_mockup_driver);
gpio_mockup_unregister_pdevs();
}
module_init(gpio_mockup_init);


@ -1769,7 +1769,6 @@ static int lineevent_create(struct gpio_device *gdev, void __user *ip)
ret = -ENODEV;
goto out_free_le;
}
le->irq = irq;
if (eflags & GPIOEVENT_REQUEST_RISING_EDGE)
irqflags |= test_bit(FLAG_ACTIVE_LOW, &desc->flags) ?
@ -1783,7 +1782,7 @@ static int lineevent_create(struct gpio_device *gdev, void __user *ip)
init_waitqueue_head(&le->wait);
/* Request a thread to read the events */
ret = request_threaded_irq(le->irq,
ret = request_threaded_irq(irq,
lineevent_irq_handler,
lineevent_irq_thread,
irqflags,
@ -1792,6 +1791,8 @@ static int lineevent_create(struct gpio_device *gdev, void __user *ip)
if (ret)
goto out_free_le;
le->irq = irq;
fd = get_unused_fd_flags(O_RDONLY | O_CLOEXEC);
if (fd < 0) {
ret = fd;


@ -2047,6 +2047,11 @@ static int amdgpu_device_ip_early_init(struct amdgpu_device *adev)
amdgpu_vf_error_put(adev, AMDGIM_ERROR_VF_ATOMBIOS_INIT_FAIL, 0, 0);
return r;
}
/*get pf2vf msg info at it's earliest time*/
if (amdgpu_sriov_vf(adev))
amdgpu_virt_init_data_exchange(adev);
}
}
@ -2174,8 +2179,20 @@ static int amdgpu_device_ip_init(struct amdgpu_device *adev)
}
adev->ip_blocks[i].status.sw = true;
/* need to do gmc hw init early so we can allocate gpu mem */
if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GMC) {
if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_COMMON) {
/* need to do common hw init early so everything is set up for gmc */
r = adev->ip_blocks[i].version->funcs->hw_init((void *)adev);
if (r) {
DRM_ERROR("hw_init %d failed %d\n", i, r);
goto init_failed;
}
adev->ip_blocks[i].status.hw = true;
} else if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GMC) {
/* need to do gmc hw init early so we can allocate gpu mem */
/* Try to reserve bad pages early */
if (amdgpu_sriov_vf(adev))
amdgpu_virt_exchange_data(adev);
r = amdgpu_device_vram_scratch_init(adev);
if (r) {
DRM_ERROR("amdgpu_vram_scratch_init failed %d\n", r);
@ -2753,8 +2770,8 @@ static int amdgpu_device_ip_reinit_early_sriov(struct amdgpu_device *adev)
int i, r;
static enum amd_ip_block_type ip_order[] = {
AMD_IP_BLOCK_TYPE_GMC,
AMD_IP_BLOCK_TYPE_COMMON,
AMD_IP_BLOCK_TYPE_GMC,
AMD_IP_BLOCK_TYPE_PSP,
AMD_IP_BLOCK_TYPE_IH,
};


@ -35,6 +35,7 @@
#include <linux/pci.h>
#include <linux/pm_runtime.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_damage_helper.h>
#include <drm/drm_edid.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_fb_helper.h>
@ -498,6 +499,7 @@ bool amdgpu_display_ddc_probe(struct amdgpu_connector *amdgpu_connector,
static const struct drm_framebuffer_funcs amdgpu_fb_funcs = {
.destroy = drm_gem_fb_destroy,
.create_handle = drm_gem_fb_create_handle,
.dirty = drm_atomic_helper_dirtyfb,
};
uint32_t amdgpu_display_supported_domains(struct amdgpu_device *adev,


@ -1979,15 +1979,12 @@ int amdgpu_ras_request_reset_on_boot(struct amdgpu_device *adev,
return 0;
}
static int amdgpu_ras_check_asic_type(struct amdgpu_device *adev)
static bool amdgpu_ras_asic_supported(struct amdgpu_device *adev)
{
if (adev->asic_type != CHIP_VEGA10 &&
adev->asic_type != CHIP_VEGA20 &&
adev->asic_type != CHIP_ARCTURUS &&
adev->asic_type != CHIP_SIENNA_CICHLID)
return 1;
else
return 0;
return adev->asic_type == CHIP_VEGA10 ||
adev->asic_type == CHIP_VEGA20 ||
adev->asic_type == CHIP_ARCTURUS ||
adev->asic_type == CHIP_SIENNA_CICHLID;
}
/*
@ -2006,7 +2003,7 @@ static void amdgpu_ras_check_supported(struct amdgpu_device *adev,
*supported = 0;
if (amdgpu_sriov_vf(adev) || !adev->is_atom_fw ||
amdgpu_ras_check_asic_type(adev))
!amdgpu_ras_asic_supported(adev))
return;
if (amdgpu_atomfirmware_mem_ecc_supported(adev)) {


@ -580,16 +580,34 @@ void amdgpu_virt_fini_data_exchange(struct amdgpu_device *adev)
void amdgpu_virt_init_data_exchange(struct amdgpu_device *adev)
{
uint64_t bp_block_offset = 0;
uint32_t bp_block_size = 0;
struct amd_sriov_msg_pf2vf_info *pf2vf_v2 = NULL;
adev->virt.fw_reserve.p_pf2vf = NULL;
adev->virt.fw_reserve.p_vf2pf = NULL;
adev->virt.vf2pf_update_interval_ms = 0;
if (adev->mman.fw_vram_usage_va != NULL) {
adev->virt.vf2pf_update_interval_ms = 2000;
/* go through this logic in ip_init and reset to init workqueue*/
amdgpu_virt_exchange_data(adev);
INIT_DELAYED_WORK(&adev->virt.vf2pf_work, amdgpu_virt_update_vf2pf_work_item);
schedule_delayed_work(&(adev->virt.vf2pf_work), msecs_to_jiffies(adev->virt.vf2pf_update_interval_ms));
} else if (adev->bios != NULL) {
/* got through this logic in early init stage to get necessary flags, e.g. rlcg_acc related*/
adev->virt.fw_reserve.p_pf2vf =
(struct amd_sriov_msg_pf2vf_info_header *)
(adev->bios + (AMD_SRIOV_MSG_PF2VF_OFFSET_KB << 10));
amdgpu_virt_read_pf2vf_data(adev);
}
}
void amdgpu_virt_exchange_data(struct amdgpu_device *adev)
{
uint64_t bp_block_offset = 0;
uint32_t bp_block_size = 0;
struct amd_sriov_msg_pf2vf_info *pf2vf_v2 = NULL;
if (adev->mman.fw_vram_usage_va != NULL) {
adev->virt.fw_reserve.p_pf2vf =
(struct amd_sriov_msg_pf2vf_info_header *)
@ -616,13 +634,9 @@ void amdgpu_virt_init_data_exchange(struct amdgpu_device *adev)
amdgpu_virt_add_bad_page(adev, bp_block_offset, bp_block_size);
}
}
if (adev->virt.vf2pf_update_interval_ms != 0) {
INIT_DELAYED_WORK(&adev->virt.vf2pf_work, amdgpu_virt_update_vf2pf_work_item);
schedule_delayed_work(&(adev->virt.vf2pf_work), adev->virt.vf2pf_update_interval_ms);
}
}
void amdgpu_detect_virtualization(struct amdgpu_device *adev)
{
uint32_t reg;


@ -271,6 +271,7 @@ int amdgpu_virt_alloc_mm_table(struct amdgpu_device *adev);
void amdgpu_virt_free_mm_table(struct amdgpu_device *adev);
void amdgpu_virt_release_ras_err_handler_data(struct amdgpu_device *adev);
void amdgpu_virt_init_data_exchange(struct amdgpu_device *adev);
void amdgpu_virt_exchange_data(struct amdgpu_device *adev);
void amdgpu_virt_fini_data_exchange(struct amdgpu_device *adev);
void amdgpu_detect_virtualization(struct amdgpu_device *adev);


@ -1475,6 +1475,11 @@ static int sdma_v4_0_start(struct amdgpu_device *adev)
WREG32_SDMA(i, mmSDMA0_CNTL, temp);
if (!amdgpu_sriov_vf(adev)) {
ring = &adev->sdma.instance[i].ring;
adev->nbio.funcs->sdma_doorbell_range(adev, i,
ring->use_doorbell, ring->doorbell_index,
adev->doorbell_index.sdma_doorbell_range);
/* unhalt engine */
temp = RREG32_SDMA(i, mmSDMA0_F32_CNTL);
temp = REG_SET_FIELD(temp, SDMA0_F32_CNTL, HALT, 0);


@ -1332,25 +1332,6 @@ static int soc15_common_sw_fini(void *handle)
return 0;
}
static void soc15_doorbell_range_init(struct amdgpu_device *adev)
{
int i;
struct amdgpu_ring *ring;
/* sdma/ih doorbell range are programed by hypervisor */
if (!amdgpu_sriov_vf(adev)) {
for (i = 0; i < adev->sdma.num_instances; i++) {
ring = &adev->sdma.instance[i].ring;
adev->nbio.funcs->sdma_doorbell_range(adev, i,
ring->use_doorbell, ring->doorbell_index,
adev->doorbell_index.sdma_doorbell_range);
}
adev->nbio.funcs->ih_doorbell_range(adev, adev->irq.ih.use_doorbell,
adev->irq.ih.doorbell_index);
}
}
static int soc15_common_hw_init(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
@ -1370,12 +1351,6 @@ static int soc15_common_hw_init(void *handle)
/* enable the doorbell aperture */
soc15_enable_doorbell_aperture(adev, true);
/* HW doorbell routing policy: doorbell writing not
* in SDMA/IH/MM/ACV range will be routed to CP. So
* we need to init SDMA/IH/MM/ACV doorbell range prior
* to CP ip block init and ring test.
*/
soc15_doorbell_range_init(adev);
return 0;
}


@ -6653,8 +6653,7 @@ static double CalculateUrgentLatency(
return ret;
}
static void UseMinimumDCFCLK(
static noinline_for_stack void UseMinimumDCFCLK(
struct display_mode_lib *mode_lib,
int MaxInterDCNTileRepeaters,
int MaxPrefetchMode,


@ -1524,6 +1524,7 @@ static void interpolate_user_regamma(uint32_t hw_points_num,
struct fixed31_32 lut2;
struct fixed31_32 delta_lut;
struct fixed31_32 delta_index;
const struct fixed31_32 one = dc_fixpt_from_int(1);
i = 0;
/* fixed_pt library has problems handling too small values */
@ -1552,6 +1553,9 @@ static void interpolate_user_regamma(uint32_t hw_points_num,
} else
hw_x = coordinates_x[i].x;
if (dc_fixpt_le(one, hw_x))
hw_x = one;
norm_x = dc_fixpt_mul(norm_factor, hw_x);
index = dc_fixpt_floor(norm_x);
if (index < 0 || index > 255)


@ -529,15 +529,18 @@ int gma_crtc_page_flip(struct drm_crtc *crtc,
WARN_ON(drm_crtc_vblank_get(crtc) != 0);
gma_crtc->page_flip_event = event;
spin_unlock_irqrestore(&dev->event_lock, flags);
/* Call this locked if we want an event at vblank interrupt. */
ret = crtc_funcs->mode_set_base(crtc, crtc->x, crtc->y, old_fb);
if (ret) {
gma_crtc->page_flip_event = NULL;
drm_crtc_vblank_put(crtc);
spin_lock_irqsave(&dev->event_lock, flags);
if (gma_crtc->page_flip_event) {
gma_crtc->page_flip_event = NULL;
drm_crtc_vblank_put(crtc);
}
spin_unlock_irqrestore(&dev->event_lock, flags);
}
spin_unlock_irqrestore(&dev->event_lock, flags);
} else {
ret = crtc_funcs->mode_set_base(crtc, crtc->x, crtc->y, old_fb);
}


@ -1,7 +1,8 @@
# SPDX-License-Identifier: GPL-2.0-only
config DRM_HISI_HIBMC
tristate "DRM Support for Hisilicon Hibmc"
depends on DRM && PCI && ARM64
depends on DRM && PCI && (ARM64 || COMPILE_TEST)
depends on MMU
select DRM_KMS_HELPER
select DRM_VRAM_HELPER
select DRM_TTM


@ -668,6 +668,16 @@ static void mtk_dsi_poweroff(struct mtk_dsi *dsi)
if (--dsi->refcount != 0)
return;
/*
* mtk_dsi_stop() and mtk_dsi_start() is asymmetric, since
* mtk_dsi_stop() should be called after mtk_drm_crtc_atomic_disable(),
* which needs irq for vblank, and mtk_dsi_stop() will disable irq.
* mtk_dsi_start() needs to be called in mtk_output_dsi_enable(),
* after dsi is fully set.
*/
mtk_dsi_stop(dsi);
mtk_dsi_switch_to_cmd_mode(dsi, VM_DONE_INT_FLAG, 500);
mtk_dsi_reset_engine(dsi);
mtk_dsi_lane0_ulp_mode_enter(dsi);
mtk_dsi_clk_ulp_mode_enter(dsi);
@ -718,17 +728,6 @@ static void mtk_output_dsi_disable(struct mtk_dsi *dsi)
if (!dsi->enabled)
return;
/*
* mtk_dsi_stop() and mtk_dsi_start() is asymmetric, since
* mtk_dsi_stop() should be called after mtk_drm_crtc_atomic_disable(),
* which needs irq for vblank, and mtk_dsi_stop() will disable irq.
* mtk_dsi_start() needs to be called in mtk_output_dsi_enable(),
* after dsi is fully set.
*/
mtk_dsi_stop(dsi);
mtk_dsi_switch_to_cmd_mode(dsi, VM_DONE_INT_FLAG, 500);
dsi->enabled = false;
}
@ -791,10 +790,13 @@ static void mtk_dsi_bridge_atomic_post_disable(struct drm_bridge *bridge,
static const struct drm_bridge_funcs mtk_dsi_bridge_funcs = {
.attach = mtk_dsi_bridge_attach,
.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
.atomic_disable = mtk_dsi_bridge_atomic_disable,
.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
.atomic_enable = mtk_dsi_bridge_atomic_enable,
.atomic_pre_enable = mtk_dsi_bridge_atomic_pre_enable,
.atomic_post_disable = mtk_dsi_bridge_atomic_post_disable,
.atomic_reset = drm_atomic_helper_bridge_reset,
.mode_set = mtk_dsi_bridge_mode_set,
};


@ -2201,7 +2201,7 @@ static const struct panel_desc innolux_g121i1_l01 = {
.enable = 200,
.disable = 20,
},
.bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG,
.bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG,
.connector_type = DRM_MODE_CONNECTOR_LVDS,
};


@ -276,8 +276,9 @@ static int cdn_dp_connector_get_modes(struct drm_connector *connector)
return ret;
}
static int cdn_dp_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
static enum drm_mode_status
cdn_dp_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
{
struct cdn_dp_device *dp = connector_to_dp(connector);
struct drm_display_info *display_info = &dp->connector.display_info;


@ -2251,7 +2251,7 @@ int vmbus_allocate_mmio(struct resource **new, struct hv_device *device_obj,
bool fb_overlap_ok)
{
struct resource *iter, *shadow;
resource_size_t range_min, range_max, start;
resource_size_t range_min, range_max, start, end;
const char *dev_n = dev_name(&device_obj->device);
int retval;
@ -2286,6 +2286,14 @@ int vmbus_allocate_mmio(struct resource **new, struct hv_device *device_obj,
range_max = iter->end;
start = (range_min + align - 1) & ~(align - 1);
for (; start + size - 1 <= range_max; start += align) {
end = start + size - 1;
/* Skip the whole fb_mmio region if not fb_overlap_ok */
if (!fb_overlap_ok && fb_mmio &&
(((start >= fb_mmio->start) && (start <= fb_mmio->end)) ||
((end >= fb_mmio->start) && (end <= fb_mmio->end))))
continue;
shadow = __request_region(iter, start, size, NULL,
IORESOURCE_BUSY);
if (!shadow)


@ -1289,7 +1289,7 @@ static int i2c_imx_remove(struct platform_device *pdev)
if (i2c_imx->dma)
i2c_imx_dma_free(i2c_imx);
if (ret == 0) {
if (ret >= 0) {
/* setup chip registers to defaults */
imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IADR);
imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IFDR);


@ -6,6 +6,7 @@
*/
#include <linux/acpi.h>
#include <linux/bitfield.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/interrupt.h>
@ -63,13 +64,14 @@
*/
#define MLXBF_I2C_TYU_PLL_OUT_FREQ (400 * 1000 * 1000)
/* Reference clock for Bluefield - 156 MHz. */
#define MLXBF_I2C_PLL_IN_FREQ (156 * 1000 * 1000)
#define MLXBF_I2C_PLL_IN_FREQ 156250000ULL
/* Constant used to determine the PLL frequency. */
#define MLNXBF_I2C_COREPLL_CONST 16384
#define MLNXBF_I2C_COREPLL_CONST 16384ULL
#define MLXBF_I2C_FREQUENCY_1GHZ 1000000000ULL
/* PLL registers. */
#define MLXBF_I2C_CORE_PLL_REG0 0x0
#define MLXBF_I2C_CORE_PLL_REG1 0x4
#define MLXBF_I2C_CORE_PLL_REG2 0x8
@ -187,22 +189,15 @@ enum {
#define MLXBF_I2C_COREPLL_FREQ MLXBF_I2C_TYU_PLL_OUT_FREQ
/* Core PLL TYU configuration. */
#define MLXBF_I2C_COREPLL_CORE_F_TYU_MASK GENMASK(12, 0)
#define MLXBF_I2C_COREPLL_CORE_OD_TYU_MASK GENMASK(3, 0)
#define MLXBF_I2C_COREPLL_CORE_R_TYU_MASK GENMASK(5, 0)
#define MLXBF_I2C_COREPLL_CORE_F_TYU_SHIFT 3
#define MLXBF_I2C_COREPLL_CORE_OD_TYU_SHIFT 16
#define MLXBF_I2C_COREPLL_CORE_R_TYU_SHIFT 20
#define MLXBF_I2C_COREPLL_CORE_F_TYU_MASK GENMASK(15, 3)
#define MLXBF_I2C_COREPLL_CORE_OD_TYU_MASK GENMASK(19, 16)
#define MLXBF_I2C_COREPLL_CORE_R_TYU_MASK GENMASK(25, 20)
/* Core PLL YU configuration. */
#define MLXBF_I2C_COREPLL_CORE_F_YU_MASK GENMASK(25, 0)
#define MLXBF_I2C_COREPLL_CORE_OD_YU_MASK GENMASK(3, 0)
#define MLXBF_I2C_COREPLL_CORE_R_YU_MASK GENMASK(5, 0)
#define MLXBF_I2C_COREPLL_CORE_R_YU_MASK GENMASK(31, 26)
#define MLXBF_I2C_COREPLL_CORE_F_YU_SHIFT 0
#define MLXBF_I2C_COREPLL_CORE_OD_YU_SHIFT 1
#define MLXBF_I2C_COREPLL_CORE_R_YU_SHIFT 26
/* Core PLL frequency. */
static u64 mlxbf_i2c_corepll_frequency;
@ -485,8 +480,6 @@ static struct mutex mlxbf_i2c_bus_lock;
#define MLXBF_I2C_MASK_8 GENMASK(7, 0)
#define MLXBF_I2C_MASK_16 GENMASK(15, 0)
#define MLXBF_I2C_FREQUENCY_1GHZ 1000000000
/*
* Function to poll a set of bits at a specific address; it checks whether
* the bits are equal to zero when eq_zero is set to 'true', and not equal
@ -675,7 +668,7 @@ static int mlxbf_i2c_smbus_enable(struct mlxbf_i2c_priv *priv, u8 slave,
/* Clear status bits. */
writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_MASTER_STATUS);
/* Set the cause data. */
writel(~0x0, priv->smbus->io + MLXBF_I2C_CAUSE_OR_CLEAR);
writel(~0x0, priv->mst_cause->io + MLXBF_I2C_CAUSE_OR_CLEAR);
/* Zero PEC byte. */
writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_MASTER_PEC);
/* Zero byte count. */
@ -744,6 +737,9 @@ mlxbf_i2c_smbus_start_transaction(struct mlxbf_i2c_priv *priv,
if (flags & MLXBF_I2C_F_WRITE) {
write_en = 1;
write_len += operation->length;
if (data_idx + operation->length >
MLXBF_I2C_MASTER_DATA_DESC_SIZE)
return -ENOBUFS;
memcpy(data_desc + data_idx,
operation->buffer, operation->length);
data_idx += operation->length;
@ -1413,24 +1409,19 @@ static int mlxbf_i2c_init_master(struct platform_device *pdev,
return 0;
}
static u64 mlxbf_calculate_freq_from_tyu(struct mlxbf_i2c_resource *corepll_res)
static u64 mlxbf_i2c_calculate_freq_from_tyu(struct mlxbf_i2c_resource *corepll_res)
{
u64 core_frequency, pad_frequency;
u64 core_frequency;
u8 core_od, core_r;
u32 corepll_val;
u16 core_f;
pad_frequency = MLXBF_I2C_PLL_IN_FREQ;
corepll_val = readl(corepll_res->io + MLXBF_I2C_CORE_PLL_REG1);
/* Get Core PLL configuration bits. */
core_f = rol32(corepll_val, MLXBF_I2C_COREPLL_CORE_F_TYU_SHIFT) &
MLXBF_I2C_COREPLL_CORE_F_TYU_MASK;
core_od = rol32(corepll_val, MLXBF_I2C_COREPLL_CORE_OD_TYU_SHIFT) &
MLXBF_I2C_COREPLL_CORE_OD_TYU_MASK;
core_r = rol32(corepll_val, MLXBF_I2C_COREPLL_CORE_R_TYU_SHIFT) &
MLXBF_I2C_COREPLL_CORE_R_TYU_MASK;
core_f = FIELD_GET(MLXBF_I2C_COREPLL_CORE_F_TYU_MASK, corepll_val);
core_od = FIELD_GET(MLXBF_I2C_COREPLL_CORE_OD_TYU_MASK, corepll_val);
core_r = FIELD_GET(MLXBF_I2C_COREPLL_CORE_R_TYU_MASK, corepll_val);
/*
* Compute PLL output frequency as follow:
@ -1442,31 +1433,26 @@ static u64 mlxbf_calculate_freq_from_tyu(struct mlxbf_i2c_resource *corepll_res)
* Where PLL_OUT_FREQ and PLL_IN_FREQ refer to CoreFrequency
* and PadFrequency, respectively.
*/
core_frequency = pad_frequency * (++core_f);
core_frequency = MLXBF_I2C_PLL_IN_FREQ * (++core_f);
core_frequency /= (++core_r) * (++core_od);
return core_frequency;
}
static u64 mlxbf_calculate_freq_from_yu(struct mlxbf_i2c_resource *corepll_res)
static u64 mlxbf_i2c_calculate_freq_from_yu(struct mlxbf_i2c_resource *corepll_res)
{
u32 corepll_reg1_val, corepll_reg2_val;
u64 corepll_frequency, pad_frequency;
u64 corepll_frequency;
u8 core_od, core_r;
u32 core_f;
pad_frequency = MLXBF_I2C_PLL_IN_FREQ;
corepll_reg1_val = readl(corepll_res->io + MLXBF_I2C_CORE_PLL_REG1);
corepll_reg2_val = readl(corepll_res->io + MLXBF_I2C_CORE_PLL_REG2);
/* Get Core PLL configuration bits */
core_f = rol32(corepll_reg1_val, MLXBF_I2C_COREPLL_CORE_F_YU_SHIFT) &
MLXBF_I2C_COREPLL_CORE_F_YU_MASK;
core_r = rol32(corepll_reg1_val, MLXBF_I2C_COREPLL_CORE_R_YU_SHIFT) &
MLXBF_I2C_COREPLL_CORE_R_YU_MASK;
core_od = rol32(corepll_reg2_val, MLXBF_I2C_COREPLL_CORE_OD_YU_SHIFT) &
MLXBF_I2C_COREPLL_CORE_OD_YU_MASK;
core_f = FIELD_GET(MLXBF_I2C_COREPLL_CORE_F_YU_MASK, corepll_reg1_val);
core_r = FIELD_GET(MLXBF_I2C_COREPLL_CORE_R_YU_MASK, corepll_reg1_val);
core_od = FIELD_GET(MLXBF_I2C_COREPLL_CORE_OD_YU_MASK, corepll_reg2_val);
/*
* Compute PLL output frequency as follow:
@ -1478,7 +1464,7 @@ static u64 mlxbf_calculate_freq_from_yu(struct mlxbf_i2c_resource *corepll_res)
* Where PLL_OUT_FREQ and PLL_IN_FREQ refer to CoreFrequency
* and PadFrequency, respectively.
*/
corepll_frequency = (pad_frequency * core_f) / MLNXBF_I2C_COREPLL_CONST;
corepll_frequency = (MLXBF_I2C_PLL_IN_FREQ * core_f) / MLNXBF_I2C_COREPLL_CONST;
corepll_frequency /= (++core_r) * (++core_od);
return corepll_frequency;
@ -2186,14 +2172,14 @@ static struct mlxbf_i2c_chip_info mlxbf_i2c_chip[] = {
[1] = &mlxbf_i2c_corepll_res[MLXBF_I2C_CHIP_TYPE_1],
[2] = &mlxbf_i2c_gpio_res[MLXBF_I2C_CHIP_TYPE_1]
},
.calculate_freq = mlxbf_calculate_freq_from_tyu
.calculate_freq = mlxbf_i2c_calculate_freq_from_tyu
},
[MLXBF_I2C_CHIP_TYPE_2] = {
.type = MLXBF_I2C_CHIP_TYPE_2,
.shared_res = {
[0] = &mlxbf_i2c_corepll_res[MLXBF_I2C_CHIP_TYPE_2]
},
.calculate_freq = mlxbf_calculate_freq_from_yu
.calculate_freq = mlxbf_i2c_calculate_freq_from_yu
}
};


@ -20,13 +20,18 @@ void qcom_icc_pre_aggregate(struct icc_node *node)
{
size_t i;
struct qcom_icc_node *qn;
struct qcom_icc_provider *qp;
qn = node->data;
qp = to_qcom_provider(node->provider);
for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) {
qn->sum_avg[i] = 0;
qn->max_peak[i] = 0;
}
for (i = 0; i < qn->num_bcms; i++)
qcom_icc_bcm_voter_add(qp->voter, qn->bcms[i]);
}
EXPORT_SYMBOL_GPL(qcom_icc_pre_aggregate);
@ -44,10 +49,8 @@ int qcom_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
{
size_t i;
struct qcom_icc_node *qn;
struct qcom_icc_provider *qp;
qn = node->data;
qp = to_qcom_provider(node->provider);
if (!tag)
tag = QCOM_ICC_TAG_ALWAYS;
@ -67,9 +70,6 @@ int qcom_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
*agg_avg += avg_bw;
*agg_peak = max_t(u32, *agg_peak, peak_bw);
for (i = 0; i < qn->num_bcms; i++)
qcom_icc_bcm_voter_add(qp->voter, qn->bcms[i]);
return 0;
}
EXPORT_SYMBOL_GPL(qcom_icc_aggregate);


@ -627,7 +627,6 @@ static struct platform_driver qnoc_driver = {
.driver = {
.name = "qnoc-sm8150",
.of_match_table = qnoc_of_match,
.sync_state = icc_sync_state,
},
};
module_platform_driver(qnoc_driver);


@ -643,7 +643,6 @@ static struct platform_driver qnoc_driver = {
.driver = {
.name = "qnoc-sm8250",
.of_match_table = qnoc_of_match,
.sync_state = icc_sync_state,
},
};
module_platform_driver(qnoc_driver);


@ -569,7 +569,7 @@ static unsigned long __iommu_calculate_sagaw(struct intel_iommu *iommu)
{
unsigned long fl_sagaw, sl_sagaw;
fl_sagaw = BIT(2) | (cap_fl1gp_support(iommu->cap) ? BIT(3) : 0);
fl_sagaw = BIT(2) | (cap_5lp_support(iommu->cap) ? BIT(3) : 0);
sl_sagaw = cap_sagaw(iommu->cap);
/* Second level only. */


@ -512,7 +512,7 @@ static int flexcop_usb_init(struct flexcop_usb *fc_usb)
if (fc_usb->uintf->cur_altsetting->desc.bNumEndpoints < 1)
return -ENODEV;
if (!usb_endpoint_is_isoc_in(&fc_usb->uintf->cur_altsetting->endpoint[1].desc))
if (!usb_endpoint_is_isoc_in(&fc_usb->uintf->cur_altsetting->endpoint[0].desc))
return -ENODEV;
switch (fc_usb->udev->speed) {


@ -936,15 +936,16 @@ int mmc_sd_setup_card(struct mmc_host *host, struct mmc_card *card,
/* Erase init depends on CSD and SSR */
mmc_init_erase(card);
/*
* Fetch switch information from card.
*/
err = mmc_read_switch(card);
if (err)
return err;
}
/*
* Fetch switch information from card. Note, sd3_bus_mode can change if
* voltage switch outcome changes, so do this always.
*/
err = mmc_read_switch(card);
if (err)
return err;
/*
* For SPI, enable CRC as appropriate.
* This CRC enable is located AFTER the reading of the
@ -1093,26 +1094,15 @@ static int mmc_sd_init_card(struct mmc_host *host, u32 ocr,
if (!v18_fixup_failed && !mmc_host_is_spi(host) && mmc_host_uhs(host) &&
mmc_sd_card_using_v18(card) &&
host->ios.signal_voltage != MMC_SIGNAL_VOLTAGE_180) {
/*
* Re-read switch information in case it has changed since
* oldcard was initialized.
*/
if (oldcard) {
err = mmc_read_switch(card);
if (err)
goto free_card;
}
if (mmc_sd_card_using_v18(card)) {
if (mmc_host_set_uhs_voltage(host) ||
mmc_sd_init_uhs_card(card)) {
v18_fixup_failed = true;
mmc_power_cycle(host, ocr);
if (!oldcard)
mmc_remove_card(card);
goto retry;
}
goto cont;
if (mmc_host_set_uhs_voltage(host) ||
mmc_sd_init_uhs_card(card)) {
v18_fixup_failed = true;
mmc_power_cycle(host, ocr);
if (!oldcard)
mmc_remove_card(card);
goto retry;
}
goto cont;
}
/* Initialization sequence for UHS-I cards */


@ -85,8 +85,9 @@ static const u8 null_mac_addr[ETH_ALEN + 2] __long_aligned = {
static u16 ad_ticks_per_sec;
static const int ad_delta_in_ticks = (AD_TIMER_INTERVAL * HZ) / 1000;
static const u8 lacpdu_mcast_addr[ETH_ALEN + 2] __long_aligned =
MULTICAST_LACPDU_ADDR;
const u8 lacpdu_mcast_addr[ETH_ALEN + 2] __long_aligned = {
0x01, 0x80, 0xC2, 0x00, 0x00, 0x02
};
/* ================= main 802.3ad protocol functions ================== */
static int ad_lacpdu_send(struct port *port);


@ -827,12 +827,8 @@ static void bond_hw_addr_flush(struct net_device *bond_dev,
dev_uc_unsync(slave_dev, bond_dev);
dev_mc_unsync(slave_dev, bond_dev);
if (BOND_MODE(bond) == BOND_MODE_8023AD) {
/* del lacpdu mc addr from mc list */
u8 lacpdu_multicast[ETH_ALEN] = MULTICAST_LACPDU_ADDR;
dev_mc_del(slave_dev, lacpdu_multicast);
}
if (BOND_MODE(bond) == BOND_MODE_8023AD)
dev_mc_del(slave_dev, lacpdu_mcast_addr);
}
/*--------------------------- Active slave change ---------------------------*/
@ -852,7 +848,8 @@ static void bond_hw_addr_swap(struct bonding *bond, struct slave *new_active,
if (bond->dev->flags & IFF_ALLMULTI)
dev_set_allmulti(old_active->dev, -1);
bond_hw_addr_flush(bond->dev, old_active->dev);
if (bond->dev->flags & IFF_UP)
bond_hw_addr_flush(bond->dev, old_active->dev);
}
if (new_active) {
@ -863,10 +860,12 @@ static void bond_hw_addr_swap(struct bonding *bond, struct slave *new_active,
if (bond->dev->flags & IFF_ALLMULTI)
dev_set_allmulti(new_active->dev, 1);
netif_addr_lock_bh(bond->dev);
dev_uc_sync(new_active->dev, bond->dev);
dev_mc_sync(new_active->dev, bond->dev);
netif_addr_unlock_bh(bond->dev);
if (bond->dev->flags & IFF_UP) {
netif_addr_lock_bh(bond->dev);
dev_uc_sync(new_active->dev, bond->dev);
dev_mc_sync(new_active->dev, bond->dev);
netif_addr_unlock_bh(bond->dev);
}
}
}
@ -2073,16 +2072,14 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
}
}
netif_addr_lock_bh(bond_dev);
dev_mc_sync_multiple(slave_dev, bond_dev);
dev_uc_sync_multiple(slave_dev, bond_dev);
netif_addr_unlock_bh(bond_dev);
if (bond_dev->flags & IFF_UP) {
netif_addr_lock_bh(bond_dev);
dev_mc_sync_multiple(slave_dev, bond_dev);
dev_uc_sync_multiple(slave_dev, bond_dev);
netif_addr_unlock_bh(bond_dev);
if (BOND_MODE(bond) == BOND_MODE_8023AD) {
/* add lacpdu mc addr to mc list */
u8 lacpdu_multicast[ETH_ALEN] = MULTICAST_LACPDU_ADDR;
dev_mc_add(slave_dev, lacpdu_multicast);
if (BOND_MODE(bond) == BOND_MODE_8023AD)
dev_mc_add(slave_dev, lacpdu_mcast_addr);
}
}
@ -2310,7 +2307,8 @@ static int __bond_release_one(struct net_device *bond_dev,
if (old_flags & IFF_ALLMULTI)
dev_set_allmulti(slave_dev, -1);
bond_hw_addr_flush(bond_dev, slave_dev);
if (old_flags & IFF_UP)
bond_hw_addr_flush(bond_dev, slave_dev);
}
slave_disable_netpoll(slave);
@ -3772,6 +3770,9 @@ static int bond_open(struct net_device *bond_dev)
/* register to receive LACPDUs */
bond->recv_probe = bond_3ad_lacpdu_recv;
bond_3ad_initiate_agg_selection(bond, 1);
bond_for_each_slave(bond, slave, iter)
dev_mc_add(slave->dev, lacpdu_mcast_addr);
}
if (bond_mode_can_use_xmit_hash(bond))
@ -3783,6 +3784,7 @@ static int bond_open(struct net_device *bond_dev)
static int bond_close(struct net_device *bond_dev)
{
struct bonding *bond = netdev_priv(bond_dev);
struct slave *slave;
bond_work_cancel_all(bond);
bond->send_peer_notif = 0;
@ -3790,6 +3792,19 @@ static int bond_close(struct net_device *bond_dev)
bond_alb_deinitialize(bond);
bond->recv_probe = NULL;
if (bond_uses_primary(bond)) {
rcu_read_lock();
slave = rcu_dereference(bond->curr_active_slave);
if (slave)
bond_hw_addr_flush(bond_dev, slave->dev);
rcu_read_unlock();
} else {
struct list_head *iter;
bond_for_each_slave(bond, slave, iter)
bond_hw_addr_flush(bond_dev, slave->dev);
}
return 0;
}


@ -954,11 +954,6 @@ static struct sk_buff *flexcan_mailbox_read(struct can_rx_offload *offload,
u32 reg_ctrl, reg_id, reg_iflag1;
int i;
if (unlikely(drop)) {
skb = ERR_PTR(-ENOBUFS);
goto mark_as_read;
}
mb = flexcan_get_mb(priv, n);
if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
@ -987,6 +982,11 @@ static struct sk_buff *flexcan_mailbox_read(struct can_rx_offload *offload,
reg_ctrl = priv->read(&mb->can_ctrl);
}
if (unlikely(drop)) {
skb = ERR_PTR(-ENOBUFS);
goto mark_as_read;
}
if (reg_ctrl & FLEXCAN_MB_CNT_EDL)
skb = alloc_canfd_skb(offload->dev, &cfd);
else


@ -678,6 +678,7 @@ static int gs_can_open(struct net_device *netdev)
flags |= GS_CAN_MODE_TRIPLE_SAMPLE;
/* finally start device */
dev->can.state = CAN_STATE_ERROR_ACTIVE;
dm->mode = cpu_to_le32(GS_CAN_MODE_START);
dm->flags = cpu_to_le32(flags);
rc = usb_control_msg(interface_to_usbdev(dev->iface),
@ -694,13 +695,12 @@ static int gs_can_open(struct net_device *netdev)
if (rc < 0) {
netdev_err(netdev, "Couldn't start device (err=%d)\n", rc);
kfree(dm);
dev->can.state = CAN_STATE_STOPPED;
return rc;
}
kfree(dm);
dev->can.state = CAN_STATE_ERROR_ACTIVE;
parent->active_channels++;
if (!(dev->can.ctrlmode & CAN_CTRLMODE_LISTENONLY))
netif_start_queue(netdev);


@ -1671,29 +1671,6 @@ static int enetc_set_rss(struct net_device *ndev, int en)
return 0;
}
static int enetc_set_psfp(struct net_device *ndev, int en)
{
struct enetc_ndev_priv *priv = netdev_priv(ndev);
int err;
if (en) {
err = enetc_psfp_enable(priv);
if (err)
return err;
priv->active_offloads |= ENETC_F_QCI;
return 0;
}
err = enetc_psfp_disable(priv);
if (err)
return err;
priv->active_offloads &= ~ENETC_F_QCI;
return 0;
}
static void enetc_enable_rxvlan(struct net_device *ndev, bool en)
{
struct enetc_ndev_priv *priv = netdev_priv(ndev);
@ -1712,11 +1689,9 @@ static void enetc_enable_txvlan(struct net_device *ndev, bool en)
enetc_bdr_enable_txvlan(&priv->si->hw, i, en);
}
int enetc_set_features(struct net_device *ndev,
netdev_features_t features)
void enetc_set_features(struct net_device *ndev, netdev_features_t features)
{
netdev_features_t changed = ndev->features ^ features;
int err = 0;
if (changed & NETIF_F_RXHASH)
enetc_set_rss(ndev, !!(features & NETIF_F_RXHASH));
@ -1728,11 +1703,6 @@ int enetc_set_features(struct net_device *ndev,
if (changed & NETIF_F_HW_VLAN_CTAG_TX)
enetc_enable_txvlan(ndev,
!!(features & NETIF_F_HW_VLAN_CTAG_TX));
if (changed & NETIF_F_HW_TC)
err = enetc_set_psfp(ndev, !!(features & NETIF_F_HW_TC));
return err;
}
#ifdef CONFIG_FSL_ENETC_PTP_CLOCK


@ -301,8 +301,7 @@ void enetc_start(struct net_device *ndev);
void enetc_stop(struct net_device *ndev);
netdev_tx_t enetc_xmit(struct sk_buff *skb, struct net_device *ndev);
struct net_device_stats *enetc_get_stats(struct net_device *ndev);
int enetc_set_features(struct net_device *ndev,
netdev_features_t features);
void enetc_set_features(struct net_device *ndev, netdev_features_t features);
int enetc_ioctl(struct net_device *ndev, struct ifreq *rq, int cmd);
int enetc_setup_tc(struct net_device *ndev, enum tc_setup_type type,
void *type_data);
@ -335,6 +334,7 @@ int enetc_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
int enetc_setup_tc_psfp(struct net_device *ndev, void *type_data);
int enetc_psfp_init(struct enetc_ndev_priv *priv);
int enetc_psfp_clean(struct enetc_ndev_priv *priv);
int enetc_set_psfp(struct net_device *ndev, bool en);
static inline void enetc_get_max_cap(struct enetc_ndev_priv *priv)
{
@ -410,4 +410,9 @@ static inline int enetc_psfp_disable(struct enetc_ndev_priv *priv)
{
return 0;
}
static inline int enetc_set_psfp(struct net_device *ndev, bool en)
{
return 0;
}
#endif


@ -671,6 +671,13 @@ static int enetc_pf_set_features(struct net_device *ndev,
{
netdev_features_t changed = ndev->features ^ features;
struct enetc_ndev_priv *priv = netdev_priv(ndev);
int err;
if (changed & NETIF_F_HW_TC) {
err = enetc_set_psfp(ndev, !!(features & NETIF_F_HW_TC));
if (err)
return err;
}
if (changed & NETIF_F_HW_VLAN_CTAG_FILTER) {
struct enetc_pf *pf = enetc_si_priv(priv->si);
@ -684,7 +691,9 @@ static int enetc_pf_set_features(struct net_device *ndev,
if (changed & NETIF_F_LOOPBACK)
enetc_set_loopback(ndev, !!(features & NETIF_F_LOOPBACK));
return enetc_set_features(ndev, features);
enetc_set_features(ndev, features);
return 0;
}
static const struct net_device_ops enetc_ndev_ops = {


@ -1525,6 +1525,29 @@ int enetc_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
}
}
int enetc_set_psfp(struct net_device *ndev, bool en)
{
struct enetc_ndev_priv *priv = netdev_priv(ndev);
int err;
if (en) {
err = enetc_psfp_enable(priv);
if (err)
return err;
priv->active_offloads |= ENETC_F_QCI;
return 0;
}
err = enetc_psfp_disable(priv);
if (err)
return err;
priv->active_offloads &= ~ENETC_F_QCI;
return 0;
}
int enetc_psfp_init(struct enetc_ndev_priv *priv)
{
if (epsfp.psfp_sfi_bitmap)


@ -88,7 +88,9 @@ static int enetc_vf_set_mac_addr(struct net_device *ndev, void *addr)
static int enetc_vf_set_features(struct net_device *ndev,
netdev_features_t features)
{
return enetc_set_features(ndev, features);
enetc_set_features(ndev, features);
return 0;
}
/* Probing/ Init */


@ -5733,6 +5733,26 @@ static int i40e_get_link_speed(struct i40e_vsi *vsi)
}
}
/**
* i40e_bw_bytes_to_mbits - Convert max_tx_rate from bytes to mbits
* @vsi: Pointer to vsi structure
* @max_tx_rate: max TX rate in bytes to be converted into Mbits
*
* Helper function to convert units before send to set BW limit
**/
static u64 i40e_bw_bytes_to_mbits(struct i40e_vsi *vsi, u64 max_tx_rate)
{
if (max_tx_rate < I40E_BW_MBPS_DIVISOR) {
dev_warn(&vsi->back->pdev->dev,
"Setting max tx rate to minimum usable value of 50Mbps.\n");
max_tx_rate = I40E_BW_CREDIT_DIVISOR;
} else {
do_div(max_tx_rate, I40E_BW_MBPS_DIVISOR);
}
return max_tx_rate;
}
/**
* i40e_set_bw_limit - setup BW limit for Tx traffic based on max_tx_rate
* @vsi: VSI to be configured
@ -5755,10 +5775,10 @@ int i40e_set_bw_limit(struct i40e_vsi *vsi, u16 seid, u64 max_tx_rate)
max_tx_rate, seid);
return -EINVAL;
}
if (max_tx_rate && max_tx_rate < 50) {
if (max_tx_rate && max_tx_rate < I40E_BW_CREDIT_DIVISOR) {
dev_warn(&pf->pdev->dev,
"Setting max tx rate to minimum usable value of 50Mbps.\n");
max_tx_rate = 50;
max_tx_rate = I40E_BW_CREDIT_DIVISOR;
}
/* Tx rate credits are in values of 50Mbps, 0 is disabled */
@ -7719,9 +7739,9 @@ static int i40e_setup_tc(struct net_device *netdev, void *type_data)
if (pf->flags & I40E_FLAG_TC_MQPRIO) {
if (vsi->mqprio_qopt.max_rate[0]) {
u64 max_tx_rate = vsi->mqprio_qopt.max_rate[0];
u64 max_tx_rate = i40e_bw_bytes_to_mbits(vsi,
vsi->mqprio_qopt.max_rate[0]);
do_div(max_tx_rate, I40E_BW_MBPS_DIVISOR);
ret = i40e_set_bw_limit(vsi, vsi->seid, max_tx_rate);
if (!ret) {
u64 credits = max_tx_rate;
@ -10366,10 +10386,10 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
}
if (vsi->mqprio_qopt.max_rate[0]) {
u64 max_tx_rate = vsi->mqprio_qopt.max_rate[0];
u64 max_tx_rate = i40e_bw_bytes_to_mbits(vsi,
vsi->mqprio_qopt.max_rate[0]);
u64 credits = 0;
do_div(max_tx_rate, I40E_BW_MBPS_DIVISOR);
ret = i40e_set_bw_limit(vsi, vsi->seid, max_tx_rate);
if (ret)
goto end_unlock;


@ -1985,6 +1985,25 @@ static void i40e_del_qch(struct i40e_vf *vf)
}
}
/**
* i40e_vc_get_max_frame_size
* @vf: pointer to the VF
*
* Max frame size is determined based on the current port's max frame size and
* whether a port VLAN is configured on this VF. The VF is not aware whether
* it's in a port VLAN so the PF needs to account for this in max frame size
* checks and sending the max frame size to the VF.
**/
static u16 i40e_vc_get_max_frame_size(struct i40e_vf *vf)
{
u16 max_frame_size = vf->pf->hw.phy.link_info.max_frame_size;
if (vf->port_vlan_id)
max_frame_size -= VLAN_HLEN;
return max_frame_size;
}
/**
* i40e_vc_get_vf_resources_msg
* @vf: pointer to the VF info
@ -2085,6 +2104,7 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
vfres->max_vectors = pf->hw.func_caps.num_msix_vectors_vf;
vfres->rss_key_size = I40E_HKEY_ARRAY_SIZE;
vfres->rss_lut_size = I40E_VF_HLUT_ARRAY_SIZE;
vfres->max_mtu = i40e_vc_get_max_frame_size(vf);
if (vf->lan_vsi_idx) {
vfres->vsi_res[0].vsi_id = vf->lan_vsi_id;


@ -114,8 +114,11 @@ u32 iavf_get_tx_pending(struct iavf_ring *ring, bool in_sw)
{
u32 head, tail;
/* underlying hardware might not allow access and/or always return
* 0 for the head/tail registers so just use the cached values
*/
head = ring->next_to_clean;
tail = readl(ring->tail);
tail = ring->next_to_use;
if (head != tail)
return (head < tail) ?
@ -1368,7 +1371,7 @@ static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring,
#endif
struct sk_buff *skb;
if (!rx_buffer)
if (!rx_buffer || !size)
return NULL;
/* prefetch first cache line of first page */
va = page_address(rx_buffer->page) + rx_buffer->page_offset;
@ -1526,7 +1529,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
/* exit if we failed to retrieve a buffer */
if (!skb) {
rx_ring->rx_stats.alloc_buff_failed++;
if (rx_buffer)
if (rx_buffer && size)
rx_buffer->pagecnt_bias++;
break;
}


@ -241,11 +241,14 @@ int iavf_get_vf_config(struct iavf_adapter *adapter)
void iavf_configure_queues(struct iavf_adapter *adapter)
{
struct virtchnl_vsi_queue_config_info *vqci;
struct virtchnl_queue_pair_info *vqpi;
int i, max_frame = adapter->vf_res->max_mtu;
int pairs = adapter->num_active_queues;
int i, max_frame = IAVF_MAX_RXBUFFER;
struct virtchnl_queue_pair_info *vqpi;
size_t len;
if (max_frame > IAVF_MAX_RXBUFFER || !max_frame)
max_frame = IAVF_MAX_RXBUFFER;
if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) {
/* bail because we already have a command pending */
dev_err(&adapter->pdev->dev, "Cannot configure queues, command %d pending\n",


@ -308,7 +308,7 @@ int efx_probe_interrupts(struct efx_nic *efx)
efx->n_channels = 1 + (efx_separate_tx_channels ? 1 : 0);
efx->n_rx_channels = 1;
efx->n_tx_channels = 1;
efx->tx_channel_offset = 1;
efx->tx_channel_offset = efx_separate_tx_channels ? 1 : 0;
efx->n_xdp_channels = 0;
efx->xdp_channel_offset = efx->n_channels;
efx->legacy_irq = efx->pci_dev->irq;


@ -545,7 +545,7 @@ netdev_tx_t efx_hard_start_xmit(struct sk_buff *skb,
* previous packets out.
*/
if (!netdev_xmit_more())
efx_tx_send_pending(tx_queue->channel);
efx_tx_send_pending(efx_get_tx_channel(efx, index));
return NETDEV_TX_OK;
}


@ -2063,9 +2063,9 @@ static void happy_meal_rx(struct happy_meal *hp, struct net_device *dev)
skb_reserve(copy_skb, 2);
skb_put(copy_skb, len);
dma_sync_single_for_cpu(hp->dma_dev, dma_addr, len, DMA_FROM_DEVICE);
dma_sync_single_for_cpu(hp->dma_dev, dma_addr, len + 2, DMA_FROM_DEVICE);
skb_copy_from_linear_data(skb, copy_skb->data, len);
dma_sync_single_for_device(hp->dma_dev, dma_addr, len, DMA_FROM_DEVICE);
dma_sync_single_for_device(hp->dma_dev, dma_addr, len + 2, DMA_FROM_DEVICE);
/* Reuse original ring buffer. */
hme_write_rxd(hp, this,
(RXFLAG_OWN|((RX_BUF_ALLOC_SIZE-RX_OFFSET)<<16)),


@ -1251,20 +1251,18 @@ static void gsi_evt_ring_rx_update(struct gsi_evt_ring *evt_ring, u32 index)
/* Initialize a ring, including allocating DMA memory for its entries */
static int gsi_ring_alloc(struct gsi *gsi, struct gsi_ring *ring, u32 count)
{
size_t size = count * GSI_RING_ELEMENT_SIZE;
u32 size = count * GSI_RING_ELEMENT_SIZE;
struct device *dev = gsi->dev;
dma_addr_t addr;
/* Hardware requires a 2^n ring size, with alignment equal to size */
/* Hardware requires a 2^n ring size, with alignment equal to size.
* The DMA address returned by dma_alloc_coherent() is guaranteed to
* be a power-of-2 number of pages, which satisfies the requirement.
*/
ring->virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL);
if (ring->virt && addr % size) {
dma_free_coherent(dev, size, ring->virt, addr);
dev_err(dev, "unable to alloc 0x%zx-aligned ring buffer\n",
size);
return -EINVAL; /* Not a good error value, but distinct */
} else if (!ring->virt) {
if (!ring->virt)
return -ENOMEM;
}
ring->addr = addr;
ring->count = count;


@ -14,7 +14,7 @@ struct gsi_trans;
struct gsi_ring;
struct gsi_channel;
#define GSI_RING_ELEMENT_SIZE 16 /* bytes */
#define GSI_RING_ELEMENT_SIZE 16 /* bytes; must be a power of 2 */
/* Return the entry that follows one provided in a transaction pool */
void *gsi_trans_pool_next(struct gsi_trans_pool *pool, void *element);


@ -153,11 +153,10 @@ int gsi_trans_pool_init_dma(struct device *dev, struct gsi_trans_pool *pool,
size = __roundup_pow_of_two(size);
total_size = (count + max_alloc - 1) * size;
/* The allocator will give us a power-of-2 number of pages. But we
* can't guarantee that, so request it. That way we won't waste any
* memory that would be available beyond the required space.
*
* Note that gsi_trans_pool_exit_dma() assumes the total allocated
/* The allocator will give us a power-of-2 number of pages
* sufficient to satisfy our request. Round up our requested
* size to avoid any unused space in the allocation. This way
* gsi_trans_pool_exit_dma() can assume the total allocated
* size is exactly (count * size).
*/
total_size = get_order(total_size) << PAGE_SHIFT;


@ -154,7 +154,7 @@ static void ipa_cmd_validate_build(void)
* of entries, as IPv4 and IPv6 route tables have the same number
* of entries.
*/
#define TABLE_SIZE (TABLE_COUNT_MAX * IPA_TABLE_ENTRY_SIZE)
#define TABLE_SIZE (TABLE_COUNT_MAX * sizeof(__le64))
#define TABLE_COUNT_MAX max_t(u32, IPA_ROUTE_COUNT_MAX, IPA_FILTER_COUNT_MAX)
BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_HASH_SIZE_FMASK));
BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_NHASH_SIZE_FMASK));


@ -72,8 +72,8 @@
* that can be included in a single transaction.
*/
struct gsi_channel_data {
u16 tre_count;
u16 event_count;
u16 tre_count; /* must be a power of 2 */
u16 event_count; /* must be a power of 2 */
u8 tlv_count;
};


@ -308,12 +308,12 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
mem = &ipa->mem[IPA_MEM_V4_ROUTE];
req.v4_route_tbl_info_valid = 1;
req.v4_route_tbl_info.start = ipa->mem_offset + mem->offset;
req.v4_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE;
req.v4_route_tbl_info.end = IPA_ROUTE_MODEM_COUNT - 1;
mem = &ipa->mem[IPA_MEM_V6_ROUTE];
req.v6_route_tbl_info_valid = 1;
req.v6_route_tbl_info.start = ipa->mem_offset + mem->offset;
req.v6_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE;
req.v6_route_tbl_info.end = IPA_ROUTE_MODEM_COUNT - 1;
mem = &ipa->mem[IPA_MEM_V4_FILTER];
req.v4_filter_tbl_start_valid = 1;
@ -352,8 +352,7 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
req.v4_hash_route_tbl_info_valid = 1;
req.v4_hash_route_tbl_info.start =
ipa->mem_offset + mem->offset;
req.v4_hash_route_tbl_info.count =
mem->size / IPA_TABLE_ENTRY_SIZE;
req.v4_hash_route_tbl_info.end = IPA_ROUTE_MODEM_COUNT - 1;
}
mem = &ipa->mem[IPA_MEM_V6_ROUTE_HASHED];
@ -361,8 +360,7 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
req.v6_hash_route_tbl_info_valid = 1;
req.v6_hash_route_tbl_info.start =
ipa->mem_offset + mem->offset;
req.v6_hash_route_tbl_info.count =
mem->size / IPA_TABLE_ENTRY_SIZE;
req.v6_hash_route_tbl_info.end = IPA_ROUTE_MODEM_COUNT - 1;
}
mem = &ipa->mem[IPA_MEM_V4_FILTER_HASHED];
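
The hunks above swap a size-derived count for an index-style bound. A hedged sketch of the two encodings, using values visible elsewhere in this diff (8 modem route entries, 8-byte table slots; the 120-byte region size is assumed here purely for illustration):

/* Illustrative only: old vs. new encoding of the modem route table info.
 * Numbers assume an 8-entry modem region within a 15-entry (120-byte)
 * table of 8-byte slots.
 */
count = mem_size / sizeof(__le64);	/* old: 120 / 8 = 15 slots         */
end   = IPA_ROUTE_MODEM_COUNT - 1;	/* new: 8 - 1 = 7, a maximum index */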


@ -271,7 +271,7 @@ struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
.tlv_type = 0x12,
.offset = offsetof(struct ipa_init_modem_driver_req,
v4_route_tbl_info),
.ei_array = ipa_mem_array_ei,
.ei_array = ipa_mem_bounds_ei,
},
{
.data_type = QMI_OPT_FLAG,
@ -292,7 +292,7 @@ struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
.tlv_type = 0x13,
.offset = offsetof(struct ipa_init_modem_driver_req,
v6_route_tbl_info),
.ei_array = ipa_mem_array_ei,
.ei_array = ipa_mem_bounds_ei,
},
{
.data_type = QMI_OPT_FLAG,
@ -456,7 +456,7 @@ struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
.tlv_type = 0x1b,
.offset = offsetof(struct ipa_init_modem_driver_req,
v4_hash_route_tbl_info),
.ei_array = ipa_mem_array_ei,
.ei_array = ipa_mem_bounds_ei,
},
{
.data_type = QMI_OPT_FLAG,
@ -477,7 +477,7 @@ struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
.tlv_type = 0x1c,
.offset = offsetof(struct ipa_init_modem_driver_req,
v6_hash_route_tbl_info),
.ei_array = ipa_mem_array_ei,
.ei_array = ipa_mem_bounds_ei,
},
{
.data_type = QMI_OPT_FLAG,


@ -82,9 +82,11 @@ enum ipa_platform_type {
IPA_QMI_PLATFORM_TYPE_MSM_QNX_V01 = 5, /* QNX MSM */
};
/* This defines the start and end offset of a range of memory. Both
* fields are offsets relative to the start of IPA shared memory.
* The end value is the last addressable byte *within* the range.
/* This defines the start and end offset of a range of memory. The start
* value is a byte offset relative to the start of IPA shared memory. The
* end value is the last addressable unit *within* the range. Typically
* the end value is in units of bytes, however it can also be a maximum
* array index value.
*/
struct ipa_mem_bounds {
u32 start;
@ -125,18 +127,19 @@ struct ipa_init_modem_driver_req {
u8 hdr_tbl_info_valid;
struct ipa_mem_bounds hdr_tbl_info;
/* Routing table information. These define the location and size of
* non-hashable IPv4 and IPv6 filter tables. The start values are
* offsets relative to the start of IPA shared memory.
/* Routing table information. These define the location and maximum
* *index* (not byte) for the modem portion of non-hashable IPv4 and
* IPv6 routing tables. The start values are byte offsets relative
* to the start of IPA shared memory.
*/
u8 v4_route_tbl_info_valid;
struct ipa_mem_array v4_route_tbl_info;
struct ipa_mem_bounds v4_route_tbl_info;
u8 v6_route_tbl_info_valid;
struct ipa_mem_array v6_route_tbl_info;
struct ipa_mem_bounds v6_route_tbl_info;
/* Filter table information. These define the location of the
* non-hashable IPv4 and IPv6 filter tables. The start values are
* offsets relative to the start of IPA shared memory.
* byte offsets relative to the start of IPA shared memory.
*/
u8 v4_filter_tbl_start_valid;
u32 v4_filter_tbl_start;
@ -177,18 +180,20 @@ struct ipa_init_modem_driver_req {
u8 zip_tbl_info_valid;
struct ipa_mem_bounds zip_tbl_info;
/* Routing table information. These define the location and size
* of hashable IPv4 and IPv6 filter tables. The start values are
* offsets relative to the start of IPA shared memory.
/* Routing table information. These define the location and maximum
* *index* (not byte) for the modem portion of hashable IPv4 and IPv6
* routing tables (if supported by hardware). The start values are
* byte offsets relative to the start of IPA shared memory.
*/
u8 v4_hash_route_tbl_info_valid;
struct ipa_mem_array v4_hash_route_tbl_info;
struct ipa_mem_bounds v4_hash_route_tbl_info;
u8 v6_hash_route_tbl_info_valid;
struct ipa_mem_array v6_hash_route_tbl_info;
struct ipa_mem_bounds v6_hash_route_tbl_info;
/* Filter table information. These define the location and size
* of hashable IPv4 and IPv6 filter tables. The start values are
* offsets relative to the start of IPA shared memory.
* of hashable IPv4 and IPv6 filter tables (if supported by hardware).
* The start values are byte offsets relative to the start of IPA
* shared memory.
*/
u8 v4_hash_filter_tbl_start_valid;
u32 v4_hash_filter_tbl_start;


@ -27,28 +27,38 @@
/**
* DOC: IPA Filter and Route Tables
*
* The IPA has tables defined in its local shared memory that define filter
* and routing rules. Each entry in these tables contains a 64-bit DMA
* address that refers to DRAM (system memory) containing a rule definition.
* The IPA has tables defined in its local (IPA-resident) memory that define
* filter and routing rules. An entry in either of these tables is a little
* endian 64-bit "slot" that holds the address of a rule definition. (The
* size of these slots is 64 bits regardless of the host DMA address size.)
*
* Separate tables (both filter and route) are used for IPv4 and IPv6. There
* is normally another set of "hashed" filter and route tables, which are
* used with a hash of message metadata. Hashed operation is not supported
* by all IPA hardware (IPA v4.2 doesn't support hashed tables).
*
* Rules can be in local memory or in DRAM (system memory). The offset of
* an object (such as a route or filter table) in IPA-resident memory must
* be 128-byte aligned. An object in system memory (such as a route or filter
* rule) must be at an 8-byte aligned address. We currently only place
* route or filter rules in system memory.
*
* A rule consists of a contiguous block of 32-bit values terminated with
* 32 zero bits. A special "zero entry" rule consisting of 64 zero bits
* represents "no filtering" or "no routing," and is the reset value for
* filter or route table rules. Separate tables (both filter and route)
* used for IPv4 and IPv6. Additionally, there can be hashed filter or
* route tables, which are used when a hash of message metadata matches.
* Hashed operation is not supported by all IPA hardware.
* filter or route table rules.
*
* Each filter rule is associated with an AP or modem TX endpoint, though
* not all TX endpoints support filtering. The first 64-bit entry in a
* not all TX endpoints support filtering. The first 64-bit slot in a
* filter table is a bitmap indicating which endpoints have entries in
* the table. The low-order bit (bit 0) in this bitmap represents a
* special global filter, which applies to all traffic. This is not
* used in the current code. Bit 1, if set, indicates that there is an
* entry (i.e. a DMA address referring to a rule) for endpoint 0 in the
* table. Bit 2, if set, indicates there is an entry for endpoint 1,
* and so on. Space is set aside in IPA local memory to hold as many
* filter table entries as might be required, but typically they are not
* all used.
* entry (i.e. slot containing a system address referring to a rule) for
* endpoint 0 in the table. Bit 3, if set, indicates there is an entry
* for endpoint 2, and so on. Space is set aside in IPA local memory to
* hold as many filter table entries as might be required, but typically
* they are not all used.
*
* The AP initializes all entries in a filter table to refer to a "zero"
* entry. Once initialized the modem and AP update the entries for
@ -96,13 +106,8 @@
* ----------------------
*/
/* IPA hardware constrains filter and route tables alignment */
#define IPA_TABLE_ALIGN 128 /* Minimum table alignment */
/* Assignment of route table entries to the modem and AP */
#define IPA_ROUTE_MODEM_MIN 0
#define IPA_ROUTE_MODEM_COUNT 8
#define IPA_ROUTE_AP_MIN IPA_ROUTE_MODEM_COUNT
#define IPA_ROUTE_AP_COUNT \
(IPA_ROUTE_COUNT_MAX - IPA_ROUTE_MODEM_COUNT)
@ -118,21 +123,14 @@
/* Check things that can be validated at build time. */
static void ipa_table_validate_build(void)
{
/* IPA hardware accesses memory 128 bytes at a time. Addresses
* referred to by entries in filter and route tables must be
* aligned on 128-byte boundaries. The only rule address
* ever used is the "zero rule", and it's aligned at the base
* of a coherent DMA allocation.
/* Filter and route tables contain DMA addresses that refer
* to filter or route rules. But the size of a table entry
* is 64 bits regardless of what the size of an AP DMA address
* is. A fixed constant defines the size of an entry, and
* code in ipa_table_init() uses a pointer to __le64 to
* initialize tables.
*/
BUILD_BUG_ON(ARCH_DMA_MINALIGN % IPA_TABLE_ALIGN);
/* Filter and route tables contain DMA addresses that refer to
* filter or route rules. We use a fixed constant to represent
* the size of either type of table entry. Code in ipa_table_init()
* uses a pointer to __le64 to initialize table entries.
*/
BUILD_BUG_ON(IPA_TABLE_ENTRY_SIZE != sizeof(dma_addr_t));
BUILD_BUG_ON(sizeof(dma_addr_t) != sizeof(__le64));
BUILD_BUG_ON(sizeof(dma_addr_t) > sizeof(__le64));
/* A "zero rule" is used to represent no filtering or no routing.
* It is a 64-bit block of zeroed memory. Code in ipa_table_init()
@ -163,7 +161,7 @@ ipa_table_valid_one(struct ipa *ipa, bool route, bool ipv6, bool hashed)
else
mem = hashed ? &ipa->mem[IPA_MEM_V4_ROUTE_HASHED]
: &ipa->mem[IPA_MEM_V4_ROUTE];
size = IPA_ROUTE_COUNT_MAX * IPA_TABLE_ENTRY_SIZE;
size = IPA_ROUTE_COUNT_MAX * sizeof(__le64);
} else {
if (ipv6)
mem = hashed ? &ipa->mem[IPA_MEM_V6_FILTER_HASHED]
@ -171,7 +169,7 @@ ipa_table_valid_one(struct ipa *ipa, bool route, bool ipv6, bool hashed)
else
mem = hashed ? &ipa->mem[IPA_MEM_V4_FILTER_HASHED]
: &ipa->mem[IPA_MEM_V4_FILTER];
size = (1 + IPA_FILTER_COUNT_MAX) * IPA_TABLE_ENTRY_SIZE;
size = (1 + IPA_FILTER_COUNT_MAX) * sizeof(__le64);
}
if (!ipa_cmd_table_valid(ipa, mem, route, ipv6, hashed))
@ -270,8 +268,8 @@ static void ipa_table_reset_add(struct gsi_trans *trans, bool filter,
if (filter)
first++; /* skip over bitmap */
offset = mem->offset + first * IPA_TABLE_ENTRY_SIZE;
size = count * IPA_TABLE_ENTRY_SIZE;
offset = mem->offset + first * sizeof(__le64);
size = count * sizeof(__le64);
addr = ipa_table_addr(ipa, false, count);
ipa_cmd_dma_shared_mem_add(trans, offset, size, addr, true);
@ -455,11 +453,11 @@ static void ipa_table_init_add(struct gsi_trans *trans, bool filter,
count = 1 + hweight32(ipa->filter_map);
hash_count = hash_mem->size ? count : 0;
} else {
count = mem->size / IPA_TABLE_ENTRY_SIZE;
hash_count = hash_mem->size / IPA_TABLE_ENTRY_SIZE;
count = mem->size / sizeof(__le64);
hash_count = hash_mem->size / sizeof(__le64);
}
size = count * IPA_TABLE_ENTRY_SIZE;
hash_size = hash_count * IPA_TABLE_ENTRY_SIZE;
size = count * sizeof(__le64);
hash_size = hash_count * sizeof(__le64);
addr = ipa_table_addr(ipa, filter, count);
hash_addr = ipa_table_addr(ipa, filter, hash_count);
@ -662,7 +660,13 @@ int ipa_table_init(struct ipa *ipa)
ipa_table_validate_build();
size = IPA_ZERO_RULE_SIZE + (1 + count) * IPA_TABLE_ENTRY_SIZE;
/* The IPA hardware requires route and filter table rules to be
* aligned on a 128-byte boundary. We put the "zero rule" at the
* base of the table area allocated here. The DMA address returned
* by dma_alloc_coherent() is guaranteed to be a power-of-2 number
* of pages, which satisfies the rule alignment requirement.
*/
size = IPA_ZERO_RULE_SIZE + (1 + count) * sizeof(__le64);
virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL);
if (!virt)
return -ENOMEM;
@ -694,7 +698,7 @@ void ipa_table_exit(struct ipa *ipa)
struct device *dev = &ipa->pdev->dev;
size_t size;
size = IPA_ZERO_RULE_SIZE + (1 + count) * IPA_TABLE_ENTRY_SIZE;
size = IPA_ZERO_RULE_SIZE + (1 + count) * sizeof(__le64);
dma_free_coherent(dev, size, ipa->table_virt, ipa->table_addr);
ipa->table_addr = 0;
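
The DOC comment earlier in this file describes the filter table layout: slot 0 is an endpoint bitmap, bit 0 is the unused global filter, and bit N+1 stands for endpoint N. A minimal sketch of how that bitmap would be consulted (illustrative only; the helper name is made up for this note):

/* Illustrative only: slot 0 of a filter table holds the endpoint bitmap
 * described in the DOC comment; bit (endpoint + 1) says whether the
 * table carries a rule slot for that endpoint (bit 0 is the unused
 * "global" filter).
 */
static bool filter_table_has_endpoint(const __le64 *table, u32 endpoint)
{
	u64 bitmap = le64_to_cpu(table[0]);

	return !!(bitmap & BIT_ULL(endpoint + 1));
}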


@ -10,12 +10,12 @@
struct ipa;
/* The size of a filter or route table entry */
#define IPA_TABLE_ENTRY_SIZE sizeof(__le64) /* Holds a physical address */
/* The maximum number of filter table entries (IPv4, IPv6; hashed or not) */
#define IPA_FILTER_COUNT_MAX 14
/* The number of route table entries allotted to the modem */
#define IPA_ROUTE_MODEM_COUNT 8
/* The maximum number of route table entries (IPv4, IPv6; hashed or not) */
#define IPA_ROUTE_COUNT_MAX 15


@ -496,7 +496,6 @@ static int ipvlan_process_v6_outbound(struct sk_buff *skb)
static int ipvlan_process_outbound(struct sk_buff *skb)
{
struct ethhdr *ethh = eth_hdr(skb);
int ret = NET_XMIT_DROP;
/* The ipvlan is a pseudo-L2 device, so the packets that we receive
@ -506,6 +505,8 @@ static int ipvlan_process_outbound(struct sk_buff *skb)
if (skb_mac_header_was_set(skb)) {
/* In this mode we don't care about
* multicast and broadcast traffic */
struct ethhdr *ethh = eth_hdr(skb);
if (is_multicast_ether_addr(ethh->h_dest)) {
pr_debug_ratelimited(
"Dropped {multi|broad}cast of type=[%x]\n",
@ -590,7 +591,7 @@ static int ipvlan_xmit_mode_l3(struct sk_buff *skb, struct net_device *dev)
static int ipvlan_xmit_mode_l2(struct sk_buff *skb, struct net_device *dev)
{
const struct ipvl_dev *ipvlan = netdev_priv(dev);
struct ethhdr *eth = eth_hdr(skb);
struct ethhdr *eth = skb_eth_hdr(skb);
struct ipvl_addr *addr;
void *lyr3h;
int addr_type;
@ -620,6 +621,7 @@ static int ipvlan_xmit_mode_l2(struct sk_buff *skb, struct net_device *dev)
return dev_forward_skb(ipvlan->phy_dev, skb);
} else if (is_multicast_ether_addr(eth->h_dest)) {
skb_reset_mac_header(skb);
ipvlan_skb_crossing_ns(skb, NULL);
ipvlan_multicast_enqueue(ipvlan->port, skb, true);
return NET_XMIT_SUCCESS;


@ -332,6 +332,7 @@ int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np)
return 0;
unregister:
of_node_put(child);
mdiobus_unregister(mdio);
return rc;
}


@ -89,6 +89,9 @@
#define VEND1_GLOBAL_FW_ID_MAJOR GENMASK(15, 8)
#define VEND1_GLOBAL_FW_ID_MINOR GENMASK(7, 0)
#define VEND1_GLOBAL_GEN_STAT2 0xc831
#define VEND1_GLOBAL_GEN_STAT2_OP_IN_PROG BIT(15)
#define VEND1_GLOBAL_RSVD_STAT1 0xc885
#define VEND1_GLOBAL_RSVD_STAT1_FW_BUILD_ID GENMASK(7, 4)
#define VEND1_GLOBAL_RSVD_STAT1_PROV_ID GENMASK(3, 0)
@ -123,6 +126,12 @@
#define VEND1_GLOBAL_INT_VEND_MASK_GLOBAL2 BIT(1)
#define VEND1_GLOBAL_INT_VEND_MASK_GLOBAL3 BIT(0)
/* Sleep and timeout for checking if the Processor-Intensive
* MDIO operation is finished
*/
#define AQR107_OP_IN_PROG_SLEEP 1000
#define AQR107_OP_IN_PROG_TIMEOUT 100000
struct aqr107_hw_stat {
const char *name;
int reg;
@ -569,16 +578,52 @@ static void aqr107_link_change_notify(struct phy_device *phydev)
phydev_info(phydev, "Aquantia 1000Base-T2 mode active\n");
}
static int aqr107_wait_processor_intensive_op(struct phy_device *phydev)
{
int val, err;
/* The datasheet notes to wait at least 1ms after issuing a
* processor intensive operation before checking.
* We cannot use the 'sleep_before_read' parameter of read_poll_timeout
* because that just determines the maximum time slept, not the minimum.
*/
usleep_range(1000, 5000);
err = phy_read_mmd_poll_timeout(phydev, MDIO_MMD_VEND1,
VEND1_GLOBAL_GEN_STAT2, val,
!(val & VEND1_GLOBAL_GEN_STAT2_OP_IN_PROG),
AQR107_OP_IN_PROG_SLEEP,
AQR107_OP_IN_PROG_TIMEOUT, false);
if (err) {
phydev_err(phydev, "timeout: processor-intensive MDIO operation\n");
return err;
}
return 0;
}
static int aqr107_suspend(struct phy_device *phydev)
{
return phy_set_bits_mmd(phydev, MDIO_MMD_VEND1, MDIO_CTRL1,
MDIO_CTRL1_LPOWER);
int err;
err = phy_set_bits_mmd(phydev, MDIO_MMD_VEND1, MDIO_CTRL1,
MDIO_CTRL1_LPOWER);
if (err)
return err;
return aqr107_wait_processor_intensive_op(phydev);
}
static int aqr107_resume(struct phy_device *phydev)
{
return phy_clear_bits_mmd(phydev, MDIO_MMD_VEND1, MDIO_CTRL1,
MDIO_CTRL1_LPOWER);
int err;
err = phy_clear_bits_mmd(phydev, MDIO_MMD_VEND1, MDIO_CTRL1,
MDIO_CTRL1_LPOWER);
if (err)
return err;
return aqr107_wait_processor_intensive_op(phydev);
}
static int aqr107_probe(struct phy_device *phydev)
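
To put numbers on the new polling loop (values taken from the defines added above; the sleep/timeout parameters of phy_read_mmd_poll_timeout() are in microseconds):

/* Illustrative arithmetic only:
 *   AQR107_OP_IN_PROG_SLEEP   = 1 000 us between polls
 *   AQR107_OP_IN_PROG_TIMEOUT = 100 000 us total
 * so the wait helper re-reads VEND1_GLOBAL_GEN_STAT2 roughly 100 times
 * (~100 ms) on top of the initial 1-5 ms usleep_range() delay.
 */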


@ -1270,10 +1270,12 @@ static int team_port_add(struct team *team, struct net_device *port_dev,
}
}
netif_addr_lock_bh(dev);
dev_uc_sync_multiple(port_dev, dev);
dev_mc_sync_multiple(port_dev, dev);
netif_addr_unlock_bh(dev);
if (dev->flags & IFF_UP) {
netif_addr_lock_bh(dev);
dev_uc_sync_multiple(port_dev, dev);
dev_mc_sync_multiple(port_dev, dev);
netif_addr_unlock_bh(dev);
}
port->index = -1;
list_add_tail_rcu(&port->list, &team->port_list);
@ -1344,8 +1346,10 @@ static int team_port_del(struct team *team, struct net_device *port_dev)
netdev_rx_handler_unregister(port_dev);
team_port_disable_netpoll(port);
vlan_vids_del_by_dev(port_dev, dev);
dev_uc_unsync(port_dev, dev);
dev_mc_unsync(port_dev, dev);
if (dev->flags & IFF_UP) {
dev_uc_unsync(port_dev, dev);
dev_mc_unsync(port_dev, dev);
}
dev_close(port_dev);
team_port_leave(team, port);
@ -1695,6 +1699,14 @@ static int team_open(struct net_device *dev)
static int team_close(struct net_device *dev)
{
struct team *team = netdev_priv(dev);
struct team_port *port;
list_for_each_entry(port, &team->port_list, list) {
dev_uc_unsync(port->dev, dev);
dev_mc_unsync(port->dev, dev);
}
return 0;
}


@ -436,14 +436,13 @@ static int set_peer(struct wg_device *wg, struct nlattr **attrs)
if (attrs[WGPEER_A_ENDPOINT]) {
struct sockaddr *addr = nla_data(attrs[WGPEER_A_ENDPOINT]);
size_t len = nla_len(attrs[WGPEER_A_ENDPOINT]);
struct endpoint endpoint = { { { 0 } } };
if ((len == sizeof(struct sockaddr_in) &&
addr->sa_family == AF_INET) ||
(len == sizeof(struct sockaddr_in6) &&
addr->sa_family == AF_INET6)) {
struct endpoint endpoint = { { { 0 } } };
memcpy(&endpoint.addr, addr, len);
if (len == sizeof(struct sockaddr_in) && addr->sa_family == AF_INET) {
endpoint.addr4 = *(struct sockaddr_in *)addr;
wg_socket_set_peer_endpoint(peer, &endpoint);
} else if (len == sizeof(struct sockaddr_in6) && addr->sa_family == AF_INET6) {
endpoint.addr6 = *(struct sockaddr_in6 *)addr;
wg_socket_set_peer_endpoint(peer, &endpoint);
}
}


@ -6,29 +6,28 @@
#ifdef DEBUG
#include <linux/jiffies.h>
#include <linux/hrtimer.h>
static const struct {
bool result;
u64 nsec_to_sleep_before;
unsigned int msec_to_sleep_before;
} expected_results[] __initconst = {
[0 ... PACKETS_BURSTABLE - 1] = { true, 0 },
[PACKETS_BURSTABLE] = { false, 0 },
[PACKETS_BURSTABLE + 1] = { true, NSEC_PER_SEC / PACKETS_PER_SECOND },
[PACKETS_BURSTABLE + 1] = { true, MSEC_PER_SEC / PACKETS_PER_SECOND },
[PACKETS_BURSTABLE + 2] = { false, 0 },
[PACKETS_BURSTABLE + 3] = { true, (NSEC_PER_SEC / PACKETS_PER_SECOND) * 2 },
[PACKETS_BURSTABLE + 3] = { true, (MSEC_PER_SEC / PACKETS_PER_SECOND) * 2 },
[PACKETS_BURSTABLE + 4] = { true, 0 },
[PACKETS_BURSTABLE + 5] = { false, 0 }
};
static __init unsigned int maximum_jiffies_at_index(int index)
{
u64 total_nsecs = 2 * NSEC_PER_SEC / PACKETS_PER_SECOND / 3;
unsigned int total_msecs = 2 * MSEC_PER_SEC / PACKETS_PER_SECOND / 3;
int i;
for (i = 0; i <= index; ++i)
total_nsecs += expected_results[i].nsec_to_sleep_before;
return nsecs_to_jiffies(total_nsecs);
total_msecs += expected_results[i].msec_to_sleep_before;
return msecs_to_jiffies(total_msecs);
}
static __init int timings_test(struct sk_buff *skb4, struct iphdr *hdr4,
@ -43,12 +42,8 @@ static __init int timings_test(struct sk_buff *skb4, struct iphdr *hdr4,
loop_start_time = jiffies;
for (i = 0; i < ARRAY_SIZE(expected_results); ++i) {
if (expected_results[i].nsec_to_sleep_before) {
ktime_t timeout = ktime_add(ktime_add_ns(ktime_get_coarse_boottime(), TICK_NSEC * 4 / 3),
ns_to_ktime(expected_results[i].nsec_to_sleep_before));
set_current_state(TASK_UNINTERRUPTIBLE);
schedule_hrtimeout_range_clock(&timeout, 0, HRTIMER_MODE_ABS, CLOCK_BOOTTIME);
}
if (expected_results[i].msec_to_sleep_before)
msleep(expected_results[i].msec_to_sleep_before);
if (time_is_before_jiffies(loop_start_time +
maximum_jiffies_at_index(i)))
@ -132,7 +127,7 @@ bool __init wg_ratelimiter_selftest(void)
if (IS_ENABLED(CONFIG_KASAN) || IS_ENABLED(CONFIG_UBSAN))
return true;
BUILD_BUG_ON(NSEC_PER_SEC % PACKETS_PER_SECOND != 0);
BUILD_BUG_ON(MSEC_PER_SEC % PACKETS_PER_SECOND != 0);
if (wg_ratelimiter_init())
goto out;
@ -172,7 +167,7 @@ bool __init wg_ratelimiter_selftest(void)
++test;
#endif
for (trials = TRIALS_BEFORE_GIVING_UP;;) {
for (trials = TRIALS_BEFORE_GIVING_UP; IS_ENABLED(DEBUG_RATELIMITER_TIMINGS);) {
int test_count = 0, ret;
ret = timings_test(skb4, hdr4, skb6, hdr6, &test_count);
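
A quick worked example of why switching the selftest gaps from nanoseconds to milliseconds lets it use plain msleep() (PACKETS_PER_SECOND is not shown in this hunk; 20 is assumed here purely for illustration):

/* Illustrative only, assuming PACKETS_PER_SECOND == 20:
 *   old gap: NSEC_PER_SEC / 20 = 50 000 000 ns  (needed an hrtimer sleep)
 *   new gap: MSEC_PER_SEC / 20 = 50 ms          (plain msleep() suffices)
 * and MSEC_PER_SEC % 20 == 0, so the new BUILD_BUG_ON still holds.
 */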


@ -950,7 +950,7 @@ u32 mt7615_mac_get_sta_tid_sn(struct mt7615_dev *dev, int wcid, u8 tid)
offset %= 32;
val = mt76_rr(dev, addr);
val >>= (tid % 32);
val >>= offset;
if (offset > 20) {
addr += 4;


@ -675,12 +675,12 @@ int dasd_alias_remove_device(struct dasd_device *device)
struct dasd_device *dasd_alias_get_start_dev(struct dasd_device *base_device)
{
struct dasd_eckd_private *alias_priv, *private = base_device->private;
struct alias_pav_group *group = private->pavgroup;
struct alias_lcu *lcu = private->lcu;
struct dasd_device *alias_device;
struct alias_pav_group *group;
unsigned long flags;
if (!group || !lcu)
if (!lcu)
return NULL;
if (lcu->pav == NO_PAV ||
lcu->flags & (NEED_UAC_UPDATE | UPDATE_PENDING))
@ -697,6 +697,11 @@ struct dasd_device *dasd_alias_get_start_dev(struct dasd_device *base_device)
}
spin_lock_irqsave(&lcu->lock, flags);
group = private->pavgroup;
if (!group) {
spin_unlock_irqrestore(&lcu->lock, flags);
return NULL;
}
alias_device = group->next;
if (!alias_device) {
if (list_empty(&group->aliaslist)) {


@ -2822,23 +2822,22 @@ static int
_base_config_dma_addressing(struct MPT3SAS_ADAPTER *ioc, struct pci_dev *pdev)
{
struct sysinfo s;
int dma_mask;
if (ioc->is_mcpu_endpoint ||
sizeof(dma_addr_t) == 4 || ioc->use_32bit_dma ||
dma_get_required_mask(&pdev->dev) <= 32)
dma_mask = 32;
dma_get_required_mask(&pdev->dev) <= DMA_BIT_MASK(32))
ioc->dma_mask = 32;
/* Set 63 bit DMA mask for all SAS3 and SAS35 controllers */
else if (ioc->hba_mpi_version_belonged > MPI2_VERSION)
dma_mask = 63;
ioc->dma_mask = 63;
else
dma_mask = 64;
ioc->dma_mask = 64;
if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(dma_mask)) ||
dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(dma_mask)))
if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(ioc->dma_mask)) ||
dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(ioc->dma_mask)))
return -ENODEV;
if (dma_mask > 32) {
if (ioc->dma_mask > 32) {
ioc->base_add_sg_single = &_base_add_sg_single_64;
ioc->sge_size = sizeof(Mpi2SGESimple64_t);
} else {
@ -2848,7 +2847,7 @@ _base_config_dma_addressing(struct MPT3SAS_ADAPTER *ioc, struct pci_dev *pdev)
si_meminfo(&s);
ioc_info(ioc, "%d BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (%ld kB)\n",
dma_mask, convert_to_kb(s.totalram));
ioc->dma_mask, convert_to_kb(s.totalram));
return 0;
}
@ -4902,10 +4901,10 @@ _base_release_memory_pools(struct MPT3SAS_ADAPTER *ioc)
dma_pool_free(ioc->pcie_sgl_dma_pool,
ioc->pcie_sg_lookup[i].pcie_sgl,
ioc->pcie_sg_lookup[i].pcie_sgl_dma);
ioc->pcie_sg_lookup[i].pcie_sgl = NULL;
}
dma_pool_destroy(ioc->pcie_sgl_dma_pool);
}
if (ioc->config_page) {
dexitprintk(ioc,
ioc_info(ioc, "config_page(0x%p): free\n",
@ -4960,6 +4959,89 @@ mpt3sas_check_same_4gb_region(long reply_pool_start_address, u32 pool_sz)
return 0;
}
/**
* _base_reduce_hba_queue_depth- Retry with reduced queue depth
* @ioc: Adapter object
*
* Return: 0 for success, non-zero for failure.
**/
static inline int
_base_reduce_hba_queue_depth(struct MPT3SAS_ADAPTER *ioc)
{
int reduce_sz = 64;
if ((ioc->hba_queue_depth - reduce_sz) >
(ioc->internal_depth + INTERNAL_SCSIIO_CMDS_COUNT)) {
ioc->hba_queue_depth -= reduce_sz;
return 0;
} else
return -ENOMEM;
}
/**
* _base_allocate_pcie_sgl_pool - Allocating DMA'able memory
* for pcie sgl pools.
* @ioc: Adapter object
* @sz: DMA Pool size
* @ct: Chain tracker
* Return: 0 for success, non-zero for failure.
*/
static int
_base_allocate_pcie_sgl_pool(struct MPT3SAS_ADAPTER *ioc, u32 sz)
{
int i = 0, j = 0;
struct chain_tracker *ct;
ioc->pcie_sgl_dma_pool =
dma_pool_create("PCIe SGL pool", &ioc->pdev->dev, sz,
ioc->page_size, 0);
if (!ioc->pcie_sgl_dma_pool) {
ioc_err(ioc, "PCIe SGL pool: dma_pool_create failed\n");
return -ENOMEM;
}
ioc->chains_per_prp_buffer = sz/ioc->chain_segment_sz;
ioc->chains_per_prp_buffer =
min(ioc->chains_per_prp_buffer, ioc->chains_needed_per_io);
for (i = 0; i < ioc->scsiio_depth; i++) {
ioc->pcie_sg_lookup[i].pcie_sgl =
dma_pool_alloc(ioc->pcie_sgl_dma_pool, GFP_KERNEL,
&ioc->pcie_sg_lookup[i].pcie_sgl_dma);
if (!ioc->pcie_sg_lookup[i].pcie_sgl) {
ioc_err(ioc, "PCIe SGL pool: dma_pool_alloc failed\n");
return -EAGAIN;
}
if (!mpt3sas_check_same_4gb_region(
(long)ioc->pcie_sg_lookup[i].pcie_sgl, sz)) {
ioc_err(ioc, "PCIE SGLs are not in same 4G !! pcie sgl (0x%p) dma = (0x%llx)\n",
ioc->pcie_sg_lookup[i].pcie_sgl,
(unsigned long long)
ioc->pcie_sg_lookup[i].pcie_sgl_dma);
ioc->use_32bit_dma = true;
return -EAGAIN;
}
for (j = 0; j < ioc->chains_per_prp_buffer; j++) {
ct = &ioc->chain_lookup[i].chains_per_smid[j];
ct->chain_buffer =
ioc->pcie_sg_lookup[i].pcie_sgl +
(j * ioc->chain_segment_sz);
ct->chain_buffer_dma =
ioc->pcie_sg_lookup[i].pcie_sgl_dma +
(j * ioc->chain_segment_sz);
}
}
dinitprintk(ioc, ioc_info(ioc,
"PCIe sgl pool depth(%d), element_size(%d), pool_size(%d kB)\n",
ioc->scsiio_depth, sz, (sz * ioc->scsiio_depth)/1024));
dinitprintk(ioc, ioc_info(ioc,
"Number of chains can fit in a PRP page(%d)\n",
ioc->chains_per_prp_buffer));
return 0;
}
/**
* base_alloc_rdpq_dma_pool - Allocating DMA'able memory
* for reply queues.
@ -5058,7 +5140,7 @@ _base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc)
unsigned short sg_tablesize;
u16 sge_size;
int i, j;
int ret = 0;
int ret = 0, rc = 0;
struct chain_tracker *ct;
dinitprintk(ioc, ioc_info(ioc, "%s\n", __func__));
@ -5357,6 +5439,7 @@ _base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc)
* be required for NVMe PRP's, only each set of NVMe blocks will be
* contiguous, so a new set is allocated for each possible I/O.
*/
ioc->chains_per_prp_buffer = 0;
if (ioc->facts.ProtocolFlags & MPI2_IOCFACTS_PROTOCOL_NVME_DEVICES) {
nvme_blocks_needed =
@ -5371,43 +5454,11 @@ _base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc)
goto out;
}
sz = nvme_blocks_needed * ioc->page_size;
ioc->pcie_sgl_dma_pool =
dma_pool_create("PCIe SGL pool", &ioc->pdev->dev, sz, 16, 0);
if (!ioc->pcie_sgl_dma_pool) {
ioc_info(ioc, "PCIe SGL pool: dma_pool_create failed\n");
goto out;
}
ioc->chains_per_prp_buffer = sz/ioc->chain_segment_sz;
ioc->chains_per_prp_buffer = min(ioc->chains_per_prp_buffer,
ioc->chains_needed_per_io);
for (i = 0; i < ioc->scsiio_depth; i++) {
ioc->pcie_sg_lookup[i].pcie_sgl = dma_pool_alloc(
ioc->pcie_sgl_dma_pool, GFP_KERNEL,
&ioc->pcie_sg_lookup[i].pcie_sgl_dma);
if (!ioc->pcie_sg_lookup[i].pcie_sgl) {
ioc_info(ioc, "PCIe SGL pool: dma_pool_alloc failed\n");
goto out;
}
for (j = 0; j < ioc->chains_per_prp_buffer; j++) {
ct = &ioc->chain_lookup[i].chains_per_smid[j];
ct->chain_buffer =
ioc->pcie_sg_lookup[i].pcie_sgl +
(j * ioc->chain_segment_sz);
ct->chain_buffer_dma =
ioc->pcie_sg_lookup[i].pcie_sgl_dma +
(j * ioc->chain_segment_sz);
}
}
dinitprintk(ioc,
ioc_info(ioc, "PCIe sgl pool depth(%d), element_size(%d), pool_size(%d kB)\n",
ioc->scsiio_depth, sz,
(sz * ioc->scsiio_depth) / 1024));
dinitprintk(ioc,
ioc_info(ioc, "Number of chains can fit in a PRP page(%d)\n",
ioc->chains_per_prp_buffer));
rc = _base_allocate_pcie_sgl_pool(ioc, sz);
if (rc == -ENOMEM)
return -ENOMEM;
else if (rc == -EAGAIN)
goto try_32bit_dma;
total_sz += sz * ioc->scsiio_depth;
}
@ -5577,6 +5628,19 @@ _base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc)
ioc->shost->sg_tablesize);
return 0;
try_32bit_dma:
_base_release_memory_pools(ioc);
if (ioc->use_32bit_dma && (ioc->dma_mask > 32)) {
/* Change dma coherent mask to 32 bit and reallocate */
if (_base_config_dma_addressing(ioc, ioc->pdev) != 0) {
pr_err("Setting 32 bit coherent DMA mask Failed %s\n",
pci_name(ioc->pdev));
return -ENODEV;
}
} else if (_base_reduce_hba_queue_depth(ioc) != 0)
return -ENOMEM;
goto retry_allocation;
out:
return -ENOMEM;
}
@ -7239,6 +7303,7 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
ioc->rdpq_array_enable_assigned = 0;
ioc->use_32bit_dma = false;
ioc->dma_mask = 64;
if (ioc->is_aero_ioc)
ioc->base_readl = &_base_readl_aero;
else


@ -1257,6 +1257,7 @@ struct MPT3SAS_ADAPTER {
u16 thresh_hold;
u8 high_iops_queues;
u32 drv_support_bitmap;
u32 dma_mask;
bool enable_sdev_max_qd;
bool use_32bit_dma;


@ -295,20 +295,16 @@ static int atmel_config_rs485(struct uart_port *port,
mode = atmel_uart_readl(port, ATMEL_US_MR);
/* Resetting serial mode to RS232 (0x0) */
mode &= ~ATMEL_US_USMODE;
port->rs485 = *rs485conf;
if (rs485conf->flags & SER_RS485_ENABLED) {
dev_dbg(port->dev, "Setting UART to RS485\n");
if (port->rs485.flags & SER_RS485_RX_DURING_TX)
if (rs485conf->flags & SER_RS485_RX_DURING_TX)
atmel_port->tx_done_mask = ATMEL_US_TXRDY;
else
atmel_port->tx_done_mask = ATMEL_US_TXEMPTY;
atmel_uart_writel(port, ATMEL_US_TTGR,
rs485conf->delay_rts_after_send);
mode &= ~ATMEL_US_USMODE;
mode |= ATMEL_US_USMODE_RS485;
} else {
dev_dbg(port->dev, "Setting UART to RS232\n");


@ -520,7 +520,7 @@ static void tegra_uart_tx_dma_complete(void *args)
count = tup->tx_bytes_requested - state.residue;
async_tx_ack(tup->tx_dma_desc);
spin_lock_irqsave(&tup->uport.lock, flags);
xmit->tail = (xmit->tail + count) & (UART_XMIT_SIZE - 1);
uart_xmit_advance(&tup->uport, count);
tup->tx_in_progress = 0;
if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
uart_write_wakeup(&tup->uport);
@ -608,7 +608,6 @@ static unsigned int tegra_uart_tx_empty(struct uart_port *u)
static void tegra_uart_stop_tx(struct uart_port *u)
{
struct tegra_uart_port *tup = to_tegra_uport(u);
struct circ_buf *xmit = &tup->uport.state->xmit;
struct dma_tx_state state;
unsigned int count;
@ -619,7 +618,7 @@ static void tegra_uart_stop_tx(struct uart_port *u)
dmaengine_tx_status(tup->tx_dma_chan, tup->tx_cookie, &state);
count = tup->tx_bytes_requested - state.residue;
async_tx_ack(tup->tx_dma_desc);
xmit->tail = (xmit->tail + count) & (UART_XMIT_SIZE - 1);
uart_xmit_advance(&tup->uport, count);
tup->tx_in_progress = 0;
}


@ -101,7 +101,7 @@ static void tegra_tcu_uart_start_tx(struct uart_port *port)
break;
tegra_tcu_write(tcu, &xmit->buf[xmit->tail], count);
xmit->tail = (xmit->tail + count) & (UART_XMIT_SIZE - 1);
uart_xmit_advance(port, count);
}
uart_write_wakeup(port);
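
The tegra hunks above replace open-coded tail arithmetic with uart_xmit_advance(). A hedged sketch of what such a helper plausibly does follows; only the circular tail advance is visible in these hunks, and the TX byte accounting is an assumption of this sketch, not something the diff shows.

/* Sketch only -- not the actual serial core helper.  It mirrors the
 * open-coded pattern removed above: advance the circular buffer tail by
 * the number of characters handed to the hardware (the icount.tx
 * accounting here is an assumption).
 */
static inline void uart_xmit_advance_sketch(struct uart_port *up, unsigned int chars)
{
	struct circ_buf *xmit = &up->state->xmit;

	xmit->tail = (xmit->tail + chars) & (UART_XMIT_SIZE - 1);
	up->icount.tx += chars;
}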


@ -1531,7 +1531,8 @@ static void cdns3_transfer_completed(struct cdns3_device *priv_dev,
TRB_LEN(le32_to_cpu(trb->length));
if (priv_req->num_of_trb > 1 &&
le32_to_cpu(trb->control) & TRB_SMM)
le32_to_cpu(trb->control) & TRB_SMM &&
le32_to_cpu(trb->control) & TRB_CHAIN)
transfer_end = true;
cdns3_ep_inc_deq(priv_ep);
@ -1691,6 +1692,7 @@ static int cdns3_check_ep_interrupt_proceed(struct cdns3_endpoint *priv_ep)
ep_cfg &= ~EP_CFG_ENABLE;
writel(ep_cfg, &priv_dev->regs->ep_cfg);
priv_ep->flags &= ~EP_QUIRK_ISO_OUT_EN;
priv_ep->flags |= EP_UPDATE_EP_TRBADDR;
}
cdns3_transfer_completed(priv_dev, priv_ep);
} else if (!(priv_ep->flags & EP_STALLED) &&


@ -1044,6 +1044,7 @@ struct dwc3_scratchpad_array {
* @tx_fifo_resize_max_num: max number of fifos allocated during txfifo resize
* @hsphy_interface: "utmi" or "ulpi"
* @connected: true when we're connected to a host, false otherwise
* @softconnect: true when gadget connect is called, false when disconnect runs
* @delayed_status: true when gadget driver asks for delayed status
* @ep0_bounced: true when we used bounce buffer
* @ep0_expect_in: true when we expect a DATA IN transfer
@ -1264,6 +1265,7 @@ struct dwc3 {
const char *hsphy_interface;
unsigned connected:1;
unsigned softconnect:1;
unsigned delayed_status:1;
unsigned ep0_bounced:1;
unsigned ep0_expect_in:1;


@ -2451,18 +2451,42 @@ static void dwc3_gadget_disable_irq(struct dwc3 *dwc);
static void __dwc3_gadget_stop(struct dwc3 *dwc);
static int __dwc3_gadget_start(struct dwc3 *dwc);
static int dwc3_gadget_soft_disconnect(struct dwc3 *dwc)
{
unsigned long flags;
spin_lock_irqsave(&dwc->lock, flags);
dwc->connected = false;
/*
* In the Synopsys DesignWare Cores USB3 Databook Rev. 3.30a
* Section 4.1.8 Table 4-7, it states that for a device-initiated
* disconnect, the SW needs to ensure that it sends "a DEPENDXFER
* command for any active transfers" before clearing the RunStop
* bit.
*/
dwc3_stop_active_transfers(dwc);
__dwc3_gadget_stop(dwc);
spin_unlock_irqrestore(&dwc->lock, flags);
/*
* Note: if the GEVNTCOUNT indicates events in the event buffer, the
* driver needs to acknowledge them before the controller can halt.
* Simply let the interrupt handler acknowledge and handle the
* remaining events generated by the controller while polling for
* DSTS.DEVCTLHLT.
*/
return dwc3_gadget_run_stop(dwc, false, false);
}
static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
{
struct dwc3 *dwc = gadget_to_dwc(g);
struct dwc3_vendor *vdwc = container_of(dwc, struct dwc3_vendor, dwc);
unsigned long flags;
int ret;
is_on = !!is_on;
if (dwc->pullups_connected == is_on)
return 0;
vdwc->softconnect = is_on;
/*
@ -2500,42 +2524,13 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
return 0;
}
/*
* Synchronize and disable any further event handling while controller
* is being enabled/disabled.
*/
disable_irq(dwc->irq_gadget);
spin_lock_irqsave(&dwc->lock, flags);
if (dwc->pullups_connected == is_on) {
pm_runtime_put(dwc->dev);
return 0;
}
if (!is_on) {
u32 count;
dwc->connected = false;
/*
* In the Synopsis DesignWare Cores USB3 Databook Rev. 3.30a
* Section 4.1.8 Table 4-7, it states that for a device-initiated
* disconnect, the SW needs to ensure that it sends "a DEPENDXFER
* command for any active transfers" before clearing the RunStop
* bit.
*/
dwc3_stop_active_transfers(dwc);
__dwc3_gadget_stop(dwc);
/*
* In the Synopsis DesignWare Cores USB3 Databook Rev. 3.30a
* Section 1.3.4, it mentions that for the DEVCTRLHLT bit, the
* "software needs to acknowledge the events that are generated
* (by writing to GEVNTCOUNTn) while it is waiting for this bit
* to be set to '1'."
*/
count = dwc3_readl(dwc->regs, DWC3_GEVNTCOUNT(0));
count &= DWC3_GEVNTCOUNT_MASK;
if (count > 0) {
dwc3_writel(dwc->regs, DWC3_GEVNTCOUNT(0), count);
dwc->ev_buf->lpos = (dwc->ev_buf->lpos + count) %
dwc->ev_buf->length;
}
ret = dwc3_gadget_soft_disconnect(dwc);
} else {
/*
* In the Synopsys DWC_usb31 1.90a programming guide section
@ -2543,18 +2538,13 @@ static int dwc3_gadget_pullup(struct usb_gadget *g, int is_on)
* device-initiated disconnect requires a core soft reset
* (DCTL.CSftRst) before enabling the run/stop bit.
*/
spin_unlock_irqrestore(&dwc->lock, flags);
dwc3_core_soft_reset(dwc);
spin_lock_irqsave(&dwc->lock, flags);
dwc3_event_buffers_setup(dwc);
__dwc3_gadget_start(dwc);
ret = dwc3_gadget_run_stop(dwc, true, false);
}
ret = dwc3_gadget_run_stop(dwc, is_on, false);
spin_unlock_irqrestore(&dwc->lock, flags);
enable_irq(dwc->irq_gadget);
pm_runtime_put(dwc->dev);
return ret;


@ -449,16 +449,17 @@ static int check_fs_bus_bw(struct mu3h_sch_ep_info *sch_ep, int offset)
u32 num_esit, tmp;
int base;
int i, j;
u8 uframes = DIV_ROUND_UP(sch_ep->maxpkt, FS_PAYLOAD_MAX);
num_esit = XHCI_MTK_MAX_ESIT / sch_ep->esit;
if (sch_ep->ep_type == INT_IN_EP || sch_ep->ep_type == ISOC_IN_EP)
offset++;
for (i = 0; i < num_esit; i++) {
base = offset + i * sch_ep->esit;
/*
* Compared with hs bus, no matter what ep type,
* the hub will always delay one uframe to send data
*/
for (j = 0; j < sch_ep->cs_count; j++) {
for (j = 0; j < uframes; j++) {
tmp = tt->fs_bus_bw[base + j] + sch_ep->bw_cost_per_microframe;
if (tmp > FS_PAYLOAD_MAX)
return -ESCH_BW_OVERFLOW;
@ -470,11 +471,9 @@ static int check_fs_bus_bw(struct mu3h_sch_ep_info *sch_ep, int offset)
static int check_sch_tt(struct mu3h_sch_ep_info *sch_ep, u32 offset)
{
struct mu3h_sch_tt *tt = sch_ep->sch_tt;
u32 extra_cs_count;
u32 start_ss, last_ss;
u32 start_cs, last_cs;
int i;
start_ss = offset % 8;
@ -488,10 +487,6 @@ static int check_sch_tt(struct mu3h_sch_ep_info *sch_ep, u32 offset)
if (!(start_ss == 7 || last_ss < 6))
return -ESCH_SS_Y6;
for (i = 0; i < sch_ep->cs_count; i++)
if (test_bit(offset + i, tt->ss_bit_map))
return -ESCH_SS_OVERLAP;
} else {
u32 cs_count = DIV_ROUND_UP(sch_ep->maxpkt, FS_PAYLOAD_MAX);
@ -518,9 +513,6 @@ static int check_sch_tt(struct mu3h_sch_ep_info *sch_ep, u32 offset)
if (cs_count > 7)
cs_count = 7; /* HW limit */
if (test_bit(offset, tt->ss_bit_map))
return -ESCH_SS_OVERLAP;
sch_ep->cs_count = cs_count;
/* one for ss, the other for idle */
sch_ep->num_budget_microframes = cs_count + 2;
@ -541,28 +533,24 @@ static void update_sch_tt(struct mu3h_sch_ep_info *sch_ep, bool used)
struct mu3h_sch_tt *tt = sch_ep->sch_tt;
u32 base, num_esit;
int bw_updated;
int bits;
int i, j;
int offset = sch_ep->offset;
u8 uframes = DIV_ROUND_UP(sch_ep->maxpkt, FS_PAYLOAD_MAX);
num_esit = XHCI_MTK_MAX_ESIT / sch_ep->esit;
bits = (sch_ep->ep_type == ISOC_OUT_EP) ? sch_ep->cs_count : 1;
if (used)
bw_updated = sch_ep->bw_cost_per_microframe;
else
bw_updated = -sch_ep->bw_cost_per_microframe;
if (sch_ep->ep_type == INT_IN_EP || sch_ep->ep_type == ISOC_IN_EP)
offset++;
for (i = 0; i < num_esit; i++) {
base = sch_ep->offset + i * sch_ep->esit;
base = offset + i * sch_ep->esit;
for (j = 0; j < bits; j++) {
if (used)
set_bit(base + j, tt->ss_bit_map);
else
clear_bit(base + j, tt->ss_bit_map);
}
for (j = 0; j < sch_ep->cs_count; j++)
for (j = 0; j < uframes; j++)
tt->fs_bus_bw[base + j] += bw_updated;
}


@ -20,12 +20,10 @@
#define XHCI_MTK_MAX_ESIT 64
/**
* @ss_bit_map: used to avoid start split microframes overlay
* @fs_bus_bw: array to keep track of bandwidth already used for FS
* @ep_list: Endpoints using this TT
*/
struct mu3h_sch_tt {
DECLARE_BITMAP(ss_bit_map, XHCI_MTK_MAX_ESIT);
u32 fs_bus_bw[XHCI_MTK_MAX_ESIT];
struct list_head ep_list;
};


@ -256,6 +256,7 @@ static void option_instat_callback(struct urb *urb);
#define QUECTEL_PRODUCT_EM060K 0x030b
#define QUECTEL_PRODUCT_EM12 0x0512
#define QUECTEL_PRODUCT_RM500Q 0x0800
#define QUECTEL_PRODUCT_RM520N 0x0801
#define QUECTEL_PRODUCT_EC200S_CN 0x6002
#define QUECTEL_PRODUCT_EC200T 0x6026
#define QUECTEL_PRODUCT_RM500K 0x7001
@ -1138,6 +1139,8 @@ static const struct usb_device_id option_ids[] = {
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95, 0xff, 0xff, 0xff),
.driver_info = NUMEP2 },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95, 0xff, 0, 0) },
{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, 0x0203, 0xff), /* BG95-M3 */
.driver_info = ZLP },
{ USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_BG96),
.driver_info = RSVD(4) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0xff, 0xff),
@ -1159,6 +1162,9 @@ static const struct usb_device_id option_ids[] = {
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0, 0) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x10),
.driver_info = ZLP },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0xff, 0x30) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0x40) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500K, 0xff, 0x00, 0x00) },


@ -83,8 +83,6 @@ enum {
/*
* Input Output Manager (IOM) PORT STATUS
*/
#define IOM_PORT_STATUS_OFFSET 0x560
#define IOM_PORT_STATUS_ACTIVITY_TYPE_MASK GENMASK(9, 6)
#define IOM_PORT_STATUS_ACTIVITY_TYPE_SHIFT 6
#define IOM_PORT_STATUS_ACTIVITY_TYPE_USB 0x03
@ -144,6 +142,7 @@ struct pmc_usb {
struct pmc_usb_port *port;
struct acpi_device *iom_adev;
void __iomem *iom_base;
u32 iom_port_status_offset;
};
static void update_port_status(struct pmc_usb_port *port)
@ -153,7 +152,8 @@ static void update_port_status(struct pmc_usb_port *port)
/* SoC expects the USB Type-C port numbers to start with 0 */
port_num = port->usb3_port - 1;
port->iom_status = readl(port->pmc->iom_base + IOM_PORT_STATUS_OFFSET +
port->iom_status = readl(port->pmc->iom_base +
port->pmc->iom_port_status_offset +
port_num * sizeof(u32));
}
@ -554,19 +554,42 @@ static int pmc_usb_register_port(struct pmc_usb *pmc, int index,
static int is_memory(struct acpi_resource *res, void *data)
{
struct resource r;
struct resource_win win = {};
struct resource *r = &win.res;
return !acpi_dev_resource_memory(res, &r);
return !(acpi_dev_resource_memory(res, r) ||
acpi_dev_resource_address_space(res, &win));
}
/* IOM ACPI IDs and IOM_PORT_STATUS_OFFSET */
static const struct acpi_device_id iom_acpi_ids[] = {
/* TigerLake */
{ "INTC1072", 0x560, },
/* AlderLake */
{ "INTC1079", 0x160, },
/* Meteor Lake */
{ "INTC107A", 0x160, },
{}
};
static int pmc_usb_probe_iom(struct pmc_usb *pmc)
{
struct list_head resource_list;
struct resource_entry *rentry;
struct acpi_device *adev;
static const struct acpi_device_id *dev_id;
struct acpi_device *adev = NULL;
int ret;
adev = acpi_dev_get_first_match_dev("INTC1072", NULL, -1);
for (dev_id = &iom_acpi_ids[0]; dev_id->id[0]; dev_id++) {
if (acpi_dev_present(dev_id->id, NULL, -1)) {
pmc->iom_port_status_offset = (u32)dev_id->driver_data;
adev = acpi_dev_get_first_match_dev(dev_id->id, NULL, -1);
break;
}
}
if (!adev)
return -ENODEV;
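
Combining the per-SoC ACPI table with update_port_status() above, the status word for a given Type-C port is read at a fixed stride from the IOM base. A worked example using the AlderLake entry from the table and the third USB3 port (port_num = 2):

/* Illustrative arithmetic only:
 *   readl(iom_base + iom_port_status_offset + port_num * sizeof(u32))
 *   = readl(iom_base + 0x160 + 2 * 4)
 *   = readl(iom_base + 0x168)
 */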


@ -98,6 +98,12 @@ struct vfio_dma {
unsigned long *bitmap;
};
struct vfio_batch {
struct page **pages; /* for pin_user_pages_remote */
struct page *fallback_page; /* if pages alloc fails */
int capacity; /* length of pages array */
};
struct vfio_group {
struct iommu_group *iommu_group;
struct list_head next;
@ -428,6 +434,31 @@ static int put_pfn(unsigned long pfn, int prot)
return 0;
}
#define VFIO_BATCH_MAX_CAPACITY (PAGE_SIZE / sizeof(struct page *))
static void vfio_batch_init(struct vfio_batch *batch)
{
if (unlikely(disable_hugepages))
goto fallback;
batch->pages = (struct page **) __get_free_page(GFP_KERNEL);
if (!batch->pages)
goto fallback;
batch->capacity = VFIO_BATCH_MAX_CAPACITY;
return;
fallback:
batch->pages = &batch->fallback_page;
batch->capacity = 1;
}
static void vfio_batch_fini(struct vfio_batch *batch)
{
if (batch->capacity == VFIO_BATCH_MAX_CAPACITY)
free_page((unsigned long)batch->pages);
}
static int follow_fault_pfn(struct vm_area_struct *vma, struct mm_struct *mm,
unsigned long vaddr, unsigned long *pfn,
bool write_fault)
@ -464,10 +495,14 @@ static int follow_fault_pfn(struct vm_area_struct *vma, struct mm_struct *mm,
return ret;
}
static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
int prot, unsigned long *pfn)
/*
* Returns the positive number of pfns successfully obtained or a negative
* error code.
*/
static int vaddr_get_pfns(struct mm_struct *mm, unsigned long vaddr,
long npages, int prot, unsigned long *pfn,
struct page **pages)
{
struct page *page[1];
struct vm_area_struct *vma;
unsigned int flags = 0;
int ret;
@ -476,11 +511,22 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
flags |= FOLL_WRITE;
mmap_read_lock(mm);
ret = pin_user_pages_remote(mm, vaddr, 1, flags | FOLL_LONGTERM,
page, NULL, NULL);
if (ret == 1) {
*pfn = page_to_pfn(page[0]);
ret = 0;
ret = pin_user_pages_remote(mm, vaddr, npages, flags | FOLL_LONGTERM,
pages, NULL, NULL);
if (ret > 0) {
int i;
/*
* The zero page is always resident, we don't need to pin it
* and it falls into our invalid/reserved test so we don't
* unpin in put_pfn(). Unpin all zero pages in the batch here.
*/
for (i = 0 ; i < ret; i++) {
if (unlikely(is_zero_pfn(page_to_pfn(pages[i]))))
unpin_user_page(pages[i]);
}
*pfn = page_to_pfn(pages[0]);
goto done;
}
@ -494,8 +540,12 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
if (ret == -EAGAIN)
goto retry;
if (!ret && !is_invalid_reserved_pfn(*pfn))
ret = -EFAULT;
if (!ret) {
if (is_invalid_reserved_pfn(*pfn))
ret = 1;
else
ret = -EFAULT;
}
}
done:
mmap_read_unlock(mm);
@ -509,7 +559,7 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
*/
static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
long npage, unsigned long *pfn_base,
unsigned long limit)
unsigned long limit, struct vfio_batch *batch)
{
unsigned long pfn = 0;
long ret, pinned = 0, lock_acct = 0;
@ -520,8 +570,9 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
if (!current->mm)
return -ENODEV;
ret = vaddr_get_pfn(current->mm, vaddr, dma->prot, pfn_base);
if (ret)
ret = vaddr_get_pfns(current->mm, vaddr, 1, dma->prot, pfn_base,
batch->pages);
if (ret < 0)
return ret;
pinned++;
@ -547,8 +598,9 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
/* Lock all the consecutive pages from pfn_base */
for (vaddr += PAGE_SIZE, iova += PAGE_SIZE; pinned < npage;
pinned++, vaddr += PAGE_SIZE, iova += PAGE_SIZE) {
ret = vaddr_get_pfn(current->mm, vaddr, dma->prot, &pfn);
if (ret)
ret = vaddr_get_pfns(current->mm, vaddr, 1, dma->prot, &pfn,
batch->pages);
if (ret < 0)
break;
if (pfn != *pfn_base + pinned ||
@ -574,7 +626,7 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
ret = vfio_lock_acct(dma, lock_acct, false);
unpin_out:
if (ret) {
if (ret < 0) {
if (!rsvd) {
for (pfn = *pfn_base ; pinned ; pfn++, pinned--)
put_pfn(pfn, dma->prot);
@ -610,6 +662,7 @@ static long vfio_unpin_pages_remote(struct vfio_dma *dma, dma_addr_t iova,
static int vfio_pin_page_external(struct vfio_dma *dma, unsigned long vaddr,
unsigned long *pfn_base, bool do_accounting)
{
struct page *pages[1];
struct mm_struct *mm;
int ret;
@ -617,8 +670,13 @@ static int vfio_pin_page_external(struct vfio_dma *dma, unsigned long vaddr,
if (!mm)
return -ENODEV;
ret = vaddr_get_pfn(mm, vaddr, dma->prot, pfn_base);
if (!ret && do_accounting && !is_invalid_reserved_pfn(*pfn_base)) {
ret = vaddr_get_pfns(mm, vaddr, 1, dma->prot, pfn_base, pages);
if (ret != 1)
goto out;
ret = 0;
if (do_accounting && !is_invalid_reserved_pfn(*pfn_base)) {
ret = vfio_lock_acct(dma, 1, true);
if (ret) {
put_pfn(*pfn_base, dma->prot);
@ -630,6 +688,7 @@ static int vfio_pin_page_external(struct vfio_dma *dma, unsigned long vaddr,
}
}
out:
mmput(mm);
return ret;
}
@ -1263,15 +1322,19 @@ static int vfio_pin_map_dma(struct vfio_iommu *iommu, struct vfio_dma *dma,
{
dma_addr_t iova = dma->iova;
unsigned long vaddr = dma->vaddr;
struct vfio_batch batch;
size_t size = map_size;
long npage;
unsigned long pfn, limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
int ret = 0;
vfio_batch_init(&batch);
while (size) {
/* Pin a contiguous chunk of memory */
npage = vfio_pin_pages_remote(dma, vaddr + dma->size,
size >> PAGE_SHIFT, &pfn, limit);
size >> PAGE_SHIFT, &pfn, limit,
&batch);
if (npage <= 0) {
WARN_ON(!npage);
ret = (int)npage;
@ -1291,6 +1354,7 @@ static int vfio_pin_map_dma(struct vfio_iommu *iommu, struct vfio_dma *dma,
dma->size += npage << PAGE_SHIFT;
}
vfio_batch_fini(&batch);
dma->iommu_mapped = true;
if (ret)
@ -1449,6 +1513,7 @@ static int vfio_bus_type(struct device *dev, void *data)
static int vfio_iommu_replay(struct vfio_iommu *iommu,
struct vfio_domain *domain)
{
struct vfio_batch batch;
struct vfio_domain *d = NULL;
struct rb_node *n;
unsigned long limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
@ -1459,6 +1524,8 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
d = list_first_entry(&iommu->domain_list,
struct vfio_domain, next);
vfio_batch_init(&batch);
n = rb_first(&iommu->dma_list);
for (; n; n = rb_next(n)) {
@ -1506,7 +1573,8 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
npage = vfio_pin_pages_remote(dma, vaddr,
n >> PAGE_SHIFT,
&pfn, limit);
&pfn, limit,
&batch);
if (npage <= 0) {
WARN_ON(!npage);
ret = (int)npage;
@ -1539,6 +1607,7 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
dma->iommu_mapped = true;
}
vfio_batch_fini(&batch);
return 0;
unwind:
@ -1579,6 +1648,7 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
}
}
vfio_batch_fini(&batch);
return ret;
}
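
For a sense of scale of the new batching structure (assuming 4 KiB pages and 8-byte pointers, neither of which this hunk guarantees):

/* Illustrative arithmetic only:
 *   VFIO_BATCH_MAX_CAPACITY = PAGE_SIZE / sizeof(struct page *)
 *                           = 4096 / 8 = 512 entries
 * so one spare page of pointers lets a single batch describe up to
 * 512 * 4 KiB = 2 MiB of pinned memory before falling back to the
 * single-entry fallback_page.
 */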


@ -230,6 +230,8 @@ extern unsigned int setup_special_user_owner_ACE(struct cifs_ace *pace);
extern void dequeue_mid(struct mid_q_entry *mid, bool malformed);
extern int cifs_read_from_socket(struct TCP_Server_Info *server, char *buf,
unsigned int to_read);
extern ssize_t cifs_discard_from_socket(struct TCP_Server_Info *server,
size_t to_read);
extern int cifs_read_page_from_socket(struct TCP_Server_Info *server,
struct page *page,
unsigned int page_offset,


@ -1451,9 +1451,9 @@ cifs_discard_remaining_data(struct TCP_Server_Info *server)
while (remaining > 0) {
int length;
length = cifs_read_from_socket(server, server->bigbuf,
min_t(unsigned int, remaining,
CIFSMaxBufSize + MAX_HEADER_SIZE(server)));
length = cifs_discard_from_socket(server,
min_t(size_t, remaining,
CIFSMaxBufSize + MAX_HEADER_SIZE(server)));
if (length < 0)
return length;
server->total_read += length;


@ -695,9 +695,6 @@ cifs_readv_from_socket(struct TCP_Server_Info *server, struct msghdr *smb_msg)
int length = 0;
int total_read;
smb_msg->msg_control = NULL;
smb_msg->msg_controllen = 0;
for (total_read = 0; msg_data_left(smb_msg); total_read += length) {
try_to_freeze();
@ -748,18 +745,33 @@ int
cifs_read_from_socket(struct TCP_Server_Info *server, char *buf,
unsigned int to_read)
{
struct msghdr smb_msg;
struct msghdr smb_msg = {};
struct kvec iov = {.iov_base = buf, .iov_len = to_read};
iov_iter_kvec(&smb_msg.msg_iter, READ, &iov, 1, to_read);
return cifs_readv_from_socket(server, &smb_msg);
}
ssize_t
cifs_discard_from_socket(struct TCP_Server_Info *server, size_t to_read)
{
struct msghdr smb_msg = {};
/*
* iov_iter_discard already sets smb_msg.type and count and iov_offset
* and cifs_readv_from_socket sets msg_control and msg_controllen
* so little to initialize in struct msghdr
*/
iov_iter_discard(&smb_msg.msg_iter, READ, to_read);
return cifs_readv_from_socket(server, &smb_msg);
}
int
cifs_read_page_from_socket(struct TCP_Server_Info *server, struct page *page,
unsigned int page_offset, unsigned int to_read)
{
struct msghdr smb_msg;
struct msghdr smb_msg = {};
struct bio_vec bv = {
.bv_page = page, .bv_len = to_read, .bv_offset = page_offset};
iov_iter_bvec(&smb_msg.msg_iter, READ, &bv, 1, to_read);


@ -209,10 +209,6 @@ smb_send_kvec(struct TCP_Server_Info *server, struct msghdr *smb_msg,
*sent = 0;
smb_msg->msg_name = NULL;
smb_msg->msg_namelen = 0;
smb_msg->msg_control = NULL;
smb_msg->msg_controllen = 0;
if (server->noblocksnd)
smb_msg->msg_flags = MSG_DONTWAIT + MSG_NOSIGNAL;
else
@ -324,7 +320,7 @@ __smb_send_rqst(struct TCP_Server_Info *server, int num_rqst,
sigset_t mask, oldmask;
size_t total_len = 0, sent, size;
struct socket *ssocket = server->ssocket;
struct msghdr smb_msg;
struct msghdr smb_msg = {};
__be32 rfc1002_marker;
if (cifs_rdma_enabled(server)) {


@ -459,6 +459,10 @@ static int __ext4_ext_check(const char *function, unsigned int line,
error_msg = "invalid eh_entries";
goto corrupted;
}
if (unlikely((eh->eh_entries == 0) && (depth > 0))) {
error_msg = "eh_entries is 0 but eh_depth is > 0";
goto corrupted;
}
if (!ext4_valid_extent_entries(inode, eh, lblk, &pblk, depth)) {
error_msg = "invalid extent entries";
goto corrupted;


@ -511,7 +511,7 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent,
goto fallback;
}
max_dirs = ndirs / ngroups + inodes_per_group / 16;
max_dirs = ndirs / ngroups + inodes_per_group*flex_size / 16;
min_inodes = avefreei - inodes_per_group*flex_size / 4;
if (min_inodes < 1)
min_inodes = 1;


@ -4959,6 +4959,7 @@ ext4_fsblk_t ext4_mb_new_blocks(handle_t *handle,
ext4_fsblk_t block = 0;
unsigned int inquota = 0;
unsigned int reserv_clstrs = 0;
int retries = 0;
u64 seq;
might_sleep();
@ -5061,7 +5062,8 @@ ext4_fsblk_t ext4_mb_new_blocks(handle_t *handle,
ar->len = ac->ac_b_ex.fe_len;
}
} else {
if (ext4_mb_discard_preallocations_should_retry(sb, ac, &seq))
if (++retries < 3 &&
ext4_mb_discard_preallocations_should_retry(sb, ac, &seq))
goto repeat;
/*
* If block allocation fails then the pa allocated above


@ -358,19 +358,36 @@ xfs_dinode_verify_fork(
int whichfork)
{
uint32_t di_nextents = XFS_DFORK_NEXTENTS(dip, whichfork);
mode_t mode = be16_to_cpu(dip->di_mode);
uint32_t fork_size = XFS_DFORK_SIZE(dip, mp, whichfork);
uint32_t fork_format = XFS_DFORK_FORMAT(dip, whichfork);
switch (XFS_DFORK_FORMAT(dip, whichfork)) {
case XFS_DINODE_FMT_LOCAL:
/*
* no local regular files yet
*/
if (whichfork == XFS_DATA_FORK) {
if (S_ISREG(be16_to_cpu(dip->di_mode)))
return __this_address;
if (be64_to_cpu(dip->di_size) >
XFS_DFORK_SIZE(dip, mp, whichfork))
/*
* For fork types that can contain local data, check that the fork
* format matches the size of local data contained within the fork.
*
* For all types, check that when the size says the fork should be in extent
* or btree format, the inode isn't claiming it is in local format.
*/
if (whichfork == XFS_DATA_FORK) {
if (S_ISDIR(mode) || S_ISLNK(mode)) {
if (be64_to_cpu(dip->di_size) <= fork_size &&
fork_format != XFS_DINODE_FMT_LOCAL)
return __this_address;
}
if (be64_to_cpu(dip->di_size) > fork_size &&
fork_format == XFS_DINODE_FMT_LOCAL)
return __this_address;
}
switch (fork_format) {
case XFS_DINODE_FMT_LOCAL:
/*
* No local regular files yet.
*/
if (S_ISREG(mode) && whichfork == XFS_DATA_FORK)
return __this_address;
if (di_nextents)
return __this_address;
break;


@ -802,6 +802,7 @@ xfs_ialloc(
xfs_buf_t **ialloc_context,
xfs_inode_t **ipp)
{
struct inode *dir = pip ? VFS_I(pip) : NULL;
struct xfs_mount *mp = tp->t_mountp;
xfs_ino_t ino;
xfs_inode_t *ip;
@ -847,18 +848,17 @@ xfs_ialloc(
return error;
ASSERT(ip != NULL);
inode = VFS_I(ip);
inode->i_mode = mode;
set_nlink(inode, nlink);
inode->i_uid = current_fsuid();
inode->i_rdev = rdev;
ip->i_d.di_projid = prid;
if (pip && XFS_INHERIT_GID(pip)) {
inode->i_gid = VFS_I(pip)->i_gid;
if ((VFS_I(pip)->i_mode & S_ISGID) && S_ISDIR(mode))
inode->i_mode |= S_ISGID;
if (dir && !(dir->i_mode & S_ISGID) &&
(mp->m_flags & XFS_MOUNT_GRPID)) {
inode->i_uid = current_fsuid();
inode->i_gid = dir->i_gid;
inode->i_mode = mode;
} else {
inode->i_gid = current_fsgid();
inode_init_owner(inode, dir, mode);
}
/*
@ -2669,14 +2669,13 @@ xfs_ifree_cluster(
}
/*
* This is called to return an inode to the inode free list.
* The inode should already be truncated to 0 length and have
* no pages associated with it. This routine also assumes that
* the inode is already a part of the transaction.
* This is called to return an inode to the inode free list. The inode should
* already be truncated to 0 length and have no pages associated with it. This
* routine also assumes that the inode is already a part of the transaction.
*
* The on-disk copy of the inode will have been added to the list
* of unlinked inodes in the AGI. We need to remove the inode from
* that list atomically with respect to freeing it here.
* The on-disk copy of the inode will have been added to the list of unlinked
* inodes in the AGI. We need to remove the inode from that list atomically with
* respect to freeing it here.
*/
int
xfs_ifree(
@ -2694,13 +2693,16 @@ xfs_ifree(
ASSERT(ip->i_d.di_nblocks == 0);
/*
* Pull the on-disk inode from the AGI unlinked list.
* Free the inode first so that we guarantee that the AGI lock is going
* to be taken before we remove the inode from the unlinked list. This
* makes the AGI lock -> unlinked list modification order the same as
* used in O_TMPFILE creation.
*/
error = xfs_iunlink_remove(tp, ip);
error = xfs_difree(tp, ip->i_ino, &xic);
if (error)
return error;
error = xfs_difree(tp, ip->i_ino, &xic);
error = xfs_iunlink_remove(tp, ip);
if (error)
return error;


@ -178,6 +178,15 @@ static inline struct net_device *ip_dev_find(struct net *net, __be32 addr)
int inet_addr_onlink(struct in_device *in_dev, __be32 a, __be32 b);
int devinet_ioctl(struct net *net, unsigned int cmd, struct ifreq *);
#ifdef CONFIG_INET
int inet_gifconf(struct net_device *dev, char __user *buf, int len, int size);
#else
static inline int inet_gifconf(struct net_device *dev, char __user *buf,
int len, int size)
{
return 0;
}
#endif
void devinet_init(void);
struct in_device *inetdev_by_index(struct net *, int);
__be32 inet_select_addr(const struct net_device *dev, __be32 dst, int scope);

View File

@ -1489,6 +1489,8 @@ static inline long kvm_arch_vcpu_async_ioctl(struct file *filp,
void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
unsigned long start, unsigned long end);
void kvm_arch_guest_memory_reclaimed(struct kvm *kvm);
#ifdef CONFIG_HAVE_KVM_VCPU_RUN_PID_CHANGE
int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu);
#else
