This is the 5.4.225 stable release
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmOA8KAACgkQONu9yGCS
aT4gDQ//bzrHgBr7HQxbW1uI6g8SyjAyhLLP41kCv7uEdq/kzCm3moAo60VA59tR
SsCj74NaQrZwcdRrfW+hTeayX+VOBlDFMHaeetwetPGw8ON3KGDeu0OVSJQZExVM
sBXA6oT95R3Gw3tOFO/rPJj+X+GLgY9IRODeOdedeNPwEu0X0GOGm1gLKv857mWw
WD13Zn85RqoV7XzEVL1jN1DEN31VbqIwr/b0hf02c1kAn5oErsHRZTx9yg32Wjq6
TPcrIN/SImysHTui5HMJvRHPOkacY3Iw1UmXThnrrskMX5tljhi++3qcsTokekbv
qgARIRT/zC7CJHcLud7Q9+iG1IHYWnrraOhNZehAaK713hrmyBzFx8xJOkjE+041
BcY3BASrB39+Nx5cPMe66ArCBzRPS2ALbpJGu49Az4/Oh9+QFsrx68O3hjvBK/ev
zefqhPXjGyOiiW/WHydpDavGy93g6JT9100XAvbF3lb4AMPH0BDhy9MfNuqlynuW
5acfRZTKVlcrXTSe+zQBQfIFXYCh1euLyMDzTvQUpIvArSl3Tn6UMJ7MflVITlGQ
vLLhkYSyo0WN6/PruU8aUNh0dDBgh323K5bAjen3OinbdQND3abDXMMNLI6pCWx7
jgoM//tDSWfiNHdqNcpCYRIAP5NjjInx0+k/F7KWM9/Y3Xhr3T4=
=2ZWO
-----END PGP SIGNATURE-----

Merge 5.4.225 into android11-5.4-lts

Changes in 5.4.225
xfs: preserve rmapbt swapext block reservation from freed blocks
xfs: rename xfs_bmap_is_real_extent to is_written_extent
xfs: redesign the reflink remap loop to fix blkres depletion crash
xfs: use MMAPLOCK around filemap_map_pages()
xfs: preserve inode versioning across remounts
xfs: drain the buf delwri queue before xfsaild idles
phy: stm32: fix an error code in probe
wifi: cfg80211: silence a sparse RCU warning
wifi: cfg80211: fix memory leak in query_regdb_file()
bpf, sockmap: Fix the sk->sk_forward_alloc warning of sk_stream_kill_queues
HID: hyperv: fix possible memory leak in mousevsc_probe()
net: gso: fix panic on frag_list with mixed head alloc types
net: tun: Fix memory leaks of napi_get_frags
bnxt_en: Fix possible crash in bnxt_hwrm_set_coal()
bnxt_en: fix potentially incorrect return value for ndo_rx_flow_steer
net: fman: Unregister ethernet device on removal
capabilities: fix undefined behavior in bit shift for CAP_TO_MASK
net: lapbether: fix issue of dev reference count leakage in lapbeth_device_event()
hamradio: fix issue of dev reference count leakage in bpq_device_event()
drm/vc4: Fix missing platform_unregister_drivers() call in vc4_drm_register()
ipv6: addrlabel: fix infoleak when sending struct ifaddrlblmsg to network
can: af_can: fix NULL pointer dereference in can_rx_register()
tipc: fix the msg->req tlv len check in tipc_nl_compat_name_table_dump_header
dmaengine: pxa_dma: use platform_get_irq_optional
dmaengine: mv_xor_v2: Fix a resource leak in mv_xor_v2_remove()
drivers: net: xgene: disable napi when register irq failed in xgene_enet_open()
perf stat: Fix printing os->prefix in CSV metrics output
net: nixge: disable napi when enable interrupts failed in nixge_open()
net/mlx5: Allow async trigger completion execution on single CPU systems
net: cpsw: disable napi in cpsw_ndo_open()
net: cxgb3_main: disable napi when bind qsets failed in cxgb_up()
cxgb4vf: shut down the adapter when t4vf_update_port_info() failed in cxgb4vf_open()
ethernet: s2io: disable napi when start nic failed in s2io_card_up()
net: mv643xx_eth: disable napi when init rxq or txq failed in mv643xx_eth_open()
ethernet: tundra: free irq when alloc ring failed in tsi108_open()
net: macvlan: fix memory leaks of macvlan_common_newlink
riscv: process: fix kernel info leakage
arm64: efi: Fix handling of misaligned runtime regions and drop warning
MIPS: jump_label: Fix compat branch range check
mmc: cqhci: Provide helper for resetting both SDHCI and CQHCI
mmc: sdhci-of-arasan: Fix SDHCI_RESET_ALL for CQHCI
mmc: sdhci-tegra: Fix SDHCI_RESET_ALL for CQHCI
ALSA: hda/ca0132: add quirk for EVGA Z390 DARK
ALSA: hda: fix potential memleak in 'add_widget_node'
ALSA: usb-audio: Add quirk entry for M-Audio Micro
ALSA: usb-audio: Add DSD support for Accuphase DAC-60
vmlinux.lds.h: Fix placement of '.data..decrypted' section
nilfs2: fix deadlock in nilfs_count_free_blocks()
nilfs2: fix use-after-free bug of ns_writer on remount
drm/i915/dmabuf: fix sg_table handling in map_dma_buf
platform/x86: hp_wmi: Fix rfkill causing soft blocked wifi
btrfs: selftests: fix wrong error check in btrfs_free_dummy_root()
udf: Fix a slab-out-of-bounds write bug in udf_find_entry()
can: j1939: j1939_send_one(): fix missing CAN header initialization
cert host tools: Stop complaining about deprecated OpenSSL functions
dmaengine: at_hdmac: Fix at_lli struct definition
dmaengine: at_hdmac: Don't start transactions at tx_submit level
dmaengine: at_hdmac: Fix completion of unissued descriptor in case of errors
dmaengine: at_hdmac: Don't allow CPU to reorder channel enable
dmaengine: at_hdmac: Fix impossible condition
dmaengine: at_hdmac: Check return code of dma_async_device_register
net: tun: call napi_schedule_prep() to ensure we own a napi
x86/cpu: Restore AMD's DE_CFG MSR after resume
ASoC: wm5102: Revert "ASoC: wm5102: Fix PM disable depth imbalance in wm5102_probe"
ASoC: wm5110: Revert "ASoC: wm5110: Fix PM disable depth imbalance in wm5110_probe"
ASoC: wm8997: Revert "ASoC: wm8997: Fix PM disable depth imbalance in wm8997_probe"
ASoC: wm8962: Add an event handler for TEMP_HP and TEMP_SPK
spi: intel: Fix the offset to get the 64K erase opcode
ASoC: codecs: jz4725b: add missed Line In power control bit
ASoC: codecs: jz4725b: fix reported volume for Master ctl
ASoC: codecs: jz4725b: use right control for Capture Volume
ASoC: codecs: jz4725b: fix capture selector naming
selftests/futex: fix build for clang
selftests/intel_pstate: fix build for ARCH=x86_64
NFSv4: Retry LOCK on OLD_STATEID during delegation return
i2c: i801: add lis3lv02d's I2C address for Vostro 5568
drm/imx: imx-tve: Fix return type of imx_tve_connector_mode_valid
btrfs: remove pointless and double ulist frees in error paths of qgroup tests
Bluetooth: L2CAP: Fix l2cap_global_chan_by_psm
ASoC: codecs: jz4725b: Fix spelling mistake "Sourc" -> "Source", "Routee" -> "Route"
spi: stm32: Print summary 'callbacks suppressed' message
ASoC: core: Fix use-after-free in snd_soc_exit()
serial: 8250_omap: remove wait loop from Errata i202 workaround
serial: 8250: omap: Fix unpaired pm_runtime_put_sync() in omap8250_remove()
serial: 8250: omap: Flush PM QOS work on remove
serial: imx: Add missing .thaw_noirq hook
tty: n_gsm: fix sleep-in-atomic-context bug in gsm_control_send
ASoC: soc-utils: Remove __exit for snd_soc_util_exit()
block: sed-opal: kmalloc the cmd/resp buffers
siox: fix possible memory leak in siox_device_add()
parport_pc: Avoid FIFO port location truncation
pinctrl: devicetree: fix null pointer dereferencing in pinctrl_dt_to_map
arm64: dts: imx8mm: Fix NAND controller size-cells
arm64: dts: imx8mn: Fix NAND controller size-cells
ata: libata-transport: fix double ata_host_put() in ata_tport_add()
net: bgmac: Drop free_netdev() from bgmac_enet_remove()
mISDN: fix possible memory leak in mISDN_dsp_element_register()
net: liquidio: release resources when liquidio driver open failed
mISDN: fix misuse of put_device() in mISDN_register_device()
net: macvlan: Use built-in RCU list checking
net: caif: fix double disconnect client in chnl_net_open()
bnxt_en: Remove debugfs when pci_register_driver failed
xen/pcpu: fix possible memory leak in register_pcpu()
drbd: use after free in drbd_create_device()
platform/x86/intel: pmc: Don't unconditionally attach Intel PMC when virtualized
net/x25: Fix skb leak in x25_lapb_receive_frame()
cifs: Fix wrong return value checking when GETFLAGS
net: thunderbolt: Fix error handling in tbnet_init()
cifs: add check for returning value of SMB2_set_info_init
ftrace: Fix the possible incorrect kernel message
ftrace: Optimize the allocation for mcount entries
ftrace: Fix null pointer dereference in ftrace_add_mod()
ring_buffer: Do not deactivate non-existant pages
ALSA: usb-audio: Drop snd_BUG_ON() from snd_usbmidi_output_open()
Revert "usb: dwc3: disable USB core PHY management"
slimbus: stream: correct presence rate frequencies
speakup: fix a segfault caused by switching consoles
USB: serial: option: add Sierra Wireless EM9191
USB: serial: option: remove old LARA-R6 PID
USB: serial: option: add u-blox LARA-R6 00B modem
USB: serial: option: add u-blox LARA-L6 modem
USB: serial: option: add Fibocom FM160 0x0111 composition
usb: add NO_LPM quirk for Realforce 87U Keyboard
usb: chipidea: fix deadlock in ci_otg_del_timer
iio: adc: at91_adc: fix possible memory leak in at91_adc_allocate_trigger()
iio: trigger: sysfs: fix possible memory leak in iio_sysfs_trig_init()
iio: pressure: ms5611: changed hardcoded SPI speed to value limited
dm ioctl: fix misbehavior if list_versions races with module loading
serial: 8250: Fall back to non-DMA Rx if IIR_RDI occurs
serial: 8250_lpss: Configure DMA also w/o DMA filter
Input: iforce - invert valid length check when fetching device IDs
scsi: zfcp: Fix double free of FSF request when qdio send fails
mmc: core: properly select voltage range without power cycle
mmc: sdhci-pci-o2micro: fix card detect fail issue caused by CD# debounce timeout
mmc: sdhci-pci: Fix possible memory leak caused by missing pci_dev_put()
docs: update mediator contact information in CoC doc
misc/vmw_vmci: fix an infoleak in vmci_host_do_receive_datagram()
serial: 8250: Flush DMA Rx on RLSI
ring-buffer: Include dropped pages in counting dirty patches
scsi: target: tcm_loop: Fix possible name leak in tcm_loop_setup_hba_bus()
kprobes: Skip clearing aggrprobe's post_handler in kprobe-on-ftrace case
Input: i8042 - fix leaking of platform device on module removal
macvlan: enforce a consistent minimal mtu
tcp: cdg: allow tcp_cdg_release() to be called multiple times
kcm: avoid potential race in kcm_tx_work
bpf, test_run: Fix alignment problem in bpf_prog_test_run_skb()
kcm: close race conditions on sk_receive_queue
9p: trans_fd/p9_conn_cancel: drop client lock earlier
gfs2: Check sb_bsize_shift after reading superblock
gfs2: Switch from strlcpy to strscpy
9p/trans_fd: always use O_NONBLOCK read/write
mm: fs: initialize fsdata passed to write_begin/write_end interface
ntfs: fix use-after-free in ntfs_attr_find()
ntfs: fix out-of-bounds read in ntfs_attr_find()
ntfs: check overflow when iterating ATTR_RECORDs
Linux 5.4.225

Change-Id: I7c04b5784804b3883c8cac2b860e6ddfef6f5e1f
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
This commit is contained in commit 17d66a1fd0
@@ -51,7 +51,7 @@ the Technical Advisory Board (TAB) or other maintainers if you're
 uncertain how to handle situations that come up. It will not be
 considered a violation report unless you want it to be. If you are
 uncertain about approaching the TAB or any other maintainers, please
-reach out to our conflict mediator, Joanna Lee <joanna.lee@gesmer.com>.
+reach out to our conflict mediator, Joanna Lee <jlee@linuxfoundation.org>.
 
 In the end, "be kind to each other" is really what the end goal is for
 everybody. We know everyone is human and we all fail at times, but the
Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 4
-SUBLEVEL = 224
+SUBLEVEL = 225
 EXTRAVERSION =
 NAME = Kleptomaniac Octopus
 
@ -838,10 +838,10 @@
|
||||
clocks = <&clk IMX8MM_CLK_NAND_USDHC_BUS_RAWNAND_CLK>;
|
||||
};
|
||||
|
||||
gpmi: nand-controller@33002000{
|
||||
gpmi: nand-controller@33002000 {
|
||||
compatible = "fsl,imx8mm-gpmi-nand", "fsl,imx7d-gpmi-nand";
|
||||
#address-cells = <1>;
|
||||
#size-cells = <1>;
|
||||
#size-cells = <0>;
|
||||
reg = <0x33002000 0x2000>, <0x33004000 0x4000>;
|
||||
reg-names = "gpmi-nand", "bch";
|
||||
interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
|
||||
|
@ -695,7 +695,7 @@
|
||||
gpmi: nand-controller@33002000 {
|
||||
compatible = "fsl,imx8mn-gpmi-nand", "fsl,imx7d-gpmi-nand";
|
||||
#address-cells = <1>;
|
||||
#size-cells = <1>;
|
||||
#size-cells = <0>;
|
||||
reg = <0x33002000 0x2000>, <0x33004000 0x4000>;
|
||||
reg-names = "gpmi-nand", "bch";
|
||||
interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
|
||||
|
@ -12,6 +12,14 @@
|
||||
|
||||
#include <asm/efi.h>
|
||||
|
||||
static bool region_is_misaligned(const efi_memory_desc_t *md)
|
||||
{
|
||||
if (PAGE_SIZE == EFI_PAGE_SIZE)
|
||||
return false;
|
||||
return !PAGE_ALIGNED(md->phys_addr) ||
|
||||
!PAGE_ALIGNED(md->num_pages << EFI_PAGE_SHIFT);
|
||||
}
|
||||
|
||||
/*
|
||||
* Only regions of type EFI_RUNTIME_SERVICES_CODE need to be
|
||||
* executable, everything else can be mapped with the XN bits
|
||||
@ -25,14 +33,22 @@ static __init pteval_t create_mapping_protection(efi_memory_desc_t *md)
|
||||
if (type == EFI_MEMORY_MAPPED_IO)
|
||||
return PROT_DEVICE_nGnRE;
|
||||
|
||||
if (WARN_ONCE(!PAGE_ALIGNED(md->phys_addr),
|
||||
"UEFI Runtime regions are not aligned to 64 KB -- buggy firmware?"))
|
||||
if (region_is_misaligned(md)) {
|
||||
static bool __initdata code_is_misaligned;
|
||||
|
||||
/*
|
||||
* If the region is not aligned to the page size of the OS, we
|
||||
* can not use strict permissions, since that would also affect
|
||||
* the mapping attributes of the adjacent regions.
|
||||
* Regions that are not aligned to the OS page size cannot be
|
||||
* mapped with strict permissions, as those might interfere
|
||||
* with the permissions that are needed by the adjacent
|
||||
* region's mapping. However, if we haven't encountered any
|
||||
* misaligned runtime code regions so far, we can safely use
|
||||
* non-executable permissions for non-code regions.
|
||||
*/
|
||||
return pgprot_val(PAGE_KERNEL_EXEC);
|
||||
code_is_misaligned |= (type == EFI_RUNTIME_SERVICES_CODE);
|
||||
|
||||
return code_is_misaligned ? pgprot_val(PAGE_KERNEL_EXEC)
|
||||
: pgprot_val(PAGE_KERNEL);
|
||||
}
|
||||
|
||||
/* R-- */
|
||||
if ((attr & (EFI_MEMORY_XP | EFI_MEMORY_RO)) ==
|
||||
@ -62,19 +78,16 @@ int __init efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md)
|
||||
bool page_mappings_only = (md->type == EFI_RUNTIME_SERVICES_CODE ||
|
||||
md->type == EFI_RUNTIME_SERVICES_DATA);
|
||||
|
||||
if (!PAGE_ALIGNED(md->phys_addr) ||
|
||||
!PAGE_ALIGNED(md->num_pages << EFI_PAGE_SHIFT)) {
|
||||
/*
|
||||
* If the end address of this region is not aligned to page
|
||||
* size, the mapping is rounded up, and may end up sharing a
|
||||
* page frame with the next UEFI memory region. If we create
|
||||
* a block entry now, we may need to split it again when mapping
|
||||
* the next region, and support for that is going to be removed
|
||||
* from the MMU routines. So avoid block mappings altogether in
|
||||
* that case.
|
||||
*/
|
||||
/*
|
||||
* If this region is not aligned to the page size used by the OS, the
|
||||
* mapping will be rounded outwards, and may end up sharing a page
|
||||
* frame with an adjacent runtime memory region. Given that the page
|
||||
* table descriptor covering the shared page will be rewritten when the
|
||||
* adjacent region gets mapped, we must avoid block mappings here so we
|
||||
* don't have to worry about splitting them when that happens.
|
||||
*/
|
||||
if (region_is_misaligned(md))
|
||||
page_mappings_only = true;
|
||||
}
|
||||
|
||||
create_pgd_mapping(mm, md->phys_addr, md->virt_addr,
|
||||
md->num_pages << EFI_PAGE_SHIFT,
|
||||
@ -101,6 +114,9 @@ int __init efi_set_mapping_permissions(struct mm_struct *mm,
|
||||
BUG_ON(md->type != EFI_RUNTIME_SERVICES_CODE &&
|
||||
md->type != EFI_RUNTIME_SERVICES_DATA);
|
||||
|
||||
if (region_is_misaligned(md))
|
||||
return 0;
|
||||
|
||||
/*
|
||||
* Calling apply_to_page_range() is only safe on regions that are
|
||||
* guaranteed to be mapped down to pages. Since we are only called
|
||||
|
@@ -56,7 +56,7 @@ void arch_jump_label_transform(struct jump_entry *e,
 		 * The branch offset must fit in the instruction's 26
 		 * bit field.
 		 */
-		WARN_ON((offset >= BIT(25)) ||
+		WARN_ON((offset >= (long)BIT(25)) ||
			(offset < -(long)BIT(25)));
 
 		insn.j_format.opcode = bc6_op;
@ -104,6 +104,8 @@ int copy_thread_tls(unsigned long clone_flags, unsigned long usp,
|
||||
{
|
||||
struct pt_regs *childregs = task_pt_regs(p);
|
||||
|
||||
memset(&p->thread.s, 0, sizeof(p->thread.s));
|
||||
|
||||
/* p->thread holds context to be restored by __switch_to() */
|
||||
if (unlikely(p->flags & PF_KTHREAD)) {
|
||||
/* Kernel thread */
|
||||
|
@ -454,6 +454,11 @@
|
||||
#define MSR_AMD64_OSVW_STATUS 0xc0010141
|
||||
#define MSR_AMD64_LS_CFG 0xc0011020
|
||||
#define MSR_AMD64_DC_CFG 0xc0011022
|
||||
|
||||
#define MSR_AMD64_DE_CFG 0xc0011029
|
||||
#define MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT 1
|
||||
#define MSR_AMD64_DE_CFG_LFENCE_SERIALIZE BIT_ULL(MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT)
|
||||
|
||||
#define MSR_AMD64_BU_CFG2 0xc001102a
|
||||
#define MSR_AMD64_IBSFETCHCTL 0xc0011030
|
||||
#define MSR_AMD64_IBSFETCHLINAD 0xc0011031
|
||||
@ -522,9 +527,6 @@
|
||||
#define FAM10H_MMIO_CONF_BASE_MASK 0xfffffffULL
|
||||
#define FAM10H_MMIO_CONF_BASE_SHIFT 20
|
||||
#define MSR_FAM10H_NODE_ID 0xc001100c
|
||||
#define MSR_F10H_DECFG 0xc0011029
|
||||
#define MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT 1
|
||||
#define MSR_F10H_DECFG_LFENCE_SERIALIZE BIT_ULL(MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT)
|
||||
|
||||
/* K8 MSRs */
|
||||
#define MSR_K8_TOP_MEM1 0xc001001a
|
||||
|
@ -794,8 +794,6 @@ static void init_amd_gh(struct cpuinfo_x86 *c)
|
||||
set_cpu_bug(c, X86_BUG_AMD_TLB_MMATCH);
|
||||
}
|
||||
|
||||
#define MSR_AMD64_DE_CFG 0xC0011029
|
||||
|
||||
static void init_amd_ln(struct cpuinfo_x86 *c)
|
||||
{
|
||||
/*
|
||||
@ -965,8 +963,8 @@ static void init_amd(struct cpuinfo_x86 *c)
|
||||
* msr_set_bit() uses the safe accessors, too, even if the MSR
|
||||
* is not present.
|
||||
*/
|
||||
msr_set_bit(MSR_F10H_DECFG,
|
||||
MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT);
|
||||
msr_set_bit(MSR_AMD64_DE_CFG,
|
||||
MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT);
|
||||
|
||||
/* A serializing LFENCE stops RDTSC speculation */
|
||||
set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
|
||||
|
@ -335,8 +335,8 @@ static void init_hygon(struct cpuinfo_x86 *c)
|
||||
* msr_set_bit() uses the safe accessors, too, even if the MSR
|
||||
* is not present.
|
||||
*/
|
||||
msr_set_bit(MSR_F10H_DECFG,
|
||||
MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT);
|
||||
msr_set_bit(MSR_AMD64_DE_CFG,
|
||||
MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT);
|
||||
|
||||
/* A serializing LFENCE stops RDTSC speculation */
|
||||
set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
|
||||
|
@ -4180,9 +4180,9 @@ static int svm_get_msr_feature(struct kvm_msr_entry *msr)
|
||||
msr->data = 0;
|
||||
|
||||
switch (msr->index) {
|
||||
case MSR_F10H_DECFG:
|
||||
if (boot_cpu_has(X86_FEATURE_LFENCE_RDTSC))
|
||||
msr->data |= MSR_F10H_DECFG_LFENCE_SERIALIZE;
|
||||
case MSR_AMD64_DE_CFG:
|
||||
if (cpu_feature_enabled(X86_FEATURE_LFENCE_RDTSC))
|
||||
msr->data |= MSR_AMD64_DE_CFG_LFENCE_SERIALIZE;
|
||||
break;
|
||||
default:
|
||||
return 1;
|
||||
@ -4284,7 +4284,7 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
|
||||
msr_info->data = 0x1E;
|
||||
}
|
||||
break;
|
||||
case MSR_F10H_DECFG:
|
||||
case MSR_AMD64_DE_CFG:
|
||||
msr_info->data = svm->msr_decfg;
|
||||
break;
|
||||
default:
|
||||
@ -4451,7 +4451,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
|
||||
case MSR_VM_IGNNE:
|
||||
vcpu_unimpl(vcpu, "unimplemented wrmsr: 0x%x data 0x%llx\n", ecx, data);
|
||||
break;
|
||||
case MSR_F10H_DECFG: {
|
||||
case MSR_AMD64_DE_CFG: {
|
||||
struct kvm_msr_entry msr_entry;
|
||||
|
||||
msr_entry.index = msr->index;
|
||||
|
@ -1337,7 +1337,7 @@ static const u32 msr_based_features_all[] = {
|
||||
MSR_IA32_VMX_EPT_VPID_CAP,
|
||||
MSR_IA32_VMX_VMFUNC,
|
||||
|
||||
MSR_F10H_DECFG,
|
||||
MSR_AMD64_DE_CFG,
|
||||
MSR_IA32_UCODE_REV,
|
||||
MSR_IA32_ARCH_CAPABILITIES,
|
||||
};
|
||||
|
@ -541,6 +541,7 @@ static void pm_save_spec_msr(void)
|
||||
MSR_TSX_FORCE_ABORT,
|
||||
MSR_IA32_MCU_OPT_CTRL,
|
||||
MSR_AMD64_LS_CFG,
|
||||
MSR_AMD64_DE_CFG,
|
||||
};
|
||||
|
||||
msr_build_context(spec_msr_id, ARRAY_SIZE(spec_msr_id));
|
||||
|
@ -88,8 +88,8 @@ struct opal_dev {
|
||||
u64 lowest_lba;
|
||||
|
||||
size_t pos;
|
||||
u8 cmd[IO_BUFFER_LENGTH];
|
||||
u8 resp[IO_BUFFER_LENGTH];
|
||||
u8 *cmd;
|
||||
u8 *resp;
|
||||
|
||||
struct parsed_resp parsed;
|
||||
size_t prev_d_len;
|
||||
@ -2019,6 +2019,8 @@ void free_opal_dev(struct opal_dev *dev)
|
||||
return;
|
||||
|
||||
clean_opal_dev(dev);
|
||||
kfree(dev->resp);
|
||||
kfree(dev->cmd);
|
||||
kfree(dev);
|
||||
}
|
||||
EXPORT_SYMBOL(free_opal_dev);
|
||||
@ -2031,17 +2033,39 @@ struct opal_dev *init_opal_dev(void *data, sec_send_recv *send_recv)
|
||||
if (!dev)
|
||||
return NULL;
|
||||
|
||||
/*
|
||||
* Presumably DMA-able buffers must be cache-aligned. Kmalloc makes
|
||||
* sure the allocated buffer is DMA-safe in that regard.
|
||||
*/
|
||||
dev->cmd = kmalloc(IO_BUFFER_LENGTH, GFP_KERNEL);
|
||||
if (!dev->cmd)
|
||||
goto err_free_dev;
|
||||
|
||||
dev->resp = kmalloc(IO_BUFFER_LENGTH, GFP_KERNEL);
|
||||
if (!dev->resp)
|
||||
goto err_free_cmd;
|
||||
|
||||
INIT_LIST_HEAD(&dev->unlk_lst);
|
||||
mutex_init(&dev->dev_lock);
|
||||
dev->data = data;
|
||||
dev->send_recv = send_recv;
|
||||
if (check_opal_support(dev) != 0) {
|
||||
pr_debug("Opal is not supported on this device\n");
|
||||
kfree(dev);
|
||||
return NULL;
|
||||
goto err_free_resp;
|
||||
}
|
||||
|
||||
return dev;
|
||||
|
||||
err_free_resp:
|
||||
kfree(dev->resp);
|
||||
|
||||
err_free_cmd:
|
||||
kfree(dev->cmd);
|
||||
|
||||
err_free_dev:
|
||||
kfree(dev);
|
||||
|
||||
return NULL;
|
||||
}
|
||||
EXPORT_SYMBOL(init_opal_dev);
|
||||
|
||||
|
@ -317,7 +317,6 @@ int ata_tport_add(struct device *parent,
|
||||
tport_err:
|
||||
transport_destroy_device(dev);
|
||||
put_device(dev);
|
||||
ata_host_put(ap->host);
|
||||
return error;
|
||||
}
|
||||
|
||||
|
@ -2778,7 +2778,7 @@ static int init_submitter(struct drbd_device *device)
|
||||
enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsigned int minor)
|
||||
{
|
||||
struct drbd_resource *resource = adm_ctx->resource;
|
||||
struct drbd_connection *connection;
|
||||
struct drbd_connection *connection, *n;
|
||||
struct drbd_device *device;
|
||||
struct drbd_peer_device *peer_device, *tmp_peer_device;
|
||||
struct gendisk *disk;
|
||||
@ -2906,7 +2906,7 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
|
||||
out_idr_remove_vol:
|
||||
idr_remove(&connection->peer_devices, vnr);
|
||||
out_idr_remove_from_resource:
|
||||
for_each_connection(connection, resource) {
|
||||
for_each_connection_safe(connection, n, resource) {
|
||||
peer_device = idr_remove(&connection->peer_devices, vnr);
|
||||
if (peer_device)
|
||||
kref_put(&connection->kref, drbd_destroy_connection);
|
||||
|
@ -246,6 +246,8 @@ static void atc_dostart(struct at_dma_chan *atchan, struct at_desc *first)
|
||||
ATC_SPIP_BOUNDARY(first->boundary));
|
||||
channel_writel(atchan, DPIP, ATC_DPIP_HOLE(first->dst_hole) |
|
||||
ATC_DPIP_BOUNDARY(first->boundary));
|
||||
/* Don't allow CPU to reorder channel enable. */
|
||||
wmb();
|
||||
dma_writel(atdma, CHER, atchan->mask);
|
||||
|
||||
vdbg_dump_regs(atchan);
|
||||
@ -306,7 +308,8 @@ static int atc_get_bytes_left(struct dma_chan *chan, dma_cookie_t cookie)
|
||||
struct at_desc *desc_first = atc_first_active(atchan);
|
||||
struct at_desc *desc;
|
||||
int ret;
|
||||
u32 ctrla, dscr, trials;
|
||||
u32 ctrla, dscr;
|
||||
unsigned int i;
|
||||
|
||||
/*
|
||||
* If the cookie doesn't match to the currently running transfer then
|
||||
@ -376,7 +379,7 @@ static int atc_get_bytes_left(struct dma_chan *chan, dma_cookie_t cookie)
|
||||
dscr = channel_readl(atchan, DSCR);
|
||||
rmb(); /* ensure DSCR is read before CTRLA */
|
||||
ctrla = channel_readl(atchan, CTRLA);
|
||||
for (trials = 0; trials < ATC_MAX_DSCR_TRIALS; ++trials) {
|
||||
for (i = 0; i < ATC_MAX_DSCR_TRIALS; ++i) {
|
||||
u32 new_dscr;
|
||||
|
||||
rmb(); /* ensure DSCR is read after CTRLA */
|
||||
@ -402,7 +405,7 @@ static int atc_get_bytes_left(struct dma_chan *chan, dma_cookie_t cookie)
|
||||
rmb(); /* ensure DSCR is read before CTRLA */
|
||||
ctrla = channel_readl(atchan, CTRLA);
|
||||
}
|
||||
if (unlikely(trials >= ATC_MAX_DSCR_TRIALS))
|
||||
if (unlikely(i == ATC_MAX_DSCR_TRIALS))
|
||||
return -ETIMEDOUT;
|
||||
|
||||
/* for the first descriptor we can be more accurate */
|
||||
@ -550,10 +553,6 @@ static void atc_handle_error(struct at_dma_chan *atchan)
|
||||
bad_desc = atc_first_active(atchan);
|
||||
list_del_init(&bad_desc->desc_node);
|
||||
|
||||
/* As we are stopped, take advantage to push queued descriptors
|
||||
* in active_list */
|
||||
list_splice_init(&atchan->queue, atchan->active_list.prev);
|
||||
|
||||
/* Try to restart the controller */
|
||||
if (!list_empty(&atchan->active_list))
|
||||
atc_dostart(atchan, atc_first_active(atchan));
|
||||
@ -674,19 +673,11 @@ static dma_cookie_t atc_tx_submit(struct dma_async_tx_descriptor *tx)
|
||||
spin_lock_irqsave(&atchan->lock, flags);
|
||||
cookie = dma_cookie_assign(tx);
|
||||
|
||||
if (list_empty(&atchan->active_list)) {
|
||||
dev_vdbg(chan2dev(tx->chan), "tx_submit: started %u\n",
|
||||
desc->txd.cookie);
|
||||
atc_dostart(atchan, desc);
|
||||
list_add_tail(&desc->desc_node, &atchan->active_list);
|
||||
} else {
|
||||
dev_vdbg(chan2dev(tx->chan), "tx_submit: queued %u\n",
|
||||
desc->txd.cookie);
|
||||
list_add_tail(&desc->desc_node, &atchan->queue);
|
||||
}
|
||||
|
||||
list_add_tail(&desc->desc_node, &atchan->queue);
|
||||
spin_unlock_irqrestore(&atchan->lock, flags);
|
||||
|
||||
dev_vdbg(chan2dev(tx->chan), "tx_submit: queued %u\n",
|
||||
desc->txd.cookie);
|
||||
return cookie;
|
||||
}
|
||||
|
||||
@ -1957,7 +1948,11 @@ static int __init at_dma_probe(struct platform_device *pdev)
|
||||
dma_has_cap(DMA_SLAVE, atdma->dma_common.cap_mask) ? "slave " : "",
|
||||
plat_dat->nr_channels);
|
||||
|
||||
dma_async_device_register(&atdma->dma_common);
|
||||
err = dma_async_device_register(&atdma->dma_common);
|
||||
if (err) {
|
||||
dev_err(&pdev->dev, "Unable to register: %d.\n", err);
|
||||
goto err_dma_async_device_register;
|
||||
}
|
||||
|
||||
/*
|
||||
* Do not return an error if the dmac node is not present in order to
|
||||
@ -1977,6 +1972,7 @@ static int __init at_dma_probe(struct platform_device *pdev)
|
||||
|
||||
err_of_dma_controller_register:
|
||||
dma_async_device_unregister(&atdma->dma_common);
|
||||
err_dma_async_device_register:
|
||||
dma_pool_destroy(atdma->memset_pool);
|
||||
err_memset_pool_create:
|
||||
dma_pool_destroy(atdma->dma_desc_pool);
|
||||
|
@ -164,13 +164,13 @@
|
||||
/* LLI == Linked List Item; aka DMA buffer descriptor */
|
||||
struct at_lli {
|
||||
/* values that are not changed by hardware */
|
||||
dma_addr_t saddr;
|
||||
dma_addr_t daddr;
|
||||
u32 saddr;
|
||||
u32 daddr;
|
||||
/* value that may get written back: */
|
||||
u32 ctrla;
|
||||
u32 ctrla;
|
||||
/* more values that are not changed by hardware */
|
||||
u32 ctrlb;
|
||||
dma_addr_t dscr; /* chain to next lli */
|
||||
u32 ctrlb;
|
||||
u32 dscr; /* chain to next lli */
|
||||
};
|
||||
|
||||
/**
|
||||
|
@ -895,6 +895,7 @@ static int mv_xor_v2_remove(struct platform_device *pdev)
|
||||
tasklet_kill(&xor_dev->irq_tasklet);
|
||||
|
||||
clk_disable_unprepare(xor_dev->clk);
|
||||
clk_disable_unprepare(xor_dev->reg_clk);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -1249,14 +1249,14 @@ static int pxad_init_phys(struct platform_device *op,
|
||||
return -ENOMEM;
|
||||
|
||||
for (i = 0; i < nb_phy_chans; i++)
|
||||
if (platform_get_irq(op, i) > 0)
|
||||
if (platform_get_irq_optional(op, i) > 0)
|
||||
nr_irq++;
|
||||
|
||||
for (i = 0; i < nb_phy_chans; i++) {
|
||||
phy = &pdev->phys[i];
|
||||
phy->base = pdev->base;
|
||||
phy->idx = i;
|
||||
irq = platform_get_irq(op, i);
|
||||
irq = platform_get_irq_optional(op, i);
|
||||
if ((nr_irq > 1) && (irq > 0))
|
||||
ret = devm_request_irq(&op->dev, irq,
|
||||
pxad_chan_handler,
|
||||
|
@ -36,13 +36,13 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
|
||||
goto err_unpin_pages;
|
||||
}
|
||||
|
||||
ret = sg_alloc_table(st, obj->mm.pages->nents, GFP_KERNEL);
|
||||
ret = sg_alloc_table(st, obj->mm.pages->orig_nents, GFP_KERNEL);
|
||||
if (ret)
|
||||
goto err_free;
|
||||
|
||||
src = obj->mm.pages->sgl;
|
||||
dst = st->sgl;
|
||||
for (i = 0; i < obj->mm.pages->nents; i++) {
|
||||
for (i = 0; i < obj->mm.pages->orig_nents; i++) {
|
||||
sg_set_page(dst, sg_page(src), src->length, 0);
|
||||
dst = sg_next(dst);
|
||||
src = sg_next(src);
|
||||
|
@ -237,8 +237,9 @@ static int imx_tve_connector_get_modes(struct drm_connector *connector)
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int imx_tve_connector_mode_valid(struct drm_connector *connector,
|
||||
struct drm_display_mode *mode)
|
||||
static enum drm_mode_status
|
||||
imx_tve_connector_mode_valid(struct drm_connector *connector,
|
||||
struct drm_display_mode *mode)
|
||||
{
|
||||
struct imx_tve *tve = con_to_tve(connector);
|
||||
unsigned long rate;
|
||||
|
@ -392,7 +392,12 @@ static int __init vc4_drm_register(void)
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
return platform_driver_register(&vc4_platform_driver);
|
||||
ret = platform_driver_register(&vc4_platform_driver);
|
||||
if (ret)
|
||||
platform_unregister_drivers(component_drivers,
|
||||
ARRAY_SIZE(component_drivers));
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void __exit vc4_drm_unregister(void)
|
||||
|
@ -492,7 +492,7 @@ static int mousevsc_probe(struct hv_device *device,
|
||||
|
||||
ret = hid_add_device(hid_dev);
|
||||
if (ret)
|
||||
goto probe_err1;
|
||||
goto probe_err2;
|
||||
|
||||
|
||||
ret = hid_parse(hid_dev);
|
||||
|
@ -1253,6 +1253,7 @@ static const struct {
|
||||
* Additional individual entries were added after verification.
|
||||
*/
|
||||
{ "Vostro V131", 0x1d },
|
||||
{ "Vostro 5568", 0x29 },
|
||||
};
|
||||
|
||||
static void register_dell_lis3lv02d_i2c_device(struct i801_priv *priv)
|
||||
|
@ -616,8 +616,10 @@ static struct iio_trigger *at91_adc_allocate_trigger(struct iio_dev *idev,
|
||||
trig->ops = &at91_adc_trigger_ops;
|
||||
|
||||
ret = iio_trigger_register(trig);
|
||||
if (ret)
|
||||
if (ret) {
|
||||
iio_trigger_free(trig);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
return trig;
|
||||
}
|
||||
|
@ -92,7 +92,7 @@ static int ms5611_spi_probe(struct spi_device *spi)
|
||||
spi_set_drvdata(spi, indio_dev);
|
||||
|
||||
spi->mode = SPI_MODE_0;
|
||||
spi->max_speed_hz = 20000000;
|
||||
spi->max_speed_hz = min(spi->max_speed_hz, 20000000U);
|
||||
spi->bits_per_word = 8;
|
||||
ret = spi_setup(spi);
|
||||
if (ret < 0)
|
||||
|
@ -209,9 +209,13 @@ static int iio_sysfs_trigger_remove(int id)
|
||||
|
||||
static int __init iio_sysfs_trig_init(void)
|
||||
{
|
||||
int ret;
|
||||
device_initialize(&iio_sysfs_trig_dev);
|
||||
dev_set_name(&iio_sysfs_trig_dev, "iio_sysfs_trigger");
|
||||
return device_add(&iio_sysfs_trig_dev);
|
||||
ret = device_add(&iio_sysfs_trig_dev);
|
||||
if (ret)
|
||||
put_device(&iio_sysfs_trig_dev);
|
||||
return ret;
|
||||
}
|
||||
module_init(iio_sysfs_trig_init);
|
||||
|
||||
|
@ -273,22 +273,22 @@ int iforce_init_device(struct device *parent, u16 bustype,
|
||||
* Get device info.
|
||||
*/
|
||||
|
||||
if (!iforce_get_id_packet(iforce, 'M', buf, &len) || len < 3)
|
||||
if (!iforce_get_id_packet(iforce, 'M', buf, &len) && len >= 3)
|
||||
input_dev->id.vendor = get_unaligned_le16(buf + 1);
|
||||
else
|
||||
dev_warn(&iforce->dev->dev, "Device does not respond to id packet M\n");
|
||||
|
||||
if (!iforce_get_id_packet(iforce, 'P', buf, &len) || len < 3)
|
||||
if (!iforce_get_id_packet(iforce, 'P', buf, &len) && len >= 3)
|
||||
input_dev->id.product = get_unaligned_le16(buf + 1);
|
||||
else
|
||||
dev_warn(&iforce->dev->dev, "Device does not respond to id packet P\n");
|
||||
|
||||
if (!iforce_get_id_packet(iforce, 'B', buf, &len) || len < 3)
|
||||
if (!iforce_get_id_packet(iforce, 'B', buf, &len) && len >= 3)
|
||||
iforce->device_memory.end = get_unaligned_le16(buf + 1);
|
||||
else
|
||||
dev_warn(&iforce->dev->dev, "Device does not respond to id packet B\n");
|
||||
|
||||
if (!iforce_get_id_packet(iforce, 'N', buf, &len) || len < 2)
|
||||
if (!iforce_get_id_packet(iforce, 'N', buf, &len) && len >= 2)
|
||||
ff_effects = buf[1];
|
||||
else
|
||||
dev_warn(&iforce->dev->dev, "Device does not respond to id packet N\n");
|
||||
|
@ -1540,8 +1540,6 @@ static int i8042_probe(struct platform_device *dev)
|
||||
{
|
||||
int error;
|
||||
|
||||
i8042_platform_device = dev;
|
||||
|
||||
if (i8042_reset == I8042_RESET_ALWAYS) {
|
||||
error = i8042_controller_selftest();
|
||||
if (error)
|
||||
@ -1579,7 +1577,6 @@ static int i8042_probe(struct platform_device *dev)
|
||||
i8042_free_aux_ports(); /* in case KBD failed but AUX not */
|
||||
i8042_free_irqs();
|
||||
i8042_controller_reset(false);
|
||||
i8042_platform_device = NULL;
|
||||
|
||||
return error;
|
||||
}
|
||||
@ -1589,7 +1586,6 @@ static int i8042_remove(struct platform_device *dev)
|
||||
i8042_unregister_ports();
|
||||
i8042_free_irqs();
|
||||
i8042_controller_reset(false);
|
||||
i8042_platform_device = NULL;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -222,7 +222,7 @@ mISDN_register_device(struct mISDNdevice *dev,
|
||||
|
||||
err = get_free_devid();
|
||||
if (err < 0)
|
||||
goto error1;
|
||||
return err;
|
||||
dev->id = err;
|
||||
|
||||
device_initialize(&dev->dev);
|
||||
|
@ -80,6 +80,7 @@ int mISDN_dsp_element_register(struct mISDN_dsp_element *elem)
|
||||
if (!entry)
|
||||
return -ENOMEM;
|
||||
|
||||
INIT_LIST_HEAD(&entry->list);
|
||||
entry->elem = elem;
|
||||
|
||||
entry->dev.class = elements_class;
|
||||
@ -114,7 +115,7 @@ int mISDN_dsp_element_register(struct mISDN_dsp_element *elem)
|
||||
device_unregister(&entry->dev);
|
||||
return ret;
|
||||
err1:
|
||||
kfree(entry);
|
||||
put_device(&entry->dev);
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL(mISDN_dsp_element_register);
|
||||
|
@ -573,7 +573,7 @@ static void list_version_get_needed(struct target_type *tt, void *needed_param)
|
||||
size_t *needed = needed_param;
|
||||
|
||||
*needed += sizeof(struct dm_target_versions);
|
||||
*needed += strlen(tt->name);
|
||||
*needed += strlen(tt->name) + 1;
|
||||
*needed += ALIGN_MASK;
|
||||
}
|
||||
|
||||
@ -638,7 +638,7 @@ static int __list_versions(struct dm_ioctl *param, size_t param_size, const char
|
||||
iter_info.old_vers = NULL;
|
||||
iter_info.vers = vers;
|
||||
iter_info.flags = 0;
|
||||
iter_info.end = (char *)vers+len;
|
||||
iter_info.end = (char *)vers + needed;
|
||||
|
||||
/*
|
||||
* Now loop through filling out the names & versions.
|
||||
|
@ -852,6 +852,7 @@ static int qp_notify_peer_local(bool attach, struct vmci_handle handle)
|
||||
u32 context_id = vmci_get_context_id();
|
||||
struct vmci_event_qp ev;
|
||||
|
||||
memset(&ev, 0, sizeof(ev));
|
||||
ev.msg.hdr.dst = vmci_make_handle(context_id, VMCI_EVENT_HANDLER);
|
||||
ev.msg.hdr.src = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
|
||||
VMCI_CONTEXT_RESOURCE_ID);
|
||||
@ -1465,6 +1466,7 @@ static int qp_notify_peer(bool attach,
|
||||
* kernel.
|
||||
*/
|
||||
|
||||
memset(&ev, 0, sizeof(ev));
|
||||
ev.msg.hdr.dst = vmci_make_handle(peer_id, VMCI_EVENT_HANDLER);
|
||||
ev.msg.hdr.src = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
|
||||
VMCI_CONTEXT_RESOURCE_ID);
|
||||
|
@@ -1145,7 +1145,13 @@ u32 mmc_select_voltage(struct mmc_host *host, u32 ocr)
 		mmc_power_cycle(host, ocr);
 	} else {
 		bit = fls(ocr) - 1;
-		ocr &= 3 << bit;
+		/*
+		 * The bit variable represents the highest voltage bit set in
+		 * the OCR register.
+		 * To keep a range of 2 values (e.g. 3.2V/3.3V and 3.3V/3.4V),
+		 * we must shift the mask '3' with (bit - 1).
+		 */
+		ocr &= 3 << (bit - 1);
 		if (bit != host->ios.vdd)
 			dev_warn(mmc_dev(host), "exceeding card's volts\n");
 	}
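Aside (not part of the patch): a minimal user-space sketch of the mask arithmetic the comment above describes. It substitutes __builtin_clz() for the kernel's fls() helper, and the OCR value is a made-up example with two adjacent voltage-window bits set.

#include <stdio.h>

/* fls(x): 1-based position of the highest set bit (0 for x == 0),
 * mirroring the kernel helper used by mmc_select_voltage(). */
static int fls_compat(unsigned int x)
{
	return x ? 32 - __builtin_clz(x) : 0;
}

int main(void)
{
	unsigned int ocr = (1u << 21) | (1u << 20); /* two adjacent voltage windows */
	int bit = fls_compat(ocr) - 1;              /* highest set bit: 21 */

	/* The old mask keeps bits 21..22 and drops the lower window;
	 * the fixed mask keeps bits 20..21, i.e. the highest window
	 * plus the one just below it. */
	unsigned int old_kept = ocr & (3u << bit);
	unsigned int new_kept = ocr & (3u << (bit - 1));

	printf("ocr=%#x old=%#x fixed=%#x\n", ocr, old_kept, new_kept);
	return 0;
}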
drivers/mmc/host/sdhci-cqhci.h (new file)
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright 2022 The Chromium OS Authors
+ *
+ * Support that applies to the combination of SDHCI and CQHCI, while not
+ * expressing a dependency between the two modules.
+ */
+
+#ifndef __MMC_HOST_SDHCI_CQHCI_H__
+#define __MMC_HOST_SDHCI_CQHCI_H__
+
+#include "cqhci.h"
+#include "sdhci.h"
+
+static inline void sdhci_and_cqhci_reset(struct sdhci_host *host, u8 mask)
+{
+	if ((host->mmc->caps2 & MMC_CAP2_CQE) && (mask & SDHCI_RESET_ALL) &&
+	    host->mmc->cqe_private)
+		cqhci_deactivate(host->mmc);
+
+	sdhci_reset(host, mask);
+}
+
+#endif /* __MMC_HOST_SDHCI_CQHCI_H__ */
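Aside (not part of the diff): the helper above is meant to be called from a host driver's SDHCI .reset callback in place of a bare sdhci_reset(), which is what the sdhci-of-arasan and sdhci-tegra hunks below do. A rough sketch of that wiring, where the "example_*" driver and ops names are placeholders rather than real code:

/* Hypothetical example driver -- "example_*" names are placeholders. */
#include "sdhci-cqhci.h"
#include "sdhci-pltfm.h"

static void example_sdhci_reset(struct sdhci_host *host, u8 mask)
{
	/* Deactivate CQHCI on a full reset, then do the normal SDHCI reset. */
	sdhci_and_cqhci_reset(host, mask);

	/* Controller-specific post-reset quirks, if any, would go here. */
}

static const struct sdhci_ops example_sdhci_ops = {
	.reset = example_sdhci_reset,
	/* .set_clock, .set_bus_width, ... omitted for brevity */
};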
@@ -24,6 +24,7 @@
 #include <linux/of.h>
 
 #include "cqhci.h"
+#include "sdhci-cqhci.h"
 #include "sdhci-pltfm.h"
 
 #define SDHCI_ARASAN_VENDOR_REGISTER 0x78
@@ -264,7 +265,7 @@ static void sdhci_arasan_reset(struct sdhci_host *host, u8 mask)
 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
 	struct sdhci_arasan_data *sdhci_arasan = sdhci_pltfm_priv(pltfm_host);
 
-	sdhci_reset(host, mask);
+	sdhci_and_cqhci_reset(host, mask);
 
 	if (sdhci_arasan->quirks & SDHCI_ARASAN_QUIRK_FORCE_CDTEST) {
 		ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);
@ -1795,6 +1795,8 @@ static int amd_probe(struct sdhci_pci_chip *chip)
|
||||
}
|
||||
}
|
||||
|
||||
pci_dev_put(smbus_dev);
|
||||
|
||||
if (gen == AMD_CHIPSET_BEFORE_ML || gen == AMD_CHIPSET_CZ)
|
||||
chip->quirks2 |= SDHCI_QUIRK2_CLEAR_TRANSFERMODE_REG_BEFORE_CMD;
|
||||
|
||||
|
@ -31,6 +31,7 @@
|
||||
#define O2_SD_CAPS 0xE0
|
||||
#define O2_SD_ADMA1 0xE2
|
||||
#define O2_SD_ADMA2 0xE7
|
||||
#define O2_SD_MISC_CTRL2 0xF0
|
||||
#define O2_SD_INF_MOD 0xF1
|
||||
#define O2_SD_MISC_CTRL4 0xFC
|
||||
#define O2_SD_TUNING_CTRL 0x300
|
||||
@ -777,6 +778,12 @@ int sdhci_pci_o2_probe(struct sdhci_pci_chip *chip)
|
||||
/* Set Tuning Windows to 5 */
|
||||
pci_write_config_byte(chip->pdev,
|
||||
O2_SD_TUNING_CTRL, 0x55);
|
||||
//Adjust 1st and 2nd CD debounce time
|
||||
pci_read_config_dword(chip->pdev, O2_SD_MISC_CTRL2, &scratch_32);
|
||||
scratch_32 &= 0xFFE7FFFF;
|
||||
scratch_32 |= 0x00180000;
|
||||
pci_write_config_dword(chip->pdev, O2_SD_MISC_CTRL2, scratch_32);
|
||||
pci_write_config_dword(chip->pdev, O2_SD_DETECT_SETTING, 1);
|
||||
/* Lock WP */
|
||||
ret = pci_read_config_byte(chip->pdev,
|
||||
O2_SD_LOCK_WP, &scratch);
|
||||
|
@ -24,6 +24,7 @@
|
||||
#include <linux/gpio/consumer.h>
|
||||
#include <linux/ktime.h>
|
||||
|
||||
#include "sdhci-cqhci.h"
|
||||
#include "sdhci-pltfm.h"
|
||||
#include "cqhci.h"
|
||||
|
||||
@ -347,7 +348,7 @@ static void tegra_sdhci_reset(struct sdhci_host *host, u8 mask)
|
||||
const struct sdhci_tegra_soc_data *soc_data = tegra_host->soc_data;
|
||||
u32 misc_ctrl, clk_ctrl, pad_ctrl;
|
||||
|
||||
sdhci_reset(host, mask);
|
||||
sdhci_and_cqhci_reset(host, mask);
|
||||
|
||||
if (!(mask & SDHCI_RESET_ALL))
|
||||
return;
|
||||
|
@@ -113,7 +113,7 @@
 #define ERASE_OPCODE_SHIFT 8
 #define ERASE_OPCODE_MASK (0xff << ERASE_OPCODE_SHIFT)
 #define ERASE_64K_OPCODE_SHIFT 16
-#define ERASE_64K_OPCODE_MASK (0xff << ERASE_OPCODE_SHIFT)
+#define ERASE_64K_OPCODE_MASK (0xff << ERASE_64K_OPCODE_SHIFT)
 
 #define INTEL_SPI_TIMEOUT 5000 /* ms */
 #define INTEL_SPI_FIFO_SZ 64
@ -1004,8 +1004,10 @@ static int xgene_enet_open(struct net_device *ndev)
|
||||
|
||||
xgene_enet_napi_enable(pdata);
|
||||
ret = xgene_enet_register_irq(ndev);
|
||||
if (ret)
|
||||
if (ret) {
|
||||
xgene_enet_napi_disable(pdata);
|
||||
return ret;
|
||||
}
|
||||
|
||||
if (ndev->phydev) {
|
||||
phy_start(ndev->phydev);
|
||||
|
@ -1564,7 +1564,6 @@ void bgmac_enet_remove(struct bgmac *bgmac)
|
||||
phy_disconnect(bgmac->net_dev->phydev);
|
||||
netif_napi_del(&bgmac->napi);
|
||||
bgmac_dma_free(bgmac);
|
||||
free_netdev(bgmac->net_dev);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(bgmac_enet_remove);
|
||||
|
||||
|
@ -11182,8 +11182,8 @@ static int bnxt_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
|
||||
rcu_read_lock();
|
||||
hlist_for_each_entry_rcu(fltr, head, hash) {
|
||||
if (bnxt_fltr_match(fltr, new_fltr)) {
|
||||
rc = fltr->sw_id;
|
||||
rcu_read_unlock();
|
||||
rc = 0;
|
||||
goto err_free;
|
||||
}
|
||||
}
|
||||
@ -12232,8 +12232,16 @@ static struct pci_driver bnxt_pci_driver = {
|
||||
|
||||
static int __init bnxt_init(void)
|
||||
{
|
||||
int err;
|
||||
|
||||
bnxt_debug_init();
|
||||
return pci_register_driver(&bnxt_pci_driver);
|
||||
err = pci_register_driver(&bnxt_pci_driver);
|
||||
if (err) {
|
||||
bnxt_debug_exit();
|
||||
return err;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void __exit bnxt_exit(void)
|
||||
|
@ -124,7 +124,7 @@ static int bnxt_set_coalesce(struct net_device *dev,
|
||||
}
|
||||
|
||||
reset_coalesce:
|
||||
if (netif_running(dev)) {
|
||||
if (test_bit(BNXT_STATE_OPEN, &bp->state)) {
|
||||
if (update_stats) {
|
||||
rc = bnxt_close_nic(bp, true, false);
|
||||
if (!rc)
|
||||
|
@ -1836,13 +1836,10 @@ static int liquidio_open(struct net_device *netdev)
|
||||
|
||||
ifstate_set(lio, LIO_IFSTATE_RUNNING);
|
||||
|
||||
if (OCTEON_CN23XX_PF(oct)) {
|
||||
if (!oct->msix_on)
|
||||
if (setup_tx_poll_fn(netdev))
|
||||
return -1;
|
||||
} else {
|
||||
if (setup_tx_poll_fn(netdev))
|
||||
return -1;
|
||||
if (!OCTEON_CN23XX_PF(oct) || (OCTEON_CN23XX_PF(oct) && !oct->msix_on)) {
|
||||
ret = setup_tx_poll_fn(netdev);
|
||||
if (ret)
|
||||
goto err_poll;
|
||||
}
|
||||
|
||||
netif_tx_start_all_queues(netdev);
|
||||
@ -1855,7 +1852,7 @@ static int liquidio_open(struct net_device *netdev)
|
||||
/* tell Octeon to start forwarding packets to host */
|
||||
ret = send_rx_ctrl_cmd(lio, 1);
|
||||
if (ret)
|
||||
return ret;
|
||||
goto err_rx_ctrl;
|
||||
|
||||
/* start periodical statistics fetch */
|
||||
INIT_DELAYED_WORK(&lio->stats_wk.work, lio_fetch_stats);
|
||||
@ -1866,6 +1863,27 @@ static int liquidio_open(struct net_device *netdev)
|
||||
dev_info(&oct->pci_dev->dev, "%s interface is opened\n",
|
||||
netdev->name);
|
||||
|
||||
return 0;
|
||||
|
||||
err_rx_ctrl:
|
||||
if (!OCTEON_CN23XX_PF(oct) || (OCTEON_CN23XX_PF(oct) && !oct->msix_on))
|
||||
cleanup_tx_poll_fn(netdev);
|
||||
err_poll:
|
||||
if (lio->ptp_clock) {
|
||||
ptp_clock_unregister(lio->ptp_clock);
|
||||
lio->ptp_clock = NULL;
|
||||
}
|
||||
|
||||
if (oct->props[lio->ifidx].napi_enabled == 1) {
|
||||
list_for_each_entry_safe(napi, n, &netdev->napi_list, dev_list)
|
||||
napi_disable(napi);
|
||||
|
||||
oct->props[lio->ifidx].napi_enabled = 0;
|
||||
|
||||
if (OCTEON_CN23XX_PF(oct))
|
||||
oct->droq[0]->ops.poll_mode = 0;
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -1303,6 +1303,7 @@ static int cxgb_up(struct adapter *adap)
|
||||
if (ret < 0) {
|
||||
CH_ERR(adap, "failed to bind qsets, err %d\n", ret);
|
||||
t3_intr_disable(adap);
|
||||
quiesce_rx(adap);
|
||||
free_irq_resources(adap);
|
||||
err = ret;
|
||||
goto out;
|
||||
|
@ -860,7 +860,7 @@ static int cxgb4vf_open(struct net_device *dev)
|
||||
*/
|
||||
err = t4vf_update_port_info(pi);
|
||||
if (err < 0)
|
||||
return err;
|
||||
goto err_unwind;
|
||||
|
||||
/*
|
||||
* Note that this interface is up and start everything up ...
|
||||
|
@ -885,12 +885,21 @@ static int mac_probe(struct platform_device *_of_dev)
|
||||
return err;
|
||||
}
|
||||
|
||||
static int mac_remove(struct platform_device *pdev)
|
||||
{
|
||||
struct mac_device *mac_dev = platform_get_drvdata(pdev);
|
||||
|
||||
platform_device_unregister(mac_dev->priv->eth_dev);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static struct platform_driver mac_driver = {
|
||||
.driver = {
|
||||
.name = KBUILD_MODNAME,
|
||||
.of_match_table = mac_match,
|
||||
},
|
||||
.probe = mac_probe,
|
||||
.remove = mac_remove,
|
||||
};
|
||||
|
||||
builtin_platform_driver(mac_driver);
|
||||
|
@ -2476,6 +2476,7 @@ static int mv643xx_eth_open(struct net_device *dev)
|
||||
for (i = 0; i < mp->rxq_count; i++)
|
||||
rxq_deinit(mp->rxq + i);
|
||||
out:
|
||||
napi_disable(&mp->napi);
|
||||
free_irq(dev->irq, dev);
|
||||
|
||||
return err;
|
||||
|
@ -1682,12 +1682,17 @@ void mlx5_cmd_flush(struct mlx5_core_dev *dev)
|
||||
struct mlx5_cmd *cmd = &dev->cmd;
|
||||
int i;
|
||||
|
||||
for (i = 0; i < cmd->max_reg_cmds; i++)
|
||||
while (down_trylock(&cmd->sem))
|
||||
for (i = 0; i < cmd->max_reg_cmds; i++) {
|
||||
while (down_trylock(&cmd->sem)) {
|
||||
mlx5_cmd_trigger_completions(dev);
|
||||
cond_resched();
|
||||
}
|
||||
}
|
||||
|
||||
while (down_trylock(&cmd->pages_sem))
|
||||
while (down_trylock(&cmd->pages_sem)) {
|
||||
mlx5_cmd_trigger_completions(dev);
|
||||
cond_resched();
|
||||
}
|
||||
|
||||
/* Unlock cmdif */
|
||||
up(&cmd->pages_sem);
|
||||
|
@ -7122,9 +7122,8 @@ static int s2io_card_up(struct s2io_nic *sp)
|
||||
if (ret) {
|
||||
DBG_PRINT(ERR_DBG, "%s: Out of memory in Open\n",
|
||||
dev->name);
|
||||
s2io_reset(sp);
|
||||
free_rx_buffers(sp);
|
||||
return -ENOMEM;
|
||||
ret = -ENOMEM;
|
||||
goto err_fill_buff;
|
||||
}
|
||||
DBG_PRINT(INFO_DBG, "Buf in ring:%d is %d:\n", i,
|
||||
ring->rx_bufs_left);
|
||||
@ -7162,18 +7161,16 @@ static int s2io_card_up(struct s2io_nic *sp)
|
||||
/* Enable Rx Traffic and interrupts on the NIC */
|
||||
if (start_nic(sp)) {
|
||||
DBG_PRINT(ERR_DBG, "%s: Starting NIC failed\n", dev->name);
|
||||
s2io_reset(sp);
|
||||
free_rx_buffers(sp);
|
||||
return -ENODEV;
|
||||
ret = -ENODEV;
|
||||
goto err_out;
|
||||
}
|
||||
|
||||
/* Add interrupt service routine */
|
||||
if (s2io_add_isr(sp) != 0) {
|
||||
if (sp->config.intr_type == MSI_X)
|
||||
s2io_rem_isr(sp);
|
||||
s2io_reset(sp);
|
||||
free_rx_buffers(sp);
|
||||
return -ENODEV;
|
||||
ret = -ENODEV;
|
||||
goto err_out;
|
||||
}
|
||||
|
||||
timer_setup(&sp->alarm_timer, s2io_alarm_handle, 0);
|
||||
@ -7193,6 +7190,20 @@ static int s2io_card_up(struct s2io_nic *sp)
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
||||
err_out:
|
||||
if (config->napi) {
|
||||
if (config->intr_type == MSI_X) {
|
||||
for (i = 0; i < sp->config.rx_ring_num; i++)
|
||||
napi_disable(&sp->mac_control.rings[i].napi);
|
||||
} else {
|
||||
napi_disable(&sp->napi);
|
||||
}
|
||||
}
|
||||
err_fill_buff:
|
||||
s2io_reset(sp);
|
||||
free_rx_buffers(sp);
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -899,6 +899,7 @@ static int nixge_open(struct net_device *ndev)
|
||||
err_rx_irq:
|
||||
free_irq(priv->tx_irq, ndev);
|
||||
err_tx_irq:
|
||||
napi_disable(&priv->napi);
|
||||
phy_stop(phy);
|
||||
phy_disconnect(phy);
|
||||
tasklet_kill(&priv->dma_err_tasklet);
|
||||
|
@ -1753,6 +1753,8 @@ static int cpsw_ndo_open(struct net_device *ndev)
|
||||
|
||||
err_cleanup:
|
||||
if (!cpsw->usage_count) {
|
||||
napi_disable(&cpsw->napi_rx);
|
||||
napi_disable(&cpsw->napi_tx);
|
||||
cpdma_ctlr_stop(cpsw->dma);
|
||||
cpsw_destroy_xdp_rxqs(cpsw);
|
||||
}
|
||||
|
@ -1302,12 +1302,15 @@ static int tsi108_open(struct net_device *dev)
|
||||
|
||||
data->rxring = dma_alloc_coherent(&data->pdev->dev, rxring_size,
|
||||
&data->rxdma, GFP_KERNEL);
|
||||
if (!data->rxring)
|
||||
if (!data->rxring) {
|
||||
free_irq(data->irq_num, dev);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
data->txring = dma_alloc_coherent(&data->pdev->dev, txring_size,
|
||||
&data->txdma, GFP_KERNEL);
|
||||
if (!data->txring) {
|
||||
free_irq(data->irq_num, dev);
|
||||
dma_free_coherent(&data->pdev->dev, rxring_size, data->rxring,
|
||||
data->rxdma);
|
||||
return -ENOMEM;
|
||||
|
@ -511,7 +511,7 @@ static int bpq_device_event(struct notifier_block *this,
|
||||
if (!net_eq(dev_net(dev), &init_net))
|
||||
return NOTIFY_DONE;
|
||||
|
||||
if (!dev_is_ethdev(dev))
|
||||
if (!dev_is_ethdev(dev) && !bpq_get_ax25_dev(dev))
|
||||
return NOTIFY_DONE;
|
||||
|
||||
switch (event) {
|
||||
|
@ -138,7 +138,7 @@ static struct macvlan_source_entry *macvlan_hash_lookup_source(
|
||||
u32 idx = macvlan_eth_hash(addr);
|
||||
struct hlist_head *h = &vlan->port->vlan_source_hash[idx];
|
||||
|
||||
hlist_for_each_entry_rcu(entry, h, hlist) {
|
||||
hlist_for_each_entry_rcu(entry, h, hlist, lockdep_rtnl_is_held()) {
|
||||
if (ether_addr_equal_64bits(entry->addr, addr) &&
|
||||
entry->vlan == vlan)
|
||||
return entry;
|
||||
@ -1166,7 +1166,7 @@ void macvlan_common_setup(struct net_device *dev)
|
||||
{
|
||||
ether_setup(dev);
|
||||
|
||||
dev->min_mtu = 0;
|
||||
/* ether_setup() has set dev->min_mtu to ETH_MIN_MTU. */
|
||||
dev->max_mtu = ETH_MAX_MTU;
|
||||
dev->priv_flags &= ~IFF_TX_SKB_SHARING;
|
||||
netif_keep_dst(dev);
|
||||
@ -1499,8 +1499,10 @@ int macvlan_common_newlink(struct net *src_net, struct net_device *dev,
|
||||
/* the macvlan port may be freed by macvlan_uninit when fail to register.
|
||||
* so we destroy the macvlan port only when it's valid.
|
||||
*/
|
||||
if (create && macvlan_port_get_rtnl(lowerdev))
|
||||
if (create && macvlan_port_get_rtnl(lowerdev)) {
|
||||
macvlan_flush_sources(port, vlan);
|
||||
macvlan_port_destroy(port->dev);
|
||||
}
|
||||
return err;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(macvlan_common_newlink);
|
||||
@ -1602,7 +1604,7 @@ static int macvlan_fill_info_macaddr(struct sk_buff *skb,
|
||||
struct hlist_head *h = &vlan->port->vlan_source_hash[i];
|
||||
struct macvlan_source_entry *entry;
|
||||
|
||||
hlist_for_each_entry_rcu(entry, h, hlist) {
|
||||
hlist_for_each_entry_rcu(entry, h, hlist, lockdep_rtnl_is_held()) {
|
||||
if (entry->vlan != vlan)
|
||||
continue;
|
||||
if (nla_put(skb, IFLA_MACVLAN_MACADDR, ETH_ALEN, entry->addr))
|
||||
|
@ -1339,12 +1339,21 @@ static int __init tbnet_init(void)
|
||||
TBNET_MATCH_FRAGS_ID);
|
||||
|
||||
ret = tb_register_property_dir("network", tbnet_dir);
|
||||
if (ret) {
|
||||
tb_property_free_dir(tbnet_dir);
|
||||
return ret;
|
||||
}
|
||||
if (ret)
|
||||
goto err_free_dir;
|
||||
|
||||
return tb_register_service_driver(&tbnet_driver);
|
||||
ret = tb_register_service_driver(&tbnet_driver);
|
||||
if (ret)
|
||||
goto err_unregister;
|
||||
|
||||
return 0;
|
||||
|
||||
err_unregister:
|
||||
tb_unregister_property_dir("network", tbnet_dir);
|
||||
err_free_dir:
|
||||
tb_property_free_dir(tbnet_dir);
|
||||
|
||||
return ret;
|
||||
}
|
||||
module_init(tbnet_init);
|
||||
|
||||
|
@ -2002,17 +2002,25 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
|
||||
skb_headlen(skb));
|
||||
|
||||
if (unlikely(headlen > skb_headlen(skb))) {
|
||||
WARN_ON_ONCE(1);
|
||||
err = -ENOMEM;
|
||||
this_cpu_inc(tun->pcpu_stats->rx_dropped);
|
||||
napi_busy:
|
||||
napi_free_frags(&tfile->napi);
|
||||
rcu_read_unlock();
|
||||
mutex_unlock(&tfile->napi_mutex);
|
||||
WARN_ON(1);
|
||||
return -ENOMEM;
|
||||
return err;
|
||||
}
|
||||
|
||||
local_bh_disable();
|
||||
napi_gro_frags(&tfile->napi);
|
||||
local_bh_enable();
|
||||
if (likely(napi_schedule_prep(&tfile->napi))) {
|
||||
local_bh_disable();
|
||||
napi_gro_frags(&tfile->napi);
|
||||
napi_complete(&tfile->napi);
|
||||
local_bh_enable();
|
||||
} else {
|
||||
err = -EBUSY;
|
||||
goto napi_busy;
|
||||
}
|
||||
mutex_unlock(&tfile->napi_mutex);
|
||||
} else if (tfile->napi_enabled) {
|
||||
struct sk_buff_head *queue = &tfile->sk.sk_write_queue;
|
||||
|
@ -403,7 +403,7 @@ static int lapbeth_device_event(struct notifier_block *this,
|
||||
if (dev_net(dev) != &init_net)
|
||||
return NOTIFY_DONE;
|
||||
|
||||
if (!dev_is_ethdev(dev))
|
||||
if (!dev_is_ethdev(dev) && !lapbeth_get_x25_dev(dev))
|
||||
return NOTIFY_DONE;
|
||||
|
||||
switch (event) {
|
||||
|
@ -475,7 +475,7 @@ static size_t parport_pc_fifo_write_block_pio(struct parport *port,
|
||||
const unsigned char *bufp = buf;
|
||||
size_t left = length;
|
||||
unsigned long expire = jiffies + port->physport->cad->timeout;
|
||||
const int fifo = FIFO(port);
|
||||
const unsigned long fifo = FIFO(port);
|
||||
int poll_for = 8; /* 80 usecs */
|
||||
const struct parport_pc_private *priv = port->physport->private_data;
|
||||
const int fifo_depth = priv->fifo_depth;
|
||||
|
@ -393,6 +393,8 @@ static int stm32_usbphyc_probe(struct platform_device *pdev)
|
||||
ret = of_property_read_u32(child, "reg", &index);
|
||||
if (ret || index > usbphyc->nphys) {
|
||||
dev_err(&phy->dev, "invalid reg property: %d\n", ret);
|
||||
if (!ret)
|
||||
ret = -EINVAL;
|
||||
goto put_child;
|
||||
}
|
||||
|
||||
|
@ -223,6 +223,8 @@ int pinctrl_dt_to_map(struct pinctrl *p, struct pinctrl_dev *pctldev)
|
||||
for (state = 0; ; state++) {
|
||||
/* Retrieve the pinctrl-* property */
|
||||
propname = kasprintf(GFP_KERNEL, "pinctrl-%d", state);
|
||||
if (!propname)
|
||||
return -ENOMEM;
|
||||
prop = of_find_property(np, propname, &size);
|
||||
kfree(propname);
|
||||
if (!prop) {
|
||||
|
@ -880,8 +880,16 @@ static int __init hp_wmi_bios_setup(struct platform_device *device)
|
||||
wwan_rfkill = NULL;
|
||||
rfkill2_count = 0;
|
||||
|
||||
if (hp_wmi_rfkill_setup(device))
|
||||
hp_wmi_rfkill2_setup(device);
|
||||
/*
|
||||
* In pre-2009 BIOS, command 1Bh return 0x4 to indicate that
|
||||
* BIOS no longer controls the power for the wireless
|
||||
* devices. All features supported by this command will no
|
||||
* longer be supported.
|
||||
*/
|
||||
if (!hp_wmi_bios_2009_later()) {
|
||||
if (hp_wmi_rfkill_setup(device))
|
||||
hp_wmi_rfkill2_setup(device);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -18,6 +18,8 @@
|
||||
#include <asm/cpu_device_id.h>
|
||||
#include <asm/intel-family.h>
|
||||
|
||||
#include <xen/xen.h>
|
||||
|
||||
static void intel_pmc_core_release(struct device *dev)
|
||||
{
|
||||
/* Nothing to do. */
|
||||
@ -56,6 +58,13 @@ static int __init pmc_core_platform_init(void)
|
||||
if (acpi_dev_present("INT33A1", NULL, -1))
|
||||
return -ENODEV;
|
||||
|
||||
/*
|
||||
* Skip forcefully attaching the device for VMs. Make an exception for
|
||||
* Xen dom0, which does have full hardware access.
|
||||
*/
|
||||
if (cpu_feature_enabled(X86_FEATURE_HYPERVISOR) && !xen_initial_domain())
|
||||
return -ENODEV;
|
||||
|
||||
if (!x86_match_cpu(intel_pmc_core_platform_ids))
|
||||
return -ENODEV;
|
||||
|
||||
|
@ -755,7 +755,7 @@ static int zfcp_fsf_req_send(struct zfcp_fsf_req *req)
|
||||
const bool is_srb = zfcp_fsf_req_is_status_read_buffer(req);
|
||||
struct zfcp_adapter *adapter = req->adapter;
|
||||
struct zfcp_qdio *qdio = adapter->qdio;
|
||||
int req_id = req->req_id;
|
||||
unsigned long req_id = req->req_id;
|
||||
|
||||
zfcp_reqlist_add(adapter->req_list, req);
|
||||
|
||||
|
@ -835,6 +835,8 @@ static struct siox_device *siox_device_add(struct siox_master *smaster,
|
||||
|
||||
err_device_register:
|
||||
/* don't care to make the buffer smaller again */
|
||||
put_device(&sdevice->dev);
|
||||
sdevice = NULL;
|
||||
|
||||
err_buf_alloc:
|
||||
siox_master_unlock(smaster);
|
||||
|
@@ -67,10 +67,10 @@ static const int slim_presence_rate_table[] = {
 	384000,
 	768000,
 	0, /* Reserved */
-	110250,
-	220500,
-	441000,
-	882000,
+	11025,
+	22050,
+	44100,
+	88200,
 	176400,
 	352800,
 	705600,
@ -937,6 +937,7 @@ static irqreturn_t stm32h7_spi_irq_thread(int irq, void *dev_id)
|
||||
static DEFINE_RATELIMIT_STATE(rs,
|
||||
DEFAULT_RATELIMIT_INTERVAL * 10,
|
||||
1);
|
||||
ratelimit_set_flags(&rs, RATELIMIT_MSG_ON_RELEASE);
|
||||
if (__ratelimit(&rs))
|
||||
dev_dbg_ratelimited(spi->dev, "Communication suspended\n");
|
||||
if (!spi->cur_usedma && (spi->rx_buf && (spi->rx_len > 0)))
|
||||
|
@ -1781,7 +1781,7 @@ static void speakup_con_update(struct vc_data *vc)
|
||||
{
|
||||
unsigned long flags;
|
||||
|
||||
if (!speakup_console[vc->vc_num] || spk_parked)
|
||||
if (!speakup_console[vc->vc_num] || spk_parked || !synth)
|
||||
return;
|
||||
if (!spin_trylock_irqsave(&speakup_info.spinlock, flags))
|
||||
/* Speakup output, discard */
|
||||
|
@ -394,6 +394,7 @@ static int tcm_loop_setup_hba_bus(struct tcm_loop_hba *tl_hba, int tcm_loop_host
|
||||
ret = device_register(&tl_hba->dev);
|
||||
if (ret) {
|
||||
pr_err("device_register() failed for tl_hba->dev: %d\n", ret);
|
||||
put_device(&tl_hba->dev);
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
@ -1072,7 +1073,7 @@ static struct se_wwn *tcm_loop_make_scsi_hba(
|
||||
*/
|
||||
ret = tcm_loop_setup_hba_bus(tl_hba, tcm_loop_hba_no_cnt);
|
||||
if (ret)
|
||||
goto out;
|
||||
return ERR_PTR(ret);
|
||||
|
||||
sh = tl_hba->sh;
|
||||
tcm_loop_hba_no_cnt++;
|
||||
|
@@ -1413,7 +1413,7 @@ static struct gsm_control *gsm_control_send(struct gsm_mux *gsm,
        unsigned int command, u8 *data, int clen)
{
    struct gsm_control *ctrl = kzalloc(sizeof(struct gsm_control),
                        GFP_KERNEL);
                        GFP_ATOMIC);
    unsigned long flags;
    if (ctrl == NULL)
        return NULL;
@@ -258,8 +258,13 @@ static int lpss8250_dma_setup(struct lpss8250 *lpss, struct uart_8250_port *port
    struct dw_dma_slave *rx_param, *tx_param;
    struct device *dev = port->port.dev;

    if (!lpss->dma_param.dma_dev)
    if (!lpss->dma_param.dma_dev) {
        dma = port->dma;
        if (dma)
            goto out_configuration_only;

        return 0;
    }

    rx_param = devm_kzalloc(dev, sizeof(*rx_param), GFP_KERNEL);
    if (!rx_param)
@@ -270,16 +275,18 @@ static int lpss8250_dma_setup(struct lpss8250 *lpss, struct uart_8250_port *port
        return -ENOMEM;

    *rx_param = lpss->dma_param;
    dma->rxconf.src_maxburst = lpss->dma_maxburst;

    *tx_param = lpss->dma_param;
    dma->txconf.dst_maxburst = lpss->dma_maxburst;

    dma->fn = lpss8250_dma_filter;
    dma->rx_param = rx_param;
    dma->tx_param = tx_param;

    port->dma = dma;

out_configuration_only:
    dma->rxconf.src_maxburst = lpss->dma_maxburst;
    dma->txconf.dst_maxburst = lpss->dma_maxburst;

    return 0;
}
@@ -169,27 +169,10 @@ static void omap8250_set_mctrl(struct uart_port *port, unsigned int mctrl)
static void omap_8250_mdr1_errataset(struct uart_8250_port *up,
                     struct omap8250_priv *priv)
{
    u8 timeout = 255;

    serial_out(up, UART_OMAP_MDR1, priv->mdr1);
    udelay(2);
    serial_out(up, UART_FCR, up->fcr | UART_FCR_CLEAR_XMIT |
            UART_FCR_CLEAR_RCVR);
    /*
     * Wait for FIFO to empty: when empty, RX_FIFO_E bit is 0 and
     * TX_FIFO_E bit is 1.
     */
    while (UART_LSR_THRE != (serial_in(up, UART_LSR) &
                (UART_LSR_THRE | UART_LSR_DR))) {
        timeout--;
        if (!timeout) {
            /* Should *never* happen. we warn and carry on */
            dev_crit(up->port.dev, "Errata i202: timedout %x\n",
                 serial_in(up, UART_LSR));
            break;
        }
        udelay(1);
    }
}

static void omap_8250_get_divisor(struct uart_port *port, unsigned int baud,
@@ -1290,9 +1273,15 @@ static int omap8250_probe(struct platform_device *pdev)
static int omap8250_remove(struct platform_device *pdev)
{
    struct omap8250_priv *priv = platform_get_drvdata(pdev);
    int err;

    err = pm_runtime_resume_and_get(&pdev->dev);
    if (err)
        return err;

    pm_runtime_dont_use_autosuspend(&pdev->dev);
    pm_runtime_put_sync(&pdev->dev);
    flush_work(&priv->qos_work);
    pm_runtime_disable(&pdev->dev);
    serial8250_unregister_port(priv->line);
    pm_qos_remove_request(&priv->pm_qos_request);
@@ -1820,10 +1820,13 @@ EXPORT_SYMBOL_GPL(serial8250_modem_status);
static bool handle_rx_dma(struct uart_8250_port *up, unsigned int iir)
{
    switch (iir & 0x3f) {
    case UART_IIR_RDI:
        if (!up->dma->rx_running)
            break;
        fallthrough;
    case UART_IIR_RLSI:
    case UART_IIR_RX_TIMEOUT:
        serial8250_rx_dma_flush(up);
        /* fall-through */
    case UART_IIR_RLSI:
        return true;
    }
    return up->dma->rx_dma(up);
@@ -2551,6 +2551,7 @@ static const struct dev_pm_ops imx_uart_pm_ops = {
    .suspend_noirq = imx_uart_suspend_noirq,
    .resume_noirq = imx_uart_resume_noirq,
    .freeze_noirq = imx_uart_suspend_noirq,
    .thaw_noirq = imx_uart_resume_noirq,
    .restore_noirq = imx_uart_resume_noirq,
    .suspend = imx_uart_suspend,
    .resume = imx_uart_resume,
@@ -256,8 +256,10 @@ static void ci_otg_del_timer(struct ci_hdrc *ci, enum otg_fsm_timer t)
    ci->enabled_otg_timer_bits &= ~(1 << t);
    if (ci->next_otg_timer == t) {
        if (ci->enabled_otg_timer_bits == 0) {
            spin_unlock_irqrestore(&ci->lock, flags);
            /* No enabled timers after delete it */
            hrtimer_cancel(&ci->otg_fsm_hrtimer);
            spin_lock_irqsave(&ci->lock, flags);
            ci->next_otg_timer = NUM_OTG_FSM_TIMERS;
        } else {
            /* Find the next timer */
@@ -362,6 +362,9 @@ static const struct usb_device_id usb_quirk_list[] = {
    { USB_DEVICE(0x0781, 0x5583), .driver_info = USB_QUIRK_NO_LPM },
    { USB_DEVICE(0x0781, 0x5591), .driver_info = USB_QUIRK_NO_LPM },

    /* Realforce 87U Keyboard */
    { USB_DEVICE(0x0853, 0x011b), .driver_info = USB_QUIRK_NO_LPM },

    /* M-Systems Flash Disk Pioneers */
    { USB_DEVICE(0x08ec, 0x1000), .driver_info = USB_QUIRK_RESET_RESUME },
@@ -9,13 +9,8 @@

#include <linux/platform_device.h>

#include "../host/xhci-plat.h"
#include "core.h"

static const struct xhci_plat_priv dwc3_xhci_plat_priv = {
    .quirks = XHCI_SKIP_PHY_INIT,
};

static int dwc3_host_get_irq(struct dwc3 *dwc)
{
    struct platform_device *dwc3_pdev = to_platform_device(dwc->dev);
@@ -90,11 +85,6 @@ int dwc3_host_init(struct dwc3 *dwc)
        goto err;
    }

    ret = platform_device_add_data(xhci, &dwc3_xhci_plat_priv,
                    sizeof(dwc3_xhci_plat_priv));
    if (ret)
        goto err;

    memset(props, 0, sizeof(struct property_entry) * ARRAY_SIZE(props));

    if (dwc->usb3_lpm_capable)
@@ -162,6 +162,8 @@ static void option_instat_callback(struct urb *urb);
#define NOVATELWIRELESS_PRODUCT_G2 0xA010
#define NOVATELWIRELESS_PRODUCT_MC551 0xB001

#define UBLOX_VENDOR_ID 0x1546

/* AMOI PRODUCTS */
#define AMOI_VENDOR_ID 0x1614
#define AMOI_PRODUCT_H01 0x0800
@@ -240,7 +242,6 @@ static void option_instat_callback(struct urb *urb);
#define QUECTEL_PRODUCT_UC15 0x9090
/* These u-blox products use Qualcomm's vendor ID */
#define UBLOX_PRODUCT_R410M 0x90b2
#define UBLOX_PRODUCT_R6XX 0x90fa
/* These Yuga products use Qualcomm's vendor ID */
#define YUGA_PRODUCT_CLM920_NC5 0x9625

@@ -581,6 +582,9 @@ static void option_instat_callback(struct urb *urb);
#define OPPO_VENDOR_ID 0x22d9
#define OPPO_PRODUCT_R11 0x276c

/* Sierra Wireless products */
#define SIERRA_VENDOR_ID 0x1199
#define SIERRA_PRODUCT_EM9191 0x90d3

/* Device flags */

@@ -1124,8 +1128,16 @@ static const struct usb_device_id option_ids[] = {
    /* u-blox products using Qualcomm vendor ID */
    { USB_DEVICE(QUALCOMM_VENDOR_ID, UBLOX_PRODUCT_R410M),
      .driver_info = RSVD(1) | RSVD(3) },
    { USB_DEVICE(QUALCOMM_VENDOR_ID, UBLOX_PRODUCT_R6XX),
    { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x908b), /* u-blox LARA-R6 00B */
      .driver_info = RSVD(4) },
    { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x90fa),
      .driver_info = RSVD(3) },
    /* u-blox products */
    { USB_DEVICE(UBLOX_VENDOR_ID, 0x1341) }, /* u-blox LARA-L6 */
    { USB_DEVICE(UBLOX_VENDOR_ID, 0x1342), /* u-blox LARA-L6 (RMNET) */
      .driver_info = RSVD(4) },
    { USB_DEVICE(UBLOX_VENDOR_ID, 0x1343), /* u-blox LARA-L6 (ECM) */
      .driver_info = RSVD(4) },
    /* Quectel products using Quectel vendor ID */
    { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC21, 0xff, 0xff, 0xff),
      .driver_info = NUMEP2 },
@@ -2167,6 +2179,7 @@ static const struct usb_device_id option_ids[] = {
    { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x010a, 0xff) }, /* Fibocom MA510 (ECM mode) */
    { USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0xff, 0x30) }, /* Fibocom FG150 Diag */
    { USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0, 0) }, /* Fibocom FG150 AT */
    { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0111, 0xff) }, /* Fibocom FM160 (MBIM mode) */
    { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a0, 0xff) }, /* Fibocom NL668-AM/NL652-EU (laptop MBIM) */
    { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a2, 0xff) }, /* Fibocom FM101-GL (laptop MBIM) */
    { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a4, 0xff), /* Fibocom FM101-GL (laptop MBIM) */
@@ -2176,6 +2189,8 @@ static const struct usb_device_id option_ids[] = {
    { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1405, 0xff) }, /* GosunCn GM500 MBIM */
    { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1406, 0xff) }, /* GosunCn GM500 ECM/NCM */
    { USB_DEVICE_AND_INTERFACE_INFO(OPPO_VENDOR_ID, OPPO_PRODUCT_R11, 0xff, 0xff, 0x30) },
    { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x30) },
    { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0, 0) },
    { } /* Terminating entry */
};
MODULE_DEVICE_TABLE(usb, option_ids);
@@ -228,7 +228,7 @@ static int register_pcpu(struct pcpu *pcpu)

    err = device_register(dev);
    if (err) {
        pcpu_release(dev);
        put_device(dev);
        return err;
    }
@@ -189,7 +189,7 @@ void btrfs_free_dummy_fs_info(struct btrfs_fs_info *fs_info)

void btrfs_free_dummy_root(struct btrfs_root *root)
{
    if (!root)
    if (IS_ERR_OR_NULL(root))
        return;
    /* Will be freed by btrfs_free_fs_roots */
    if (WARN_ON(test_bit(BTRFS_ROOT_IN_RADIX, &root->state)))
@@ -230,7 +230,6 @@ static int test_no_shared_qgroup(struct btrfs_root *root,
    ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots,
            false);
    if (ret) {
        ulist_free(old_roots);
        test_err("couldn't find old roots: %d", ret);
        return ret;
    }
@@ -246,7 +245,6 @@ static int test_no_shared_qgroup(struct btrfs_root *root,
            false);
    if (ret) {
        ulist_free(old_roots);
        ulist_free(new_roots);
        test_err("couldn't find old roots: %d", ret);
        return ret;
    }
@@ -258,18 +256,19 @@ static int test_no_shared_qgroup(struct btrfs_root *root,
        return ret;
    }

    /* btrfs_qgroup_account_extent() always frees the ulists passed to it. */
    old_roots = NULL;
    new_roots = NULL;

    if (btrfs_verify_qgroup_counts(fs_info, BTRFS_FS_TREE_OBJECTID,
                nodesize, nodesize)) {
        test_err("qgroup counts didn't match expected values");
        return -EINVAL;
    }
    old_roots = NULL;
    new_roots = NULL;

    ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots,
            false);
    if (ret) {
        ulist_free(old_roots);
        test_err("couldn't find old roots: %d", ret);
        return ret;
    }
@@ -284,7 +283,6 @@ static int test_no_shared_qgroup(struct btrfs_root *root,
            false);
    if (ret) {
        ulist_free(old_roots);
        ulist_free(new_roots);
        test_err("couldn't find old roots: %d", ret);
        return ret;
    }
@@ -335,7 +333,6 @@ static int test_multiple_refs(struct btrfs_root *root,
    ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots,
            false);
    if (ret) {
        ulist_free(old_roots);
        test_err("couldn't find old roots: %d", ret);
        return ret;
    }
@@ -351,7 +348,6 @@ static int test_multiple_refs(struct btrfs_root *root,
            false);
    if (ret) {
        ulist_free(old_roots);
        ulist_free(new_roots);
        test_err("couldn't find old roots: %d", ret);
        return ret;
    }
@@ -372,7 +368,6 @@ static int test_multiple_refs(struct btrfs_root *root,
    ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots,
            false);
    if (ret) {
        ulist_free(old_roots);
        test_err("couldn't find old roots: %d", ret);
        return ret;
    }
@@ -388,7 +383,6 @@ static int test_multiple_refs(struct btrfs_root *root,
            false);
    if (ret) {
        ulist_free(old_roots);
        ulist_free(new_roots);
        test_err("couldn't find old roots: %d", ret);
        return ret;
    }
@@ -415,7 +409,6 @@ static int test_multiple_refs(struct btrfs_root *root,
    ret = btrfs_find_all_roots(&trans, fs_info, nodesize, 0, &old_roots,
            false);
    if (ret) {
        ulist_free(old_roots);
        test_err("couldn't find old roots: %d", ret);
        return ret;
    }
@@ -431,7 +424,6 @@ static int test_multiple_refs(struct btrfs_root *root,
            false);
    if (ret) {
        ulist_free(old_roots);
        ulist_free(new_roots);
        test_err("couldn't find old roots: %d", ret);
        return ret;
    }
@@ -2319,7 +2319,7 @@ int generic_cont_expand_simple(struct inode *inode, loff_t size)
{
    struct address_space *mapping = inode->i_mapping;
    struct page *page;
    void *fsdata;
    void *fsdata = NULL;
    int err;

    err = inode_newsize_ok(inode, size);
@@ -2345,7 +2345,7 @@ static int cont_expand_zero(struct file *file, struct address_space *mapping,
    struct inode *inode = mapping->host;
    unsigned int blocksize = i_blocksize(inode);
    struct page *page;
    void *fsdata;
    void *fsdata = NULL;
    pgoff_t index, curidx;
    loff_t curpos;
    unsigned zerofrom, offset, len;
@@ -191,7 +191,7 @@ long cifs_ioctl(struct file *filep, unsigned int command, unsigned long arg)
            rc = put_user(ExtAttrBits &
                    FS_FL_USER_VISIBLE,
                    (int __user *)arg);
            if (rc != EOPNOTSUPP)
            if (rc != -EOPNOTSUPP)
                break;
        }
#endif /* CONFIG_CIFS_POSIX */
@@ -220,7 +220,7 @@ long cifs_ioctl(struct file *filep, unsigned int command, unsigned long arg)
             *    pSMBFile->fid.netfid,
             *    extAttrBits,
             *    &ExtAttrMask);
             * if (rc != EOPNOTSUPP)
             * if (rc != -EOPNOTSUPP)
             *    break;
             */
@@ -1216,6 +1216,8 @@ smb2_set_ea(const unsigned int xid, struct cifs_tcon *tcon,
                COMPOUND_FID, current->tgid,
                FILE_FULL_EA_INFORMATION,
                SMB2_O_INFO_FILE, 0, data, size);
    if (rc)
        goto sea_exit;
    smb2_set_next_command(tcon, &rqst[1]);
    smb2_set_related(&rqst[1]);
@@ -180,7 +180,10 @@ static int gfs2_check_sb(struct gfs2_sbd *sdp, int silent)
        pr_warn("Invalid superblock size\n");
        return -EINVAL;
    }

    if (sb->sb_bsize_shift != ffs(sb->sb_bsize) - 1) {
        pr_warn("Invalid block size shift\n");
        return -EINVAL;
    }
    return 0;
}

@@ -377,8 +380,10 @@ static int init_names(struct gfs2_sbd *sdp, int silent)
    if (!table[0])
        table = sdp->sd_vfs->s_id;

    strlcpy(sdp->sd_proto_name, proto, GFS2_FSNAME_LEN);
    strlcpy(sdp->sd_table_name, table, GFS2_FSNAME_LEN);
    BUILD_BUG_ON(GFS2_LOCKNAME_LEN > GFS2_FSNAME_LEN);

    strscpy(sdp->sd_proto_name, proto, GFS2_LOCKNAME_LEN);
    strscpy(sdp->sd_table_name, table, GFS2_LOCKNAME_LEN);

    table = sdp->sd_table_name;
    while ((table = strchr(table, '/')))
@@ -1349,13 +1354,13 @@ static int gfs2_parse_param(struct fs_context *fc, struct fs_parameter *param)

    switch (o) {
    case Opt_lockproto:
        strlcpy(args->ar_lockproto, param->string, GFS2_LOCKNAME_LEN);
        strscpy(args->ar_lockproto, param->string, GFS2_LOCKNAME_LEN);
        break;
    case Opt_locktable:
        strlcpy(args->ar_locktable, param->string, GFS2_LOCKNAME_LEN);
        strscpy(args->ar_locktable, param->string, GFS2_LOCKNAME_LEN);
        break;
    case Opt_hostdata:
        strlcpy(args->ar_hostdata, param->string, GFS2_LOCKNAME_LEN);
        strscpy(args->ar_hostdata, param->string, GFS2_LOCKNAME_LEN);
        break;
    case Opt_spectator:
        args->ar_spectator = 1;
@@ -4903,7 +4903,7 @@ int __page_symlink(struct inode *inode, const char *symname, int len, int nofs)
{
    struct address_space *mapping = inode->i_mapping;
    struct page *page;
    void *fsdata;
    void *fsdata = NULL;
    int err;
    unsigned int flags = 0;
    if (nofs)
@@ -6854,6 +6854,7 @@ static void nfs4_lock_done(struct rpc_task *task, void *calldata)
{
    struct nfs4_lockdata *data = calldata;
    struct nfs4_lock_state *lsp = data->lsp;
    struct nfs_server *server = NFS_SERVER(d_inode(data->ctx->dentry));

    dprintk("%s: begin!\n", __func__);

@@ -6863,8 +6864,7 @@ static void nfs4_lock_done(struct rpc_task *task, void *calldata)
    data->rpc_status = task->tk_status;
    switch (task->tk_status) {
    case 0:
        renew_lease(NFS_SERVER(d_inode(data->ctx->dentry)),
                data->timestamp);
        renew_lease(server, data->timestamp);
        if (data->arg.new_lock && !data->cancelled) {
            data->fl.fl_flags &= ~(FL_SLEEP | FL_ACCESS);
            if (locks_lock_inode_wait(lsp->ls_state->inode, &data->fl) < 0)
@@ -6885,6 +6885,8 @@ static void nfs4_lock_done(struct rpc_task *task, void *calldata)
            if (!nfs4_stateid_match(&data->arg.open_stateid,
                        &lsp->ls_state->open_stateid))
                goto out_restart;
            else if (nfs4_async_handle_error(task, server, lsp->ls_state, NULL) == -EAGAIN)
                goto out_restart;
        } else if (!nfs4_stateid_match(&data->arg.lock_stateid,
                    &lsp->ls_stateid))
            goto out_restart;
@@ -322,7 +322,7 @@ void nilfs_relax_pressure_in_lock(struct super_block *sb)
    struct the_nilfs *nilfs = sb->s_fs_info;
    struct nilfs_sc_info *sci = nilfs->ns_writer;

    if (!sci || !sci->sc_flush_request)
    if (sb_rdonly(sb) || unlikely(!sci) || !sci->sc_flush_request)
        return;

    set_bit(NILFS_SC_PRIOR_FLUSH, &sci->sc_flags);
@@ -2243,7 +2243,7 @@ int nilfs_construct_segment(struct super_block *sb)
    struct nilfs_transaction_info *ti;
    int err;

    if (!sci)
    if (sb_rdonly(sb) || unlikely(!sci))
        return -EROFS;

    /* A call inside transactions causes a deadlock. */
@@ -2282,7 +2282,7 @@ int nilfs_construct_dsync_segment(struct super_block *sb, struct inode *inode,
    struct nilfs_transaction_info ti;
    int err = 0;

    if (!sci)
    if (sb_rdonly(sb) || unlikely(!sci))
        return -EROFS;

    nilfs_transaction_lock(sb, &ti, 0);
@@ -2778,11 +2778,12 @@ int nilfs_attach_log_writer(struct super_block *sb, struct nilfs_root *root)

    if (nilfs->ns_writer) {
        /*
         * This happens if the filesystem was remounted
         * read/write after nilfs_error degenerated it into a
         * read-only mount.
         * This happens if the filesystem is made read-only by
         * __nilfs_error or nilfs_remount and then remounted
         * read/write. In these cases, reuse the existing
         * writer.
         */
        nilfs_detach_log_writer(sb);
        return 0;
    }

    nilfs->ns_writer = nilfs_segctor_new(sb, root);
@@ -1131,8 +1131,6 @@ static int nilfs_remount(struct super_block *sb, int *flags, char *data)
    if ((bool)(*flags & SB_RDONLY) == sb_rdonly(sb))
        goto out;
    if (*flags & SB_RDONLY) {
        /* Shutting down log writer */
        nilfs_detach_log_writer(sb);
        sb->s_flags |= SB_RDONLY;

        /*
@@ -695,9 +695,7 @@ int nilfs_count_free_blocks(struct the_nilfs *nilfs, sector_t *nblocks)
{
    unsigned long ncleansegs;

    down_read(&NILFS_MDT(nilfs->ns_dat)->mi_sem);
    ncleansegs = nilfs_sufile_get_ncleansegs(nilfs->ns_sufile);
    up_read(&NILFS_MDT(nilfs->ns_dat)->mi_sem);
    *nblocks = (sector_t)ncleansegs * nilfs->ns_blocks_per_segment;
    return 0;
}
@@ -594,17 +594,37 @@ static int ntfs_attr_find(const ATTR_TYPE type, const ntfschar *name,
    for (;; a = (ATTR_RECORD*)((u8*)a + le32_to_cpu(a->length))) {
        u8 *mrec_end = (u8 *)ctx->mrec +
            le32_to_cpu(ctx->mrec->bytes_allocated);
        u8 *name_end = (u8 *)a + le16_to_cpu(a->name_offset) +
            a->name_length * sizeof(ntfschar);
        if ((u8*)a < (u8*)ctx->mrec || (u8*)a > mrec_end ||
            name_end > mrec_end)
        u8 *name_end;

        /* check whether ATTR_RECORD wrap */
        if ((u8 *)a < (u8 *)ctx->mrec)
            break;

        /* check whether Attribute Record Header is within bounds */
        if ((u8 *)a > mrec_end ||
            (u8 *)a + sizeof(ATTR_RECORD) > mrec_end)
            break;

        /* check whether ATTR_RECORD's name is within bounds */
        name_end = (u8 *)a + le16_to_cpu(a->name_offset) +
            a->name_length * sizeof(ntfschar);
        if (name_end > mrec_end)
            break;

        ctx->attr = a;
        if (unlikely(le32_to_cpu(a->type) > le32_to_cpu(type) ||
                a->type == AT_END))
            return -ENOENT;
        if (unlikely(!a->length))
            break;

        /* check whether ATTR_RECORD's length wrap */
        if ((u8 *)a + le32_to_cpu(a->length) < (u8 *)a)
            break;
        /* check whether ATTR_RECORD's length is within bounds */
        if ((u8 *)a + le32_to_cpu(a->length) > mrec_end)
            break;

        if (a->type != type)
            continue;
        /*
@@ -1829,6 +1829,13 @@ int ntfs_read_inode_mount(struct inode *vi)
        goto err_out;
    }

    /* Sanity check offset to the first attribute */
    if (le16_to_cpu(m->attrs_offset) >= le32_to_cpu(m->bytes_allocated)) {
        ntfs_error(sb, "Incorrect mft offset to the first attribute %u in superblock.",
                le16_to_cpu(m->attrs_offset));
        goto err_out;
    }

    /* Need this to sanity check attribute list references to $MFT. */
    vi->i_generation = ni->seq_no = le16_to_cpu(m->sequence_number);
@@ -241,7 +241,7 @@ static struct fileIdentDesc *udf_find_entry(struct inode *dir,
                    poffset - lfi);
        else {
            if (!copy_name) {
                copy_name = kmalloc(UDF_NAME_LEN,
                copy_name = kmalloc(UDF_NAME_LEN_CS0,
                            GFP_NOFS);
                if (!copy_name) {
                    fi = ERR_PTR(-ENOMEM);
@@ -158,17 +158,22 @@ static inline int xfs_bmapi_whichfork(int bmapi_flags)
    { BMAP_ATTRFORK, "ATTR" }, \
    { BMAP_COWFORK, "COW" }

/* Return true if the extent is an allocated extent, written or not. */
static inline bool xfs_bmap_is_real_extent(struct xfs_bmbt_irec *irec)
{
    return irec->br_startblock != HOLESTARTBLOCK &&
        irec->br_startblock != DELAYSTARTBLOCK &&
        !isnullstartblock(irec->br_startblock);
}

/*
 * Return true if the extent is a real, allocated extent, or false if it is a
 * delayed allocation, and unwritten extent or a hole.
 */
static inline bool xfs_bmap_is_real_extent(struct xfs_bmbt_irec *irec)
static inline bool xfs_bmap_is_written_extent(struct xfs_bmbt_irec *irec)
{
    return irec->br_state != XFS_EXT_UNWRITTEN &&
        irec->br_startblock != HOLESTARTBLOCK &&
        irec->br_startblock != DELAYSTARTBLOCK &&
        !isnullstartblock(irec->br_startblock);
    return xfs_bmap_is_real_extent(irec) &&
        irec->br_state != XFS_EXT_UNWRITTEN;
}

/*
@@ -70,7 +70,7 @@ xfs_rtbuf_get(
    if (error)
        return error;

    if (nmap == 0 || !xfs_bmap_is_real_extent(&map)) {
    if (nmap == 0 || !xfs_bmap_is_written_extent(&map)) {
        XFS_ERROR_REPORT(__func__, XFS_ERRLEVEL_LOW, mp);
        return -EFSCORRUPTED;
    }
@@ -65,6 +65,7 @@ void xfs_log_get_max_trans_res(struct xfs_mount *mp,
#define XFS_TRANS_DQ_DIRTY 0x10 /* at least one dquot in trx dirty */
#define XFS_TRANS_RESERVE 0x20 /* OK to use reserved data blocks */
#define XFS_TRANS_NO_WRITECOUNT 0x40 /* do not elevate SB writecount */
#define XFS_TRANS_RES_FDBLKS 0x80 /* reserve newly freed blocks */
/*
 * LOWMODE is used by the allocator to activate the lowspace algorithm - when
 * free space is running low the extent allocator may choose to allocate an
@@ -1740,6 +1740,7 @@ xfs_swap_extents(
    int lock_flags;
    uint64_t f;
    int resblks = 0;
    unsigned int flags = 0;

    /*
     * Lock the inodes against other IO, page faults and truncate to
@@ -1795,17 +1796,16 @@ xfs_swap_extents(
        resblks += XFS_SWAP_RMAP_SPACE_RES(mp, tipnext, w);

        /*
         * Handle the corner case where either inode might straddle the
         * btree format boundary. If so, the inode could bounce between
         * btree <-> extent format on unmap -> remap cycles, freeing and
         * allocating a bmapbt block each time.
         * If either inode straddles a bmapbt block allocation boundary,
         * the rmapbt algorithm triggers repeated allocs and frees as
         * extents are remapped. This can exhaust the block reservation
         * prematurely and cause shutdown. Return freed blocks to the
         * transaction reservation to counter this behavior.
         */
        if (ipnext == (XFS_IFORK_MAXEXT(ip, w) + 1))
            resblks += XFS_IFORK_MAXEXT(ip, w);
        if (tipnext == (XFS_IFORK_MAXEXT(tip, w) + 1))
            resblks += XFS_IFORK_MAXEXT(tip, w);
        flags |= XFS_TRANS_RES_FDBLKS;
    }
    error = xfs_trans_alloc(mp, &M_RES(mp)->tr_write, resblks, 0, 0, &tp);
    error = xfs_trans_alloc(mp, &M_RES(mp)->tr_write, resblks, 0, flags,
            &tp);
    if (error)
        goto out_unlock;
@@ -1267,10 +1267,23 @@ xfs_filemap_pfn_mkwrite(
    return __xfs_filemap_fault(vmf, PE_SIZE_PTE, true);
}

static void
xfs_filemap_map_pages(
    struct vm_fault *vmf,
    pgoff_t start_pgoff,
    pgoff_t end_pgoff)
{
    struct inode *inode = file_inode(vmf->vma->vm_file);

    xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
    filemap_map_pages(vmf, start_pgoff, end_pgoff);
    xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
}

static const struct vm_operations_struct xfs_file_vm_ops = {
    .fault = xfs_filemap_fault,
    .huge_fault = xfs_filemap_huge_fault,
    .map_pages = filemap_map_pages,
    .map_pages = xfs_filemap_map_pages,
    .page_mkwrite = xfs_filemap_page_mkwrite,
    .pfn_mkwrite = xfs_filemap_pfn_mkwrite,
};
Some files were not shown because too many files have changed in this diff.