This is the 5.10.113 stable release
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmJpLt4ACgkQONu9yGCS
aT5Wjg//dzSnqQoqXgMjLwSoMx15rfs/HjC8vgRUpdKctpzITabLc7ywdbcxuyQt
it+tlQAFMIq2caH20M+u91zm1kre9f8ap5KnVEt+snkJK+mxWZ8u0uxgzGqRJV7w
1SX4lRCdbfT82T2qjlPFlLQ3bFlxy1nbYHJI1lOltl8JXgHEHuFDGH0oWr6QwdOu
wAayeL5MmIpUqtLE7G5Jb9Yc1Hg+dCPHGjJNHbtR6URnVGNY664Moz/ij0qWA8RE
Gaxxud677xEVoc3OVRS3r9CzEmhZGBeI0xwc9Gc8vGWaVkJGlS2/p/+M8mk75yKu
gUpGZE2DNZ+8G0rs/9hs74nV01KpcOCJokLTqka+0MqKHalNVibkw8RPLThn30Ct
JyK43veFQigd3WJULwvOaoM4YBzCishYQc2jvyftZRqb5rxRfTk62UoQoqNgmhyr
1MDUS8w741jF0qdH/v8Wgv7H64d4iilZV6VqVtWiyowPphHbd76qGpRSe42Xg/gY
gL/xfjS17Uwid5es+wzIP4J9D3yxwwh3KZjgfAuaOVnMVCn2RqEjZyqQJSCAc8sF
kCPMbXjAN9/5sGwidGGDf7ML67MIcIF6928pel95RU3lmz7X5cEzN2FCeAZg28rn
W2iiSeWEh6XD7Pzbd+TYYftG3M2kGN6qzaKM2wOGNc6cK/dDROs=
=NhyD
-----END PGP SIGNATURE-----

Merge 5.10.113 into android12-5.10-lts

Changes in 5.10.113
        etherdevice: Adjust ether_addr* prototypes to silence -Wstringop-overead
        mm: page_alloc: fix building error on -Werror=array-compare
        tracing: Dump stacktrace trigger to the corresponding instance
        perf tools: Fix segfault accessing sample_id xyarray
        gfs2: assign rgrp glock before compute_bitstructs
        net/sched: cls_u32: fix netns refcount changes in u32_change()
        ALSA: usb-audio: Clear MIDI port active flag after draining
        ALSA: hda/realtek: Add quirk for Clevo NP70PNP
        dm: fix mempool NULL pointer race when completing IO
        ASoC: atmel: Remove system clock tree configuration for at91sam9g20ek
        ASoC: msm8916-wcd-digital: Check failure for devm_snd_soc_register_component
        ASoC: codecs: wcd934x: do not switch off SIDO Buck when codec is in use
        dmaengine: imx-sdma: Fix error checking in sdma_event_remap
        dmaengine: mediatek:Fix PM usage reference leak of mtk_uart_apdma_alloc_chan_resources
        spi: spi-mtk-nor: initialize spi controller after resume
        esp: limit skb_page_frag_refill use to a single page
        igc: Fix infinite loop in release_swfw_sync
        igc: Fix BUG: scheduling while atomic
        rxrpc: Restore removed timer deletion
        net/smc: Fix sock leak when release after smc_shutdown()
        net/packet: fix packet_sock xmit return value checking
        ip6_gre: Avoid updating tunnel->tun_hlen in __gre6_xmit()
        ip6_gre: Fix skb_under_panic in __gre6_xmit()
        net/sched: cls_u32: fix possible leak in u32_init_knode()
        l3mdev: l3mdev_master_upper_ifindex_by_index_rcu should be using netdev_master_upper_dev_get_rcu
        ipv6: make ip6_rt_gc_expire an atomic_t
        netlink: reset network and mac headers in netlink_dump()
        net: stmmac: Use readl_poll_timeout_atomic() in atomic state
        dmaengine: idxd: add RO check for wq max_batch_size write
        dmaengine: idxd: add RO check for wq max_transfer_size write
        selftests: mlxsw: vxlan_flooding: Prevent flooding of unwanted packets
        arm64/mm: Remove [PUD|PMD]_TABLE_BIT from [pud|pmd]_bad()
        arm64: mm: fix p?d_leaf()
        ARM: vexpress/spc: Avoid negative array index when !SMP
        reset: tegra-bpmp: Restore Handle errors in BPMP response
        platform/x86: samsung-laptop: Fix an unsigned comparison which can never be negative
        ALSA: usb-audio: Fix undefined behavior due to shift overflowing the constant
        arm64: dts: imx: Fix imx8*-var-som touchscreen property sizes
        vxlan: fix error return code in vxlan_fdb_append
        cifs: Check the IOCB_DIRECT flag, not O_DIRECT
        net: atlantic: Avoid out-of-bounds indexing
        mt76: Fix undefined behavior due to shift overflowing the constant
        brcmfmac: sdio: Fix undefined behavior due to shift overflowing the constant
        dpaa_eth: Fix missing of_node_put in dpaa_get_ts_info()
        drm/msm/mdp5: check the return of kzalloc()
        net: macb: Restart tx only if queue pointer is lagging
        scsi: qedi: Fix failed disconnect handling
        stat: fix inconsistency between struct stat and struct compat_stat
        nvme: add a quirk to disable namespace identifiers
        nvme-pci: disable namespace identifiers for Qemu controllers
        EDAC/synopsys: Read the error count from the correct register
        mm, hugetlb: allow for "high" userspace addresses
        oom_kill.c: futex: delay the OOM reaper to allow time for proper futex cleanup
        mm/mmu_notifier.c: fix race in mmu_interval_notifier_remove()
        ata: pata_marvell: Check the 'bmdma_addr' beforing reading
        dma: at_xdmac: fix a missing check on list iterator
        net: atlantic: invert deep par in pm functions, preventing null derefs
        xtensa: patch_text: Fixup last cpu should be master
        xtensa: fix a7 clobbering in coprocessor context load/store
        openvswitch: fix OOB access in reserve_sfa_size()
        gpio: Request interrupts after IRQ is initialized
        ASoC: soc-dapm: fix two incorrect uses of list iterator
        e1000e: Fix possible overflow in LTR decoding
        ARC: entry: fix syscall_trace_exit argument
        arm_pmu: Validate single/group leader events
        sched/pelt: Fix attach_entity_load_avg() corner case
        perf/core: Fix perf_mmap fail when CONFIG_PERF_USE_VMALLOC enabled
        drm/panel/raspberrypi-touchscreen: Avoid NULL deref if not initialised
        drm/panel/raspberrypi-touchscreen: Initialise the bridge in prepare
        KVM: PPC: Fix TCE handling for VFIO
        drm/vc4: Use pm_runtime_resume_and_get to fix pm_runtime_get_sync() usage
        powerpc/perf: Fix power9 event alternatives
        perf report: Set PERF_SAMPLE_DATA_SRC bit for Arm SPE event
        ext4: fix fallocate to use file_modified to update permissions consistently
        ext4: fix symlink file size not match to file content
        ext4: fix use-after-free in ext4_search_dir
        ext4: limit length to bitmap_maxbytes - blocksize in punch_hole
        ext4, doc: fix incorrect h_reserved size
        ext4: fix overhead calculation to account for the reserved gdt blocks
        ext4: force overhead calculation if the s_overhead_cluster makes no sense
        can: isotp: stop timeout monitoring when no first frame was sent
        jbd2: fix a potential race while discarding reserved buffers after an abort
        spi: atmel-quadspi: Fix the buswidth adjustment between spi-mem and controller
        staging: ion: Prevent incorrect reference counting behavour
        block/compat_ioctl: fix range check in BLKGETSIZE
        Revert "net: micrel: fix KS8851_MLL Kconfig"
        Linux 5.10.113

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I4ed10699cbb32b89caf79b8b4a2a35b3d8824115
commit ca9b002a16
@@ -76,7 +76,7 @@ The beginning of an extended attribute block is in
     - Checksum of the extended attribute block.
   * - 0x14
     - \_\_u32
-    - h\_reserved[2]
+    - h\_reserved[3]
     - Zero.
 
 The checksum is calculated against the FS UUID, the 64-bit block number
|
2
Makefile
2
Makefile
@ -1,7 +1,7 @@
|
|||||||
# SPDX-License-Identifier: GPL-2.0
|
# SPDX-License-Identifier: GPL-2.0
|
||||||
VERSION = 5
|
VERSION = 5
|
||||||
PATCHLEVEL = 10
|
PATCHLEVEL = 10
|
||||||
SUBLEVEL = 112
|
SUBLEVEL = 113
|
||||||
EXTRAVERSION =
|
EXTRAVERSION =
|
||||||
NAME = Dare mighty things
|
NAME = Dare mighty things
|
||||||
|
|
||||||
|
@@ -199,6 +199,7 @@ tracesys_exit:
         st  r0, [sp, PT_r0]     ; sys call return value in pt_regs
 
         ;POST Sys Call Ptrace Hook
+        mov r0, sp              ; pt_regs needed
         bl  @syscall_trace_exit
         b   ret_from_exception  ; NOT ret_from_system_call at is saves r0 which
                                 ; we'd done before calling post hook above
@@ -580,7 +580,7 @@ static int __init ve_spc_clk_init(void)
                }
 
                cluster = topology_physical_package_id(cpu_dev->id);
-               if (init_opp_table[cluster])
+               if (cluster < 0 || init_opp_table[cluster])
                        continue;
 
                if (ve_init_opp_table(cpu_dev))
@@ -89,12 +89,12 @@ touchscreen@0 {
        pendown-gpio = <&gpio1 3 GPIO_ACTIVE_LOW>;
 
        ti,x-min = /bits/ 16 <125>;
-       touchscreen-size-x = /bits/ 16 <4008>;
+       touchscreen-size-x = <4008>;
        ti,y-min = /bits/ 16 <282>;
-       touchscreen-size-y = /bits/ 16 <3864>;
+       touchscreen-size-y = <3864>;
        ti,x-plate-ohms = /bits/ 16 <180>;
-       touchscreen-max-pressure = /bits/ 16 <255>;
-       touchscreen-average-samples = /bits/ 16 <10>;
+       touchscreen-max-pressure = <255>;
+       touchscreen-average-samples = <10>;
        ti,debounce-tol = /bits/ 16 <3>;
        ti,debounce-rep = /bits/ 16 <1>;
        ti,settle-delay-usec = /bits/ 16 <150>;
@@ -70,12 +70,12 @@ touchscreen@0 {
        pendown-gpio = <&gpio1 3 GPIO_ACTIVE_LOW>;
 
        ti,x-min = /bits/ 16 <125>;
-       touchscreen-size-x = /bits/ 16 <4008>;
+       touchscreen-size-x = <4008>;
        ti,y-min = /bits/ 16 <282>;
-       touchscreen-size-y = /bits/ 16 <3864>;
+       touchscreen-size-y = <3864>;
        ti,x-plate-ohms = /bits/ 16 <180>;
-       touchscreen-max-pressure = /bits/ 16 <255>;
-       touchscreen-average-samples = /bits/ 16 <10>;
+       touchscreen-max-pressure = <255>;
+       touchscreen-average-samples = <10>;
        ti,debounce-tol = /bits/ 16 <3>;
        ti,debounce-rep = /bits/ 16 <1>;
        ti,settle-delay-usec = /bits/ 16 <150>;
@@ -522,13 +522,12 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 
 #define pmd_none(pmd)          (!pmd_val(pmd))
 
-#define pmd_bad(pmd)           (!(pmd_val(pmd) & PMD_TABLE_BIT))
-
 #define pmd_table(pmd)         ((pmd_val(pmd) & PMD_TYPE_MASK) == \
                                 PMD_TYPE_TABLE)
 #define pmd_sect(pmd)          ((pmd_val(pmd) & PMD_TYPE_MASK) == \
                                 PMD_TYPE_SECT)
-#define pmd_leaf(pmd)          pmd_sect(pmd)
+#define pmd_leaf(pmd)          (pmd_present(pmd) && !pmd_table(pmd))
+#define pmd_bad(pmd)           (!pmd_table(pmd))
 
 #if defined(CONFIG_ARM64_64K_PAGES) || CONFIG_PGTABLE_LEVELS < 3
 static inline bool pud_sect(pud_t pud) { return false; }
@@ -619,9 +618,9 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
        pr_err("%s:%d: bad pmd %016llx.\n", __FILE__, __LINE__, pmd_val(e))
 
 #define pud_none(pud)          (!pud_val(pud))
-#define pud_bad(pud)           (!(pud_val(pud) & PUD_TABLE_BIT))
+#define pud_bad(pud)           (!pud_table(pud))
 #define pud_present(pud)       pte_present(pud_pte(pud))
-#define pud_leaf(pud)          pud_sect(pud)
+#define pud_leaf(pud)          (pud_present(pud) && !pud_table(pud))
 #define pud_valid(pud)         pte_valid(pud_pte(pud))
 
 static inline void set_pud(pud_t *pudp, pud_t pud)
@@ -421,13 +421,19 @@ static void kvmppc_tce_put(struct kvmppc_spapr_tce_table *stt,
        tbl[idx % TCES_PER_PAGE] = tce;
 }
 
-static void kvmppc_clear_tce(struct mm_struct *mm, struct iommu_table *tbl,
-               unsigned long entry)
+static void kvmppc_clear_tce(struct mm_struct *mm, struct kvmppc_spapr_tce_table *stt,
+               struct iommu_table *tbl, unsigned long entry)
 {
-       unsigned long hpa = 0;
-       enum dma_data_direction dir = DMA_NONE;
+       unsigned long i;
+       unsigned long subpages = 1ULL << (stt->page_shift - tbl->it_page_shift);
+       unsigned long io_entry = entry << (stt->page_shift - tbl->it_page_shift);
+
+       for (i = 0; i < subpages; ++i) {
+               unsigned long hpa = 0;
+               enum dma_data_direction dir = DMA_NONE;
 
-       iommu_tce_xchg_no_kill(mm, tbl, entry, &hpa, &dir);
+               iommu_tce_xchg_no_kill(mm, tbl, io_entry + i, &hpa, &dir);
+       }
 }
 
 static long kvmppc_tce_iommu_mapped_dec(struct kvm *kvm,
@@ -486,6 +492,8 @@ static long kvmppc_tce_iommu_unmap(struct kvm *kvm,
                break;
        }
 
+       iommu_tce_kill(tbl, io_entry, subpages);
+
        return ret;
 }
 
@@ -545,6 +553,8 @@ static long kvmppc_tce_iommu_map(struct kvm *kvm,
                break;
        }
 
+       iommu_tce_kill(tbl, io_entry, subpages);
+
        return ret;
 }
 
@@ -591,10 +601,9 @@ long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
                ret = kvmppc_tce_iommu_map(vcpu->kvm, stt, stit->tbl,
                                entry, ua, dir);
 
-               iommu_tce_kill(stit->tbl, entry, 1);
-
                if (ret != H_SUCCESS) {
-                       kvmppc_clear_tce(vcpu->kvm->mm, stit->tbl, entry);
+                       kvmppc_clear_tce(vcpu->kvm->mm, stt, stit->tbl, entry);
                        goto unlock_exit;
                }
        }
@@ -670,13 +679,13 @@ long kvmppc_h_put_tce_indirect(struct kvm_vcpu *vcpu,
                 */
                if (get_user(tce, tces + i)) {
                        ret = H_TOO_HARD;
-                       goto invalidate_exit;
+                       goto unlock_exit;
                }
                tce = be64_to_cpu(tce);
 
                if (kvmppc_tce_to_ua(vcpu->kvm, tce, &ua)) {
                        ret = H_PARAMETER;
-                       goto invalidate_exit;
+                       goto unlock_exit;
                }
 
                list_for_each_entry_lockless(stit, &stt->iommu_tables, next) {
@@ -685,19 +694,15 @@ long kvmppc_h_put_tce_indirect(struct kvm_vcpu *vcpu,
                                        iommu_tce_direction(tce));
 
                        if (ret != H_SUCCESS) {
-                               kvmppc_clear_tce(vcpu->kvm->mm, stit->tbl,
-                                               entry);
-                               goto invalidate_exit;
+                               kvmppc_clear_tce(vcpu->kvm->mm, stt, stit->tbl,
+                                               entry + i);
+                               goto unlock_exit;
                        }
                }
 
                kvmppc_tce_put(stt, entry + i, tce);
        }
 
-invalidate_exit:
-       list_for_each_entry_lockless(stit, &stt->iommu_tables, next)
-               iommu_tce_kill(stit->tbl, entry, npages);
-
 unlock_exit:
        srcu_read_unlock(&vcpu->kvm->srcu, idx);
 
@@ -736,20 +741,16 @@ long kvmppc_h_stuff_tce(struct kvm_vcpu *vcpu,
                                continue;
 
                        if (ret == H_TOO_HARD)
-                               goto invalidate_exit;
+                               return ret;
 
                        WARN_ON_ONCE(1);
-                       kvmppc_clear_tce(vcpu->kvm->mm, stit->tbl, entry);
+                       kvmppc_clear_tce(vcpu->kvm->mm, stt, stit->tbl, entry + i);
                }
        }
 
        for (i = 0; i < npages; ++i, ioba += (1ULL << stt->page_shift))
                kvmppc_tce_put(stt, ioba >> stt->page_shift, tce_value);
 
-invalidate_exit:
-       list_for_each_entry_lockless(stit, &stt->iommu_tables, next)
-               iommu_tce_kill(stit->tbl, ioba >> stt->page_shift, npages);
-
        return ret;
 }
 EXPORT_SYMBOL_GPL(kvmppc_h_stuff_tce);
@@ -247,13 +247,19 @@ static void iommu_tce_kill_rm(struct iommu_table *tbl,
                tbl->it_ops->tce_kill(tbl, entry, pages, true);
 }
 
-static void kvmppc_rm_clear_tce(struct kvm *kvm, struct iommu_table *tbl,
-               unsigned long entry)
+static void kvmppc_rm_clear_tce(struct kvm *kvm, struct kvmppc_spapr_tce_table *stt,
+               struct iommu_table *tbl, unsigned long entry)
 {
-       unsigned long hpa = 0;
-       enum dma_data_direction dir = DMA_NONE;
+       unsigned long i;
+       unsigned long subpages = 1ULL << (stt->page_shift - tbl->it_page_shift);
+       unsigned long io_entry = entry << (stt->page_shift - tbl->it_page_shift);
+
+       for (i = 0; i < subpages; ++i) {
+               unsigned long hpa = 0;
+               enum dma_data_direction dir = DMA_NONE;
 
-       iommu_tce_xchg_no_kill_rm(kvm->mm, tbl, entry, &hpa, &dir);
+               iommu_tce_xchg_no_kill_rm(kvm->mm, tbl, io_entry + i, &hpa, &dir);
+       }
 }
 
 static long kvmppc_rm_tce_iommu_mapped_dec(struct kvm *kvm,
@@ -316,6 +322,8 @@ static long kvmppc_rm_tce_iommu_unmap(struct kvm *kvm,
                break;
        }
 
+       iommu_tce_kill_rm(tbl, io_entry, subpages);
+
        return ret;
 }
 
@@ -379,6 +387,8 @@ static long kvmppc_rm_tce_iommu_map(struct kvm *kvm,
                break;
        }
 
+       iommu_tce_kill_rm(tbl, io_entry, subpages);
+
        return ret;
 }
 
@@ -424,10 +434,8 @@ long kvmppc_rm_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
                ret = kvmppc_rm_tce_iommu_map(vcpu->kvm, stt,
                                stit->tbl, entry, ua, dir);
 
-               iommu_tce_kill_rm(stit->tbl, entry, 1);
-
                if (ret != H_SUCCESS) {
-                       kvmppc_rm_clear_tce(vcpu->kvm, stit->tbl, entry);
+                       kvmppc_rm_clear_tce(vcpu->kvm, stt, stit->tbl, entry);
                        return ret;
                }
        }
@@ -569,7 +577,7 @@ long kvmppc_rm_h_put_tce_indirect(struct kvm_vcpu *vcpu,
                ua = 0;
                if (kvmppc_rm_tce_to_ua(vcpu->kvm, tce, &ua)) {
                        ret = H_PARAMETER;
-                       goto invalidate_exit;
+                       goto unlock_exit;
                }
 
                list_for_each_entry_lockless(stit, &stt->iommu_tables, next) {
@@ -578,19 +586,15 @@ long kvmppc_rm_h_put_tce_indirect(struct kvm_vcpu *vcpu,
                                        iommu_tce_direction(tce));
 
                        if (ret != H_SUCCESS) {
-                               kvmppc_rm_clear_tce(vcpu->kvm, stit->tbl,
-                                               entry);
-                               goto invalidate_exit;
+                               kvmppc_rm_clear_tce(vcpu->kvm, stt, stit->tbl,
+                                               entry + i);
+                               goto unlock_exit;
                        }
                }
 
                kvmppc_rm_tce_put(stt, entry + i, tce);
        }
 
-invalidate_exit:
-       list_for_each_entry_lockless(stit, &stt->iommu_tables, next)
-               iommu_tce_kill_rm(stit->tbl, entry, npages);
-
 unlock_exit:
        if (!prereg)
                arch_spin_unlock(&kvm->mmu_lock.rlock.raw_lock);
@@ -632,20 +636,16 @@ long kvmppc_rm_h_stuff_tce(struct kvm_vcpu *vcpu,
                                continue;
 
                        if (ret == H_TOO_HARD)
-                               goto invalidate_exit;
+                               return ret;
 
                        WARN_ON_ONCE_RM(1);
-                       kvmppc_rm_clear_tce(vcpu->kvm, stit->tbl, entry);
+                       kvmppc_rm_clear_tce(vcpu->kvm, stt, stit->tbl, entry + i);
                }
        }
 
        for (i = 0; i < npages; ++i, ioba += (1ULL << stt->page_shift))
                kvmppc_rm_tce_put(stt, ioba >> stt->page_shift, tce_value);
 
-invalidate_exit:
-       list_for_each_entry_lockless(stit, &stt->iommu_tables, next)
-               iommu_tce_kill_rm(stit->tbl, ioba >> stt->page_shift, npages);
-
        return ret;
 }
 
@@ -133,11 +133,11 @@ int p9_dd22_bl_ev[] = {
 
 /* Table of alternatives, sorted by column 0 */
 static const unsigned int power9_event_alternatives[][MAX_ALT] = {
+       { PM_BR_2PATH,                  PM_BR_2PATH_ALT },
        { PM_INST_DISP,                 PM_INST_DISP_ALT },
        { PM_RUN_CYC_ALT,               PM_RUN_CYC },
-       { PM_RUN_INST_CMPL_ALT,         PM_RUN_INST_CMPL },
        { PM_LD_MISS_L1,                PM_LD_MISS_L1_ALT },
-       { PM_BR_2PATH,                  PM_BR_2PATH_ALT },
+       { PM_RUN_INST_CMPL_ALT,         PM_RUN_INST_CMPL },
 };
 
 static int power9_get_alternatives(u64 event, unsigned int flags, u64 alt[])
@@ -29,15 +29,13 @@ typedef u32            compat_caddr_t;
 typedef __kernel_fsid_t        compat_fsid_t;
 
 struct compat_stat {
-       compat_dev_t    st_dev;
-       u16             __pad1;
+       u32             st_dev;
        compat_ino_t    st_ino;
        compat_mode_t   st_mode;
        compat_nlink_t  st_nlink;
        __compat_uid_t  st_uid;
        __compat_gid_t  st_gid;
-       compat_dev_t    st_rdev;
-       u16             __pad2;
+       u32             st_rdev;
        u32             st_size;
        u32             st_blksize;
        u32             st_blocks;
@@ -29,7 +29,7 @@
        .if XTENSA_HAVE_COPROCESSOR(x);                                 \
                .align 4;                                               \
        .Lsave_cp_regs_cp##x:                                           \
-               xchal_cp##x##_store a2 a4 a5 a6 a7;                     \
+               xchal_cp##x##_store a2 a3 a4 a5 a6;                     \
                jx      a0;                                             \
        .endif
 
@@ -46,7 +46,7 @@
        .if XTENSA_HAVE_COPROCESSOR(x);                                 \
                .align 4;                                               \
        .Lload_cp_regs_cp##x:                                           \
-               xchal_cp##x##_load a2 a4 a5 a6 a7;                      \
+               xchal_cp##x##_load a2 a3 a4 a5 a6;                      \
                jx      a0;                                             \
        .endif
 
@@ -40,7 +40,7 @@ static int patch_text_stop_machine(void *data)
 {
        struct patch *patch = data;
 
-       if (atomic_inc_return(&patch->cpu_count) == 1) {
+       if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) {
                local_patch_text(patch->addr, patch->data, patch->sz);
                atomic_inc(&patch->cpu_count);
        } else {
@@ -679,7 +679,7 @@ long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg)
                             (bdev->bd_bdi->ra_pages * PAGE_SIZE) / 512);
        case BLKGETSIZE:
                size = i_size_read(bdev->bd_inode);
-               if ((size >> 9) > ~0UL)
+               if ((size >> 9) > ~(compat_ulong_t)0)
                        return -EFBIG;
                return compat_put_ulong(argp, size >> 9);
 
@@ -83,6 +83,8 @@ static int marvell_cable_detect(struct ata_port *ap)
        switch(ap->port_no)
        {
        case 0:
+               if (!ap->ioaddr.bmdma_addr)
+                       return ATA_CBL_PATA_UNK;
                if (ioread8(ap->ioaddr.bmdma_addr + 1) & 1)
                        return ATA_CBL_PATA40;
                return ATA_CBL_PATA80;
@@ -1390,7 +1390,7 @@ at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
 {
        struct at_xdmac_chan    *atchan = to_at_xdmac_chan(chan);
        struct at_xdmac         *atxdmac = to_at_xdmac(atchan->chan.device);
-       struct at_xdmac_desc    *desc, *_desc;
+       struct at_xdmac_desc    *desc, *_desc, *iter;
        struct list_head        *descs_list;
        enum dma_status         ret;
        int                     residue, retry;
@@ -1505,12 +1505,14 @@ at_xdmac_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
         * microblock.
         */
        descs_list = &desc->descs_list;
-       list_for_each_entry_safe(desc, _desc, descs_list, desc_node) {
-               dwidth = at_xdmac_get_dwidth(desc->lld.mbr_cfg);
-               residue -= (desc->lld.mbr_ubc & 0xffffff) << dwidth;
-               if ((desc->lld.mbr_nda & 0xfffffffc) == cur_nda)
+       list_for_each_entry_safe(iter, _desc, descs_list, desc_node) {
+               dwidth = at_xdmac_get_dwidth(iter->lld.mbr_cfg);
+               residue -= (iter->lld.mbr_ubc & 0xffffff) << dwidth;
+               if ((iter->lld.mbr_nda & 0xfffffffc) == cur_nda) {
+                       desc = iter;
                        break;
+               }
        }
        residue += cur_ubc << dwidth;
 
        dma_set_residue(txstate, residue);
@@ -1098,6 +1098,9 @@ static ssize_t wq_max_transfer_size_store(struct device *dev, struct device_attr
        u64 xfer_size;
        int rc;
 
+       if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
+               return -EPERM;
+
        if (wq->state != IDXD_WQ_DISABLED)
                return -EPERM;
 
@@ -1132,6 +1135,9 @@ static ssize_t wq_max_batch_size_store(struct device *dev, struct device_attribu
        u64 batch_size;
        int rc;
 
+       if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags))
+               return -EPERM;
+
        if (wq->state != IDXD_WQ_DISABLED)
                return -EPERM;
 
@@ -1789,7 +1789,7 @@ static int sdma_event_remap(struct sdma_engine *sdma)
        u32 reg, val, shift, num_map, i;
        int ret = 0;
 
-       if (IS_ERR(np) || IS_ERR(gpr_np))
+       if (IS_ERR(np) || !gpr_np)
                goto out;
 
        event_remap = of_find_property(np, propname, NULL);
@@ -1837,7 +1837,7 @@ static int sdma_event_remap(struct sdma_engine *sdma)
        }
 
 out:
-       if (!IS_ERR(gpr_np))
+       if (gpr_np)
                of_node_put(gpr_np);
 
        return ret;
@@ -274,7 +274,7 @@ static int mtk_uart_apdma_alloc_chan_resources(struct dma_chan *chan)
        unsigned int status;
        int ret;
 
-       ret = pm_runtime_get_sync(mtkd->ddev.dev);
+       ret = pm_runtime_resume_and_get(mtkd->ddev.dev);
        if (ret < 0) {
                pm_runtime_put_noidle(chan->device->dev);
                return ret;
@@ -288,18 +288,21 @@ static int mtk_uart_apdma_alloc_chan_resources(struct dma_chan *chan)
        ret = readx_poll_timeout(readl, c->base + VFF_EN,
                          status, !status, 10, 100);
        if (ret)
-               return ret;
+               goto err_pm;
 
        ret = request_irq(c->irq, mtk_uart_apdma_irq_handler,
                          IRQF_TRIGGER_NONE, KBUILD_MODNAME, chan);
        if (ret < 0) {
                dev_err(chan->device->dev, "Can't request dma IRQ\n");
-               return -EINVAL;
+               ret = -EINVAL;
+               goto err_pm;
        }
 
        if (mtkd->support_33bits)
                mtk_uart_apdma_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_CLR_B);
 
+err_pm:
+       pm_runtime_put_noidle(mtkd->ddev.dev);
        return ret;
 }
 
@@ -163,6 +163,11 @@
 #define ECC_STAT_CECNT_SHIFT   8
 #define ECC_STAT_BITNUM_MASK   0x7F
 
+/* ECC error count register definitions */
+#define ECC_ERRCNT_UECNT_MASK  0xFFFF0000
+#define ECC_ERRCNT_UECNT_SHIFT 16
+#define ECC_ERRCNT_CECNT_MASK  0xFFFF
+
 /* DDR QOS Interrupt register definitions */
 #define DDR_QOS_IRQ_STAT_OFST  0x20200
 #define DDR_QOSUE_MASK         0x4
@@ -418,15 +423,16 @@ static int zynqmp_get_error_info(struct synps_edac_priv *priv)
        base = priv->baseaddr;
        p = &priv->stat;
 
+       regval = readl(base + ECC_ERRCNT_OFST);
+       p->ce_cnt = regval & ECC_ERRCNT_CECNT_MASK;
+       p->ue_cnt = (regval & ECC_ERRCNT_UECNT_MASK) >> ECC_ERRCNT_UECNT_SHIFT;
+       if (!p->ce_cnt)
+               goto ue_err;
+
        regval = readl(base + ECC_STAT_OFST);
        if (!regval)
                return 1;
 
-       p->ce_cnt = (regval & ECC_STAT_CECNT_MASK) >> ECC_STAT_CECNT_SHIFT;
-       p->ue_cnt = (regval & ECC_STAT_UECNT_MASK) >> ECC_STAT_UECNT_SHIFT;
-       if (!p->ce_cnt)
-               goto ue_err;
-
        p->ceinfo.bitpos = (regval & ECC_STAT_BITNUM_MASK);
 
        regval = readl(base + ECC_CEADDR0_OFST);
|
|||||||
|
|
||||||
gpiochip_set_irq_hooks(gc);
|
gpiochip_set_irq_hooks(gc);
|
||||||
|
|
||||||
acpi_gpiochip_request_interrupts(gc);
|
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Using barrier() here to prevent compiler from reordering
|
* Using barrier() here to prevent compiler from reordering
|
||||||
* gc->irq.initialized before initialization of above
|
* gc->irq.initialized before initialization of above
|
||||||
@ -1624,6 +1622,8 @@ static int gpiochip_add_irqchip(struct gpio_chip *gc,
|
|||||||
|
|
||||||
gc->irq.initialized = true;
|
gc->irq.initialized = true;
|
||||||
|
|
||||||
|
acpi_gpiochip_request_interrupts(gc);
|
||||||
|
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@@ -179,7 +179,10 @@ static void mdp5_plane_reset(struct drm_plane *plane)
 		drm_framebuffer_put(plane->state->fb);
 
 	kfree(to_mdp5_plane_state(plane->state));
+	plane->state = NULL;
 	mdp5_state = kzalloc(sizeof(*mdp5_state), GFP_KERNEL);
+	if (!mdp5_state)
+		return;
 
 	/* assign default blend parameters */
 	mdp5_state->alpha = 255;
@@ -229,7 +229,7 @@ static void rpi_touchscreen_i2c_write(struct rpi_touchscreen *ts,
 
 	ret = i2c_smbus_write_byte_data(ts->i2c, reg, val);
 	if (ret)
-		dev_err(&ts->dsi->dev, "I2C write failed: %d\n", ret);
+		dev_err(&ts->i2c->dev, "I2C write failed: %d\n", ret);
 }
 
 static int rpi_touchscreen_write(struct rpi_touchscreen *ts, u16 reg, u32 val)
@@ -265,7 +265,7 @@ static int rpi_touchscreen_noop(struct drm_panel *panel)
 	return 0;
 }
 
-static int rpi_touchscreen_enable(struct drm_panel *panel)
+static int rpi_touchscreen_prepare(struct drm_panel *panel)
 {
 	struct rpi_touchscreen *ts = panel_to_ts(panel);
 	int i;
@@ -295,6 +295,13 @@ static int rpi_touchscreen_enable(struct drm_panel *panel)
 	rpi_touchscreen_write(ts, DSI_STARTDSI, 0x01);
 	msleep(100);
 
+	return 0;
+}
+
+static int rpi_touchscreen_enable(struct drm_panel *panel)
+{
+	struct rpi_touchscreen *ts = panel_to_ts(panel);
+
 	/* Turn on the backlight. */
 	rpi_touchscreen_i2c_write(ts, REG_PWM, 255);
 
@@ -349,7 +356,7 @@ static int rpi_touchscreen_get_modes(struct drm_panel *panel,
 static const struct drm_panel_funcs rpi_touchscreen_funcs = {
 	.disable = rpi_touchscreen_disable,
 	.unprepare = rpi_touchscreen_noop,
-	.prepare = rpi_touchscreen_noop,
+	.prepare = rpi_touchscreen_prepare,
 	.enable = rpi_touchscreen_enable,
 	.get_modes = rpi_touchscreen_get_modes,
 };
@@ -835,7 +835,7 @@ static void vc4_dsi_encoder_enable(struct drm_encoder *encoder)
 	unsigned long phy_clock;
 	int ret;
 
-	ret = pm_runtime_get_sync(dev);
+	ret = pm_runtime_resume_and_get(dev);
 	if (ret) {
 		DRM_ERROR("Failed to runtime PM enable on DSI%d\n", dsi->port);
 		return;
@@ -480,8 +480,8 @@ int aq_nic_start(struct aq_nic_s *self)
 	if (err < 0)
 		goto err_exit;
 
-	for (i = 0U, aq_vec = self->aq_vec[0];
-	     self->aq_vecs > i; ++i, aq_vec = self->aq_vec[i]) {
+	for (i = 0U; self->aq_vecs > i; ++i) {
+		aq_vec = self->aq_vec[i];
 		err = aq_vec_start(aq_vec);
 		if (err < 0)
 			goto err_exit;
@@ -511,8 +511,8 @@ int aq_nic_start(struct aq_nic_s *self)
 		mod_timer(&self->polling_timer, jiffies +
 			  AQ_CFG_POLLING_TIMER_INTERVAL);
 	} else {
-		for (i = 0U, aq_vec = self->aq_vec[0];
-		     self->aq_vecs > i; ++i, aq_vec = self->aq_vec[i]) {
+		for (i = 0U; self->aq_vecs > i; ++i) {
+			aq_vec = self->aq_vec[i];
 			err = aq_pci_func_alloc_irq(self, i, self->ndev->name,
 						    aq_vec_isr, aq_vec,
 						    aq_vec_get_affinity_mask(aq_vec));
@@ -450,22 +450,22 @@ static int atl_resume_common(struct device *dev, bool deep)
 
 static int aq_pm_freeze(struct device *dev)
 {
-	return aq_suspend_common(dev, false);
+	return aq_suspend_common(dev, true);
 }
 
 static int aq_pm_suspend_poweroff(struct device *dev)
 {
-	return aq_suspend_common(dev, true);
+	return aq_suspend_common(dev, false);
 }
 
 static int aq_pm_thaw(struct device *dev)
 {
-	return atl_resume_common(dev, false);
+	return atl_resume_common(dev, true);
 }
 
 static int aq_pm_resume_restore(struct device *dev)
 {
-	return atl_resume_common(dev, true);
+	return atl_resume_common(dev, false);
 }
 
 static const struct dev_pm_ops aq_pm_ops = {
@@ -43,8 +43,8 @@ static int aq_vec_poll(struct napi_struct *napi, int budget)
 	if (!self) {
 		err = -EINVAL;
 	} else {
-		for (i = 0U, ring = self->ring[0];
-		     self->tx_rings > i; ++i, ring = self->ring[i]) {
+		for (i = 0U; self->tx_rings > i; ++i) {
+			ring = self->ring[i];
 			u64_stats_update_begin(&ring[AQ_VEC_RX_ID].stats.rx.syncp);
 			ring[AQ_VEC_RX_ID].stats.rx.polls++;
 			u64_stats_update_end(&ring[AQ_VEC_RX_ID].stats.rx.syncp);
@@ -182,8 +182,8 @@ int aq_vec_init(struct aq_vec_s *self, const struct aq_hw_ops *aq_hw_ops,
 	self->aq_hw_ops = aq_hw_ops;
 	self->aq_hw = aq_hw;
 
-	for (i = 0U, ring = self->ring[0];
-	     self->tx_rings > i; ++i, ring = self->ring[i]) {
+	for (i = 0U; self->tx_rings > i; ++i) {
+		ring = self->ring[i];
 		err = aq_ring_init(&ring[AQ_VEC_TX_ID], ATL_RING_TX);
 		if (err < 0)
 			goto err_exit;
@@ -224,8 +224,8 @@ int aq_vec_start(struct aq_vec_s *self)
 	unsigned int i = 0U;
 	int err = 0;
 
-	for (i = 0U, ring = self->ring[0];
-	     self->tx_rings > i; ++i, ring = self->ring[i]) {
+	for (i = 0U; self->tx_rings > i; ++i) {
+		ring = self->ring[i];
 		err = self->aq_hw_ops->hw_ring_tx_start(self->aq_hw,
 							&ring[AQ_VEC_TX_ID]);
 		if (err < 0)
@@ -248,8 +248,8 @@ void aq_vec_stop(struct aq_vec_s *self)
 	struct aq_ring_s *ring = NULL;
 	unsigned int i = 0U;
 
-	for (i = 0U, ring = self->ring[0];
-	     self->tx_rings > i; ++i, ring = self->ring[i]) {
+	for (i = 0U; self->tx_rings > i; ++i) {
+		ring = self->ring[i];
 		self->aq_hw_ops->hw_ring_tx_stop(self->aq_hw,
 						 &ring[AQ_VEC_TX_ID]);
 
@@ -268,8 +268,8 @@ void aq_vec_deinit(struct aq_vec_s *self)
 	if (!self)
 		goto err_exit;
 
-	for (i = 0U, ring = self->ring[0];
-	     self->tx_rings > i; ++i, ring = self->ring[i]) {
+	for (i = 0U; self->tx_rings > i; ++i) {
+		ring = self->ring[i];
 		aq_ring_tx_clean(&ring[AQ_VEC_TX_ID]);
 		aq_ring_rx_deinit(&ring[AQ_VEC_RX_ID]);
 	}
@@ -297,8 +297,8 @@ void aq_vec_ring_free(struct aq_vec_s *self)
 	if (!self)
 		goto err_exit;
 
-	for (i = 0U, ring = self->ring[0];
-	     self->tx_rings > i; ++i, ring = self->ring[i]) {
+	for (i = 0U; self->tx_rings > i; ++i) {
+		ring = self->ring[i];
 		aq_ring_free(&ring[AQ_VEC_TX_ID]);
 		if (i < self->rx_rings)
 			aq_ring_free(&ring[AQ_VEC_RX_ID]);
@@ -1531,6 +1531,7 @@ static void macb_tx_restart(struct macb_queue *queue)
 	unsigned int head = queue->tx_head;
 	unsigned int tail = queue->tx_tail;
 	struct macb *bp = queue->bp;
+	unsigned int head_idx, tbqp;
 
 	if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
 		queue_writel(queue, ISR, MACB_BIT(TXUBR));
@@ -1538,6 +1539,13 @@ static void macb_tx_restart(struct macb_queue *queue)
 	if (head == tail)
 		return;
 
+	tbqp = queue_readl(queue, TBQP) / macb_dma_desc_get_size(bp);
+	tbqp = macb_adj_dma_desc_idx(bp, macb_tx_ring_wrap(bp, tbqp));
+	head_idx = macb_adj_dma_desc_idx(bp, macb_tx_ring_wrap(bp, head));
+
+	if (tbqp == head_idx)
+		return;
+
 	macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART));
 }
 
@@ -489,11 +489,15 @@ static int dpaa_get_ts_info(struct net_device *net_dev,
 	info->phc_index = -1;
 
 	fman_node = of_get_parent(mac_node);
-	if (fman_node)
+	if (fman_node) {
 		ptp_node = of_parse_phandle(fman_node, "ptimer-handle", 0);
+		of_node_put(fman_node);
+	}
 
-	if (ptp_node)
+	if (ptp_node) {
 		ptp_dev = of_find_device_by_node(ptp_node);
+		of_node_put(ptp_node);
+	}
 
 	if (ptp_dev)
 		ptp = platform_get_drvdata(ptp_dev);
@@ -1006,8 +1006,8 @@ static s32 e1000_platform_pm_pch_lpt(struct e1000_hw *hw, bool link)
 {
 	u32 reg = link << (E1000_LTRV_REQ_SHIFT + E1000_LTRV_NOSNOOP_SHIFT) |
 		  link << E1000_LTRV_REQ_SHIFT | E1000_LTRV_SEND;
-	u16 max_ltr_enc_d = 0;	/* maximum LTR decoded by platform */
-	u16 lat_enc_d = 0;	/* latency decoded */
+	u32 max_ltr_enc_d = 0;	/* maximum LTR decoded by platform */
+	u32 lat_enc_d = 0;	/* latency decoded */
 	u16 lat_enc = 0;	/* latency encoded */
 
 	if (link) {
@@ -156,8 +156,15 @@ void igc_release_swfw_sync_i225(struct igc_hw *hw, u16 mask)
 {
 	u32 swfw_sync;
 
-	while (igc_get_hw_semaphore_i225(hw))
-		; /* Empty */
+	/* Releasing the resource requires first getting the HW semaphore.
+	 * If we fail to get the semaphore, there is nothing we can do,
+	 * except log an error and quit. We are not allowed to hang here
+	 * indefinitely, as it may cause denial of service or system crash.
+	 */
+	if (igc_get_hw_semaphore_i225(hw)) {
+		hw_dbg("Failed to release SW_FW_SYNC.\n");
+		return;
+	}
 
 	swfw_sync = rd32(IGC_SW_FW_SYNC);
 	swfw_sync &= ~mask;
@@ -583,7 +583,7 @@ static s32 igc_read_phy_reg_mdic(struct igc_hw *hw, u32 offset, u16 *data)
 	 * the lower time out
 	 */
 	for (i = 0; i < IGC_GEN_POLL_TIMEOUT; i++) {
-		usleep_range(500, 1000);
+		udelay(50);
 		mdic = rd32(IGC_MDIC);
 		if (mdic & IGC_MDIC_READY)
 			break;
@@ -640,7 +640,7 @@ static s32 igc_write_phy_reg_mdic(struct igc_hw *hw, u32 offset, u16 data)
 	 * the lower time out
 	 */
 	for (i = 0; i < IGC_GEN_POLL_TIMEOUT; i++) {
-		usleep_range(500, 1000);
+		udelay(50);
 		mdic = rd32(IGC_MDIC);
 		if (mdic & IGC_MDIC_READY)
 			break;
@@ -37,7 +37,6 @@ config KS8851
 config KS8851_MLL
 	tristate "Micrel KS8851 MLL"
 	depends on HAS_IOMEM
-	depends on PTP_1588_CLOCK_OPTIONAL
 	select MII
 	select CRC32
 	select EEPROM_93CX6
@@ -68,9 +68,9 @@ static int init_systime(void __iomem *ioaddr, u32 sec, u32 nsec)
 	writel(value, ioaddr + PTP_TCR);
 
 	/* wait for present system time initialize to complete */
-	return readl_poll_timeout(ioaddr + PTP_TCR, value,
+	return readl_poll_timeout_atomic(ioaddr + PTP_TCR, value,
				 !(value & PTP_TCR_TSINIT),
-				 10000, 100000);
+				 10, 100000);
 }
 
 static int config_addend(void __iomem *ioaddr, u32 addend)
@@ -710,11 +710,11 @@ static int vxlan_fdb_append(struct vxlan_fdb *f,
 
 	rd = kmalloc(sizeof(*rd), GFP_ATOMIC);
 	if (rd == NULL)
-		return -ENOBUFS;
+		return -ENOMEM;
 
 	if (dst_cache_init(&rd->dst_cache, GFP_ATOMIC)) {
 		kfree(rd);
-		return -ENOBUFS;
+		return -ENOMEM;
 	}
 
 	rd->remote_ip = *ip;
@@ -557,7 +557,7 @@ enum brcmf_sdio_frmtype {
 	BRCMF_SDIO_FT_SUB,
 };
 
-#define SDIOD_DRVSTR_KEY(chip, pmu)	(((chip) << 16) | (pmu))
+#define SDIOD_DRVSTR_KEY(chip, pmu)	(((unsigned int)(chip) << 16) | (pmu))
 
 /* SDIO Pad drive strength to select value mappings */
 struct sdiod_drive_str {
@@ -80,7 +80,7 @@ mt76x2e_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	mt76_rmw_field(dev, 0x15a10, 0x1f << 16, 0x9);
 
 	/* RG_SSUSB_G1_CDR_BIC_LTR = 0xf */
-	mt76_rmw_field(dev, 0x15a0c, 0xf << 28, 0xf);
+	mt76_rmw_field(dev, 0x15a0c, 0xfU << 28, 0xf);
 
 	/* RG_SSUSB_CDR_BR_PE1D = 0x3 */
 	mt76_rmw_field(dev, 0x15c58, 0x3 << 6, 0x3);
@@ -1270,6 +1270,8 @@ static int nvme_process_ns_desc(struct nvme_ctrl *ctrl, struct nvme_ns_ids *ids,
 				 warn_str, cur->nidl);
 			return -1;
 		}
+		if (ctrl->quirks & NVME_QUIRK_BOGUS_NID)
+			return NVME_NIDT_EUI64_LEN;
 		memcpy(ids->eui64, data + sizeof(*cur), NVME_NIDT_EUI64_LEN);
 		return NVME_NIDT_EUI64_LEN;
 	case NVME_NIDT_NGUID:
@@ -1278,6 +1280,8 @@ static int nvme_process_ns_desc(struct nvme_ctrl *ctrl, struct nvme_ns_ids *ids,
 				 warn_str, cur->nidl);
 			return -1;
 		}
+		if (ctrl->quirks & NVME_QUIRK_BOGUS_NID)
+			return NVME_NIDT_NGUID_LEN;
 		memcpy(ids->nguid, data + sizeof(*cur), NVME_NIDT_NGUID_LEN);
 		return NVME_NIDT_NGUID_LEN;
 	case NVME_NIDT_UUID:
@@ -1286,6 +1290,8 @@ static int nvme_process_ns_desc(struct nvme_ctrl *ctrl, struct nvme_ns_ids *ids,
 				 warn_str, cur->nidl);
 			return -1;
 		}
+		if (ctrl->quirks & NVME_QUIRK_BOGUS_NID)
+			return NVME_NIDT_UUID_LEN;
 		uuid_copy(&ids->uuid, data + sizeof(*cur));
 		return NVME_NIDT_UUID_LEN;
 	case NVME_NIDT_CSI:
@@ -1381,12 +1387,18 @@ static int nvme_identify_ns(struct nvme_ctrl *ctrl, unsigned nsid,
 	if ((*id)->ncap == 0) /* namespace not allocated or attached */
 		goto out_free_id;
 
+
+	if (ctrl->quirks & NVME_QUIRK_BOGUS_NID) {
+		dev_info(ctrl->device,
+			 "Ignoring bogus Namespace Identifiers\n");
+	} else {
 	if (ctrl->vs >= NVME_VS(1, 1, 0) &&
 	    !memchr_inv(ids->eui64, 0, sizeof(ids->eui64)))
 		memcpy(ids->eui64, (*id)->eui64, sizeof(ids->eui64));
 	if (ctrl->vs >= NVME_VS(1, 2, 0) &&
 	    !memchr_inv(ids->nguid, 0, sizeof(ids->nguid)))
 		memcpy(ids->nguid, (*id)->nguid, sizeof(ids->nguid));
+	}
 
 	return 0;
 
@@ -150,6 +150,11 @@ enum nvme_quirks {
 	 * encoding the generation sequence number.
 	 */
 	NVME_QUIRK_SKIP_CID_GEN			= (1 << 17),
+
+	/*
+	 * Reports garbage in the namespace identifiers (eui64, nguid, uuid).
+	 */
+	NVME_QUIRK_BOGUS_NID			= (1 << 18),
 };
 
 /*
@@ -3212,7 +3212,10 @@ static const struct pci_device_id nvme_id_table[] = {
 		.driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },
 	{ PCI_VDEVICE(INTEL, 0x5845),	/* Qemu emulated controller */
 		.driver_data = NVME_QUIRK_IDENTIFY_CNS |
-				NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+				NVME_QUIRK_DISABLE_WRITE_ZEROES |
+				NVME_QUIRK_BOGUS_NID, },
+	{ PCI_VDEVICE(REDHAT, 0x0010),	/* Qemu emulated controller */
+		.driver_data = NVME_QUIRK_BOGUS_NID, },
 	{ PCI_DEVICE(0x126f, 0x2263),	/* Silicon Motion unidentified */
 		.driver_data = NVME_QUIRK_NO_NS_DESC_LIST, },
 	{ PCI_DEVICE(0x1bb1, 0x0100),	/* Seagate Nytro Flash Storage */
@@ -398,6 +398,9 @@ validate_group(struct perf_event *event)
 	if (!validate_event(event->pmu, &fake_pmu, leader))
 		return -EINVAL;
 
+	if (event == leader)
+		return 0;
+
 	for_each_sibling_event(sibling, leader) {
 		if (!validate_event(event->pmu, &fake_pmu, sibling))
 			return -EINVAL;
@@ -487,12 +490,7 @@ __hw_perf_event_init(struct perf_event *event)
 		local64_set(&hwc->period_left, hwc->sample_period);
 	}
 
-	if (event->group_leader != event) {
-		if (validate_group(event) != 0)
-			return -EINVAL;
-	}
-
-	return 0;
+	return validate_group(event);
 }
 
 static int armpmu_event_init(struct perf_event *event)
@@ -1121,8 +1121,6 @@ static void kbd_led_set(struct led_classdev *led_cdev,
 
 	if (value > samsung->kbd_led.max_brightness)
 		value = samsung->kbd_led.max_brightness;
-	else if (value < 0)
-		value = 0;
 
 	samsung->kbd_led_wk = value;
 	queue_work(samsung->led_workqueue, &samsung->kbd_led_work);
@@ -20,6 +20,7 @@ static int tegra_bpmp_reset_common(struct reset_controller_dev *rstc,
 	struct tegra_bpmp *bpmp = to_tegra_bpmp(rstc);
 	struct mrq_reset_request request;
 	struct tegra_bpmp_message msg;
+	int err;
 
 	memset(&request, 0, sizeof(request));
 	request.cmd = command;
@@ -30,7 +31,13 @@ static int tegra_bpmp_reset_common(struct reset_controller_dev *rstc,
 	msg.tx.data = &request;
 	msg.tx.size = sizeof(request);
 
-	return tegra_bpmp_transfer(bpmp, &msg);
+	err = tegra_bpmp_transfer(bpmp, &msg);
+	if (err)
+		return err;
+	if (msg.rx.ret)
+		return -EINVAL;
+
+	return 0;
 }
 
 static int tegra_bpmp_reset_module(struct reset_controller_dev *rstc,
@@ -828,6 +828,37 @@ static int qedi_task_xmit(struct iscsi_task *task)
 	return qedi_iscsi_send_ioreq(task);
 }
 
+static void qedi_offload_work(struct work_struct *work)
+{
+	struct qedi_endpoint *qedi_ep =
+		container_of(work, struct qedi_endpoint, offload_work);
+	struct qedi_ctx *qedi;
+	int wait_delay = 5 * HZ;
+	int ret;
+
+	qedi = qedi_ep->qedi;
+
+	ret = qedi_iscsi_offload_conn(qedi_ep);
+	if (ret) {
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "offload error: iscsi_cid=%u, qedi_ep=%p, ret=%d\n",
+			 qedi_ep->iscsi_cid, qedi_ep, ret);
+		qedi_ep->state = EP_STATE_OFLDCONN_FAILED;
+		return;
+	}
+
+	ret = wait_event_interruptible_timeout(qedi_ep->tcp_ofld_wait,
+					       (qedi_ep->state ==
+					       EP_STATE_OFLDCONN_COMPL),
+					       wait_delay);
+	if (ret <= 0 || qedi_ep->state != EP_STATE_OFLDCONN_COMPL) {
+		qedi_ep->state = EP_STATE_OFLDCONN_FAILED;
+		QEDI_ERR(&qedi->dbg_ctx,
+			 "Offload conn TIMEOUT iscsi_cid=%u, qedi_ep=%p\n",
+			 qedi_ep->iscsi_cid, qedi_ep);
+	}
+}
+
 static struct iscsi_endpoint *
 qedi_ep_connect(struct Scsi_Host *shost, struct sockaddr *dst_addr,
 		int non_blocking)
@@ -876,6 +907,7 @@ qedi_ep_connect(struct Scsi_Host *shost, struct sockaddr *dst_addr,
 	}
 	qedi_ep = ep->dd_data;
 	memset(qedi_ep, 0, sizeof(struct qedi_endpoint));
+	INIT_WORK(&qedi_ep->offload_work, qedi_offload_work);
 	qedi_ep->state = EP_STATE_IDLE;
 	qedi_ep->iscsi_cid = (u32)-1;
 	qedi_ep->qedi = qedi;
@@ -1026,12 +1058,11 @@ static void qedi_ep_disconnect(struct iscsi_endpoint *ep)
 	qedi_ep = ep->dd_data;
 	qedi = qedi_ep->qedi;
 
+	flush_work(&qedi_ep->offload_work);
+
 	if (qedi_ep->state == EP_STATE_OFLDCONN_START)
 		goto ep_exit_recover;
 
-	if (qedi_ep->state != EP_STATE_OFLDCONN_NONE)
-		flush_work(&qedi_ep->offload_work);
-
 	if (qedi_ep->conn) {
 		qedi_conn = qedi_ep->conn;
 		conn = qedi_conn->cls_conn->dd_data;
@@ -1196,37 +1227,6 @@ static int qedi_data_avail(struct qedi_ctx *qedi, u16 vlanid)
 	return rc;
 }
 
-static void qedi_offload_work(struct work_struct *work)
-{
-	struct qedi_endpoint *qedi_ep =
-		container_of(work, struct qedi_endpoint, offload_work);
-	struct qedi_ctx *qedi;
-	int wait_delay = 5 * HZ;
-	int ret;
-
-	qedi = qedi_ep->qedi;
-
-	ret = qedi_iscsi_offload_conn(qedi_ep);
-	if (ret) {
-		QEDI_ERR(&qedi->dbg_ctx,
-			 "offload error: iscsi_cid=%u, qedi_ep=%p, ret=%d\n",
-			 qedi_ep->iscsi_cid, qedi_ep, ret);
-		qedi_ep->state = EP_STATE_OFLDCONN_FAILED;
-		return;
-	}
-
-	ret = wait_event_interruptible_timeout(qedi_ep->tcp_ofld_wait,
-					       (qedi_ep->state ==
-					       EP_STATE_OFLDCONN_COMPL),
-					       wait_delay);
-	if ((ret <= 0) || (qedi_ep->state != EP_STATE_OFLDCONN_COMPL)) {
-		qedi_ep->state = EP_STATE_OFLDCONN_FAILED;
-		QEDI_ERR(&qedi->dbg_ctx,
-			 "Offload conn TIMEOUT iscsi_cid=%u, qedi_ep=%p\n",
-			 qedi_ep->iscsi_cid, qedi_ep);
-	}
-}
-
 static int qedi_set_path(struct Scsi_Host *shost, struct iscsi_path *path_data)
 {
 	struct qedi_ctx *qedi;
@@ -1342,7 +1342,6 @@ static int qedi_set_path(struct Scsi_Host *shost, struct iscsi_path *path_data)
 			  qedi_ep->dst_addr, qedi_ep->dst_port);
 	}
 
-	INIT_WORK(&qedi_ep->offload_work, qedi_offload_work);
 	queue_work(qedi->offload_thread, &qedi_ep->offload_work);
 
 	ret = 0;
@@ -277,6 +277,9 @@ static int atmel_qspi_find_mode(const struct spi_mem_op *op)
 static bool atmel_qspi_supports_op(struct spi_mem *mem,
 				   const struct spi_mem_op *op)
 {
+	if (!spi_mem_default_supports_op(mem, op))
+		return false;
+
 	if (atmel_qspi_find_mode(op) < 0)
 		return false;
 
@@ -895,7 +895,17 @@ static int __maybe_unused mtk_nor_suspend(struct device *dev)
 
 static int __maybe_unused mtk_nor_resume(struct device *dev)
 {
-	return pm_runtime_force_resume(dev);
+	struct spi_controller *ctlr = dev_get_drvdata(dev);
+	struct mtk_nor *sp = spi_controller_get_devdata(ctlr);
+	int ret;
+
+	ret = pm_runtime_force_resume(dev);
+	if (ret)
+		return ret;
+
+	mtk_nor_init(sp);
+
+	return 0;
 }
 
 static const struct dev_pm_ops mtk_nor_pm_ops = {
@ -898,7 +898,7 @@ cifs_loose_read_iter(struct kiocb *iocb, struct iov_iter *iter)
|
|||||||
ssize_t rc;
|
ssize_t rc;
|
||||||
struct inode *inode = file_inode(iocb->ki_filp);
|
struct inode *inode = file_inode(iocb->ki_filp);
|
||||||
|
|
||||||
if (iocb->ki_filp->f_flags & O_DIRECT)
|
if (iocb->ki_flags & IOCB_DIRECT)
|
||||||
return cifs_user_readv(iocb, iter);
|
return cifs_user_readv(iocb, iter);
|
||||||
|
|
||||||
rc = cifs_revalidate_mapping(inode);
|
rc = cifs_revalidate_mapping(inode);
|
||||||
|
@@ -2159,6 +2159,10 @@ static inline int ext4_forced_shutdown(struct ext4_sb_info *sbi)
  * Structure of a directory entry
  */
 #define EXT4_NAME_LEN 255
+/*
+ * Base length of the ext4 directory entry excluding the name length
+ */
+#define EXT4_BASE_DIR_LEN (sizeof(struct ext4_dir_entry_2) - EXT4_NAME_LEN)
 
 struct ext4_dir_entry {
 	__le32	inode;			/* Inode number */
@@ -2915,7 +2919,7 @@ extern int ext4_inode_attach_jinode(struct inode *inode);
 extern int ext4_can_truncate(struct inode *inode);
 extern int ext4_truncate(struct inode *);
 extern int ext4_break_layouts(struct inode *);
-extern int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length);
+extern int ext4_punch_hole(struct file *file, loff_t offset, loff_t length);
 extern void ext4_set_inode_flags(struct inode *, bool init);
 extern int ext4_alloc_da_blocks(struct inode *inode);
 extern void ext4_set_aops(struct inode *inode);
@@ -4498,9 +4498,9 @@ static int ext4_alloc_file_blocks(struct file *file, ext4_lblk_t offset,
 	return ret > 0 ? ret2 : ret;
 }
 
-static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len);
+static int ext4_collapse_range(struct file *file, loff_t offset, loff_t len);
 
-static int ext4_insert_range(struct inode *inode, loff_t offset, loff_t len);
+static int ext4_insert_range(struct file *file, loff_t offset, loff_t len);
 
 static long ext4_zero_range(struct file *file, loff_t offset,
 			    loff_t len, int mode)
@@ -4571,6 +4571,10 @@ static long ext4_zero_range(struct file *file, loff_t offset,
 	/* Wait all existing dio workers, newcomers will block on i_mutex */
 	inode_dio_wait(inode);
 
+	ret = file_modified(file);
+	if (ret)
+		goto out_mutex;
+
 	/* Preallocate the range including the unaligned edges */
 	if (partial_begin || partial_end) {
 		ret = ext4_alloc_file_blocks(file,
@@ -4689,7 +4693,7 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
 	ext4_fc_start_update(inode);
 
 	if (mode & FALLOC_FL_PUNCH_HOLE) {
-		ret = ext4_punch_hole(inode, offset, len);
+		ret = ext4_punch_hole(file, offset, len);
 		goto exit;
 	}
 
@@ -4698,12 +4702,12 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
 		goto exit;
 
 	if (mode & FALLOC_FL_COLLAPSE_RANGE) {
-		ret = ext4_collapse_range(inode, offset, len);
+		ret = ext4_collapse_range(file, offset, len);
 		goto exit;
 	}
 
 	if (mode & FALLOC_FL_INSERT_RANGE) {
-		ret = ext4_insert_range(inode, offset, len);
+		ret = ext4_insert_range(file, offset, len);
 		goto exit;
 	}
 
@@ -4739,6 +4743,10 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len)
 	/* Wait all existing dio workers, newcomers will block on i_mutex */
 	inode_dio_wait(inode);
 
+	ret = file_modified(file);
+	if (ret)
+		goto out;
+
 	ret = ext4_alloc_file_blocks(file, lblk, max_blocks, new_size, flags);
 	if (ret)
 		goto out;
@@ -5241,8 +5249,9 @@ ext4_ext_shift_extents(struct inode *inode, handle_t *handle,
  * This implements the fallocate's collapse range functionality for ext4
  * Returns: 0 and non-zero on error.
  */
-static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
+static int ext4_collapse_range(struct file *file, loff_t offset, loff_t len)
 {
+	struct inode *inode = file_inode(file);
 	struct super_block *sb = inode->i_sb;
 	ext4_lblk_t punch_start, punch_stop;
 	handle_t *handle;
@@ -5293,6 +5302,10 @@ static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
 	/* Wait for existing dio to complete */
 	inode_dio_wait(inode);
 
+	ret = file_modified(file);
+	if (ret)
+		goto out_mutex;
+
 	/*
 	 * Prevent page faults from reinstantiating pages we have released from
 	 * page cache.
@@ -5387,8 +5400,9 @@ static int ext4_collapse_range(struct inode *inode, loff_t offset, loff_t len)
  * by len bytes.
  * Returns 0 on success, error otherwise.
  */
-static int ext4_insert_range(struct inode *inode, loff_t offset, loff_t len)
+static int ext4_insert_range(struct file *file, loff_t offset, loff_t len)
 {
+	struct inode *inode = file_inode(file);
 	struct super_block *sb = inode->i_sb;
 	handle_t *handle;
 	struct ext4_ext_path *path;
@@ -5444,6 +5458,10 @@ static int ext4_insert_range(struct inode *inode, loff_t offset, loff_t len)
 	/* Wait for existing dio to complete */
 	inode_dio_wait(inode);
 
+	ret = file_modified(file);
+	if (ret)
+		goto out_mutex;
+
 	/*
 	 * Prevent page faults from reinstantiating pages we have released from
 	 * page cache.
@@ -4060,12 +4060,14 @@ int ext4_break_layouts(struct inode *inode)
  * Returns: 0 on success or negative on failure
  */
 
-int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
+int ext4_punch_hole(struct file *file, loff_t offset, loff_t length)
 {
+	struct inode *inode = file_inode(file);
 	struct super_block *sb = inode->i_sb;
 	ext4_lblk_t first_block, stop_block;
 	struct address_space *mapping = inode->i_mapping;
-	loff_t first_block_offset, last_block_offset;
+	loff_t first_block_offset, last_block_offset, max_length;
+	struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
 	handle_t *handle;
 	unsigned int credits;
 	int ret = 0, ret2 = 0;
@@ -4108,6 +4110,14 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
 		   offset;
 	}
 
+	/*
+	 * For punch hole the length + offset needs to be within one block
+	 * before last range. Adjust the length if it goes beyond that limit.
+	 */
+	max_length = sbi->s_bitmap_maxbytes - inode->i_sb->s_blocksize;
+	if (offset + length > max_length)
+		length = max_length - offset;
+
 	if (offset & (sb->s_blocksize - 1) ||
 	    (offset + length) & (sb->s_blocksize - 1)) {
 		/*
@@ -4123,6 +4133,10 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length)
 	/* Wait all existing dio workers, newcomers will block on i_mutex */
 	inode_dio_wait(inode);
 
+	ret = file_modified(file);
+	if (ret)
+		goto out_mutex;
+
 	/*
 	 * Prevent page faults from reinstantiating pages we have released from
 	 * page cache.
@@ -1461,10 +1461,10 @@ int ext4_search_dir(struct buffer_head *bh, char *search_buf, int buf_size,
 
 	de = (struct ext4_dir_entry_2 *)search_buf;
 	dlimit = search_buf + buf_size;
-	while ((char *) de < dlimit) {
+	while ((char *) de < dlimit - EXT4_BASE_DIR_LEN) {
 		/* this code is executed quadratically often */
 		/* do minimal checking `by hand' */
-		if ((char *) de + de->name_len <= dlimit &&
+		if (de->name + de->name_len <= dlimit &&
 		    ext4_match(dir, fname, de)) {
 			/* found a match - just to be sure, do
 			 * a full check */
@@ -137,8 +137,10 @@ static void ext4_finish_bio(struct bio *bio)
 				continue;
 			}
 			clear_buffer_async_write(bh);
-			if (bio->bi_status)
+			if (bio->bi_status) {
+				set_buffer_write_io_error(bh);
 				buffer_io_error(bh);
+			}
 		} while ((bh = bh->b_this_page) != head);
 		spin_unlock_irqrestore(&head->b_uptodate_lock, flags);
 		if (!under_io) {
@@ -3870,9 +3870,11 @@ static int count_overhead(struct super_block *sb, ext4_group_t grp,
 	ext4_fsblk_t	first_block, last_block, b;
 	ext4_group_t	i, ngroups = ext4_get_groups_count(sb);
 	int		s, j, count = 0;
+	int		has_super = ext4_bg_has_super(sb, grp);
 
 	if (!ext4_has_feature_bigalloc(sb))
-		return (ext4_bg_has_super(sb, grp) + ext4_bg_num_gdb(sb, grp) +
+		return (has_super + ext4_bg_num_gdb(sb, grp) +
+			(has_super ? le16_to_cpu(sbi->s_es->s_reserved_gdt_blocks) : 0) +
 			sbi->s_itb_per_group + 2);
 
 	first_block = le32_to_cpu(sbi->s_es->s_first_data_block) +
@@ -4925,9 +4927,18 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
 	 * Get the # of file system overhead blocks from the
 	 * superblock if present.
 	 */
-	if (es->s_overhead_clusters)
-		sbi->s_overhead = le32_to_cpu(es->s_overhead_clusters);
-	else {
+	sbi->s_overhead = le32_to_cpu(es->s_overhead_clusters);
+	/* ignore the precalculated value if it is ridiculous */
+	if (sbi->s_overhead > ext4_blocks_count(es))
+		sbi->s_overhead = 0;
+	/*
+	 * If the bigalloc feature is not enabled recalculating the
+	 * overhead doesn't take long, so we might as well just redo
+	 * it to make sure we are using the correct value.
+	 */
+	if (!ext4_has_feature_bigalloc(sb))
+		sbi->s_overhead = 0;
+	if (sbi->s_overhead == 0) {
 		err = ext4_calculate_overhead(sb);
 		if (err)
 			goto failed_mount_wq;
@@ -906,15 +906,15 @@ static int read_rindex_entry(struct gfs2_inode *ip)
 	rgd->rd_bitbytes = be32_to_cpu(buf.ri_bitbytes);
 	spin_lock_init(&rgd->rd_rsspin);
 
-	error = compute_bitstructs(rgd);
-	if (error)
-		goto fail;
-
 	error = gfs2_glock_get(sdp, rgd->rd_addr,
 			       &gfs2_rgrp_glops, CREATE, &rgd->rd_gl);
 	if (error)
 		goto fail;
 
+	error = compute_bitstructs(rgd);
+	if (error)
+		goto fail_glock;
+
 	rgd->rd_rgl = (struct gfs2_rgrp_lvb *)rgd->rd_gl->gl_lksb.sb_lvbptr;
 	rgd->rd_flags &= ~(GFS2_RDF_UPTODATE | GFS2_RDF_PREFERRED);
 	if (rgd->rd_data > sdp->sd_max_rg_data)
@@ -928,6 +928,7 @@ static int read_rindex_entry(struct gfs2_inode *ip)
 	}
 
 	error = 0; /* someone else read in the rgrp; free it and ignore it */
+fail_glock:
 	gfs2_glock_put(rgd->rd_gl);
 
 fail:
@@ -206,7 +206,7 @@ hugetlb_get_unmapped_area_bottomup(struct file *file, unsigned long addr,
 	info.flags = 0;
 	info.length = len;
 	info.low_limit = current->mm->mmap_base;
-	info.high_limit = TASK_SIZE;
+	info.high_limit = arch_get_mmap_end(addr);
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
 	info.align_offset = 0;
 	return vm_unmapped_area(&info);
@@ -222,7 +222,7 @@ hugetlb_get_unmapped_area_topdown(struct file *file, unsigned long addr,
 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
 	info.length = len;
 	info.low_limit = max(PAGE_SIZE, mmap_min_addr);
-	info.high_limit = current->mm->mmap_base;
+	info.high_limit = arch_get_mmap_base(addr, current->mm->mmap_base);
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
 	info.align_offset = 0;
 	addr = vm_unmapped_area(&info);
@@ -237,7 +237,7 @@ hugetlb_get_unmapped_area_topdown(struct file *file, unsigned long addr,
 		VM_BUG_ON(addr != -ENOMEM);
 		info.flags = 0;
 		info.low_limit = current->mm->mmap_base;
-		info.high_limit = TASK_SIZE;
+		info.high_limit = arch_get_mmap_end(addr);
 		addr = vm_unmapped_area(&info);
 	}
 
@@ -251,6 +251,7 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
 	struct hstate *h = hstate_file(file);
+	const unsigned long mmap_end = arch_get_mmap_end(addr);
 
 	if (len & ~huge_page_mask(h))
 		return -EINVAL;
@@ -266,7 +267,7 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 	if (addr) {
 		addr = ALIGN(addr, huge_page_size(h));
 		vma = find_vma(mm, addr);
-		if (TASK_SIZE - len >= addr &&
+		if (mmap_end - len >= addr &&
 		    (!vma || addr + len <= vm_start_gap(vma)))
 			return addr;
 	}
@@ -501,7 +501,6 @@ void jbd2_journal_commit_transaction(journal_t *journal)
 	}
 	spin_unlock(&commit_transaction->t_handle_lock);
 	commit_transaction->t_state = T_SWITCH;
-	write_unlock(&journal->j_state_lock);
 
 	J_ASSERT (atomic_read(&commit_transaction->t_outstanding_credits) <=
 		  journal->j_max_transaction_buffers);
@@ -521,6 +520,8 @@ void jbd2_journal_commit_transaction(journal_t *journal)
 	 * has reserved. This is consistent with the existing behaviour
 	 * that multiple jbd2_journal_get_write_access() calls to the same
 	 * buffer are perfectly permissible.
+	 * We use journal->j_state_lock here to serialize processing of
+	 * t_reserved_list with eviction of buffers from journal_unmap_buffer().
 	 */
 	while (commit_transaction->t_reserved_list) {
 		jh = commit_transaction->t_reserved_list;
@@ -540,6 +541,7 @@ void jbd2_journal_commit_transaction(journal_t *journal)
 		jbd2_journal_refile_buffer(journal, jh);
 	}
 
+	write_unlock(&journal->j_state_lock);
 	/*
 	 * Now try to drop any written-back buffers from the journal's
 	 * checkpoint lists. We do this *before* commit because it potentially
fs/stat.c
@@ -306,9 +306,6 @@ SYSCALL_DEFINE2(fstat, unsigned int, fd, struct __old_kernel_stat __user *, stat
 #  define choose_32_64(a,b) b
 #endif
 
-#define valid_dev(x)  choose_32_64(old_valid_dev(x),true)
-#define encode_dev(x) choose_32_64(old_encode_dev,new_encode_dev)(x)
-
 #ifndef INIT_STRUCT_STAT_PADDING
 #  define INIT_STRUCT_STAT_PADDING(st) memset(&st, 0, sizeof(st))
 #endif
@@ -317,7 +314,9 @@ static int cp_new_stat(struct kstat *stat, struct stat __user *statbuf)
 {
 	struct stat tmp;
 
-	if (!valid_dev(stat->dev) || !valid_dev(stat->rdev))
+	if (sizeof(tmp.st_dev) < 4 && !old_valid_dev(stat->dev))
+		return -EOVERFLOW;
+	if (sizeof(tmp.st_rdev) < 4 && !old_valid_dev(stat->rdev))
 		return -EOVERFLOW;
 #if BITS_PER_LONG == 32
 	if (stat->size > MAX_NON_LFS)
@@ -325,7 +324,7 @@ static int cp_new_stat(struct kstat *stat, struct stat __user *statbuf)
 #endif
 
 	INIT_STRUCT_STAT_PADDING(tmp);
-	tmp.st_dev = encode_dev(stat->dev);
+	tmp.st_dev = new_encode_dev(stat->dev);
 	tmp.st_ino = stat->ino;
 	if (sizeof(tmp.st_ino) < sizeof(stat->ino) && tmp.st_ino != stat->ino)
 		return -EOVERFLOW;
@@ -335,7 +334,7 @@ static int cp_new_stat(struct kstat *stat, struct stat __user *statbuf)
 		return -EOVERFLOW;
 	SET_UID(tmp.st_uid, from_kuid_munged(current_user_ns(), stat->uid));
 	SET_GID(tmp.st_gid, from_kgid_munged(current_user_ns(), stat->gid));
-	tmp.st_rdev = encode_dev(stat->rdev);
+	tmp.st_rdev = new_encode_dev(stat->rdev);
 	tmp.st_size = stat->size;
 	tmp.st_atime = stat->atime.tv_sec;
 	tmp.st_mtime = stat->mtime.tv_sec;
@@ -616,11 +615,13 @@ static int cp_compat_stat(struct kstat *stat, struct compat_stat __user *ubuf)
 {
 	struct compat_stat tmp;
 
-	if (!old_valid_dev(stat->dev) || !old_valid_dev(stat->rdev))
+	if (sizeof(tmp.st_dev) < 4 && !old_valid_dev(stat->dev))
+		return -EOVERFLOW;
+	if (sizeof(tmp.st_rdev) < 4 && !old_valid_dev(stat->rdev))
 		return -EOVERFLOW;
 
 	memset(&tmp, 0, sizeof(tmp));
-	tmp.st_dev = old_encode_dev(stat->dev);
+	tmp.st_dev = new_encode_dev(stat->dev);
 	tmp.st_ino = stat->ino;
 	if (sizeof(tmp.st_ino) < sizeof(stat->ino) && tmp.st_ino != stat->ino)
 		return -EOVERFLOW;
@@ -630,7 +631,7 @@ static int cp_compat_stat(struct kstat *stat, struct compat_stat __user *ubuf)
 		return -EOVERFLOW;
 	SET_UID(tmp.st_uid, from_kuid_munged(current_user_ns(), stat->uid));
 	SET_GID(tmp.st_gid, from_kgid_munged(current_user_ns(), stat->gid));
-	tmp.st_rdev = old_encode_dev(stat->rdev);
+	tmp.st_rdev = new_encode_dev(stat->rdev);
 	if ((u64) stat->size > MAX_NON_LFS)
 		return -EOVERFLOW;
 	tmp.st_size = stat->size;
@@ -127,7 +127,7 @@ static inline bool is_multicast_ether_addr(const u8 *addr)
 #endif
 }
 
-static inline bool is_multicast_ether_addr_64bits(const u8 addr[6+2])
+static inline bool is_multicast_ether_addr_64bits(const u8 *addr)
 {
 #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64
 #ifdef __BIG_ENDIAN
@@ -352,8 +352,7 @@ static inline bool ether_addr_equal(const u8 *addr1, const u8 *addr2)
  * Please note that alignment of addr1 & addr2 are only guaranteed to be 16 bits.
  */
 
-static inline bool ether_addr_equal_64bits(const u8 addr1[6+2],
-					   const u8 addr2[6+2])
+static inline bool ether_addr_equal_64bits(const u8 *addr1, const u8 *addr2)
 {
 #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64
 	u64 fold = (*(const u64 *)addr1) ^ (*(const u64 *)addr2);
@@ -1346,6 +1346,7 @@ struct task_struct {
 	int				pagefault_disabled;
 #ifdef CONFIG_MMU
 	struct task_struct		*oom_reaper_list;
+	struct timer_list		oom_reaper_timer;
 #endif
 #ifdef CONFIG_VMAP_STACK
 	struct vm_struct		*stack_vm_area;
@@ -106,6 +106,14 @@ static inline void mm_update_next_owner(struct mm_struct *mm)
 #endif /* CONFIG_MEMCG */
 
 #ifdef CONFIG_MMU
+#ifndef arch_get_mmap_end
+#define arch_get_mmap_end(addr)	(TASK_SIZE)
+#endif
+
+#ifndef arch_get_mmap_base
+#define arch_get_mmap_base(addr, base) (base)
+#endif
+
 extern void arch_pick_mmap_layout(struct mm_struct *mm,
 				  struct rlimit *rlim_stack);
 extern unsigned long
@@ -4,8 +4,6 @@
 
 #include <linux/skbuff.h>
 
-#define ESP_SKB_FRAG_MAXSIZE (PAGE_SIZE << SKB_FRAG_PAGE_ORDER)
-
 struct ip_esp_hdr;
 
 static inline struct ip_esp_hdr *ip_esp_hdr(const struct sk_buff *skb)
@@ -79,7 +79,7 @@ struct netns_ipv6 {
 	struct dst_ops		ip6_dst_ops;
 	rwlock_t		fib6_walker_lock;
 	spinlock_t		fib6_gc_lock;
-	unsigned int		ip6_rt_gc_expire;
+	atomic_t		ip6_rt_gc_expire;
 	unsigned long		ip6_rt_last_gc;
 #ifdef CONFIG_IPV6_MULTIPLE_TABLES
 	unsigned int		fib6_rules_require_fldissect;
@@ -6175,7 +6175,7 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 again:
 	mutex_lock(&event->mmap_mutex);
 	if (event->rb) {
-		if (event->rb->nr_pages != nr_pages) {
+		if (data_page_nr(event->rb) != nr_pages) {
 			ret = -EINVAL;
 			goto unlock;
 		}
@@ -116,6 +116,11 @@ static inline int page_order(struct perf_buffer *rb)
 }
 #endif
 
+static inline int data_page_nr(struct perf_buffer *rb)
+{
+	return rb->nr_pages << page_order(rb);
+}
+
 static inline unsigned long perf_data_size(struct perf_buffer *rb)
 {
 	return rb->nr_pages << (PAGE_SHIFT + page_order(rb));
@@ -856,11 +856,6 @@ void rb_free(struct perf_buffer *rb)
 }
 
 #else
-static int data_page_nr(struct perf_buffer *rb)
-{
-	return rb->nr_pages << page_order(rb);
-}
-
 static struct page *
 __perf_mmap_to_page(struct perf_buffer *rb, unsigned long pgoff)
 {
@@ -3759,11 +3759,11 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
 
 	se->avg.runnable_sum = se->avg.runnable_avg * divider;
 
-	se->avg.load_sum = divider;
-	if (se_weight(se)) {
-		se->avg.load_sum =
-			div_u64(se->avg.load_avg * se->avg.load_sum, se_weight(se));
-	}
+	se->avg.load_sum = se->avg.load_avg * divider;
+	if (se_weight(se) < se->avg.load_sum)
+		se->avg.load_sum = div_u64(se->avg.load_sum, se_weight(se));
+	else
+		se->avg.load_sum = 1;
 
 	enqueue_load_avg(cfs_rq, se);
 	cfs_rq->avg.util_avg += se->avg.util_avg;
@@ -1219,6 +1219,13 @@ static void
 stacktrace_trigger(struct event_trigger_data *data, void *rec,
 		   struct ring_buffer_event *event)
 {
+	struct trace_event_file *file = data->private_data;
+	unsigned long flags;
+
+	if (file) {
+		local_save_flags(flags);
+		__trace_stack(file->tr, flags, STACK_SKIP, preempt_count());
+	} else
 		trace_dump_stack(STACK_SKIP);
 }
 
@@ -2218,14 +2218,6 @@ unsigned long vm_unmapped_area(struct vm_unmapped_area_info *info)
 }
 EXPORT_SYMBOL_GPL(vm_unmapped_area);
 
-#ifndef arch_get_mmap_end
-#define arch_get_mmap_end(addr)	(TASK_SIZE)
-#endif
-
-#ifndef arch_get_mmap_base
-#define arch_get_mmap_base(addr, base) (base)
-#endif
-
 /* Get an address range which is currently unmapped.
  * For shmat() with addr=0.
  *
@@ -1081,6 +1081,18 @@ int mmu_interval_notifier_insert_locked(
 }
 EXPORT_SYMBOL_GPL(mmu_interval_notifier_insert_locked);
 
+static bool
+mmu_interval_seq_released(struct mmu_notifier_subscriptions *subscriptions,
+			  unsigned long seq)
+{
+	bool ret;
+
+	spin_lock(&subscriptions->lock);
+	ret = subscriptions->invalidate_seq != seq;
+	spin_unlock(&subscriptions->lock);
+	return ret;
+}
+
 /**
  * mmu_interval_notifier_remove - Remove a interval notifier
  * @interval_sub: Interval subscription to unregister
@@ -1128,7 +1140,7 @@ void mmu_interval_notifier_remove(struct mmu_interval_notifier *interval_sub)
 	lock_map_release(&__mmu_notifier_invalidate_range_start_map);
 	if (seq)
 		wait_event(subscriptions->wq,
-			   READ_ONCE(subscriptions->invalidate_seq) != seq);
+			   mmu_interval_seq_released(subscriptions, seq));
 
 	/* pairs with mmgrab in mmu_interval_notifier_insert() */
 	mmdrop(mm);
@@ -673,7 +673,7 @@ static void oom_reap_task(struct task_struct *tsk)
 	 */
 	set_bit(MMF_OOM_SKIP, &mm->flags);
 
-	/* Drop a reference taken by wake_oom_reaper */
+	/* Drop a reference taken by queue_oom_reaper */
 	put_task_struct(tsk);
 }
 
@@ -683,12 +683,12 @@ static int oom_reaper(void *unused)
 		struct task_struct *tsk = NULL;
 
 		wait_event_freezable(oom_reaper_wait, oom_reaper_list != NULL);
-		spin_lock(&oom_reaper_lock);
+		spin_lock_irq(&oom_reaper_lock);
 		if (oom_reaper_list != NULL) {
 			tsk = oom_reaper_list;
 			oom_reaper_list = tsk->oom_reaper_list;
 		}
-		spin_unlock(&oom_reaper_lock);
+		spin_unlock_irq(&oom_reaper_lock);
 
 		if (tsk)
 			oom_reap_task(tsk);
@@ -697,20 +697,46 @@ static int oom_reaper(void *unused)
 	return 0;
 }
 
-static void wake_oom_reaper(struct task_struct *tsk)
+static void wake_oom_reaper(struct timer_list *timer)
+{
+	struct task_struct *tsk = container_of(timer, struct task_struct,
+			oom_reaper_timer);
+	struct mm_struct *mm = tsk->signal->oom_mm;
+	unsigned long flags;
+
+	/* The victim managed to terminate on its own - see exit_mmap */
+	if (test_bit(MMF_OOM_SKIP, &mm->flags)) {
+		put_task_struct(tsk);
+		return;
+	}
+
+	spin_lock_irqsave(&oom_reaper_lock, flags);
+	tsk->oom_reaper_list = oom_reaper_list;
+	oom_reaper_list = tsk;
+	spin_unlock_irqrestore(&oom_reaper_lock, flags);
+	trace_wake_reaper(tsk->pid);
+	wake_up(&oom_reaper_wait);
+}
+
+/*
+ * Give the OOM victim time to exit naturally before invoking the oom_reaping.
+ * The timers timeout is arbitrary... the longer it is, the longer the worst
+ * case scenario for the OOM can take. If it is too small, the oom_reaper can
+ * get in the way and release resources needed by the process exit path.
+ * e.g. The futex robust list can sit in Anon|Private memory that gets reaped
+ * before the exit path is able to wake the futex waiters.
+ */
+#define OOM_REAPER_DELAY (2*HZ)
+static void queue_oom_reaper(struct task_struct *tsk)
 {
 	/* mm is already queued? */
 	if (test_and_set_bit(MMF_OOM_REAP_QUEUED, &tsk->signal->oom_mm->flags))
 		return;
 
 	get_task_struct(tsk);
+	timer_setup(&tsk->oom_reaper_timer, wake_oom_reaper, 0);
|
||||||
spin_lock(&oom_reaper_lock);
|
tsk->oom_reaper_timer.expires = jiffies + OOM_REAPER_DELAY;
|
||||||
tsk->oom_reaper_list = oom_reaper_list;
|
add_timer(&tsk->oom_reaper_timer);
|
||||||
oom_reaper_list = tsk;
|
|
||||||
spin_unlock(&oom_reaper_lock);
|
|
||||||
trace_wake_reaper(tsk->pid);
|
|
||||||
wake_up(&oom_reaper_wait);
|
|
||||||
}
|
}
|
||||||
|
|
||||||
static int __init oom_init(void)
|
static int __init oom_init(void)
|
||||||
@ -720,7 +746,7 @@ static int __init oom_init(void)
|
|||||||
}
|
}
|
||||||
subsys_initcall(oom_init)
|
subsys_initcall(oom_init)
|
||||||
#else
|
#else
|
||||||
static inline void wake_oom_reaper(struct task_struct *tsk)
|
static inline void queue_oom_reaper(struct task_struct *tsk)
|
||||||
{
|
{
|
||||||
}
|
}
|
||||||
#endif /* CONFIG_MMU */
|
#endif /* CONFIG_MMU */
|
||||||
@ -980,7 +1006,7 @@ static void __oom_kill_process(struct task_struct *victim, const char *message)
|
|||||||
rcu_read_unlock();
|
rcu_read_unlock();
|
||||||
|
|
||||||
if (can_oom_reap)
|
if (can_oom_reap)
|
||||||
wake_oom_reaper(victim);
|
queue_oom_reaper(victim);
|
||||||
|
|
||||||
mmdrop(mm);
|
mmdrop(mm);
|
||||||
put_task_struct(victim);
|
put_task_struct(victim);
|
||||||
@ -1016,7 +1042,7 @@ static void oom_kill_process(struct oom_control *oc, const char *message)
|
|||||||
task_lock(victim);
|
task_lock(victim);
|
||||||
if (task_will_free_mem(victim)) {
|
if (task_will_free_mem(victim)) {
|
||||||
mark_oom_victim(victim);
|
mark_oom_victim(victim);
|
||||||
wake_oom_reaper(victim);
|
queue_oom_reaper(victim);
|
||||||
task_unlock(victim);
|
task_unlock(victim);
|
||||||
put_task_struct(victim);
|
put_task_struct(victim);
|
||||||
return;
|
return;
|
||||||
@ -1114,7 +1140,7 @@ bool out_of_memory(struct oom_control *oc)
|
|||||||
*/
|
*/
|
||||||
if (task_will_free_mem(current)) {
|
if (task_will_free_mem(current)) {
|
||||||
mark_oom_victim(current);
|
mark_oom_victim(current);
|
||||||
wake_oom_reaper(current);
|
queue_oom_reaper(current);
|
||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7847,7 +7847,7 @@ void __init mem_init_print_info(const char *str)
 	 */
 #define adj_init_size(start, end, size, pos, adj) \
 	do { \
-		if (start <= pos && pos < end && size > adj) \
+		if (&start[0] <= &pos[0] && &pos[0] < &end[0] && size > adj) \
 			size -= adj; \
 	} while (0)
 
--- a/net/can/isotp.c
+++ b/net/can/isotp.c
@@ -864,6 +864,7 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
 	struct canfd_frame *cf;
 	int ae = (so->opt.flags & CAN_ISOTP_EXTEND_ADDR) ? 1 : 0;
 	int wait_tx_done = (so->opt.flags & CAN_ISOTP_WAIT_TX_DONE) ? 1 : 0;
+	s64 hrtimer_sec = 0;
 	int off;
 	int err;
 
@@ -962,7 +963,9 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
 		isotp_create_fframe(cf, so, ae);
 
 		/* start timeout for FC */
-		hrtimer_start(&so->txtimer, ktime_set(1, 0), HRTIMER_MODE_REL_SOFT);
+		hrtimer_sec = 1;
+		hrtimer_start(&so->txtimer, ktime_set(hrtimer_sec, 0),
+			      HRTIMER_MODE_REL_SOFT);
 	}
 
 	/* send the first or only CAN frame */
@@ -975,6 +978,11 @@ static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
 	if (err) {
 		pr_notice_once("can-isotp: %s: can_send_ret %d\n",
 			       __func__, err);
+
+		/* no transmission -> no timeout monitoring */
+		if (hrtimer_sec)
+			hrtimer_cancel(&so->txtimer);
+
 		goto err_out_drop;
 	}
 
--- a/net/ipv4/esp4.c
+++ b/net/ipv4/esp4.c
@@ -448,7 +448,6 @@ int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *
 	struct page *page;
 	struct sk_buff *trailer;
 	int tailen = esp->tailen;
-	unsigned int allocsz;
 
 	/* this is non-NULL only with TCP/UDP Encapsulation */
 	if (x->encap) {
@@ -458,8 +457,8 @@ int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *
 			return err;
 	}
 
-	allocsz = ALIGN(skb->data_len + tailen, L1_CACHE_BYTES);
-	if (allocsz > ESP_SKB_FRAG_MAXSIZE)
+	if (ALIGN(tailen, L1_CACHE_BYTES) > PAGE_SIZE ||
+	    ALIGN(skb->data_len, L1_CACHE_BYTES) > PAGE_SIZE)
 		goto cow;
 
 	if (!skb_cloned(skb)) {
--- a/net/ipv6/esp6.c
+++ b/net/ipv6/esp6.c
@@ -483,7 +483,6 @@ int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info
 	struct page *page;
 	struct sk_buff *trailer;
 	int tailen = esp->tailen;
-	unsigned int allocsz;
 
 	if (x->encap) {
 		int err = esp6_output_encap(x, skb, esp);
@@ -492,8 +491,8 @@ int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info
 			return err;
 	}
 
-	allocsz = ALIGN(skb->data_len + tailen, L1_CACHE_BYTES);
-	if (allocsz > ESP_SKB_FRAG_MAXSIZE)
+	if (ALIGN(tailen, L1_CACHE_BYTES) > PAGE_SIZE ||
+	    ALIGN(skb->data_len, L1_CACHE_BYTES) > PAGE_SIZE)
 		goto cow;
 
 	if (!skb_cloned(skb)) {
--- a/net/ipv6/ip6_gre.c
+++ b/net/ipv6/ip6_gre.c
@@ -733,9 +733,6 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
 	else
 		fl6->daddr = tunnel->parms.raddr;
 
-	if (skb_cow_head(skb, dev->needed_headroom ?: tunnel->hlen))
-		return -ENOMEM;
-
 	/* Push GRE header. */
 	protocol = (dev->type == ARPHRD_ETHER) ? htons(ETH_P_TEB) : proto;
 
@@ -743,6 +740,7 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
 		struct ip_tunnel_info *tun_info;
 		const struct ip_tunnel_key *key;
 		__be16 flags;
+		int tun_hlen;
 
 		tun_info = skb_tunnel_info_txcheck(skb);
 		if (IS_ERR(tun_info) ||
@@ -760,9 +758,12 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
 		dsfield = key->tos;
 		flags = key->tun_flags &
 			(TUNNEL_CSUM | TUNNEL_KEY | TUNNEL_SEQ);
-		tunnel->tun_hlen = gre_calc_hlen(flags);
+		tun_hlen = gre_calc_hlen(flags);
 
-		gre_build_header(skb, tunnel->tun_hlen,
+		if (skb_cow_head(skb, dev->needed_headroom ?: tun_hlen + tunnel->encap_hlen))
+			return -ENOMEM;
+
+		gre_build_header(skb, tun_hlen,
 				 flags, protocol,
 				 tunnel_id_to_key32(tun_info->key.tun_id),
 				 (flags & TUNNEL_SEQ) ? htonl(tunnel->o_seqno++)
@@ -772,6 +773,9 @@ static netdev_tx_t __gre6_xmit(struct sk_buff *skb,
 		if (tunnel->parms.o_flags & TUNNEL_SEQ)
 			tunnel->o_seqno++;
 
+		if (skb_cow_head(skb, dev->needed_headroom ?: tunnel->hlen))
+			return -ENOMEM;
+
 		gre_build_header(skb, tunnel->tun_hlen, tunnel->parms.o_flags,
 				 protocol, tunnel->parms.o_key,
 				 htonl(tunnel->o_seqno));
--- a/net/ipv6/route.c
+++ b/net/ipv6/route.c
@@ -3192,6 +3192,7 @@ static int ip6_dst_gc(struct dst_ops *ops)
 	int rt_elasticity = net->ipv6.sysctl.ip6_rt_gc_elasticity;
 	int rt_gc_timeout = net->ipv6.sysctl.ip6_rt_gc_timeout;
 	unsigned long rt_last_gc = net->ipv6.ip6_rt_last_gc;
+	unsigned int val;
 	int entries;
 
 	entries = dst_entries_get_fast(ops);
@@ -3202,13 +3203,13 @@ static int ip6_dst_gc(struct dst_ops *ops)
 	    entries <= rt_max_size)
 		goto out;
 
-	net->ipv6.ip6_rt_gc_expire++;
-	fib6_run_gc(net->ipv6.ip6_rt_gc_expire, net, true);
+	fib6_run_gc(atomic_inc_return(&net->ipv6.ip6_rt_gc_expire), net, true);
 	entries = dst_entries_get_slow(ops);
 	if (entries < ops->gc_thresh)
-		net->ipv6.ip6_rt_gc_expire = rt_gc_timeout>>1;
+		atomic_set(&net->ipv6.ip6_rt_gc_expire, rt_gc_timeout >> 1);
 out:
-	net->ipv6.ip6_rt_gc_expire -= net->ipv6.ip6_rt_gc_expire>>rt_elasticity;
+	val = atomic_read(&net->ipv6.ip6_rt_gc_expire);
+	atomic_set(&net->ipv6.ip6_rt_gc_expire, val - (val >> rt_elasticity));
 	return entries > rt_max_size;
 }
 
@@ -6321,7 +6322,7 @@ static int __net_init ip6_route_net_init(struct net *net)
 	net->ipv6.sysctl.ip6_rt_min_advmss = IPV6_MIN_MTU - 20 - 40;
 	net->ipv6.sysctl.skip_notify_on_dev_down = 0;
 
-	net->ipv6.ip6_rt_gc_expire = 30*HZ;
+	atomic_set(&net->ipv6.ip6_rt_gc_expire, 30*HZ);
 
 	ret = 0;
 out:
--- a/net/l3mdev/l3mdev.c
+++ b/net/l3mdev/l3mdev.c
@@ -147,7 +147,7 @@ int l3mdev_master_upper_ifindex_by_index_rcu(struct net *net, int ifindex)
 
 	dev = dev_get_by_index_rcu(net, ifindex);
 	while (dev && !netif_is_l3_master(dev))
-		dev = netdev_master_upper_dev_get(dev);
+		dev = netdev_master_upper_dev_get_rcu(dev);
 
 	return dev ? dev->ifindex : 0;
 }
--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c
@@ -2276,6 +2276,13 @@ static int netlink_dump(struct sock *sk)
 	 * single netdev. The outcome is MSG_TRUNC error.
 	 */
 	skb_reserve(skb, skb_tailroom(skb) - alloc_size);
+
+	/* Make sure malicious BPF programs can not read unitialized memory
+	 * from skb->head -> skb->data
+	 */
+	skb_reset_network_header(skb);
+	skb_reset_mac_header(skb);
+
 	netlink_skb_set_owner_r(skb, sk);
 
 	if (nlk->dump_done_errno > 0) {
--- a/net/openvswitch/flow_netlink.c
+++ b/net/openvswitch/flow_netlink.c
@@ -2436,7 +2436,7 @@ static struct nlattr *reserve_sfa_size(struct sw_flow_actions **sfa,
 	new_acts_size = max(next_offset + req_size, ksize(*sfa) * 2);
 
 	if (new_acts_size > MAX_ACTIONS_BUFSIZE) {
-		if ((MAX_ACTIONS_BUFSIZE - next_offset) < req_size) {
+		if ((next_offset + req_size) > MAX_ACTIONS_BUFSIZE) {
 			OVS_NLERR(log, "Flow action size exceeds max %u",
 				  MAX_ACTIONS_BUFSIZE);
 			return ERR_PTR(-EMSGSIZE);
--- a/net/packet/af_packet.c
+++ b/net/packet/af_packet.c
@@ -2817,7 +2817,8 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
 
 		status = TP_STATUS_SEND_REQUEST;
 		err = po->xmit(skb);
-		if (unlikely(err > 0)) {
+		if (unlikely(err != 0)) {
+			if (err > 0)
 				err = net_xmit_errno(err);
 			if (err && __packet_get_status(po, ph) ==
 				   TP_STATUS_AVAILABLE) {
@@ -3019,8 +3020,12 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
 		skb->no_fcs = 1;
 
 	err = po->xmit(skb);
-	if (err > 0 && (err = net_xmit_errno(err)) != 0)
+	if (unlikely(err != 0)) {
+		if (err > 0)
+			err = net_xmit_errno(err);
+		if (err)
 			goto out_unlock;
+	}
 
 	dev_put(dev);
 
--- a/net/rxrpc/net_ns.c
+++ b/net/rxrpc/net_ns.c
@@ -113,7 +113,9 @@ static __net_exit void rxrpc_exit_net(struct net *net)
 	struct rxrpc_net *rxnet = rxrpc_net(net);
 
 	rxnet->live = false;
+	del_timer_sync(&rxnet->peer_keepalive_timer);
 	cancel_work_sync(&rxnet->peer_keepalive_work);
+	/* Remove the timer again as the worker may have restarted it. */
 	del_timer_sync(&rxnet->peer_keepalive_timer);
 	rxrpc_destroy_all_calls(rxnet);
 	rxrpc_destroy_all_connections(rxnet);
--- a/net/sched/cls_u32.c
+++ b/net/sched/cls_u32.c
@@ -386,14 +386,19 @@ static int u32_init(struct tcf_proto *tp)
 	return 0;
 }
 
-static int u32_destroy_key(struct tc_u_knode *n, bool free_pf)
+static void __u32_destroy_key(struct tc_u_knode *n)
 {
 	struct tc_u_hnode *ht = rtnl_dereference(n->ht_down);
 
 	tcf_exts_destroy(&n->exts);
-	tcf_exts_put_net(&n->exts);
 	if (ht && --ht->refcnt == 0)
 		kfree(ht);
+	kfree(n);
+}
+
+static void u32_destroy_key(struct tc_u_knode *n, bool free_pf)
+{
+	tcf_exts_put_net(&n->exts);
 #ifdef CONFIG_CLS_U32_PERF
 	if (free_pf)
 		free_percpu(n->pf);
@@ -402,8 +407,7 @@ static int u32_destroy_key(struct tc_u_knode *n, bool free_pf)
 	if (free_pf)
 		free_percpu(n->pcpu_success);
 #endif
-	kfree(n);
-	return 0;
+	__u32_destroy_key(n);
 }
 
 /* u32_delete_key_rcu should be called when free'ing a copied
@@ -810,10 +814,6 @@ static struct tc_u_knode *u32_init_knode(struct net *net, struct tcf_proto *tp,
 	new->flags = n->flags;
 	RCU_INIT_POINTER(new->ht_down, ht);
 
-	/* bump reference count as long as we hold pointer to structure */
-	if (ht)
-		ht->refcnt++;
-
 #ifdef CONFIG_CLS_U32_PERF
 	/* Statistics may be incremented by readers during update
 	 * so we must keep them in tact. When the node is later destroyed
@@ -835,6 +835,10 @@ static struct tc_u_knode *u32_init_knode(struct net *net, struct tcf_proto *tp,
 		return NULL;
 	}
 
+	/* bump reference count as long as we hold pointer to structure */
+	if (ht)
+		ht->refcnt++;
+
 	return new;
 }
 
@@ -898,13 +902,13 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
 				tca[TCA_RATE], ovr, extack);
 
 		if (err) {
-			u32_destroy_key(new, false);
+			__u32_destroy_key(new);
 			return err;
 		}
 
 		err = u32_replace_hw_knode(tp, new, flags, extack);
 		if (err) {
-			u32_destroy_key(new, false);
+			__u32_destroy_key(new);
 			return err;
 		}
 
--- a/net/smc/af_smc.c
+++ b/net/smc/af_smc.c
@@ -2144,8 +2144,10 @@ static int smc_shutdown(struct socket *sock, int how)
 	if (smc->use_fallback) {
 		rc = kernel_sock_shutdown(smc->clcsock, how);
 		sk->sk_shutdown = smc->clcsock->sk->sk_shutdown;
-		if (sk->sk_shutdown == SHUTDOWN_MASK)
+		if (sk->sk_shutdown == SHUTDOWN_MASK) {
 			sk->sk_state = SMC_CLOSED;
+			sock_put(sk);
+		}
 		goto out;
 	}
 	switch (how) {
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -8897,6 +8897,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
 	SND_PCI_QUIRK(0x1558, 0x8562, "Clevo NH[5|7][0-9]RZ[Q]", ALC269_FIXUP_DMIC),
 	SND_PCI_QUIRK(0x1558, 0x8668, "Clevo NP50B[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
 	SND_PCI_QUIRK(0x1558, 0x866d, "Clevo NP5[05]PN[HJK]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+	SND_PCI_QUIRK(0x1558, 0x867c, "Clevo NP7[01]PNP", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
 	SND_PCI_QUIRK(0x1558, 0x867d, "Clevo NP7[01]PN[HJK]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
 	SND_PCI_QUIRK(0x1558, 0x8680, "Clevo NJ50LU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
 	SND_PCI_QUIRK(0x1558, 0x8686, "Clevo NH50[CZ]U", ALC256_FIXUP_MIC_NO_PRESENCE_AND_RESUME),
--- a/sound/soc/atmel/sam9g20_wm8731.c
+++ b/sound/soc/atmel/sam9g20_wm8731.c
@@ -46,35 +46,6 @@
  */
 #undef ENABLE_MIC_INPUT
 
-static struct clk *mclk;
-
-static int at91sam9g20ek_set_bias_level(struct snd_soc_card *card,
-					struct snd_soc_dapm_context *dapm,
-					enum snd_soc_bias_level level)
-{
-	static int mclk_on;
-	int ret = 0;
-
-	switch (level) {
-	case SND_SOC_BIAS_ON:
-	case SND_SOC_BIAS_PREPARE:
-		if (!mclk_on)
-			ret = clk_enable(mclk);
-		if (ret == 0)
-			mclk_on = 1;
-		break;
-
-	case SND_SOC_BIAS_OFF:
-	case SND_SOC_BIAS_STANDBY:
-		if (mclk_on)
-			clk_disable(mclk);
-		mclk_on = 0;
-		break;
-	}
-
-	return ret;
-}
-
 static const struct snd_soc_dapm_widget at91sam9g20ek_dapm_widgets[] = {
 	SND_SOC_DAPM_MIC("Int Mic", NULL),
 	SND_SOC_DAPM_SPK("Ext Spk", NULL),
@@ -135,7 +106,6 @@ static struct snd_soc_card snd_soc_at91sam9g20ek = {
 	.owner = THIS_MODULE,
 	.dai_link = &at91sam9g20ek_dai,
 	.num_links = 1,
-	.set_bias_level = at91sam9g20ek_set_bias_level,
 
 	.dapm_widgets = at91sam9g20ek_dapm_widgets,
 	.num_dapm_widgets = ARRAY_SIZE(at91sam9g20ek_dapm_widgets),
@@ -148,7 +118,6 @@ static int at91sam9g20ek_audio_probe(struct platform_device *pdev)
 {
 	struct device_node *np = pdev->dev.of_node;
 	struct device_node *codec_np, *cpu_np;
-	struct clk *pllb;
 	struct snd_soc_card *card = &snd_soc_at91sam9g20ek;
 	int ret;
 
@@ -162,31 +131,6 @@ static int at91sam9g20ek_audio_probe(struct platform_device *pdev)
 		return -EINVAL;
 	}
 
-	/*
-	 * Codec MCLK is supplied by PCK0 - set it up.
-	 */
-	mclk = clk_get(NULL, "pck0");
-	if (IS_ERR(mclk)) {
-		dev_err(&pdev->dev, "Failed to get MCLK\n");
-		ret = PTR_ERR(mclk);
-		goto err;
-	}
-
-	pllb = clk_get(NULL, "pllb");
-	if (IS_ERR(pllb)) {
-		dev_err(&pdev->dev, "Failed to get PLLB\n");
-		ret = PTR_ERR(pllb);
-		goto err_mclk;
-	}
-	ret = clk_set_parent(mclk, pllb);
-	clk_put(pllb);
-	if (ret != 0) {
-		dev_err(&pdev->dev, "Failed to set MCLK parent\n");
-		goto err_mclk;
-	}
-
-	clk_set_rate(mclk, MCLK_RATE);
-
 	card->dev = &pdev->dev;
 
 	/* Parse device node info */
@@ -230,9 +174,6 @@ static int at91sam9g20ek_audio_probe(struct platform_device *pdev)
 
 	return ret;
 
-err_mclk:
-	clk_put(mclk);
-	mclk = NULL;
 err:
 	atmel_ssc_put_audio(0);
 	return ret;
@@ -242,8 +183,6 @@ static int at91sam9g20ek_audio_remove(struct platform_device *pdev)
 {
 	struct snd_soc_card *card = platform_get_drvdata(pdev);
 
-	clk_disable(mclk);
-	mclk = NULL;
 	snd_soc_unregister_card(card);
 	atmel_ssc_put_audio(0);
 
--- a/sound/soc/codecs/msm8916-wcd-digital.c
+++ b/sound/soc/codecs/msm8916-wcd-digital.c
@@ -1206,9 +1206,16 @@ static int msm8916_wcd_digital_probe(struct platform_device *pdev)
 
 	dev_set_drvdata(dev, priv);
 
-	return devm_snd_soc_register_component(dev, &msm8916_wcd_digital,
+	ret = devm_snd_soc_register_component(dev, &msm8916_wcd_digital,
 					       msm8916_wcd_digital_dai,
 					       ARRAY_SIZE(msm8916_wcd_digital_dai));
+	if (ret)
+		goto err_mclk;
+
+	return 0;
+
+err_mclk:
+	clk_disable_unprepare(priv->mclk);
 err_clk:
 	clk_disable_unprepare(priv->ahbclk);
 	return ret;
--- a/sound/soc/codecs/wcd934x.c
+++ b/sound/soc/codecs/wcd934x.c
@@ -1188,29 +1188,7 @@ static int wcd934x_set_sido_input_src(struct wcd934x_codec *wcd, int sido_src)
 	if (sido_src == wcd->sido_input_src)
 		return 0;
 
-	if (sido_src == SIDO_SOURCE_INTERNAL) {
-		regmap_update_bits(wcd->regmap, WCD934X_ANA_BUCK_CTL,
-				   WCD934X_ANA_BUCK_HI_ACCU_EN_MASK, 0);
-		usleep_range(100, 110);
-		regmap_update_bits(wcd->regmap, WCD934X_ANA_BUCK_CTL,
-				   WCD934X_ANA_BUCK_HI_ACCU_PRE_ENX_MASK, 0x0);
-		usleep_range(100, 110);
-		regmap_update_bits(wcd->regmap, WCD934X_ANA_RCO,
-				   WCD934X_ANA_RCO_BG_EN_MASK, 0);
-		usleep_range(100, 110);
-		regmap_update_bits(wcd->regmap, WCD934X_ANA_BUCK_CTL,
-				   WCD934X_ANA_BUCK_PRE_EN1_MASK,
-				   WCD934X_ANA_BUCK_PRE_EN1_ENABLE);
-		usleep_range(100, 110);
-		regmap_update_bits(wcd->regmap, WCD934X_ANA_BUCK_CTL,
-				   WCD934X_ANA_BUCK_PRE_EN2_MASK,
-				   WCD934X_ANA_BUCK_PRE_EN2_ENABLE);
-		usleep_range(100, 110);
-		regmap_update_bits(wcd->regmap, WCD934X_ANA_BUCK_CTL,
-				   WCD934X_ANA_BUCK_HI_ACCU_EN_MASK,
-				   WCD934X_ANA_BUCK_HI_ACCU_ENABLE);
-		usleep_range(100, 110);
-	} else if (sido_src == SIDO_SOURCE_RCO_BG) {
+	if (sido_src == SIDO_SOURCE_RCO_BG) {
 		regmap_update_bits(wcd->regmap, WCD934X_ANA_RCO,
 				   WCD934X_ANA_RCO_BG_EN_MASK,
 				   WCD934X_ANA_RCO_BG_ENABLE);
@@ -1296,8 +1274,6 @@ static int wcd934x_disable_ana_bias_and_syclk(struct wcd934x_codec *wcd)
 	regmap_update_bits(wcd->regmap, WCD934X_CLK_SYS_MCLK_PRG,
 			   WCD934X_EXT_CLK_BUF_EN_MASK |
 			   WCD934X_MCLK_EN_MASK, 0x0);
-	wcd934x_set_sido_input_src(wcd, SIDO_SOURCE_INTERNAL);
-
 	regmap_update_bits(wcd->regmap, WCD934X_ANA_BIAS,
 			   WCD934X_ANA_BIAS_EN_MASK, 0);
 	regmap_update_bits(wcd->regmap, WCD934X_ANA_BIAS,
--- a/sound/soc/soc-dapm.c
+++ b/sound/soc/soc-dapm.c
@@ -1683,8 +1683,7 @@ static void dapm_seq_run(struct snd_soc_card *card,
 		switch (w->id) {
 		case snd_soc_dapm_pre:
 			if (!w->event)
-				list_for_each_entry_safe_continue(w, n, list,
-								  power_list);
+				continue;
 
 			if (event == SND_SOC_DAPM_STREAM_START)
 				ret = w->event(w,
@@ -1696,8 +1695,7 @@ static void dapm_seq_run(struct snd_soc_card *card,
 
 		case snd_soc_dapm_post:
 			if (!w->event)
-				list_for_each_entry_safe_continue(w, n, list,
-								  power_list);
+				continue;
 
 			if (event == SND_SOC_DAPM_STREAM_START)
 				ret = w->event(w,
--- a/sound/usb/midi.c
+++ b/sound/usb/midi.c
@@ -1210,6 +1210,7 @@ static void snd_usbmidi_output_drain(struct snd_rawmidi_substream *substream)
 		} while (drain_urbs && timeout);
 		finish_wait(&ep->drain_wait, &wait);
 	}
+	port->active = 0;
 	spin_unlock_irq(&ep->buffer_lock);
 }
 
--- a/sound/usb/usbaudio.h
+++ b/sound/usb/usbaudio.h
@@ -8,7 +8,7 @@
  */
 
 /* handling of USB vendor/product ID pairs as 32-bit numbers */
-#define USB_ID(vendor, product) (((vendor) << 16) | (product))
+#define USB_ID(vendor, product) (((unsigned int)(vendor) << 16) | (product))
 #define USB_ID_VENDOR(id) ((id) >> 16)
 #define USB_ID_PRODUCT(id) ((u16)(id))
 
--- a/tools/lib/perf/evlist.c
+++ b/tools/lib/perf/evlist.c
@@ -571,7 +571,6 @@ int perf_evlist__mmap_ops(struct perf_evlist *evlist,
 {
 	struct perf_evsel *evsel;
 	const struct perf_cpu_map *cpus = evlist->cpus;
-	const struct perf_thread_map *threads = evlist->threads;
 
 	if (!ops || !ops->get || !ops->mmap)
 		return -EINVAL;
@@ -583,7 +582,7 @@ int perf_evlist__mmap_ops(struct perf_evlist *evlist,
 	perf_evlist__for_each_entry(evlist, evsel) {
 		if ((evsel->attr.read_format & PERF_FORMAT_ID) &&
 		    evsel->sample_id == NULL &&
-		    perf_evsel__alloc_id(evsel, perf_cpu_map__nr(cpus), threads->nr) < 0)
+		    perf_evsel__alloc_id(evsel, evsel->fd->max_x, evsel->fd->max_y) < 0)
 			return -ENOMEM;
 	}
 
--- a/tools/perf/builtin-report.c
+++ b/tools/perf/builtin-report.c
@@ -340,6 +340,7 @@ static int report__setup_sample_type(struct report *rep)
 	struct perf_session *session = rep->session;
 	u64 sample_type = evlist__combined_sample_type(session->evlist);
 	bool is_pipe = perf_data__is_pipe(session->data);
+	struct evsel *evsel;
 
 	if (session->itrace_synth_opts->callchain ||
 	    session->itrace_synth_opts->add_callchain ||
@@ -394,6 +395,19 @@ static int report__setup_sample_type(struct report *rep)
 	}
 
 	if (sort__mode == SORT_MODE__MEMORY) {
+		/*
+		 * FIXUP: prior to kernel 5.18, Arm SPE missed to set
+		 * PERF_SAMPLE_DATA_SRC bit in sample type. For backward
+		 * compatibility, set the bit if it's an old perf data file.
+		 */
+		evlist__for_each_entry(session->evlist, evsel) {
+			if (strstr(evsel->name, "arm_spe") &&
+			    !(sample_type & PERF_SAMPLE_DATA_SRC)) {
+				evsel->core.attr.sample_type |= PERF_SAMPLE_DATA_SRC;
+				sample_type |= PERF_SAMPLE_DATA_SRC;
+			}
+		}
+
 		if (!is_pipe && !(sample_type & PERF_SAMPLE_DATA_SRC)) {
 			ui__error("Selected --mem-mode but no mem data. "
 				  "Did you call perf record without -d?\n");
@@ -172,6 +172,17 @@ flooding_filters_add()
 	local lsb
 	local i
 
+	# Prevent unwanted packets from entering the bridge and interfering
+	# with the test.
+	tc qdisc add dev br0 clsact
+	tc filter add dev br0 egress protocol all pref 1 handle 1 \
+		matchall skip_hw action drop
+	tc qdisc add dev $h1 clsact
+	tc filter add dev $h1 egress protocol all pref 1 handle 1 \
+		flower skip_hw dst_mac de:ad:be:ef:13:37 action pass
+	tc filter add dev $h1 egress protocol all pref 2 handle 2 \
+		matchall skip_hw action drop
+
 	tc qdisc add dev $rp2 clsact
 
 	for i in $(eval echo {1..$num_remotes}); do
@@ -194,6 +205,12 @@ flooding_filters_del()
 	done
 
 	tc qdisc del dev $rp2 clsact
+
+	tc filter del dev $h1 egress protocol all pref 2 handle 2 matchall
+	tc filter del dev $h1 egress protocol all pref 1 handle 1 flower
+	tc qdisc del dev $h1 clsact
+	tc filter del dev br0 egress protocol all pref 1 handle 1 matchall
+	tc qdisc del dev br0 clsact
 }
 
 flooding_check_packets()