This is the 5.10.176 stable release
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmQa9NgACgkQONu9yGCS
aT4Iew//X/3+Bpiu+FyaYe0NZ4I95rQvNh4fG6wXCFd/PVbCRpxVOAKQ91GnkU+D
iMeuGBPqkpPhHvesRybsq0u8GmJ+fJj58+fgy1ABI7UzkWihzNDu1n2RntYmuRvl
TEEsAIS+6/lhVKosDhyYcXAL5eT8F06zFOI9HspWRe+lYoRBIQyykcLgZQwt5mBX
qyKAFkvhH0Z77ATiID5alRkVArgi/t3qBUANTrJ7LqOlhtY42EOS0Sp7wpZWskqI
7Mpb6pfODsOq5d+6zNvZzdrtMaKRBal0Inxj2+zLEYdSv+xbTqp4Cb6UI18gJTA7
zsvItAzTRxp+7KiZVS2HP3uMRRV4lQ5HxgMJhSsONHSSRh7ndhkW7NQq/o/dRFm2
IgVf1beHk2pE+LN0Plf2oQCOMV8h/vQRZLCejoQEbFy6oNQ6bA4btJaXZnfluqDb
KXONyDqXZ3uX3DSrKO4pCNCTsm5JhinkFHhO125kjSkPp/k2YWXdnBftQT1mWPYf
dbWu1z/E+3qvObedwNn+icuu/MUznZMTYwDOD31tJp+1iEBgeBQWI+IRaIaWbDyD
dxSoV8cScNZz+X4M70EFlwJMYL/VcIzDljeH2EA3CImDycDH0tspo6z8Z+xFhsrg
D1wshmaT9XkSEJ92xDMw82B/1noOati75HpkUW1W/PKTqvjH/uU=
=/t/A
-----END PGP SIGNATURE-----

Merge 5.10.176 into android12-5.10-lts

Changes in 5.10.176
	xfrm: Allow transport-mode states with AF_UNSPEC selector
	drm/panfrost: Don't sync rpm suspension after mmu flushing
	cifs: Move the in_send statistic to __smb_send_rqst()
	drm/meson: fix 1px pink line on GXM when scaling video overlay
	clk: HI655X: select REGMAP instead of depending on it
	docs: Correct missing "d_" prefix for dentry_operations member d_weak_revalidate
	scsi: mpt3sas: Fix NULL pointer access in mpt3sas_transport_port_add()
	ALSA: hda: Match only Intel devices with CONTROLLER_IN_GPU()
	netfilter: nft_nat: correct length for loading protocol registers
	netfilter: nft_masq: correct length for loading protocol registers
	netfilter: nft_redir: correct length for loading protocol registers
	netfilter: nft_redir: correct value of inet type `.maxattrs`
	scsi: core: Fix a comment in function scsi_host_dev_release()
	scsi: core: Fix a procfs host directory removal regression
	tcp: tcp_make_synack() can be called from process context
	nfc: pn533: initialize struct pn533_out_arg properly
	ipvlan: Make skb->skb_iif track skb->dev for l3s mode
	i40e: Fix kernel crash during reboot when adapter is in recovery mode
	net/smc: fix NULL sndbuf_desc in smc_cdc_tx_handler()
	qed/qed_dev: guard against a possible division by zero
	net: tunnels: annotate lockless accesses to dev->needed_headroom
	net: phy: smsc: bail out in lan87xx_read_status if genphy_read_status fails
	nfc: st-nci: Fix use after free bug in ndlc_remove due to race condition
	net/smc: fix deadlock triggered by cancel_delayed_work_syn()
	net: usb: smsc75xx: Limit packet length to skb->len
	drm/bridge: Fix returned array size name for atomic_get_input_bus_fmts kdoc
	null_blk: Move driver into its own directory
	block: null_blk: Fix handling of fake timeout request
	nvme: fix handling single range discard request
	nvmet: avoid potential UAF in nvmet_req_complete()
	block: sunvdc: add check for mdesc_grab() returning NULL
	ice: xsk: disable txq irq before flushing hw
	net: dsa: mv88e6xxx: fix max_mtu of 1492 on 6165, 6191, 6220, 6250, 6290
	ipv4: Fix incorrect table ID in IOCTL path
	net: usb: smsc75xx: Move packet length check to prevent kernel panic in skb_pull
	net/iucv: Fix size of interrupt data
	selftests: net: devlink_port_split.py: skip test if no suitable device available
	qed/qed_mng_tlv: correctly zero out ->min instead of ->hour
	ethernet: sun: add check for the mdesc_grab()
	hwmon: (adt7475) Display smoothing attributes in correct order
	hwmon: (adt7475) Fix masking of hysteresis registers
	hwmon: (xgene) Fix use after free bug in xgene_hwmon_remove due to race condition
	hwmon: (ina3221) return prober error code
	hwmon: (ucd90320) Add minimum delay between bus accesses
	hwmon: tmp512: drop of_match_ptr for ID table
	hwmon: (adm1266) Set `can_sleep` flag for GPIO chip
	media: m5mols: fix off-by-one loop termination error
	mmc: atmel-mci: fix race between stop command and start of next command
	jffs2: correct logic when creating a hole in jffs2_write_begin
	ext4: fail ext4_iget if special inode unallocated
	ext4: fix task hung in ext4_xattr_delete_inode
	drm/amdkfd: Fix an illegal memory access
	sh: intc: Avoid spurious sizeof-pointer-div warning
	drm/amd/display: fix shift-out-of-bounds in CalculateVMAndRowBytes
	ext4: fix possible double unlock when moving a directory
	tty: serial: fsl_lpuart: skip waiting for transmission complete when UARTCTRL_SBK is asserted
	serial: 8250_em: Fix UART port type
	firmware: xilinx: don't make a sleepable memory allocation from an atomic context
	interconnect: fix mem leak when freeing nodes
	tracing: Make splice_read available again
	tracing: Check field value in hist_field_name()
	tracing: Make tracepoint lockdep check actually test something
	cifs: Fix smb2_set_path_size()
	KVM: nVMX: add missing consistency checks for CR0 and CR4
	ALSA: hda: intel-dsp-config: add MTL PCI id
	ALSA: hda/realtek: Fix the speaker output on Samsung Galaxy Book2 Pro
	drm/shmem-helper: Remove another errant put in error path
	mptcp: avoid setting TCP_CLOSE state twice
	ftrace: Fix invalid address access in lookup_rec() when index is 0
	mm/userfaultfd: propagate uffd-wp bit when PTE-mapping the huge zeropage
	mmc: sdhci_am654: lower power-on failed message severity
	fbdev: stifb: Provide valid pixelclock and add fb_check_var() checks
	cpuidle: psci: Iterate backwards over list in psci_pd_remove()
	x86/mce: Make sure logged MCEs are processed after sysfs update
	x86/mm: Fix use of uninitialized buffer in sme_enable()
	drm/i915: Don't use stolen memory for ring buffers with LLC
	drm/i915/active: Fix misuse of non-idle barriers as fence trackers
	io_uring: avoid null-ptr-deref in io_arm_poll_handler
	s390/ipl: add missing intersection check to ipl_report handling
	PCI: Unify delay handling for reset and resume
	PCI/DPC: Await readiness of secondary bus after reset
	xfs: don't assert fail on perag references on teardown
	xfs: purge dquots after inode walk fails during quotacheck
	xfs: don't leak btree cursor when insrec fails after a split
	xfs: remove XFS_PREALLOC_SYNC
	xfs: fallocate() should call file_modified()
	xfs: set prealloc flag in xfs_alloc_file_space()
	xfs: use setattr_copy to set vfs inode attributes
	fs: add mode_strip_sgid() helper
	fs: move S_ISGID stripping into the vfs_*() helpers
	attr: add in_group_or_capable()
	fs: move should_remove_suid()
	attr: add setattr_should_drop_sgid()
	attr: use consistent sgid stripping checks
	fs: use consistent setgid checks in is_sxid()
	xfs: remove xfs_setattr_time() declaration
	HID: core: Provide new max_buffer_size attribute to over-ride the default
	HID: uhid: Over-ride the default maximum data buffer value with our own
	Linux 5.10.176

Change-Id: Icd45189f4182c749d1758c13e18705abb4ea9c5a
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
commit df23049a96
@@ -1188,7 +1188,7 @@ defined:
	return
	-ECHILD and it will be called again in ref-walk mode.
 
-``_weak_revalidate``
+``d_weak_revalidate``
	called when the VFS needs to revalidate a "jumped" dentry. This
	is called when a path-walk ends at dentry that was not acquired
	by doing a lookup in the parent directory. This includes "/",
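For reference, the member documented above is wired up through a filesystem's dentry_operations. A minimal sketch of a filesystem using it ("examplefs" names are hypothetical; assumes the 5.10-era prototype int (*d_weak_revalidate)(struct dentry *, unsigned int)):

	static int examplefs_d_weak_revalidate(struct dentry *dentry,
					       unsigned int flags)
	{
		/* Re-check that the "jumped" dentry still refers to a
		 * valid object: return 1 if usable, 0 to invalidate it.
		 */
		return 1;
	}

	static const struct dentry_operations examplefs_dentry_ops = {
		.d_weak_revalidate = examplefs_d_weak_revalidate,
	};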
@@ -2926,7 +2926,7 @@ Produces::
 bash-1994 [000] .... 4342.324898: ima_get_action <-process_measurement
 bash-1994 [000] .... 4342.324898: ima_match_policy <-ima_get_action
 bash-1994 [000] .... 4342.324899: do_truncate <-do_last
-bash-1994 [000] .... 4342.324899: should_remove_suid <-do_truncate
+bash-1994 [000] .... 4342.324899: setattr_should_drop_suidgid <-do_truncate
 bash-1994 [000] .... 4342.324899: notify_change <-do_truncate
 bash-1994 [000] .... 4342.324900: current_fs_time <-notify_change
 bash-1994 [000] .... 4342.324900: current_kernel_time <-current_fs_time
Makefile | 2

@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 10
-SUBLEVEL = 175
+SUBLEVEL = 176
 EXTRAVERSION =
 NAME = Dare mighty things
@@ -57,11 +57,19 @@ static unsigned long find_bootdata_space(struct ipl_rb_components *comps,
 	if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && INITRD_START && INITRD_SIZE &&
 	    intersects(INITRD_START, INITRD_SIZE, safe_addr, size))
 		safe_addr = INITRD_START + INITRD_SIZE;
+	if (intersects(safe_addr, size, (unsigned long)comps, comps->len)) {
+		safe_addr = (unsigned long)comps + comps->len;
+		goto repeat;
+	}
 	for_each_rb_entry(comp, comps)
 		if (intersects(safe_addr, size, comp->addr, comp->len)) {
 			safe_addr = comp->addr + comp->len;
 			goto repeat;
 		}
+	if (intersects(safe_addr, size, (unsigned long)certs, certs->len)) {
+		safe_addr = (unsigned long)certs + certs->len;
+		goto repeat;
+	}
 	for_each_rb_entry(cert, certs)
 		if (intersects(safe_addr, size, cert->addr, cert->len)) {
 			safe_addr = cert->addr + cert->len;
@@ -2309,6 +2309,7 @@ static void mce_restart(void)
 {
 	mce_timer_delete_all();
 	on_each_cpu(mce_cpu_restart, NULL, 1);
+	mce_schedule_work();
 }
 
 /* Toggle features for corrected errors */
@@ -2998,7 +2998,7 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
 					struct vmcs12 *vmcs12,
 					enum vm_entry_failure_code *entry_failure_code)
 {
-	bool ia32e;
+	bool ia32e = !!(vmcs12->vm_entry_controls & VM_ENTRY_IA32E_MODE);
 
 	*entry_failure_code = ENTRY_FAIL_DEFAULT;
 

@@ -3024,6 +3024,13 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
 					   vmcs12->guest_ia32_perf_global_ctrl)))
 		return -EINVAL;
 
+	if (CC((vmcs12->guest_cr0 & (X86_CR0_PG | X86_CR0_PE)) == X86_CR0_PG))
+		return -EINVAL;
+
+	if (CC(ia32e && !(vmcs12->guest_cr4 & X86_CR4_PAE)) ||
+	    CC(ia32e && !(vmcs12->guest_cr0 & X86_CR0_PG)))
+		return -EINVAL;
+
 	/*
 	 * If the load IA32_EFER VM-entry control is 1, the following checks
 	 * are performed on the field for the IA32_EFER MSR:

@@ -3035,7 +3042,6 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
 	 */
 	if (to_vmx(vcpu)->nested.nested_run_pending &&
 	    (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_EFER)) {
-		ia32e = (vmcs12->vm_entry_controls & VM_ENTRY_IA32E_MODE) != 0;
 		if (CC(!kvm_valid_efer(vcpu, vmcs12->guest_ia32_efer)) ||
 		    CC(ia32e != !!(vmcs12->guest_ia32_efer & EFER_LMA)) ||
 		    CC(((vmcs12->guest_cr0 & X86_CR0_PG) &&
@@ -586,7 +586,8 @@ void __init sme_enable(struct boot_params *bp)
 	cmdline_ptr = (const char *)((u64)bp->hdr.cmd_line_ptr |
 				     ((u64)bp->ext_cmd_line_ptr << 32));
 
-	cmdline_find_option(cmdline_ptr, cmdline_arg, buffer, sizeof(buffer));
+	if (cmdline_find_option(cmdline_ptr, cmdline_arg, buffer, sizeof(buffer)) < 0)
+		return;
 
 	if (!strncmp(buffer, cmdline_on, sizeof(buffer)))
 		sme_me_mask = me_mask;
@@ -16,13 +16,7 @@ menuconfig BLK_DEV
 
 if BLK_DEV
 
-config BLK_DEV_NULL_BLK
-	tristate "Null test block driver"
-	select CONFIGFS_FS
-
-config BLK_DEV_NULL_BLK_FAULT_INJECTION
-	bool "Support fault injection for Null test block driver"
-	depends on BLK_DEV_NULL_BLK && FAULT_INJECTION
+source "drivers/block/null_blk/Kconfig"
 
 config BLK_DEV_FD
 	tristate "Normal floppy disk support"
@@ -41,12 +41,7 @@ obj-$(CONFIG_BLK_DEV_RSXX) += rsxx/
 obj-$(CONFIG_ZRAM) += zram/
 obj-$(CONFIG_BLK_DEV_RNBD) += rnbd/
 
-obj-$(CONFIG_BLK_DEV_NULL_BLK) += null_blk.o
-null_blk-objs := null_blk_main.o
-ifeq ($(CONFIG_BLK_DEV_ZONED), y)
-null_blk-$(CONFIG_TRACING) += null_blk_trace.o
-endif
-null_blk-$(CONFIG_BLK_DEV_ZONED) += null_blk_zoned.o
+obj-$(CONFIG_BLK_DEV_NULL_BLK) += null_blk/
 
 skd-y := skd_main.o
 swim_mod-y := swim.o swim_asm.o
drivers/block/null_blk/Kconfig | 12 (new file)

@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Null block device driver configuration
+#
+
+config BLK_DEV_NULL_BLK
+	tristate "Null test block driver"
+	select CONFIGFS_FS
+
+config BLK_DEV_NULL_BLK_FAULT_INJECTION
+	bool "Support fault injection for Null test block driver"
+	depends on BLK_DEV_NULL_BLK && FAULT_INJECTION
drivers/block/null_blk/Makefile | 11 (new file)

@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0
+
+# needed for trace events
+ccflags-y += -I$(src)
+
+obj-$(CONFIG_BLK_DEV_NULL_BLK) += null_blk.o
+null_blk-objs := main.o
+ifeq ($(CONFIG_BLK_DEV_ZONED), y)
+null_blk-$(CONFIG_TRACING) += trace.o
+endif
+null_blk-$(CONFIG_BLK_DEV_ZONED) += zoned.o
@@ -1309,8 +1309,7 @@ static inline void nullb_complete_cmd(struct nullb_cmd *cmd)
 	case NULL_IRQ_SOFTIRQ:
 		switch (cmd->nq->dev->queue_mode) {
 		case NULL_Q_MQ:
-			if (likely(!blk_should_fake_timeout(cmd->rq->q)))
-				blk_mq_complete_request(cmd->rq);
+			blk_mq_complete_request(cmd->rq);
 			break;
 		case NULL_Q_BIO:
 			/*

@@ -1486,7 +1485,8 @@ static blk_status_t null_queue_rq(struct blk_mq_hw_ctx *hctx,
 	cmd->rq = bd->rq;
 	cmd->error = BLK_STS_OK;
 	cmd->nq = nq;
-	cmd->fake_timeout = should_timeout_request(bd->rq);
+	cmd->fake_timeout = should_timeout_request(bd->rq) ||
+			    blk_should_fake_timeout(bd->rq->q);
 
 	blk_mq_start_request(bd->rq);
 
@@ -4,7 +4,7 @@
  *
  * Copyright (C) 2020 Western Digital Corporation or its affiliates.
  */
-#include "null_blk_trace.h"
+#include "trace.h"
 
 /*
  * Helper to use for all null_blk traces to extract disk name.
@@ -73,7 +73,7 @@ TRACE_EVENT(nullb_report_zones,
 #undef TRACE_INCLUDE_PATH
 #define TRACE_INCLUDE_PATH .
 #undef TRACE_INCLUDE_FILE
-#define TRACE_INCLUDE_FILE null_blk_trace
+#define TRACE_INCLUDE_FILE trace
 
 /* This part must be outside protection */
 #include <trace/define_trace.h>
@@ -4,7 +4,7 @@
 #include "null_blk.h"
 
 #define CREATE_TRACE_POINTS
-#include "null_blk_trace.h"
+#include "trace.h"
 
 #define MB_TO_SECTS(mb) (((sector_t)mb * SZ_1M) >> SECTOR_SHIFT)
 
@@ -984,6 +984,8 @@ static int vdc_port_probe(struct vio_dev *vdev, const struct vio_device_id *id)
 	print_version();
 
 	hp = mdesc_grab();
+	if (!hp)
+		return -ENODEV;
 
 	err = -ENODEV;
 	if ((vdev->dev_no << PARTITION_SHIFT) & ~(u64)MINORMASK) {
@@ -79,7 +79,7 @@ config COMMON_CLK_RK808
 config COMMON_CLK_HI655X
 	tristate "Clock driver for Hi655x" if EXPERT
 	depends on (MFD_HI655X_PMIC || COMPILE_TEST)
-	depends on REGMAP
+	select REGMAP
 	default MFD_HI655X_PMIC
 	help
 	  This driver supports the hi655x PMIC clock. This
@@ -182,7 +182,8 @@ static void psci_pd_remove(void)
 	struct psci_pd_provider *pd_provider, *it;
 	struct generic_pm_domain *genpd;
 
-	list_for_each_entry_safe(pd_provider, it, &psci_pd_providers, link) {
+	list_for_each_entry_safe_reverse(pd_provider, it,
+					 &psci_pd_providers, link) {
 		of_genpd_del_provider(pd_provider->node);
 
 		genpd = of_genpd_remove_last(pd_provider->node);
@@ -171,7 +171,7 @@ static int zynqmp_pm_feature(u32 api_id)
 	}
 
 	/* Add new entry if not present */
-	feature_data = kmalloc(sizeof(*feature_data), GFP_KERNEL);
+	feature_data = kmalloc(sizeof(*feature_data), GFP_ATOMIC);
 	if (!feature_data)
 		return -ENOMEM;
 
@@ -528,16 +528,13 @@ static struct kfd_event_waiter *alloc_event_waiters(uint32_t num_events)
 	struct kfd_event_waiter *event_waiters;
 	uint32_t i;
 
-	event_waiters = kmalloc_array(num_events,
-					sizeof(struct kfd_event_waiter),
-					GFP_KERNEL);
+	event_waiters = kcalloc(num_events, sizeof(struct kfd_event_waiter),
+				GFP_KERNEL);
+	if (!event_waiters)
+		return NULL;
 
-	for (i = 0; (event_waiters) && (i < num_events) ; i++) {
+	for (i = 0; i < num_events; i++)
 		init_wait(&event_waiters[i].wait);
-		event_waiters[i].activated = false;
-	}
 
 	return event_waiters;
 }
@@ -1868,7 +1868,10 @@ static unsigned int CalculateVMAndRowBytes(
 	}
 
 	if (SurfaceTiling == dm_sw_linear) {
-		*dpte_row_height = dml_min(128, 1 << (unsigned int) dml_floor(dml_log2(PTEBufferSizeInRequests * *PixelPTEReqWidth / Pitch), 1));
+		if (PTEBufferSizeInRequests == 0)
+			*dpte_row_height = 1;
+		else
+			*dpte_row_height = dml_min(128, 1 << (unsigned int) dml_floor(dml_log2(PTEBufferSizeInRequests * *PixelPTEReqWidth / Pitch), 1));
 		*dpte_row_width_ub = (dml_ceil(((double) SwathWidth - 1) / *PixelPTEReqWidth, 1) + 1) * *PixelPTEReqWidth;
 		*PixelPTEBytesPerRow = *dpte_row_width_ub / *PixelPTEReqWidth * *PTERequestSize;
 	} else if (ScanDirection != dm_vert) {
@@ -614,11 +614,14 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 	int ret;
 
 	if (obj->import_attach) {
-		/* Drop the reference drm_gem_mmap_obj() acquired.*/
-		drm_gem_object_put(obj);
 		vma->vm_private_data = NULL;
+		ret = dma_buf_mmap(obj->dma_buf, vma, 0);
 
-		return dma_buf_mmap(obj->dma_buf, vma, 0);
+		/* Drop the reference drm_gem_mmap_obj() acquired.*/
+		if (!ret)
+			drm_gem_object_put(obj);
+
+		return ret;
 	}
 
 	shmem = to_drm_gem_shmem_obj(obj);
@@ -108,7 +108,7 @@ static struct i915_vma *create_ring_vma(struct i915_ggtt *ggtt, int size)
 	struct i915_vma *vma;
 
 	obj = ERR_PTR(-ENODEV);
-	if (i915_ggtt_has_aperture(ggtt))
+	if (i915_ggtt_has_aperture(ggtt) && !HAS_LLC(i915))
 		obj = i915_gem_object_create_stolen(i915, size);
 	if (IS_ERR(obj))
 		obj = i915_gem_object_create_internal(i915, size);
@@ -432,8 +432,7 @@ replace_barrier(struct i915_active *ref, struct i915_active_fence *active)
 	 * we can use it to substitute for the pending idle-barrer
 	 * request that we want to emit on the kernel_context.
 	 */
-	__active_del_barrier(ref, node_from_active(active));
-	return true;
+	return __active_del_barrier(ref, node_from_active(active));
 }
 
 int i915_active_ref(struct i915_active *ref, u64 idx, struct dma_fence *fence)

@@ -446,16 +445,19 @@ int i915_active_ref(struct i915_active *ref, u64 idx, struct dma_fence *fence)
 	if (err)
 		return err;
 
-	active = active_instance(ref, idx);
-	if (!active) {
-		err = -ENOMEM;
-		goto out;
-	}
+	do {
+		active = active_instance(ref, idx);
+		if (!active) {
+			err = -ENOMEM;
+			goto out;
+		}
+
+		if (replace_barrier(ref, active)) {
+			RCU_INIT_POINTER(active->fence, NULL);
+			atomic_dec(&ref->count);
+		}
+	} while (unlikely(is_barrier(active)));
 
-	if (replace_barrier(ref, active)) {
-		RCU_INIT_POINTER(active->fence, NULL);
-		atomic_dec(&ref->count);
-	}
 	if (!__i915_active_fence_set(active, fence))
 		__i915_active_acquire(ref);
 
@@ -100,6 +100,8 @@ void meson_vpp_init(struct meson_drm *priv)
 			       priv->io_base + _REG(VPP_DOLBY_CTRL));
 		writel_relaxed(0x1020080,
 				priv->io_base + _REG(VPP_DUMMY_DATA1));
+		writel_relaxed(0x42020,
+				priv->io_base + _REG(VPP_DUMMY_DATA));
 	} else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A))
 		writel_relaxed(0xf, priv->io_base + _REG(DOLBY_PATH_CTRL));
 
@@ -236,7 +236,7 @@ static void panfrost_mmu_flush_range(struct panfrost_device *pfdev,
 	if (pm_runtime_active(pfdev->dev))
 		mmu_hw_do_operation(pfdev, mmu, iova, size, AS_COMMAND_FLUSH_PT);
 
-	pm_runtime_put_sync_autosuspend(pfdev->dev);
+	pm_runtime_put_autosuspend(pfdev->dev);
 }
 
 static int mmu_map_sg(struct panfrost_device *pfdev, struct panfrost_mmu *mmu,
@@ -258,6 +258,7 @@ static int hid_add_field(struct hid_parser *parser, unsigned report_type, unsign
 {
 	struct hid_report *report;
 	struct hid_field *field;
+	unsigned int max_buffer_size = HID_MAX_BUFFER_SIZE;
 	unsigned int usages;
 	unsigned int offset;
 	unsigned int i;

@@ -288,8 +289,11 @@ static int hid_add_field(struct hid_parser *parser, unsigned report_type, unsign
 	offset = report->size;
 	report->size += parser->global.report_size * parser->global.report_count;
 
+	if (parser->device->ll_driver->max_buffer_size)
+		max_buffer_size = parser->device->ll_driver->max_buffer_size;
+
 	/* Total size check: Allow for possible report index byte */
-	if (report->size > (HID_MAX_BUFFER_SIZE - 1) << 3) {
+	if (report->size > (max_buffer_size - 1) << 3) {
 		hid_err(parser->device, "report is too long\n");
 		return -1;
 	}

@@ -1752,6 +1756,7 @@ int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, u32 size,
 	struct hid_report_enum *report_enum = hid->report_enum + type;
 	struct hid_report *report;
 	struct hid_driver *hdrv;
+	int max_buffer_size = HID_MAX_BUFFER_SIZE;
 	unsigned int a;
 	u32 rsize, csize = size;
 	u8 *cdata = data;

@@ -1768,10 +1773,13 @@ int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, u32 size,
 
 	rsize = hid_compute_report_size(report);
 
-	if (report_enum->numbered && rsize >= HID_MAX_BUFFER_SIZE)
-		rsize = HID_MAX_BUFFER_SIZE - 1;
-	else if (rsize > HID_MAX_BUFFER_SIZE)
-		rsize = HID_MAX_BUFFER_SIZE;
+	if (hid->ll_driver->max_buffer_size)
+		max_buffer_size = hid->ll_driver->max_buffer_size;
+
+	if (report_enum->numbered && rsize >= max_buffer_size)
+		rsize = max_buffer_size - 1;
+	else if (rsize > max_buffer_size)
+		rsize = max_buffer_size;
 
 	if (csize < rsize) {
 		dbg_hid("report %d is too short, (%d < %d)\n", report->id,
@@ -395,6 +395,7 @@ struct hid_ll_driver uhid_hid_driver = {
 	.parse = uhid_hid_parse,
 	.raw_request = uhid_hid_raw_request,
 	.output_report = uhid_hid_output_report,
+	.max_buffer_size = UHID_DATA_MAX,
 };
 EXPORT_SYMBOL_GPL(uhid_hid_driver);
 
@@ -486,10 +486,10 @@ static ssize_t temp_store(struct device *dev, struct device_attribute *attr,
 	val = (temp - val) / 1000;
 
 	if (sattr->index != 1) {
-		data->temp[HYSTERSIS][sattr->index] &= 0xF0;
+		data->temp[HYSTERSIS][sattr->index] &= 0x0F;
 		data->temp[HYSTERSIS][sattr->index] |= (val & 0xF) << 4;
 	} else {
-		data->temp[HYSTERSIS][sattr->index] &= 0x0F;
+		data->temp[HYSTERSIS][sattr->index] &= 0xF0;
 		data->temp[HYSTERSIS][sattr->index] |= (val & 0xF);
 	}
 

@@ -554,11 +554,11 @@ static ssize_t temp_st_show(struct device *dev, struct device_attribute *attr,
 		val = data->enh_acoustics[0] & 0xf;
 		break;
 	case 1:
-		val = (data->enh_acoustics[1] >> 4) & 0xf;
+		val = data->enh_acoustics[1] & 0xf;
 		break;
 	case 2:
 	default:
-		val = data->enh_acoustics[1] & 0xf;
+		val = (data->enh_acoustics[1] >> 4) & 0xf;
 		break;
 	}
 
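The hysteresis fix above swaps two nibble masks: clearing a nibble before rewriting it means ANDing with the mask that keeps the *other* nibble. A standalone C illustration (not driver code):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint8_t reg = 0xAB;	/* high nibble 0xA, low nibble 0xB */
		uint8_t val = 0x5;

		/* Update the high nibble: keep the low one (& 0x0F), shift val in. */
		uint8_t high = (uint8_t)((reg & 0x0F) | ((val & 0xF) << 4));	/* 0x5B */

		/* Update the low nibble: keep the high one (& 0xF0), or val in. */
		uint8_t low = (uint8_t)((reg & 0xF0) | (val & 0xF));		/* 0xA5 */

		printf("high=0x%02X low=0x%02X\n", high, low);
		return 0;
	}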
@@ -772,7 +772,7 @@ static int ina3221_probe_child_from_dt(struct device *dev,
 		return ret;
 	} else if (val > INA3221_CHANNEL3) {
 		dev_err(dev, "invalid reg %d of %pOFn\n", val, child);
-		return ret;
+		return -EINVAL;
 	}
 
 	input = &ina->inputs[val];
@@ -301,6 +301,7 @@ static int adm1266_config_gpio(struct adm1266_data *data)
 	data->gc.label = name;
 	data->gc.parent = &data->client->dev;
 	data->gc.owner = THIS_MODULE;
+	data->gc.can_sleep = true;
 	data->gc.base = -1;
 	data->gc.names = data->gpio_names;
 	data->gc.ngpio = ARRAY_SIZE(data->gpio_names);
@@ -7,6 +7,7 @@
  */
 
 #include <linux/debugfs.h>
+#include <linux/delay.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/of_device.h>

@@ -16,6 +17,7 @@
 #include <linux/i2c.h>
 #include <linux/pmbus.h>
 #include <linux/gpio/driver.h>
+#include <linux/timekeeping.h>
 #include "pmbus.h"
 
 enum chips { ucd9000, ucd90120, ucd90124, ucd90160, ucd90320, ucd9090,

@@ -65,6 +67,7 @@ struct ucd9000_data {
 	struct gpio_chip gpio;
 #endif
 	struct dentry *debugfs;
+	ktime_t write_time;
 };
 #define to_ucd9000_data(_info) container_of(_info, struct ucd9000_data, info)
 

@@ -73,6 +76,73 @@ struct ucd9000_debugfs_entry {
 	u8 index;
 };
 
+/*
+ * It has been observed that the UCD90320 randomly fails register access when
+ * doing another access right on the back of a register write. To mitigate this
+ * make sure that there is a minimum delay between a write access and the
+ * following access. The 250us is based on experimental data. At a delay of
+ * 200us the issue seems to go away. Add a bit of extra margin to allow for
+ * system to system differences.
+ */
+#define UCD90320_WAIT_DELAY_US 250
+
+static inline void ucd90320_wait(const struct ucd9000_data *data)
+{
+	s64 delta = ktime_us_delta(ktime_get(), data->write_time);
+
+	if (delta < UCD90320_WAIT_DELAY_US)
+		udelay(UCD90320_WAIT_DELAY_US - delta);
+}
+
+static int ucd90320_read_word_data(struct i2c_client *client, int page,
+				   int phase, int reg)
+{
+	const struct pmbus_driver_info *info = pmbus_get_driver_info(client);
+	struct ucd9000_data *data = to_ucd9000_data(info);
+
+	if (reg >= PMBUS_VIRT_BASE)
+		return -ENXIO;
+
+	ucd90320_wait(data);
+	return pmbus_read_word_data(client, page, phase, reg);
+}
+
+static int ucd90320_read_byte_data(struct i2c_client *client, int page, int reg)
+{
+	const struct pmbus_driver_info *info = pmbus_get_driver_info(client);
+	struct ucd9000_data *data = to_ucd9000_data(info);
+
+	ucd90320_wait(data);
+	return pmbus_read_byte_data(client, page, reg);
+}
+
+static int ucd90320_write_word_data(struct i2c_client *client, int page,
+				    int reg, u16 word)
+{
+	const struct pmbus_driver_info *info = pmbus_get_driver_info(client);
+	struct ucd9000_data *data = to_ucd9000_data(info);
+	int ret;
+
+	ucd90320_wait(data);
+	ret = pmbus_write_word_data(client, page, reg, word);
+	data->write_time = ktime_get();
+
+	return ret;
+}
+
+static int ucd90320_write_byte(struct i2c_client *client, int page, u8 value)
+{
+	const struct pmbus_driver_info *info = pmbus_get_driver_info(client);
+	struct ucd9000_data *data = to_ucd9000_data(info);
+	int ret;
+
+	ucd90320_wait(data);
+	ret = pmbus_write_byte(client, page, value);
+	data->write_time = ktime_get();
+
+	return ret;
+}
+
 static int ucd9000_get_fan_config(struct i2c_client *client, int fan)
 {
 	int fan_config = 0;

@@ -598,6 +668,11 @@ static int ucd9000_probe(struct i2c_client *client)
 		info->read_byte_data = ucd9000_read_byte_data;
 		info->func[0] |= PMBUS_HAVE_FAN12 | PMBUS_HAVE_STATUS_FAN12
 		  | PMBUS_HAVE_FAN34 | PMBUS_HAVE_STATUS_FAN34;
+	} else if (mid->driver_data == ucd90320) {
+		info->read_byte_data = ucd90320_read_byte_data;
+		info->read_word_data = ucd90320_read_word_data;
+		info->write_byte = ucd90320_write_byte;
+		info->write_word_data = ucd90320_write_word_data;
 	}
 
 	ucd9000_probe_gpio(client, mid, data);
@@ -758,7 +758,7 @@ static int tmp51x_probe(struct i2c_client *client)
 static struct i2c_driver tmp51x_driver = {
 	.driver = {
 		.name = "tmp51x",
-		.of_match_table = of_match_ptr(tmp51x_of_match),
+		.of_match_table = tmp51x_of_match,
 	},
 	.probe_new = tmp51x_probe,
 	.id_table = tmp51x_id,
@@ -768,6 +768,7 @@ static int xgene_hwmon_remove(struct platform_device *pdev)
 {
 	struct xgene_hwmon_dev *ctx = platform_get_drvdata(pdev);
 
+	cancel_work_sync(&ctx->workq);
 	hwmon_device_unregister(ctx->hwmon_dev);
 	kfifo_free(&ctx->async_msg_fifo);
 	if (acpi_disabled)
@@ -850,6 +850,10 @@ void icc_node_destroy(int id)
 
 	mutex_unlock(&icc_lock);
 
+	if (!node)
+		return;
+
 	kfree(node->links);
 	kfree(node);
 }
 EXPORT_SYMBOL_GPL(icc_node_destroy);
@@ -488,7 +488,7 @@ static enum m5mols_restype __find_restype(u32 code)
 	do {
 		if (code == m5mols_default_ffmt[type].code)
 			return type;
-	} while (type++ != SIZE_DEFAULT_FFMT);
+	} while (++type != SIZE_DEFAULT_FFMT);
 
 	return 0;
 }
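The single-character m5mols change above fixes a classic do/while boundary bug: with post-increment the comparison happens before the increment, so the body still runs once with the index equal to the array size. A standalone C illustration (not driver code):

	#include <stdio.h>

	int main(void)
	{
		int n = 2, i, iters;

		/* Post-increment: body also runs for i == n (out of bounds). */
		i = 0; iters = 0;
		do { iters++; } while (i++ != n);
		printf("post-increment: %d iterations, last index %d\n", iters, n);

		/* Pre-increment: the loop stops before the body sees i == n. */
		i = 0; iters = 0;
		do { iters++; } while (++i != n);
		printf("pre-increment:  %d iterations, last index %d\n", iters, n - 1);
		return 0;
	}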
@@ -1818,7 +1818,6 @@ static void atmci_tasklet_func(unsigned long priv)
 				atmci_writel(host, ATMCI_IER, ATMCI_NOTBUSY);
 				state = STATE_WAITING_NOTBUSY;
 			} else if (host->mrq->stop) {
-				atmci_writel(host, ATMCI_IER, ATMCI_CMDRDY);
 				atmci_send_stop_cmd(host, data);
 				state = STATE_SENDING_STOP;
 			} else {

@@ -1851,8 +1850,6 @@ static void atmci_tasklet_func(unsigned long priv)
 				 * command to send.
 				 */
 				if (host->mrq->stop) {
-					atmci_writel(host, ATMCI_IER,
-						     ATMCI_CMDRDY);
 					atmci_send_stop_cmd(host, data);
 					state = STATE_SENDING_STOP;
 				} else {
@@ -369,7 +369,7 @@ static void sdhci_am654_write_b(struct sdhci_host *host, u8 val, int reg)
 					   MAX_POWER_ON_TIMEOUT, false, host, val,
 					   reg);
 		if (ret)
-			dev_warn(mmc_dev(host->mmc), "Power on failed\n");
+			dev_info(mmc_dev(host->mmc), "Power on failed\n");
 	}
 }
 
@@ -2734,7 +2734,7 @@ static int mv88e6xxx_get_max_mtu(struct dsa_switch *ds, int port)
 		return 10240 - VLAN_ETH_HLEN - EDSA_HLEN - ETH_FCS_LEN;
 	else if (chip->info->ops->set_max_frame_size)
-		return 1632 - VLAN_ETH_HLEN - EDSA_HLEN - ETH_FCS_LEN;
+		return 1522 - VLAN_ETH_HLEN - EDSA_HLEN - ETH_FCS_LEN;
 	return ETH_DATA_LEN;
 }
 
 static int mv88e6xxx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)

@@ -2742,6 +2742,17 @@ static int mv88e6xxx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
 	struct mv88e6xxx_chip *chip = ds->priv;
 	int ret = 0;
 
+	/* For families where we don't know how to alter the MTU,
+	 * just accept any value up to ETH_DATA_LEN
+	 */
+	if (!chip->info->ops->port_set_jumbo_size &&
+	    !chip->info->ops->set_max_frame_size) {
+		if (new_mtu > ETH_DATA_LEN)
+			return -EINVAL;
+
+		return 0;
+	}
+
 	if (dsa_is_dsa_port(ds, port) || dsa_is_cpu_port(ds, port))
 		new_mtu += EDSA_HLEN;
 

@@ -2750,9 +2761,6 @@ static int mv88e6xxx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
 		ret = chip->info->ops->port_set_jumbo_size(chip, port, new_mtu);
 	else if (chip->info->ops->set_max_frame_size)
 		ret = chip->info->ops->set_max_frame_size(chip, new_mtu);
-	else
-		if (new_mtu > 1522)
-			ret = -EINVAL;
 	mv88e6xxx_reg_unlock(chip);
 
 	return ret;
@@ -14851,6 +14851,7 @@ static int i40e_init_recovery_mode(struct i40e_pf *pf, struct i40e_hw *hw)
 	int err;
 	int v_idx;
 
+	pci_set_drvdata(pf->pdev, pf);
 	pci_save_state(pf->pdev);
 
 	/* set up periodic task facility */
@@ -169,8 +169,6 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 	}
 	netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
 
-	ice_qvec_dis_irq(vsi, rx_ring, q_vector);
-
 	ice_fill_txq_meta(vsi, tx_ring, &txq_meta);
 	err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, tx_ring, &txq_meta);
 	if (err)

@@ -185,6 +183,8 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
 		if (err)
 			return err;
 	}
+	ice_qvec_dis_irq(vsi, rx_ring, q_vector);
+
 	err = ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, true);
 	if (err)
 		return err;
@@ -4986,6 +4986,11 @@ static int qed_init_wfq_param(struct qed_hwfn *p_hwfn,
 
 	num_vports = p_hwfn->qm_info.num_vports;
 
+	if (num_vports < 2) {
+		DP_NOTICE(p_hwfn, "Unexpected num_vports: %d\n", num_vports);
+		return -EINVAL;
+	}
+
 	/* Accounting for the vports which are configured for WFQ explicitly */
 	for (i = 0; i < num_vports; i++) {
 		u32 tmp_speed;
@@ -422,7 +422,7 @@ qed_mfw_get_tlv_time_value(struct qed_mfw_tlv_time *p_time,
 	if (p_time->hour > 23)
 		p_time->hour = 0;
 	if (p_time->min > 59)
-		p_time->hour = 0;
+		p_time->min = 0;
 	if (p_time->msec > 999)
 		p_time->msec = 0;
 	if (p_time->usec > 999)
@@ -290,6 +290,9 @@ static int vsw_port_probe(struct vio_dev *vdev, const struct vio_device_id *id)
 
 	hp = mdesc_grab();
 
+	if (!hp)
+		return -ENODEV;
+
 	rmac = mdesc_get_property(hp, vdev->mp, remote_macaddr_prop, &len);
 	err = -ENODEV;
 	if (!rmac) {

@@ -431,6 +431,9 @@ static int vnet_port_probe(struct vio_dev *vdev, const struct vio_device_id *id)
 
 	hp = mdesc_grab();
 
+	if (!hp)
+		return -ENODEV;
+
 	vp = vnet_find_parent(hp, vdev->mp, vdev);
 	if (IS_ERR(vp)) {
 		pr_err("Cannot find port parent vnet\n");
@@ -101,6 +101,7 @@ static unsigned int ipvlan_nf_input(void *priv, struct sk_buff *skb,
 		goto out;
 
 	skb->dev = addr->master->dev;
+	skb->skb_iif = skb->dev->ifindex;
 	len = skb->len + ETH_HLEN;
 	ipvlan_count_rx(addr->master, len, true, false);
 out:
@@ -181,8 +181,11 @@ static int lan95xx_config_aneg_ext(struct phy_device *phydev)
 static int lan87xx_read_status(struct phy_device *phydev)
 {
 	struct smsc_phy_priv *priv = phydev->priv;
+	int err;
 
-	int err = genphy_read_status(phydev);
+	err = genphy_read_status(phydev);
+	if (err)
+		return err;
 
 	if (!phydev->link && priv->energy_enable) {
 		/* Disable EDPD to wake up PHY */
@@ -2199,6 +2199,13 @@ static int smsc75xx_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
 		size = (rx_cmd_a & RX_CMD_A_LEN) - RXW_PADDING;
 		align_count = (4 - ((size + RXW_PADDING) % 4)) % 4;
 
+		if (unlikely(size > skb->len)) {
+			netif_dbg(dev, rx_err, dev->net,
+				  "size err rx_cmd_a=0x%08x\n",
+				  rx_cmd_a);
+			return 0;
+		}
+
 		if (unlikely(rx_cmd_a & RX_CMD_A_RED)) {
 			netif_dbg(dev, rx_err, dev->net,
 				  "Error rx_cmd_a=0x%08x\n", rx_cmd_a);
@@ -175,6 +175,7 @@ static int pn533_usb_send_frame(struct pn533 *dev,
 	print_hex_dump_debug("PN533 TX: ", DUMP_PREFIX_NONE, 16, 1,
 			     out->data, out->len, false);
 
+	arg.phy = phy;
 	init_completion(&arg.done);
 	cntx = phy->out_urb->context;
 	phy->out_urb->context = &arg;
@@ -286,13 +286,15 @@ EXPORT_SYMBOL(ndlc_probe);
 
 void ndlc_remove(struct llt_ndlc *ndlc)
 {
-	st_nci_remove(ndlc->ndev);
-
 	/* cancel timers */
 	del_timer_sync(&ndlc->t1_timer);
 	del_timer_sync(&ndlc->t2_timer);
 	ndlc->t2_active = false;
 	ndlc->t1_active = false;
+	/* cancel work */
+	cancel_work_sync(&ndlc->sm_work);
+
+	st_nci_remove(ndlc->ndev);
 
 	skb_queue_purge(&ndlc->rcv_q);
 	skb_queue_purge(&ndlc->send_q);
@@ -723,16 +723,26 @@ static blk_status_t nvme_setup_discard(struct nvme_ns *ns, struct request *req,
 		range = page_address(ns->ctrl->discard_page);
 	}
 
-	__rq_for_each_bio(bio, req) {
-		u64 slba = nvme_sect_to_lba(ns, bio->bi_iter.bi_sector);
-		u32 nlb = bio->bi_iter.bi_size >> ns->lba_shift;
-
-		if (n < segments) {
-			range[n].cattr = cpu_to_le32(0);
-			range[n].nlb = cpu_to_le32(nlb);
-			range[n].slba = cpu_to_le64(slba);
+	if (queue_max_discard_segments(req->q) == 1) {
+		u64 slba = nvme_sect_to_lba(ns, blk_rq_pos(req));
+		u32 nlb = blk_rq_sectors(req) >> (ns->lba_shift - 9);
+
+		range[0].cattr = cpu_to_le32(0);
+		range[0].nlb = cpu_to_le32(nlb);
+		range[0].slba = cpu_to_le64(slba);
+		n = 1;
+	} else {
+		__rq_for_each_bio(bio, req) {
+			u64 slba = nvme_sect_to_lba(ns, bio->bi_iter.bi_sector);
+			u32 nlb = bio->bi_iter.bi_size >> ns->lba_shift;
+
+			if (n < segments) {
+				range[n].cattr = cpu_to_le32(0);
+				range[n].nlb = cpu_to_le32(nlb);
+				range[n].slba = cpu_to_le64(slba);
+			}
+			n++;
 		}
-		n++;
 	}
 
 	if (WARN_ON_ONCE(n != segments)) {
@@ -749,8 +749,10 @@ static void __nvmet_req_complete(struct nvmet_req *req, u16 status)
 
 void nvmet_req_complete(struct nvmet_req *req, u16 status)
 {
+	struct nvmet_sq *sq = req->sq;
+
 	__nvmet_req_complete(req, status);
-	percpu_ref_put(&req->sq->ref);
+	percpu_ref_put(&sq->ref);
 }
 EXPORT_SYMBOL_GPL(nvmet_req_complete);
 
@@ -911,7 +911,7 @@ static int pci_pm_resume_noirq(struct device *dev)
 	pcie_pme_root_status_cleanup(pci_dev);
 
 	if (!skip_bus_pm && prev_state == PCI_D3cold)
-		pci_bridge_wait_for_secondary_bus(pci_dev);
+		pci_bridge_wait_for_secondary_bus(pci_dev, "resume", PCI_RESET_WAIT);
 
 	if (pci_has_legacy_pm_support(pci_dev))
 		return 0;

@@ -1298,7 +1298,7 @@ static int pci_pm_runtime_resume(struct device *dev)
 	pci_pm_default_resume(pci_dev);
 
 	if (prev_state == PCI_D3cold)
-		pci_bridge_wait_for_secondary_bus(pci_dev);
+		pci_bridge_wait_for_secondary_bus(pci_dev, "resume", PCI_RESET_WAIT);
 
 	if (pm && pm->runtime_resume)
 		error = pm->runtime_resume(dev);
@@ -164,9 +164,6 @@ static int __init pcie_port_pm_setup(char *str)
 }
 __setup("pcie_port_pm=", pcie_port_pm_setup);
 
-/* Time to wait after a reset for device to become responsive */
-#define PCIE_RESET_READY_POLL_MS 60000
-
 /**
  * pci_bus_max_busnr - returns maximum PCI bus number of given bus' children
  * @bus: pointer to PCI bus structure to search

@@ -1228,7 +1225,7 @@ static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout)
 			return -ENOTTY;
 		}
 
-		if (delay > 1000)
+		if (delay > PCI_RESET_WAIT)
 			pci_info(dev, "not ready %dms after %s; waiting\n",
 				 delay - 1, reset_type);
 

@@ -1237,7 +1234,7 @@ static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout)
 		pci_read_config_dword(dev, PCI_COMMAND, &id);
 	}
 
-	if (delay > 1000)
+	if (delay > PCI_RESET_WAIT)
 		pci_info(dev, "ready %dms after %s\n", delay - 1,
 			 reset_type);
 

@@ -4799,24 +4796,31 @@ static int pci_bus_max_d3cold_delay(const struct pci_bus *bus)
 /**
  * pci_bridge_wait_for_secondary_bus - Wait for secondary bus to be accessible
  * @dev: PCI bridge
+ * @reset_type: reset type in human-readable form
+ * @timeout: maximum time to wait for devices on secondary bus (milliseconds)
  *
  * Handle necessary delays before access to the devices on the secondary
- * side of the bridge are permitted after D3cold to D0 transition.
+ * side of the bridge are permitted after D3cold to D0 transition
+ * or Conventional Reset.
  *
  * For PCIe this means the delays in PCIe 5.0 section 6.6.1. For
  * conventional PCI it means Tpvrh + Trhfa specified in PCI 3.0 section
  * 4.3.2.
+ *
+ * Return 0 on success or -ENOTTY if the first device on the secondary bus
+ * failed to become accessible.
  */
-void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
+int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type,
+				      int timeout)
 {
 	struct pci_dev *child;
 	int delay;
 
 	if (pci_dev_is_disconnected(dev))
-		return;
+		return 0;
 
 	if (!pci_is_bridge(dev))
-		return;
+		return 0;
 
 	down_read(&pci_bus_sem);
 

@@ -4828,14 +4832,14 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
 	 */
 	if (!dev->subordinate || list_empty(&dev->subordinate->devices)) {
 		up_read(&pci_bus_sem);
-		return;
+		return 0;
 	}
 
 	/* Take d3cold_delay requirements into account */
 	delay = pci_bus_max_d3cold_delay(dev->subordinate);
 	if (!delay) {
 		up_read(&pci_bus_sem);
-		return;
+		return 0;
 	}
 
 	child = list_first_entry(&dev->subordinate->devices, struct pci_dev,

@@ -4844,14 +4848,12 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
 
 	/*
 	 * Conventional PCI and PCI-X we need to wait Tpvrh + Trhfa before
-	 * accessing the device after reset (that is 1000 ms + 100 ms). In
-	 * practice this should not be needed because we don't do power
-	 * management for them (see pci_bridge_d3_possible()).
+	 * accessing the device after reset (that is 1000 ms + 100 ms).
	 */
 	if (!pci_is_pcie(dev)) {
 		pci_dbg(dev, "waiting %d ms for secondary bus\n", 1000 + delay);
 		msleep(1000 + delay);
-		return;
+		return 0;
 	}

@@ -4868,11 +4870,11 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
 	 * configuration requests if we only wait for 100 ms (see
 	 * https://bugzilla.kernel.org/show_bug.cgi?id=203885).
 	 *
-	 * Therefore we wait for 100 ms and check for the device presence.
-	 * If it is still not present give it an additional 100 ms.
+	 * Therefore we wait for 100 ms and check for the device presence
+	 * until the timeout expires.
 	 */
 	if (!pcie_downstream_port(dev))
-		return;
+		return 0;
 
 	if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
 		pci_dbg(dev, "waiting %d ms for downstream link\n", delay);

@@ -4883,14 +4885,11 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
 		if (!pcie_wait_for_link_delay(dev, true, delay)) {
 			/* Did not train, no need to wait any further */
 			pci_info(dev, "Data Link Layer Link Active not set in 1000 msec\n");
-			return;
+			return -ENOTTY;
 		}
 	}
 
-	if (!pci_device_is_present(child)) {
-		pci_dbg(child, "waiting additional %d ms to become accessible\n", delay);
-		msleep(delay);
-	}
+	return pci_dev_wait(child, reset_type, timeout - delay);
 }
 
 void pci_reset_secondary_bus(struct pci_dev *dev)

@@ -4909,15 +4908,6 @@ void pci_reset_secondary_bus(struct pci_dev *dev)
 
 	ctrl &= ~PCI_BRIDGE_CTL_BUS_RESET;
 	pci_write_config_word(dev, PCI_BRIDGE_CONTROL, ctrl);
-
-	/*
-	 * Trhfa for conventional PCI is 2^25 clock cycles.
-	 * Assuming a minimum 33MHz clock this results in a 1s
-	 * delay before we can consider subordinate devices to
-	 * be re-initialized. PCIe has some ways to shorten this,
-	 * but we don't make use of them yet.
-	 */
-	ssleep(1);
 }
 
 void __weak pcibios_reset_secondary_bus(struct pci_dev *dev)

@@ -4936,7 +4926,8 @@ int pci_bridge_secondary_bus_reset(struct pci_dev *dev)
 {
 	pcibios_reset_secondary_bus(dev);
 
-	return pci_dev_wait(dev, "bus reset", PCIE_RESET_READY_POLL_MS);
+	return pci_bridge_wait_for_secondary_bus(dev, "bus reset",
+						 PCIE_RESET_READY_POLL_MS);
 }
 EXPORT_SYMBOL_GPL(pci_bridge_secondary_bus_reset);
 
@@ -48,6 +48,19 @@ int pci_bus_error_reset(struct pci_dev *dev);
 #define PCI_PM_D3HOT_WAIT 10 /* msec */
 #define PCI_PM_D3COLD_WAIT 100 /* msec */
 
+/*
+ * Following exit from Conventional Reset, devices must be ready within 1 sec
+ * (PCIe r6.0 sec 6.6.1). A D3cold to D0 transition implies a Conventional
+ * Reset (PCIe r6.0 sec 5.8).
+ */
+#define PCI_RESET_WAIT 1000 /* msec */
+/*
+ * Devices may extend the 1 sec period through Request Retry Status completions
+ * (PCIe r6.0 sec 2.3.1). The spec does not provide an upper limit, but 60 sec
+ * ought to be enough for any device to become responsive.
+ */
+#define PCIE_RESET_READY_POLL_MS 60000 /* msec */
+
 /**
  * struct pci_platform_pm_ops - Firmware PM callbacks
 *

@@ -109,7 +122,8 @@ void pci_allocate_cap_save_buffers(struct pci_dev *dev);
 void pci_free_cap_save_buffers(struct pci_dev *dev);
 bool pci_bridge_d3_possible(struct pci_dev *dev);
 void pci_bridge_d3_update(struct pci_dev *dev);
-void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev);
+int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type,
+				      int timeout);
 
 static inline void pci_wakeup_event(struct pci_dev *dev)
 {
@@ -170,8 +170,8 @@ pci_ers_result_t dpc_reset_link(struct pci_dev *pdev)
 	pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS,
 			      PCI_EXP_DPC_STATUS_TRIGGER);
 
-	if (!pcie_wait_for_link(pdev, true)) {
-		pci_info(pdev, "Data Link Layer Link Active not set in 1000 msec\n");
+	if (pci_bridge_wait_for_secondary_bus(pdev, "DPC",
+					      PCIE_RESET_READY_POLL_MS)) {
 		clear_bit(PCI_DPC_RECOVERED, &pdev->priv_flags);
 		ret = PCI_ERS_RESULT_DISCONNECT;
 	} else {
@@ -322,10 +322,7 @@ static void scsi_host_dev_release(struct device *dev)
 	struct Scsi_Host *shost = dev_to_shost(dev);
 	struct device *parent = dev->parent;
 
-	/* In case scsi_remove_host() has not been called. */
-	scsi_proc_hostdir_rm(shost->hostt);
-
-	/* Wait for functions invoked through call_rcu(&shost->rcu, ...) */
+	/* Wait for functions invoked through call_rcu(&scmd->rcu, ...) */
 	rcu_barrier();
 
 	if (shost->tmf_work_q)
@@ -670,7 +670,7 @@ mpt3sas_transport_port_add(struct MPT3SAS_ADAPTER *ioc, u16 handle,
 		goto out_fail;
 	}
 	port = sas_port_alloc_num(sas_node->parent_dev);
-	if ((sas_port_add(port))) {
+	if (!port || (sas_port_add(port))) {
 		ioc_err(ioc, "failure at %s:%d/%s()!\n",
 			__FILE__, __LINE__, __func__);
 		goto out_fail;

@@ -695,6 +695,12 @@ mpt3sas_transport_port_add(struct MPT3SAS_ADAPTER *ioc, u16 handle,
 		rphy = sas_expander_alloc(port,
 		    mpt3sas_port->remote_identify.device_type);
 
+	if (!rphy) {
+		ioc_err(ioc, "failure at %s:%d/%s()!\n",
+			__FILE__, __LINE__, __func__);
+		goto out_delete_port;
+	}
+
 	rphy->identify = mpt3sas_port->remote_identify;
 
 	if (mpt3sas_port->remote_identify.device_type == SAS_END_DEVICE) {

@@ -714,6 +720,7 @@ mpt3sas_transport_port_add(struct MPT3SAS_ADAPTER *ioc, u16 handle,
 			__FILE__, __LINE__, __func__);
 		sas_rphy_free(rphy);
 		rphy = NULL;
+		goto out_delete_port;
 	}
 
 	if (mpt3sas_port->remote_identify.device_type == SAS_END_DEVICE) {

@@ -740,7 +747,10 @@ mpt3sas_transport_port_add(struct MPT3SAS_ADAPTER *ioc, u16 handle,
 	    rphy_to_expander_device(rphy));
 	return mpt3sas_port;
 
- out_fail:
+ out_delete_port:
+	sas_port_delete(port);
+
+ out_fail:
 	list_for_each_entry_safe(mpt3sas_phy, next, &mpt3sas_port->phy_list,
 	    port_siblings)
 		list_del(&mpt3sas_phy->port_siblings);
@@ -106,8 +106,8 @@ static int serial8250_em_probe(struct platform_device *pdev)
 	memset(&up, 0, sizeof(up));
 	up.port.mapbase = regs->start;
 	up.port.irq = irq;
-	up.port.type = PORT_UNKNOWN;
-	up.port.flags = UPF_BOOT_AUTOCONF | UPF_FIXED_PORT | UPF_IOREMAP;
+	up.port.type = PORT_16750;
+	up.port.flags = UPF_FIXED_PORT | UPF_IOREMAP | UPF_FIXED_TYPE;
 	up.port.dev = &pdev->dev;
 	up.port.private_data = priv;
 
@@ -2159,9 +2159,15 @@ lpuart32_set_termios(struct uart_port *port, struct ktermios *termios,
 	/* update the per-port timeout */
 	uart_update_timeout(port, termios->c_cflag, baud);
 
-	/* wait transmit engin complete */
-	lpuart32_write(&sport->port, 0, UARTMODIR);
-	lpuart32_wait_bit_set(&sport->port, UARTSTAT, UARTSTAT_TC);
+	/*
+	 * LPUART Transmission Complete Flag may never be set while queuing a break
+	 * character, so skip waiting for transmission complete when UARTCTRL_SBK is
+	 * asserted.
+	 */
+	if (!(old_ctrl & UARTCTRL_SBK)) {
+		lpuart32_write(&sport->port, 0, UARTMODIR);
+		lpuart32_wait_bit_set(&sport->port, UARTSTAT, UARTSTAT_TC);
+	}
 
 	/* disable transmit and receive */
 	lpuart32_write(&sport->port, old_ctrl & ~(UARTCTRL_TE | UARTCTRL_RE),
@@ -921,6 +921,28 @@ SETUP_HCRX(struct stifb_info *fb)
 
 /* ------------------- driver specific functions --------------------------- */
 
+static int
+stifb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
+{
+	struct stifb_info *fb = container_of(info, struct stifb_info, info);
+
+	if (var->xres != fb->info.var.xres ||
+	    var->yres != fb->info.var.yres ||
+	    var->bits_per_pixel != fb->info.var.bits_per_pixel)
+		return -EINVAL;
+
+	var->xres_virtual = var->xres;
+	var->yres_virtual = var->yres;
+	var->xoffset = 0;
+	var->yoffset = 0;
+	var->grayscale = fb->info.var.grayscale;
+	var->red.length = fb->info.var.red.length;
+	var->green.length = fb->info.var.green.length;
+	var->blue.length = fb->info.var.blue.length;
+
+	return 0;
+}
+
 static int
 stifb_setcolreg(u_int regno, u_int red, u_int green,
 		u_int blue, u_int transp, struct fb_info *info)

@@ -1145,6 +1167,7 @@ stifb_init_display(struct stifb_info *fb)
 
 static const struct fb_ops stifb_ops = {
 	.owner = THIS_MODULE,
+	.fb_check_var = stifb_check_var,
 	.fb_setcolreg = stifb_setcolreg,
 	.fb_blank = stifb_blank,
 	.fb_fillrect = stifb_fillrect,

@@ -1164,6 +1187,7 @@ static int __init stifb_init_fb(struct sti_struct *sti, int bpp_pref)
 	struct stifb_info *fb;
 	struct fb_info *info;
 	unsigned long sti_rom_address;
+	char modestr[32];
 	char *dev_name;
 	int bpp, xres, yres;
 

@@ -1342,6 +1366,9 @@ static int __init stifb_init_fb(struct sti_struct *sti, int bpp_pref)
 	info->flags = FBINFO_HWACCEL_COPYAREA | FBINFO_HWACCEL_FILLRECT;
 	info->pseudo_palette = &fb->pseudo_palette;
 
+	scnprintf(modestr, sizeof(modestr), "%dx%d-%d", xres, yres, bpp);
+	fb_find_mode(&info->var, info, modestr, NULL, 0, NULL, bpp);
+
 	/* This has to be done !!! */
 	if (fb_alloc_cmap(&info->cmap, NR_PALETTE, 0))
 		goto out_err1;
fs/attr.c | 70

@@ -18,6 +18,65 @@
 #include <linux/evm.h>
 #include <linux/ima.h>
 
+#include "internal.h"
+
+/**
+ * setattr_should_drop_sgid - determine whether the setgid bit needs to be
+ *                            removed
+ * @inode: inode to check
+ *
+ * This function determines whether the setgid bit needs to be removed.
+ * We retain backwards compatibility and require setgid bit to be removed
+ * unconditionally if S_IXGRP is set. Otherwise we have the exact same
+ * requirements as setattr_prepare() and setattr_copy().
+ *
+ * Return: ATTR_KILL_SGID if setgid bit needs to be removed, 0 otherwise.
+ */
+int setattr_should_drop_sgid(const struct inode *inode)
+{
+	umode_t mode = inode->i_mode;
+
+	if (!(mode & S_ISGID))
+		return 0;
+	if (mode & S_IXGRP)
+		return ATTR_KILL_SGID;
+	if (!in_group_or_capable(inode, inode->i_gid))
+		return ATTR_KILL_SGID;
+	return 0;
+}
+
+/**
+ * setattr_should_drop_suidgid - determine whether the set{g,u}id bit needs to
+ *                               be dropped
+ * @inode: inode to check
+ *
+ * This function determines whether the set{g,u}id bits need to be removed.
+ * If the setuid bit needs to be removed ATTR_KILL_SUID is returned. If the
+ * setgid bit needs to be removed ATTR_KILL_SGID is returned. If both
+ * set{g,u}id bits need to be removed the corresponding mask of both flags is
+ * returned.
+ *
+ * Return: A mask of ATTR_KILL_S{G,U}ID indicating which - if any - setid bits
+ *         to remove, 0 otherwise.
+ */
+int setattr_should_drop_suidgid(struct inode *inode)
+{
+	umode_t mode = inode->i_mode;
+	int kill = 0;
+
+	/* suid always must be killed */
+	if (unlikely(mode & S_ISUID))
+		kill = ATTR_KILL_SUID;
+
+	kill |= setattr_should_drop_sgid(inode);
+
+	if (unlikely(kill && !capable(CAP_FSETID) && S_ISREG(mode)))
+		return kill;
+
+	return 0;
+}
+EXPORT_SYMBOL(setattr_should_drop_suidgid);
+
 static bool chown_ok(const struct inode *inode, kuid_t uid)
 {
 	if (uid_eq(current_fsuid(), inode->i_uid) &&

@@ -90,9 +149,8 @@ int setattr_prepare(struct dentry *dentry, struct iattr *attr)
 		if (!inode_owner_or_capable(inode))
 			return -EPERM;
 		/* Also check the setgid bit! */
-		if (!in_group_p((ia_valid & ATTR_GID) ? attr->ia_gid :
-				inode->i_gid) &&
-		    !capable_wrt_inode_uidgid(inode, CAP_FSETID))
+		if (!in_group_or_capable(inode, (ia_valid & ATTR_GID) ?
+					 attr->ia_gid : inode->i_gid))
 			attr->ia_mode &= ~S_ISGID;
 	}
 

@@ -193,9 +251,7 @@ void setattr_copy(struct inode *inode, const struct iattr *attr)
 		inode->i_ctime = attr->ia_ctime;
 	if (ia_valid & ATTR_MODE) {
 		umode_t mode = attr->ia_mode;
 
-		if (!in_group_p(inode->i_gid) &&
-		    !capable_wrt_inode_uidgid(inode, CAP_FSETID))
+		if (!in_group_or_capable(inode, inode->i_gid))
 			mode &= ~S_ISGID;
 		inode->i_mode = mode;
 	}

@@ -297,7 +353,7 @@ int notify_change(struct dentry * dentry, struct iattr * attr, struct inode **de
 		}
 	}
 	if (ia_valid & ATTR_KILL_SGID) {
-		if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP)) {
+		if (mode & S_ISGID) {
 			if (!(ia_valid & ATTR_MODE)) {
 				ia_valid = attr->ia_valid |= ATTR_MODE;
 				attr->ia_mode = inode->i_mode;
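The in_group_or_capable() helper called in the attr.c hunks above is added by the companion "attr: add in_group_or_capable()" patch in this same series and is not shown in this excerpt. A sketch of the check it factors out, inferred from the open-coded call sites being replaced here (an assumption, not the verbatim backport):

	/*
	 * Inferred sketch: true if the caller is in @gid, or holds
	 * CAP_FSETID relative to @inode, mirroring the open-coded
	 * in_group_p()/capable_wrt_inode_uidgid() checks replaced above.
	 */
	bool in_group_or_capable(const struct inode *inode, kgid_t gid)
	{
		if (in_group_p(gid))
			return true;
		if (capable_wrt_inode_uidgid(inode, CAP_FSETID))
			return true;
		return false;
	}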
@@ -236,15 +236,32 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
 		size[0] = 8; /* sizeof __le64 */
 		data[0] = ptr;
 
-		rc = SMB2_set_info_init(tcon, server,
-					&rqst[num_rqst], COMPOUND_FID,
-					COMPOUND_FID, current->tgid,
-					FILE_END_OF_FILE_INFORMATION,
-					SMB2_O_INFO_FILE, 0, data, size);
+		if (cfile) {
+			rc = SMB2_set_info_init(tcon, server,
+						&rqst[num_rqst],
+						cfile->fid.persistent_fid,
+						cfile->fid.volatile_fid,
+						current->tgid,
+						FILE_END_OF_FILE_INFORMATION,
+						SMB2_O_INFO_FILE, 0,
+						data, size);
+		} else {
+			rc = SMB2_set_info_init(tcon, server,
+						&rqst[num_rqst],
+						COMPOUND_FID,
+						COMPOUND_FID,
+						current->tgid,
+						FILE_END_OF_FILE_INFORMATION,
+						SMB2_O_INFO_FILE, 0,
+						data, size);
+			if (!rc) {
+				smb2_set_next_command(tcon, &rqst[num_rqst]);
+				smb2_set_related(&rqst[num_rqst]);
+			}
+		}
 		if (rc)
 			goto finished;
-		smb2_set_next_command(tcon, &rqst[num_rqst]);
-		smb2_set_related(&rqst[num_rqst++]);
+		num_rqst++;
+		trace_smb3_set_eof_enter(xid, ses->Suid, tcon->tid, full_path);
 		break;
 	case SMB2_OP_SET_INFO:
@@ -312,7 +312,7 @@ static int
 __smb_send_rqst(struct TCP_Server_Info *server, int num_rqst,
 		struct smb_rqst *rqst)
 {
-	int rc = 0;
+	int rc;
 	struct kvec *iov;
 	int n_vec;
 	unsigned int send_length = 0;
@@ -323,6 +323,7 @@ __smb_send_rqst(struct TCP_Server_Info *server, int num_rqst,
 	struct msghdr smb_msg = {};
 	__be32 rfc1002_marker;

+	cifs_in_send_inc(server);
 	if (cifs_rdma_enabled(server)) {
 		/* return -EAGAIN when connecting or reconnecting */
 		rc = -EAGAIN;
@@ -331,14 +332,17 @@ __smb_send_rqst(struct TCP_Server_Info *server, int num_rqst,
 		goto smbd_done;
 	}

+	rc = -EAGAIN;
 	if (ssocket == NULL)
-		return -EAGAIN;
+		goto out;

+	rc = -ERESTARTSYS;
 	if (fatal_signal_pending(current)) {
 		cifs_dbg(FYI, "signal pending before send request\n");
-		return -ERESTARTSYS;
+		goto out;
 	}

+	rc = 0;
 	/* cork the socket */
 	tcp_sock_set_cork(ssocket->sk, true);

@@ -449,7 +453,8 @@ __smb_send_rqst(struct TCP_Server_Info *server, int num_rqst,
 			 rc);
 	else if (rc > 0)
 		rc = 0;
-
+out:
+	cifs_in_send_dec(server);
 	return rc;
 }

@@ -826,9 +831,7 @@ cifs_call_async(struct TCP_Server_Info *server, struct smb_rqst *rqst,
 	 * I/O response may come back and free the mid entry on another thread.
 	 */
 	cifs_save_when_sent(mid);
-	cifs_in_send_inc(server);
 	rc = smb_send_rqst(server, 1, rqst, flags);
-	cifs_in_send_dec(server);

 	if (rc < 0) {
 		revert_current_mid(server, mid->credits);
@@ -1117,9 +1120,7 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
 		else
 			midQ[i]->callback = cifs_compound_last_callback;
 	}
-	cifs_in_send_inc(server);
 	rc = smb_send_rqst(server, num_rqst, rqst, flags);
-	cifs_in_send_dec(server);

 	for (i = 0; i < num_rqst; i++)
 		cifs_save_when_sent(midQ[i]);
@@ -1356,9 +1357,7 @@ SendReceive(const unsigned int xid, struct cifs_ses *ses,

 	midQ->mid_state = MID_REQUEST_SUBMITTED;

-	cifs_in_send_inc(server);
 	rc = smb_send(server, in_buf, len);
-	cifs_in_send_dec(server);
 	cifs_save_when_sent(midQ);

 	if (rc < 0)
@@ -1495,9 +1494,7 @@ SendReceiveBlockingLock(const unsigned int xid, struct cifs_tcon *tcon,
 	}

 	midQ->mid_state = MID_REQUEST_SUBMITTED;
-	cifs_in_send_inc(server);
 	rc = smb_send(server, in_buf, len);
-	cifs_in_send_dec(server);
 	cifs_save_when_sent(midQ);

 	if (rc < 0)
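The pattern adopted above — account once at the single entry point, release at a single exit label so early error returns cannot skip the decrement — can be sketched generically. Illustrative only; the struct, field, and helper names below are hypothetical stand-ins, not the cifs ones:

#include <stdatomic.h>
#include <stdio.h>

struct server { atomic_int in_send; int socket_up; };

static int do_send(struct server *srv) { (void)srv; return 0; }

static int send_one(struct server *srv)
{
	int rc;

	atomic_fetch_add(&srv->in_send, 1);	/* entry: count us in-send   */

	rc = -1;
	if (!srv->socket_up)
		goto out;			/* early exit still balances */

	rc = do_send(srv);
out:
	atomic_fetch_sub(&srv->in_send, 1);	/* single exit: always runs  */
	return rc;
}

int main(void)
{
	struct server srv = { .in_send = 0, .socket_up = 0 };

	send_one(&srv);
	printf("in_send after failed send: %d\n", atomic_load(&srv.in_send)); /* 0 */
	return 0;
}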
@@ -4753,13 +4753,6 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
 		goto bad_inode;
 	raw_inode = ext4_raw_inode(&iloc);

-	if ((ino == EXT4_ROOT_INO) && (raw_inode->i_links_count == 0)) {
-		ext4_error_inode(inode, function, line, 0,
-				 "iget: root inode unallocated");
-		ret = -EFSCORRUPTED;
-		goto bad_inode;
-	}
-
 	if ((flags & EXT4_IGET_HANDLE) &&
 	    (raw_inode->i_links_count == 0) && (raw_inode->i_mode == 0)) {
 		ret = -ESTALE;
@@ -4832,11 +4825,16 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
 	 *	NeilBrown 1999oct15
 	 */
 	if (inode->i_nlink == 0) {
-		if ((inode->i_mode == 0 ||
+		if ((inode->i_mode == 0 || flags & EXT4_IGET_SPECIAL ||
 		     !(EXT4_SB(inode->i_sb)->s_mount_state & EXT4_ORPHAN_FS)) &&
 		    ino != EXT4_BOOT_LOADER_INO) {
-			/* this inode is deleted */
-			ret = -ESTALE;
+			/* this inode is deleted or unallocated */
+			if (flags & EXT4_IGET_SPECIAL) {
+				ext4_error_inode(inode, function, line, 0,
+						 "iget: special inode unallocated");
+				ret = -EFSCORRUPTED;
+			} else
+				ret = -ESTALE;
 			goto bad_inode;
 		}
 		/* The only unlinked inodes we let through here have
@@ -4037,10 +4037,8 @@ static int ext4_rename(struct inode *old_dir, struct dentry *old_dentry,
 			goto end_rename;
 		}
 		retval = ext4_rename_dir_prepare(handle, &old);
-		if (retval) {
-			inode_unlock(old.inode);
+		if (retval)
 			goto end_rename;
-		}
 	}
 	/*
 	 * If we're renaming a file within an inline_data dir and adding or
@@ -386,6 +386,17 @@ static int ext4_xattr_inode_iget(struct inode *parent, unsigned long ea_ino,
 	struct inode *inode;
 	int err;

+	/*
+	 * We have to check for this corruption early as otherwise
+	 * iget_locked() could wait indefinitely for the state of our
+	 * parent inode.
+	 */
+	if (parent->i_ino == ea_ino) {
+		ext4_error(parent->i_sb,
+			   "Parent and EA inode have the same ino %lu", ea_ino);
+		return -EFSCORRUPTED;
+	}
+
 	inode = ext4_iget(parent->i_sb, ea_ino, EXT4_IGET_NORMAL);
 	if (IS_ERR(inode)) {
 		err = PTR_ERR(inode);
fs/inode.c
@@ -1854,35 +1854,6 @@ void touch_atime(const struct path *path)
 }
 EXPORT_SYMBOL_NS(touch_atime, ANDROID_GKI_VFS_EXPORT_ONLY);

-/*
- * The logic we want is
- *
- *	if suid or (sgid and xgrp)
- *		remove privs
- */
-int should_remove_suid(struct dentry *dentry)
-{
-	umode_t mode = d_inode(dentry)->i_mode;
-	int kill = 0;
-
-	/* suid always must be killed */
-	if (unlikely(mode & S_ISUID))
-		kill = ATTR_KILL_SUID;
-
-	/*
-	 * sgid without any exec bits is just a mandatory locking mark; leave
-	 * it alone. If some exec bits are set, it's a real sgid; kill it.
-	 */
-	if (unlikely((mode & S_ISGID) && (mode & S_IXGRP)))
-		kill |= ATTR_KILL_SGID;
-
-	if (unlikely(kill && !capable(CAP_FSETID) && S_ISREG(mode)))
-		return kill;
-
-	return 0;
-}
-EXPORT_SYMBOL(should_remove_suid);
-
 /*
  * Return mask of changes for notify_change() that need to be done as a
  * response to write or truncate. Return 0 if nothing has to be changed.
@@ -1897,7 +1868,7 @@ int dentry_needs_remove_privs(struct dentry *dentry)
 	if (IS_NOSEC(inode))
 		return 0;

-	mask = should_remove_suid(dentry);
+	mask = setattr_should_drop_suidgid(inode);
 	ret = security_inode_need_killpriv(dentry);
 	if (ret < 0)
 		return ret;
@@ -2147,10 +2118,6 @@ void inode_init_owner(struct inode *inode, const struct inode *dir,
 		/* Directories are special, and always inherit S_ISGID */
 		if (S_ISDIR(mode))
 			mode |= S_ISGID;
-		else if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP) &&
-			 !in_group_p(inode->i_gid) &&
-			 !capable_wrt_inode_uidgid(dir, CAP_FSETID))
-			mode &= ~S_ISGID;
 	} else
 		inode->i_gid = current_fsgid();
 	inode->i_mode = mode;
@@ -2382,3 +2349,48 @@ int vfs_ioc_fssetxattr_check(struct inode *inode, const struct fsxattr *old_fa,
 	return 0;
 }
 EXPORT_SYMBOL(vfs_ioc_fssetxattr_check);

+/**
+ * in_group_or_capable - check whether caller is CAP_FSETID privileged
+ * @inode:	inode to check
+ * @gid:	the new/current gid of @inode
+ *
+ * Check whether @gid is in the caller's group list or if the caller is
+ * privileged with CAP_FSETID over @inode. This can be used to determine
+ * whether the setgid bit can be kept or must be dropped.
+ *
+ * Return: true if the caller is sufficiently privileged, false if not.
+ */
+bool in_group_or_capable(const struct inode *inode, kgid_t gid)
+{
+	if (in_group_p(gid))
+		return true;
+	if (capable_wrt_inode_uidgid(inode, CAP_FSETID))
+		return true;
+	return false;
+}
+
+/**
+ * mode_strip_sgid - handle the sgid bit for non-directories
+ * @dir:	parent directory inode
+ * @mode:	mode of the file to be created in @dir
+ *
+ * If the @mode of the new file has both the S_ISGID and S_IXGRP bit
+ * raised and @dir has the S_ISGID bit raised ensure that the caller is
+ * either in the group of the parent directory or they have CAP_FSETID
+ * in their user namespace and are privileged over the parent directory.
+ * In all other cases, strip the S_ISGID bit from @mode.
+ *
+ * Return: the new mode to use for the file
+ */
+umode_t mode_strip_sgid(const struct inode *dir, umode_t mode)
+{
+	if ((mode & (S_ISGID | S_IXGRP)) != (S_ISGID | S_IXGRP))
+		return mode;
+	if (S_ISDIR(mode) || !dir || !(dir->i_mode & S_ISGID))
+		return mode;
+	if (in_group_or_capable(dir, dir->i_gid))
+		return mode;
+	return mode & ~S_ISGID;
+}
EXPORT_SYMBOL(mode_strip_sgid);
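A quick userspace probe of the behavior mode_strip_sgid() enforces (editorial sketch, not from the patch; paths are arbitrary). Whether the setgid bit survives creation depends on the caller's membership in the directory's group and on CAP_FSETID, so the program reports what the running kernel decided:

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	struct stat st;
	int fd;

	mkdir("/tmp/sgid-dir", 0777);
	chmod("/tmp/sgid-dir", 02777);		/* raise S_ISGID on the dir */

	fd = open("/tmp/sgid-dir/f", O_CREAT | O_RDWR, 02777);
	if (fd < 0)
		return 1;
	fstat(fd, &st);
	/* Kept only if we are in the directory's group or have CAP_FSETID;
	 * on pre-fix kernels the bit was kept unconditionally. */
	printf("S_ISGID on new file: %s\n",
	       (st.st_mode & S_ISGID) ? "kept" : "stripped");
	close(fd);
	return 0;
}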
@@ -149,6 +149,7 @@ extern int vfs_open(const struct path *, struct file *);
 extern long prune_icache_sb(struct super_block *sb, struct shrink_control *sc);
 extern void inode_add_lru(struct inode *inode);
 extern int dentry_needs_remove_privs(struct dentry *dentry);
+bool in_group_or_capable(const struct inode *inode, kgid_t gid);

 /*
  * fs-writeback.c
@@ -196,3 +197,8 @@ int sb_init_dio_done_wq(struct super_block *sb);
 */
 int do_statx(int dfd, const char __user *filename, unsigned flags,
 	     unsigned int mask, struct statx __user *buffer);
+
+/*
+ * fs/attr.c
+ */
+int setattr_should_drop_sgid(const struct inode *inode);
@@ -138,19 +138,18 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
 	struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode);
 	struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb);
 	pgoff_t index = pos >> PAGE_SHIFT;
-	uint32_t pageofs = index << PAGE_SHIFT;
 	int ret = 0;

 	jffs2_dbg(1, "%s()\n", __func__);

-	if (pageofs > inode->i_size) {
-		/* Make new hole frag from old EOF to new page */
+	if (pos > inode->i_size) {
+		/* Make new hole frag from old EOF to new position */
 		struct jffs2_raw_inode ri;
 		struct jffs2_full_dnode *fn;
 		uint32_t alloc_len;

-		jffs2_dbg(1, "Writing new hole frag 0x%x-0x%x between current EOF and new page\n",
-			  (unsigned int)inode->i_size, pageofs);
+		jffs2_dbg(1, "Writing new hole frag 0x%x-0x%x between current EOF and new position\n",
+			  (unsigned int)inode->i_size, (uint32_t)pos);

 		ret = jffs2_reserve_space(c, sizeof(ri), &alloc_len,
 					  ALLOC_NORMAL, JFFS2_SUMMARY_INODE_SIZE);
@@ -170,10 +169,10 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
 		ri.mode = cpu_to_jemode(inode->i_mode);
 		ri.uid = cpu_to_je16(i_uid_read(inode));
 		ri.gid = cpu_to_je16(i_gid_read(inode));
-		ri.isize = cpu_to_je32(max((uint32_t)inode->i_size, pageofs));
+		ri.isize = cpu_to_je32((uint32_t)pos);
 		ri.atime = ri.ctime = ri.mtime = cpu_to_je32(JFFS2_NOW());
 		ri.offset = cpu_to_je32(inode->i_size);
-		ri.dsize = cpu_to_je32(pageofs - inode->i_size);
+		ri.dsize = cpu_to_je32((uint32_t)pos - inode->i_size);
 		ri.csize = cpu_to_je32(0);
 		ri.compr = JFFS2_COMPR_ZERO;
 		ri.node_crc = cpu_to_je32(crc32(0, &ri, sizeof(ri)-8));
@@ -203,7 +202,7 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
 			goto out_err;
 		}
 		jffs2_complete_reservation(c);
-		inode->i_size = pageofs;
+		inode->i_size = pos;
 		mutex_unlock(&f->sem);
 	}
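A worked example of the arithmetic being fixed (illustration only, assuming 4 KiB pages): rounding the write position down to a page boundary undersizes the hole frag, leaving a gap of stale bytes between the page start and the actual write position.

#include <stdio.h>

int main(void)
{
	unsigned long long pos = 0x3100;		/* write begins past EOF  */
	unsigned long long index = pos >> 12;		/* page index: 3          */
	unsigned long long pageofs = index << 12;	/* 0x3000, rounded *down* */

	/* The old code sized the hole frag with pageofs, stopping short of
	 * the write position; the fix uses pos directly. */
	printf("pageofs = %#llx, pos = %#llx, shortfall = %#llx\n",
	       pageofs, pos, pos - pageofs);
	return 0;
}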
fs/namei.c
@@ -2882,6 +2882,63 @@ void unlock_rename(struct dentry *p1, struct dentry *p2)
 }
 EXPORT_SYMBOL(unlock_rename);

+/**
+ * mode_strip_umask - handle vfs umask stripping
+ * @dir:	parent directory of the new inode
+ * @mode:	mode of the new inode to be created in @dir
+ *
+ * Umask stripping depends on whether or not the filesystem supports POSIX
+ * ACLs. If the filesystem doesn't support it umask stripping is done directly
+ * in here. If the filesystem does support POSIX ACLs umask stripping is
+ * deferred until the filesystem calls posix_acl_create().
+ *
+ * Returns: mode
+ */
+static inline umode_t mode_strip_umask(const struct inode *dir, umode_t mode)
+{
+	if (!IS_POSIXACL(dir))
+		mode &= ~current_umask();
+	return mode;
+}
+
+/**
+ * vfs_prepare_mode - prepare the mode to be used for a new inode
+ * @dir:	parent directory of the new inode
+ * @mode:	mode of the new inode
+ * @mask_perms:	allowed permission by the vfs
+ * @type:	type of file to be created
+ *
+ * This helper consolidates and enforces vfs restrictions on the @mode of a new
+ * object to be created.
+ *
+ * Umask stripping depends on whether the filesystem supports POSIX ACLs (see
+ * the kernel documentation for mode_strip_umask()). Moving umask stripping
+ * after setgid stripping allows the same ordering for both non-POSIX ACL and
+ * POSIX ACL supporting filesystems.
+ *
+ * Note that it's currently valid for @type to be 0 if a directory is created.
+ * Filesystems raise that flag individually and we need to check whether each
+ * filesystem can deal with receiving S_IFDIR from the vfs before we enforce a
+ * non-zero type.
+ *
+ * Returns: mode to be passed to the filesystem
+ */
+static inline umode_t vfs_prepare_mode(const struct inode *dir, umode_t mode,
+				       umode_t mask_perms, umode_t type)
+{
+	mode = mode_strip_sgid(dir, mode);
+	mode = mode_strip_umask(dir, mode);
+
+	/*
+	 * Apply the vfs mandated allowed permission mask and set the type of
+	 * file to be created before we call into the filesystem.
+	 */
+	mode &= (mask_perms & ~S_IFMT);
+	mode |= (type & S_IFMT);
+
+	return mode;
+}
+
 int vfs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
 	       bool want_excl)
 {
@@ -2891,8 +2948,8 @@ int vfs_create(struct inode *dir, struct dentry *dentry, umode_t mode,

 	if (!dir->i_op->create)
 		return -EACCES;	/* shouldn't it be ENOSYS? */
-	mode &= S_IALLUGO;
-	mode |= S_IFREG;
+
+	mode = vfs_prepare_mode(dir, mode, S_IALLUGO, S_IFREG);
 	error = security_inode_create(dir, dentry, mode);
 	if (error)
 		return error;
@@ -3156,8 +3213,7 @@ static struct dentry *lookup_open(struct nameidata *nd, struct file *file,
 	if (open_flag & O_CREAT) {
 		if (open_flag & O_EXCL)
 			open_flag &= ~O_TRUNC;
-		if (!IS_POSIXACL(dir->d_inode))
-			mode &= ~current_umask();
+		mode = vfs_prepare_mode(dir->d_inode, mode, mode, mode);
 		if (likely(got_write))
 			create_error = may_o_create(&nd->path, dentry, mode);
 		else
@@ -3370,8 +3426,7 @@ struct dentry *vfs_tmpfile(struct dentry *dentry, umode_t mode, int open_flag)
 	child = d_alloc(dentry, &slash_name);
 	if (unlikely(!child))
 		goto out_err;
-	if (!IS_POSIXACL(dir))
-		mode &= ~current_umask();
+	mode = vfs_prepare_mode(dir, mode, mode, mode);
 	error = dir->i_op->tmpfile(dir, child, mode);
 	if (error)
 		goto out_err;
@@ -3632,6 +3687,7 @@ int vfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode, dev_t dev)
 	if (!dir->i_op->mknod)
 		return -EPERM;

+	mode = vfs_prepare_mode(dir, mode, mode, mode);
 	error = devcgroup_inode_mknod(mode, dev);
 	if (error)
 		return error;
@@ -3680,9 +3736,8 @@ static long do_mknodat(int dfd, const char __user *filename, umode_t mode,
 	if (IS_ERR(dentry))
 		return PTR_ERR(dentry);

-	if (!IS_POSIXACL(path.dentry->d_inode))
-		mode &= ~current_umask();
-	error = security_path_mknod(&path, dentry, mode, dev);
+	error = security_path_mknod(&path, dentry,
+				    mode_strip_umask(path.dentry->d_inode, mode), dev);
 	if (error)
 		goto out;
 	switch (mode & S_IFMT) {
@@ -3730,7 +3785,7 @@ int vfs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
 	if (!dir->i_op->mkdir)
 		return -EPERM;

-	mode &= (S_IRWXUGO|S_ISVTX);
+	mode = vfs_prepare_mode(dir, mode, S_IRWXUGO | S_ISVTX, 0);
 	error = security_inode_mkdir(dir, dentry, mode);
 	if (error)
 		return error;
@@ -3757,9 +3812,8 @@ static long do_mkdirat(int dfd, const char __user *pathname, umode_t mode)
 	if (IS_ERR(dentry))
 		return PTR_ERR(dentry);

-	if (!IS_POSIXACL(path.dentry->d_inode))
-		mode &= ~current_umask();
-	error = security_path_mkdir(&path, dentry, mode);
+	error = security_path_mkdir(&path, dentry,
+				    mode_strip_umask(path.dentry->d_inode, mode));
 	if (!error)
 		error = vfs_mkdir(path.dentry->d_inode, dentry, mode);
 	done_path_create(&path, dentry);
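For reference, the umask arithmetic mode_strip_umask() applies on filesystems without POSIX ACL support, as a runnable userspace sketch (illustrative; values arbitrary):

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
	unsigned int requested = 0666;
	unsigned int um = umask(022);	/* set, and fetch the previous umask */

	umask(um);			/* restore the original value        */
	printf("requested %o, umask %o -> effective mode %o\n",
	       requested, 022, requested & ~022);	/* 0644 */
	return 0;
}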
@@ -1994,7 +1994,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
 		}
 	}

-	if (file && should_remove_suid(file->f_path.dentry)) {
+	if (file && setattr_should_drop_suidgid(file_inode(file))) {
 		ret = __ocfs2_write_remove_suid(inode, di_bh);
 		if (ret) {
 			mlog_errno(ret);
@@ -2282,7 +2282,7 @@ static int ocfs2_prepare_inode_for_write(struct file *file,
 		 * inode. There's also the dinode i_size state which
 		 * can be lost via setattr during extending writes (we
 		 * set inode->i_size at the end of a write. */
-		if (should_remove_suid(dentry)) {
+		if (setattr_should_drop_suidgid(inode)) {
 			if (meta_level == 0) {
 				ocfs2_inode_unlock_for_extent_tree(inode,
 								   &di_bh,
@@ -198,6 +198,7 @@ static struct inode *ocfs2_get_init_inode(struct inode *dir, umode_t mode)
 	 * callers. */
 	if (S_ISDIR(mode))
 		set_nlink(inode, 2);
+	mode = mode_strip_sgid(dir, mode);
 	inode_init_owner(inode, dir, mode);
 	status = dquot_initialize(inode);
 	if (status)
@@ -666,10 +666,10 @@ int chown_common(const struct path *path, uid_t user, gid_t group)
 		newattrs.ia_valid |= ATTR_GID;
 		newattrs.ia_gid = gid;
 	}
-	if (!S_ISDIR(inode->i_mode))
-		newattrs.ia_valid |=
-			ATTR_KILL_SUID | ATTR_KILL_SGID | ATTR_KILL_PRIV;
 	inode_lock(inode);
+	if (!S_ISDIR(inode->i_mode))
+		newattrs.ia_valid |= ATTR_KILL_SUID | ATTR_KILL_PRIV |
+				     setattr_should_drop_sgid(inode);
 	error = security_path_chown(path, uid, gid);
 	if (!error)
 		error = notify_change(path->dentry, &newattrs, &delegated_inode);
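The observable contract this path implements (userspace sketch, not part of the series): for an unprivileged caller, chown() must clear the setuid bit on a regular file even when the ownership change is a no-op. Run as root the bits are preserved via CAP_FSETID.

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	const char *p = "/tmp/chown-demo";
	struct stat st;
	int fd = open(p, O_CREAT | O_RDWR, 0755);

	if (fd < 0)
		return 1;
	fchmod(fd, 04755);		/* raise setuid on our own file */
	chown(p, getuid(), getgid());	/* no-op ownership change       */
	stat(p, &st);
	printf("mode after chown: %o\n", st.st_mode & 07777);	/* 0755 */
	close(fd);
	unlink(p);
	return 0;
}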
@@ -3190,7 +3190,7 @@ xfs_btree_insrec(
 	struct xfs_btree_block	*block;	/* btree block */
 	struct xfs_buf		*bp;	/* buffer for block */
 	union xfs_btree_ptr	nptr;	/* new block ptr */
-	struct xfs_btree_cur	*ncur;	/* new btree cursor */
+	struct xfs_btree_cur	*ncur = NULL;	/* new btree cursor */
 	union xfs_btree_key	nkey;	/* new block key */
 	union xfs_btree_key	*lkey;
 	int			optr;	/* old key/record index */
@@ -3270,7 +3270,7 @@ xfs_btree_insrec(
 #ifdef DEBUG
 	error = xfs_btree_check_block(cur, block, level, bp);
 	if (error)
-		return error;
+		goto error0;
 #endif

 	/*
@@ -3290,7 +3290,7 @@ xfs_btree_insrec(
 		for (i = numrecs - ptr; i >= 0; i--) {
 			error = xfs_btree_debug_check_ptr(cur, pp, i, level);
 			if (error)
-				return error;
+				goto error0;
 		}

 		xfs_btree_shift_keys(cur, kp, 1, numrecs - ptr + 1);
@@ -3375,6 +3375,8 @@ xfs_btree_insrec(
 	return 0;

 error0:
+	if (ncur)
+		xfs_btree_del_cursor(ncur, error);
 	return error;
 }
@@ -800,9 +800,6 @@ xfs_alloc_file_space(
 			quota_flag = XFS_QMOPT_RES_REGBLKS;
 		}

-		/*
-		 * Allocate and setup the transaction.
-		 */
 		error = xfs_trans_alloc(mp, &M_RES(mp)->tr_write, resblks,
 					resrtextents, 0, &tp);

@@ -830,9 +827,9 @@ xfs_alloc_file_space(
 		if (error)
 			goto error0;

-		/*
-		 * Complete the transaction
-		 */
+		ip->i_d.di_flags |= XFS_DIFLAG_PREALLOC;
+		xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
+
 		error = xfs_trans_commit(tp);
 		xfs_iunlock(ip, XFS_ILOCK_EXCL);
 		if (error)
@@ -94,8 +94,6 @@ xfs_update_prealloc_flags(
 		ip->i_d.di_flags &= ~XFS_DIFLAG_PREALLOC;

 	xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
-	if (flags & XFS_PREALLOC_SYNC)
-		xfs_trans_set_sync(tp);
 	return xfs_trans_commit(tp);
 }

@@ -852,7 +850,6 @@ xfs_file_fallocate(
 	struct inode		*inode = file_inode(file);
 	struct xfs_inode	*ip = XFS_I(inode);
 	long			error;
-	enum xfs_prealloc_flags	flags = 0;
 	uint			iolock = XFS_IOLOCK_EXCL | XFS_MMAPLOCK_EXCL;
 	loff_t			new_size = 0;
 	bool			do_file_insert = false;
@@ -897,6 +894,10 @@ xfs_file_fallocate(
 		goto out_unlock;
 	}

+	error = file_modified(file);
+	if (error)
+		goto out_unlock;
+
 	if (mode & FALLOC_FL_PUNCH_HOLE) {
 		error = xfs_free_file_space(ip, offset, len);
 		if (error)
@@ -946,8 +947,6 @@ xfs_file_fallocate(
 			}
 			do_file_insert = true;
 		} else {
-			flags |= XFS_PREALLOC_SET;
-
 			if (!(mode & FALLOC_FL_KEEP_SIZE) &&
 			    offset + len > i_size_read(inode)) {
 				new_size = offset + len;
@@ -1000,13 +999,6 @@ xfs_file_fallocate(
 		}
 	}

-	if (file->f_flags & O_DSYNC)
-		flags |= XFS_PREALLOC_SYNC;
-
-	error = xfs_update_prealloc_flags(ip, flags);
-	if (error)
-		goto out_unlock;
-
 	/* Change file size if needed */
 	if (new_size) {
 		struct iattr iattr;
@@ -1024,8 +1016,14 @@ xfs_file_fallocate(
 	 * leave shifted extents past EOF and hence losing access to
 	 * the data that is contained within them.
 	 */
-	if (do_file_insert)
+	if (do_file_insert) {
 		error = xfs_insert_file_space(ip, offset, len);
+		if (error)
+			goto out_unlock;
+	}
+
+	if (file->f_flags & O_DSYNC)
+		error = xfs_log_force_inode(ip);

 out_unlock:
 	xfs_iunlock(ip, iolock);
@@ -595,37 +595,6 @@ xfs_vn_getattr(
 	return 0;
 }

-static void
-xfs_setattr_mode(
-	struct xfs_inode	*ip,
-	struct iattr		*iattr)
-{
-	struct inode		*inode = VFS_I(ip);
-	umode_t			mode = iattr->ia_mode;
-
-	ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
-
-	inode->i_mode &= S_IFMT;
-	inode->i_mode |= mode & ~S_IFMT;
-}
-
-void
-xfs_setattr_time(
-	struct xfs_inode	*ip,
-	struct iattr		*iattr)
-{
-	struct inode		*inode = VFS_I(ip);
-
-	ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
-
-	if (iattr->ia_valid & ATTR_ATIME)
-		inode->i_atime = iattr->ia_atime;
-	if (iattr->ia_valid & ATTR_CTIME)
-		inode->i_ctime = iattr->ia_ctime;
-	if (iattr->ia_valid & ATTR_MTIME)
-		inode->i_mtime = iattr->ia_mtime;
-}
-
 static int
 xfs_vn_change_ok(
 	struct dentry		*dentry,
@@ -740,16 +709,6 @@ xfs_setattr_nonsize(
 			goto out_cancel;
 		}

-		/*
-		 * CAP_FSETID overrides the following restrictions:
-		 *
-		 * The set-user-ID and set-group-ID bits of a file will be
-		 * cleared upon successful return from chown()
-		 */
-		if ((inode->i_mode & (S_ISUID|S_ISGID)) &&
-		    !capable(CAP_FSETID))
-			inode->i_mode &= ~(S_ISUID|S_ISGID);
-
 		/*
 		 * Change the ownerships and register quota modifications
 		 * in the transaction.
@@ -761,7 +720,6 @@ xfs_setattr_nonsize(
 				olddquot1 = xfs_qm_vop_chown(tp, ip,
 							&ip->i_udquot, udqp);
 			}
-			inode->i_uid = uid;
 		}
 		if (!gid_eq(igid, gid)) {
 			if (XFS_IS_QUOTA_RUNNING(mp) && XFS_IS_GQUOTA_ON(mp)) {
@@ -772,15 +730,10 @@ xfs_setattr_nonsize(
 				olddquot2 = xfs_qm_vop_chown(tp, ip,
 							&ip->i_gdquot, gdqp);
 			}
-			inode->i_gid = gid;
 		}
 	}

-	if (mask & ATTR_MODE)
-		xfs_setattr_mode(ip, iattr);
-	if (mask & (ATTR_ATIME|ATTR_CTIME|ATTR_MTIME))
-		xfs_setattr_time(ip, iattr);
+	setattr_copy(inode, iattr);
 	xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);

 	XFS_STATS_INC(mp, xs_ig_attrchg);
@@ -1025,11 +978,8 @@ xfs_setattr_size(
 		xfs_inode_clear_eofblocks_tag(ip);
 	}

-	if (iattr->ia_valid & ATTR_MODE)
-		xfs_setattr_mode(ip, iattr);
-	if (iattr->ia_valid & (ATTR_ATIME|ATTR_CTIME|ATTR_MTIME))
-		xfs_setattr_time(ip, iattr);
-
+	ASSERT(!(iattr->ia_valid & (ATTR_UID | ATTR_GID)));
+	setattr_copy(inode, iattr);
 	xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);

 	XFS_STATS_INC(mp, xs_ig_attrchg);
@@ -18,7 +18,6 @@ extern ssize_t xfs_vn_listxattr(struct dentry *, char *data, size_t size);
 */
 #define XFS_ATTR_NOACL		0x01	/* Don't call posix_acl_chmod */

-extern void xfs_setattr_time(struct xfs_inode *ip, struct iattr *iattr);
 extern int xfs_setattr_nonsize(struct xfs_inode *ip, struct iattr *vap,
 			       int flags);
 extern int xfs_vn_setattr_nonsize(struct dentry *dentry, struct iattr *vap);
@@ -126,7 +126,6 @@ __xfs_free_perag(
 {
 	struct xfs_perag *pag = container_of(head, struct xfs_perag, rcu_head);

-	ASSERT(atomic_read(&pag->pag_ref) == 0);
 	kmem_free(pag);
 }

@@ -145,7 +144,7 @@ xfs_free_perag(
 		pag = radix_tree_delete(&mp->m_perag_tree, agno);
 		spin_unlock(&mp->m_perag_lock);
 		ASSERT(pag);
-		ASSERT(atomic_read(&pag->pag_ref) == 0);
+		XFS_IS_CORRUPT(pag->pag_mount, atomic_read(&pag->pag_ref) != 0);
 		xfs_iunlink_destroy(pag);
 		xfs_buf_hash_destroy(pag);
 		call_rcu(&pag->rcu_head, __xfs_free_perag);
@@ -164,10 +164,12 @@ xfs_fs_map_blocks(
 		 * that the blocks allocated and handed out to the client are
 		 * guaranteed to be present even after a server crash.
 		 */
-		error = xfs_update_prealloc_flags(ip,
-				XFS_PREALLOC_SET | XFS_PREALLOC_SYNC);
+		error = xfs_update_prealloc_flags(ip, XFS_PREALLOC_SET);
+		if (!error)
+			error = xfs_log_force_inode(ip);
 		if (error)
 			goto out_unlock;
+
 	} else {
 		xfs_iunlock(ip, lock_flags);
 	}
@@ -283,7 +285,8 @@ xfs_fs_commit_blocks(
 	xfs_trans_ijoin(tp, ip, XFS_ILOCK_EXCL);
 	xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);

-	xfs_setattr_time(ip, iattr);
+	ASSERT(!(iattr->ia_valid & (ATTR_UID | ATTR_GID)));
+	setattr_copy(inode, iattr);
 	if (update_isize) {
 		i_size_write(inode, iattr->ia_size);
 		ip->i_d.di_size = iattr->ia_size;
@@ -1318,8 +1318,15 @@ xfs_qm_quotacheck(

 	error = xfs_iwalk_threaded(mp, 0, 0, xfs_qm_dqusage_adjust, 0, true,
 			NULL);
-	if (error)
+	if (error) {
+		/*
+		 * The inode walk may have partially populated the dquot
+		 * caches.  We must purge them before disabling quota and
+		 * tearing down the quotainfo, or else the dquots will leak.
+		 */
+		xfs_qm_dqpurge_all(mp, XFS_QMOPT_QUOTALL);
 		goto error_return;
+	}

 	/*
 	 * We've made all the changes that we need to make incore.  Flush them
@@ -427,11 +427,11 @@ struct drm_bridge_funcs {
 	 *
 	 * The returned array must be allocated with kmalloc() and will be
 	 * freed by the caller. If the allocation fails, NULL should be
-	 * returned. num_output_fmts must be set to the returned array size.
+	 * returned. num_input_fmts must be set to the returned array size.
 	 * Formats listed in the returned array should be listed in decreasing
 	 * preference order (the core will try all formats until it finds one
 	 * that works). When the format is not supported NULL should be
-	 * returned and num_output_fmts should be set to 0.
+	 * returned and num_input_fmts should be set to 0.
 	 *
 	 * This method is called on all elements of the bridge chain as part of
 	 * the bus format negotiation process that happens in
@@ -1816,6 +1816,7 @@ extern long compat_ptr_ioctl(struct file *file, unsigned int cmd,
 extern void inode_init_owner(struct inode *inode, const struct inode *dir,
 			umode_t mode);
 extern bool may_open_dev(const struct path *path);
+umode_t mode_strip_sgid(const struct inode *dir, umode_t mode);

 /*
  * This is the "filldir" function type, used by readdir() to let
@@ -3027,7 +3028,7 @@ extern void __destroy_inode(struct inode *);
 extern struct inode *new_inode_pseudo(struct super_block *sb);
 extern struct inode *new_inode(struct super_block *sb);
 extern void free_inode_nonrcu(struct inode *inode);
-extern int should_remove_suid(struct dentry *);
+extern int setattr_should_drop_suidgid(struct inode *);
 extern int file_remove_privs(struct file *);

 extern void __insert_inode_hash(struct inode *, unsigned long hashval);
@@ -3476,7 +3477,7 @@ int __init get_filesystem_list(char *buf);

 static inline bool is_sxid(umode_t mode)
 {
-	return (mode & S_ISUID) || ((mode & S_ISGID) && (mode & S_IXGRP));
+	return mode & (S_ISUID | S_ISGID);
 }

 static inline int check_sticky(struct inode *dir, struct inode *inode)
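The is_sxid() change widens the predicate: a setgid bit without group-exec (historically a mandatory-locking marker) now also counts. A standalone comparison of the two forms (editorial sketch; the helper names are hypothetical):

#include <stdio.h>
#include <sys/stat.h>

static int is_sxid_old(mode_t m)
{
	return (m & S_ISUID) || ((m & S_ISGID) && (m & S_IXGRP));
}

static int is_sxid_new(mode_t m)
{
	return m & (S_ISUID | S_ISGID);
}

int main(void)
{
	mode_t locking_mark = S_ISGID | 0644;	/* sgid, no group exec */

	printf("old: %d, new: %d\n",
	       is_sxid_old(locking_mark), !!is_sxid_new(locking_mark));
	return 0;
}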
@@ -798,6 +798,7 @@ struct hid_driver {
 * @raw_request: send raw report request to device (e.g. feature report)
 * @output_report: send output report to device
 * @idle: send idle request to device
+ * @max_buffer_size: over-ride maximum data buffer size (default: HID_MAX_BUFFER_SIZE)
 */
struct hid_ll_driver {
	int (*start)(struct hid_device *hdev);
@@ -822,6 +823,8 @@ struct hid_ll_driver {
	int (*output_report) (struct hid_device *hdev, __u8 *buf, size_t len);

	int (*idle)(struct hid_device *hdev, int report, int idle, int reqtype);
+
+	unsigned int max_buffer_size;
 };

 extern struct hid_ll_driver i2c_hid_ll_driver;
@@ -264,9 +264,11 @@ struct hh_cache {
 * relationship HH alignment <= LL alignment.
 */
 #define LL_RESERVED_SPACE(dev) \
-	((((dev)->hard_header_len+(dev)->needed_headroom)&~(HH_DATA_MOD - 1)) + HH_DATA_MOD)
+	((((dev)->hard_header_len + READ_ONCE((dev)->needed_headroom)) \
+	  & ~(HH_DATA_MOD - 1)) + HH_DATA_MOD)
 #define LL_RESERVED_SPACE_EXTRA(dev,extra) \
-	((((dev)->hard_header_len+(dev)->needed_headroom+(extra))&~(HH_DATA_MOD - 1)) + HH_DATA_MOD)
+	((((dev)->hard_header_len + READ_ONCE((dev)->needed_headroom) + (extra)) \
+	  & ~(HH_DATA_MOD - 1)) + HH_DATA_MOD)

 struct header_ops {
	int	(*create) (struct sk_buff *skb, struct net_device *dev,
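READ_ONCE()/WRITE_ONCE() here annotate a field that is written while other CPUs read it locklessly on the fast path. A rough userspace analogue (illustrative only — the kernel macros are more elaborate than a plain volatile cast, but the intent is the same: no torn, fused, or repeated accesses):

#include <stdio.h>

#define READ_ONCE(x)	(*(const volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v)	(*(volatile __typeof__(x) *)&(x) = (v))

static unsigned short needed_headroom;	/* written by one path, read
					 * locklessly by others */

int main(void)
{
	unsigned short headroom = 32;

	if (headroom > READ_ONCE(needed_headroom))
		WRITE_ONCE(needed_headroom, headroom);	/* single plain store */
	printf("needed_headroom = %u\n", READ_ONCE(needed_headroom));
	return 0;
}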
@@ -97,7 +97,10 @@ struct intc_hw_desc {
	unsigned int nr_subgroups;
 };

-#define _INTC_ARRAY(a) a, __same_type(a, NULL) ? 0 : sizeof(a)/sizeof(*a)
+#define _INTC_SIZEOF_OR_ZERO(a) (_Generic(a, \
+				 typeof(NULL):	0, \
+				 default:	sizeof(a)))
+#define _INTC_ARRAY(a) a, _INTC_SIZEOF_OR_ZERO(a)/sizeof(*a)

 #define INTC_HW_DESC(vectors, groups, mask_regs,	\
		     prio_regs, sense_regs, ack_regs)	\
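The _Generic trick above can be tried standalone (hypothetical macro name; assumes a C11 compiler where NULL is ((void *)0)): a bare NULL argument selects the 0 branch, while a real array decays to a non-void pointer, falls through to default, and keeps its sizeof.

#include <stddef.h>
#include <stdio.h>

#define SIZEOF_OR_ZERO(a) (_Generic((a), void *: 0, default: sizeof(a)))

static unsigned int vectors[4];

int main(void)
{
	printf("array: %zu bytes\n", (size_t)SIZEOF_OR_ZERO(vectors));	/* 16 */
	printf("NULL:  %zu bytes\n", (size_t)SIZEOF_OR_ZERO(NULL));	/* 0  */
	return 0;
}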
@@ -234,12 +234,11 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
 * not add unwanted padding between the beginning of the section and the
 * structure. Force alignment to the same alignment as the section start.
 *
- * When lockdep is enabled, we make sure to always do the RCU portions of
- * the tracepoint code, regardless of whether tracing is on. However,
- * don't check if the condition is false, due to interaction with idle
- * instrumentation. This lets us find RCU issues triggered with tracepoints
- * even when this tracepoint is off. This code has no purpose other than
- * poking RCU a bit.
+ * When lockdep is enabled, we make sure to always test if RCU is
+ * "watching" regardless if the tracepoint is enabled or not. Tracepoints
+ * require RCU to be active, and it should always warn at the tracepoint
+ * site if it is not watching, as it will need to be active when the
+ * tracepoint is enabled.
 */
 #define __DECLARE_TRACE(name, proto, args, cond, data_proto, data_args) \
	extern int __traceiter_##name(data_proto);			\
@@ -253,9 +252,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
				TP_ARGS(data_args),			\
				TP_CONDITION(cond), 0);			\
		if (IS_ENABLED(CONFIG_LOCKDEP) && (cond)) {		\
-			rcu_read_lock_sched_notrace();			\
-			rcu_dereference_sched(__tracepoint_##name.funcs);\
-			rcu_read_unlock_sched_notrace();		\
+			WARN_ON_ONCE(!rcu_is_watching());		\
		}							\
	}								\
	__DECLARE_TRACE_RCU(name, PARAMS(proto), PARAMS(args),		\
@@ -5802,10 +5802,10 @@ static int io_arm_poll_handler(struct io_kiocb *req)
 	} else {
 		apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
+		if (unlikely(!apoll))
+			return IO_APOLL_ABORTED;
+		apoll->poll.retries = APOLL_MAX_RETRY;
 	}
-	if (unlikely(!apoll))
-		return IO_APOLL_ABORTED;
 	apoll->double_poll = NULL;
 	req->apoll = apoll;
 	req->flags |= REQ_F_POLLED;
@@ -1538,7 +1538,8 @@ static struct dyn_ftrace *lookup_rec(unsigned long start, unsigned long end)
 	key.flags = end;	/* overload flags, as it is unsigned long */

 	for (pg = ftrace_pages_start; pg; pg = pg->next) {
-		if (end < pg->records[0].ip ||
+		if (pg->index == 0 ||
+		    end < pg->records[0].ip ||
 		    start >= (pg->records[pg->index - 1].ip + MCOUNT_INSN_SIZE))
 			continue;
 		rec = bsearch(&key, pg->records, pg->index,
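The added guard matters because probing records[0] and records[pg->index - 1] on an empty page reads garbage before bsearch() even runs. A simplified standalone sketch of the same guard (illustrative; the kernel's range test also accounts for MCOUNT_INSN_SIZE):

#include <stdio.h>
#include <stdlib.h>

struct rec { unsigned long ip; };

static int cmp(const void *a, const void *b)
{
	const struct rec *ka = a, *kb = b;
	return (ka->ip > kb->ip) - (ka->ip < kb->ip);
}

static struct rec *lookup(struct rec *recs, size_t index, unsigned long ip)
{
	struct rec key = { .ip = ip };

	/* The fix: bail out before touching recs[0] / recs[index - 1]
	 * when the page holds no records (index == 0). */
	if (index == 0 ||
	    ip < recs[0].ip || ip > recs[index - 1].ip)
		return NULL;
	return bsearch(&key, recs, index, sizeof(*recs), cmp);
}

int main(void)
{
	struct rec page[4] = { {10}, {20}, {30}, {40} };

	printf("%p\n", (void *)lookup(page, 0, 10));	/* NULL: empty page  */
	printf("%p\n", (void *)lookup(page, 4, 30));	/* finds the record  */
	return 0;
}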
@@ -4706,6 +4706,8 @@ loff_t tracing_lseek(struct file *file, loff_t offset, int whence)
 static const struct file_operations tracing_fops = {
	.open		= tracing_open,
	.read		= seq_read,
+	.read_iter	= seq_read_iter,
+	.splice_read	= generic_file_splice_read,
	.write		= tracing_write_stub,
	.llseek		= tracing_lseek,
	.release	= tracing_release,
@@ -1087,6 +1087,9 @@ static const char *hist_field_name(struct hist_field *field,
 {
 	const char *field_name = "";

+	if (WARN_ON_ONCE(!field))
+		return field_name;
+
 	if (level > 1)
 		return field_name;

@@ -1994,7 +1994,7 @@ static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	pgtable_t pgtable;
-	pmd_t _pmd;
+	pmd_t _pmd, old_pmd;
 	int i;

 	/*
@@ -2005,7 +2005,7 @@ static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
	 *
	 * See Documentation/vm/mmu_notifier.rst
	 */
-	pmdp_huge_clear_flush(vma, haddr, pmd);
+	old_pmd = pmdp_huge_clear_flush(vma, haddr, pmd);

 	pgtable = pgtable_trans_huge_withdraw(mm, pmd);
 	pmd_populate(mm, &_pmd, pgtable);
@@ -2014,6 +2014,8 @@ static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
 		pte_t *pte, entry;
 		entry = pfn_pte(my_zero_pfn(haddr), vma->vm_page_prot);
 		entry = pte_mkspecial(entry);
+		if (pmd_uffd_wp(old_pmd))
+			entry = pte_mkuffd_wp(entry);
 		pte = pte_offset_map(&_pmd, haddr);
 		VM_BUG_ON(!pte_none(*pte));
 		set_pte_at(mm, haddr, pte, entry);
@@ -573,6 +573,9 @@ static int rtentry_to_fib_config(struct net *net, int cmd, struct rtentry *rt,
 		cfg->fc_scope = RT_SCOPE_UNIVERSE;
 	}

+	if (!cfg->fc_table)
+		cfg->fc_table = RT_TABLE_MAIN;
+
 	if (cmd == SIOCDELRT)
 		return 0;

@@ -613,10 +613,10 @@ void ip_md_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
 	}

 	headroom += LL_RESERVED_SPACE(rt->dst.dev) + rt->dst.header_len;
-	if (headroom > dev->needed_headroom)
-		dev->needed_headroom = headroom;
+	if (headroom > READ_ONCE(dev->needed_headroom))
+		WRITE_ONCE(dev->needed_headroom, headroom);

-	if (skb_cow_head(skb, dev->needed_headroom)) {
+	if (skb_cow_head(skb, READ_ONCE(dev->needed_headroom))) {
 		ip_rt_put(rt);
 		goto tx_dropped;
 	}
@@ -797,10 +797,10 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,

 	max_headroom = LL_RESERVED_SPACE(rt->dst.dev) + sizeof(struct iphdr)
 			+ rt->dst.header_len + ip_encap_hlen(&tunnel->encap);
-	if (max_headroom > dev->needed_headroom)
-		dev->needed_headroom = max_headroom;
+	if (max_headroom > READ_ONCE(dev->needed_headroom))
+		WRITE_ONCE(dev->needed_headroom, max_headroom);

-	if (skb_cow_head(skb, dev->needed_headroom)) {
+	if (skb_cow_head(skb, READ_ONCE(dev->needed_headroom))) {
 		ip_rt_put(rt);
 		dev->stats.tx_dropped++;
 		kfree_skb(skb);
@@ -3609,7 +3609,7 @@ struct sk_buff *tcp_make_synack(const struct sock *sk, struct dst_entry *dst,
 	th->window = htons(min(req->rsk_rcv_wnd, 65535U));
 	tcp_options_write((__be32 *)(th + 1), NULL, &opts);
 	th->doff = (tcp_header_size >> 2);
-	__TCP_INC_STATS(sock_net(sk), TCP_MIB_OUTSEGS);
+	TCP_INC_STATS(sock_net(sk), TCP_MIB_OUTSEGS);

 #ifdef CONFIG_TCP_MD5SIG
 	/* Okay, we have all we need - do the md5 hash if needed */
@@ -1267,8 +1267,8 @@ int ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev, __u8 dsfield,
 	 */
 	max_headroom = LL_RESERVED_SPACE(dst->dev) + sizeof(struct ipv6hdr)
 			+ dst->header_len + t->hlen;
-	if (max_headroom > dev->needed_headroom)
-		dev->needed_headroom = max_headroom;
+	if (max_headroom > READ_ONCE(dev->needed_headroom))
+		WRITE_ONCE(dev->needed_headroom, max_headroom);

 	err = ip6_tnl_encap(skb, t, &proto, fl6);
 	if (err)
@@ -83,7 +83,7 @@ struct iucv_irq_data {
 	u16 ippathid;
 	u8  ipflags1;
 	u8  iptype;
-	u32 res2[8];
+	u32 res2[9];
 };

 struct iucv_irq_list {
@@ -275,7 +275,6 @@ void mptcp_subflow_reset(struct sock *ssk)
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
 	struct sock *sk = subflow->conn;

-	tcp_set_state(ssk, TCP_CLOSE);
 	tcp_send_active_reset(ssk, GFP_ATOMIC);
 	tcp_done(ssk);
 	if (!test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &mptcp_sk(sk)->flags) &&