Merge 5.10.211 into android12-5.10-lts
Changes in 5.10.211
net/sched: Retire CBQ qdisc
net/sched: Retire ATM qdisc
net/sched: Retire dsmark qdisc
smb: client: fix OOB in receive_encrypted_standard()
smb: client: fix potential OOBs in smb2_parse_contexts()
smb: client: fix parsing of SMB3.1.1 POSIX create context
sched/rt: sysctl_sched_rr_timeslice show default timeslice after reset
userfaultfd: fix mmap_changing checking in mfill_atomic_hugetlb
zonefs: Improve error handling
sched/rt: Fix sysctl_sched_rr_timeslice intial value
sched/rt: Disallow writing invalid values to sched_rt_period_us
scsi: target: core: Add TMF to tmr_list handling
dmaengine: shdma: increase size of 'dev_id'
dmaengine: fsl-qdma: increase size of 'irq_name'
wifi: cfg80211: fix missing interfaces when dumping
wifi: mac80211: fix race condition on enabling fast-xmit
fbdev: savage: Error out if pixclock equals zero
fbdev: sis: Error out if pixclock equals zero
spi: hisi-sfc-v3xx: Return IRQ_NONE if no interrupts were detected
ahci: asm1166: correct count of reported ports
ahci: add 43-bit DMA address quirk for ASMedia ASM1061 controllers
ext4: avoid allocating blocks from corrupted group in ext4_mb_try_best_found()
ext4: avoid allocating blocks from corrupted group in ext4_mb_find_by_goal()
dmaengine: ti: edma: Add some null pointer checks to the edma_probe
regulator: pwm-regulator: Add validity checks in continuous .get_voltage
nvmet-tcp: fix nvme tcp ida memory leak
ASoC: sunxi: sun4i-spdif: Add support for Allwinner H616
spi: sh-msiof: avoid integer overflow in constants
netfilter: conntrack: check SCTP_CID_SHUTDOWN_ACK for vtag setting in sctp_new
nvme-fc: do not wait in vain when unloading module
nvmet-fcloop: swap the list_add_tail arguments
nvmet-fc: release reference on target port
nvmet-fc: abort command when there is no binding
ext4: correct the hole length returned by ext4_map_blocks()
Input: i8042 - add Fujitsu Lifebook U728 to i8042 quirk table
efi: runtime: Fix potential overflow of soft-reserved region size
efi: Don't add memblocks for soft-reserved memory
hwmon: (coretemp) Enlarge per package core count limit
scsi: lpfc: Use unsigned type for num_sge
firewire: core: send bus reset promptly on gap count error
virtio-blk: Ensure no requests in virtqueues before deleting vqs.
pmdomain: renesas: r8a77980-sysc: CR7 must be always on
ARM: dts: BCM53573: Drop nonexistent "default-off" LED trigger
irqchip/mips-gic: Don't touch vl_map if a local interrupt is not routable
ARM: dts: imx: Set default tuning step for imx6sx usdhc
ASoC: fsl_micfil: register platform component before registering cpu dai
media: av7110: prevent underflow in write_ts_to_decoder()
hvc/xen: prevent concurrent accesses to the shared ring
hsr: Avoid double remove of a node.
x86/uaccess: Implement macros for CMPXCHG on user addresses
seccomp: Invalidate seccomp mode to catch death failures
block: ataflop: fix breakage introduced at blk-mq refactoring
powerpc/watchpoint: Workaround P10 DD1 issue with VSX-32 byte instructions
powerpc/watchpoints: Annotate atomic context in more places
cifs: add a warning when the in-flight count goes negative
mtd: spinand: macronix: Add support for MX35LFxGE4AD
ASoC: Intel: boards: harden codec property handling
ASoC: Intel: boards: get codec device with ACPI instead of bus search
ASoC: Intel: bytcr_rt5651: Drop reference count of ACPI device after use
task_stack, x86/cea: Force-inline stack helpers
btrfs: tree-checker: check for overlapping extent items
btrfs: introduce btrfs_lookup_match_dir
btrfs: unify lookup return value when dir entry is missing
btrfs: do not pin logs too early during renames
lan743x: fix for potential NULL pointer dereference with bare card
platform/x86: intel-vbtn: Support for tablet mode on HP Pavilion 13 x360 PC
iwlwifi: mvm: do more useful queue sync accounting
iwlwifi: mvm: write queue_sync_state only for sync
jbd2: remove redundant buffer io error checks
jbd2: recheck chechpointing non-dirty buffer
jbd2: Fix wrongly judgement for buffer head removing while doing checkpoint
x86: drop bogus "cc" clobber from __try_cmpxchg_user_asm()
erofs: fix lz4 inplace decompression
IB/hfi1: Fix sdma.h tx->num_descs off-by-one error
s390/cio: fix invalid -EBUSY on ccw_device_start
dm-crypt: don't modify the data when using authenticated encryption
KVM: arm64: vgic-its: Test for valid IRQ in MOVALL handler
KVM: arm64: vgic-its: Test for valid IRQ in its_sync_lpi_pending_table()
gtp: fix use-after-free and null-ptr-deref in gtp_genl_dump_pdp()
PCI/MSI: Prevent MSI hardware interrupt number truncation
l2tp: pass correct message length to ip6_append_data
ARM: ep93xx: Add terminator to gpiod_lookup_table
Revert "x86/ftrace: Use alternative RET encoding"
x86/text-patching: Make text_gen_insn() play nice with ANNOTATE_NOENDBR
x86/ibt,paravirt: Use text_gen_insn() for paravirt_patch()
x86/ftrace: Use alternative RET encoding
x86/returnthunk: Allow different return thunks
Revert "x86/alternative: Make custom return thunk unconditional"
x86/alternative: Make custom return thunk unconditional
usb: cdns3: fixed memory use after free at cdns3_gadget_ep_disable()
usb: cdns3: fix memory double free when handle zero packet
usb: gadget: ncm: Avoid dropping datagrams of properly parsed NTBs
usb: roles: fix NULL pointer issue when put module's reference
usb: roles: don't get/set_role() when usb_role_switch is unregistered
mptcp: fix lockless access in subflow ULP diag
IB/hfi1: Fix a memleak in init_credit_return
RDMA/bnxt_re: Return error for SRQ resize
RDMA/srpt: Support specifying the srpt_service_guid parameter
RDMA/qedr: Fix qedr_create_user_qp error flow
arm64: dts: rockchip: set num-cs property for spi on px30
RDMA/srpt: fix function pointer cast warnings
bpf, scripts: Correct GPL license name
scsi: jazz_esp: Only build if SCSI core is builtin
nouveau: fix function cast warnings
ipv4: properly combine dev_base_seq and ipv4.dev_addr_genid
ipv6: properly combine dev_base_seq and ipv6.dev_addr_genid
afs: Increase buffer size in afs_update_volume_status()
ipv6: sr: fix possible use-after-free and null-ptr-deref
packet: move from strlcpy with unused retval to strscpy
net: dev: Convert sa_data to flexible array in struct sockaddr
s390: use the correct count for __iowrite64_copy()
tls: rx: jump to a more appropriate label
tls: rx: drop pointless else after goto
tls: stop recv() if initial process_rx_list gave us non-DATA
netfilter: nf_tables: set dormant flag on hook register failure
drm/syncobj: make lockdep complain on WAIT_FOR_SUBMIT v3
drm/syncobj: call drm_syncobj_fence_add_wait when WAIT_AVAILABLE flag is set
drm/amd/display: Fix memory leak in dm_sw_fini()
block: ataflop: more blk-mq refactoring fixes
fs/aio: Restrict kiocb_set_cancel_fn() to I/O submitted via libaio
arp: Prevent overflow in arp_req_get().
ext4: regenerate buddy after block freeing failed if under fc replay
Linux 5.10.211
Note, this merges away the following commit:
a0180e940c ("erofs: fix lz4 inplace decompression")
as it conflicted too badly with the existing erofs changes in this
branch that are not upstream. If it is needed, it can be brought back
in the future in a safe way.
Change-Id: I432a4a0964e0708d2cd337872ad75d57cbf92cce
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
commit e92b643b4b
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 10
-SUBLEVEL = 210
+SUBLEVEL = 211
 EXTRAVERSION =
 NAME = Dare mighty things

@@ -26,7 +26,6 @@ leds {
 wlan {
 label = "bcm53xx:blue:wlan";
 gpios = <&chipcommon 10 GPIO_ACTIVE_LOW>;
-linux,default-trigger = "default-off";
 };

 system {
@@ -26,7 +26,6 @@ leds {
 5ghz {
 label = "bcm53xx:blue:5ghz";
 gpios = <&chipcommon 11 GPIO_ACTIVE_HIGH>;
-linux,default-trigger = "default-off";
 };

 system {
@@ -42,7 +41,6 @@ pcie0_leds {
 2ghz {
 label = "bcm53xx:blue:2ghz";
 gpios = <&pcie0_chipcommon 3 GPIO_ACTIVE_HIGH>;
-linux,default-trigger = "default-off";
 };
 };

@@ -981,6 +981,8 @@ usdhc1: mmc@2190000 {
 <&clks IMX6SX_CLK_USDHC1>;
 clock-names = "ipg", "ahb", "per";
 bus-width = <4>;
+fsl,tuning-start-tap = <20>;
+fsl,tuning-step = <2>;
 status = "disabled";
 };

@@ -993,6 +995,8 @@ usdhc2: mmc@2194000 {
 <&clks IMX6SX_CLK_USDHC2>;
 clock-names = "ipg", "ahb", "per";
 bus-width = <4>;
+fsl,tuning-start-tap = <20>;
+fsl,tuning-step = <2>;
 status = "disabled";
 };

@@ -1005,6 +1009,8 @@ usdhc3: mmc@2198000 {
 <&clks IMX6SX_CLK_USDHC3>;
 clock-names = "ipg", "ahb", "per";
 bus-width = <4>;
+fsl,tuning-start-tap = <20>;
+fsl,tuning-step = <2>;
 status = "disabled";
 };

@@ -337,6 +337,7 @@ static struct gpiod_lookup_table ep93xx_i2c_gpiod_table = {
 GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN),
 GPIO_LOOKUP_IDX("G", 0, NULL, 1,
 GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN),
+{ }
 },
 };

@@ -577,6 +577,7 @@ spi0: spi@ff1d0000 {
 clock-names = "spiclk", "apb_pclk";
 dmas = <&dmac 12>, <&dmac 13>;
 dma-names = "tx", "rx";
+num-cs = <2>;
 pinctrl-names = "default";
 pinctrl-0 = <&spi0_clk &spi0_csn &spi0_miso &spi0_mosi>;
 #address-cells = <1>;
@@ -592,6 +593,7 @@ spi1: spi@ff1d8000 {
 clock-names = "spiclk", "apb_pclk";
 dmas = <&dmac 14>, <&dmac 15>;
 dma-names = "tx", "rx";
+num-cs = <2>;
 pinctrl-names = "default";
 pinctrl-0 = <&spi1_clk &spi1_csn0 &spi1_csn1 &spi1_miso &spi1_mosi>;
 #address-cells = <1>;
@@ -462,6 +462,9 @@ static int its_sync_lpi_pending_table(struct kvm_vcpu *vcpu)
 }

 irq = vgic_get_irq(vcpu->kvm, NULL, intids[i]);
+if (!irq)
+continue;
+
 raw_spin_lock_irqsave(&irq->irq_lock, flags);
 irq->pending_latch = pendmask & (1U << bit_nr);
 vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
@@ -1374,6 +1377,8 @@ static int vgic_its_cmd_handle_movall(struct kvm *kvm, struct vgic_its *its,

 for (i = 0; i < irq_count; i++) {
 irq = vgic_get_irq(kvm, NULL, intids[i]);
+if (!irq)
+continue;

 update_affinity(irq, vcpu2);

@@ -504,6 +504,11 @@ static bool is_larx_stcx_instr(int type)
 return type == LARX || type == STCX;
 }

+static bool is_octword_vsx_instr(int type, int size)
+{
+return ((type == LOAD_VSX || type == STORE_VSX) && size == 32);
+}
+
 /*
 * We've failed in reliably handling the hw-breakpoint. Unregister
 * it and throw a warning message to let the user know about it.
@@ -554,6 +559,63 @@ static bool stepping_handler(struct pt_regs *regs, struct perf_event **bp,
 return true;
 }

+static void handle_p10dd1_spurious_exception(struct arch_hw_breakpoint **info,
+int *hit, unsigned long ea)
+{
+int i;
+unsigned long hw_end_addr;
+
+/*
+* Handle spurious exception only when any bp_per_reg is set.
+* Otherwise this might be created by xmon and not actually a
+* spurious exception.
+*/
+for (i = 0; i < nr_wp_slots(); i++) {
+if (!info[i])
+continue;
+
+hw_end_addr = ALIGN(info[i]->address + info[i]->len, HW_BREAKPOINT_SIZE);
+
+/*
+* Ending address of DAWR range is less than starting
+* address of op.
+*/
+if ((hw_end_addr - 1) >= ea)
+continue;
+
+/*
+* Those addresses need to be in the same or in two
+* consecutive 512B blocks;
+*/
+if (((hw_end_addr - 1) >> 10) != (ea >> 10))
+continue;
+
+/*
+* 'op address + 64B' generates an address that has a
+* carry into bit 52 (crosses 2K boundary).
+*/
+if ((ea & 0x800) == ((ea + 64) & 0x800))
+continue;
+
+break;
+}
+
+if (i == nr_wp_slots())
+return;
+
+for (i = 0; i < nr_wp_slots(); i++) {
+if (info[i]) {
+hit[i] = 1;
+info[i]->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
+}
+}
+}
+
 /*
 * Handle a DABR or DAWR exception.
+*
+* Called in atomic context.
 */
 int hw_breakpoint_handler(struct die_args *args)
 {
 bool err = false;
@@ -612,9 +674,15 @@ int hw_breakpoint_handler(struct die_args *args)
 goto reset;

 if (!nr_hit) {
+/* Workaround for Power10 DD1 */
+if (!IS_ENABLED(CONFIG_PPC_8xx) && mfspr(SPRN_PVR) == 0x800100 &&
+is_octword_vsx_instr(type, size)) {
+handle_p10dd1_spurious_exception(info, hit, ea);
+} else {
 rc = NOTIFY_DONE;
 goto out;
+}
 }

 /*
 * Return early after invoking user-callback function without restoring
@@ -674,6 +742,8 @@ NOKPROBE_SYMBOL(hw_breakpoint_handler);

 /*
 * Handle single-step exceptions following a DABR hit.
+*
+* Called in atomic context.
 */
 static int single_step_dabr_instruction(struct die_args *args)
 {
@@ -731,6 +801,8 @@ NOKPROBE_SYMBOL(single_step_dabr_instruction);

 /*
 * Handle debug exception notifications.
+*
+* Called in atomic context.
 */
 int hw_breakpoint_exceptions_notify(
 struct notifier_block *unused, unsigned long val, void *data)
@@ -225,7 +225,7 @@ resource_size_t pcibios_align_resource(void *data, const struct resource *res,
 /* combine single writes by using store-block insn */
 void __iowrite64_copy(void __iomem *to, const void *from, size_t count)
 {
-zpci_memcpy_toio(to, from, count);
+zpci_memcpy_toio(to, from, count * 8);
 }

 static void __iomem *__ioremap(phys_addr_t addr, size_t size, pgprot_t prot)
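The count argument of __iowrite64_copy() is in 64-bit units, while zpci_memcpy_toio() takes a byte length, hence the "count * 8" above. A minimal caller-side sketch of that contract (the function and buffer names are hypothetical, not from this patch):

static void push_payload(void __iomem *dst)
{
	u64 payload[8] = {};	/* 64 bytes = eight 64-bit words */

	/* count is in 64-bit words, not bytes */
	__iowrite64_copy(dst, payload, ARRAY_SIZE(payload));
}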
@@ -143,7 +143,7 @@ extern void cea_set_pte(void *cea_vaddr, phys_addr_t pa, pgprot_t flags);

 extern struct cpu_entry_area *get_cpu_entry_area(int cpu);

-static inline struct entry_stack *cpu_entry_stack(int cpu)
+static __always_inline struct entry_stack *cpu_entry_stack(int cpu)
 {
 return &get_cpu_entry_area(cpu)->entry_stack_page.stack;
 }

@@ -207,6 +207,8 @@ extern void srso_alias_untrain_ret(void);
 extern void entry_untrain_ret(void);
 extern void entry_ibpb(void);

+extern void (*x86_return_thunk)(void);
+
 #ifdef CONFIG_RETPOLINE

 typedef u8 retpoline_thunk_t[RETPOLINE_THUNK_SIZE];
@@ -95,25 +95,41 @@ union text_poke_insn {
 } __attribute__((packed));
 };

+static __always_inline
+void __text_gen_insn(void *buf, u8 opcode, const void *addr, const void *dest, int size)
+{
+union text_poke_insn *insn = buf;
+
+BUG_ON(size < text_opcode_size(opcode));
+
+/*
+* Hide the addresses to avoid the compiler folding in constants when
+* referencing code, these can mess up annotations like
+* ANNOTATE_NOENDBR.
+*/
+OPTIMIZER_HIDE_VAR(insn);
+OPTIMIZER_HIDE_VAR(addr);
+OPTIMIZER_HIDE_VAR(dest);
+
+insn->opcode = opcode;
+
+if (size > 1) {
+insn->disp = (long)dest - (long)(addr + size);
+if (size == 2) {
+/*
+* Ensure that for JMP8 the displacement
+* actually fits the signed byte.
+*/
+BUG_ON((insn->disp >> 31) != (insn->disp >> 7));
+}
+}
+}
+
 static __always_inline
 void *text_gen_insn(u8 opcode, const void *addr, const void *dest)
 {
 static union text_poke_insn insn; /* per instance */
-int size = text_opcode_size(opcode);
-
-insn.opcode = opcode;
-
-if (size > 1) {
-insn.disp = (long)dest - (long)(addr + size);
-if (size == 2) {
-/*
-* Ensure that for JMP8 the displacement
-* actually fits the signed byte.
-*/
-BUG_ON((insn.disp >> 31) != (insn.disp >> 7));
-}
-}

+__text_gen_insn(&insn, opcode, addr, dest, text_opcode_size(opcode));
 return &insn.text;
 }

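For reference, the rel32 displacement that __text_gen_insn() stores is taken relative to the end of the generated instruction, i.e. dest - (addr + size). A standalone sketch of that arithmetic (illustrative only, plain C, not part of the patch):

#include <stdint.h>

/* Displacement for a 5-byte JMP32 (opcode 0xE9 + rel32): the CPU adds
 * the rel32 to the address of the *next* instruction. */
static int32_t jmp32_disp(uint64_t addr, uint64_t dest)
{
	const int size = 5;
	return (int32_t)(dest - (addr + size));
}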
@@ -414,6 +414,103 @@ do { \

 #endif // CONFIG_CC_ASM_GOTO_OUTPUT

+#ifdef CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT
+#define __try_cmpxchg_user_asm(itype, ltype, _ptr, _pold, _new, label) ({ \
+bool success; \
+__typeof__(_ptr) _old = (__typeof__(_ptr))(_pold); \
+__typeof__(*(_ptr)) __old = *_old; \
+__typeof__(*(_ptr)) __new = (_new); \
+asm_volatile_goto("\n" \
+"1: " LOCK_PREFIX "cmpxchg"itype" %[new], %[ptr]\n"\
+_ASM_EXTABLE_UA(1b, %l[label]) \
+: CC_OUT(z) (success), \
+[ptr] "+m" (*_ptr), \
+[old] "+a" (__old) \
+: [new] ltype (__new) \
+: "memory" \
+: label); \
+if (unlikely(!success)) \
+*_old = __old; \
+likely(success); })
+
+#ifdef CONFIG_X86_32
+#define __try_cmpxchg64_user_asm(_ptr, _pold, _new, label) ({ \
+bool success; \
+__typeof__(_ptr) _old = (__typeof__(_ptr))(_pold); \
+__typeof__(*(_ptr)) __old = *_old; \
+__typeof__(*(_ptr)) __new = (_new); \
+asm_volatile_goto("\n" \
+"1: " LOCK_PREFIX "cmpxchg8b %[ptr]\n" \
+_ASM_EXTABLE_UA(1b, %l[label]) \
+: CC_OUT(z) (success), \
+"+A" (__old), \
+[ptr] "+m" (*_ptr) \
+: "b" ((u32)__new), \
+"c" ((u32)((u64)__new >> 32)) \
+: "memory" \
+: label); \
+if (unlikely(!success)) \
+*_old = __old; \
+likely(success); })
+#endif // CONFIG_X86_32
+#else // !CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT
+#define __try_cmpxchg_user_asm(itype, ltype, _ptr, _pold, _new, label) ({ \
+int __err = 0; \
+bool success; \
+__typeof__(_ptr) _old = (__typeof__(_ptr))(_pold); \
+__typeof__(*(_ptr)) __old = *_old; \
+__typeof__(*(_ptr)) __new = (_new); \
+asm volatile("\n" \
+"1: " LOCK_PREFIX "cmpxchg"itype" %[new], %[ptr]\n"\
+CC_SET(z) \
+"2:\n" \
+_ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, \
+%[errout]) \
+: CC_OUT(z) (success), \
+[errout] "+r" (__err), \
+[ptr] "+m" (*_ptr), \
+[old] "+a" (__old) \
+: [new] ltype (__new) \
+: "memory"); \
+if (unlikely(__err)) \
+goto label; \
+if (unlikely(!success)) \
+*_old = __old; \
+likely(success); })
+
+#ifdef CONFIG_X86_32
+/*
+* Unlike the normal CMPXCHG, hardcode ECX for both success/fail and error.
+* There are only six GPRs available and four (EAX, EBX, ECX, and EDX) are
+* hardcoded by CMPXCHG8B, leaving only ESI and EDI. If the compiler uses
+* both ESI and EDI for the memory operand, compilation will fail if the error
+* is an input+output as there will be no register available for input.
+*/
+#define __try_cmpxchg64_user_asm(_ptr, _pold, _new, label) ({ \
+int __result; \
+__typeof__(_ptr) _old = (__typeof__(_ptr))(_pold); \
+__typeof__(*(_ptr)) __old = *_old; \
+__typeof__(*(_ptr)) __new = (_new); \
+asm volatile("\n" \
+"1: " LOCK_PREFIX "cmpxchg8b %[ptr]\n" \
+"mov $0, %%ecx\n\t" \
+"setz %%cl\n" \
+"2:\n" \
+_ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, %%ecx) \
+: [result]"=c" (__result), \
+"+A" (__old), \
+[ptr] "+m" (*_ptr) \
+: "b" ((u32)__new), \
+"c" ((u32)((u64)__new >> 32)) \
+: "memory", "cc"); \
+if (unlikely(__result < 0)) \
+goto label; \
+if (unlikely(!__result)) \
+*_old = __old; \
+likely(__result); })
+#endif // CONFIG_X86_32
+#endif // CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT
+
 /* FIXME: this hack is definitely wrong -AK */
 struct __large_struct { unsigned long buf[100]; };
 #define __m(x) (*(struct __large_struct __user *)(x))
@@ -506,6 +603,51 @@ do { \
 } while (0)
 #endif // CONFIG_CC_HAS_ASM_GOTO_OUTPUT

+extern void __try_cmpxchg_user_wrong_size(void);
+
+#ifndef CONFIG_X86_32
+#define __try_cmpxchg64_user_asm(_ptr, _oldp, _nval, _label) \
+__try_cmpxchg_user_asm("q", "r", (_ptr), (_oldp), (_nval), _label)
+#endif
+
+/*
+* Force the pointer to u<size> to match the size expected by the asm helper.
+* clang/LLVM compiles all cases and only discards the unused paths after
+* processing errors, which breaks i386 if the pointer is an 8-byte value.
+*/
+#define unsafe_try_cmpxchg_user(_ptr, _oldp, _nval, _label) ({ \
+bool __ret; \
+__chk_user_ptr(_ptr); \
+switch (sizeof(*(_ptr))) { \
+case 1: __ret = __try_cmpxchg_user_asm("b", "q", \
+(__force u8 *)(_ptr), (_oldp), \
+(_nval), _label); \
+break; \
+case 2: __ret = __try_cmpxchg_user_asm("w", "r", \
+(__force u16 *)(_ptr), (_oldp), \
+(_nval), _label); \
+break; \
+case 4: __ret = __try_cmpxchg_user_asm("l", "r", \
+(__force u32 *)(_ptr), (_oldp), \
+(_nval), _label); \
+break; \
+case 8: __ret = __try_cmpxchg64_user_asm((__force u64 *)(_ptr), (_oldp),\
+(_nval), _label); \
+break; \
+default: __try_cmpxchg_user_wrong_size(); \
+} \
+__ret; })
+
+/* "Returns" 0 on success, 1 on failure, -EFAULT if the access faults. */
+#define __try_cmpxchg_user(_ptr, _oldp, _nval, _label) ({ \
+int __ret = -EFAULT; \
+__uaccess_begin_nospec(); \
+__ret = !unsafe_try_cmpxchg_user(_ptr, _oldp, _nval, _label); \
+_label: \
+__uaccess_end(); \
+__ret; \
+})
+
 /*
 * We want the unsafe accessors to always be inlined and use
 * the error labels - thus the macro games.
|
||||
}
|
||||
|
||||
#ifdef CONFIG_RETHUNK
|
||||
|
||||
/*
|
||||
* Rewrite the compiler generated return thunk tail-calls.
|
||||
*
|
||||
@ -691,14 +692,18 @@ static int patch_return(void *addr, struct insn *insn, u8 *bytes)
|
||||
{
|
||||
int i = 0;
|
||||
|
||||
if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
|
||||
if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) {
|
||||
if (x86_return_thunk == __x86_return_thunk)
|
||||
return -1;
|
||||
|
||||
i = JMP32_INSN_SIZE;
|
||||
__text_gen_insn(bytes, JMP32_INSN_OPCODE, addr, x86_return_thunk, i);
|
||||
} else {
|
||||
bytes[i++] = RET_INSN_OPCODE;
|
||||
}
|
||||
|
||||
for (; i < insn->length;)
|
||||
bytes[i++] = INT3_INSN_OPCODE;
|
||||
|
||||
return i;
|
||||
}
|
||||
|
||||
|
@ -367,10 +367,8 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
|
||||
goto fail;
|
||||
|
||||
ip = trampoline + size;
|
||||
|
||||
/* The trampoline ends with ret(q) */
|
||||
if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
|
||||
memcpy(ip, text_gen_insn(JMP32_INSN_OPCODE, ip, &__x86_return_thunk), JMP32_INSN_SIZE);
|
||||
__text_gen_insn(ip, JMP32_INSN_OPCODE, ip, x86_return_thunk, JMP32_INSN_SIZE);
|
||||
else
|
||||
memcpy(ip, retq, sizeof(retq));
|
||||
|
||||
|
@ -62,21 +62,9 @@ struct branch {
|
||||
static unsigned paravirt_patch_call(void *insn_buff, const void *target,
|
||||
unsigned long addr, unsigned len)
|
||||
{
|
||||
const int call_len = 5;
|
||||
struct branch *b = insn_buff;
|
||||
unsigned long delta = (unsigned long)target - (addr+call_len);
|
||||
|
||||
if (len < call_len) {
|
||||
pr_warn("paravirt: Failed to patch indirect CALL at %ps\n", (void *)addr);
|
||||
/* Kernel might not be viable if patching fails, bail out: */
|
||||
BUG_ON(1);
|
||||
}
|
||||
|
||||
b->opcode = 0xe8; /* call */
|
||||
b->delta = delta;
|
||||
BUILD_BUG_ON(sizeof(*b) != call_len);
|
||||
|
||||
return call_len;
|
||||
__text_gen_insn(insn_buff, CALL_INSN_OPCODE,
|
||||
(void *)addr, target, CALL_INSN_SIZE);
|
||||
return CALL_INSN_SIZE;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_PARAVIRT_XXL
|
||||
|
@ -41,7 +41,7 @@ static void __ref __static_call_transform(void *insn, enum insn_type type,
|
||||
|
||||
case RET:
|
||||
if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
|
||||
code = text_gen_insn(JMP32_INSN_OPCODE, insn, &__x86_return_thunk);
|
||||
code = text_gen_insn(JMP32_INSN_OPCODE, insn, x86_return_thunk);
|
||||
else
|
||||
code = &retinsn;
|
||||
break;
|
||||
|
@ -405,7 +405,7 @@ static void emit_return(u8 **pprog, u8 *ip)
|
||||
int cnt = 0;
|
||||
|
||||
if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) {
|
||||
emit_jump(&prog, &__x86_return_thunk, ip);
|
||||
emit_jump(&prog, x86_return_thunk, ip);
|
||||
} else {
|
||||
EMIT1(0xC3); /* ret */
|
||||
if (IS_ENABLED(CONFIG_SLS))
|
||||
|
@@ -49,6 +49,7 @@ enum {
 enum board_ids {
 /* board IDs by feature in alphabetical order */
 board_ahci,
+board_ahci_43bit_dma,
 board_ahci_ign_iferr,
 board_ahci_low_power,
 board_ahci_no_debounce_delay,
@@ -129,6 +130,13 @@ static const struct ata_port_info ahci_port_info[] = {
 .udma_mask = ATA_UDMA6,
 .port_ops = &ahci_ops,
 },
+[board_ahci_43bit_dma] = {
+AHCI_HFLAGS (AHCI_HFLAG_43BIT_ONLY),
+.flags = AHCI_FLAG_COMMON,
+.pio_mask = ATA_PIO4,
+.udma_mask = ATA_UDMA6,
+.port_ops = &ahci_ops,
+},
 [board_ahci_ign_iferr] = {
 AHCI_HFLAGS (AHCI_HFLAG_IGN_IRQ_IF_ERR),
 .flags = AHCI_FLAG_COMMON,
@@ -594,11 +602,11 @@ static const struct pci_device_id ahci_pci_tbl[] = {
 { PCI_VDEVICE(PROMISE, 0x3f20), board_ahci }, /* PDC42819 */
 { PCI_VDEVICE(PROMISE, 0x3781), board_ahci }, /* FastTrak TX8660 ahci-mode */

-/* Asmedia */
+/* ASMedia */
 { PCI_VDEVICE(ASMEDIA, 0x0601), board_ahci }, /* ASM1060 */
 { PCI_VDEVICE(ASMEDIA, 0x0602), board_ahci }, /* ASM1060 */
-{ PCI_VDEVICE(ASMEDIA, 0x0611), board_ahci }, /* ASM1061 */
-{ PCI_VDEVICE(ASMEDIA, 0x0612), board_ahci }, /* ASM1062 */
+{ PCI_VDEVICE(ASMEDIA, 0x0611), board_ahci_43bit_dma }, /* ASM1061 */
+{ PCI_VDEVICE(ASMEDIA, 0x0612), board_ahci_43bit_dma }, /* ASM1061/1062 */
 { PCI_VDEVICE(ASMEDIA, 0x0621), board_ahci }, /* ASM1061R */
 { PCI_VDEVICE(ASMEDIA, 0x0622), board_ahci }, /* ASM1062R */

@@ -654,6 +662,11 @@ MODULE_PARM_DESC(mobile_lpm_policy, "Default LPM policy for mobile chipsets");
 static void ahci_pci_save_initial_config(struct pci_dev *pdev,
 struct ahci_host_priv *hpriv)
 {
+if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA && pdev->device == 0x1166) {
+dev_info(&pdev->dev, "ASM1166 has only six ports\n");
+hpriv->saved_port_map = 0x3f;
+}
+
 if (pdev->vendor == PCI_VENDOR_ID_JMICRON && pdev->device == 0x2361) {
 dev_info(&pdev->dev, "JMB361 has only one port\n");
 hpriv->force_port_map = 1;
@@ -946,11 +959,20 @@ static int ahci_pci_device_resume(struct device *dev)

 #endif /* CONFIG_PM */

-static int ahci_configure_dma_masks(struct pci_dev *pdev, int using_dac)
+static int ahci_configure_dma_masks(struct pci_dev *pdev,
+struct ahci_host_priv *hpriv)
 {
-const int dma_bits = using_dac ? 64 : 32;
+int dma_bits;
 int rc;

+if (hpriv->cap & HOST_CAP_64) {
+dma_bits = 64;
+if (hpriv->flags & AHCI_HFLAG_43BIT_ONLY)
+dma_bits = 43;
+} else {
+dma_bits = 32;
+}
+
 /*
 * If the device fixup already set the dma_mask to some non-standard
 * value, don't extend it here. This happens on STA2X11, for example.
@@ -1928,7 +1950,7 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 ahci_gtf_filter_workaround(host);

 /* initialize adapter */
-rc = ahci_configure_dma_masks(pdev, hpriv->cap & HOST_CAP_64);
+rc = ahci_configure_dma_masks(pdev, hpriv);
 if (rc)
 return rc;

@@ -244,6 +244,7 @@ enum {
 AHCI_HFLAG_IGN_NOTSUPP_POWER_ON = BIT(27), /* ignore -EOPNOTSUPP
 from phy_power_on() */
 AHCI_HFLAG_NO_SXS = BIT(28), /* SXS not supported */
+AHCI_HFLAG_43BIT_ONLY = BIT(29), /* 43bit DMA addr limit */

 /* ap->flags bits */

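The AHCI_HFLAG_43BIT_ONLY path above feeds a 43-bit limit into the regular DMA mask machinery; in effect the driver ends up doing something like the following (a hedged sketch of the outcome, not the literal call site in the patch):

/* Cap DMA addressing at 2^43 for ASM1061-based controllers even though
 * HOST_CAP_64 is advertised; fall back to 32-bit on failure. */
rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(43));
if (rc)
	rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));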
@@ -456,10 +456,20 @@ static DEFINE_TIMER(fd_timer, check_change);

 static void fd_end_request_cur(blk_status_t err)
 {
+DPRINT(("fd_end_request_cur(), bytes %d of %d\n",
+blk_rq_cur_bytes(fd_request),
+blk_rq_bytes(fd_request)));
+
 if (!blk_update_request(fd_request, err,
 blk_rq_cur_bytes(fd_request))) {
+DPRINT(("calling __blk_mq_end_request()\n"));
 __blk_mq_end_request(fd_request, err);
 fd_request = NULL;
+} else {
+/* requeue rest of request */
+DPRINT(("calling blk_mq_requeue_request()\n"));
+blk_mq_requeue_request(fd_request, true);
+fd_request = NULL;
 }
 }

@@ -653,9 +663,6 @@ static inline void copy_buffer(void *from, void *to)
 *p2++ = *p1++;
 }

-
-
-
 /* General Interrupt Handling */

 static void (*FloppyIRQHandler)( int status ) = NULL;
@@ -700,12 +707,21 @@ static void fd_error( void )
 if (fd_request->error_count >= MAX_ERRORS) {
 printk(KERN_ERR "fd%d: too many errors.\n", SelectedDrive );
 fd_end_request_cur(BLK_STS_IOERR);
+finish_fdc();
+return;
 }
 else if (fd_request->error_count == RECALIBRATE_ERRORS) {
 printk(KERN_WARNING "fd%d: recalibrating\n", SelectedDrive );
 if (SelectedDrive != -1)
 SUD.track = -1;
 }
+/* need to re-run request to recalibrate */
+atari_disable_irq( IRQ_MFP_FDC );
+
+setup_req_params( SelectedDrive );
+do_fd_action( SelectedDrive );
+
+atari_enable_irq( IRQ_MFP_FDC );
 }

@@ -740,6 +756,7 @@ static int do_format(int drive, int type, struct atari_format_descr *desc)
 if (type) {
 if (--type >= NUM_DISK_MINORS ||
 minor2disktype[type].drive_types > DriveType) {
+finish_fdc();
 ret = -EINVAL;
 goto out;
 }
@@ -748,6 +765,7 @@ static int do_format(int drive, int type, struct atari_format_descr *desc)
 }

 if (!UDT || desc->track >= UDT->blocks/UDT->spt/2 || desc->head >= 2) {
+finish_fdc();
 ret = -EINVAL;
 goto out;
 }
@@ -788,6 +806,7 @@ static int do_format(int drive, int type, struct atari_format_descr *desc)

 wait_for_completion(&format_wait);

+finish_fdc();
 ret = FormatError ? -EIO : 0;
 out:
 blk_mq_unquiesce_queue(q);
@@ -822,6 +841,7 @@ static void do_fd_action( int drive )
 else {
 /* all sectors finished */
 fd_end_request_cur(BLK_STS_OK);
+finish_fdc();
 return;
 }
 }
@@ -1226,6 +1246,7 @@ static void fd_rwsec_done1(int status)
 else {
 /* all sectors finished */
 fd_end_request_cur(BLK_STS_OK);
+finish_fdc();
 }
 return;

@@ -1347,7 +1368,7 @@ static void fd_times_out(struct timer_list *unused)

 static void finish_fdc( void )
 {
-if (!NeedSeek) {
+if (!NeedSeek || !stdma_is_locked_by(floppy_irq)) {
 finish_fdc_done( 0 );
 }
 else {
@@ -1382,6 +1403,7 @@ static void finish_fdc_done( int dummy )
 start_motor_off_timer();

 local_irq_save(flags);
+if (stdma_is_locked_by(floppy_irq))
 stdma_release();
 local_irq_restore(flags);

@@ -1472,15 +1494,6 @@ static void setup_req_params( int drive )
 ReqTrack, ReqSector, (unsigned long)ReqData ));
 }

-static void ataflop_commit_rqs(struct blk_mq_hw_ctx *hctx)
-{
-spin_lock_irq(&ataflop_lock);
-atari_disable_irq(IRQ_MFP_FDC);
-finish_fdc();
-atari_enable_irq(IRQ_MFP_FDC);
-spin_unlock_irq(&ataflop_lock);
-}
-
 static blk_status_t ataflop_queue_rq(struct blk_mq_hw_ctx *hctx,
 const struct blk_mq_queue_data *bd)
 {
@@ -1488,6 +1501,10 @@ static blk_status_t ataflop_queue_rq(struct blk_mq_hw_ctx *hctx,
 int drive = floppy - unit;
 int type = floppy->type;

+DPRINT(("Queue request: drive %d type %d sectors %d of %d last %d\n",
+drive, type, blk_rq_cur_sectors(bd->rq),
+blk_rq_sectors(bd->rq), bd->last));
+
 spin_lock_irq(&ataflop_lock);
 if (fd_request) {
 spin_unlock_irq(&ataflop_lock);
@@ -1508,6 +1525,7 @@ static blk_status_t ataflop_queue_rq(struct blk_mq_hw_ctx *hctx,
 /* drive not connected */
 printk(KERN_ERR "Unknown Device: fd%d\n", drive );
 fd_end_request_cur(BLK_STS_IOERR);
+stdma_release();
 goto out;
 }

@@ -1524,11 +1542,13 @@ static blk_status_t ataflop_queue_rq(struct blk_mq_hw_ctx *hctx,
 if (--type >= NUM_DISK_MINORS) {
 printk(KERN_WARNING "fd%d: invalid disk format", drive );
 fd_end_request_cur(BLK_STS_IOERR);
+stdma_release();
 goto out;
 }
 if (minor2disktype[type].drive_types > DriveType) {
 printk(KERN_WARNING "fd%d: unsupported disk format", drive );
 fd_end_request_cur(BLK_STS_IOERR);
+stdma_release();
 goto out;
 }
 type = minor2disktype[type].index;
@@ -1547,8 +1567,6 @@ static blk_status_t ataflop_queue_rq(struct blk_mq_hw_ctx *hctx,
 setup_req_params( drive );
 do_fd_action( drive );

-if (bd->last)
-finish_fdc();
 atari_enable_irq( IRQ_MFP_FDC );

 out:
@@ -1631,6 +1649,7 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode,
 /* what if type > 0 here? Overwrite specified entry ? */
 if (type) {
 /* refuse to re-set a predefined type for now */
+finish_fdc();
 return -EINVAL;
 }

@@ -1698,8 +1717,10 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode,

 /* sanity check */
 if (setprm.track != dtp->blocks/dtp->spt/2 ||
-setprm.head != 2)
+setprm.head != 2) {
+finish_fdc();
 return -EINVAL;
+}

 UDT = dtp;
 set_capacity(floppy->disk, UDT->blocks);
@@ -1959,7 +1980,6 @@ static const struct block_device_operations floppy_fops = {

 static const struct blk_mq_ops ataflop_mq_ops = {
 .queue_rq = ataflop_queue_rq,
-.commit_rqs = ataflop_commit_rqs,
 };

 static struct kobject *floppy_find(dev_t dev, int *part, void *data)

@@ -956,14 +956,15 @@ static int virtblk_freeze(struct virtio_device *vdev)
 {
 struct virtio_blk *vblk = vdev->priv;

+/* Ensure no requests in virtqueues before deleting vqs. */
+blk_mq_freeze_queue(vblk->disk->queue);
+
 /* Ensure we don't receive any more interrupts */
 vdev->config->reset(vdev);

 /* Make sure no work handler is accessing the device. */
 flush_work(&vblk->config_work);

-blk_mq_quiesce_queue(vblk->disk->queue);
-
 vdev->config->del_vqs(vdev);
 kfree(vblk->vqs);

@@ -981,7 +982,7 @@ static int virtblk_restore(struct virtio_device *vdev)

 virtio_device_ready(vdev);

-blk_mq_unquiesce_queue(vblk->disk->queue);
+blk_mq_unfreeze_queue(vblk->disk->queue);
 return 0;
 }
 #endif
@@ -805,7 +805,7 @@ fsl_qdma_irq_init(struct platform_device *pdev,
 int i;
 int cpu;
 int ret;
-char irq_name[20];
+char irq_name[32];

 fsl_qdma->error_irq =
 platform_get_irq_byname(pdev, "qdma-error");

@@ -25,7 +25,7 @@ struct sh_dmae_chan {
 const struct sh_dmae_slave_config *config; /* Slave DMA configuration */
 int xmit_shift; /* log_2(bytes_per_xfer) */
 void __iomem *base;
-char dev_id[16]; /* unique name per DMAC of channel */
+char dev_id[32]; /* unique name per DMAC of channel */
 int pm_error;
 dma_addr_t slave_addr;
 };

@@ -2462,6 +2462,11 @@ static int edma_probe(struct platform_device *pdev)
 if (irq > 0) {
 irq_name = devm_kasprintf(dev, GFP_KERNEL, "%s_ccint",
 dev_name(dev));
+if (!irq_name) {
+ret = -ENOMEM;
+goto err_disable_pm;
+}
+
 ret = devm_request_irq(dev, irq, dma_irq_handler, 0, irq_name,
 ecc);
 if (ret) {
@@ -2478,6 +2483,11 @@ static int edma_probe(struct platform_device *pdev)
 if (irq > 0) {
 irq_name = devm_kasprintf(dev, GFP_KERNEL, "%s_ccerrint",
 dev_name(dev));
+if (!irq_name) {
+ret = -ENOMEM;
+goto err_disable_pm;
+}
+
 ret = devm_request_irq(dev, irq, dma_ccerr_handler, 0, irq_name,
 ecc);
 if (ret) {

@@ -429,7 +429,23 @@ static void bm_work(struct work_struct *work)
 */
 card->bm_generation = generation;

-if (root_device == NULL) {
+if (card->gap_count == 0) {
+/*
+* If self IDs have inconsistent gap counts, do a
+* bus reset ASAP. The config rom read might never
+* complete, so don't wait for it. However, still
+* send a PHY configuration packet prior to the
+* bus reset. The PHY configuration packet might
+* fail, but 1394-2008 8.4.5.2 explicitly permits
+* it in this case, so it should be safe to try.
+*/
+new_root_id = local_id;
+/*
+* We must always send a bus reset if the gap count
+* is inconsistent, so bypass the 5-reset limit.
+*/
+card->bm_retries = 0;
+} else if (root_device == NULL) {
 /*
 * Either link_on is false, or we failed to read the
 * config rom. In either case, pick another root.
@@ -107,7 +107,7 @@ static int __init arm_enable_runtime_services(void)
 efi_memory_desc_t *md;

 for_each_efi_memory_desc(md) {
-int md_size = md->num_pages << EFI_PAGE_SHIFT;
+u64 md_size = md->num_pages << EFI_PAGE_SHIFT;
 struct resource *res;

 if (!(md->attribute & EFI_MEMORY_SP))

@@ -141,15 +141,6 @@ static __init int is_usable_memory(efi_memory_desc_t *md)
 case EFI_BOOT_SERVICES_DATA:
 case EFI_CONVENTIONAL_MEMORY:
 case EFI_PERSISTENT_MEMORY:
-/*
-* Special purpose memory is 'soft reserved', which means it
-* is set aside initially, but can be hotplugged back in or
-* be assigned to the dax driver after boot.
-*/
-if (efi_soft_reserve_enabled() &&
-(md->attribute & EFI_MEMORY_SP))
-return false;
-
 /*
 * According to the spec, these regions are no longer reserved
 * after calling ExitBootServices(). However, we can only use
@@ -194,6 +185,16 @@ static __init void reserve_regions(void)
 size = npages << PAGE_SHIFT;

 if (is_memory(md)) {
+/*
+* Special purpose memory is 'soft reserved', which
+* means it is set aside initially. Don't add a memblock
+* for it now so that it can be hotplugged back in or
+* be assigned to the dax driver after boot.
+*/
+if (efi_soft_reserve_enabled() &&
+(md->attribute & EFI_MEMORY_SP))
+continue;
+
 early_init_dt_add_memory_arch(paddr, size);

 if (!is_usable_memory(md))

@@ -85,7 +85,7 @@ static int __init riscv_enable_runtime_services(void)
 efi_memory_desc_t *md;

 for_each_efi_memory_desc(md) {
-int md_size = md->num_pages << EFI_PAGE_SHIFT;
+u64 md_size = md->num_pages << EFI_PAGE_SHIFT;
 struct resource *res;

 if (!(md->attribute & EFI_MEMORY_SP))

@@ -1456,6 +1456,7 @@ static int dm_sw_fini(void *handle)

 if (adev->dm.dmub_srv) {
 dmub_srv_destroy(adev->dm.dmub_srv);
+kfree(adev->dm.dmub_srv);
 adev->dm.dmub_srv = NULL;
 }

@@ -387,6 +387,15 @@ int drm_syncobj_find_fence(struct drm_file *file_private,
 if (!syncobj)
 return -ENOENT;

+/* Waiting for userspace with locks help is illegal cause that can
+* trivial deadlock with page faults for example. Make lockdep complain
+* about it early on.
+*/
+if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT) {
+might_sleep();
+lockdep_assert_none_held_once();
+}
+
 *fence = drm_syncobj_fence_get(syncobj);

 if (*fence) {
@@ -951,6 +960,10 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
 uint64_t *points;
 uint32_t signaled_count, i;

+if (flags & (DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
+DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE))
+lockdep_assert_none_held_once();
+
 points = kmalloc_array(count, sizeof(*points), GFP_KERNEL);
 if (points == NULL)
 return -ENOMEM;
@@ -1017,7 +1030,8 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
 * fallthough and try a 0 timeout wait!
 */

-if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT) {
+if (flags & (DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT |
+DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE)) {
 for (i = 0; i < count; ++i)
 drm_syncobj_fence_add_wait(syncobjs[i], &entries[i]);
 }

@@ -154,11 +154,17 @@ shadow_fw_init(struct nvkm_bios *bios, const char *name)
 return (void *)fw;
 }

+static void
+shadow_fw_release(void *fw)
+{
+release_firmware(fw);
+}
+
 static const struct nvbios_source
 shadow_fw = {
 .name = "firmware",
 .init = shadow_fw_init,
-.fini = (void(*)(void *))release_firmware,
+.fini = shadow_fw_release,
 .read = shadow_fw_read,
 .rw = false,
 };

@@ -40,7 +40,7 @@ MODULE_PARM_DESC(tjmax, "TjMax value in degrees Celsius");

 #define PKG_SYSFS_ATTR_NO 1 /* Sysfs attribute for package temp */
 #define BASE_SYSFS_ATTR_NO 2 /* Sysfs Base attr no for coretemp */
-#define NUM_REAL_CORES 128 /* Number of Real cores per cpu */
+#define NUM_REAL_CORES 512 /* Number of Real cores per cpu */
 #define CORETEMP_NAME_LENGTH 28 /* String Length of attrs */
 #define MAX_CORE_ATTRS 4 /* Maximum no of basic attrs */
 #define TOTAL_ATTRS (MAX_CORE_ATTRS + 1)

@@ -1711,7 +1711,7 @@ int bnxt_re_modify_srq(struct ib_srq *ib_srq, struct ib_srq_attr *srq_attr,
 switch (srq_attr_mask) {
 case IB_SRQ_MAX_WR:
 /* SRQ resize is not supported */
-break;
+return -EINVAL;
 case IB_SRQ_LIMIT:
 /* Change the SRQ threshold */
 if (srq_attr->srq_limit > srq->qplib_srq.max_wqe)
@@ -1726,13 +1726,12 @@ int bnxt_re_modify_srq(struct ib_srq *ib_srq, struct ib_srq_attr *srq_attr,
 /* On success, update the shadow */
 srq->srq_limit = srq_attr->srq_limit;
 /* No need to Build and send response back to udata */
-break;
+return 0;
 default:
 ibdev_err(&rdev->ibdev,
 "Unsupported srq_attr_mask 0x%x", srq_attr_mask);
 return -EINVAL;
 }
-return 0;
 }

 int bnxt_re_query_srq(struct ib_srq *ib_srq, struct ib_srq_attr *srq_attr)

@@ -2131,7 +2131,7 @@ int init_credit_return(struct hfi1_devdata *dd)
 "Unable to allocate credit return DMA range for NUMA %d\n",
 i);
 ret = -ENOMEM;
-goto done;
+goto free_cr_base;
 }
 }
 set_dev_node(&dd->pcidev->dev, dd->node);
@@ -2139,6 +2139,10 @@ int init_credit_return(struct hfi1_devdata *dd)
 ret = 0;
 done:
 return ret;
+
+free_cr_base:
+free_credit_return(dd);
+goto done;
 }

 void free_credit_return(struct hfi1_devdata *dd)

@@ -3200,7 +3200,7 @@ int _pad_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
 {
 int rval = 0;

-if ((unlikely(tx->num_desc + 1 == tx->desc_limit))) {
+if ((unlikely(tx->num_desc == tx->desc_limit))) {
 rval = _extend_sdma_tx_descs(dd, tx);
 if (rval) {
 __sdma_txclean(dd, tx);

@@ -1865,9 +1865,18 @@ static int qedr_create_user_qp(struct qedr_dev *dev,
 /* RQ - read access only (0) */
 rc = qedr_init_user_queue(udata, dev, &qp->urq, ureq.rq_addr,
 ureq.rq_len, true, 0, alloc_and_init);
-if (rc)
+if (rc) {
+ib_umem_release(qp->usq.umem);
+qp->usq.umem = NULL;
+if (rdma_protocol_roce(&dev->ibdev, 1)) {
+qedr_free_pbl(dev, &qp->usq.pbl_info,
+qp->usq.pbl_tbl);
+} else {
+kfree(qp->usq.pbl_tbl);
+}
 return rc;
+}
 }

 memset(&in_params, 0, sizeof(in_params));
 qedr_init_common_qp_in_params(dev, pd, qp, attrs, false, &in_params);

@@ -79,12 +79,16 @@ module_param(srpt_srq_size, int, 0444);
 MODULE_PARM_DESC(srpt_srq_size,
 "Shared receive queue (SRQ) size.");

+static int srpt_set_u64_x(const char *buffer, const struct kernel_param *kp)
+{
+return kstrtou64(buffer, 16, (u64 *)kp->arg);
+}
 static int srpt_get_u64_x(char *buffer, const struct kernel_param *kp)
 {
 return sprintf(buffer, "0x%016llx\n", *(u64 *)kp->arg);
 }
-module_param_call(srpt_service_guid, NULL, srpt_get_u64_x, &srpt_service_guid,
-0444);
+module_param_call(srpt_service_guid, srpt_set_u64_x, srpt_get_u64_x,
+&srpt_service_guid, 0444);
 MODULE_PARM_DESC(srpt_service_guid,
 "Using this value for ioc_guid, id_ext, and cm_listen_id instead of using the node_guid of the first HCA.");

@@ -210,10 +214,12 @@ static const char *get_ch_state_name(enum rdma_ch_state s)
 /**
 * srpt_qp_event - QP event callback function
 * @event: Description of the event that occurred.
-* @ch: SRPT RDMA channel.
+* @ptr: SRPT RDMA channel.
 */
-static void srpt_qp_event(struct ib_event *event, struct srpt_rdma_ch *ch)
+static void srpt_qp_event(struct ib_event *event, void *ptr)
 {
+struct srpt_rdma_ch *ch = ptr;
+
 pr_debug("QP event %d on ch=%p sess_name=%s-%d state=%s\n",
 event->event, ch, ch->sess_name, ch->qp->qp_num,
 get_ch_state_name(ch->state));
@@ -1803,8 +1809,7 @@ static int srpt_create_ch_ib(struct srpt_rdma_ch *ch)
 ch->cq_size = ch->rq_size + sq_size;

 qp_init->qp_context = (void *)ch;
-qp_init->event_handler
-= (void(*)(struct ib_event *, void*))srpt_qp_event;
+qp_init->event_handler = srpt_qp_event;
 qp_init->send_cq = ch->cq;
 qp_init->recv_cq = ch->cq;
 qp_init->sq_sig_type = IB_SIGNAL_REQ_WR;

@@ -625,6 +625,14 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
 },
 .driver_data = (void *)(SERIO_QUIRK_NOAUX)
 },
+{
+/* Fujitsu Lifebook U728 */
+.matches = {
+DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK U728"),
+},
+.driver_data = (void *)(SERIO_QUIRK_NOAUX)
+},
 {
 /* Gigabyte M912 */
 .matches = {

@@ -398,6 +398,8 @@ static void gic_all_vpes_irq_cpu_online(void)
 unsigned int intr = local_intrs[i];
 struct gic_all_vpes_chip_data *cd;

+if (!gic_local_irq_is_routable(intr))
+continue;
 cd = &gic_all_vpes_chip_data[intr];
 write_gic_vl_map(mips_gic_vx_map_reg(intr), cd->map);
 if (cd->mask)

@@ -2064,6 +2064,12 @@ static void kcryptd_crypt_write_convert(struct dm_crypt_io *io)
 io->ctx.bio_out = clone;
 io->ctx.iter_out = clone->bi_iter;

+if (crypt_integrity_aead(cc)) {
+bio_copy_data(clone, io->base_bio);
+io->ctx.bio_in = clone;
+io->ctx.iter_in = clone->bi_iter;
+}
+
 sector += bio_sectors(clone);

 crypt_inc_pending(io);

@@ -822,10 +822,10 @@ static int write_ts_to_decoder(struct av7110 *av7110, int type, const u8 *buf, s
 av7110_ipack_flush(ipack);

 if (buf[3] & ADAPT_FIELD) {
+if (buf[4] > len - 1 - 4)
+return 0;
 len -= buf[4] + 1;
 buf += buf[4] + 1;
-if (!len)
-return 0;
 }

 av7110_ipack_instant_repack(buf + 4, len - 4, ipack);

@@ -119,6 +119,26 @@ static const struct spinand_info macronix_spinand_table[] = {
 &update_cache_variants),
 SPINAND_HAS_QE_BIT,
 SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)),
+SPINAND_INFO("MX35LF2GE4AD",
+SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x26),
+NAND_MEMORG(1, 2048, 64, 64, 2048, 40, 1, 1, 1),
+NAND_ECCREQ(8, 512),
+SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+&write_cache_variants,
+&update_cache_variants),
+0,
+SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
+mx35lf1ge4ab_ecc_get_status)),
+SPINAND_INFO("MX35LF4GE4AD",
+SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x37),
+NAND_MEMORG(1, 4096, 128, 64, 2048, 40, 1, 1, 1),
+NAND_ECCREQ(8, 512),
+SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+&write_cache_variants,
+&update_cache_variants),
+0,
+SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
+mx35lf1ge4ab_ecc_get_status)),
 SPINAND_INFO("MX31LF1GE4BC",
 SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x1e),
 NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1),

@@ -780,6 +780,8 @@ static void lan743x_ethtool_get_wol(struct net_device *netdev,

 wol->supported = 0;
 wol->wolopts = 0;

+if (netdev->phydev)
 phy_ethtool_get_wol(netdev->phydev, wol);

 wol->supported |= WAKE_BCAST | WAKE_UCAST | WAKE_MCAST |
@@ -809,9 +811,8 @@ static int lan743x_ethtool_set_wol(struct net_device *netdev,

 device_set_wakeup_enable(&adapter->pdev->dev, (bool)wol->wolopts);

-phy_ethtool_set_wol(netdev->phydev, wol);
-
-return 0;
+return netdev->phydev ? phy_ethtool_set_wol(netdev->phydev, wol)
+: -ENETDOWN;
 }
 #endif /* CONFIG_PM */

@@ -1410,20 +1410,20 @@ static int __init gtp_init(void)
 if (err < 0)
 goto error_out;

-err = genl_register_family(&gtp_genl_family);
+err = register_pernet_subsys(&gtp_net_ops);
 if (err < 0)
 goto unreg_rtnl_link;

-err = register_pernet_subsys(&gtp_net_ops);
+err = genl_register_family(&gtp_genl_family);
 if (err < 0)
-goto unreg_genl_family;
+goto unreg_pernet_subsys;

 pr_info("GTP module loaded (pdp ctx size %zd bytes)\n",
 sizeof(struct pdp_ctx));
 return 0;

-unreg_genl_family:
-genl_unregister_family(&gtp_genl_family);
+unreg_pernet_subsys:
+unregister_pernet_subsys(&gtp_net_ops);
 unreg_rtnl_link:
 rtnl_link_unregister(&gtp_link_ops);
 error_out:

@@ -5155,8 +5155,7 @@ void iwl_mvm_sync_rx_queues_internal(struct iwl_mvm *mvm,

 if (notif->sync) {
 notif->cookie = mvm->queue_sync_cookie;
-atomic_set(&mvm->queue_sync_counter,
-mvm->trans->num_rx_queues);
+mvm->queue_sync_state = (1 << mvm->trans->num_rx_queues) - 1;
 }

 ret = iwl_mvm_notify_rx_queue(mvm, qmask, (u8 *)notif,
@@ -5169,16 +5168,19 @@ void iwl_mvm_sync_rx_queues_internal(struct iwl_mvm *mvm,
 if (notif->sync) {
 lockdep_assert_held(&mvm->mutex);
 ret = wait_event_timeout(mvm->rx_sync_waitq,
-atomic_read(&mvm->queue_sync_counter) == 0 ||
+READ_ONCE(mvm->queue_sync_state) == 0 ||
 iwl_mvm_is_radio_killed(mvm),
 HZ);
-WARN_ON_ONCE(!ret && !iwl_mvm_is_radio_killed(mvm));
+WARN_ONCE(!ret && !iwl_mvm_is_radio_killed(mvm),
+"queue sync: failed to sync, state is 0x%lx\n",
+mvm->queue_sync_state);
 }

 out:
-atomic_set(&mvm->queue_sync_counter, 0);
-if (notif->sync)
+if (notif->sync) {
+mvm->queue_sync_state = 0;
 mvm->queue_sync_cookie++;
+}
 }

 static void iwl_mvm_sync_rx_queues(struct ieee80211_hw *hw)

@@ -842,7 +842,7 @@ struct iwl_mvm {
 unsigned long status;

 u32 queue_sync_cookie;
-atomic_t queue_sync_counter;
+unsigned long queue_sync_state;
 /*
 * for beacon filtering -
 * currently only one interface can be supported

@@ -725,7 +725,7 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,

 init_waitqueue_head(&mvm->rx_sync_waitq);

-atomic_set(&mvm->queue_sync_counter, 0);
+mvm->queue_sync_state = 0;

 SET_IEEE80211_DEV(mvm->hw, mvm->trans->dev);

@@ -853,9 +853,13 @@ void iwl_mvm_rx_queue_notif(struct iwl_mvm *mvm, struct napi_struct *napi,
 WARN_ONCE(1, "Invalid identifier %d", internal_notif->type);
 }

-if (internal_notif->sync &&
-!atomic_dec_return(&mvm->queue_sync_counter))
+if (internal_notif->sync) {
+WARN_ONCE(!test_and_clear_bit(queue, &mvm->queue_sync_state),
+"queue sync: queue %d responded a second time!\n",
+queue);
+if (READ_ONCE(mvm->queue_sync_state) == 0)
 wake_up(&mvm->rx_sync_waitq);
+}
 }

 static void iwl_mvm_oldsn_workaround(struct iwl_mvm *mvm,

@@ -220,11 +220,6 @@ static LIST_HEAD(nvme_fc_lport_list);
 static DEFINE_IDA(nvme_fc_local_port_cnt);
 static DEFINE_IDA(nvme_fc_ctrl_cnt);
 
-static struct workqueue_struct *nvme_fc_wq;
-
-static bool nvme_fc_waiting_to_unload;
-static DECLARE_COMPLETION(nvme_fc_unload_proceed);
-
 /*
  * These items are short-term. They will eventually be moved into
  * a generic FC class. See comments in module init.
@@ -254,8 +249,6 @@ nvme_fc_free_lport(struct kref *ref)
 	/* remove from transport list */
 	spin_lock_irqsave(&nvme_fc_lock, flags);
 	list_del(&lport->port_list);
-	if (nvme_fc_waiting_to_unload && list_empty(&nvme_fc_lport_list))
-		complete(&nvme_fc_unload_proceed);
 	spin_unlock_irqrestore(&nvme_fc_lock, flags);
 
 	ida_simple_remove(&nvme_fc_local_port_cnt, lport->localport.port_num);
@@ -3823,10 +3816,6 @@ static int __init nvme_fc_init_module(void)
 {
 	int ret;
 
-	nvme_fc_wq = alloc_workqueue("nvme_fc_wq", WQ_MEM_RECLAIM, 0);
-	if (!nvme_fc_wq)
-		return -ENOMEM;
-
 	/*
 	 * NOTE:
 	 * It is expected that in the future the kernel will combine
@@ -3844,7 +3833,7 @@ static int __init nvme_fc_init_module(void)
 	ret = class_register(&fc_class);
 	if (ret) {
 		pr_err("couldn't register class fc\n");
-		goto out_destroy_wq;
+		return ret;
 	}
 
 	/*
@@ -3868,8 +3857,6 @@ static int __init nvme_fc_init_module(void)
 	device_destroy(&fc_class, MKDEV(0, 0));
 out_destroy_class:
 	class_unregister(&fc_class);
-out_destroy_wq:
-	destroy_workqueue(nvme_fc_wq);
 
 	return ret;
 }
@@ -3889,45 +3876,23 @@ nvme_fc_delete_controllers(struct nvme_fc_rport *rport)
 	spin_unlock(&rport->lock);
 }
 
-static void
-nvme_fc_cleanup_for_unload(void)
+static void __exit nvme_fc_exit_module(void)
 {
 	struct nvme_fc_lport *lport;
 	struct nvme_fc_rport *rport;
+	unsigned long flags;
 
-	list_for_each_entry(lport, &nvme_fc_lport_list, port_list) {
-		list_for_each_entry(rport, &lport->endp_list, endp_list) {
-			nvme_fc_delete_controllers(rport);
-		}
-	}
-}
-
-static void __exit nvme_fc_exit_module(void)
-{
-	unsigned long flags;
-	bool need_cleanup = false;
-
 	spin_lock_irqsave(&nvme_fc_lock, flags);
-	nvme_fc_waiting_to_unload = true;
-	if (!list_empty(&nvme_fc_lport_list)) {
-		need_cleanup = true;
-		nvme_fc_cleanup_for_unload();
-	}
+	list_for_each_entry(lport, &nvme_fc_lport_list, port_list)
+		list_for_each_entry(rport, &lport->endp_list, endp_list)
+			nvme_fc_delete_controllers(rport);
 	spin_unlock_irqrestore(&nvme_fc_lock, flags);
-	if (need_cleanup) {
-		pr_info("%s: waiting for ctlr deletes\n", __func__);
-		wait_for_completion(&nvme_fc_unload_proceed);
-		pr_info("%s: ctrl deletes complete\n", __func__);
-	}
+
 	flush_workqueue(nvme_delete_wq);
 
 	nvmf_unregister_transport(&nvme_fc_transport);
 
 	ida_destroy(&nvme_fc_local_port_cnt);
 	ida_destroy(&nvme_fc_ctrl_cnt);
 
 	device_destroy(&fc_class, MKDEV(0, 0));
 	class_unregister(&fc_class);
-	destroy_workqueue(nvme_fc_wq);
 }
 
 module_init(nvme_fc_init_module);
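The init-path hunks drop the out_destroy_wq label together with the workqueue allocation it used to unwind, which is why the error return changes shape. A hedged sketch of the general goto-unwind convention involved; the setup/teardown names are made up for illustration:

#include <stdio.h>

static int setup_a(void) { return 0; }
static void teardown_a(void) { puts("teardown a"); }
static int setup_b(void) { return -1; /* simulate failure */ }

static int init_module_sketch(void)
{
    int ret;

    ret = setup_a();
    if (ret)
        return ret;           /* nothing held yet: plain return */

    ret = setup_b();
    if (ret)
        goto out_teardown_a;  /* unwind only what was set up */

    return 0;

out_teardown_a:
    teardown_a();
    return ret;
}

int main(void)
{
    printf("init: %d\n", init_module_sketch());
    return 0;
}

When a resource disappears from the front of the init sequence, every label that existed only to release it has to go with it, as the hunks above do.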
@@ -357,7 +357,7 @@ __nvmet_fc_finish_ls_req(struct nvmet_fc_ls_req_op *lsop)
 
 	if (!lsop->req_queued) {
 		spin_unlock_irqrestore(&tgtport->lock, flags);
-		return;
+		goto out_puttgtport;
 	}
 
 	list_del(&lsop->lsreq_list);
@@ -370,6 +370,7 @@ __nvmet_fc_finish_ls_req(struct nvmet_fc_ls_req_op *lsop)
 			  (lsreq->rqstlen + lsreq->rsplen),
 			  DMA_BIDIRECTIONAL);
 
+out_puttgtport:
 	nvmet_fc_tgtport_put(tgtport);
 }
 
@@ -1101,6 +1102,9 @@ nvmet_fc_alloc_target_assoc(struct nvmet_fc_tgtport *tgtport, void *hosthandle)
 	int idx;
 	bool needrandom = true;
 
+	if (!tgtport->pe)
+		return NULL;
+
 	assoc = kzalloc(sizeof(*assoc), GFP_KERNEL);
 	if (!assoc)
 		return NULL;
@@ -2528,7 +2532,8 @@ nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
 
 	fod->req.cmd = &fod->cmdiubuf.sqe;
 	fod->req.cqe = &fod->rspiubuf.cqe;
-	if (tgtport->pe)
-		fod->req.port = tgtport->pe->port;
+	if (!tgtport->pe)
+		goto transport_error;
+	fod->req.port = tgtport->pe->port;
 
 	/* clear any response payload */
@@ -358,7 +358,7 @@ fcloop_h2t_ls_req(struct nvme_fc_local_port *localport,
 	if (!rport->targetport) {
 		tls_req->status = -ECONNREFUSED;
 		spin_lock(&rport->lock);
-		list_add_tail(&rport->ls_list, &tls_req->ls_list);
+		list_add_tail(&tls_req->ls_list, &rport->ls_list);
 		spin_unlock(&rport->lock);
 		schedule_work(&rport->ls_work);
 		return ret;
@@ -391,7 +391,7 @@ fcloop_h2t_xmt_ls_rsp(struct nvmet_fc_target_port *targetport,
 	if (remoteport) {
 		rport = remoteport->private;
 		spin_lock(&rport->lock);
-		list_add_tail(&rport->ls_list, &tls_req->ls_list);
+		list_add_tail(&tls_req->ls_list, &rport->ls_list);
 		spin_unlock(&rport->lock);
 		schedule_work(&rport->ls_work);
 	}
@@ -446,7 +446,7 @@ fcloop_t2h_ls_req(struct nvmet_fc_target_port *targetport, void *hosthandle,
 	if (!tport->remoteport) {
 		tls_req->status = -ECONNREFUSED;
 		spin_lock(&tport->lock);
-		list_add_tail(&tport->ls_list, &tls_req->ls_list);
+		list_add_tail(&tls_req->ls_list, &tport->ls_list);
 		spin_unlock(&tport->lock);
 		schedule_work(&tport->ls_work);
 		return ret;
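All three fcloop hunks fix the same inverted call: in list_add_tail(new, head) the first argument is the entry being queued and the second is the list head. A standalone sketch with a minimal list implementation mirroring (not reusing) the kernel's, showing the correct orientation:

#include <stdio.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }

static void list_add_tail(struct list_head *new, struct list_head *head)
{
    new->prev = head->prev;
    new->next = head;
    head->prev->next = new;
    head->prev = new;
}

struct request { int id; struct list_head node; };

int main(void)
{
    struct list_head pending = LIST_HEAD_INIT(pending);
    struct request r1 = { .id = 1 }, r2 = { .id = 2 };

    /* Correct: each request's node is linked onto the pending list. */
    list_add_tail(&r1.node, &pending);
    list_add_tail(&r2.node, &pending);

    /* Swapping the arguments would instead splice the list head into
     * the request's uninitialized node, corrupting both structures. */
    for (struct list_head *p = pending.next; p != &pending; p = p->next) {
        struct request *r = (struct request *)((char *)p -
                                offsetof(struct request, node));
        printf("request %d\n", r->id);
    }
    return 0;
}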
@@ -1852,6 +1852,7 @@ static void __exit nvmet_tcp_exit(void)
 	flush_scheduled_work();
 
 	destroy_workqueue(nvmet_tcp_wq);
+	ida_destroy(&nvmet_tcp_queue_ida);
 }
 
 module_init(nvmet_tcp_init);
@@ -1408,7 +1408,7 @@ static irq_hw_number_t pci_msi_domain_calc_hwirq(struct msi_desc *desc)
 
 	return (irq_hw_number_t)desc->msi_attrib.entry_nr |
 		pci_dev_id(dev) << 11 |
-		(pci_domain_nr(dev->bus) & 0xFFFFFFFF) << 27;
+		((irq_hw_number_t)(pci_domain_nr(dev->bus) & 0xFFFFFFFF)) << 27;
 }
 
 static inline bool pci_msi_desc_is_multi_msi(struct msi_desc *desc)
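The cast added above matters because the shift is otherwise evaluated in 32-bit arithmetic and truncated before the result widens into the 64-bit hwirq. A hedged standalone demonstration with made-up values:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t domain = 0x1234;
    uint64_t truncated = domain << 27;            /* 32-bit shift, high bits lost */
    uint64_t correct   = (uint64_t)domain << 27;  /* widen first, then shift */

    printf("truncated: %#llx\n", (unsigned long long)truncated);
    printf("correct:   %#llx\n", (unsigned long long)correct);
    return 0;
}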
@@ -230,6 +230,12 @@ static const struct dmi_system_id dmi_switches_allow_list[] = {
 			DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 7352"),
 		},
 	},
+	{
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
+			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion 13 x360 PC"),
+		},
+	},
 	{} /* Array terminator */
 };
@@ -158,6 +158,9 @@ static int pwm_regulator_get_voltage(struct regulator_dev *rdev)
 	pwm_get_state(drvdata->pwm, &pstate);
 
 	voltage = pwm_get_relative_duty_cycle(&pstate, duty_unit);
+	if (voltage < min(max_uV_duty, min_uV_duty) ||
+	    voltage > max(max_uV_duty, min_uV_duty))
+		return -ENOTRECOVERABLE;
 
 	/*
 	 * The dutycycle for min_uV might be greater than the one for max_uV.
@@ -202,7 +202,8 @@ int ccw_device_start_timeout_key(struct ccw_device *cdev, struct ccw1 *cpa,
 		return -EINVAL;
 	if (cdev->private->state == DEV_STATE_NOT_OPER)
 		return -ENODEV;
-	if (cdev->private->state == DEV_STATE_VERIFY) {
+	if (cdev->private->state == DEV_STATE_VERIFY ||
+	    cdev->private->flags.doverify) {
 		/* Remember to fake irb when finished. */
 		if (!cdev->private->flags.fake_irb) {
 			cdev->private->flags.fake_irb = FAKE_CMD_IRB;
@@ -214,8 +215,7 @@ int ccw_device_start_timeout_key(struct ccw_device *cdev, struct ccw1 *cpa,
 	}
 	if (cdev->private->state != DEV_STATE_ONLINE ||
 	    ((sch->schib.scsw.cmd.stctl & SCSW_STCTL_PRIM_STATUS) &&
-	     !(sch->schib.scsw.cmd.stctl & SCSW_STCTL_SEC_STATUS)) ||
-	    cdev->private->flags.doverify)
+	     !(sch->schib.scsw.cmd.stctl & SCSW_STCTL_SEC_STATUS)))
 		return -EBUSY;
 	ret = cio_set_options (sch, flags);
 	if (ret)
@@ -1289,7 +1289,7 @@ source "drivers/scsi/arm/Kconfig"
 
 config JAZZ_ESP
 	bool "MIPS JAZZ FAS216 SCSI support"
-	depends on MACH_JAZZ && SCSI
+	depends on MACH_JAZZ && SCSI=y
 	select SCSI_SPI_ATTRS
 	help
 	  This is the driver for the onboard SCSI host adapter of MIPS Magnum
@@ -1944,7 +1944,7 @@ lpfc_bg_setup_bpl_prot(struct lpfc_hba *phba, struct scsi_cmnd *sc,
 *
 * Returns the number of SGEs added to the SGL.
 **/
-static int
+static uint32_t
 lpfc_bg_setup_sgl(struct lpfc_hba *phba, struct scsi_cmnd *sc,
 		struct sli4_sge *sgl, int datasegcnt,
 		struct lpfc_io_buf *lpfc_cmd)
@@ -1952,8 +1952,8 @@ lpfc_bg_setup_sgl(struct lpfc_hba *phba, struct scsi_cmnd *sc,
 	struct scatterlist *sgde = NULL; /* s/g data entry */
 	struct sli4_sge_diseed *diseed = NULL;
 	dma_addr_t physaddr;
-	int i = 0, num_sge = 0, status;
-	uint32_t reftag;
+	int i = 0, status;
+	uint32_t reftag, num_sge = 0;
 	uint8_t txop, rxop;
 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
 	uint32_t rc;
@@ -2124,7 +2124,7 @@ lpfc_bg_setup_sgl(struct lpfc_hba *phba, struct scsi_cmnd *sc,
 *
 * Returns the number of SGEs added to the SGL.
 **/
-static int
+static uint32_t
 lpfc_bg_setup_sgl_prot(struct lpfc_hba *phba, struct scsi_cmnd *sc,
 		struct sli4_sge *sgl, int datacnt, int protcnt,
 		struct lpfc_io_buf *lpfc_cmd)
@@ -2148,8 +2148,8 @@ lpfc_bg_setup_sgl_prot(struct lpfc_hba *phba, struct scsi_cmnd *sc,
 	uint32_t rc;
 #endif
 	uint32_t checking = 1;
-	uint32_t dma_offset = 0;
-	int num_sge = 0, j = 2;
+	uint32_t dma_offset = 0, num_sge = 0;
+	int j = 2;
 	struct sli4_hybrid_sgl *sgl_xtra = NULL;
 
 	sgpe = scsi_prot_sglist(sc);
@@ -25,7 +25,8 @@ static const struct rcar_sysc_area r8a77980_areas[] __initconst = {
 	  PD_CPU_NOCR },
 	{ "ca53-cpu3", 0x200, 3, R8A77980_PD_CA53_CPU3, R8A77980_PD_CA53_SCU,
 	  PD_CPU_NOCR },
-	{ "cr7", 0x240, 0, R8A77980_PD_CR7, R8A77980_PD_ALWAYS_ON },
+	{ "cr7", 0x240, 0, R8A77980_PD_CR7, R8A77980_PD_ALWAYS_ON,
+	  PD_CPU_NOCR },
 	{ "a3ir", 0x180, 0, R8A77980_PD_A3IR, R8A77980_PD_ALWAYS_ON },
 	{ "a2ir0", 0x400, 0, R8A77980_PD_A2IR0, R8A77980_PD_A3IR },
 	{ "a2ir1", 0x400, 1, R8A77980_PD_A2IR1, R8A77980_PD_A3IR },
@@ -365,6 +365,11 @@ static const struct spi_controller_mem_ops hisi_sfc_v3xx_mem_ops = {
 static irqreturn_t hisi_sfc_v3xx_isr(int irq, void *data)
 {
 	struct hisi_sfc_v3xx_host *host = data;
+	u32 reg;
+
+	reg = readl(host->regbase + HISI_SFC_V3XX_INT_STAT);
+	if (!reg)
+		return IRQ_NONE;
 
 	hisi_sfc_v3xx_disable_int(host);
 
@@ -137,14 +137,14 @@ struct sh_msiof_spi_priv {
 
 /* SIFCTR */
 #define SIFCTR_TFWM_MASK	GENMASK(31, 29) /* Transmit FIFO Watermark */
-#define SIFCTR_TFWM_64		(0 << 29)	/* Transfer Request when 64 empty stages */
-#define SIFCTR_TFWM_32		(1 << 29)	/* Transfer Request when 32 empty stages */
-#define SIFCTR_TFWM_24		(2 << 29)	/* Transfer Request when 24 empty stages */
-#define SIFCTR_TFWM_16		(3 << 29)	/* Transfer Request when 16 empty stages */
-#define SIFCTR_TFWM_12		(4 << 29)	/* Transfer Request when 12 empty stages */
-#define SIFCTR_TFWM_8		(5 << 29)	/* Transfer Request when 8 empty stages */
-#define SIFCTR_TFWM_4		(6 << 29)	/* Transfer Request when 4 empty stages */
-#define SIFCTR_TFWM_1		(7 << 29)	/* Transfer Request when 1 empty stage */
+#define SIFCTR_TFWM_64		(0UL << 29)	/* Transfer Request when 64 empty stages */
+#define SIFCTR_TFWM_32		(1UL << 29)	/* Transfer Request when 32 empty stages */
+#define SIFCTR_TFWM_24		(2UL << 29)	/* Transfer Request when 24 empty stages */
+#define SIFCTR_TFWM_16		(3UL << 29)	/* Transfer Request when 16 empty stages */
+#define SIFCTR_TFWM_12		(4UL << 29)	/* Transfer Request when 12 empty stages */
+#define SIFCTR_TFWM_8		(5UL << 29)	/* Transfer Request when 8 empty stages */
+#define SIFCTR_TFWM_4		(6UL << 29)	/* Transfer Request when 4 empty stages */
+#define SIFCTR_TFWM_1		(7UL << 29)	/* Transfer Request when 1 empty stage */
 #define SIFCTR_TFUA_MASK	GENMASK(26, 20) /* Transmit FIFO Usable Area */
 #define SIFCTR_TFUA_SHIFT	20
 #define SIFCTR_TFUA(i)		((i) << SIFCTR_TFUA_SHIFT)
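Same class of bug as the MSI hunk earlier, seen from the signed side: without the UL suffix the constant is a plain int, `7 << 29` overflows its sign bit, and the negative result sign-extends when widened into a 64-bit register value. A small demonstration; the values are illustrative:

#include <stdio.h>

int main(void)
{
    unsigned long bad  = 7 << 29;    /* int shift into the sign bit, then
                                        sign-extends on 64-bit targets */
    unsigned long good = 7UL << 29;  /* well-defined unsigned long shift */

    printf("bad:  %#lx\n", bad);
    printf("good: %#lx\n", good);
    return 0;
}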
@@ -150,7 +150,6 @@ int transport_lookup_tmr_lun(struct se_cmd *se_cmd)
 	struct se_session *se_sess = se_cmd->se_sess;
 	struct se_node_acl *nacl = se_sess->se_node_acl;
 	struct se_tmr_req *se_tmr = se_cmd->se_tmr_req;
-	unsigned long flags;
 
 	rcu_read_lock();
 	deve = target_nacl_find_deve(nacl, se_cmd->orig_fe_lun);
@@ -181,10 +180,6 @@ int transport_lookup_tmr_lun(struct se_cmd *se_cmd)
 	se_cmd->se_dev = rcu_dereference_raw(se_lun->lun_se_dev);
 	se_tmr->tmr_dev = rcu_dereference_raw(se_lun->lun_se_dev);
 
-	spin_lock_irqsave(&se_tmr->tmr_dev->se_tmr_lock, flags);
-	list_add_tail(&se_tmr->tmr_list, &se_tmr->tmr_dev->dev_tmr_list);
-	spin_unlock_irqrestore(&se_tmr->tmr_dev->se_tmr_lock, flags);
-
 	return 0;
 }
 EXPORT_SYMBOL(transport_lookup_tmr_lun);
@@ -3436,6 +3436,10 @@ int transport_generic_handle_tmr(
 	unsigned long flags;
 	bool aborted = false;
 
+	spin_lock_irqsave(&cmd->se_dev->se_tmr_lock, flags);
+	list_add_tail(&cmd->se_tmr_req->tmr_list, &cmd->se_dev->dev_tmr_list);
+	spin_unlock_irqrestore(&cmd->se_dev->se_tmr_lock, flags);
+
 	spin_lock_irqsave(&cmd->t_state_lock, flags);
 	if (cmd->transport_state & CMD_T_ABORTED) {
 		aborted = true;
@@ -43,6 +43,7 @@ struct xencons_info {
 	int irq;
 	int vtermno;
 	grant_ref_t gntref;
+	spinlock_t ring_lock;
 };
 
 static LIST_HEAD(xenconsoles);
@@ -89,12 +90,15 @@ static int __write_console(struct xencons_info *xencons,
 	XENCONS_RING_IDX cons, prod;
 	struct xencons_interface *intf = xencons->intf;
 	int sent = 0;
+	unsigned long flags;
 
+	spin_lock_irqsave(&xencons->ring_lock, flags);
 	cons = intf->out_cons;
 	prod = intf->out_prod;
 	mb();			/* update queue values before going on */
 
 	if ((prod - cons) > sizeof(intf->out)) {
+		spin_unlock_irqrestore(&xencons->ring_lock, flags);
 		pr_err_once("xencons: Illegal ring page indices");
 		return -EINVAL;
 	}
@@ -104,6 +108,7 @@ static int __write_console(struct xencons_info *xencons,
 
 	wmb();			/* write ring before updating pointer */
 	intf->out_prod = prod;
+	spin_unlock_irqrestore(&xencons->ring_lock, flags);
 
 	if (sent)
 		notify_daemon(xencons);
@@ -146,16 +151,19 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
 	int recv = 0;
 	struct xencons_info *xencons = vtermno_to_xencons(vtermno);
 	unsigned int eoiflag = 0;
+	unsigned long flags;
 
 	if (xencons == NULL)
 		return -EINVAL;
 	intf = xencons->intf;
 
+	spin_lock_irqsave(&xencons->ring_lock, flags);
 	cons = intf->in_cons;
 	prod = intf->in_prod;
 	mb();			/* get pointers before reading ring */
 
 	if ((prod - cons) > sizeof(intf->in)) {
+		spin_unlock_irqrestore(&xencons->ring_lock, flags);
 		pr_err_once("xencons: Illegal ring page indices");
 		return -EINVAL;
 	}
@@ -179,10 +187,13 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
 		xencons->out_cons = intf->out_cons;
 		xencons->out_cons_same = 0;
 	}
+	if (!recv && xencons->out_cons_same++ > 1) {
+		eoiflag = XEN_EOI_FLAG_SPURIOUS;
+	}
+	spin_unlock_irqrestore(&xencons->ring_lock, flags);
 
 	if (recv) {
 		notify_daemon(xencons);
-	} else if (xencons->out_cons_same++ > 1) {
-		eoiflag = XEN_EOI_FLAG_SPURIOUS;
 	}
 
 	xen_irq_lateeoi(xencons->irq, eoiflag);
@@ -239,6 +250,7 @@ static int xen_hvm_console_init(void)
 		info = kzalloc(sizeof(struct xencons_info), GFP_KERNEL);
 		if (!info)
 			return -ENOMEM;
+		spin_lock_init(&info->ring_lock);
 	} else if (info->intf != NULL) {
 		/* already configured */
 		return 0;
@@ -275,6 +287,7 @@ static int xen_hvm_console_init(void)
 
 static int xencons_info_pv_init(struct xencons_info *info, int vtermno)
 {
+	spin_lock_init(&info->ring_lock);
 	info->evtchn = xen_start_info->console.domU.evtchn;
 	/* GFN == MFN for PV guest */
 	info->intf = gfn_to_virt(xen_start_info->console.domU.mfn);
@@ -325,6 +338,7 @@ static int xen_initial_domain_console_init(void)
 		info = kzalloc(sizeof(struct xencons_info), GFP_KERNEL);
 		if (!info)
 			return -ENOMEM;
+		spin_lock_init(&info->ring_lock);
 	}
 
 	info->irq = bind_virq_to_irq(VIRQ_CONSOLE, 0, false);
@@ -485,6 +499,7 @@ static int xencons_probe(struct xenbus_device *dev,
 	info = kzalloc(sizeof(struct xencons_info), GFP_KERNEL);
 	if (!info)
 		return -ENOMEM;
+	spin_lock_init(&info->ring_lock);
 	dev_set_drvdata(&dev->dev, info);
 	info->xbdev = dev;
 	info->vtermno = xenbus_devid_to_vtermno(devid);
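Besides serializing readers and writers, the hunks keep the existing sanity check on the free-running ring indices. A sketch of why `prod - cons` yields the fill level even across 32-bit wraparound, so anything larger than the ring size signals corruption; the ring size is illustrative:

#include <stdio.h>
#include <stdint.h>

#define RING_SIZE 1024u

static int ring_fill_ok(uint32_t cons, uint32_t prod)
{
    /* Unsigned subtraction is modular, so this works across wrap. */
    return (uint32_t)(prod - cons) <= RING_SIZE;
}

int main(void)
{
    printf("%d\n", ring_fill_ok(10, 20));              /* 1: 10 bytes queued */
    printf("%d\n", ring_fill_ok(0xfffffff0u, 0x10u));  /* 1: wrapped, 32 queued */
    printf("%d\n", ring_fill_ok(20, 5000));            /* 0: corrupt indices */
    return 0;
}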
@@ -837,7 +837,11 @@ void cdns3_gadget_giveback(struct cdns3_endpoint *priv_ep,
 		return;
 	}
 
-	if (request->complete) {
+	/*
+	 * zlp request is appended by driver, needn't call usb_gadget_giveback_request() to notify
+	 * gadget composite driver.
+	 */
+	if (request->complete && request->buf != priv_dev->zlp_buf) {
 		spin_unlock(&priv_dev->lock);
 		usb_gadget_giveback_request(&priv_ep->endpoint,
 					    request);
@@ -2538,11 +2542,11 @@ static int cdns3_gadget_ep_disable(struct usb_ep *ep)
 
 	while (!list_empty(&priv_ep->wa2_descmiss_req_list)) {
 		priv_req = cdns3_next_priv_request(&priv_ep->wa2_descmiss_req_list);
+		list_del_init(&priv_req->list);
 
 		kfree(priv_req->request.buf);
 		cdns3_gadget_ep_free_request(&priv_ep->endpoint,
 					     &priv_req->request);
-		list_del_init(&priv_req->list);
 		--priv_ep->wa2_counter;
 	}
 
@@ -1349,7 +1349,15 @@ static int ncm_unwrap_ntb(struct gether *port,
 		 "Parsed NTB with %d frames\n", dgram_counter);
 
 	to_process -= block_len;
-	if (to_process != 0) {
+
+	/*
+	 * Windows NCM driver avoids USB ZLPs by adding a 1-byte
+	 * zero pad as needed.
+	 */
+	if (to_process == 1 &&
+	    (*(unsigned char *)(ntb_ptr + block_len) == 0x00)) {
+		to_process--;
+	} else if (to_process > 0) {
 		ntb_ptr = (unsigned char *)(ntb_ptr + block_len);
 		goto parse_ntb;
 	}
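A sketch of the parsing pattern the hunk adopts: swallow exactly one trailing zero byte (the host-side ZLP-avoidance pad) instead of looping back into the parser with one byte left. The one-byte length framing here is a toy stand-in for the real NTB format, not the gadget code:

#include <stdio.h>

static int parse_blocks(const unsigned char *buf, int len)
{
    int off = 0;

    while (off < len) {
        int block_len = buf[off];            /* toy framing: 1-byte length */

        if (len - off == 1 && buf[off] == 0) /* single zero pad: done */
            return 0;
        if (block_len == 0 || off + block_len > len)
            return -1;                       /* malformed block */
        printf("block of %d bytes at %d\n", block_len, off);
        off += block_len;
    }
    return 0;
}

int main(void)
{
    const unsigned char ntb[] = { 3, 'a', 'b', 4, 'c', 'd', 'e', 0 };
    return parse_blocks(ntb, sizeof(ntb)) ? 1 : 0;
}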
@@ -19,7 +19,9 @@ static struct class *role_class;
 struct usb_role_switch {
 	struct device dev;
 	struct mutex lock;	/* device lock*/
+	struct module *module;	/* the module this device depends on */
 	enum usb_role role;
+	bool registered;
 
 	/* From descriptor */
 	struct device *usb2_port;
@@ -46,6 +48,9 @@ int usb_role_switch_set_role(struct usb_role_switch *sw, enum usb_role role)
 	if (IS_ERR_OR_NULL(sw))
 		return 0;
 
+	if (!sw->registered)
+		return -EOPNOTSUPP;
+
 	mutex_lock(&sw->lock);
 
 	ret = sw->set(sw, role);
@@ -71,7 +76,7 @@ enum usb_role usb_role_switch_get_role(struct usb_role_switch *sw)
 {
 	enum usb_role role;
 
-	if (IS_ERR_OR_NULL(sw))
+	if (IS_ERR_OR_NULL(sw) || !sw->registered)
 		return USB_ROLE_NONE;
 
 	mutex_lock(&sw->lock);
@@ -133,7 +138,7 @@ struct usb_role_switch *usb_role_switch_get(struct device *dev)
 					  usb_role_switch_match);
 
 	if (!IS_ERR_OR_NULL(sw))
-		WARN_ON(!try_module_get(sw->dev.parent->driver->owner));
+		WARN_ON(!try_module_get(sw->module));
 
 	return sw;
 }
@@ -155,7 +160,7 @@ struct usb_role_switch *fwnode_usb_role_switch_get(struct fwnode_handle *fwnode)
 		sw = fwnode_connection_find_match(fwnode, "usb-role-switch",
 						  NULL, usb_role_switch_match);
 	if (!IS_ERR_OR_NULL(sw))
-		WARN_ON(!try_module_get(sw->dev.parent->driver->owner));
+		WARN_ON(!try_module_get(sw->module));
 
 	return sw;
 }
@@ -170,7 +175,7 @@ EXPORT_SYMBOL_GPL(fwnode_usb_role_switch_get);
 void usb_role_switch_put(struct usb_role_switch *sw)
 {
 	if (!IS_ERR_OR_NULL(sw)) {
-		module_put(sw->dev.parent->driver->owner);
+		module_put(sw->module);
 		put_device(&sw->dev);
 	}
 }
@@ -187,15 +192,18 @@ struct usb_role_switch *
 usb_role_switch_find_by_fwnode(const struct fwnode_handle *fwnode)
 {
 	struct device *dev;
+	struct usb_role_switch *sw = NULL;
 
 	if (!fwnode)
 		return NULL;
 
 	dev = class_find_device_by_fwnode(role_class, fwnode);
-	if (dev)
-		WARN_ON(!try_module_get(dev->parent->driver->owner));
+	if (dev) {
+		sw = to_role_switch(dev);
+		WARN_ON(!try_module_get(sw->module));
+	}
 
-	return dev ? to_role_switch(dev) : NULL;
+	return sw;
 }
 EXPORT_SYMBOL_GPL(usb_role_switch_find_by_fwnode);
 
@@ -328,6 +336,7 @@ usb_role_switch_register(struct device *parent,
 	sw->set = desc->set;
 	sw->get = desc->get;
 
+	sw->module = parent->driver->owner;
 	sw->dev.parent = parent;
 	sw->dev.fwnode = desc->fwnode;
 	sw->dev.class = role_class;
@@ -342,6 +351,8 @@ usb_role_switch_register(struct device *parent,
 		return ERR_PTR(ret);
 	}
 
+	sw->registered = true;
+
 	/* TODO: Symlinks for the host port and the device controller. */
 
 	return sw;
@@ -356,8 +367,10 @@ EXPORT_SYMBOL_GPL(usb_role_switch_register);
 */
 void usb_role_switch_unregister(struct usb_role_switch *sw)
 {
-	if (!IS_ERR_OR_NULL(sw))
+	if (!IS_ERR_OR_NULL(sw)) {
+		sw->registered = false;
 		device_unregister(&sw->dev);
+	}
 }
 EXPORT_SYMBOL_GPL(usb_role_switch_unregister);
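The registered flag added above lets consumers that still hold a reference fail gracefully once the switch is unregistered. A stripped-down sketch of that pattern; the struct and error handling are illustrative:

#include <stdio.h>
#include <stdbool.h>
#include <errno.h>

struct role_switch {
    bool registered;
    int role;
};

static int switch_set_role(struct role_switch *sw, int role)
{
    if (!sw->registered)
        return -EOPNOTSUPP;  /* device already gone, refuse the op */
    sw->role = role;
    return 0;
}

int main(void)
{
    struct role_switch sw = { .registered = true };

    printf("set while live: %d\n", switch_set_role(&sw, 1));
    sw.registered = false;   /* what the unregister path does first */
    printf("set after unregister: %d\n", switch_set_role(&sw, 2));
    return 0;
}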
@@ -868,6 +868,9 @@ static int savagefb_check_var(struct fb_var_screeninfo *var,
 
 	DBG("savagefb_check_var");
 
+	if (!var->pixclock)
+		return -EINVAL;
+
 	var->transp.offset = 0;
 	var->transp.length = 0;
 	switch (var->bits_per_pixel) {
@@ -1474,6 +1474,8 @@ sisfb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
 
 	vtotal = var->upper_margin + var->lower_margin + var->vsync_len;
 
+	if (!var->pixclock)
+		return -EINVAL;
 	pixclock = var->pixclock;
 
 	if((var->vmode & FB_VMODE_MASK) == FB_VMODE_NONINTERLACED) {
@@ -302,7 +302,7 @@ static int afs_update_volume_status(struct afs_volume *volume, struct key *key)
 {
 	struct afs_server_list *new, *old, *discard;
 	struct afs_vldb_entry *vldb;
-	char idbuf[16];
+	char idbuf[24];
 	int ret, idsz;
 
 	_enter("");
@@ -310,7 +310,7 @@ static int afs_update_volume_status(struct afs_volume *volume, struct key *key)
 	/* We look up an ID by passing it as a decimal string in the
 	 * operation's name parameter.
 	 */
-	idsz = sprintf(idbuf, "%llu", volume->vid);
+	idsz = snprintf(idbuf, sizeof(idbuf), "%llu", volume->vid);
 
 	vldb = afs_vl_lookup_vldb(volume->cell, key, idbuf, idsz);
 	if (IS_ERR(vldb)) {
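The buffer grows from 16 to 24 bytes because a 64-bit ID can print as 20 decimal digits plus the terminating NUL, and snprintf makes the bound explicit. A quick standalone check of the sizing:

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    uint64_t vid = UINT64_MAX;   /* 18446744073709551615: 20 digits */
    char idbuf[24];
    int idsz = snprintf(idbuf, sizeof(idbuf), "%" PRIu64, vid);

    printf("\"%s\" (%d chars)\n", idbuf, idsz);
    return 0;
}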
fs/aio.c
@@ -569,6 +569,13 @@ void kiocb_set_cancel_fn(struct kiocb *iocb, kiocb_cancel_fn *cancel)
 	struct kioctx *ctx = req->ki_ctx;
 	unsigned long flags;
 
+	/*
+	 * kiocb didn't come from aio or is neither a read nor a write, hence
+	 * ignore it.
+	 */
+	if (!(iocb->ki_flags & IOCB_AIO_RW))
+		return;
+
 	if (WARN_ON_ONCE(!list_empty(&req->ki_list)))
 		return;
 
@@ -1454,7 +1461,7 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
 	req->ki_complete = aio_complete_rw;
 	req->private = NULL;
 	req->ki_pos = iocb->aio_offset;
-	req->ki_flags = iocb_flags(req->ki_filp);
+	req->ki_flags = iocb_flags(req->ki_filp) | IOCB_AIO_RW;
 	if (iocb->aio_flags & IOCB_FLAG_RESFD)
 		req->ki_flags |= IOCB_EVENTFD;
 	req->ki_hint = ki_hint_validate(file_write_hint(req->ki_filp));
@@ -2879,7 +2879,7 @@ struct btrfs_dir_item *
 btrfs_lookup_dir_index_item(struct btrfs_trans_handle *trans,
 			struct btrfs_root *root,
 			struct btrfs_path *path, u64 dir,
-			u64 objectid, const char *name, int name_len,
+			u64 index, const char *name, int name_len,
 			int mod);
 struct btrfs_dir_item *
 btrfs_search_dir_index_item(struct btrfs_root *root,
@@ -171,10 +171,40 @@ int btrfs_insert_dir_item(struct btrfs_trans_handle *trans, const char *name,
 	return 0;
 }
 
+static struct btrfs_dir_item *btrfs_lookup_match_dir(
+			struct btrfs_trans_handle *trans,
+			struct btrfs_root *root, struct btrfs_path *path,
+			struct btrfs_key *key, const char *name,
+			int name_len, int mod)
+{
+	const int ins_len = (mod < 0 ? -1 : 0);
+	const int cow = (mod != 0);
+	int ret;
+
+	ret = btrfs_search_slot(trans, root, key, path, ins_len, cow);
+	if (ret < 0)
+		return ERR_PTR(ret);
+	if (ret > 0)
+		return ERR_PTR(-ENOENT);
+
+	return btrfs_match_dir_item_name(root->fs_info, path, name, name_len);
+}
+
 /*
- * lookup a directory item based on name. 'dir' is the objectid
- * we're searching in, and 'mod' tells us if you plan on deleting the
- * item (use mod < 0) or changing the options (use mod > 0)
+ * Lookup for a directory item by name.
+ *
+ * @trans:	The transaction handle to use. Can be NULL if @mod is 0.
+ * @root:	The root of the target tree.
+ * @path:	Path to use for the search.
+ * @dir:	The inode number (objectid) of the directory.
+ * @name:	The name associated to the directory entry we are looking for.
+ * @name_len:	The length of the name.
+ * @mod:	Used to indicate if the tree search is meant for a read only
+ *		lookup, for a modification lookup or for a deletion lookup, so
+ *		its value should be 0, 1 or -1, respectively.
+ *
+ * Returns: NULL if the dir item does not exists, an error pointer if an error
+ * happened, or a pointer to a dir item if a dir item exists for the given name.
 */
 struct btrfs_dir_item *btrfs_lookup_dir_item(struct btrfs_trans_handle *trans,
 					     struct btrfs_root *root,
@@ -182,23 +212,18 @@ struct btrfs_dir_item *btrfs_lookup_dir_item(struct btrfs_trans_handle *trans,
 					     const char *name, int name_len,
 					     int mod)
 {
-	int ret;
 	struct btrfs_key key;
-	int ins_len = mod < 0 ? -1 : 0;
-	int cow = mod != 0;
 	struct btrfs_dir_item *di;
 
 	key.objectid = dir;
 	key.type = BTRFS_DIR_ITEM_KEY;
-
 	key.offset = btrfs_name_hash(name, name_len);
 
-	ret = btrfs_search_slot(trans, root, &key, path, ins_len, cow);
-	if (ret < 0)
-		return ERR_PTR(ret);
-	if (ret > 0)
+	di = btrfs_lookup_match_dir(trans, root, path, &key, name, name_len, mod);
+	if (IS_ERR(di) && PTR_ERR(di) == -ENOENT)
 		return NULL;
 
-	return btrfs_match_dir_item_name(root->fs_info, path, name, name_len);
+	return di;
 }
 
 int btrfs_check_dir_item_collision(struct btrfs_root *root, u64 dir,
@@ -212,7 +237,6 @@ int btrfs_check_dir_item_collision(struct btrfs_root *root, u64 dir,
 	int slot;
 	struct btrfs_path *path;
 
-
 	path = btrfs_alloc_path();
 	if (!path)
 		return -ENOMEM;
@@ -221,20 +245,20 @@ int btrfs_check_dir_item_collision(struct btrfs_root *root, u64 dir,
 	key.type = BTRFS_DIR_ITEM_KEY;
 	key.offset = btrfs_name_hash(name, name_len);
 
-	ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
-
-	/* return back any errors */
-	if (ret < 0)
-		goto out;
+	di = btrfs_lookup_match_dir(NULL, root, path, &key, name, name_len, 0);
+	if (IS_ERR(di)) {
+		ret = PTR_ERR(di);
+		/* Nothing found, we're safe */
+		if (ret == -ENOENT) {
+			ret = 0;
+			goto out;
+		}
 
-	/* nothing found, we're safe */
-	if (ret > 0) {
-		ret = 0;
-		goto out;
+		if (ret < 0)
+			goto out;
 	}
 
 	/* we found an item, look for our name in the item */
-	di = btrfs_match_dir_item_name(root->fs_info, path, name, name_len);
 	if (di) {
 		/* our exact name was found */
 		ret = -EEXIST;
@@ -261,35 +285,42 @@ int btrfs_check_dir_item_collision(struct btrfs_root *root, u64 dir,
 }
 
 /*
- * lookup a directory item based on index. 'dir' is the objectid
- * we're searching in, and 'mod' tells us if you plan on deleting the
- * item (use mod < 0) or changing the options (use mod > 0)
- *
- * The name is used to make sure the index really points to the name you were
- * looking for.
+ * Lookup for a directory index item by name and index number.
+ *
+ * @trans:	The transaction handle to use. Can be NULL if @mod is 0.
+ * @root:	The root of the target tree.
+ * @path:	Path to use for the search.
+ * @dir:	The inode number (objectid) of the directory.
+ * @index:	The index number.
+ * @name:	The name associated to the directory entry we are looking for.
+ * @name_len:	The length of the name.
+ * @mod:	Used to indicate if the tree search is meant for a read only
+ *		lookup, for a modification lookup or for a deletion lookup, so
+ *		its value should be 0, 1 or -1, respectively.
+ *
+ * Returns: NULL if the dir index item does not exists, an error pointer if an
+ * error happened, or a pointer to a dir item if the dir index item exists and
+ * matches the criteria (name and index number).
 */
 struct btrfs_dir_item *
 btrfs_lookup_dir_index_item(struct btrfs_trans_handle *trans,
 			    struct btrfs_root *root,
 			    struct btrfs_path *path, u64 dir,
-			    u64 objectid, const char *name, int name_len,
+			    u64 index, const char *name, int name_len,
 			    int mod)
 {
-	int ret;
 	struct btrfs_dir_item *di;
 	struct btrfs_key key;
-	int ins_len = mod < 0 ? -1 : 0;
-	int cow = mod != 0;
 
 	key.objectid = dir;
 	key.type = BTRFS_DIR_INDEX_KEY;
-	key.offset = objectid;
+	key.offset = index;
 
-	ret = btrfs_search_slot(trans, root, &key, path, ins_len, cow);
-	if (ret < 0)
-		return ERR_PTR(ret);
-	if (ret > 0)
-		return ERR_PTR(-ENOENT);
-	return btrfs_match_dir_item_name(root->fs_info, path, name, name_len);
+	di = btrfs_lookup_match_dir(trans, root, path, &key, name, name_len, mod);
+	if (di == ERR_PTR(-ENOENT))
+		return NULL;
+
+	return di;
 }
 
 struct btrfs_dir_item *
@@ -346,21 +377,18 @@ struct btrfs_dir_item *btrfs_lookup_xattr(struct btrfs_trans_handle *trans,
 					  const char *name, u16 name_len,
 					  int mod)
 {
-	int ret;
 	struct btrfs_key key;
-	int ins_len = mod < 0 ? -1 : 0;
-	int cow = mod != 0;
 	struct btrfs_dir_item *di;
 
 	key.objectid = dir;
 	key.type = BTRFS_XATTR_ITEM_KEY;
 	key.offset = btrfs_name_hash(name, name_len);
-	ret = btrfs_search_slot(trans, root, &key, path, ins_len, cow);
-	if (ret < 0)
-		return ERR_PTR(ret);
-	if (ret > 0)
+
+	di = btrfs_lookup_match_dir(trans, root, path, &key, name, name_len, mod);
+	if (IS_ERR(di) && PTR_ERR(di) == -ENOENT)
 		return NULL;
 
-	return btrfs_match_dir_item_name(root->fs_info, path, name, name_len);
+	return di;
 }
 
 /*
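btrfs_lookup_match_dir standardizes the three-way result every caller now handles: NULL means no entry, an ERR_PTR encodes a real failure, anything else is a hit. A self-contained sketch of that convention using a local reimplementation of the ERR_PTR helpers (not the kernel headers):

#include <stdio.h>
#include <errno.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static void *lookup_dir_item(int simulate)
{
    static int item = 42;

    if (simulate < 0)
        return ERR_PTR(simulate);  /* tree search failed */
    if (simulate == 0)
        return NULL;               /* entry does not exist */
    return &item;                  /* found */
}

int main(void)
{
    void *di = lookup_dir_item(-EIO);

    if (IS_ERR(di))
        printf("error %ld\n", PTR_ERR(di));
    else if (!di)
        printf("not found\n");
    else
        printf("found\n");
    return 0;
}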
@@ -8968,8 +8968,6 @@ static int btrfs_rename_exchange(struct inode *old_dir,
 		/* force full log commit if subvolume involved. */
 		btrfs_set_log_full_commit(trans);
 	} else {
-		btrfs_pin_log_trans(root);
-		root_log_pinned = true;
 		ret = btrfs_insert_inode_ref(trans, dest,
 					     new_dentry->d_name.name,
 					     new_dentry->d_name.len,
@@ -8986,8 +8984,6 @@ static int btrfs_rename_exchange(struct inode *old_dir,
 		/* force full log commit if subvolume involved. */
 		btrfs_set_log_full_commit(trans);
 	} else {
-		btrfs_pin_log_trans(dest);
-		dest_log_pinned = true;
 		ret = btrfs_insert_inode_ref(trans, root,
 					     old_dentry->d_name.name,
 					     old_dentry->d_name.len,
@@ -9018,6 +9014,29 @@ static int btrfs_rename_exchange(struct inode *old_dir,
 				BTRFS_I(new_inode), 1);
 	}
 
+	/*
+	 * Now pin the logs of the roots. We do it to ensure that no other task
+	 * can sync the logs while we are in progress with the rename, because
+	 * that could result in an inconsistency in case any of the inodes that
+	 * are part of this rename operation were logged before.
+	 *
+	 * We pin the logs even if at this precise moment none of the inodes was
+	 * logged before. This is because right after we checked for that, some
+	 * other task fsyncing some other inode not involved with this rename
+	 * operation could log that one of our inodes exists.
+	 *
+	 * We don't need to pin the logs before the above calls to
+	 * btrfs_insert_inode_ref(), since those don't ever need to change a log.
+	 */
+	if (old_ino != BTRFS_FIRST_FREE_OBJECTID) {
+		btrfs_pin_log_trans(root);
+		root_log_pinned = true;
+	}
+	if (new_ino != BTRFS_FIRST_FREE_OBJECTID) {
+		btrfs_pin_log_trans(dest);
+		dest_log_pinned = true;
+	}
+
 	/* src is a subvolume */
 	if (old_ino == BTRFS_FIRST_FREE_OBJECTID) {
 		ret = btrfs_unlink_subvol(trans, old_dir, old_dentry);
@@ -9267,8 +9286,6 @@ static int btrfs_rename(struct inode *old_dir, struct dentry *old_dentry,
 		/* force full log commit if subvolume involved. */
 		btrfs_set_log_full_commit(trans);
 	} else {
-		btrfs_pin_log_trans(root);
-		log_pinned = true;
 		ret = btrfs_insert_inode_ref(trans, dest,
 					     new_dentry->d_name.name,
 					     new_dentry->d_name.len,
@@ -9292,6 +9309,25 @@ static int btrfs_rename(struct inode *old_dir, struct dentry *old_dentry,
 	if (unlikely(old_ino == BTRFS_FIRST_FREE_OBJECTID)) {
 		ret = btrfs_unlink_subvol(trans, old_dir, old_dentry);
 	} else {
+		/*
+		 * Now pin the log. We do it to ensure that no other task can
+		 * sync the log while we are in progress with the rename, as
+		 * that could result in an inconsistency in case any of the
+		 * inodes that are part of this rename operation were logged
+		 * before.
+		 *
+		 * We pin the log even if at this precise moment none of the
+		 * inodes was logged before. This is because right after we
+		 * checked for that, some other task fsyncing some other inode
+		 * not involved with this rename operation could log that one of
+		 * our inodes exists.
+		 *
+		 * We don't need to pin the logs before the above call to
+		 * btrfs_insert_inode_ref(), since that does not need to change
+		 * a log.
+		 */
+		btrfs_pin_log_trans(root);
+		log_pinned = true;
 		ret = __btrfs_unlink_inode(trans, root, BTRFS_I(old_dir),
 					   BTRFS_I(d_inode(old_dentry)),
 					   old_dentry->d_name.name,
@@ -1189,7 +1189,8 @@ static void extent_err(const struct extent_buffer *eb, int slot,
 }
 
 static int check_extent_item(struct extent_buffer *leaf,
-			     struct btrfs_key *key, int slot)
+			     struct btrfs_key *key, int slot,
+			     struct btrfs_key *prev_key)
 {
 	struct btrfs_fs_info *fs_info = leaf->fs_info;
 	struct btrfs_extent_item *ei;
@@ -1400,6 +1401,26 @@ static int check_extent_item(struct extent_buffer *leaf,
 			      total_refs, inline_refs);
 		return -EUCLEAN;
 	}
+
+	if ((prev_key->type == BTRFS_EXTENT_ITEM_KEY) ||
+	    (prev_key->type == BTRFS_METADATA_ITEM_KEY)) {
+		u64 prev_end = prev_key->objectid;
+
+		if (prev_key->type == BTRFS_METADATA_ITEM_KEY)
+			prev_end += fs_info->nodesize;
+		else
+			prev_end += prev_key->offset;
+
+		if (unlikely(prev_end > key->objectid)) {
+			extent_err(leaf, slot,
+	"previous extent [%llu %u %llu] overlaps current extent [%llu %u %llu]",
+				   prev_key->objectid, prev_key->type,
+				   prev_key->offset, key->objectid, key->type,
+				   key->offset);
+			return -EUCLEAN;
+		}
+	}
+
 	return 0;
 }
 
@@ -1568,7 +1589,7 @@ static int check_leaf_item(struct extent_buffer *leaf,
 		break;
 	case BTRFS_EXTENT_ITEM_KEY:
 	case BTRFS_METADATA_ITEM_KEY:
-		ret = check_extent_item(leaf, key, slot);
+		ret = check_extent_item(leaf, key, slot, prev_key);
 		break;
 	case BTRFS_TREE_BLOCK_REF_KEY:
 	case BTRFS_SHARED_DATA_REF_KEY:
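The new tree-checker test compares the previous item's end against the current item's start, where a metadata item's length is the node size rather than its key offset. A hedged sketch of just that computation; the sizes are illustrative:

#include <stdio.h>
#include <stdint.h>

struct key { uint64_t objectid; int is_metadata; uint64_t offset; };

static int overlaps(const struct key *prev, const struct key *cur,
                    uint64_t nodesize)
{
    /* Data extents span [objectid, objectid + offset); metadata
     * items span one tree node starting at objectid. */
    uint64_t prev_end = prev->objectid +
                        (prev->is_metadata ? nodesize : prev->offset);
    return prev_end > cur->objectid;
}

int main(void)
{
    struct key a = { .objectid = 4096, .is_metadata = 0, .offset = 8192 };
    struct key b = { .objectid = 8192 };  /* starts inside a (ends at 12288) */

    printf("overlap: %d\n", overlaps(&a, &b, 16384));
    return 0;
}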
@@ -912,7 +912,6 @@ static noinline int inode_in_dir(struct btrfs_root *root,
 	di = btrfs_lookup_dir_index_item(NULL, root, path, dirid,
 					 index, name, name_len, 0);
 	if (IS_ERR(di)) {
-		if (PTR_ERR(di) != -ENOENT)
-			ret = PTR_ERR(di);
+		ret = PTR_ERR(di);
 		goto out;
 	} else if (di) {
@@ -1149,7 +1148,6 @@ static inline int __add_inode_ref(struct btrfs_trans_handle *trans,
 	di = btrfs_lookup_dir_index_item(trans, root, path, btrfs_ino(dir),
 					 ref_index, name, namelen, 0);
 	if (IS_ERR(di)) {
-		if (PTR_ERR(di) != -ENOENT)
-			return PTR_ERR(di);
+		return PTR_ERR(di);
 	} else if (di) {
 		ret = drop_one_dir_item(trans, root, path, dir, di);
@@ -1976,9 +1974,6 @@ static noinline int replay_one_name(struct btrfs_trans_handle *trans,
 		goto out;
 	}
 
-	if (dst_di == ERR_PTR(-ENOENT))
-		dst_di = NULL;
-
 	if (IS_ERR(dst_di)) {
 		ret = PTR_ERR(dst_di);
 		goto out;
@@ -2286,7 +2281,7 @@ static noinline int check_item_in_log(struct btrfs_trans_handle *trans,
 						     dir_key->offset,
 						     name, name_len, 0);
 		}
-		if (!log_di || log_di == ERR_PTR(-ENOENT)) {
+		if (!log_di) {
 			btrfs_dir_item_key_to_cpu(eb, di, &location);
 			btrfs_release_path(path);
 			btrfs_release_path(log_path);
@@ -3495,8 +3490,7 @@ int btrfs_del_dir_entries_in_log(struct btrfs_trans_handle *trans,
 	if (err == -ENOSPC) {
 		btrfs_set_log_full_commit(trans);
 		err = 0;
-	} else if (err < 0 && err != -ENOENT) {
-		/* ENOENT can be returned if the entry hasn't been fsynced yet */
+	} else if (err < 0) {
 		btrfs_abort_transaction(trans, err);
 	}
 
@@ -82,6 +82,7 @@ smb2_add_credits(struct TCP_Server_Info *server,
 			*val = 65000; /* Don't get near 64K credits, avoid srv bugs */
 			pr_warn_once("server overflowed SMB3 credits\n");
 		}
+		WARN_ON_ONCE(server->in_flight == 0);
 		server->in_flight--;
 		if (server->in_flight == 0 && (optype & CIFS_OP_MASK) != CIFS_NEG_OP)
 			rc = change_conf(server);
@@ -818,10 +819,12 @@ int open_shroot(unsigned int xid, struct cifs_tcon *tcon,
 	if (o_rsp->OplockLevel == SMB2_OPLOCK_LEVEL_LEASE) {
 		kref_get(&tcon->crfid.refcount);
 		tcon->crfid.has_lease = true;
-		smb2_parse_contexts(server, o_rsp,
-				    &oparms.fid->epoch,
-				    oparms.fid->lease_key, &oplock,
-				    NULL, NULL);
+		rc = smb2_parse_contexts(server, rsp_iov,
+					 &oparms.fid->epoch,
+					 oparms.fid->lease_key, &oplock,
+					 NULL, NULL);
+		if (rc)
+			goto oshr_exit;
 	} else
 		goto oshr_exit;
 
@@ -4892,6 +4895,7 @@ receive_encrypted_standard(struct TCP_Server_Info *server,
 	struct smb2_sync_hdr *shdr;
 	unsigned int pdu_length = server->pdu_size;
 	unsigned int buf_size;
+	unsigned int next_cmd;
 	struct mid_q_entry *mid_entry;
 	int next_is_large;
 	char *next_buffer = NULL;
@@ -4920,14 +4924,15 @@ receive_encrypted_standard(struct TCP_Server_Info *server,
 	next_is_large = server->large_buf;
 one_more:
 	shdr = (struct smb2_sync_hdr *)buf;
-	if (shdr->NextCommand) {
+	next_cmd = le32_to_cpu(shdr->NextCommand);
+	if (next_cmd) {
+		if (WARN_ON_ONCE(next_cmd > pdu_length))
+			return -1;
 		if (next_is_large)
 			next_buffer = (char *)cifs_buf_get();
 		else
 			next_buffer = (char *)cifs_small_buf_get();
-		memcpy(next_buffer,
-		       buf + le32_to_cpu(shdr->NextCommand),
-		       pdu_length - le32_to_cpu(shdr->NextCommand));
+		memcpy(next_buffer, buf + next_cmd, pdu_length - next_cmd);
 	}
 
 	mid_entry = smb2_find_mid(server, buf);
@@ -4951,8 +4956,8 @@ receive_encrypted_standard(struct TCP_Server_Info *server,
 	else
 		ret = cifs_handle_standard(server, mid_entry);
 
-	if (ret == 0 && shdr->NextCommand) {
-		pdu_length -= le32_to_cpu(shdr->NextCommand);
+	if (ret == 0 && next_cmd) {
+		pdu_length -= next_cmd;
 		server->large_buf = next_is_large;
 		if (next_is_large)
 			server->bigbuf = buf = next_buffer;
@@ -1991,17 +1991,18 @@ parse_posix_ctxt(struct create_context *cc, struct smb2_file_all_info *info,
 		 posix->nlink, posix->mode, posix->reparse_tag);
 }
 
-void
-smb2_parse_contexts(struct TCP_Server_Info *server,
-		    struct smb2_create_rsp *rsp,
-		    unsigned int *epoch, char *lease_key, __u8 *oplock,
-		    struct smb2_file_all_info *buf,
-		    struct create_posix_rsp *posix)
+int smb2_parse_contexts(struct TCP_Server_Info *server,
+			struct kvec *rsp_iov,
+			unsigned int *epoch,
+			char *lease_key, __u8 *oplock,
+			struct smb2_file_all_info *buf,
+			struct create_posix_rsp *posix)
 {
-	char *data_offset;
+	struct smb2_create_rsp *rsp = rsp_iov->iov_base;
 	struct create_context *cc;
-	unsigned int next;
-	unsigned int remaining;
+	size_t rem, off, len;
+	size_t doff, dlen;
+	size_t noff, nlen;
 	char *name;
 	static const char smb3_create_tag_posix[] = {
 		0x93, 0xAD, 0x25, 0x50, 0x9C,
@@ -2010,45 +2011,63 @@ smb2_parse_contexts(struct TCP_Server_Info *server,
 	};
 
 	*oplock = 0;
-	data_offset = (char *)rsp + le32_to_cpu(rsp->CreateContextsOffset);
-	remaining = le32_to_cpu(rsp->CreateContextsLength);
-	cc = (struct create_context *)data_offset;
+
+	off = le32_to_cpu(rsp->CreateContextsOffset);
+	rem = le32_to_cpu(rsp->CreateContextsLength);
+	if (check_add_overflow(off, rem, &len) || len > rsp_iov->iov_len)
+		return -EINVAL;
+	cc = (struct create_context *)((u8 *)rsp + off);
 
 	/* Initialize inode number to 0 in case no valid data in qfid context */
 	if (buf)
 		buf->IndexNumber = 0;
 
-	while (remaining >= sizeof(struct create_context)) {
-		name = le16_to_cpu(cc->NameOffset) + (char *)cc;
-		if (le16_to_cpu(cc->NameLength) == 4 &&
-		    strncmp(name, SMB2_CREATE_REQUEST_LEASE, 4) == 0)
-			*oplock = server->ops->parse_lease_buf(cc, epoch,
-							       lease_key);
-		else if (buf && (le16_to_cpu(cc->NameLength) == 4) &&
-			 strncmp(name, SMB2_CREATE_QUERY_ON_DISK_ID, 4) == 0)
-			parse_query_id_ctxt(cc, buf);
-		else if ((le16_to_cpu(cc->NameLength) == 16)) {
-			if (posix &&
-			    memcmp(name, smb3_create_tag_posix, 16) == 0)
-				parse_posix_ctxt(cc, buf, posix);
-		}
-		/* else {
-			cifs_dbg(FYI, "Context not matched with len %d\n",
-				le16_to_cpu(cc->NameLength));
-			cifs_dump_mem("Cctxt name: ", name, 4);
-		} */
-
-		next = le32_to_cpu(cc->Next);
-		if (!next)
-			break;
-		remaining -= next;
-		cc = (struct create_context *)((char *)cc + next);
+	while (rem >= sizeof(*cc)) {
+		doff = le16_to_cpu(cc->DataOffset);
+		dlen = le32_to_cpu(cc->DataLength);
+		if (check_add_overflow(doff, dlen, &len) || len > rem)
+			return -EINVAL;
+
+		noff = le16_to_cpu(cc->NameOffset);
+		nlen = le16_to_cpu(cc->NameLength);
+		if (noff + nlen > doff)
+			return -EINVAL;
+
+		name = (char *)cc + noff;
+		switch (nlen) {
+		case 4:
+			if (!strncmp(name, SMB2_CREATE_REQUEST_LEASE, 4)) {
+				*oplock = server->ops->parse_lease_buf(cc, epoch,
+								       lease_key);
+			} else if (buf &&
+				   !strncmp(name, SMB2_CREATE_QUERY_ON_DISK_ID, 4)) {
+				parse_query_id_ctxt(cc, buf);
+			}
+			break;
+		case 16:
+			if (posix && !memcmp(name, smb3_create_tag_posix, 16))
+				parse_posix_ctxt(cc, buf, posix);
+			break;
+		default:
+			cifs_dbg(FYI, "%s: unhandled context (nlen=%zu dlen=%zu)\n",
+				 __func__, nlen, dlen);
+			if (IS_ENABLED(CONFIG_CIFS_DEBUG2))
+				cifs_dump_mem("context data: ", cc, dlen);
+			break;
+		}
+
+		off = le32_to_cpu(cc->Next);
+		if (!off)
+			break;
+		if (check_sub_overflow(rem, off, &rem))
+			return -EINVAL;
+		cc = (struct create_context *)((u8 *)cc + off);
 	}
 
 	if (rsp->OplockLevel != SMB2_OPLOCK_LEVEL_LEASE)
 		*oplock = rsp->OplockLevel;
 
-	return;
+	return 0;
 }
 
 static int
@@ -2915,7 +2934,7 @@ SMB2_open(const unsigned int xid, struct cifs_open_parms *oparms, __le16 *path,
 	}
 
 
-	smb2_parse_contexts(server, rsp, &oparms->fid->epoch,
-			    oparms->fid->lease_key, oplock, buf, posix);
+	rc = smb2_parse_contexts(server, &rsp_iov, &oparms->fid->epoch,
+				 oparms->fid->lease_key, oplock, buf, posix);
 creat_exit:
 	SMB2_open_free(&rqst);
@@ -270,11 +270,13 @@ extern int smb3_validate_negotiate(const unsigned int, struct cifs_tcon *);
 
 extern enum securityEnum smb2_select_sectype(struct TCP_Server_Info *,
 					enum securityEnum);
-extern void smb2_parse_contexts(struct TCP_Server_Info *server,
-				struct smb2_create_rsp *rsp,
-				unsigned int *epoch, char *lease_key,
-				__u8 *oplock, struct smb2_file_all_info *buf,
-				struct create_posix_rsp *posix);
+int smb2_parse_contexts(struct TCP_Server_Info *server,
+			struct kvec *rsp_iov,
+			unsigned int *epoch,
+			char *lease_key, __u8 *oplock,
+			struct smb2_file_all_info *buf,
+			struct create_posix_rsp *posix);
+
 extern int smb3_encryption_required(const struct cifs_tcon *tcon);
 extern int smb2_validate_iov(unsigned int offset, unsigned int buffer_length,
 			     struct kvec *iov, unsigned int min_buf_size);
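The reworked parser validates every offset/length pair with overflow-checked addition before walking the response buffer. A standalone sketch of that check using the GCC/Clang builtin which, to my understanding, the kernel's check_add_overflow() wraps:

#include <stdio.h>
#include <stddef.h>

static int ctx_range_ok(size_t off, size_t len, size_t buf_len)
{
    size_t end;

    if (__builtin_add_overflow(off, len, &end))
        return 0;               /* off + len wrapped around */
    return end <= buf_len;      /* stays inside the received buffer */
}

int main(void)
{
    printf("%d\n", ctx_range_ok(16, 32, 256));          /* 1: in bounds */
    printf("%d\n", ctx_range_ok(200, 100, 256));        /* 0: past the end */
    printf("%d\n", ctx_range_ok((size_t)-8, 64, 256));  /* 0: wraps */
    return 0;
}

A naive `off + len <= buf_len` test passes the third case on wrap, which is exactly the out-of-bounds read class these SMB fixes close.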
@@ -2222,7 +2222,7 @@ static int ext4_fill_es_cache_info(struct inode *inode,
 
 
 /*
- * ext4_ext_determine_hole - determine hole around given block
+ * ext4_ext_find_hole - find hole around given block according to the given path
 * @inode:	inode we lookup in
 * @path:	path in extent tree to @lblk
 * @lblk:	pointer to logical block around which we want to determine hole
@@ -2234,7 +2234,7 @@ static int ext4_fill_es_cache_info(struct inode *inode,
 * The function returns the length of a hole starting at @lblk. We update @lblk
 * to the beginning of the hole if we managed to find it.
 */
-static ext4_lblk_t ext4_ext_determine_hole(struct inode *inode,
+static ext4_lblk_t ext4_ext_find_hole(struct inode *inode,
 					struct ext4_ext_path *path,
 					ext4_lblk_t *lblk)
 {
@@ -2263,30 +2263,6 @@ static ext4_lblk_t ext4_ext_find_hole(struct inode *inode,
 	return len;
 }
 
-/*
- * ext4_ext_put_gap_in_cache:
- * calculate boundaries of the gap that the requested block fits into
- * and cache this gap
- */
-static void
-ext4_ext_put_gap_in_cache(struct inode *inode, ext4_lblk_t hole_start,
-			  ext4_lblk_t hole_len)
-{
-	struct extent_status es;
-
-	ext4_es_find_extent_range(inode, &ext4_es_is_delayed, hole_start,
-				  hole_start + hole_len - 1, &es);
-	if (es.es_len) {
-		/* There's delayed extent containing lblock? */
-		if (es.es_lblk <= hole_start)
-			return;
-		hole_len = min(es.es_lblk - hole_start, hole_len);
-	}
-	ext_debug(inode, " -> %u:%u\n", hole_start, hole_len);
-	ext4_es_insert_extent(inode, hole_start, hole_len, ~0,
-			      EXTENT_STATUS_HOLE);
-}
-
 /*
 * ext4_ext_rm_idx:
 * removes index from the index block.
@@ -4058,6 +4034,69 @@ static int get_implied_cluster_alloc(struct super_block *sb,
 	return 0;
 }
 
+/*
+ * Determine hole length around the given logical block, first try to
+ * locate and expand the hole from the given @path, and then adjust it
+ * if it's partially or completely converted to delayed extents, insert
+ * it into the extent cache tree if it's indeed a hole, finally return
+ * the length of the determined extent.
+ */
+static ext4_lblk_t ext4_ext_determine_insert_hole(struct inode *inode,
+						  struct ext4_ext_path *path,
+						  ext4_lblk_t lblk)
+{
+	ext4_lblk_t hole_start, len;
+	struct extent_status es;
+
+	hole_start = lblk;
+	len = ext4_ext_find_hole(inode, path, &hole_start);
+again:
+	ext4_es_find_extent_range(inode, &ext4_es_is_delayed, hole_start,
+				  hole_start + len - 1, &es);
+	if (!es.es_len)
+		goto insert_hole;
+
+	/*
+	 * There's a delalloc extent in the hole, handle it if the delalloc
+	 * extent is in front of, behind and straddle the queried range.
+	 */
+	if (lblk >= es.es_lblk + es.es_len) {
+		/*
+		 * The delalloc extent is in front of the queried range,
+		 * find again from the queried start block.
+		 */
+		len -= lblk - hole_start;
+		hole_start = lblk;
+		goto again;
+	} else if (in_range(lblk, es.es_lblk, es.es_len)) {
+		/*
+		 * The delalloc extent containing lblk, it must have been
+		 * added after ext4_map_blocks() checked the extent status
+		 * tree, adjust the length to the delalloc extent's after
+		 * lblk.
+		 */
+		len = es.es_lblk + es.es_len - lblk;
+		return len;
+	} else {
+		/*
+		 * The delalloc extent is partially or completely behind
+		 * the queried range, update hole length until the
+		 * beginning of the delalloc extent.
+		 */
+		len = min(es.es_lblk - hole_start, len);
+	}
+
+insert_hole:
+	/* Put just found gap into cache to speed up subsequent requests */
+	ext_debug(inode, " -> %u:%u\n", hole_start, len);
+	ext4_es_insert_extent(inode, hole_start, len, ~0, EXTENT_STATUS_HOLE);
+
+	/* Update hole_len to reflect hole size after lblk */
+	if (hole_start != lblk)
+		len -= lblk - hole_start;
+
+	return len;
+}
+
 /*
 * Block allocation/map/preallocation routine for extents based files
@@ -4175,22 +4214,12 @@ int ext4_ext_map_blocks(handle_t *handle, struct inode *inode,
 	 * we couldn't try to create block if create flag is zero
 	 */
 	if ((flags & EXT4_GET_BLOCKS_CREATE) == 0) {
-		ext4_lblk_t hole_start, hole_len;
+		ext4_lblk_t len;
 
-		hole_start = map->m_lblk;
-		hole_len = ext4_ext_determine_hole(inode, path, &hole_start);
-		/*
-		 * put just found gap into cache to speed up
-		 * subsequent requests
-		 */
-		ext4_ext_put_gap_in_cache(inode, hole_start, hole_len);
+		len = ext4_ext_determine_insert_hole(inode, path, map->m_lblk);
 
-		/* Update hole_len to reflect hole size after map->m_lblk */
-		if (hole_start != map->m_lblk)
-			hole_len -= map->m_lblk - hole_start;
 		map->m_pblk = 0;
-		map->m_len = min_t(unsigned int, map->m_len, hole_len);
-
+		map->m_len = min_t(unsigned int, map->m_len, len);
 		goto out;
 	}
 
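ext4_ext_determine_insert_hole() caches the hole starting at hole_start but reports the length remaining after the queried block, and the caller clamps the mapping to it. A tiny worked example of that final clamp; the numbers are made up:

#include <stdio.h>

int main(void)
{
    unsigned hole_start = 100, hole_len = 50;  /* hole covers blocks 100..149 */
    unsigned lblk = 120, want = 100;           /* query starts mid-hole */

    if (hole_start != lblk)
        hole_len -= lblk - hole_start;         /* 30 blocks remain after lblk */
    unsigned mapped = want < hole_len ? want : hole_len;

    printf("hole after lblk: %u, clamped map len: %u\n", hole_len, mapped);
    return 0;
}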
@@ -823,6 +823,24 @@ void ext4_mb_generate_buddy(struct super_block *sb,
 	atomic64_add(period, &sbi->s_mb_generation_time);
 }
 
+static void mb_regenerate_buddy(struct ext4_buddy *e4b)
+{
+	int count;
+	int order = 1;
+	void *buddy;
+
+	while ((buddy = mb_find_buddy(e4b, order++, &count)))
+		ext4_set_bits(buddy, 0, count);
+
+	e4b->bd_info->bb_fragments = 0;
+	memset(e4b->bd_info->bb_counters, 0,
+	       sizeof(*e4b->bd_info->bb_counters) *
+	       (e4b->bd_sb->s_blocksize_bits + 2));
+
+	ext4_mb_generate_buddy(e4b->bd_sb, e4b->bd_buddy,
+			       e4b->bd_bitmap, e4b->bd_group, e4b->bd_info);
+}
+
 /* The buddy information is attached the buddy cache inode
 * for convenience. The information regarding each group
 * is loaded via ext4_mb_load_buddy. The information involve
@@ -1505,6 +1523,8 @@ static void mb_free_blocks(struct inode *inode, struct ext4_buddy *e4b,
 			ext4_mark_group_bitmap_corrupted(
 				sb, e4b->bd_group,
 				EXT4_GROUP_INFO_BBITMAP_CORRUPT);
+		} else {
+			mb_regenerate_buddy(e4b);
 		}
 		goto done;
 	}
@@ -1854,6 +1874,9 @@ int ext4_mb_try_best_found(struct ext4_allocation_context *ac,
 		return err;
 
 	ext4_lock_group(ac->ac_sb, group);
+	if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(e4b->bd_info)))
+		goto out;
+
 	max = mb_find_extent(e4b, ex.fe_start, ex.fe_len, &ex);
 
 	if (max > 0) {
@@ -1861,6 +1884,7 @@ int ext4_mb_try_best_found(struct ext4_allocation_context *ac,
 		ext4_mb_use_best_found(ac, e4b);
 	}
 
+out:
 	ext4_unlock_group(ac->ac_sb, group);
 	ext4_mb_unload_buddy(e4b);
 
@@ -1889,12 +1913,10 @@ int ext4_mb_find_by_goal(struct ext4_allocation_context *ac,
 	if (err)
 		return err;
 
-	if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(e4b->bd_info))) {
-		ext4_mb_unload_buddy(e4b);
-		return 0;
-	}
-
 	ext4_lock_group(ac->ac_sb, group);
+	if (unlikely(EXT4_MB_GRP_BBITMAP_CORRUPT(e4b->bd_info)))
+		goto out;
+
 	max = mb_find_extent(e4b, ac->ac_g_ex.fe_start,
 			     ac->ac_g_ex.fe_len, &ex);
 	ex.fe_logical = 0xDEADFA11; /* debug value */
@@ -1927,6 +1949,7 @@ int ext4_mb_find_by_goal(struct ext4_allocation_context *ac,
 		ac->ac_b_ex = ex;
 		ext4_mb_use_best_found(ac, e4b);
 	}
+out:
 	ext4_unlock_group(ac->ac_sb, group);
 	ext4_mb_unload_buddy(e4b);
 
@ -57,28 +57,6 @@ static inline void __buffer_unlink(struct journal_head *jh)
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* Move a buffer from the checkpoint list to the checkpoint io list
|
||||
*
|
||||
* Called with j_list_lock held
|
||||
*/
|
||||
static inline void __buffer_relink_io(struct journal_head *jh)
|
||||
{
|
||||
transaction_t *transaction = jh->b_cp_transaction;
|
||||
|
||||
__buffer_unlink_first(jh);
|
||||
|
||||
if (!transaction->t_checkpoint_io_list) {
|
||||
jh->b_cpnext = jh->b_cpprev = jh;
|
||||
} else {
|
||||
jh->b_cpnext = transaction->t_checkpoint_io_list;
|
||||
jh->b_cpprev = transaction->t_checkpoint_io_list->b_cpprev;
|
||||
jh->b_cpprev->b_cpnext = jh;
|
||||
jh->b_cpnext->b_cpprev = jh;
|
||||
}
|
||||
transaction->t_checkpoint_io_list = jh;
|
||||
}
|
||||
|
||||
/*
|
||||
  * Try to release a checkpointed buffer from its transaction.
  * Returns 1 if we released it and 2 if we also released the
@@ -91,8 +69,7 @@ static int __try_to_free_cp_buf(struct journal_head *jh)
 	int ret = 0;
 	struct buffer_head *bh = jh2bh(jh);
 
-	if (jh->b_transaction == NULL && !buffer_locked(bh) &&
-	    !buffer_dirty(bh) && !buffer_write_io_error(bh)) {
+	if (!jh->b_transaction && !buffer_locked(bh) && !buffer_dirty(bh)) {
 		JBUFFER_TRACE(jh, "remove from checkpoint list");
 		ret = __jbd2_journal_remove_checkpoint(jh) + 1;
 	}
@@ -191,6 +168,7 @@ __flush_batch(journal_t *journal, int *batch_count)
 		struct buffer_head *bh = journal->j_chkpt_bhs[i];
 		BUFFER_TRACE(bh, "brelse");
 		__brelse(bh);
+		journal->j_chkpt_bhs[i] = NULL;
 	}
 	*batch_count = 0;
 }
@@ -228,7 +206,6 @@ int jbd2_log_do_checkpoint(journal_t *journal)
 	 * OK, we need to start writing disk blocks.  Take one transaction
 	 * and write it.
 	 */
-	result = 0;
 	spin_lock(&journal->j_list_lock);
 	if (!journal->j_checkpoint_transactions)
 		goto out;
@@ -251,15 +228,6 @@ int jbd2_log_do_checkpoint(journal_t *journal)
 		jh = transaction->t_checkpoint_list;
 		bh = jh2bh(jh);
 
-		if (buffer_locked(bh)) {
-			get_bh(bh);
-			spin_unlock(&journal->j_list_lock);
-			wait_on_buffer(bh);
-			/* the journal_head may have gone by now */
-			BUFFER_TRACE(bh, "brelse");
-			__brelse(bh);
-			goto retry;
-		}
 		if (jh->b_transaction != NULL) {
 			transaction_t *t = jh->b_transaction;
 			tid_t tid = t->t_tid;
@@ -294,32 +262,50 @@ int jbd2_log_do_checkpoint(journal_t *journal)
 			spin_lock(&journal->j_list_lock);
 			goto restart;
 		}
-		if (!buffer_dirty(bh)) {
-			if (unlikely(buffer_write_io_error(bh)) && !result)
-				result = -EIO;
-			BUFFER_TRACE(bh, "remove from checkpoint");
-			if (__jbd2_journal_remove_checkpoint(jh))
-				/* The transaction was released; we're done */
-				goto out;
-			continue;
-		}
-		/*
-		 * Important: we are about to write the buffer, and
-		 * possibly block, while still holding the journal
-		 * lock.  We cannot afford to let the transaction
-		 * logic start messing around with this buffer before
-		 * we write it to disk, as that would break
-		 * recoverability.
-		 */
-		BUFFER_TRACE(bh, "queue");
-		get_bh(bh);
-		J_ASSERT_BH(bh, !buffer_jwrite(bh));
-		journal->j_chkpt_bhs[batch_count++] = bh;
-		__buffer_relink_io(jh);
-		transaction->t_chp_stats.cs_written++;
+		if (!trylock_buffer(bh)) {
+			/*
+			 * The buffer is locked, it may be writing back, or
+			 * flushing out in the last couple of cycles, or
+			 * re-adding into a new transaction, need to check
+			 * it again until it's unlocked.
+			 */
+			get_bh(bh);
+			spin_unlock(&journal->j_list_lock);
+			wait_on_buffer(bh);
+			/* the journal_head may have gone by now */
+			BUFFER_TRACE(bh, "brelse");
+			__brelse(bh);
+			goto retry;
+		} else if (!buffer_dirty(bh)) {
+			unlock_buffer(bh);
+			BUFFER_TRACE(bh, "remove from checkpoint");
+			/*
+			 * If the transaction was released or the checkpoint
+			 * list was empty, we're done.
+			 */
+			if (__jbd2_journal_remove_checkpoint(jh) ||
+			    !transaction->t_checkpoint_list)
+				goto out;
+		} else {
+			unlock_buffer(bh);
+			/*
+			 * We are about to write the buffer, it could be
+			 * raced by some other transaction shrink or buffer
+			 * re-log logic once we release the j_list_lock,
+			 * leave it on the checkpoint list and check status
+			 * again to make sure it's clean.
+			 */
+			BUFFER_TRACE(bh, "queue");
+			get_bh(bh);
+			J_ASSERT_BH(bh, !buffer_jwrite(bh));
+			journal->j_chkpt_bhs[batch_count++] = bh;
+			__buffer_relink_io(jh);
+			transaction->t_chp_stats.cs_written++;
+			transaction->t_checkpoint_list = jh->b_cpnext;
+		}
 
 		if ((batch_count == JBD2_NR_BATCH) ||
-		    need_resched() ||
-		    spin_needbreak(&journal->j_list_lock))
+		    need_resched() || spin_needbreak(&journal->j_list_lock) ||
+		    jh2bh(transaction->t_checkpoint_list) == journal->j_chkpt_bhs[0])
 			goto unlock_and_flush;
 	}
@@ -333,45 +319,8 @@ int jbd2_log_do_checkpoint(journal_t *journal)
 		goto restart;
 	}
 
-	/*
-	 * Now we issued all of the transaction's buffers, let's deal
-	 * with the buffers that are out for I/O.
-	 */
-restart2:
-	/* Did somebody clean up the transaction in the meanwhile? */
-	if (journal->j_checkpoint_transactions != transaction ||
-	    transaction->t_tid != this_tid)
-		goto out;
-
-	while (transaction->t_checkpoint_io_list) {
-		jh = transaction->t_checkpoint_io_list;
-		bh = jh2bh(jh);
-		if (buffer_locked(bh)) {
-			get_bh(bh);
-			spin_unlock(&journal->j_list_lock);
-			wait_on_buffer(bh);
-			/* the journal_head may have gone by now */
-			BUFFER_TRACE(bh, "brelse");
-			__brelse(bh);
-			spin_lock(&journal->j_list_lock);
-			goto restart2;
-		}
-		if (unlikely(buffer_write_io_error(bh)) && !result)
-			result = -EIO;
-
-		/*
-		 * Now in whatever state the buffer currently is, we
-		 * know that it has been written out and so we can
-		 * drop it from the list
-		 */
-		if (__jbd2_journal_remove_checkpoint(jh))
-			break;
-	}
 out:
 	spin_unlock(&journal->j_list_lock);
-	if (result < 0)
-		jbd2_journal_abort(journal, result);
-	else
-		result = jbd2_cleanup_journal_tail(journal);
+	result = jbd2_cleanup_journal_tail(journal);
 
 	return (result < 0) ? result : 0;
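Note: the rework above makes jbd2_log_do_checkpoint() classify each checkpoint buffer with a try-lock and only wait after j_list_lock has been dropped, instead of sleeping mid-scan. A minimal user-space sketch of that try-lock/classify/requeue pattern (plain pthreads; all names here are hypothetical, not the jbd2 API):

    #include <pthread.h>
    #include <stdbool.h>

    struct item {
    	pthread_mutex_t lock;	/* stands in for the buffer lock */
    	bool dirty;		/* stands in for buffer_dirty() */
    };

    /* 0 = busy, drop the list lock and wait; 1 = clean, remove it;
     * 2 = dirty, batch it for writeback. Never blocks. */
    static int classify(struct item *it)
    {
    	if (pthread_mutex_trylock(&it->lock) != 0)
    		return 0;
    	if (!it->dirty) {
    		pthread_mutex_unlock(&it->lock);
    		return 1;
    	}
    	pthread_mutex_unlock(&it->lock);
    	return 2;
    }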
diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
--- a/fs/zonefs/super.c
+++ b/fs/zonefs/super.c
@@ -319,16 +319,18 @@ static loff_t zonefs_check_zone_condition(struct inode *inode,
 	}
 }
 
-struct zonefs_ioerr_data {
-	struct inode	*inode;
-	bool		write;
-};
-
 static int zonefs_io_error_cb(struct blk_zone *zone, unsigned int idx,
 			      void *data)
 {
-	struct zonefs_ioerr_data *err = data;
-	struct inode *inode = err->inode;
+	struct blk_zone *z = data;
+
+	*z = *zone;
+	return 0;
+}
+
+static void zonefs_handle_io_error(struct inode *inode, struct blk_zone *zone,
+				   bool write)
+{
 	struct zonefs_inode_info *zi = ZONEFS_I(inode);
 	struct super_block *sb = inode->i_sb;
 	struct zonefs_sb_info *sbi = ZONEFS_SB(sb);
@@ -344,8 +346,8 @@ static int zonefs_io_error_cb(struct blk_zone *zone, unsigned int idx,
 	isize = i_size_read(inode);
 	if (zone->cond != BLK_ZONE_COND_OFFLINE &&
 	    zone->cond != BLK_ZONE_COND_READONLY &&
-	    !err->write && isize == data_size)
-		return 0;
+	    !write && isize == data_size)
+		return;
 
 	/*
 	 * At this point, we detected either a bad zone or an inconsistency
@@ -366,8 +368,9 @@ static int zonefs_io_error_cb(struct blk_zone *zone, unsigned int idx,
 	 * In all cases, warn about inode size inconsistency and handle the
 	 * IO error according to the zone condition and to the mount options.
 	 */
-	if (zi->i_ztype == ZONEFS_ZTYPE_SEQ && isize != data_size)
-		zonefs_warn(sb, "inode %lu: invalid size %lld (should be %lld)\n",
+	if (isize != data_size)
+		zonefs_warn(sb,
+			    "inode %lu: invalid size %lld (should be %lld)\n",
 			    inode->i_ino, isize, data_size);
 
 	/*
@@ -427,8 +430,6 @@ static int zonefs_io_error_cb(struct blk_zone *zone, unsigned int idx,
 	zonefs_update_stats(inode, data_size);
 	zonefs_i_size_write(inode, data_size);
 	zi->i_wpoffset = data_size;
-
-	return 0;
 }
 
 /*
@@ -442,23 +443,25 @@ static void __zonefs_io_error(struct inode *inode, bool write)
 {
 	struct zonefs_inode_info *zi = ZONEFS_I(inode);
 	struct super_block *sb = inode->i_sb;
-	struct zonefs_sb_info *sbi = ZONEFS_SB(sb);
 	unsigned int noio_flag;
-	unsigned int nr_zones = 1;
-	struct zonefs_ioerr_data err = {
-		.inode = inode,
-		.write = write,
-	};
+	struct blk_zone zone;
 	int ret;
 
 	/*
-	 * The only files that have more than one zone are conventional zone
-	 * files with aggregated conventional zones, for which the inode zone
-	 * size is always larger than the device zone size.
+	 * Conventional zone have no write pointer and cannot become read-only
+	 * or offline. So simply fake a report for a single or aggregated zone
+	 * and let zonefs_handle_io_error() correct the zone inode information
+	 * according to the mount options.
 	 */
-	if (zi->i_zone_size > bdev_zone_sectors(sb->s_bdev))
-		nr_zones = zi->i_zone_size >>
-			(sbi->s_zone_sectors_shift + SECTOR_SHIFT);
+	if (zi->i_ztype != ZONEFS_ZTYPE_SEQ) {
+		zone.start = zi->i_zsector;
+		zone.len = zi->i_max_size >> SECTOR_SHIFT;
+		zone.wp = zone.start + zone.len;
+		zone.type = BLK_ZONE_TYPE_CONVENTIONAL;
+		zone.cond = BLK_ZONE_COND_NOT_WP;
+		zone.capacity = zone.len;
+		goto handle_io_error;
+	}
 
 	/*
 	 * Memory allocations in blkdev_report_zones() can trigger a memory
@@ -469,12 +472,19 @@ static void __zonefs_io_error(struct inode *inode, bool write)
 	 * the GFP_NOIO context avoids both problems.
 	 */
 	noio_flag = memalloc_noio_save();
-	ret = blkdev_report_zones(sb->s_bdev, zi->i_zsector, nr_zones,
-				  zonefs_io_error_cb, &err);
-	if (ret != nr_zones)
+	ret = blkdev_report_zones(sb->s_bdev, zi->i_zsector, 1,
+				  zonefs_io_error_cb, &zone);
+	memalloc_noio_restore(noio_flag);
+	if (ret != 1) {
 		zonefs_err(sb, "Get inode %lu zone information failed %d\n",
 			   inode->i_ino, ret);
-	memalloc_noio_restore(noio_flag);
+		zonefs_warn(sb, "remounting filesystem read-only\n");
+		sb->s_flags |= SB_RDONLY;
+		return;
+	}
+
+handle_io_error:
+	zonefs_handle_io_error(inode, &zone, write);
 }
 
 static void zonefs_io_error(struct inode *inode, bool write)
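Note: after this change the report-zones callback only snapshots the reported zone into a caller-provided struct blk_zone, and conventional zones are no longer queried at all; having no write pointer, they can never go read-only or offline, so __zonefs_io_error() fabricates an equivalent report locally. A rough standalone sketch of that idea (simplified types, not the zonefs structures):

    #include <stdint.h>
    #include <stdbool.h>

    struct zone_rep {
    	uint64_t start, len, wp;
    	bool has_wp;
    };

    /* Build the report locally instead of querying the device; mirrors
     * the zone.start/len/wp assignments in the hunk above. */
    static struct zone_rep fake_conventional(uint64_t start, uint64_t len)
    {
    	struct zone_rep z = {
    		.start = start,
    		.len = len,
    		.wp = start + len,
    		.has_wp = false,	/* conventional: no write pointer */
    	};
    	return z;
    }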
diff --git a/include/linux/fs.h b/include/linux/fs.h
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -318,6 +318,8 @@ enum rw_hint {
 /* iocb->ki_waitq is valid */
 #define IOCB_WAITQ		(1 << 19)
 #define IOCB_NOIO		(1 << 20)
+/* kiocb is a read or write operation submitted by fs/aio.c. */
+#define IOCB_AIO_RW		(1 << 23)
 
 struct kiocb {
 	struct file *ki_filp;
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -321,6 +321,10 @@ extern void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie);
 	WARN_ON_ONCE(debug_locks && !lockdep_is_held(l));	\
 } while (0)
 
+#define lockdep_assert_none_held_once()	do {				\
+	WARN_ON_ONCE(debug_locks && current->lockdep_depth);		\
+} while (0)
+
 #define lockdep_recursing(tsk)	((tsk)->lockdep_recursion)
 
 #define lockdep_pin_lock(l)	lock_pin_lock(&(l)->dep_map)
@@ -394,6 +398,7 @@ static inline void lockdep_unregister_key(struct lock_class_key *key)
 #define lockdep_assert_held_write(l)	do { (void)(l); } while (0)
 #define lockdep_assert_held_read(l)	do { (void)(l); } while (0)
 #define lockdep_assert_held_once(l)	do { (void)(l); } while (0)
+#define lockdep_assert_none_held_once()	do { } while (0)
 
 #define lockdep_recursing(tsk)		(0)
 
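Note: the new lockdep_assert_none_held_once() lets a code path assert that the current task holds no locks at all, warning once if it does. A hypothetical caller (not part of this merge) would use it right before parking:

    /* Sketch: warn (once) if we are about to sleep with any lock held. */
    static void my_wait_for_work(void)
    {
    	lockdep_assert_none_held_once();
    	set_current_state(TASK_INTERRUPTIBLE);
    	schedule();
    }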
diff --git a/include/linux/sched/task_stack.h b/include/linux/sched/task_stack.h
--- a/include/linux/sched/task_stack.h
+++ b/include/linux/sched/task_stack.h
@@ -16,7 +16,7 @@
  * try_get_task_stack() instead.  task_stack_page will return a pointer
  * that could get freed out from under you.
  */
-static inline void *task_stack_page(const struct task_struct *task)
+static __always_inline void *task_stack_page(const struct task_struct *task)
 {
 	return task->stack;
 }
diff --git a/include/linux/socket.h b/include/linux/socket.h
--- a/include/linux/socket.h
+++ b/include/linux/socket.h
@@ -31,7 +31,10 @@ typedef __kernel_sa_family_t sa_family_t;
 
 struct sockaddr {
 	sa_family_t	sa_family;	/* address family, AF_xxx	*/
-	char		sa_data[14];	/* 14 bytes of protocol address	*/
+	union {
+		char sa_data_min[14];		/* Minimum 14 bytes of protocol address */
+		DECLARE_FLEX_ARRAY(char, sa_data);
+	};
 };
 
 struct linger {
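Note: the union preserves both sizeof(struct sockaddr) and the offset of the data bytes while giving sa_data a flexible-array type, so fortified memory helpers stop assuming a hard 14-byte bound. A user-space sketch of the same layout trick, with DECLARE_FLEX_ARRAY expanded by hand (GNU C, as the kernel uses; the type name is made up):

    #include <assert.h>
    #include <stddef.h>

    typedef unsigned short sa_family_t;

    struct sockaddr_sketch {
    	sa_family_t sa_family;
    	union {
    		char sa_data_min[14];	/* keeps the historical size */
    		struct {
    			struct { } __empty;	/* zero-sized placeholder */
    			char sa_data[];		/* flexible view of the bytes */
    		};
    	};
    };

    /* Layout is unchanged from the plain 'char sa_data[14]' version. */
    static_assert(sizeof(struct sockaddr_sketch) == 16, "size preserved");
    static_assert(offsetof(struct sockaddr_sketch, sa_data) ==
    	      offsetof(struct sockaddr_sketch, sa_data_min), "offset preserved");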
diff --git a/include/net/tcp.h b/include/net/tcp.h
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -2228,7 +2228,7 @@ struct tcp_ulp_ops {
 	/* cleanup ulp */
 	void (*release)(struct sock *sk);
 	/* diagnostic */
-	int (*get_info)(const struct sock *sk, struct sk_buff *skb);
+	int (*get_info)(struct sock *sk, struct sk_buff *skb);
 	size_t (*get_info_size)(const struct sock *sk);
 	/* clone ulp */
 	void (*clone)(const struct request_sock *req, struct sock *newsk,
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -10,7 +10,7 @@
 #include <trace/hooks/sched.h>
 
 int sched_rr_timeslice = RR_TIMESLICE;
-int sysctl_sched_rr_timeslice = (MSEC_PER_SEC / HZ) * RR_TIMESLICE;
+int sysctl_sched_rr_timeslice = (MSEC_PER_SEC * RR_TIMESLICE) / HZ;
 /* More than 4 hours if BW_SHIFT equals 20. */
 static const u64 max_rt_runtime = MAX_BW;
 
@@ -2823,9 +2823,6 @@ static int sched_rt_global_constraints(void)
 
 static int sched_rt_global_validate(void)
 {
-	if (sysctl_sched_rt_period <= 0)
-		return -EINVAL;
-
 	if ((sysctl_sched_rt_runtime != RUNTIME_INF) &&
 	    ((sysctl_sched_rt_runtime > sysctl_sched_rt_period) ||
 	     ((u64)sysctl_sched_rt_runtime *
@@ -2856,7 +2853,7 @@ int sched_rt_handler(struct ctl_table *table, int write, void *buffer,
 	old_period = sysctl_sched_rt_period;
 	old_runtime = sysctl_sched_rt_runtime;
 
-	ret = proc_dointvec(table, write, buffer, lenp, ppos);
+	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
 
 	if (!ret && write) {
 		ret = sched_rt_global_validate();
@@ -2900,6 +2897,9 @@ int sched_rr_handler(struct ctl_table *table, int write, void *buffer,
 		sched_rr_timeslice =
 			sysctl_sched_rr_timeslice <= 0 ? RR_TIMESLICE :
 			msecs_to_jiffies(sysctl_sched_rr_timeslice);
+
+		if (sysctl_sched_rr_timeslice <= 0)
+			sysctl_sched_rr_timeslice = jiffies_to_msecs(RR_TIMESLICE);
 	}
 	mutex_unlock(&mutex);
 
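Note on the reordered initializer: with integer division, (MSEC_PER_SEC / HZ) truncates first. Worked example assuming HZ=300, where RR_TIMESLICE = (100 * HZ / 1000) = 30 jiffies: the old form gives (1000/300)*30 = 3*30 = 90 ms, while the new form gives (1000*30)/300 = 100 ms, the intended default. A standalone check:

    #include <stdio.h>

    #define MSEC_PER_SEC	1000L
    #define HZ		300			/* example value only */
    #define RR_TIMESLICE	(100 * HZ / 1000)	/* 100 ms in jiffies */

    int main(void)
    {
    	long old = (MSEC_PER_SEC / HZ) * RR_TIMESLICE;	/* 90: truncated */
    	long new = (MSEC_PER_SEC * RR_TIMESLICE) / HZ;	/* 100: exact   */

    	printf("old=%ld ms new=%ld ms\n", old, new);
    	return 0;
    }

The kernel/sysctl.c hunks further down pair with the proc_dointvec_minmax() switch, bounding sched_rt_period_us to [1, INT_MAX] and sched_rt_runtime_us to [-1, INT_MAX].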
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -29,6 +29,9 @@
 #include <linux/syscalls.h>
 #include <linux/sysctl.h>
 
+/* Not exposed in headers: strictly internal use only. */
+#define SECCOMP_MODE_DEAD	(SECCOMP_MODE_FILTER + 1)
+
 #ifdef CONFIG_HAVE_ARCH_SECCOMP_FILTER
 #include <asm/syscall.h>
 #endif
@@ -1026,6 +1029,7 @@ static void __secure_computing_strict(int this_syscall)
 #ifdef SECCOMP_DEBUG
 	dump_stack();
 #endif
+	current->seccomp.mode = SECCOMP_MODE_DEAD;
 	seccomp_log(this_syscall, SIGKILL, SECCOMP_RET_KILL_THREAD, true);
 	do_exit(SIGKILL);
 }
@@ -1254,6 +1258,7 @@ static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd,
 	case SECCOMP_RET_KILL_THREAD:
 	case SECCOMP_RET_KILL_PROCESS:
 	default:
+		current->seccomp.mode = SECCOMP_MODE_DEAD;
 		seccomp_log(this_syscall, SIGSYS, action, true);
 		/* Dump core only if this is the last remaining thread. */
 		if (action != SECCOMP_RET_KILL_THREAD ||
@@ -1306,6 +1311,11 @@ int __secure_computing(const struct seccomp_data *sd)
 		return 0;
 	case SECCOMP_MODE_FILTER:
 		return __seccomp_filter(this_syscall, sd, false);
+	/* Surviving SECCOMP_RET_KILL_* must be proactively impossible. */
+	case SECCOMP_MODE_DEAD:
+		WARN_ON_ONCE(1);
+		do_exit(SIGKILL);
+		return -1;
 	default:
 		BUG();
 	}
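Note: SECCOMP_MODE_DEAD is a sentinel state; the mode is poisoned before the fatal signal is delivered, so if the task ever re-enters syscall entry, the failed kill is caught instead of silently resuming filtering. A generic sketch of the poison-then-trap pattern (ordinary C, not kernel code):

    #include <stdio.h>
    #include <stdlib.h>

    enum mode { MODE_FILTER, MODE_DEAD };
    static enum mode cur = MODE_FILTER;

    static void deliver_kill(void)
    {
    	cur = MODE_DEAD;	/* poison the state before killing */
    	/* ...send the fatal signal; we should never run again... */
    }

    static void on_syscall_entry(void)
    {
    	if (cur == MODE_DEAD) {
    		fprintf(stderr, "BUG: survived a kill verdict\n");
    		abort();	/* mirrors WARN_ON_ONCE + do_exit(SIGKILL) */
    	}
    	/* ...normal filtering... */
    }

    int main(void) { deliver_kill(); on_syscall_entry(); }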
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1862,6 +1862,8 @@ static struct ctl_table kern_table[] = {
 		.maxlen		= sizeof(unsigned int),
 		.mode		= 0644,
 		.proc_handler	= sched_rt_handler,
+		.extra1		= SYSCTL_ONE,
+		.extra2		= SYSCTL_INT_MAX,
 	},
 	{
 		.procname	= "sched_rt_runtime_us",
@@ -1869,6 +1871,8 @@ static struct ctl_table kern_table[] = {
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= sched_rt_handler,
+		.extra1		= SYSCTL_NEG_ONE,
+		.extra2		= SYSCTL_INT_MAX,
 	},
 	{
 		.procname	= "sched_deadline_period_max_us",
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -281,6 +281,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 					      unsigned long dst_start,
 					      unsigned long src_start,
 					      unsigned long len,
+					      bool *mmap_changing,
 					      enum mcopy_atomic_mode mode)
 {
 	int vm_alloc_shared = dst_vma->vm_flags & VM_SHARED;
@@ -399,6 +400,15 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 				goto out;
 			}
 			mmap_read_lock(dst_mm);
+			/*
+			 * If memory mappings are changing because of non-cooperative
+			 * operation (e.g. mremap) running in parallel, bail out and
+			 * request the user to retry later
+			 */
+			if (mmap_changing && READ_ONCE(*mmap_changing)) {
+				err = -EAGAIN;
+				break;
+			}
 
 			dst_vma = NULL;
 			goto retry;
@@ -480,6 +490,7 @@ extern ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 				      unsigned long dst_start,
 				      unsigned long src_start,
 				      unsigned long len,
+				      bool *mmap_changing,
 				      enum mcopy_atomic_mode mode);
 #endif /* CONFIG_HUGETLB_PAGE */
 
@@ -606,7 +617,8 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
 	 */
 	if (is_vm_hugetlb_page(dst_vma))
 		return  __mcopy_atomic_hugetlb(dst_mm, dst_vma, dst_start,
-						src_start, len, mcopy_mode);
+					       src_start, len, mmap_changing,
+					       mcopy_mode);
 
 	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
 		goto out_unlock;
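Note on the user-visible effect: with the mmap_changing check re-done after the retry re-takes mmap_lock, a hugetlb UFFDIO_COPY can now fail mid-range with EAGAIN while a non-cooperative event (e.g. mremap) is in flight, just as the anonymous/shmem path already could. The monitor is expected to retry, using the copy field to account for partial progress; a hedged user-space sketch (uffd setup not shown):

    #include <linux/userfaultfd.h>
    #include <sys/ioctl.h>
    #include <errno.h>

    static int copy_with_retry(int uffd, unsigned long long dst,
    			   unsigned long long src, unsigned long long len)
    {
    	struct uffdio_copy copy = { .dst = dst, .src = src, .len = len };

    	while (ioctl(uffd, UFFDIO_COPY, &copy) == -1) {
    		if (errno != EAGAIN)
    			return -1;
    		if (copy.copy > 0) {	/* bytes already copied */
    			copy.dst += copy.copy;
    			copy.src += copy.copy;
    			copy.len -= copy.copy;
    			copy.copy = 0;
    		}
    	}
    	return 0;
    }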
diff --git a/net/core/dev.c b/net/core/dev.c
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -8792,7 +8792,7 @@ EXPORT_SYMBOL(dev_set_mac_address_user);
 
 int dev_get_mac_address(struct sockaddr *sa, struct net *net, char *dev_name)
 {
-	size_t size = sizeof(sa->sa_data);
+	size_t size = sizeof(sa->sa_data_min);
 	struct net_device *dev;
 	int ret = 0;
 
|
||||
if (ifr->ifr_hwaddr.sa_family != dev->type)
|
||||
return -EINVAL;
|
||||
memcpy(dev->broadcast, ifr->ifr_hwaddr.sa_data,
|
||||
min(sizeof(ifr->ifr_hwaddr.sa_data),
|
||||
min(sizeof(ifr->ifr_hwaddr.sa_data_min),
|
||||
(size_t)dev->addr_len));
|
||||
call_netdevice_notifiers(NETDEV_CHANGEADDR, dev);
|
||||
return 0;
|
||||
|
diff --git a/net/hsr/hsr_framereg.c b/net/hsr/hsr_framereg.c
--- a/net/hsr/hsr_framereg.c
+++ b/net/hsr/hsr_framereg.c
@@ -327,9 +327,12 @@ void hsr_handle_sup_frame(struct hsr_frame_info *frame)
 	node_real->addr_B_port = port_rcv->type;
 
 	spin_lock_bh(&hsr->list_lock);
-	list_del_rcu(&node_curr->mac_list);
-	spin_unlock_bh(&hsr->list_lock);
-	kfree_rcu(node_curr, rcu_head);
+	if (!node_curr->removed) {
+		list_del_rcu(&node_curr->mac_list);
+		node_curr->removed = true;
+		kfree_rcu(node_curr, rcu_head);
+	}
+	spin_unlock_bh(&hsr->list_lock);
 
 done:
 	/* PRP uses v0 header */
@@ -506,11 +509,14 @@ void hsr_prune_nodes(struct timer_list *t)
 		if (time_is_before_jiffies(timestamp +
 				msecs_to_jiffies(HSR_NODE_FORGET_TIME))) {
 			hsr_nl_nodedown(hsr, node->macaddress_A);
-			list_del_rcu(&node->mac_list);
-			/* Note that we need to free this entry later: */
-			kfree_rcu(node, rcu_head);
+			if (!node->removed) {
+				list_del_rcu(&node->mac_list);
+				node->removed = true;
+				/* Note that we need to free this entry later: */
+				kfree_rcu(node, rcu_head);
+			}
 		}
 	}
 	spin_unlock_bh(&hsr->list_lock);
 
 	/* Restart timer */
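Note: both hsr_handle_sup_frame() and hsr_prune_nodes() can try to retire the same node, so the fix gates the unlink/free pair behind a removed flag that is tested and set under list_lock, guaranteeing the node is freed exactly once. The same once-only-teardown pattern in a user-space sketch (pthreads; kfree_rcu's deferred free is approximated by a plain free here):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdlib.h>

    struct node {
    	pthread_mutex_t *list_lock;	/* shared, like hsr->list_lock */
    	bool removed;			/* written only under list_lock */
    };

    /* Safe to call from several paths: only the first caller frees. */
    static void remove_node_once(struct node *n)
    {
    	bool do_free = false;

    	pthread_mutex_lock(n->list_lock);
    	if (!n->removed) {
    		n->removed = true;	/* claim the removal */
    		/* ...unlink from the list here... */
    		do_free = true;
    	}
    	pthread_mutex_unlock(n->list_lock);

    	if (do_free)
    		free(n);
    }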
diff --git a/net/hsr/hsr_framereg.h b/net/hsr/hsr_framereg.h
--- a/net/hsr/hsr_framereg.h
+++ b/net/hsr/hsr_framereg.h
@@ -82,6 +82,7 @@ struct hsr_node {
 	bool			san_a;
 	bool			san_b;
 	u16			seq_out[HSR_PT_PORTS];
+	bool			removed;
 	struct rcu_head		rcu_head;
 };
 
|
||||
if (neigh) {
|
||||
if (!(neigh->nud_state & NUD_NOARP)) {
|
||||
read_lock_bh(&neigh->lock);
|
||||
memcpy(r->arp_ha.sa_data, neigh->ha, dev->addr_len);
|
||||
memcpy(r->arp_ha.sa_data, neigh->ha,
|
||||
min(dev->addr_len, (unsigned char)sizeof(r->arp_ha.sa_data_min)));
|
||||
r->arp_flags = arp_state_to_flags(neigh);
|
||||
read_unlock_bh(&neigh->lock);
|
||||
r->arp_ha.sa_family = dev->type;
|
||||
|
diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
--- a/net/ipv4/devinet.c
+++ b/net/ipv4/devinet.c
@@ -1798,6 +1798,21 @@ static int in_dev_dump_addr(struct in_device *in_dev, struct sk_buff *skb,
 	return err;
 }
 
+/* Combine dev_addr_genid and dev_base_seq to detect changes.
+ */
+static u32 inet_base_seq(const struct net *net)
+{
+	u32 res = atomic_read(&net->ipv4.dev_addr_genid) +
+		  net->dev_base_seq;
+
+	/* Must not return 0 (see nl_dump_check_consistent()).
+	 * Chose a value far away from 0.
+	 */
+	if (!res)
+		res = 0x80000000;
+	return res;
+}
+
 static int inet_dump_ifaddr(struct sk_buff *skb, struct netlink_callback *cb)
 {
 	const struct nlmsghdr *nlh = cb->nlh;
@@ -1849,8 +1864,7 @@ static int inet_dump_ifaddr(struct sk_buff *skb, struct netlink_callback *cb)
 		idx = 0;
 		head = &tgt_net->dev_index_head[h];
 		rcu_read_lock();
-		cb->seq = atomic_read(&tgt_net->ipv4.dev_addr_genid) ^
-			  tgt_net->dev_base_seq;
+		cb->seq = inet_base_seq(tgt_net);
 		hlist_for_each_entry_rcu(dev, head, index_hlist) {
 			if (idx < s_idx)
 				goto cont;
@@ -2249,8 +2263,7 @@ static int inet_netconf_dump_devconf(struct sk_buff *skb,
 		idx = 0;
 		head = &net->dev_index_head[h];
 		rcu_read_lock();
-		cb->seq = atomic_read(&net->ipv4.dev_addr_genid) ^
-			  net->dev_base_seq;
+		cb->seq = inet_base_seq(net);
 		hlist_for_each_entry_rcu(dev, head, index_hlist) {
 			if (idx < s_idx)
 				goto cont;
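Note on the rationale for inet_base_seq(): nl_dump_check_consistent() treats cb->seq == 0 as "consistency checking disabled", and XOR can both hit 0 and stay unchanged when the two generation counters move together (x ^ y == (x+1) ^ (y+1) whenever the increments flip the same bits). Summing the counters and remapping 0 avoids both failure modes; a self-contained illustration:

    #include <assert.h>
    #include <stdint.h>

    static uint32_t seq_xor(uint32_t genid, uint32_t base)
    {
    	return genid ^ base;
    }

    static uint32_t seq_sum(uint32_t genid, uint32_t base)
    {
    	uint32_t res = genid + base;

    	return res ? res : 0x80000000;	/* never hand netlink a 0 */
    }

    int main(void)
    {
    	/* One device change plus one address change: XOR is blind to it. */
    	assert(seq_xor(0, 0) == seq_xor(1, 1));
    	assert(seq_sum(0, 0) != seq_sum(1, 1));
    	return 0;
    }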
diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -702,6 +702,22 @@ static int inet6_netconf_get_devconf(struct sk_buff *in_skb,
 	return err;
 }
 
+/* Combine dev_addr_genid and dev_base_seq to detect changes.
+ */
+static u32 inet6_base_seq(const struct net *net)
+{
+	u32 res = atomic_read(&net->ipv6.dev_addr_genid) +
+		  net->dev_base_seq;
+
+	/* Must not return 0 (see nl_dump_check_consistent()).
+	 * Chose a value far away from 0.
+	 */
+	if (!res)
+		res = 0x80000000;
+	return res;
+}
+
+
 static int inet6_netconf_dump_devconf(struct sk_buff *skb,
 				      struct netlink_callback *cb)
 {
@@ -735,8 +751,7 @@ static int inet6_netconf_dump_devconf(struct sk_buff *skb,
 		idx = 0;
 		head = &net->dev_index_head[h];
 		rcu_read_lock();
-		cb->seq = atomic_read(&net->ipv6.dev_addr_genid) ^
-			  net->dev_base_seq;
+		cb->seq = inet6_base_seq(net);
 		hlist_for_each_entry_rcu(dev, head, index_hlist) {
 			if (idx < s_idx)
 				goto cont;
@@ -5317,7 +5332,7 @@ static int inet6_dump_addr(struct sk_buff *skb, struct netlink_callback *cb,
 	}
 
 	rcu_read_lock();
-	cb->seq = atomic_read(&tgt_net->ipv6.dev_addr_genid) ^ tgt_net->dev_base_seq;
+	cb->seq = inet6_base_seq(tgt_net);
 	for (h = s_h; h < NETDEV_HASHENTRIES; h++, s_idx = 0) {
 		idx = 0;
 		head = &tgt_net->dev_index_head[h];