Merge keystone/android12-5.10-keystone-qcom-release.81+ (2c95d7c) into msm-5.10

* refs/heads/tmp-2c95d7c:
  Revert "BACKPORT: FROMLIST: scsi: core: Reserve one tag for the UFS driver"
  UPSTREAM: binder: Add invalid handle info in user error log
  UPSTREAM: ARM: fix Thumb2 regression with Spectre BHB
  UPSTREAM: ARM: Spectre-BHB: provide empty stub for non-config
  UPSTREAM: ARM: fix build warning in proc-v7-bugs.c
  UPSTREAM: ARM: Do not use NOCROSSREFS directive with ld.lld
  UPSTREAM: ARM: fix co-processor register typo
  UPSTREAM: ARM: fix build error when BPF_SYSCALL is disabled
  UPSTREAM: ARM: include unprivileged BPF status in Spectre V2 reporting
  UPSTREAM: ARM: Spectre-BHB workaround
  UPSTREAM: ARM: use LOADADDR() to get load address of sections
  UPSTREAM: ARM: early traps initialisation
  UPSTREAM: ARM: report Spectre v2 status through sysfs
  UPSTREAM: x86/speculation: Warn about eIBRS + LFENCE + Unprivileged eBPF + SMT
  UPSTREAM: x86/speculation: Warn about Spectre v2 LFENCE mitigation
  UPSTREAM: x86/speculation: Update link to AMD speculation whitepaper
  UPSTREAM: x86/speculation: Use generic retpoline by default on AMD
  UPSTREAM: x86/speculation: Include unprivileged eBPF status in Spectre v2 mitigation reporting
  UPSTREAM: Documentation/hw-vuln: Update spectre doc
  UPSTREAM: x86/speculation: Add eIBRS + Retpoline options
  UPSTREAM: x86/speculation: Rename RETPOLINE_AMD to RETPOLINE_LFENCE
  UPSTREAM: x86,bugs: Unconditionally allow spectre_v2=retpoline,amd
  UPSTREAM: bpf: Add kconfig knob for disabling unpriv bpf by default
  ANDROID: dm-bow: Protect Ranges fetched and erased from the RB tree
  ANDROID: mm: page_pinner: fix build warning
  ANDROID: fault: Add vendor hook for TLB conflict
  BACKPORT: sched: Fix yet more sched_fork() races
  ANDROID: mm/slub: Fix Kasan issue with for_each_object_track
  ANDROID: dm kcopyd: Use reserved memory for the copy buffer
  ANDROID: GKI: add allowed list file for xiaomi
  ANDROID: GKI: Update symbols to symbol list
  FROMGIT: f2fs: quota: fix loop condition at f2fs_quota_sync()
  FROMGIT: f2fs: Restore rwsem lockdep support
  ANDROID: ABI: update allowed list for galaxy
  UPSTREAM: mac80211_hwsim: initialize ieee80211_tx_info at hw_scan_work
  ANDROID: GKI: remove vfs-only namespace from 2 symbols

Change-Id: I54118d206503f63d289dc825d51819e1e34540cc
Signed-off-by: Sivasri Kumar, Vanka <quic_svanka@quicinc.com>
commit da856597eb
Author: Sivasri Kumar, Vanka <quic_svanka@quicinc.com>
Date:   2022-03-21 13:16:06 +05:30

48 changed files with 1445 additions and 596 deletions


@ -60,8 +60,8 @@ privileged data touched during the speculative execution.
Spectre variant 1 attacks take advantage of speculative execution of
conditional branches, while Spectre variant 2 attacks use speculative
execution of indirect branches to leak privileged memory.
See :ref:`[1] <spec_ref1>` :ref:`[5] <spec_ref5>` :ref:`[7] <spec_ref7>`
:ref:`[10] <spec_ref10>` :ref:`[11] <spec_ref11>`.
See :ref:`[1] <spec_ref1>` :ref:`[5] <spec_ref5>` :ref:`[6] <spec_ref6>`
:ref:`[7] <spec_ref7>` :ref:`[10] <spec_ref10>` :ref:`[11] <spec_ref11>`.
Spectre variant 1 (Bounds Check Bypass)
---------------------------------------
@ -131,6 +131,19 @@ steer its indirect branch speculations to gadget code, and measure the
speculative execution's side effects left in level 1 cache to infer the
victim's data.
Yet another variant 2 attack vector is for the attacker to poison the
Branch History Buffer (BHB) to speculatively steer an indirect branch
to a specific Branch Target Buffer (BTB) entry, even if the entry isn't
associated with the source address of the indirect branch. Specifically,
the BHB might be shared across privilege levels even in the presence of
Enhanced IBRS.
Currently the only known real-world BHB attack vector is via
unprivileged eBPF. Therefore, it's highly recommended to not enable
unprivileged eBPF, especially when eIBRS is used (without retpolines).
For a full mitigation against BHB attacks, it's recommended to use
retpolines (or eIBRS combined with retpolines).
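Whether the running kernel considers itself mitigated can be checked from
userspace; a minimal sketch in C, assuming the standard sysfs
vulnerabilities interface::

    #include <stdio.h>

    int main(void)
    {
        char buf[128];
        FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/spectre_v2", "r");

        if (!f)
            return 1;
        if (fgets(buf, sizeof(buf), f))
            printf("spectre_v2: %s", buf); /* e.g. "Mitigation: Retpolines" */
        fclose(f);
        return 0;
    }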
Attack scenarios
----------------
@ -364,13 +377,15 @@ The possible values in this file are:
- Kernel status:
==================================== =================================
'Not affected' The processor is not vulnerable
'Vulnerable' Vulnerable, no mitigation
'Mitigation: Full generic retpoline' Software-focused mitigation
'Mitigation: Full AMD retpoline' AMD-specific software mitigation
'Mitigation: Enhanced IBRS' Hardware-focused mitigation
==================================== =================================
======================================== =================================
'Not affected' The processor is not vulnerable
'Mitigation: None' Vulnerable, no mitigation
'Mitigation: Retpolines' Use Retpoline thunks
'Mitigation: LFENCE' Use LFENCE instructions
'Mitigation: Enhanced IBRS' Hardware-focused mitigation
'Mitigation: Enhanced IBRS + Retpolines' Hardware-focused + Retpolines
'Mitigation: Enhanced IBRS + LFENCE' Hardware-focused + LFENCE
======================================== =================================
- Firmware status: Show if Indirect Branch Restricted Speculation (IBRS) is
used to protect against Spectre variant 2 attacks when calling firmware (x86 only).
@ -584,12 +599,13 @@ kernel command line.
Specific mitigations can also be selected manually:
retpoline
replace indirect branches
retpoline,generic
google's original retpoline
retpoline,amd
AMD-specific minimal thunk
retpoline auto pick between generic,lfence
retpoline,generic Retpolines
retpoline,lfence LFENCE; indirect branch
retpoline,amd alias for retpoline,lfence
eibrs enhanced IBRS
eibrs,retpoline enhanced IBRS + Retpolines
eibrs,lfence enhanced IBRS + LFENCE
Not specifying this option is equivalent to
spectre_v2=auto.
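For example, booting with::

    spectre_v2=eibrs,retpoline

selects the combined eIBRS + retpoline mitigation described above.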
@ -730,7 +746,7 @@ AMD white papers:
.. _spec_ref6:
[6] `Software techniques for managing speculation on AMD processors <https://developer.amd.com/wp-content/resources/90343-B_SoftwareTechniquesforManagingSpeculation_WP_7-18Update_FNL.pdf>`_.
[6] `Software techniques for managing speculation on AMD processors <https://developer.amd.com/wp-content/resources/Managing-Speculation-on-AMD-Processors.pdf>`_.
ARM white papers:


@ -5050,8 +5050,12 @@
Specific mitigations can also be selected manually:
retpoline - replace indirect branches
retpoline,generic - google's original retpoline
retpoline,amd - AMD-specific minimal thunk
retpoline,generic - Retpolines
retpoline,lfence - LFENCE; indirect branch
retpoline,amd - alias for retpoline,lfence
eibrs - enhanced IBRS
eibrs,retpoline - enhanced IBRS + Retpolines
eibrs,lfence - enhanced IBRS + LFENCE
Not specifying this option is equivalent to
spectre_v2=auto.


@ -1457,11 +1457,22 @@ unprivileged_bpf_disabled
=========================
Writing 1 to this entry will disable unprivileged calls to ``bpf()``;
once disabled, calling ``bpf()`` without ``CAP_SYS_ADMIN`` will return
``-EPERM``.
once disabled, calling ``bpf()`` without ``CAP_SYS_ADMIN`` or ``CAP_BPF``
will return ``-EPERM``. Once set to 1, this can't be cleared from the
running kernel anymore.
Once set, this can't be cleared.
Writing 2 to this entry will also disable unprivileged calls to ``bpf()``,
however, an admin can still change this setting later on, if needed, by
writing 0 or 1 to this entry.
If ``BPF_UNPRIV_DEFAULT_OFF`` is enabled in the kernel config, then this
entry will default to 2 instead of 0.
= =============================================================
0 Unprivileged calls to ``bpf()`` are enabled
1 Unprivileged calls to ``bpf()`` are disabled without recovery
2 Unprivileged calls to ``bpf()`` are disabled
= =============================================================
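A minimal C sketch of driving this knob from userspace (writes require
``CAP_SYS_ADMIN``; remember that writing 1 cannot be undone without a
reboot)::

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/proc/sys/kernel/unprivileged_bpf_disabled", O_WRONLY);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        /* 2: disable unprivileged bpf(), but stay reversible by an admin */
        if (write(fd, "2", 1) != 1)
            perror("write");
        close(fd);
        return 0;
    }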
watchdog
========


@ -1 +1 @@
a817d6ed8746d9de8960acde229c3a9dfa4d465a
306d827cfbb9633f5be11438b2fc8922bf4d2b3c

File diff suppressed because it is too large.


@ -343,6 +343,7 @@
__traceiter_dwc3_readl
__traceiter_dwc3_writel
__traceiter_gpu_mem_total
__traceiter_kfree_skb
__traceiter_sched_util_est_se_tp
__traceiter_xdp_exception
__tracepoint_android_rvh_account_irq
@ -494,6 +495,7 @@
__tracepoint_ipi_raise
__tracepoint_irq_handler_entry
__tracepoint_irq_handler_exit
__tracepoint_kfree_skb
__tracepoint_pelt_cfs_tp
__tracepoint_pelt_dl_tp
__tracepoint_pelt_irq_tp
@ -4278,6 +4280,7 @@
usb_hcd_start_port_resume
usb_hcd_unlink_urb_from_ep
usb_hcds_loaded
usb_hid_driver
usb_hub_clear_tt_buffer
usb_hub_find_child
usb_ifnum_to_if


@ -2693,6 +2693,7 @@
__traceiter_android_vh_tune_scan_type
__traceiter_android_vh_tune_swappiness
__traceiter_android_vh_page_referenced_check_bypass
__traceiter_android_vh_drain_all_pages_bypass
__traceiter_android_vh_ufs_compl_command
__traceiter_android_vh_ufs_send_command
__traceiter_android_vh_ufs_send_tm_command
@ -2896,6 +2897,7 @@
__tracepoint_android_vh_tune_scan_type
__tracepoint_android_vh_tune_swappiness
__tracepoint_android_vh_page_referenced_check_bypass
__tracepoint_android_vh_drain_all_pages_bypass
__tracepoint_android_vh_ufs_compl_command
__tracepoint_android_vh_ufs_send_command
__tracepoint_android_vh_ufs_send_tm_command


@ -2688,6 +2688,7 @@
__tracepoint_android_vh_ftrace_size_check
__tracepoint_android_vh_gic_resume
__tracepoint_android_vh_gpio_block_read
__tracepoint_android_vh_handle_tlb_conf
__tracepoint_android_vh_iommu_setup_dma_ops
__tracepoint_android_vh_ipi_stop
__tracepoint_android_vh_jiffies_update


@ -194,3 +194,9 @@
#extend_reclaim.ko
try_to_free_mem_cgroup_pages
##required by xm_power_debug.ko module
wakeup_sources_read_lock
wakeup_sources_read_unlock
wakeup_sources_walk_start
wakeup_sources_walk_next


@ -107,6 +107,16 @@
.endm
#endif
#if __LINUX_ARM_ARCH__ < 7
.macro dsb, args
mcr p15, 0, r0, c7, c10, 4
.endm
.macro isb, args
mcr p15, 0, r0, c7, c5, 4
.endm
#endif
.macro asm_trace_hardirqs_off, save=1
#if defined(CONFIG_TRACE_IRQFLAGS)
.if \save


@ -0,0 +1,38 @@
/* SPDX-License-Identifier: GPL-2.0-only */
#ifndef __ASM_SPECTRE_H
#define __ASM_SPECTRE_H
enum {
SPECTRE_UNAFFECTED,
SPECTRE_MITIGATED,
SPECTRE_VULNERABLE,
};
enum {
__SPECTRE_V2_METHOD_BPIALL,
__SPECTRE_V2_METHOD_ICIALLU,
__SPECTRE_V2_METHOD_SMC,
__SPECTRE_V2_METHOD_HVC,
__SPECTRE_V2_METHOD_LOOP8,
};
enum {
SPECTRE_V2_METHOD_BPIALL = BIT(__SPECTRE_V2_METHOD_BPIALL),
SPECTRE_V2_METHOD_ICIALLU = BIT(__SPECTRE_V2_METHOD_ICIALLU),
SPECTRE_V2_METHOD_SMC = BIT(__SPECTRE_V2_METHOD_SMC),
SPECTRE_V2_METHOD_HVC = BIT(__SPECTRE_V2_METHOD_HVC),
SPECTRE_V2_METHOD_LOOP8 = BIT(__SPECTRE_V2_METHOD_LOOP8),
};
#ifdef CONFIG_GENERIC_CPU_VULNERABILITIES
void spectre_v2_update_state(unsigned int state, unsigned int methods);
#else
static inline void spectre_v2_update_state(unsigned int state,
unsigned int methods)
{}
#endif
int spectre_bhb_update_vectors(unsigned int method);
#endif


@ -26,6 +26,19 @@
#define ARM_MMU_DISCARD(x) x
#endif
/*
* ld.lld does not support NOCROSSREFS:
* https://github.com/ClangBuiltLinux/linux/issues/1609
*/
#ifdef CONFIG_LD_IS_LLD
#define NOCROSSREFS
#endif
/* Set start/end symbol names to the LMA for the section */
#define ARM_LMA(sym, section) \
sym##_start = LOADADDR(section); \
sym##_end = LOADADDR(section) + SIZEOF(section)
#define PROC_INFO \
. = ALIGN(4); \
__proc_info_begin = .; \
@ -110,19 +123,31 @@
* only thing that matters is their relative offsets
*/
#define ARM_VECTORS \
__vectors_start = .; \
.vectors 0xffff0000 : AT(__vectors_start) { \
*(.vectors) \
__vectors_lma = .; \
OVERLAY 0xffff0000 : NOCROSSREFS AT(__vectors_lma) { \
.vectors { \
*(.vectors) \
} \
.vectors.bhb.loop8 { \
*(.vectors.bhb.loop8) \
} \
.vectors.bhb.bpiall { \
*(.vectors.bhb.bpiall) \
} \
} \
. = __vectors_start + SIZEOF(.vectors); \
__vectors_end = .; \
ARM_LMA(__vectors, .vectors); \
ARM_LMA(__vectors_bhb_loop8, .vectors.bhb.loop8); \
ARM_LMA(__vectors_bhb_bpiall, .vectors.bhb.bpiall); \
. = __vectors_lma + SIZEOF(.vectors) + \
SIZEOF(.vectors.bhb.loop8) + \
SIZEOF(.vectors.bhb.bpiall); \
\
__stubs_start = .; \
.stubs ADDR(.vectors) + 0x1000 : AT(__stubs_start) { \
__stubs_lma = .; \
.stubs ADDR(.vectors) + 0x1000 : AT(__stubs_lma) { \
*(.stubs) \
} \
. = __stubs_start + SIZEOF(.stubs); \
__stubs_end = .; \
ARM_LMA(__stubs, .stubs); \
. = __stubs_lma + SIZEOF(.stubs); \
\
PROVIDE(vector_fiq_offset = vector_fiq - ADDR(.vectors));


@ -106,4 +106,6 @@ endif
obj-$(CONFIG_HAVE_ARM_SMCCC) += smccc-call.o
obj-$(CONFIG_GENERIC_CPU_VULNERABILITIES) += spectre.o
extra-y := $(head-y) vmlinux.lds


@ -1007,12 +1007,11 @@ vector_\name:
sub lr, lr, #\correction
.endif
@
@ Save r0, lr_<exception> (parent PC) and spsr_<exception>
@ (parent CPSR)
@
@ Save r0, lr_<exception> (parent PC)
stmia sp, {r0, lr} @ save r0, lr
mrs lr, spsr
@ Save spsr_<exception> (parent CPSR)
2: mrs lr, spsr
str lr, [sp, #8] @ save spsr
@
@ -1033,6 +1032,44 @@ vector_\name:
movs pc, lr @ branch to handler in SVC mode
ENDPROC(vector_\name)
#ifdef CONFIG_HARDEN_BRANCH_HISTORY
.subsection 1
.align 5
vector_bhb_loop8_\name:
.if \correction
sub lr, lr, #\correction
.endif
@ Save r0, lr_<exception> (parent PC)
stmia sp, {r0, lr}
@ bhb workaround
mov r0, #8
3: b . + 4
subs r0, r0, #1
bne 3b
dsb
isb
b 2b
ENDPROC(vector_bhb_loop8_\name)
vector_bhb_bpiall_\name:
.if \correction
sub lr, lr, #\correction
.endif
@ Save r0, lr_<exception> (parent PC)
stmia sp, {r0, lr}
@ bhb workaround
mcr p15, 0, r0, c7, c5, 6 @ BPIALL
@ isb not needed due to "movs pc, lr" in the vector stub
@ which gives a "context synchronisation".
b 2b
ENDPROC(vector_bhb_bpiall_\name)
.previous
#endif
.align 2
@ handler addresses follow this label
1:
@ -1041,6 +1078,10 @@ ENDPROC(vector_\name)
.section .stubs, "ax", %progbits
@ This must be the first word
.word vector_swi
#ifdef CONFIG_HARDEN_BRANCH_HISTORY
.word vector_bhb_loop8_swi
.word vector_bhb_bpiall_swi
#endif
vector_rst:
ARM( swi SYS_ERROR0 )
@ -1155,8 +1196,10 @@ vector_addrexcptn:
* FIQ "NMI" handler
*-----------------------------------------------------------------------------
* Handle a FIQ using the SVC stack allowing FIQ act like NMI on x86
* systems.
* systems. This must be the last vector stub, so lets place it in its own
* subsection.
*/
.subsection 2
vector_stub fiq, FIQ_MODE, 4
.long __fiq_usr @ 0 (USR_26 / USR_32)
@ -1189,6 +1232,30 @@ vector_addrexcptn:
W(b) vector_irq
W(b) vector_fiq
#ifdef CONFIG_HARDEN_BRANCH_HISTORY
.section .vectors.bhb.loop8, "ax", %progbits
.L__vectors_bhb_loop8_start:
W(b) vector_rst
W(b) vector_bhb_loop8_und
W(ldr) pc, .L__vectors_bhb_loop8_start + 0x1004
W(b) vector_bhb_loop8_pabt
W(b) vector_bhb_loop8_dabt
W(b) vector_addrexcptn
W(b) vector_bhb_loop8_irq
W(b) vector_bhb_loop8_fiq
.section .vectors.bhb.bpiall, "ax", %progbits
.L__vectors_bhb_bpiall_start:
W(b) vector_rst
W(b) vector_bhb_bpiall_und
W(ldr) pc, .L__vectors_bhb_bpiall_start + 0x1008
W(b) vector_bhb_bpiall_pabt
W(b) vector_bhb_bpiall_dabt
W(b) vector_addrexcptn
W(b) vector_bhb_bpiall_irq
W(b) vector_bhb_bpiall_fiq
#endif
.data
.align 2


@ -162,6 +162,29 @@ ENDPROC(ret_from_fork)
*-----------------------------------------------------------------------------
*/
.align 5
#ifdef CONFIG_HARDEN_BRANCH_HISTORY
ENTRY(vector_bhb_loop8_swi)
sub sp, sp, #PT_REGS_SIZE
stmia sp, {r0 - r12}
mov r8, #8
1: b 2f
2: subs r8, r8, #1
bne 1b
dsb
isb
b 3f
ENDPROC(vector_bhb_loop8_swi)
.align 5
ENTRY(vector_bhb_bpiall_swi)
sub sp, sp, #PT_REGS_SIZE
stmia sp, {r0 - r12}
mcr p15, 0, r8, c7, c5, 6 @ BPIALL
isb
b 3f
ENDPROC(vector_bhb_bpiall_swi)
#endif
.align 5
ENTRY(vector_swi)
#ifdef CONFIG_CPU_V7M
@ -169,6 +192,7 @@ ENTRY(vector_swi)
#else
sub sp, sp, #PT_REGS_SIZE
stmia sp, {r0 - r12} @ Calling r0 - r12
3:
ARM( add r8, sp, #S_PC )
ARM( stmdb r8, {sp, lr}^ ) @ Calling sp, lr
THUMB( mov r8, sp )

arch/arm/kernel/spectre.c (new file, 71 lines)

@ -0,0 +1,71 @@
// SPDX-License-Identifier: GPL-2.0-only
#include <linux/bpf.h>
#include <linux/cpu.h>
#include <linux/device.h>
#include <asm/spectre.h>
static bool _unprivileged_ebpf_enabled(void)
{
#ifdef CONFIG_BPF_SYSCALL
return !sysctl_unprivileged_bpf_disabled;
#else
return false;
#endif
}
ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
char *buf)
{
return sprintf(buf, "Mitigation: __user pointer sanitization\n");
}
static unsigned int spectre_v2_state;
static unsigned int spectre_v2_methods;
void spectre_v2_update_state(unsigned int state, unsigned int method)
{
if (state > spectre_v2_state)
spectre_v2_state = state;
spectre_v2_methods |= method;
}
ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
char *buf)
{
const char *method;
if (spectre_v2_state == SPECTRE_UNAFFECTED)
return sprintf(buf, "%s\n", "Not affected");
if (spectre_v2_state != SPECTRE_MITIGATED)
return sprintf(buf, "%s\n", "Vulnerable");
if (_unprivileged_ebpf_enabled())
return sprintf(buf, "Vulnerable: Unprivileged eBPF enabled\n");
switch (spectre_v2_methods) {
case SPECTRE_V2_METHOD_BPIALL:
method = "Branch predictor hardening";
break;
case SPECTRE_V2_METHOD_ICIALLU:
method = "I-cache invalidation";
break;
case SPECTRE_V2_METHOD_SMC:
case SPECTRE_V2_METHOD_HVC:
method = "Firmware call";
break;
case SPECTRE_V2_METHOD_LOOP8:
method = "History overwrite";
break;
default:
method = "Multiple mitigations";
break;
}
return sprintf(buf, "Mitigation: %s\n", method);
}


@ -30,6 +30,7 @@
#include <linux/atomic.h>
#include <asm/cacheflush.h>
#include <asm/exception.h>
#include <asm/spectre.h>
#include <asm/unistd.h>
#include <asm/traps.h>
#include <asm/ptrace.h>
@ -806,10 +807,59 @@ static inline void __init kuser_init(void *vectors)
}
#endif
#ifndef CONFIG_CPU_V7M
static void copy_from_lma(void *vma, void *lma_start, void *lma_end)
{
memcpy(vma, lma_start, lma_end - lma_start);
}
static void flush_vectors(void *vma, size_t offset, size_t size)
{
unsigned long start = (unsigned long)vma + offset;
unsigned long end = start + size;
flush_icache_range(start, end);
}
#ifdef CONFIG_HARDEN_BRANCH_HISTORY
int spectre_bhb_update_vectors(unsigned int method)
{
extern char __vectors_bhb_bpiall_start[], __vectors_bhb_bpiall_end[];
extern char __vectors_bhb_loop8_start[], __vectors_bhb_loop8_end[];
void *vec_start, *vec_end;
if (system_state > SYSTEM_SCHEDULING) {
pr_err("CPU%u: Spectre BHB workaround too late - system vulnerable\n",
smp_processor_id());
return SPECTRE_VULNERABLE;
}
switch (method) {
case SPECTRE_V2_METHOD_LOOP8:
vec_start = __vectors_bhb_loop8_start;
vec_end = __vectors_bhb_loop8_end;
break;
case SPECTRE_V2_METHOD_BPIALL:
vec_start = __vectors_bhb_bpiall_start;
vec_end = __vectors_bhb_bpiall_end;
break;
default:
pr_err("CPU%u: unknown Spectre BHB state %d\n",
smp_processor_id(), method);
return SPECTRE_VULNERABLE;
}
copy_from_lma(vectors_page, vec_start, vec_end);
flush_vectors(vectors_page, 0, vec_end - vec_start);
return SPECTRE_MITIGATED;
}
#endif
void __init early_trap_init(void *vectors_base)
{
#ifndef CONFIG_CPU_V7M
unsigned long vectors = (unsigned long)vectors_base;
extern char __stubs_start[], __stubs_end[];
extern char __vectors_start[], __vectors_end[];
unsigned i;
@ -830,17 +880,20 @@ void __init early_trap_init(void *vectors_base)
* into the vector page, mapped at 0xffff0000, and ensure these
* are visible to the instruction stream.
*/
memcpy((void *)vectors, __vectors_start, __vectors_end - __vectors_start);
memcpy((void *)vectors + 0x1000, __stubs_start, __stubs_end - __stubs_start);
copy_from_lma(vectors_base, __vectors_start, __vectors_end);
copy_from_lma(vectors_base + 0x1000, __stubs_start, __stubs_end);
kuser_init(vectors_base);
flush_icache_range(vectors, vectors + PAGE_SIZE * 2);
flush_vectors(vectors_base, 0, PAGE_SIZE * 2);
}
#else /* ifndef CONFIG_CPU_V7M */
void __init early_trap_init(void *vectors_base)
{
/*
* on V7-M there is no need to copy the vector table to a dedicated
* memory area. The address is configurable and so a table in the kernel
* image can be used.
*/
#endif
}
#endif


@ -833,6 +833,7 @@ config CPU_BPREDICT_DISABLE
config CPU_SPECTRE
bool
select GENERIC_CPU_VULNERABILITIES
config HARDEN_BRANCH_PREDICTOR
bool "Harden the branch predictor against aliasing attacks" if EXPERT
@ -853,6 +854,16 @@ config HARDEN_BRANCH_PREDICTOR
If unsure, say Y.
config HARDEN_BRANCH_HISTORY
bool "Harden Spectre style attacks against branch history" if EXPERT
depends on CPU_SPECTRE
default y
help
Speculation attacks against some high-performance processors can
make use of branch history to influence future speculation. When
taking an exception, a sequence of branches overwrites the branch
history, or branch history is invalidated.
config TLS_REG_EMUL
bool
select NEED_KUSER_HELPERS


@ -6,8 +6,35 @@
#include <asm/cp15.h>
#include <asm/cputype.h>
#include <asm/proc-fns.h>
#include <asm/spectre.h>
#include <asm/system_misc.h>
#ifdef CONFIG_ARM_PSCI
static int __maybe_unused spectre_v2_get_cpu_fw_mitigation_state(void)
{
struct arm_smccc_res res;
arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
switch ((int)res.a0) {
case SMCCC_RET_SUCCESS:
return SPECTRE_MITIGATED;
case SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED:
return SPECTRE_UNAFFECTED;
default:
return SPECTRE_VULNERABLE;
}
}
#else
static int __maybe_unused spectre_v2_get_cpu_fw_mitigation_state(void)
{
return SPECTRE_VULNERABLE;
}
#endif
#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
DEFINE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn);
@ -36,13 +63,61 @@ static void __maybe_unused call_hvc_arch_workaround_1(void)
arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
}
static void cpu_v7_spectre_init(void)
static unsigned int spectre_v2_install_workaround(unsigned int method)
{
const char *spectre_v2_method = NULL;
int cpu = smp_processor_id();
if (per_cpu(harden_branch_predictor_fn, cpu))
return;
return SPECTRE_MITIGATED;
switch (method) {
case SPECTRE_V2_METHOD_BPIALL:
per_cpu(harden_branch_predictor_fn, cpu) =
harden_branch_predictor_bpiall;
spectre_v2_method = "BPIALL";
break;
case SPECTRE_V2_METHOD_ICIALLU:
per_cpu(harden_branch_predictor_fn, cpu) =
harden_branch_predictor_iciallu;
spectre_v2_method = "ICIALLU";
break;
case SPECTRE_V2_METHOD_HVC:
per_cpu(harden_branch_predictor_fn, cpu) =
call_hvc_arch_workaround_1;
cpu_do_switch_mm = cpu_v7_hvc_switch_mm;
spectre_v2_method = "hypervisor";
break;
case SPECTRE_V2_METHOD_SMC:
per_cpu(harden_branch_predictor_fn, cpu) =
call_smc_arch_workaround_1;
cpu_do_switch_mm = cpu_v7_smc_switch_mm;
spectre_v2_method = "firmware";
break;
}
if (spectre_v2_method)
pr_info("CPU%u: Spectre v2: using %s workaround\n",
smp_processor_id(), spectre_v2_method);
return SPECTRE_MITIGATED;
}
#else
static unsigned int spectre_v2_install_workaround(unsigned int method)
{
pr_info("CPU%u: Spectre V2: workarounds disabled by configuration\n",
smp_processor_id());
return SPECTRE_VULNERABLE;
}
#endif
static void cpu_v7_spectre_v2_init(void)
{
unsigned int state, method = 0;
switch (read_cpuid_part()) {
case ARM_CPU_PART_CORTEX_A8:
@ -51,69 +126,133 @@ static void cpu_v7_spectre_init(void)
case ARM_CPU_PART_CORTEX_A17:
case ARM_CPU_PART_CORTEX_A73:
case ARM_CPU_PART_CORTEX_A75:
per_cpu(harden_branch_predictor_fn, cpu) =
harden_branch_predictor_bpiall;
spectre_v2_method = "BPIALL";
state = SPECTRE_MITIGATED;
method = SPECTRE_V2_METHOD_BPIALL;
break;
case ARM_CPU_PART_CORTEX_A15:
case ARM_CPU_PART_BRAHMA_B15:
per_cpu(harden_branch_predictor_fn, cpu) =
harden_branch_predictor_iciallu;
spectre_v2_method = "ICIALLU";
state = SPECTRE_MITIGATED;
method = SPECTRE_V2_METHOD_ICIALLU;
break;
#ifdef CONFIG_ARM_PSCI
case ARM_CPU_PART_BRAHMA_B53:
/* Requires no workaround */
state = SPECTRE_UNAFFECTED;
break;
default:
/* Other ARM CPUs require no workaround */
if (read_cpuid_implementor() == ARM_CPU_IMP_ARM)
if (read_cpuid_implementor() == ARM_CPU_IMP_ARM) {
state = SPECTRE_UNAFFECTED;
break;
fallthrough;
/* Cortex A57/A72 require firmware workaround */
case ARM_CPU_PART_CORTEX_A57:
case ARM_CPU_PART_CORTEX_A72: {
struct arm_smccc_res res;
}
arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
if ((int)res.a0 != 0)
return;
fallthrough;
/* Cortex A57/A72 require firmware workaround */
case ARM_CPU_PART_CORTEX_A57:
case ARM_CPU_PART_CORTEX_A72:
state = spectre_v2_get_cpu_fw_mitigation_state();
if (state != SPECTRE_MITIGATED)
break;
switch (arm_smccc_1_1_get_conduit()) {
case SMCCC_CONDUIT_HVC:
per_cpu(harden_branch_predictor_fn, cpu) =
call_hvc_arch_workaround_1;
cpu_do_switch_mm = cpu_v7_hvc_switch_mm;
spectre_v2_method = "hypervisor";
method = SPECTRE_V2_METHOD_HVC;
break;
case SMCCC_CONDUIT_SMC:
per_cpu(harden_branch_predictor_fn, cpu) =
call_smc_arch_workaround_1;
cpu_do_switch_mm = cpu_v7_smc_switch_mm;
spectre_v2_method = "firmware";
method = SPECTRE_V2_METHOD_SMC;
break;
default:
state = SPECTRE_VULNERABLE;
break;
}
}
#endif
if (state == SPECTRE_MITIGATED)
state = spectre_v2_install_workaround(method);
spectre_v2_update_state(state, method);
}
#ifdef CONFIG_HARDEN_BRANCH_HISTORY
static int spectre_bhb_method;
static const char *spectre_bhb_method_name(int method)
{
switch (method) {
case SPECTRE_V2_METHOD_LOOP8:
return "loop";
case SPECTRE_V2_METHOD_BPIALL:
return "BPIALL";
default:
return "unknown";
}
}
static int spectre_bhb_install_workaround(int method)
{
if (spectre_bhb_method != method) {
if (spectre_bhb_method) {
pr_err("CPU%u: Spectre BHB: method disagreement, system vulnerable\n",
smp_processor_id());
return SPECTRE_VULNERABLE;
}
if (spectre_bhb_update_vectors(method) == SPECTRE_VULNERABLE)
return SPECTRE_VULNERABLE;
spectre_bhb_method = method;
}
if (spectre_v2_method)
pr_info("CPU%u: Spectre v2: using %s workaround\n",
smp_processor_id(), spectre_v2_method);
pr_info("CPU%u: Spectre BHB: using %s workaround\n",
smp_processor_id(), spectre_bhb_method_name(method));
return SPECTRE_MITIGATED;
}
#else
static void cpu_v7_spectre_init(void)
static int spectre_bhb_install_workaround(int method)
{
return SPECTRE_VULNERABLE;
}
#endif
static void cpu_v7_spectre_bhb_init(void)
{
unsigned int state, method = 0;
switch (read_cpuid_part()) {
case ARM_CPU_PART_CORTEX_A15:
case ARM_CPU_PART_BRAHMA_B15:
case ARM_CPU_PART_CORTEX_A57:
case ARM_CPU_PART_CORTEX_A72:
state = SPECTRE_MITIGATED;
method = SPECTRE_V2_METHOD_LOOP8;
break;
case ARM_CPU_PART_CORTEX_A73:
case ARM_CPU_PART_CORTEX_A75:
state = SPECTRE_MITIGATED;
method = SPECTRE_V2_METHOD_BPIALL;
break;
default:
state = SPECTRE_UNAFFECTED;
break;
}
if (state == SPECTRE_MITIGATED)
state = spectre_bhb_install_workaround(method);
spectre_v2_update_state(state, method);
}
static __maybe_unused bool cpu_v7_check_auxcr_set(bool *warned,
u32 mask, const char *msg)
{
@ -142,16 +281,17 @@ static bool check_spectre_auxcr(bool *warned, u32 bit)
void cpu_v7_ca8_ibe(void)
{
if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(6)))
cpu_v7_spectre_init();
cpu_v7_spectre_v2_init();
}
void cpu_v7_ca15_ibe(void)
{
if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0)))
cpu_v7_spectre_init();
cpu_v7_spectre_v2_init();
}
void cpu_v7_bugs_init(void)
{
cpu_v7_spectre_init();
cpu_v7_spectre_v2_init();
cpu_v7_spectre_bhb_init();
}


@ -112,6 +112,7 @@
#define ESR_ELx_FSC_ACCESS (0x08)
#define ESR_ELx_FSC_FAULT (0x04)
#define ESR_ELx_FSC_PERM (0x0C)
#define ESR_ELx_FSC_TLBCONF (0x30)
/* ISS field definitions for Data Aborts */
#define ESR_ELx_ISV_SHIFT (24)


@ -711,7 +711,11 @@ static int do_alignment_fault(unsigned long far, unsigned int esr,
static int do_bad(unsigned long far, unsigned int esr, struct pt_regs *regs)
{
return 1; /* "fault" */
unsigned long addr = untagged_addr(far);
int ret = 1;
trace_android_vh_handle_tlb_conf(addr, esr, &ret);
return ret;
}
static int do_sea(unsigned long far, unsigned int esr, struct pt_regs *regs)


@ -204,7 +204,7 @@
#define X86_FEATURE_SME ( 7*32+10) /* AMD Secure Memory Encryption */
#define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enabled */
#define X86_FEATURE_RETPOLINE ( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
#define X86_FEATURE_RETPOLINE_AMD ( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */
#define X86_FEATURE_RETPOLINE_LFENCE ( 7*32+13) /* "" Use LFENCE for Spectre variant 2 */
#define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */
#define X86_FEATURE_CDP_L2 ( 7*32+15) /* Code and Data Prioritization L2 */
#define X86_FEATURE_MSR_SPEC_CTRL ( 7*32+16) /* "" MSR SPEC_CTRL is implemented */


@ -82,7 +82,7 @@
#ifdef CONFIG_RETPOLINE
ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), \
__stringify(jmp __x86_retpoline_\reg), X86_FEATURE_RETPOLINE, \
__stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), X86_FEATURE_RETPOLINE_AMD
__stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), X86_FEATURE_RETPOLINE_LFENCE
#else
jmp *%\reg
#endif
@ -92,7 +92,7 @@
#ifdef CONFIG_RETPOLINE
ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; call *%\reg), \
__stringify(call __x86_retpoline_\reg), X86_FEATURE_RETPOLINE, \
__stringify(lfence; ANNOTATE_RETPOLINE_SAFE; call *%\reg), X86_FEATURE_RETPOLINE_AMD
__stringify(lfence; ANNOTATE_RETPOLINE_SAFE; call *%\reg), X86_FEATURE_RETPOLINE_LFENCE
#else
call *%\reg
#endif
@ -134,7 +134,7 @@
"lfence;\n" \
ANNOTATE_RETPOLINE_SAFE \
"call *%[thunk_target]\n", \
X86_FEATURE_RETPOLINE_AMD)
X86_FEATURE_RETPOLINE_LFENCE)
# define THUNK_TARGET(addr) [thunk_target] "r" (addr)
@ -164,7 +164,7 @@
"lfence;\n" \
ANNOTATE_RETPOLINE_SAFE \
"call *%[thunk_target]\n", \
X86_FEATURE_RETPOLINE_AMD)
X86_FEATURE_RETPOLINE_LFENCE)
# define THUNK_TARGET(addr) [thunk_target] "rm" (addr)
#endif
@ -176,9 +176,11 @@
/* The Spectre V2 mitigation variants */
enum spectre_v2_mitigation {
SPECTRE_V2_NONE,
SPECTRE_V2_RETPOLINE_GENERIC,
SPECTRE_V2_RETPOLINE_AMD,
SPECTRE_V2_IBRS_ENHANCED,
SPECTRE_V2_RETPOLINE,
SPECTRE_V2_LFENCE,
SPECTRE_V2_EIBRS,
SPECTRE_V2_EIBRS_RETPOLINE,
SPECTRE_V2_EIBRS_LFENCE,
};
/* The indirect branch speculation control variants */


@ -16,6 +16,7 @@
#include <linux/prctl.h>
#include <linux/sched/smt.h>
#include <linux/pgtable.h>
#include <linux/bpf.h>
#include <asm/spec-ctrl.h>
#include <asm/cmdline.h>
@ -613,6 +614,32 @@ static inline const char *spectre_v2_module_string(void)
static inline const char *spectre_v2_module_string(void) { return ""; }
#endif
#define SPECTRE_V2_LFENCE_MSG "WARNING: LFENCE mitigation is not recommended for this CPU, data leaks possible!\n"
#define SPECTRE_V2_EIBRS_EBPF_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!\n"
#define SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS+LFENCE mitigation and SMT, data leaks possible via Spectre v2 BHB attacks!\n"
#ifdef CONFIG_BPF_SYSCALL
void unpriv_ebpf_notify(int new_state)
{
if (new_state)
return;
/* Unprivileged eBPF is enabled */
switch (spectre_v2_enabled) {
case SPECTRE_V2_EIBRS:
pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
break;
case SPECTRE_V2_EIBRS_LFENCE:
if (sched_smt_active())
pr_err(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
break;
default:
break;
}
}
#endif
static inline bool match_option(const char *arg, int arglen, const char *opt)
{
int len = strlen(opt);
@ -627,7 +654,10 @@ enum spectre_v2_mitigation_cmd {
SPECTRE_V2_CMD_FORCE,
SPECTRE_V2_CMD_RETPOLINE,
SPECTRE_V2_CMD_RETPOLINE_GENERIC,
SPECTRE_V2_CMD_RETPOLINE_AMD,
SPECTRE_V2_CMD_RETPOLINE_LFENCE,
SPECTRE_V2_CMD_EIBRS,
SPECTRE_V2_CMD_EIBRS_RETPOLINE,
SPECTRE_V2_CMD_EIBRS_LFENCE,
};
enum spectre_v2_user_cmd {
@ -700,6 +730,13 @@ spectre_v2_parse_user_cmdline(enum spectre_v2_mitigation_cmd v2_cmd)
return SPECTRE_V2_USER_CMD_AUTO;
}
static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode)
{
return (mode == SPECTRE_V2_EIBRS ||
mode == SPECTRE_V2_EIBRS_RETPOLINE ||
mode == SPECTRE_V2_EIBRS_LFENCE);
}
static void __init
spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
{
@ -767,7 +804,7 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
*/
if (!boot_cpu_has(X86_FEATURE_STIBP) ||
!smt_possible ||
spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
spectre_v2_in_eibrs_mode(spectre_v2_enabled))
return;
/*
@ -787,9 +824,11 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
static const char * const spectre_v2_strings[] = {
[SPECTRE_V2_NONE] = "Vulnerable",
[SPECTRE_V2_RETPOLINE_GENERIC] = "Mitigation: Full generic retpoline",
[SPECTRE_V2_RETPOLINE_AMD] = "Mitigation: Full AMD retpoline",
[SPECTRE_V2_IBRS_ENHANCED] = "Mitigation: Enhanced IBRS",
[SPECTRE_V2_RETPOLINE] = "Mitigation: Retpolines",
[SPECTRE_V2_LFENCE] = "Mitigation: LFENCE",
[SPECTRE_V2_EIBRS] = "Mitigation: Enhanced IBRS",
[SPECTRE_V2_EIBRS_LFENCE] = "Mitigation: Enhanced IBRS + LFENCE",
[SPECTRE_V2_EIBRS_RETPOLINE] = "Mitigation: Enhanced IBRS + Retpolines",
};
static const struct {
@ -800,8 +839,12 @@ static const struct {
{ "off", SPECTRE_V2_CMD_NONE, false },
{ "on", SPECTRE_V2_CMD_FORCE, true },
{ "retpoline", SPECTRE_V2_CMD_RETPOLINE, false },
{ "retpoline,amd", SPECTRE_V2_CMD_RETPOLINE_AMD, false },
{ "retpoline,amd", SPECTRE_V2_CMD_RETPOLINE_LFENCE, false },
{ "retpoline,lfence", SPECTRE_V2_CMD_RETPOLINE_LFENCE, false },
{ "retpoline,generic", SPECTRE_V2_CMD_RETPOLINE_GENERIC, false },
{ "eibrs", SPECTRE_V2_CMD_EIBRS, false },
{ "eibrs,lfence", SPECTRE_V2_CMD_EIBRS_LFENCE, false },
{ "eibrs,retpoline", SPECTRE_V2_CMD_EIBRS_RETPOLINE, false },
{ "auto", SPECTRE_V2_CMD_AUTO, false },
};
@ -838,17 +881,30 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
}
if ((cmd == SPECTRE_V2_CMD_RETPOLINE ||
cmd == SPECTRE_V2_CMD_RETPOLINE_AMD ||
cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC) &&
cmd == SPECTRE_V2_CMD_RETPOLINE_LFENCE ||
cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC ||
cmd == SPECTRE_V2_CMD_EIBRS_LFENCE ||
cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) &&
!IS_ENABLED(CONFIG_RETPOLINE)) {
pr_err("%s selected but not compiled in. Switching to AUTO select\n", mitigation_options[i].option);
pr_err("%s selected but not compiled in. Switching to AUTO select\n",
mitigation_options[i].option);
return SPECTRE_V2_CMD_AUTO;
}
if (cmd == SPECTRE_V2_CMD_RETPOLINE_AMD &&
boot_cpu_data.x86_vendor != X86_VENDOR_HYGON &&
boot_cpu_data.x86_vendor != X86_VENDOR_AMD) {
pr_err("retpoline,amd selected but CPU is not AMD. Switching to AUTO select\n");
if ((cmd == SPECTRE_V2_CMD_EIBRS ||
cmd == SPECTRE_V2_CMD_EIBRS_LFENCE ||
cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) &&
!boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
pr_err("%s selected but CPU doesn't have eIBRS. Switching to AUTO select\n",
mitigation_options[i].option);
return SPECTRE_V2_CMD_AUTO;
}
if ((cmd == SPECTRE_V2_CMD_RETPOLINE_LFENCE ||
cmd == SPECTRE_V2_CMD_EIBRS_LFENCE) &&
!boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) {
pr_err("%s selected, but CPU doesn't have a serializing LFENCE. Switching to AUTO select\n",
mitigation_options[i].option);
return SPECTRE_V2_CMD_AUTO;
}
@ -857,6 +913,16 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
return cmd;
}
static enum spectre_v2_mitigation __init spectre_v2_select_retpoline(void)
{
if (!IS_ENABLED(CONFIG_RETPOLINE)) {
pr_err("Kernel not compiled with retpoline; no mitigation available!");
return SPECTRE_V2_NONE;
}
return SPECTRE_V2_RETPOLINE;
}
static void __init spectre_v2_select_mitigation(void)
{
enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
@ -877,49 +943,64 @@ static void __init spectre_v2_select_mitigation(void)
case SPECTRE_V2_CMD_FORCE:
case SPECTRE_V2_CMD_AUTO:
if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
mode = SPECTRE_V2_IBRS_ENHANCED;
/* Force it so VMEXIT will restore correctly */
x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
goto specv2_set_mode;
mode = SPECTRE_V2_EIBRS;
break;
}
if (IS_ENABLED(CONFIG_RETPOLINE))
goto retpoline_auto;
mode = spectre_v2_select_retpoline();
break;
case SPECTRE_V2_CMD_RETPOLINE_AMD:
if (IS_ENABLED(CONFIG_RETPOLINE))
goto retpoline_amd;
case SPECTRE_V2_CMD_RETPOLINE_LFENCE:
pr_err(SPECTRE_V2_LFENCE_MSG);
mode = SPECTRE_V2_LFENCE;
break;
case SPECTRE_V2_CMD_RETPOLINE_GENERIC:
if (IS_ENABLED(CONFIG_RETPOLINE))
goto retpoline_generic;
mode = SPECTRE_V2_RETPOLINE;
break;
case SPECTRE_V2_CMD_RETPOLINE:
if (IS_ENABLED(CONFIG_RETPOLINE))
goto retpoline_auto;
mode = spectre_v2_select_retpoline();
break;
case SPECTRE_V2_CMD_EIBRS:
mode = SPECTRE_V2_EIBRS;
break;
case SPECTRE_V2_CMD_EIBRS_LFENCE:
mode = SPECTRE_V2_EIBRS_LFENCE;
break;
case SPECTRE_V2_CMD_EIBRS_RETPOLINE:
mode = SPECTRE_V2_EIBRS_RETPOLINE;
break;
}
pr_err("Spectre mitigation: kernel not compiled with retpoline; no mitigation available!");
return;
retpoline_auto:
if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
retpoline_amd:
if (!boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) {
pr_err("Spectre mitigation: LFENCE not serializing, switching to generic retpoline\n");
goto retpoline_generic;
}
mode = SPECTRE_V2_RETPOLINE_AMD;
setup_force_cpu_cap(X86_FEATURE_RETPOLINE_AMD);
setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
} else {
retpoline_generic:
mode = SPECTRE_V2_RETPOLINE_GENERIC;
setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
if (mode == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
if (spectre_v2_in_eibrs_mode(mode)) {
/* Force it so VMEXIT will restore correctly */
x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
}
switch (mode) {
case SPECTRE_V2_NONE:
case SPECTRE_V2_EIBRS:
break;
case SPECTRE_V2_LFENCE:
case SPECTRE_V2_EIBRS_LFENCE:
setup_force_cpu_cap(X86_FEATURE_RETPOLINE_LFENCE);
fallthrough;
case SPECTRE_V2_RETPOLINE:
case SPECTRE_V2_EIBRS_RETPOLINE:
setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
break;
}
specv2_set_mode:
spectre_v2_enabled = mode;
pr_info("%s\n", spectre_v2_strings[mode]);
@ -945,7 +1026,7 @@ static void __init spectre_v2_select_mitigation(void)
* the CPU supports Enhanced IBRS, kernel might un-intentionally not
* enable IBRS around firmware calls.
*/
if (boot_cpu_has(X86_FEATURE_IBRS) && mode != SPECTRE_V2_IBRS_ENHANCED) {
if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_eibrs_mode(mode)) {
setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
pr_info("Enabling Restricted Speculation for firmware calls\n");
}
@ -1015,6 +1096,10 @@ void cpu_bugs_smt_update(void)
{
mutex_lock(&spec_ctrl_mutex);
if (sched_smt_active() && unprivileged_ebpf_enabled() &&
spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
pr_warn_once(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
switch (spectre_v2_user_stibp) {
case SPECTRE_V2_USER_NONE:
break;
@ -1621,7 +1706,7 @@ static ssize_t tsx_async_abort_show_state(char *buf)
static char *stibp_state(void)
{
if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
if (spectre_v2_in_eibrs_mode(spectre_v2_enabled))
return "";
switch (spectre_v2_user_stibp) {
@ -1651,6 +1736,27 @@ static char *ibpb_state(void)
return "";
}
static ssize_t spectre_v2_show_state(char *buf)
{
if (spectre_v2_enabled == SPECTRE_V2_LFENCE)
return sprintf(buf, "Vulnerable: LFENCE\n");
if (spectre_v2_enabled == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
return sprintf(buf, "Vulnerable: eIBRS with unprivileged eBPF\n");
if (sched_smt_active() && unprivileged_ebpf_enabled() &&
spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
return sprintf(buf, "Vulnerable: eIBRS+LFENCE with unprivileged eBPF and SMT\n");
return sprintf(buf, "%s%s%s%s%s%s\n",
spectre_v2_strings[spectre_v2_enabled],
ibpb_state(),
boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
stibp_state(),
boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
spectre_v2_module_string());
}
static ssize_t srbds_show_state(char *buf)
{
return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
@ -1676,12 +1782,7 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
return sprintf(buf, "%s\n", spectre_v1_strings[spectre_v1_mitigation]);
case X86_BUG_SPECTRE_V2:
return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
ibpb_state(),
boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
stibp_state(),
boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
spectre_v2_module_string());
return spectre_v2_show_state(buf);
case X86_BUG_SPEC_STORE_BYPASS:
return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);


@ -2705,8 +2705,8 @@ static void binder_transaction(struct binder_proc *proc,
ref->node, &target_proc,
&return_error);
} else {
binder_user_error("%d:%d got transaction to invalid handle\n",
proc->pid, thread->pid);
binder_user_error("%d:%d got transaction to invalid handle, %u\n",
proc->pid, thread->pid, tr->target.handle);
return_error = BR_FAILED_REPLY;
}
binder_proc_unlock(proc);


@ -392,3 +392,4 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_free_proc);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_thread_release);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_has_work_ilocked);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_read_done);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_handle_tlb_conf);


@ -236,6 +236,7 @@ static void set_type(struct bow_context *bc, struct bow_range **br, int type)
(*br)->type = type;
mutex_lock(&bc->ranges_lock);
if (next->type == type) {
if (type == TRIMMED)
list_del(&next->trimmed_list);
@ -249,6 +250,7 @@ static void set_type(struct bow_context *bc, struct bow_range **br, int type)
rb_erase(&(*br)->node, &bc->ranges);
kfree(*br);
}
mutex_unlock(&bc->ranges_lock);
*br = NULL;
}
@ -599,6 +601,7 @@ static void dm_bow_dtr(struct dm_target *ti)
struct bow_context *bc = (struct bow_context *) ti->private;
struct kobject *kobj;
mutex_lock(&bc->ranges_lock);
while (rb_first(&bc->ranges)) {
struct bow_range *br = container_of(rb_first(&bc->ranges),
struct bow_range, node);
@ -606,6 +609,8 @@ static void dm_bow_dtr(struct dm_target *ti)
rb_erase(&br->node, &bc->ranges);
kfree(br);
}
mutex_unlock(&bc->ranges_lock);
if (bc->workqueue)
destroy_workqueue(bc->workqueue);
if (bc->bufio)
@ -1181,6 +1186,7 @@ static void dm_bow_tablestatus(struct dm_target *ti, char *result,
return;
}
mutex_lock(&bc->ranges_lock);
for (i = rb_first(&bc->ranges); i; i = rb_next(i)) {
struct bow_range *br = container_of(i, struct bow_range, node);
@ -1188,11 +1194,11 @@ static void dm_bow_tablestatus(struct dm_target *ti, char *result,
readable_type[br->type],
(unsigned long long)br->sector);
if (result >= end)
return;
goto unlock;
result += scnprintf(result, end - result, "\n");
if (result >= end)
return;
goto unlock;
if (br->type == TRIMMED)
++trimmed_range_count;
@ -1214,19 +1220,22 @@ static void dm_bow_tablestatus(struct dm_target *ti, char *result,
if (!rb_next(i)) {
scnprintf(result, end - result,
"\nERROR: Last range not of type TOP");
return;
goto unlock;
}
if (br->sector > range_top(br)) {
scnprintf(result, end - result,
"\nERROR: sectors out of order");
return;
goto unlock;
}
}
if (trimmed_range_count != trimmed_list_length)
scnprintf(result, end - result,
"\nERROR: not all trimmed ranges in trimmed list");
unlock:
mutex_unlock(&bc->ranges_lock);
}
static void dm_bow_status(struct dm_target *ti, status_type_t type,


@ -17,6 +17,8 @@
#include <linux/list.h>
#include <linux/mempool.h>
#include <linux/module.h>
#include <linux/of_platform.h>
#include <linux/of_reserved_mem.h>
#include <linux/pagemap.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
@ -39,6 +41,105 @@ static unsigned kcopyd_subjob_size_kb = DEFAULT_SUB_JOB_SIZE_KB;
module_param(kcopyd_subjob_size_kb, uint, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(kcopyd_subjob_size_kb, "Sub-job size for dm-kcopyd clients");
static bool rsm_enabled;
static phys_addr_t rsm_mem_base, rsm_mem_size;
#ifndef MODULE
static DEFINE_SPINLOCK(rsm_lock);
static int *rsm_mem;
static int rsm_page_cnt;
static int rsm_tbl_idx;
static struct reserved_mem *rmem;
static void __init kcopyd_rsm_init(void)
{
static struct device_node *rsm_node;
int ret = 0;
if (!rsm_enabled)
return;
rsm_node = of_find_compatible_node(NULL, NULL, "mediatek,dm_ota");
if (!rsm_node) {
ret = -ENODEV;
goto out;
}
rmem = of_reserved_mem_lookup(rsm_node);
if (!rmem) {
ret = -EINVAL;
goto out_put_node;
}
rsm_mem_base = rmem->base;
rsm_mem_size = rmem->size;
rsm_page_cnt = rsm_mem_size / PAGE_SIZE;
rsm_mem = kcalloc(rsm_page_cnt, sizeof(int), GFP_KERNEL);
if (!rsm_mem)
ret = -ENOMEM;
out_put_node:
of_node_put(rsm_node);
out:
if (ret)
pr_warn("kcopyd: failed to init rsm: %d", ret);
}
static int __init kcopyd_rsm_enable(char *str)
{
rsm_enabled = true;
return 0;
}
early_param("mtk_kcopyd_quirk", kcopyd_rsm_enable);
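/*
 * Hypothetical usage note: booting with "mtk_kcopyd_quirk" on the kernel
 * command line enables the reserved-memory path used by
 * kcopyd_rsm_get_page() below.
 */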
static void kcopyd_rsm_get_page(struct page **p)
{
int i;
unsigned long flags;
*p = NULL;
spin_lock_irqsave(&rsm_lock, flags);
for (i = 0 ; i < rsm_page_cnt ; i++) {
rsm_tbl_idx = (rsm_tbl_idx + 1 == rsm_page_cnt) ? 0 : rsm_tbl_idx + 1;
if (rsm_mem[rsm_tbl_idx] == 0) {
rsm_mem[rsm_tbl_idx] = 1;
*p = virt_to_page(phys_to_virt(rsm_mem_base + PAGE_SIZE
* rsm_tbl_idx));
break;
}
}
spin_unlock_irqrestore(&rsm_lock, flags);
}
static void kcopyd_rsm_drop_page(struct page **p)
{
u64 off;
unsigned long flags;
if (*p) {
off = page_to_phys(*p) - rsm_mem_base;
spin_lock_irqsave(&rsm_lock, flags);
rsm_mem[off >> PAGE_SHIFT] = 0;
spin_unlock_irqrestore(&rsm_lock, flags);
*p = NULL;
}
}
static void kcopyd_rsm_destroy(void)
{
if (rsm_enabled)
kfree(rsm_mem);
}
#else
#define kcopyd_rsm_destroy(...)
#define kcopyd_rsm_drop_page(...)
#define kcopyd_rsm_get_page(...)
#define kcopyd_rsm_init(...)
#endif
static unsigned dm_get_kcopyd_subjob_size(void)
{
unsigned sub_job_size_kb;
@ -211,7 +312,7 @@ static void wake(struct dm_kcopyd_client *kc)
/*
* Obtain one page for the use of kcopyd.
*/
static struct page_list *alloc_pl(gfp_t gfp)
static struct page_list *alloc_pl(gfp_t gfp, unsigned long job_flags)
{
struct page_list *pl;
@ -219,7 +320,12 @@ static struct page_list *alloc_pl(gfp_t gfp)
if (!pl)
return NULL;
pl->page = alloc_page(gfp);
if (rsm_enabled && test_bit(DM_KCOPYD_SNAP_MERGE, &job_flags)) {
kcopyd_rsm_get_page(&pl->page);
} else {
pl->page = alloc_page(gfp);
}
if (!pl->page) {
kfree(pl);
return NULL;
@ -230,7 +336,14 @@ static struct page_list *alloc_pl(gfp_t gfp)
static void free_pl(struct page_list *pl)
{
__free_page(pl->page);
struct page *p = pl->page;
phys_addr_t pa = page_to_phys(p);
if (rsm_enabled && pa >= rsm_mem_base && pa < rsm_mem_base + rsm_mem_size)
kcopyd_rsm_drop_page(&pl->page);
else
__free_page(pl->page);
kfree(pl);
}
@ -258,14 +371,15 @@ static void kcopyd_put_pages(struct dm_kcopyd_client *kc, struct page_list *pl)
}
static int kcopyd_get_pages(struct dm_kcopyd_client *kc,
unsigned int nr, struct page_list **pages)
unsigned int nr, struct page_list **pages,
unsigned long job_flags)
{
struct page_list *pl;
*pages = NULL;
do {
pl = alloc_pl(__GFP_NOWARN | __GFP_NORETRY | __GFP_KSWAPD_RECLAIM);
pl = alloc_pl(__GFP_NOWARN | __GFP_NORETRY | __GFP_KSWAPD_RECLAIM, job_flags);
if (unlikely(!pl)) {
/* Use reserved pages */
pl = kc->pages;
@ -309,7 +423,7 @@ static int client_reserve_pages(struct dm_kcopyd_client *kc, unsigned nr_pages)
struct page_list *pl = NULL, *next;
for (i = 0; i < nr_pages; i++) {
next = alloc_pl(GFP_KERNEL);
next = alloc_pl(GFP_KERNEL, 0);
if (!next) {
if (pl)
drop_pages(pl);
@ -395,6 +509,8 @@ int __init dm_kcopyd_init(void)
zero_page_list.next = &zero_page_list;
zero_page_list.page = ZERO_PAGE(0);
kcopyd_rsm_init();
return 0;
}
@ -402,6 +518,7 @@ void dm_kcopyd_exit(void)
{
kmem_cache_destroy(_job_cache);
_job_cache = NULL;
kcopyd_rsm_destroy();
}
/*
@ -586,7 +703,7 @@ static int run_pages_job(struct kcopyd_job *job)
int r;
unsigned nr_pages = dm_div_up(job->dests[0].count, PAGE_SIZE >> 9);
r = kcopyd_get_pages(job->kc, nr_pages, &job->pages);
r = kcopyd_get_pages(job->kc, nr_pages, &job->pages, job->flags);
if (!r) {
/* this job is ready for io */
push(&job->kc->io_jobs, job);


@ -1117,7 +1117,8 @@ static void snapshot_merge_next_chunks(struct dm_snapshot *s)
for (i = 0; i < linear_chunks; i++)
__check_for_conflicting_io(s, old_chunk + i);
dm_kcopyd_copy(s->kcopyd_client, &src, 1, &dest, 0, merge_callback, s);
dm_kcopyd_copy(s->kcopyd_client, &src, 1, &dest, 1 << DM_KCOPYD_SNAP_MERGE,
merge_callback, s);
return;
shut:


@ -2268,6 +2268,15 @@ static void hw_scan_work(struct work_struct *work)
if (req->ie_len)
skb_put_data(probe, req->ie, req->ie_len);
if (!ieee80211_tx_prepare_skb(hwsim->hw,
hwsim->hw_scan_vif,
probe,
hwsim->tmp_chan->band,
NULL)) {
kfree_skb(probe);
continue;
}
local_bh_disable();
mac80211_hwsim_tx_frame(hwsim->hw, probe,
hwsim->tmp_chan);


@ -220,6 +220,10 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
goto fail;
}
/* Use min_t(int, ...) in case shost->can_queue exceeds SHRT_MAX */
shost->cmd_per_lun = min_t(int, shost->cmd_per_lun,
shost->can_queue);
error = scsi_init_sense_cache(shost);
if (error)
goto fail;
@ -228,12 +232,6 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
if (error)
goto fail;
shost->can_queue = shost->tag_set.queue_depth;
/* Use min_t(int, ...) in case shost->can_queue exceeds SHRT_MAX */
shost->cmd_per_lun = min_t(int, shost->cmd_per_lun,
shost->can_queue);
if (!shost->shost_gendev.parent)
shost->shost_gendev.parent = dev ? dev : &platform_bus;
if (!dma_dev)


@ -1907,10 +1907,6 @@ int scsi_mq_setup_tags(struct Scsi_Host *shost)
tag_set->ops = &scsi_mq_ops_no_commit;
tag_set->nr_hw_queues = shost->nr_hw_queues ? : 1;
tag_set->queue_depth = shost->can_queue;
if (shost->hostt->name && strcmp(shost->hostt->name, "ufshcd") == 0) {
tag_set->queue_depth--;
tag_set->reserved_tags++;
}
tag_set->cmd_size = cmd_size;
tag_set->numa_node = NUMA_NO_NODE;
tag_set->flags = BLK_MQ_F_SHOULD_MERGE;


@ -3189,7 +3189,7 @@ int __sync_dirty_buffer(struct buffer_head *bh, int op_flags)
}
return ret;
}
EXPORT_SYMBOL_NS(__sync_dirty_buffer, ANDROID_GKI_VFS_EXPORT_ONLY);
EXPORT_SYMBOL(__sync_dirty_buffer);
int sync_dirty_buffer(struct buffer_head *bh)
{


@ -2080,9 +2080,17 @@ static inline bool enabled_nat_bits(struct f2fs_sb_info *sbi,
return (cpc) ? (cpc->reason & CP_UMOUNT) && set : set;
}
static inline void init_f2fs_rwsem(struct f2fs_rwsem *sem)
#define init_f2fs_rwsem(sem) \
do { \
static struct lock_class_key __key; \
\
__init_f2fs_rwsem((sem), #sem, &__key); \
} while (0)
static inline void __init_f2fs_rwsem(struct f2fs_rwsem *sem,
const char *sem_name, struct lock_class_key *key)
{
init_rwsem(&sem->internal_rwsem);
__init_rwsem(&sem->internal_rwsem, sem_name, key);
init_waitqueue_head(&sem->read_waiters);
}
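With the macro form, every f2fs_rwsem gets its own lockdep class, keyed by
the stringified argument at the call site; a hypothetical example:

	init_f2fs_rwsem(&sbi->cp_rwsem);	/* class key named "&sbi->cp_rwsem" */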


@ -2485,7 +2485,7 @@ int f2fs_quota_sync(struct super_block *sb, int type)
struct f2fs_sb_info *sbi = F2FS_SB(sb);
struct quota_info *dqopt = sb_dqopt(sb);
int cnt;
int ret;
int ret = 0;
/*
* Now when everything is written we can discard the pagecache so
@ -2496,8 +2496,8 @@ int f2fs_quota_sync(struct super_block *sb, int type)
if (type != -1 && cnt != type)
continue;
if (!sb_has_quota_active(sb, type))
return 0;
if (!sb_has_quota_active(sb, cnt))
continue;
inode_lock(dqopt->files[cnt]);


@ -2533,7 +2533,7 @@ int kern_path(const char *name, unsigned int flags, struct path *path)
return filename_lookup(AT_FDCWD, getname_kernel(name),
flags, path, NULL);
}
EXPORT_SYMBOL_NS(kern_path, ANDROID_GKI_VFS_EXPORT_ONLY);
EXPORT_SYMBOL(kern_path);
/**
* vfs_path_lookup - lookup a file path relative to a dentry-vfsmount pair


@ -1491,6 +1491,12 @@ struct bpf_prog *bpf_prog_by_id(u32 id);
struct bpf_link *bpf_link_by_id(u32 id);
const struct bpf_func_proto *bpf_base_func_proto(enum bpf_func_id func_id);
static inline bool unprivileged_ebpf_enabled(void)
{
return !sysctl_unprivileged_bpf_disabled;
}
#else /* !CONFIG_BPF_SYSCALL */
static inline struct bpf_prog *bpf_prog_get(u32 ufd)
{
@ -1685,6 +1691,12 @@ bpf_base_func_proto(enum bpf_func_id func_id)
{
return NULL;
}
static inline bool unprivileged_ebpf_enabled(void)
{
return false;
}
#endif /* CONFIG_BPF_SYSCALL */
static inline struct bpf_prog *bpf_prog_get_type(u32 ufd,


@ -21,6 +21,7 @@
#define DM_KCOPYD_IGNORE_ERROR 1
#define DM_KCOPYD_WRITE_SEQ 2
#define DM_KCOPYD_SNAP_MERGE 3
struct dm_kcopyd_throttle {
unsigned throttle;


@ -55,8 +55,8 @@ extern asmlinkage void schedule_tail(struct task_struct *prev);
extern void init_idle(struct task_struct *idle, int cpu);
extern int sched_fork(unsigned long clone_flags, struct task_struct *p);
extern void sched_post_fork(struct task_struct *p,
struct kernel_clone_args *kargs);
extern void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs);
extern void sched_post_fork(struct task_struct *p);
extern void sched_dead(struct task_struct *p);
void __noreturn do_task_dead(void);


@ -29,6 +29,10 @@ DECLARE_RESTRICTED_HOOK(android_rvh_do_sp_pc_abort,
TP_ARGS(regs, esr, addr, user),
TP_CONDITION(!user));
DECLARE_HOOK(android_vh_handle_tlb_conf,
TP_PROTO(unsigned long addr, unsigned int esr, int *ret),
TP_ARGS(addr, esr, ret));
/* macro versions of hooks are no longer required */
#endif /* _TRACE_HOOK_FAULT_H */


@ -1748,6 +1748,16 @@ config BPF_JIT_DEFAULT_ON
def_bool ARCH_WANT_DEFAULT_BPF_JIT || BPF_JIT_ALWAYS_ON
depends on HAVE_EBPF_JIT && BPF_JIT
config BPF_UNPRIV_DEFAULT_OFF
bool "Disable unprivileged BPF by default"
depends on BPF_SYSCALL
help
Disables unprivileged BPF by default by setting the corresponding
/proc/sys/kernel/unprivileged_bpf_disabled knob to 2. An admin can
still reenable it by setting it to 0 later on, or permanently
disable it by setting it to 1 (from which no other transition to
0 is possible anymore).
source "kernel/bpf/preload/Kconfig"
config USERFAULTFD


@ -52,7 +52,8 @@ static DEFINE_SPINLOCK(map_idr_lock);
static DEFINE_IDR(link_idr);
static DEFINE_SPINLOCK(link_idr_lock);
int sysctl_unprivileged_bpf_disabled __read_mostly;
int sysctl_unprivileged_bpf_disabled __read_mostly =
IS_BUILTIN(CONFIG_BPF_UNPRIV_DEFAULT_OFF) ? 2 : 0;
static const struct bpf_map_ops * const bpf_map_types[] = {
#define BPF_PROG_TYPE(_id, _name, prog_ctx_type, kern_ctx_type)


@ -2249,6 +2249,17 @@ static __latent_entropy struct task_struct *copy_process(
if (retval)
goto bad_fork_put_pidfd;
/*
* Now that the cgroups are pinned, re-clone the parent cgroup and put
* the new task on the correct runqueue. All this *before* the task
* becomes visible.
*
* This isn't part of ->can_fork() because while the re-cloning is
* cgroup specific, it unconditionally needs to place the task on a
* runqueue.
*/
sched_cgroup_fork(p, args);
/*
* From this point on we must avoid any synchronous user-space
* communication until we take the tasklist-lock. In particular, we do
@ -2357,7 +2368,7 @@ static __latent_entropy struct task_struct *copy_process(
write_unlock_irq(&tasklist_lock);
proc_fork_connector(p);
sched_post_fork(p, args);
sched_post_fork(p);
cgroup_post_fork(p, args);
perf_event_fork(p);


@ -880,9 +880,8 @@ int tg_nop(struct task_group *tg, void *data)
}
#endif
static void set_load_weight(struct task_struct *p)
static void set_load_weight(struct task_struct *p, bool update_load)
{
bool update_load = !(READ_ONCE(p->state) & TASK_NEW);
int prio = p->static_prio - MAX_RT_PRIO;
struct load_weight *load = &p->se.load;
@ -3485,7 +3484,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
p->static_prio = NICE_TO_PRIO(0);
p->prio = p->normal_prio = p->static_prio;
set_load_weight(p);
set_load_weight(p, false);
/*
* We don't need the reset flag anymore after the fork. It has
@ -3504,6 +3503,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
init_entity_runnable_average(&p->se);
trace_android_rvh_finish_prio_fork(p);
#ifdef CONFIG_SCHED_INFO
if (likely(sched_info_on()))
memset(&p->sched_info, 0, sizeof(p->sched_info));
@ -3519,18 +3519,24 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
return 0;
}
void sched_post_fork(struct task_struct *p, struct kernel_clone_args *kargs)
void sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
{
unsigned long flags;
#ifdef CONFIG_CGROUP_SCHED
struct task_group *tg;
#endif
/*
* Because we're not yet on the pid-hash, p->pi_lock isn't strictly
* required yet, but lockdep gets upset if rules are violated.
*/
raw_spin_lock_irqsave(&p->pi_lock, flags);
#ifdef CONFIG_CGROUP_SCHED
tg = container_of(kargs->cset->subsys[cpu_cgrp_id],
struct task_group, css);
p->sched_task_group = autogroup_task_group(p, tg);
if (1) {
struct task_group *tg;
tg = container_of(kargs->cset->subsys[cpu_cgrp_id],
struct task_group, css);
tg = autogroup_task_group(p, tg);
p->sched_task_group = tg;
}
#endif
rseq_migrate(p);
/*
@ -3541,7 +3547,10 @@ void sched_post_fork(struct task_struct *p, struct kernel_clone_args *kargs)
if (p->sched_class->task_fork)
p->sched_class->task_fork(p);
raw_spin_unlock_irqrestore(&p->pi_lock, flags);
}
void sched_post_fork(struct task_struct *p)
{
uclamp_post_fork(p);
}
@ -5253,7 +5262,7 @@ void set_user_nice(struct task_struct *p, long nice)
put_prev_task(rq, p);
p->static_prio = NICE_TO_PRIO(nice);
set_load_weight(p);
set_load_weight(p, true);
old_prio = p->prio;
p->prio = effective_prio(p);
@ -5427,7 +5436,7 @@ static void __setscheduler_params(struct task_struct *p,
*/
p->rt_priority = attr->sched_priority;
p->normal_prio = normal_prio(p);
set_load_weight(p);
set_load_weight(p, true);
}
/*
@ -7570,7 +7579,7 @@ void __init sched_init(void)
atomic_set(&rq->nr_iowait, 0);
}
set_load_weight(&init_task);
set_load_weight(&init_task, false);
/*
* The boot idle thread does lazy MMU switching as well:


@ -236,7 +236,34 @@ static int bpf_stats_handler(struct ctl_table *table, int write,
mutex_unlock(&bpf_stats_enabled_mutex);
return ret;
}
#endif
void __weak unpriv_ebpf_notify(int new_state)
{
}
static int bpf_unpriv_handler(struct ctl_table *table, int write,
void *buffer, size_t *lenp, loff_t *ppos)
{
int ret, unpriv_enable = *(int *)table->data;
bool locked_state = unpriv_enable == 1;
struct ctl_table tmp = *table;
if (write && !capable(CAP_SYS_ADMIN))
return -EPERM;
tmp.data = &unpriv_enable;
ret = proc_dointvec_minmax(&tmp, write, buffer, lenp, ppos);
if (write && !ret) {
if (locked_state && unpriv_enable != 1)
return -EPERM;
*(int *)table->data = unpriv_enable;
}
unpriv_ebpf_notify(unpriv_enable);
return ret;
}
#endif /* CONFIG_BPF_SYSCALL && CONFIG_SYSCTL */
/*
* /proc/sys support
@ -2629,10 +2656,9 @@ static struct ctl_table kern_table[] = {
.data = &sysctl_unprivileged_bpf_disabled,
.maxlen = sizeof(sysctl_unprivileged_bpf_disabled),
.mode = 0644,
/* only handle a transition from default "0" to "1" */
.proc_handler = proc_dointvec_minmax,
.extra1 = SYSCTL_ONE,
.extra2 = SYSCTL_ONE,
.proc_handler = bpf_unpriv_handler,
.extra1 = SYSCTL_ZERO,
.extra2 = &two,
},
{
.procname = "bpf_stats_enabled",


@ -328,7 +328,6 @@ void __dump_page_pinner(struct page *page)
void __page_pinner_migration_failed(struct page *page)
{
struct page_ext *page_ext = lookup_page_ext(page);
struct page_pinner *page_pinner;
struct captured_pinner record;
unsigned long flags;
unsigned int idx;
@ -336,7 +335,6 @@ void __page_pinner_migration_failed(struct page *page)
if (unlikely(!page_ext))
return;
page_pinner = get_page_pinner(page_ext);
if (!test_bit(PAGE_EXT_PINNER_MIGRATION_FAILED, &page_ext->flags))
return;


@ -599,7 +599,9 @@ unsigned long get_each_object_track(struct kmem_cache *s,
slab_lock(page);
for_each_object(p, s, page_address(page), page->objects) {
t = get_track(s, p, alloc);
metadata_access_enable();
ret = fn(s, p, t, private);
metadata_access_disable();
if (ret < 0)
break;
num_track += 1;


@ -204,7 +204,7 @@
#define X86_FEATURE_SME ( 7*32+10) /* AMD Secure Memory Encryption */
#define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enabled */
#define X86_FEATURE_RETPOLINE ( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
#define X86_FEATURE_RETPOLINE_AMD ( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */
#define X86_FEATURE_RETPOLINE_LFENCE ( 7*32+13) /* "" Use LFENCEs for Spectre variant 2 */
#define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */
#define X86_FEATURE_CDP_L2 ( 7*32+15) /* Code and Data Prioritization L2 */
#define X86_FEATURE_MSR_SPEC_CTRL ( 7*32+16) /* "" MSR SPEC_CTRL is implemented */