android_kernel_xiaomi_sm8450/kernel/bpf
Luis Gerhorst da75dec7c6 bpf: Fix pointer-leak due to insufficient speculative store bypass mitigation
[ Upstream commit e4f4db47794c9f474b184ee1418f42e6a07412b6 ]

To mitigate Spectre v4, 2039f26f3aca ("bpf: Fix leakage due to
insufficient speculative store bypass mitigation") inserts lfence
instructions after 1) initializing a stack slot and 2) spilling a
pointer to the stack.
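
As a sketch of the mechanism (paraphrasing the upstream code added by
2039f26f3aca; the exact placement in this tree may differ), stores
flagged for sanitization are rewritten in the verifier's fixup pass
(do_misc_fixups() upstream) to be followed by a speculation barrier,
which the x86 JIT lowers to lfence:

  if (env->insn_aux_data[i + delta].sanitize_stack_spill) {
          struct bpf_insn patch[] = {
                  *insn,           /* the flagged store itself */
                  BPF_ST_NOSPEC(), /* speculation barrier */
          };

          new_prog = bpf_patch_insn_data(env, i + delta, patch,
                                         ARRAY_SIZE(patch));
  }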

However, this does not cover cases where a stack slot is first
initialized with a pointer (subject to sanitization) but then
overwritten with a scalar (not subject to sanitization because
the slot was already initialized). In this case, the second write
may be subject to speculative store bypass (SSB), creating a
speculative pointer-as-scalar type confusion. This allows the
program to subsequently leak the numerical pointer value using,
for example, a branch-based cache side channel.

To fix this, also sanitize scalars if they write a stack slot that
previously contained a pointer. Assuming that pointer spills are only
generated by LLVM under register pressure, the performance impact on
most real-world BPF programs should be small.
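
Concretely, the condition in check_stack_write_fixed_off()
(kernel/bpf/verifier.c) changes so that sanitization is skipped only
when every byte of the slot already holds a scalar; the following is
a paraphrase, and details may vary between kernel versions:

  bool sanitize = reg && is_spillable_regtype(reg->type);

  for (i = 0; i < size; i++) {
          u8 type = state->stack[spi].slot_type[i];

          /* Previously: sanitize only if type == STACK_INVALID.
           * Now: also sanitize when the slot held a spilled pointer
           * (STACK_SPILL), i.e. skip only STACK_MISC/STACK_ZERO.
           */
          if (type != STACK_MISC && type != STACK_ZERO) {
                  sanitize = true;
                  break;
          }
  }

  if (sanitize)
          env->insn_aux_data[insn_idx].sanitize_stack_spill = true;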

The following unprivileged BPF bytecode sketches a minimal exploit
and the mitigation:

  [...]
  // r6 = 0 or 1 (scalar, unknown user input)
  // r7 = accessible ptr for side channel
  // r10 = frame pointer (fp), to be leaked
  //
  r9 = r10 // fp alias to encourage SSB
  *(u64 *)(r9 - 8) = r10 // fp[-8] = ptr, to be leaked
  // lfence added here because of pointer spill to stack.
  //
  // Omitted: dummy bpf_ringbuf_output() here to train alias predictor
  // for no r9-r10 dependency.
  //
  *(u64 *)(r10 - 8) = r6 // fp[-8] = scalar, overwrites ptr
  // 2039f26f3aca: no lfence added because stack slot was not STACK_INVALID,
  // store may be subject to SSB
  //
  // fix: also add an lfence when the slot contained a ptr
  //
  r8 = *(u64 *)(r9 - 8)
  // r8 = architecturally a scalar, speculatively a ptr
  //
  // leak ptr using branch-based cache side channel:
  r8 &= 1 // choose bit to leak
  if r8 == 0 goto SLOW // no mispredict
  // architecturally dead code if input r6 is 0,
  // only executes speculatively iff ptr bit is 1
  r8 = *(u64 *)(r7 + 0) // encode bit in cache (0: slow, 1: fast)
SLOW:
  [...]

After running this, the program can time the access to *(r7 + 0) to
determine whether the chosen pointer bit was 0 or 1. Repeat this 64
times to recover the whole address on amd64.
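
A hypothetical userspace measurement loop could look like the sketch
below; all names are illustrative, and it assumes r7 pointed into a
map value shared with userspace at map_val that was flushed from the
cache (e.g. via _mm_clflush()) before the program ran:

  #include <x86intrin.h>

  /* Returns the leaked bit: a fast (cached) access means the
   * speculative load in the BPF program touched *map_val.
   */
  static int leaked_bit(volatile char *map_val,
                        unsigned long long threshold)
  {
          unsigned int aux;
          unsigned long long t0;

          t0 = __rdtscp(&aux);
          (void)*map_val;                /* probe the side channel */
          return __rdtscp(&aux) - t0 < threshold;
  }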

In summary, sanitization can only be skipped if one scalar is
overwritten with another scalar. Scalar confusion due to speculative
store bypass cannot lead to invalid accesses because the pointer
bounds deduced during verification are enforced using branchless
logic. See 979d63d50c ("bpf: prevent out of bounds speculation on
pointer arithmetic") for details.
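
For reference, that rewrite clamps a variable offset using arithmetic
only, roughly as follows (paraphrased from the upstream fixup code;
BPF_REG_AX is the verifier's scratch register and alu_limit the bound
derived during verification):

  *patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit);
  *patch++ = BPF_ALU64_REG(BPF_SUB, BPF_REG_AX, off_reg);
  *patch++ = BPF_ALU64_REG(BPF_OR, BPF_REG_AX, off_reg);
  *patch++ = BPF_ALU64_IMM(BPF_NEG, BPF_REG_AX, 0);
  *patch++ = BPF_ALU64_IMM(BPF_ARSH, BPF_REG_AX, 63);
  *patch++ = BPF_ALU64_REG(BPF_AND, BPF_REG_AX, off_reg);
  /* AX == off_reg if 0 <= off_reg <= alu_limit, else AX == 0; the
   * patched pointer arithmetic then uses AX, so even a misspeculated
   * offset cannot move the pointer out of bounds.
   */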

Do not make the mitigation depend on !env->allow_{uninit_stack,ptr_leaks},
because speculative leaks are likely unexpected even when these are
enabled. For example, leaking the address to a protected log file may
be acceptable while disabling the mitigation might unintentionally leak
the address into the cached state of a map that is accessible to
unprivileged processes.

Fixes: 2039f26f3aca ("bpf: Fix leakage due to insufficient speculative store bypass mitigation")
Signed-off-by: Luis Gerhorst <gerhorst@cs.fau.de>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Henriette Hofmeier <henriette.hofmeier@rub.de>
Link: https://lore.kernel.org/bpf/edc95bad-aada-9cfc-ffe2-fa9bb206583c@cs.fau.de
Link: https://lore.kernel.org/bpf/20230109150544.41465-1-gerhorst@cs.fau.de
Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-02-01 08:23:11 +01:00
preload bpf: Fix umd memory leak in copy_process() 2021-03-30 14:32:03 +02:00
arraymap.c bpf: Acquire map uref in .init_seq_private for array map iterator 2022-08-25 11:37:55 +02:00
bpf_inode_storage.c bpf: Change inode_storage's lookup_elem return value from NULL to -EBADF 2021-03-30 14:31:56 +02:00
bpf_iter.c bpf: Fix an unitialized value in bpf_iter 2021-03-04 11:37:33 +01:00
bpf_local_storage.c bpf: Do not copy spin lock field from user in bpf_selem_alloc 2022-12-08 11:23:55 +01:00
bpf_lru_list.c bpf_lru_list: Read double-checked variable once without lock 2021-03-04 11:37:29 +01:00
bpf_lru_list.h bpf: Fix a typo "inacitve" -> "inactive" 2020-04-06 21:54:10 +02:00
bpf_lsm.c bpf: Update verification logic for LSM programs 2020-11-06 13:15:21 -08:00
bpf_struct_ops_types.h bpf: tcp: Support tcp_congestion_ops in bpf 2020-01-09 08:46:18 -08:00
bpf_struct_ops.c bpf: Handle return value of BPF_PROG_TYPE_STRUCT_OPS prog 2021-10-06 15:55:50 +02:00
btf.c bpf: Prevent decl_tag from being referenced in func_proto arg 2023-01-14 10:16:18 +01:00
cgroup.c bpf, cgroup: Fix kernel BUG in purge_effective_progs 2022-09-08 11:11:36 +02:00
core.c bpf: Make sure mac_header was set before using it 2022-07-29 17:19:23 +02:00
cpumap.c bpf, cpumap: Remove rcpu pointer from cpu_map_build_skb signature 2020-09-28 23:30:42 +02:00
devmap.c bpf: Fix integer overflow in argument calculation for bpf_map_area_alloc 2021-12-17 10:14:41 +01:00
disasm.c bpf: Introduce BPF nospec instruction for mitigating Spectre v4 2021-08-04 12:46:44 +02:00
disasm.h treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 295 2019-06-05 17:36:38 +02:00
dispatcher.c bpf: Remove bpf_image tree 2020-03-13 12:49:52 -07:00
hashtab.c bpf: Acquire map uref in .init_seq_private for hash map iterator 2022-08-25 11:37:55 +02:00
helpers.c bpf: Fix potentially incorrect results with bpf_get_local_storage() 2021-09-03 10:09:31 +02:00
inode.c bpf: link: Refuse non-O_RDWR flags in BPF_OBJ_GET 2021-04-14 08:42:00 +02:00
local_storage.c bpf: Fix NULL pointer dereference in bpf_get_local_storage() helper 2021-09-03 10:09:21 +02:00
lpm_trie.c bpf: Add map_meta_equal map ops 2020-08-28 15:41:30 +02:00
Makefile bpf: Don't rely on GCC __attribute__((optimize)) to disable GCSE 2020-10-29 20:01:46 -07:00
map_in_map.c bpf: Relax max_entries check for most of the inner map types 2020-08-28 15:41:30 +02:00
map_in_map.h bpf: Add map_meta_equal map ops 2020-08-28 15:41:30 +02:00
map_iter.c bpf: Implement link_query callbacks in map element iterators 2020-08-21 14:01:39 -07:00
net_namespace.c bpf: Add support for forced LINK_DETACH command 2020-08-01 20:38:28 -07:00
offload.c bpf, offload: Replace bitwise AND by logical AND in bpf_prog_offload_info_fill 2020-02-17 16:53:49 +01:00
percpu_freelist.c bpf: Initialize same number of free nodes for each pcpu_freelist 2022-11-25 17:45:45 +01:00
percpu_freelist.h bpf: Use raw_spin_trylock() for pcpu_freelist_push/pop in NMI 2020-10-06 00:04:11 +02:00
prog_iter.c bpf: Refactor bpf_iter_reg to have separate seq_info member 2020-07-25 20:16:32 -07:00
queue_stack_maps.c bpf: Add map_meta_equal map ops 2020-08-28 15:41:30 +02:00
reuseport_array.c bpf, net: Rework cookie generator as per-cpu one 2020-09-30 11:50:35 -07:00
ringbuf.c bpf: Use VM_MAP instead of VM_ALLOC for ringbuf 2022-02-08 18:30:39 +01:00
stackmap.c bpf: Fix incorrect memory charge cost calculation in stack_map_alloc() 2022-06-22 14:13:12 +02:00
syscall.c bpf: Ensure correct locking around vulnerable function find_vpid() 2022-10-26 13:25:21 +02:00
sysfs_btf.c bpf: Fix sysfs export of empty BTF section 2020-09-21 21:50:24 +02:00
task_iter.c bpf: Save correct stopping point in file seq iteration 2021-01-19 18:27:28 +01:00
tnum.c bpf: Verifier, do explicit ALU32 bounds tracking 2020-03-30 14:59:53 -07:00
trampoline.c bpf: Fix potential array overflow in bpf_trampoline_get_progs() 2022-06-06 08:42:45 +02:00
verifier.c bpf: Fix pointer-leak due to insufficient speculative store bypass mitigation 2023-02-01 08:23:11 +01:00