Merge bf4eebf8cf ("Merge tag 'linux-kselftest-kunit-5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest") into android-mainline

Steps on the way to 5.17-rc1

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I1aaabb580bd134941bc1bcbc8761f41d1853344a
commit 75f0f48b26
Author: Greg Kroah-Hartman <gregkh@google.com>
Date:   2022-01-14 07:51:47 +01:00

261 changed files with 6280 additions and 3317 deletions


@ -176,3 +176,9 @@ Contact: Keith Busch <keith.busch@intel.com>
Description:
The cache write policy: 0 for write-back, 1 for write-through,
other or unknown.
What: /sys/devices/system/node/nodeX/x86/sgx_total_bytes
Date: November 2021
Contact: Jarkko Sakkinen <jarkko@kernel.org>
Description:
The total amount of SGX physical memory in bytes.
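For a quick sanity check from user space, the attribute can be read directly (node0 and the value shown are illustrative)::

	$ cat /sys/devices/system/node/node0/x86/sgx_total_bytes
	4294967296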


@ -2944,7 +2944,7 @@
both parameters are enabled, hugetlb_free_vmemmap takes
precedence over memory_hotplug.memmap_on_memory.
memtest= [KNL,X86,ARM,PPC,RISCV] Enable memtest
memtest= [KNL,X86,ARM,M68K,PPC,RISCV] Enable memtest
Format: <integer>
default : 0 <disable>
Specifies the number of memtest passes to be
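For example, a kernel command line requesting four passes would carry (the count is illustrative; any positive integer works, 0 disables)::

	memtest=4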


@ -12,5 +12,4 @@ following sections:
Documentation/dev-tools/kunit/api/test.rst
- documents all of the standard testing API excluding mocking
or mocking related features.
- documents all of the standard testing API


@ -4,8 +4,7 @@
Test API
========
This file documents all of the standard testing API excluding mocking or mocking
related features.
This file documents all of the standard testing API.
.. kernel-doc:: include/kunit/test.h
:internal:


@ -19,7 +19,7 @@ KUnit - Unit Testing for the Linux Kernel
What is KUnit?
==============
KUnit is a lightweight unit testing and mocking framework for the Linux kernel.
KUnit is a lightweight unit testing framework for the Linux kernel.
KUnit is heavily inspired by JUnit, Python's unittest.mock, and
Googletest/Googlemock for C++. KUnit provides facilities for defining unit test


@ -50,10 +50,10 @@ It'll warn you if you haven't included the dependencies of the options you're
using.
.. note::
Note that removing something from the ``.kunitconfig`` will not trigger a
rebuild of the ``.config`` file: the configuration is only updated if the
``.kunitconfig`` is not a subset of ``.config``. This means that you can use
other tools (such as make menuconfig) to adjust other config options.
If you change the ``.kunitconfig``, kunit.py will trigger a rebuild of the
``.config`` file. But you can edit the ``.config`` file directly or with
tools like ``make menuconfig O=.kunit``. As long as it's a superset of
``.kunitconfig``, kunit.py won't overwrite your changes.
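Concretely, the workflow the new note describes looks something like this (a sketch; paths assume the kernel source root and kunit.py's standard invocation)::

	./tools/testing/kunit/kunit.py run   # builds .kunit/.config from .kunitconfig
	make menuconfig O=.kunit             # tweak extra options directly
	./tools/testing/kunit/kunit.py run   # kept, since .config is still a superset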
Running the tests (KUnit Wrapper)


@ -26,6 +26,7 @@ properties:
enum:
- xlnx,zynq-ddrc-a05
- xlnx,zynqmp-ddrc-2.40a
- snps,ddrc-3.80a
interrupts:
maxItems: 1


@ -181,5 +181,24 @@ You should see something like this in dmesg::
[22715.834759] EDAC sbridge MC3: PROCESSOR 0:306e7 TIME 1422553404 SOCKET 0 APIC 0
[22716.616173] EDAC MC3: 1 CE memory read error on CPU_SrcID#0_Channel#0_DIMM#0 (channel:0 slot:0 page:0x12345 offset:0x0 grain:32 syndrome:0x0 - area:DRAM err_code:0001:0090 socket:0 channel_mask:1 rank:0)
Special notes for injection into SGX enclaves:
There may be a separate BIOS setup option to enable SGX injection.
The injection process consists of setting some special memory controller
trigger that will inject the error on the next write to the target
address. But the h/w prevents any software outside of an SGX enclave
from accessing enclave pages (even BIOS SMM mode).
The following sequence can be used:
1) Determine physical address of enclave page
2) Use "notrigger=1" mode to inject (this will setup
the injection address, but will not actually inject)
3) Enter the enclave
4) Store data to the virtual address matching physical address from step 1
5) Execute CLFLUSH for that virtual address
6) Spin delay for 250ms
7) Read from the virtual address. This will trigger the error
For more information about EINJ, please refer to ACPI specification
version 4.0, section 17.5 and ACPI 5.0, section 18.6.
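As a rough sketch of steps 1-2 using the standard EINJ debugfs files (the address is hypothetical, and a separate BIOS option may be needed for SGX)::

	cd /sys/kernel/debug/apei/einj
	echo 0x8 > error_type                # memory correctable error
	echo 0x12345000 > param1             # physical address of the enclave page (hypothetical)
	echo 0xfffffffffffff000 > param2     # address mask
	echo 1 > notrigger                   # arm the injection without triggering it
	echo 1 > error_inject
	# steps 3-7 (enter the enclave, store, CLFLUSH, delay, read) then run inside the enclave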


@ -10,7 +10,7 @@ Overview
Software Guard eXtensions (SGX) hardware enables for user space applications
to set aside private memory regions of code and data:
* Privileged (ring-0) ENCLS functions orchestrate the construction of the.
* Privileged (ring-0) ENCLS functions orchestrate the construction of the
regions.
* Unprivileged (ring-3) ENCLU functions allow an application to enter and
execute inside the regions.
@ -91,7 +91,7 @@ In addition to the traditional compiler and linker build process, SGX has a
separate enclave “build” process. Enclaves must be built before they can be
executed (entered). The first step in building an enclave is opening the
**/dev/sgx_enclave** device. Since enclave memory is protected from direct
access, special privileged instructions are Then used to copy data into enclave
access, special privileged instructions are then used to copy data into enclave
pages and establish enclave page permissions.
.. kernel-doc:: arch/x86/kernel/cpu/sgx/ioctl.c
@ -126,13 +126,13 @@ the need to juggle signal handlers.
ksgxd
=====
SGX support includes a kernel thread called *ksgxwapd*.
SGX support includes a kernel thread called *ksgxd*.
EPC sanitization
----------------
ksgxd is started when SGX initializes. Enclave memory is typically ready
For use when the processor powers on or resets. However, if SGX has been in
for use when the processor powers on or resets. However, if SGX has been in
use since the reset, enclave pages may be in an inconsistent state. This might
occur after a crash and kexec() cycle, for instance. At boot, ksgxd
reinitializes all enclave pages so that they can be allocated and re-used.
@ -147,7 +147,7 @@ Page reclaimer
Similar to the core kswapd, ksgxd, is responsible for managing the
overcommitment of enclave memory. If the system runs out of enclave memory,
*ksgxwapd* “swaps” enclave memory to normal memory.
*ksgxd* “swaps” enclave memory to normal memory.
Launch Control
==============
@ -156,7 +156,7 @@ SGX provides a launch control mechanism. After all enclave pages have been
copied, kernel executes EINIT function, which initializes the enclave. Only after
this the CPU can execute inside the enclave.
ENIT function takes an RSA-3072 signature of the enclave measurement. The function
EINIT function takes an RSA-3072 signature of the enclave measurement. The function
checks that the measurement is correct and signature is signed with the key
hashed to the four **IA32_SGXLEPUBKEYHASH{0, 1, 2, 3}** MSRs representing the
SHA256 of a public key.
@ -184,7 +184,7 @@ CPUs starting from Icelake use Total Memory Encryption (TME) in the place of
MEE. TME-based SGX implementations do not have an integrity Merkle tree, which
means integrity and replay-attacks are not mitigated. But it includes
additional changes to prevent cipher text from being returned and SW memory
aliases from being Created.
aliases from being created.
DMA to enclave memory is blocked by range registers on both MEE and TME systems
(SDM section 41.10).
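For orientation, the first step of the enclave build flow described above looks roughly like this from user space (a minimal sketch; SGX_IOC_ENCLAVE_CREATE and struct sgx_enclave_create are the uapi names, SECS preparation is elided)::

	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <asm/sgx.h>

	int create_enclave(const void *secs)
	{
		struct sgx_enclave_create create = { .src = (unsigned long)secs };
		int fd = open("/dev/sgx_enclave", O_RDWR);

		if (fd < 0)
			return -1;
		/* further ioctls copy data in and set page permissions */
		return ioctl(fd, SGX_IOC_ENCLAVE_CREATE, &create);
	}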


@ -16005,6 +16005,7 @@ F: arch/mips/generic/board-ranchu.c
RANDOM NUMBER DRIVER
M: "Theodore Ts'o" <tytso@mit.edu>
M: Jason A. Donenfeld <Jason@zx2c4.com>
T: git https://git.kernel.org/pub/scm/linux/kernel/git/crng/random.git
S: Maintained
F: drivers/char/random.c


@ -1312,6 +1312,10 @@ config ARCH_HAS_PARANOID_L1D_FLUSH
config DYNAMIC_SIGFRAME
bool
# Select, if arch has a named attribute group bound to NUMA device nodes.
config HAVE_ARCH_NODE_DEV_GROUP
bool
source "kernel/gcov/Kconfig"
source "scripts/gcc-plugins/Kconfig"


@ -535,6 +535,6 @@ do_work_pending(struct pt_regs *regs, unsigned long thread_flags,
}
}
local_irq_disable();
thread_flags = current_thread_info()->flags;
thread_flags = read_thread_flags();
} while (thread_flags & _TIF_WORK_MASK);
}
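This and the analogous conversions below replace open-coded reads of current_thread_info()->flags with the new read_thread_flags() helper, whose 5.17 definition in include/linux/thread_info.h is essentially (shown for reference)::

	static __always_inline unsigned long read_thread_flags(void)
	{
		return READ_ONCE(current_thread_info()->flags);
	}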


@ -10,6 +10,7 @@ obj-$(CONFIG_CRYPTO_SHA1_ARM_NEON) += sha1-arm-neon.o
obj-$(CONFIG_CRYPTO_SHA256_ARM) += sha256-arm.o
obj-$(CONFIG_CRYPTO_SHA512_ARM) += sha512-arm.o
obj-$(CONFIG_CRYPTO_BLAKE2S_ARM) += blake2s-arm.o
obj-$(if $(CONFIG_CRYPTO_BLAKE2S_ARM),y) += libblake2s-arm.o
obj-$(CONFIG_CRYPTO_BLAKE2B_NEON) += blake2b-neon.o
obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha-neon.o
obj-$(CONFIG_CRYPTO_POLY1305_ARM) += poly1305-arm.o
@ -31,7 +32,8 @@ sha256-arm-neon-$(CONFIG_KERNEL_MODE_NEON) := sha256_neon_glue.o
sha256-arm-y := sha256-core.o sha256_glue.o $(sha256-arm-neon-y)
sha512-arm-neon-$(CONFIG_KERNEL_MODE_NEON) := sha512-neon-glue.o
sha512-arm-y := sha512-core.o sha512-glue.o $(sha512-arm-neon-y)
blake2s-arm-y := blake2s-core.o blake2s-glue.o
blake2s-arm-y := blake2s-shash.o
libblake2s-arm-y:= blake2s-core.o blake2s-glue.o
blake2b-neon-y := blake2b-neon-core.o blake2b-neon-glue.o
sha1-arm-ce-y := sha1-ce-core.o sha1-ce-glue.o
sha2-arm-ce-y := sha2-ce-core.o sha2-ce-glue.o


@ -167,8 +167,8 @@
.endm
//
// void blake2s_compress_arch(struct blake2s_state *state,
// const u8 *block, size_t nblocks, u32 inc);
// void blake2s_compress(struct blake2s_state *state,
// const u8 *block, size_t nblocks, u32 inc);
//
// Only the first three fields of struct blake2s_state are used:
// u32 h[8]; (inout)
@ -176,7 +176,7 @@
// u32 f[2]; (in)
//
.align 5
ENTRY(blake2s_compress_arch)
ENTRY(blake2s_compress)
push {r0-r2,r4-r11,lr} // keep this an even number
.Lnext_block:
@ -303,4 +303,4 @@ ENTRY(blake2s_compress_arch)
str r3, [r12], #4
bne 1b
b .Lcopy_block_done
ENDPROC(blake2s_compress_arch)
ENDPROC(blake2s_compress)


@ -1,78 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* BLAKE2s digest algorithm, ARM scalar implementation
*
* Copyright 2020 Google LLC
*/
#include <crypto/internal/blake2s.h>
#include <crypto/internal/hash.h>
#include <linux/module.h>
/* defined in blake2s-core.S */
EXPORT_SYMBOL(blake2s_compress_arch);
static int crypto_blake2s_update_arm(struct shash_desc *desc,
const u8 *in, unsigned int inlen)
{
return crypto_blake2s_update(desc, in, inlen, blake2s_compress_arch);
}
static int crypto_blake2s_final_arm(struct shash_desc *desc, u8 *out)
{
return crypto_blake2s_final(desc, out, blake2s_compress_arch);
}
#define BLAKE2S_ALG(name, driver_name, digest_size) \
{ \
.base.cra_name = name, \
.base.cra_driver_name = driver_name, \
.base.cra_priority = 200, \
.base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, \
.base.cra_blocksize = BLAKE2S_BLOCK_SIZE, \
.base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), \
.base.cra_module = THIS_MODULE, \
.digestsize = digest_size, \
.setkey = crypto_blake2s_setkey, \
.init = crypto_blake2s_init, \
.update = crypto_blake2s_update_arm, \
.final = crypto_blake2s_final_arm, \
.descsize = sizeof(struct blake2s_state), \
}
static struct shash_alg blake2s_arm_algs[] = {
BLAKE2S_ALG("blake2s-128", "blake2s-128-arm", BLAKE2S_128_HASH_SIZE),
BLAKE2S_ALG("blake2s-160", "blake2s-160-arm", BLAKE2S_160_HASH_SIZE),
BLAKE2S_ALG("blake2s-224", "blake2s-224-arm", BLAKE2S_224_HASH_SIZE),
BLAKE2S_ALG("blake2s-256", "blake2s-256-arm", BLAKE2S_256_HASH_SIZE),
};
static int __init blake2s_arm_mod_init(void)
{
return IS_REACHABLE(CONFIG_CRYPTO_HASH) ?
crypto_register_shashes(blake2s_arm_algs,
ARRAY_SIZE(blake2s_arm_algs)) : 0;
}
static void __exit blake2s_arm_mod_exit(void)
{
if (IS_REACHABLE(CONFIG_CRYPTO_HASH))
crypto_unregister_shashes(blake2s_arm_algs,
ARRAY_SIZE(blake2s_arm_algs));
}
module_init(blake2s_arm_mod_init);
module_exit(blake2s_arm_mod_exit);
MODULE_DESCRIPTION("BLAKE2s digest algorithm, ARM scalar implementation");
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
MODULE_ALIAS_CRYPTO("blake2s-128");
MODULE_ALIAS_CRYPTO("blake2s-128-arm");
MODULE_ALIAS_CRYPTO("blake2s-160");
MODULE_ALIAS_CRYPTO("blake2s-160-arm");
MODULE_ALIAS_CRYPTO("blake2s-224");
MODULE_ALIAS_CRYPTO("blake2s-224-arm");
MODULE_ALIAS_CRYPTO("blake2s-256");
MODULE_ALIAS_CRYPTO("blake2s-256-arm");
EXPORT_SYMBOL(blake2s_compress);


@ -0,0 +1,75 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* BLAKE2s digest algorithm, ARM scalar implementation
*
* Copyright 2020 Google LLC
*/
#include <crypto/internal/blake2s.h>
#include <crypto/internal/hash.h>
#include <linux/module.h>
static int crypto_blake2s_update_arm(struct shash_desc *desc,
const u8 *in, unsigned int inlen)
{
return crypto_blake2s_update(desc, in, inlen, blake2s_compress);
}
static int crypto_blake2s_final_arm(struct shash_desc *desc, u8 *out)
{
return crypto_blake2s_final(desc, out, blake2s_compress);
}
#define BLAKE2S_ALG(name, driver_name, digest_size) \
{ \
.base.cra_name = name, \
.base.cra_driver_name = driver_name, \
.base.cra_priority = 200, \
.base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, \
.base.cra_blocksize = BLAKE2S_BLOCK_SIZE, \
.base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), \
.base.cra_module = THIS_MODULE, \
.digestsize = digest_size, \
.setkey = crypto_blake2s_setkey, \
.init = crypto_blake2s_init, \
.update = crypto_blake2s_update_arm, \
.final = crypto_blake2s_final_arm, \
.descsize = sizeof(struct blake2s_state), \
}
static struct shash_alg blake2s_arm_algs[] = {
BLAKE2S_ALG("blake2s-128", "blake2s-128-arm", BLAKE2S_128_HASH_SIZE),
BLAKE2S_ALG("blake2s-160", "blake2s-160-arm", BLAKE2S_160_HASH_SIZE),
BLAKE2S_ALG("blake2s-224", "blake2s-224-arm", BLAKE2S_224_HASH_SIZE),
BLAKE2S_ALG("blake2s-256", "blake2s-256-arm", BLAKE2S_256_HASH_SIZE),
};
static int __init blake2s_arm_mod_init(void)
{
return IS_REACHABLE(CONFIG_CRYPTO_HASH) ?
crypto_register_shashes(blake2s_arm_algs,
ARRAY_SIZE(blake2s_arm_algs)) : 0;
}
static void __exit blake2s_arm_mod_exit(void)
{
if (IS_REACHABLE(CONFIG_CRYPTO_HASH))
crypto_unregister_shashes(blake2s_arm_algs,
ARRAY_SIZE(blake2s_arm_algs));
}
module_init(blake2s_arm_mod_init);
module_exit(blake2s_arm_mod_exit);
MODULE_DESCRIPTION("BLAKE2s digest algorithm, ARM scalar implementation");
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
MODULE_ALIAS_CRYPTO("blake2s-128");
MODULE_ALIAS_CRYPTO("blake2s-128-arm");
MODULE_ALIAS_CRYPTO("blake2s-160");
MODULE_ALIAS_CRYPTO("blake2s-160-arm");
MODULE_ALIAS_CRYPTO("blake2s-224");
MODULE_ALIAS_CRYPTO("blake2s-224-arm");
MODULE_ALIAS_CRYPTO("blake2s-256");
MODULE_ALIAS_CRYPTO("blake2s-256-arm");


@ -631,7 +631,7 @@ do_work_pending(struct pt_regs *regs, unsigned int thread_flags, int syscall)
}
}
local_irq_disable();
thread_flags = current_thread_info()->flags;
thread_flags = read_thread_flags();
} while (thread_flags & _TIF_WORK_MASK);
return 0;
}


@ -990,7 +990,7 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
* there is no work pending for this thread.
*/
raw_local_irq_disable();
if (!(current_thread_info()->flags & _TIF_WORK_MASK))
if (!(read_thread_flags() & _TIF_WORK_MASK))
set_cr(cr_no_alignment);
}


@ -129,7 +129,7 @@ static __always_inline void prepare_exit_to_user_mode(struct pt_regs *regs)
local_daif_mask();
flags = READ_ONCE(current_thread_info()->flags);
flags = read_thread_flags();
if (unlikely(flags & _TIF_WORK_MASK))
do_notify_resume(regs, flags);
}


@ -1839,7 +1839,7 @@ static void tracehook_report_syscall(struct pt_regs *regs,
int syscall_trace_enter(struct pt_regs *regs)
{
unsigned long flags = READ_ONCE(current_thread_info()->flags);
unsigned long flags = read_thread_flags();
if (flags & (_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE)) {
tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER);
@ -1862,7 +1862,7 @@ int syscall_trace_enter(struct pt_regs *regs)
void syscall_trace_exit(struct pt_regs *regs)
{
unsigned long flags = READ_ONCE(current_thread_info()->flags);
unsigned long flags = read_thread_flags();
audit_syscall_exit(regs);


@ -948,7 +948,7 @@ void do_notify_resume(struct pt_regs *regs, unsigned long thread_flags)
}
local_daif_mask();
thread_flags = READ_ONCE(current_thread_info()->flags);
thread_flags = read_thread_flags();
} while (thread_flags & _TIF_WORK_MASK);
}


@ -81,7 +81,7 @@ void syscall_trace_exit(struct pt_regs *regs);
static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
const syscall_fn_t syscall_table[])
{
unsigned long flags = current_thread_info()->flags;
unsigned long flags = read_thread_flags();
regs->orig_x0 = regs->regs[0];
regs->syscallno = scno;
@ -148,7 +148,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
*/
if (!has_syscall_work(flags) && !IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
local_daif_mask();
flags = current_thread_info()->flags;
flags = read_thread_flags();
if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP))
return;
local_daif_restore(DAIF_PROCCTX);


@ -9,6 +9,7 @@ config M68K
select ARCH_HAVE_NMI_SAFE_CMPXCHG if RMW_INSNS
select ARCH_MIGHT_HAVE_PC_PARPORT if ISA
select ARCH_NO_PREEMPT if !COLDFIRE
select ARCH_USE_MEMTEST if MMU_MOTOROLA
select ARCH_WANT_IPC_PARSE_VERSION
select BINFMT_FLAT_ARGVP_ENVP_ON_STACK
select DMA_DIRECT_REMAP if HAS_DMA && MMU && !COLDFIRE


@ -302,7 +302,6 @@ CONFIG_MPLS_ROUTING=m
CONFIG_MPLS_IPTUNNEL=m
CONFIG_NET_NSH=m
CONFIG_AF_KCM=m
CONFIG_MCTP=m
# CONFIG_WIRELESS is not set
CONFIG_PSAMPLE=m
CONFIG_NET_IFE=m
@ -317,7 +316,6 @@ CONFIG_PARPORT_1284=y
CONFIG_AMIGA_FLOPPY=y
CONFIG_AMIGA_Z2RAM=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_DRBD=m
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=y


@ -298,7 +298,6 @@ CONFIG_MPLS_ROUTING=m
CONFIG_MPLS_IPTUNNEL=m
CONFIG_NET_NSH=m
CONFIG_AF_KCM=m
CONFIG_MCTP=m
# CONFIG_WIRELESS is not set
CONFIG_PSAMPLE=m
CONFIG_NET_IFE=m
@ -307,7 +306,6 @@ CONFIG_DEVTMPFS_MOUNT=y
CONFIG_TEST_ASYNC_DRIVER_PROBE=m
CONFIG_CONNECTOR=m
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_DRBD=m
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=y


@ -305,7 +305,6 @@ CONFIG_MPLS_ROUTING=m
CONFIG_MPLS_IPTUNNEL=m
CONFIG_NET_NSH=m
CONFIG_AF_KCM=m
CONFIG_MCTP=m
# CONFIG_WIRELESS is not set
CONFIG_PSAMPLE=m
CONFIG_NET_IFE=m
@ -318,7 +317,6 @@ CONFIG_PARPORT_ATARI=m
CONFIG_PARPORT_1284=y
CONFIG_ATARI_FLOPPY=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_DRBD=m
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=y


@ -295,7 +295,6 @@ CONFIG_MPLS_ROUTING=m
CONFIG_MPLS_IPTUNNEL=m
CONFIG_NET_NSH=m
CONFIG_AF_KCM=m
CONFIG_MCTP=m
# CONFIG_WIRELESS is not set
CONFIG_PSAMPLE=m
CONFIG_NET_IFE=m
@ -304,7 +303,6 @@ CONFIG_DEVTMPFS_MOUNT=y
CONFIG_TEST_ASYNC_DRIVER_PROBE=m
CONFIG_CONNECTOR=m
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_DRBD=m
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=y


@ -297,7 +297,6 @@ CONFIG_MPLS_ROUTING=m
CONFIG_MPLS_IPTUNNEL=m
CONFIG_NET_NSH=m
CONFIG_AF_KCM=m
CONFIG_MCTP=m
# CONFIG_WIRELESS is not set
CONFIG_PSAMPLE=m
CONFIG_NET_IFE=m
@ -306,7 +305,6 @@ CONFIG_DEVTMPFS_MOUNT=y
CONFIG_TEST_ASYNC_DRIVER_PROBE=m
CONFIG_CONNECTOR=m
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_DRBD=m
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=y


@ -299,7 +299,6 @@ CONFIG_MPLS_ROUTING=m
CONFIG_MPLS_IPTUNNEL=m
CONFIG_NET_NSH=m
CONFIG_AF_KCM=m
CONFIG_MCTP=m
# CONFIG_WIRELESS is not set
CONFIG_PSAMPLE=m
CONFIG_NET_IFE=m
@ -309,7 +308,6 @@ CONFIG_TEST_ASYNC_DRIVER_PROBE=m
CONFIG_CONNECTOR=m
CONFIG_BLK_DEV_SWIM=m
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_DRBD=m
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=y


@ -319,7 +319,6 @@ CONFIG_MPLS_ROUTING=m
CONFIG_MPLS_IPTUNNEL=m
CONFIG_NET_NSH=m
CONFIG_AF_KCM=m
CONFIG_MCTP=m
# CONFIG_WIRELESS is not set
CONFIG_PSAMPLE=m
CONFIG_NET_IFE=m
@ -338,7 +337,6 @@ CONFIG_ATARI_FLOPPY=y
CONFIG_BLK_DEV_SWIM=m
CONFIG_AMIGA_Z2RAM=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_DRBD=m
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=y


@ -294,7 +294,6 @@ CONFIG_MPLS_ROUTING=m
CONFIG_MPLS_IPTUNNEL=m
CONFIG_NET_NSH=m
CONFIG_AF_KCM=m
CONFIG_MCTP=m
# CONFIG_WIRELESS is not set
CONFIG_PSAMPLE=m
CONFIG_NET_IFE=m
@ -303,7 +302,6 @@ CONFIG_DEVTMPFS_MOUNT=y
CONFIG_TEST_ASYNC_DRIVER_PROBE=m
CONFIG_CONNECTOR=m
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_DRBD=m
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=y


@ -295,7 +295,6 @@ CONFIG_MPLS_ROUTING=m
CONFIG_MPLS_IPTUNNEL=m
CONFIG_NET_NSH=m
CONFIG_AF_KCM=m
CONFIG_MCTP=m
# CONFIG_WIRELESS is not set
CONFIG_PSAMPLE=m
CONFIG_NET_IFE=m
@ -304,7 +303,6 @@ CONFIG_DEVTMPFS_MOUNT=y
CONFIG_TEST_ASYNC_DRIVER_PROBE=m
CONFIG_CONNECTOR=m
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_DRBD=m
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=y


@ -296,7 +296,6 @@ CONFIG_MPLS_ROUTING=m
CONFIG_MPLS_IPTUNNEL=m
CONFIG_NET_NSH=m
CONFIG_AF_KCM=m
CONFIG_MCTP=m
# CONFIG_WIRELESS is not set
CONFIG_PSAMPLE=m
CONFIG_NET_IFE=m
@ -308,7 +307,6 @@ CONFIG_PARPORT=m
CONFIG_PARPORT_PC=m
CONFIG_PARPORT_1284=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_DRBD=m
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=y


@ -292,7 +292,6 @@ CONFIG_MPLS_ROUTING=m
CONFIG_MPLS_IPTUNNEL=m
CONFIG_NET_NSH=m
CONFIG_AF_KCM=m
CONFIG_MCTP=m
# CONFIG_WIRELESS is not set
CONFIG_PSAMPLE=m
CONFIG_NET_IFE=m
@ -301,7 +300,6 @@ CONFIG_DEVTMPFS_MOUNT=y
CONFIG_TEST_ASYNC_DRIVER_PROBE=m
CONFIG_CONNECTOR=m
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_DRBD=m
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=y


@ -292,7 +292,6 @@ CONFIG_MPLS_ROUTING=m
CONFIG_MPLS_IPTUNNEL=m
CONFIG_NET_NSH=m
CONFIG_AF_KCM=m
CONFIG_MCTP=m
# CONFIG_WIRELESS is not set
CONFIG_PSAMPLE=m
CONFIG_NET_IFE=m
@ -301,7 +300,6 @@ CONFIG_DEVTMPFS_MOUNT=y
CONFIG_TEST_ASYNC_DRIVER_PROBE=m
CONFIG_CONNECTOR=m
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_DRBD=m
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=y


@ -338,13 +338,6 @@ void __init setup_arch(char **cmdline_p)
panic("No configuration setup");
}
paging_init();
#ifdef CONFIG_NATFEAT
nf_init();
#endif
#ifndef CONFIG_SUN3
#ifdef CONFIG_BLK_DEV_INITRD
if (m68k_ramdisk.size) {
memblock_reserve(m68k_ramdisk.addr, m68k_ramdisk.size);
@ -354,6 +347,12 @@ void __init setup_arch(char **cmdline_p)
}
#endif
paging_init();
#ifdef CONFIG_NATFEAT
nf_init();
#endif
#ifdef CONFIG_ATARI
if (MACH_IS_ATARI)
atari_stram_reserve_pages((void *)availmem);
@ -364,8 +363,6 @@ void __init setup_arch(char **cmdline_p)
}
#endif
#endif /* !CONFIG_SUN3 */
/* set ISA defs early as possible */
#if defined(CONFIG_ISA) && defined(MULTI_ISA)
if (MACH_IS_Q40) {


@ -457,6 +457,8 @@ void __init paging_init(void)
flush_tlb_all();
early_memtest(min_addr, max_addr);
/*
* initialize the bad page table and bad page to point
* to a couple of allocated pages


@ -283,7 +283,7 @@ static void do_signal(struct pt_regs *regs, int in_syscall)
#ifdef DEBUG_SIG
pr_info("do signal: %p %d\n", regs, in_syscall);
pr_info("do signal2: %lx %lx %ld [%lx]\n", regs->pc, regs->r1,
regs->r12, current_thread_info()->flags);
regs->r12, read_thread_flags());
#endif
if (get_signal(&ksig)) {


@ -313,7 +313,7 @@ do_work_pending(struct pt_regs *regs, unsigned int thread_flags, int syscall)
}
}
local_irq_disable();
thread_flags = current_thread_info()->flags;
thread_flags = read_thread_flags();
} while (thread_flags & _TIF_WORK_MASK);
return 0;
}


@ -148,7 +148,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
*/
if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) &&
unlikely(MSR_TM_TRANSACTIONAL(regs->msr)))
current_thread_info()->flags |= _TIF_RESTOREALL;
set_bits(_TIF_RESTOREALL, &current_thread_info()->flags);
/*
* If the system call was made with a transaction active, doom it and
@ -181,7 +181,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
local_irq_enable();
if (unlikely(current_thread_info()->flags & _TIF_SYSCALL_DOTRACE)) {
if (unlikely(read_thread_flags() & _TIF_SYSCALL_DOTRACE)) {
if (unlikely(trap_is_unsupported_scv(regs))) {
/* Unsupported scv vector */
_exception(SIGILL, regs, ILL_ILLOPC, regs->nip);
@ -343,7 +343,7 @@ interrupt_exit_user_prepare_main(unsigned long ret, struct pt_regs *regs)
unsigned long ti_flags;
again:
ti_flags = READ_ONCE(current_thread_info()->flags);
ti_flags = read_thread_flags();
while (unlikely(ti_flags & (_TIF_USER_WORK_MASK & ~_TIF_RESTORE_TM))) {
local_irq_enable();
if (ti_flags & _TIF_NEED_RESCHED) {
@ -359,7 +359,7 @@ interrupt_exit_user_prepare_main(unsigned long ret, struct pt_regs *regs)
do_notify_resume(regs, ti_flags);
}
local_irq_disable();
ti_flags = READ_ONCE(current_thread_info()->flags);
ti_flags = read_thread_flags();
}
if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && IS_ENABLED(CONFIG_PPC_FPU)) {
@ -437,7 +437,7 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
/* Check whether the syscall is issued inside a restartable sequence */
rseq_syscall(regs);
ti_flags = current_thread_info()->flags;
ti_flags = read_thread_flags();
if (unlikely(r3 >= (unsigned long)-MAX_ERRNO) && is_not_scv) {
if (likely(!(ti_flags & (_TIF_NOERROR | _TIF_RESTOREALL)))) {
@ -532,8 +532,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs)
unsigned long flags;
unsigned long ret = 0;
unsigned long kuap;
bool stack_store = current_thread_info()->flags &
_TIF_EMULATE_STACK_STORE;
bool stack_store = read_thread_flags() & _TIF_EMULATE_STACK_STORE;
if (regs_is_unrecoverable(regs))
unrecoverable_exception(regs);
@ -554,7 +553,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs)
again:
if (IS_ENABLED(CONFIG_PREEMPT)) {
/* Return to preemptible kernel context */
if (unlikely(current_thread_info()->flags & _TIF_NEED_RESCHED)) {
if (unlikely(read_thread_flags() & _TIF_NEED_RESCHED)) {
if (preempt_count() == 0)
preempt_schedule_irq();
}


@ -260,8 +260,7 @@ long do_syscall_trace_enter(struct pt_regs *regs)
{
u32 flags;
flags = READ_ONCE(current_thread_info()->flags) &
(_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE);
flags = read_thread_flags() & (_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE);
if (flags) {
int rc = tracehook_report_syscall_entry(regs);


@ -770,6 +770,7 @@ CONFIG_CRYPTO_SHA3_256_S390=m
CONFIG_CRYPTO_SHA3_512_S390=m
CONFIG_CRYPTO_DES_S390=m
CONFIG_CRYPTO_AES_S390=m
CONFIG_CRYPTO_CHACHA_S390=m
CONFIG_CRYPTO_GHASH_S390=m
CONFIG_CRYPTO_CRC32_S390=y
CONFIG_CRYPTO_DEV_VIRTIO=m


@ -757,6 +757,7 @@ CONFIG_CRYPTO_SHA3_256_S390=m
CONFIG_CRYPTO_SHA3_512_S390=m
CONFIG_CRYPTO_DES_S390=m
CONFIG_CRYPTO_AES_S390=m
CONFIG_CRYPTO_CHACHA_S390=m
CONFIG_CRYPTO_GHASH_S390=m
CONFIG_CRYPTO_CRC32_S390=y
CONFIG_CRYPTO_DEV_VIRTIO=m


@ -11,9 +11,11 @@ obj-$(CONFIG_CRYPTO_SHA3_512_S390) += sha3_512_s390.o sha_common.o
obj-$(CONFIG_CRYPTO_DES_S390) += des_s390.o
obj-$(CONFIG_CRYPTO_AES_S390) += aes_s390.o
obj-$(CONFIG_CRYPTO_PAES_S390) += paes_s390.o
obj-$(CONFIG_CRYPTO_CHACHA_S390) += chacha_s390.o
obj-$(CONFIG_S390_PRNG) += prng.o
obj-$(CONFIG_CRYPTO_GHASH_S390) += ghash_s390.o
obj-$(CONFIG_CRYPTO_CRC32_S390) += crc32-vx_s390.o
obj-$(CONFIG_ARCH_RANDOM) += arch_random.o
crc32-vx_s390-y := crc32-vx.o crc32le-vx.o crc32be-vx.o
chacha_s390-y := chacha-glue.o chacha-s390.o


@ -0,0 +1,100 @@
// SPDX-License-Identifier: GPL-2.0
/*
* s390 ChaCha stream cipher.
*
* Copyright IBM Corp. 2021
*/
#define KMSG_COMPONENT "chacha_s390"
#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
#include <crypto/internal/chacha.h>
#include <crypto/internal/skcipher.h>
#include <crypto/algapi.h>
#include <linux/cpufeature.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/sizes.h>
#include <asm/fpu/api.h>
#include "chacha-s390.h"
static void chacha20_crypt_s390(u32 *state, u8 *dst, const u8 *src,
unsigned int nbytes, const u32 *key,
u32 *counter)
{
struct kernel_fpu vxstate;
kernel_fpu_begin(&vxstate, KERNEL_VXR);
chacha20_vx(dst, src, nbytes, key, counter);
kernel_fpu_end(&vxstate, KERNEL_VXR);
*counter += round_up(nbytes, CHACHA_BLOCK_SIZE) / CHACHA_BLOCK_SIZE;
}
static int chacha20_s390(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
u32 state[CHACHA_STATE_WORDS] __aligned(16);
struct skcipher_walk walk;
unsigned int nbytes;
int rc;
rc = skcipher_walk_virt(&walk, req, false);
chacha_init_generic(state, ctx->key, req->iv);
while (walk.nbytes > 0) {
nbytes = walk.nbytes;
if (nbytes < walk.total)
nbytes = round_down(nbytes, walk.stride);
if (nbytes <= CHACHA_BLOCK_SIZE) {
chacha_crypt_generic(state, walk.dst.virt.addr,
walk.src.virt.addr, nbytes,
ctx->nrounds);
} else {
chacha20_crypt_s390(state, walk.dst.virt.addr,
walk.src.virt.addr, nbytes,
&state[4], &state[12]);
}
rc = skcipher_walk_done(&walk, walk.nbytes - nbytes);
}
return rc;
}
static struct skcipher_alg chacha_algs[] = {
{
.base.cra_name = "chacha20",
.base.cra_driver_name = "chacha20-s390",
.base.cra_priority = 900,
.base.cra_blocksize = 1,
.base.cra_ctxsize = sizeof(struct chacha_ctx),
.base.cra_module = THIS_MODULE,
.min_keysize = CHACHA_KEY_SIZE,
.max_keysize = CHACHA_KEY_SIZE,
.ivsize = CHACHA_IV_SIZE,
.chunksize = CHACHA_BLOCK_SIZE,
.setkey = chacha20_setkey,
.encrypt = chacha20_s390,
.decrypt = chacha20_s390,
}
};
static int __init chacha_mod_init(void)
{
return crypto_register_skciphers(chacha_algs, ARRAY_SIZE(chacha_algs));
}
static void __exit chacha_mod_fini(void)
{
crypto_unregister_skciphers(chacha_algs, ARRAY_SIZE(chacha_algs));
}
module_cpu_feature_match(VXRS, chacha_mod_init);
module_exit(chacha_mod_fini);
MODULE_DESCRIPTION("ChaCha20 stream cipher");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS_CRYPTO("chacha20");


@ -0,0 +1,907 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Original implementation written by Andy Polyakov, @dot-asm.
* This is an adaptation of the original code for kernel use.
*
* Copyright (C) 2006-2019 CRYPTOGAMS by <appro@openssl.org>. All Rights Reserved.
*/
#include <linux/linkage.h>
#include <asm/nospec-insn.h>
#include <asm/vx-insn.h>
#define SP %r15
#define FRAME (16 * 8 + 4 * 8)
.data
.align 32
.Lsigma:
.long 0x61707865,0x3320646e,0x79622d32,0x6b206574 # endian-neutral
.long 1,0,0,0
.long 2,0,0,0
.long 3,0,0,0
.long 0x03020100,0x07060504,0x0b0a0908,0x0f0e0d0c # byte swap
.long 0,1,2,3
.long 0x61707865,0x61707865,0x61707865,0x61707865 # smashed sigma
.long 0x3320646e,0x3320646e,0x3320646e,0x3320646e
.long 0x79622d32,0x79622d32,0x79622d32,0x79622d32
.long 0x6b206574,0x6b206574,0x6b206574,0x6b206574
.previous
GEN_BR_THUNK %r14
.text
#############################################################################
# void chacha20_vx_4x(u8 *out, const u8 *inp, size_t len,
# const u32 *key, const u32 *counter)
#define OUT %r2
#define INP %r3
#define LEN %r4
#define KEY %r5
#define COUNTER %r6
#define BEPERM %v31
#define CTR %v26
#define K0 %v16
#define K1 %v17
#define K2 %v18
#define K3 %v19
#define XA0 %v0
#define XA1 %v1
#define XA2 %v2
#define XA3 %v3
#define XB0 %v4
#define XB1 %v5
#define XB2 %v6
#define XB3 %v7
#define XC0 %v8
#define XC1 %v9
#define XC2 %v10
#define XC3 %v11
#define XD0 %v12
#define XD1 %v13
#define XD2 %v14
#define XD3 %v15
#define XT0 %v27
#define XT1 %v28
#define XT2 %v29
#define XT3 %v30
ENTRY(chacha20_vx_4x)
stmg %r6,%r7,6*8(SP)
larl %r7,.Lsigma
lhi %r0,10
lhi %r1,0
VL K0,0,,%r7 # load sigma
VL K1,0,,KEY # load key
VL K2,16,,KEY
VL K3,0,,COUNTER # load counter
VL BEPERM,0x40,,%r7
VL CTR,0x50,,%r7
VLM XA0,XA3,0x60,%r7,4 # load [smashed] sigma
VREPF XB0,K1,0 # smash the key
VREPF XB1,K1,1
VREPF XB2,K1,2
VREPF XB3,K1,3
VREPF XD0,K3,0
VREPF XD1,K3,1
VREPF XD2,K3,2
VREPF XD3,K3,3
VAF XD0,XD0,CTR
VREPF XC0,K2,0
VREPF XC1,K2,1
VREPF XC2,K2,2
VREPF XC3,K2,3
.Loop_4x:
VAF XA0,XA0,XB0
VX XD0,XD0,XA0
VERLLF XD0,XD0,16
VAF XA1,XA1,XB1
VX XD1,XD1,XA1
VERLLF XD1,XD1,16
VAF XA2,XA2,XB2
VX XD2,XD2,XA2
VERLLF XD2,XD2,16
VAF XA3,XA3,XB3
VX XD3,XD3,XA3
VERLLF XD3,XD3,16
VAF XC0,XC0,XD0
VX XB0,XB0,XC0
VERLLF XB0,XB0,12
VAF XC1,XC1,XD1
VX XB1,XB1,XC1
VERLLF XB1,XB1,12
VAF XC2,XC2,XD2
VX XB2,XB2,XC2
VERLLF XB2,XB2,12
VAF XC3,XC3,XD3
VX XB3,XB3,XC3
VERLLF XB3,XB3,12
VAF XA0,XA0,XB0
VX XD0,XD0,XA0
VERLLF XD0,XD0,8
VAF XA1,XA1,XB1
VX XD1,XD1,XA1
VERLLF XD1,XD1,8
VAF XA2,XA2,XB2
VX XD2,XD2,XA2
VERLLF XD2,XD2,8
VAF XA3,XA3,XB3
VX XD3,XD3,XA3
VERLLF XD3,XD3,8
VAF XC0,XC0,XD0
VX XB0,XB0,XC0
VERLLF XB0,XB0,7
VAF XC1,XC1,XD1
VX XB1,XB1,XC1
VERLLF XB1,XB1,7
VAF XC2,XC2,XD2
VX XB2,XB2,XC2
VERLLF XB2,XB2,7
VAF XC3,XC3,XD3
VX XB3,XB3,XC3
VERLLF XB3,XB3,7
VAF XA0,XA0,XB1
VX XD3,XD3,XA0
VERLLF XD3,XD3,16
VAF XA1,XA1,XB2
VX XD0,XD0,XA1
VERLLF XD0,XD0,16
VAF XA2,XA2,XB3
VX XD1,XD1,XA2
VERLLF XD1,XD1,16
VAF XA3,XA3,XB0
VX XD2,XD2,XA3
VERLLF XD2,XD2,16
VAF XC2,XC2,XD3
VX XB1,XB1,XC2
VERLLF XB1,XB1,12
VAF XC3,XC3,XD0
VX XB2,XB2,XC3
VERLLF XB2,XB2,12
VAF XC0,XC0,XD1
VX XB3,XB3,XC0
VERLLF XB3,XB3,12
VAF XC1,XC1,XD2
VX XB0,XB0,XC1
VERLLF XB0,XB0,12
VAF XA0,XA0,XB1
VX XD3,XD3,XA0
VERLLF XD3,XD3,8
VAF XA1,XA1,XB2
VX XD0,XD0,XA1
VERLLF XD0,XD0,8
VAF XA2,XA2,XB3
VX XD1,XD1,XA2
VERLLF XD1,XD1,8
VAF XA3,XA3,XB0
VX XD2,XD2,XA3
VERLLF XD2,XD2,8
VAF XC2,XC2,XD3
VX XB1,XB1,XC2
VERLLF XB1,XB1,7
VAF XC3,XC3,XD0
VX XB2,XB2,XC3
VERLLF XB2,XB2,7
VAF XC0,XC0,XD1
VX XB3,XB3,XC0
VERLLF XB3,XB3,7
VAF XC1,XC1,XD2
VX XB0,XB0,XC1
VERLLF XB0,XB0,7
brct %r0,.Loop_4x
VAF XD0,XD0,CTR
VMRHF XT0,XA0,XA1 # transpose data
VMRHF XT1,XA2,XA3
VMRLF XT2,XA0,XA1
VMRLF XT3,XA2,XA3
VPDI XA0,XT0,XT1,0b0000
VPDI XA1,XT0,XT1,0b0101
VPDI XA2,XT2,XT3,0b0000
VPDI XA3,XT2,XT3,0b0101
VMRHF XT0,XB0,XB1
VMRHF XT1,XB2,XB3
VMRLF XT2,XB0,XB1
VMRLF XT3,XB2,XB3
VPDI XB0,XT0,XT1,0b0000
VPDI XB1,XT0,XT1,0b0101
VPDI XB2,XT2,XT3,0b0000
VPDI XB3,XT2,XT3,0b0101
VMRHF XT0,XC0,XC1
VMRHF XT1,XC2,XC3
VMRLF XT2,XC0,XC1
VMRLF XT3,XC2,XC3
VPDI XC0,XT0,XT1,0b0000
VPDI XC1,XT0,XT1,0b0101
VPDI XC2,XT2,XT3,0b0000
VPDI XC3,XT2,XT3,0b0101
VMRHF XT0,XD0,XD1
VMRHF XT1,XD2,XD3
VMRLF XT2,XD0,XD1
VMRLF XT3,XD2,XD3
VPDI XD0,XT0,XT1,0b0000
VPDI XD1,XT0,XT1,0b0101
VPDI XD2,XT2,XT3,0b0000
VPDI XD3,XT2,XT3,0b0101
VAF XA0,XA0,K0
VAF XB0,XB0,K1
VAF XC0,XC0,K2
VAF XD0,XD0,K3
VPERM XA0,XA0,XA0,BEPERM
VPERM XB0,XB0,XB0,BEPERM
VPERM XC0,XC0,XC0,BEPERM
VPERM XD0,XD0,XD0,BEPERM
VLM XT0,XT3,0,INP,0
VX XT0,XT0,XA0
VX XT1,XT1,XB0
VX XT2,XT2,XC0
VX XT3,XT3,XD0
VSTM XT0,XT3,0,OUT,0
la INP,0x40(INP)
la OUT,0x40(OUT)
aghi LEN,-0x40
VAF XA0,XA1,K0
VAF XB0,XB1,K1
VAF XC0,XC1,K2
VAF XD0,XD1,K3
VPERM XA0,XA0,XA0,BEPERM
VPERM XB0,XB0,XB0,BEPERM
VPERM XC0,XC0,XC0,BEPERM
VPERM XD0,XD0,XD0,BEPERM
.insn rilu,0xc20e00000000,LEN,0x40 # clgfi LEN,0x40
jl .Ltail_4x
VLM XT0,XT3,0,INP,0
VX XT0,XT0,XA0
VX XT1,XT1,XB0
VX XT2,XT2,XC0
VX XT3,XT3,XD0
VSTM XT0,XT3,0,OUT,0
la INP,0x40(INP)
la OUT,0x40(OUT)
aghi LEN,-0x40
je .Ldone_4x
VAF XA0,XA2,K0
VAF XB0,XB2,K1
VAF XC0,XC2,K2
VAF XD0,XD2,K3
VPERM XA0,XA0,XA0,BEPERM
VPERM XB0,XB0,XB0,BEPERM
VPERM XC0,XC0,XC0,BEPERM
VPERM XD0,XD0,XD0,BEPERM
.insn rilu,0xc20e00000000,LEN,0x40 # clgfi LEN,0x40
jl .Ltail_4x
VLM XT0,XT3,0,INP,0
VX XT0,XT0,XA0
VX XT1,XT1,XB0
VX XT2,XT2,XC0
VX XT3,XT3,XD0
VSTM XT0,XT3,0,OUT,0
la INP,0x40(INP)
la OUT,0x40(OUT)
aghi LEN,-0x40
je .Ldone_4x
VAF XA0,XA3,K0
VAF XB0,XB3,K1
VAF XC0,XC3,K2
VAF XD0,XD3,K3
VPERM XA0,XA0,XA0,BEPERM
VPERM XB0,XB0,XB0,BEPERM
VPERM XC0,XC0,XC0,BEPERM
VPERM XD0,XD0,XD0,BEPERM
.insn rilu,0xc20e00000000,LEN,0x40 # clgfi LEN,0x40
jl .Ltail_4x
VLM XT0,XT3,0,INP,0
VX XT0,XT0,XA0
VX XT1,XT1,XB0
VX XT2,XT2,XC0
VX XT3,XT3,XD0
VSTM XT0,XT3,0,OUT,0
.Ldone_4x:
lmg %r6,%r7,6*8(SP)
BR_EX %r14
.Ltail_4x:
VLR XT0,XC0
VLR XT1,XD0
VST XA0,8*8+0x00,,SP
VST XB0,8*8+0x10,,SP
VST XT0,8*8+0x20,,SP
VST XT1,8*8+0x30,,SP
lghi %r1,0
.Loop_tail_4x:
llgc %r5,0(%r1,INP)
llgc %r6,8*8(%r1,SP)
xr %r6,%r5
stc %r6,0(%r1,OUT)
la %r1,1(%r1)
brct LEN,.Loop_tail_4x
lmg %r6,%r7,6*8(SP)
BR_EX %r14
ENDPROC(chacha20_vx_4x)
#undef OUT
#undef INP
#undef LEN
#undef KEY
#undef COUNTER
#undef BEPERM
#undef K0
#undef K1
#undef K2
#undef K3
#############################################################################
# void chacha20_vx(u8 *out, const u8 *inp, size_t len,
# const u32 *key, const u32 *counter)
#define OUT %r2
#define INP %r3
#define LEN %r4
#define KEY %r5
#define COUNTER %r6
#define BEPERM %v31
#define K0 %v27
#define K1 %v24
#define K2 %v25
#define K3 %v26
#define A0 %v0
#define B0 %v1
#define C0 %v2
#define D0 %v3
#define A1 %v4
#define B1 %v5
#define C1 %v6
#define D1 %v7
#define A2 %v8
#define B2 %v9
#define C2 %v10
#define D2 %v11
#define A3 %v12
#define B3 %v13
#define C3 %v14
#define D3 %v15
#define A4 %v16
#define B4 %v17
#define C4 %v18
#define D4 %v19
#define A5 %v20
#define B5 %v21
#define C5 %v22
#define D5 %v23
#define T0 %v27
#define T1 %v28
#define T2 %v29
#define T3 %v30
ENTRY(chacha20_vx)
.insn rilu,0xc20e00000000,LEN,256 # clgfi LEN,256
jle chacha20_vx_4x
stmg %r6,%r7,6*8(SP)
lghi %r1,-FRAME
lgr %r0,SP
la SP,0(%r1,SP)
stg %r0,0(SP) # back-chain
larl %r7,.Lsigma
lhi %r0,10
VLM K1,K2,0,KEY,0 # load key
VL K3,0,,COUNTER # load counter
VLM K0,BEPERM,0,%r7,4 # load sigma, increments, ...
.Loop_outer_vx:
VLR A0,K0
VLR B0,K1
VLR A1,K0
VLR B1,K1
VLR A2,K0
VLR B2,K1
VLR A3,K0
VLR B3,K1
VLR A4,K0
VLR B4,K1
VLR A5,K0
VLR B5,K1
VLR D0,K3
VAF D1,K3,T1 # K[3]+1
VAF D2,K3,T2 # K[3]+2
VAF D3,K3,T3 # K[3]+3
VAF D4,D2,T2 # K[3]+4
VAF D5,D2,T3 # K[3]+5
VLR C0,K2
VLR C1,K2
VLR C2,K2
VLR C3,K2
VLR C4,K2
VLR C5,K2
VLR T1,D1
VLR T2,D2
VLR T3,D3
.Loop_vx:
VAF A0,A0,B0
VAF A1,A1,B1
VAF A2,A2,B2
VAF A3,A3,B3
VAF A4,A4,B4
VAF A5,A5,B5
VX D0,D0,A0
VX D1,D1,A1
VX D2,D2,A2
VX D3,D3,A3
VX D4,D4,A4
VX D5,D5,A5
VERLLF D0,D0,16
VERLLF D1,D1,16
VERLLF D2,D2,16
VERLLF D3,D3,16
VERLLF D4,D4,16
VERLLF D5,D5,16
VAF C0,C0,D0
VAF C1,C1,D1
VAF C2,C2,D2
VAF C3,C3,D3
VAF C4,C4,D4
VAF C5,C5,D5
VX B0,B0,C0
VX B1,B1,C1
VX B2,B2,C2
VX B3,B3,C3
VX B4,B4,C4
VX B5,B5,C5
VERLLF B0,B0,12
VERLLF B1,B1,12
VERLLF B2,B2,12
VERLLF B3,B3,12
VERLLF B4,B4,12
VERLLF B5,B5,12
VAF A0,A0,B0
VAF A1,A1,B1
VAF A2,A2,B2
VAF A3,A3,B3
VAF A4,A4,B4
VAF A5,A5,B5
VX D0,D0,A0
VX D1,D1,A1
VX D2,D2,A2
VX D3,D3,A3
VX D4,D4,A4
VX D5,D5,A5
VERLLF D0,D0,8
VERLLF D1,D1,8
VERLLF D2,D2,8
VERLLF D3,D3,8
VERLLF D4,D4,8
VERLLF D5,D5,8
VAF C0,C0,D0
VAF C1,C1,D1
VAF C2,C2,D2
VAF C3,C3,D3
VAF C4,C4,D4
VAF C5,C5,D5
VX B0,B0,C0
VX B1,B1,C1
VX B2,B2,C2
VX B3,B3,C3
VX B4,B4,C4
VX B5,B5,C5
VERLLF B0,B0,7
VERLLF B1,B1,7
VERLLF B2,B2,7
VERLLF B3,B3,7
VERLLF B4,B4,7
VERLLF B5,B5,7
VSLDB C0,C0,C0,8
VSLDB C1,C1,C1,8
VSLDB C2,C2,C2,8
VSLDB C3,C3,C3,8
VSLDB C4,C4,C4,8
VSLDB C5,C5,C5,8
VSLDB B0,B0,B0,4
VSLDB B1,B1,B1,4
VSLDB B2,B2,B2,4
VSLDB B3,B3,B3,4
VSLDB B4,B4,B4,4
VSLDB B5,B5,B5,4
VSLDB D0,D0,D0,12
VSLDB D1,D1,D1,12
VSLDB D2,D2,D2,12
VSLDB D3,D3,D3,12
VSLDB D4,D4,D4,12
VSLDB D5,D5,D5,12
VAF A0,A0,B0
VAF A1,A1,B1
VAF A2,A2,B2
VAF A3,A3,B3
VAF A4,A4,B4
VAF A5,A5,B5
VX D0,D0,A0
VX D1,D1,A1
VX D2,D2,A2
VX D3,D3,A3
VX D4,D4,A4
VX D5,D5,A5
VERLLF D0,D0,16
VERLLF D1,D1,16
VERLLF D2,D2,16
VERLLF D3,D3,16
VERLLF D4,D4,16
VERLLF D5,D5,16
VAF C0,C0,D0
VAF C1,C1,D1
VAF C2,C2,D2
VAF C3,C3,D3
VAF C4,C4,D4
VAF C5,C5,D5
VX B0,B0,C0
VX B1,B1,C1
VX B2,B2,C2
VX B3,B3,C3
VX B4,B4,C4
VX B5,B5,C5
VERLLF B0,B0,12
VERLLF B1,B1,12
VERLLF B2,B2,12
VERLLF B3,B3,12
VERLLF B4,B4,12
VERLLF B5,B5,12
VAF A0,A0,B0
VAF A1,A1,B1
VAF A2,A2,B2
VAF A3,A3,B3
VAF A4,A4,B4
VAF A5,A5,B5
VX D0,D0,A0
VX D1,D1,A1
VX D2,D2,A2
VX D3,D3,A3
VX D4,D4,A4
VX D5,D5,A5
VERLLF D0,D0,8
VERLLF D1,D1,8
VERLLF D2,D2,8
VERLLF D3,D3,8
VERLLF D4,D4,8
VERLLF D5,D5,8
VAF C0,C0,D0
VAF C1,C1,D1
VAF C2,C2,D2
VAF C3,C3,D3
VAF C4,C4,D4
VAF C5,C5,D5
VX B0,B0,C0
VX B1,B1,C1
VX B2,B2,C2
VX B3,B3,C3
VX B4,B4,C4
VX B5,B5,C5
VERLLF B0,B0,7
VERLLF B1,B1,7
VERLLF B2,B2,7
VERLLF B3,B3,7
VERLLF B4,B4,7
VERLLF B5,B5,7
VSLDB C0,C0,C0,8
VSLDB C1,C1,C1,8
VSLDB C2,C2,C2,8
VSLDB C3,C3,C3,8
VSLDB C4,C4,C4,8
VSLDB C5,C5,C5,8
VSLDB B0,B0,B0,12
VSLDB B1,B1,B1,12
VSLDB B2,B2,B2,12
VSLDB B3,B3,B3,12
VSLDB B4,B4,B4,12
VSLDB B5,B5,B5,12
VSLDB D0,D0,D0,4
VSLDB D1,D1,D1,4
VSLDB D2,D2,D2,4
VSLDB D3,D3,D3,4
VSLDB D4,D4,D4,4
VSLDB D5,D5,D5,4
brct %r0,.Loop_vx
VAF A0,A0,K0
VAF B0,B0,K1
VAF C0,C0,K2
VAF D0,D0,K3
VAF A1,A1,K0
VAF D1,D1,T1 # +K[3]+1
VPERM A0,A0,A0,BEPERM
VPERM B0,B0,B0,BEPERM
VPERM C0,C0,C0,BEPERM
VPERM D0,D0,D0,BEPERM
.insn rilu,0xc20e00000000,LEN,0x40 # clgfi LEN,0x40
jl .Ltail_vx
VAF D2,D2,T2 # +K[3]+2
VAF D3,D3,T3 # +K[3]+3
VLM T0,T3,0,INP,0
VX A0,A0,T0
VX B0,B0,T1
VX C0,C0,T2
VX D0,D0,T3
VLM K0,T3,0,%r7,4 # re-load sigma and increments
VSTM A0,D0,0,OUT,0
la INP,0x40(INP)
la OUT,0x40(OUT)
aghi LEN,-0x40
je .Ldone_vx
VAF B1,B1,K1
VAF C1,C1,K2
VPERM A0,A1,A1,BEPERM
VPERM B0,B1,B1,BEPERM
VPERM C0,C1,C1,BEPERM
VPERM D0,D1,D1,BEPERM
.insn rilu,0xc20e00000000,LEN,0x40 # clgfi LEN,0x40
jl .Ltail_vx
VLM A1,D1,0,INP,0
VX A0,A0,A1
VX B0,B0,B1
VX C0,C0,C1
VX D0,D0,D1
VSTM A0,D0,0,OUT,0
la INP,0x40(INP)
la OUT,0x40(OUT)
aghi LEN,-0x40
je .Ldone_vx
VAF A2,A2,K0
VAF B2,B2,K1
VAF C2,C2,K2
VPERM A0,A2,A2,BEPERM
VPERM B0,B2,B2,BEPERM
VPERM C0,C2,C2,BEPERM
VPERM D0,D2,D2,BEPERM
.insn rilu,0xc20e00000000,LEN,0x40 # clgfi LEN,0x40
jl .Ltail_vx
VLM A1,D1,0,INP,0
VX A0,A0,A1
VX B0,B0,B1
VX C0,C0,C1
VX D0,D0,D1
VSTM A0,D0,0,OUT,0
la INP,0x40(INP)
la OUT,0x40(OUT)
aghi LEN,-0x40
je .Ldone_vx
VAF A3,A3,K0
VAF B3,B3,K1
VAF C3,C3,K2
VAF D2,K3,T3 # K[3]+3
VPERM A0,A3,A3,BEPERM
VPERM B0,B3,B3,BEPERM
VPERM C0,C3,C3,BEPERM
VPERM D0,D3,D3,BEPERM
.insn rilu,0xc20e00000000,LEN,0x40 # clgfi LEN,0x40
jl .Ltail_vx
VAF D3,D2,T1 # K[3]+4
VLM A1,D1,0,INP,0
VX A0,A0,A1
VX B0,B0,B1
VX C0,C0,C1
VX D0,D0,D1
VSTM A0,D0,0,OUT,0
la INP,0x40(INP)
la OUT,0x40(OUT)
aghi LEN,-0x40
je .Ldone_vx
VAF A4,A4,K0
VAF B4,B4,K1
VAF C4,C4,K2
VAF D4,D4,D3 # +K[3]+4
VAF D3,D3,T1 # K[3]+5
VAF K3,D2,T3 # K[3]+=6
VPERM A0,A4,A4,BEPERM
VPERM B0,B4,B4,BEPERM
VPERM C0,C4,C4,BEPERM
VPERM D0,D4,D4,BEPERM
.insn rilu,0xc20e00000000,LEN,0x40 # clgfi LEN,0x40
jl .Ltail_vx
VLM A1,D1,0,INP,0
VX A0,A0,A1
VX B0,B0,B1
VX C0,C0,C1
VX D0,D0,D1
VSTM A0,D0,0,OUT,0
la INP,0x40(INP)
la OUT,0x40(OUT)
aghi LEN,-0x40
je .Ldone_vx
VAF A5,A5,K0
VAF B5,B5,K1
VAF C5,C5,K2
VAF D5,D5,D3 # +K[3]+5
VPERM A0,A5,A5,BEPERM
VPERM B0,B5,B5,BEPERM
VPERM C0,C5,C5,BEPERM
VPERM D0,D5,D5,BEPERM
.insn rilu,0xc20e00000000,LEN,0x40 # clgfi LEN,0x40
jl .Ltail_vx
VLM A1,D1,0,INP,0
VX A0,A0,A1
VX B0,B0,B1
VX C0,C0,C1
VX D0,D0,D1
VSTM A0,D0,0,OUT,0
la INP,0x40(INP)
la OUT,0x40(OUT)
lhi %r0,10
aghi LEN,-0x40
jne .Loop_outer_vx
.Ldone_vx:
lmg %r6,%r7,FRAME+6*8(SP)
la SP,FRAME(SP)
BR_EX %r14
.Ltail_vx:
VSTM A0,D0,8*8,SP,3
lghi %r1,0
.Loop_tail_vx:
llgc %r5,0(%r1,INP)
llgc %r6,8*8(%r1,SP)
xr %r6,%r5
stc %r6,0(%r1,OUT)
la %r1,1(%r1)
brct LEN,.Loop_tail_vx
lmg %r6,%r7,FRAME+6*8(SP)
la SP,FRAME(SP)
BR_EX %r14
ENDPROC(chacha20_vx)
.previous


@ -0,0 +1,14 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* s390 ChaCha stream cipher.
*
* Copyright IBM Corp. 2021
*/
#ifndef _CHACHA_S390_H
#define _CHACHA_S390_H
void chacha20_vx(u8 *out, const u8 *inp, size_t len, const u32 *key,
const u32 *counter);
#endif /* _CHACHA_S390_H */


@ -12,6 +12,8 @@
#ifndef _ASM_S390_AP_H_
#define _ASM_S390_AP_H_
#include <linux/io.h>
/**
* The ap_qid_t identifier of an ap queue.
* If the AP facilities test (APFT) facility is available,
@ -238,7 +240,7 @@ static inline struct ap_queue_status ap_aqic(ap_qid_t qid,
struct ap_qirq_ctrl qirqctrl;
struct ap_queue_status status;
} reg1;
void *reg2 = ind;
unsigned long reg2 = virt_to_phys(ind);
reg1.qirqctrl = qirqctrl;


@ -47,8 +47,8 @@ static inline void diag10_range(unsigned long start_pfn, unsigned long num_pfn)
{
unsigned long start_addr, end_addr;
start_addr = start_pfn << PAGE_SHIFT;
end_addr = (start_pfn + num_pfn - 1) << PAGE_SHIFT;
start_addr = pfn_to_phys(start_pfn);
end_addr = pfn_to_phys(start_pfn + num_pfn - 1);
diag_stat_inc(DIAG_STAT_X010);
asm volatile(


@ -98,9 +98,9 @@ struct mcesa {
struct pt_regs;
void nmi_alloc_boot_cpu(struct lowcore *lc);
int nmi_alloc_per_cpu(struct lowcore *lc);
void nmi_free_per_cpu(struct lowcore *lc);
void nmi_alloc_mcesa_early(u64 *mcesad);
int nmi_alloc_mcesa(u64 *mcesad);
void nmi_free_mcesa(u64 *mcesad);
void s390_handle_mcck(void);
void __s390_handle_mcck(void);


@ -97,23 +97,23 @@ static inline unsigned int calc_px(dma_addr_t ptr)
return ((unsigned long) ptr >> PAGE_SHIFT) & ZPCI_PT_MASK;
}
static inline void set_pt_pfaa(unsigned long *entry, void *pfaa)
static inline void set_pt_pfaa(unsigned long *entry, phys_addr_t pfaa)
{
*entry &= ZPCI_PTE_FLAG_MASK;
*entry |= ((unsigned long) pfaa & ZPCI_PTE_ADDR_MASK);
*entry |= (pfaa & ZPCI_PTE_ADDR_MASK);
}
static inline void set_rt_sto(unsigned long *entry, void *sto)
static inline void set_rt_sto(unsigned long *entry, phys_addr_t sto)
{
*entry &= ZPCI_RTE_FLAG_MASK;
*entry |= ((unsigned long) sto & ZPCI_RTE_ADDR_MASK);
*entry |= (sto & ZPCI_RTE_ADDR_MASK);
*entry |= ZPCI_TABLE_TYPE_RTX;
}
static inline void set_st_pto(unsigned long *entry, void *pto)
static inline void set_st_pto(unsigned long *entry, phys_addr_t pto)
{
*entry &= ZPCI_STE_FLAG_MASK;
*entry |= ((unsigned long) pto & ZPCI_STE_ADDR_MASK);
*entry |= (pto & ZPCI_STE_ADDR_MASK);
*entry |= ZPCI_TABLE_TYPE_SX;
}
@ -169,16 +169,19 @@ static inline int pt_entry_isvalid(unsigned long entry)
static inline unsigned long *get_rt_sto(unsigned long entry)
{
return ((entry & ZPCI_TABLE_TYPE_MASK) == ZPCI_TABLE_TYPE_RTX)
? (unsigned long *) (entry & ZPCI_RTE_ADDR_MASK)
: NULL;
if ((entry & ZPCI_TABLE_TYPE_MASK) == ZPCI_TABLE_TYPE_RTX)
return phys_to_virt(entry & ZPCI_RTE_ADDR_MASK);
else
return NULL;
}
static inline unsigned long *get_st_pto(unsigned long entry)
{
return ((entry & ZPCI_TABLE_TYPE_MASK) == ZPCI_TABLE_TYPE_SX)
? (unsigned long *) (entry & ZPCI_STE_ADDR_MASK)
: NULL;
if ((entry & ZPCI_TABLE_TYPE_MASK) == ZPCI_TABLE_TYPE_SX)
return phys_to_virt(entry & ZPCI_STE_ADDR_MASK);
else
return NULL;
}
/* Prototypes */
@ -186,7 +189,7 @@ void dma_free_seg_table(unsigned long);
unsigned long *dma_alloc_cpu_table(void);
void dma_cleanup_tables(unsigned long *);
unsigned long *dma_walk_cpu_trans(unsigned long *rto, dma_addr_t dma_addr);
void dma_update_cpu_trans(unsigned long *entry, void *page_addr, int flags);
void dma_update_cpu_trans(unsigned long *entry, phys_addr_t page_addr, int flags);
extern const struct dma_map_ops s390_pci_dma_ops;


@ -88,11 +88,10 @@ extern void __bpon(void);
* User space process size: 2GB for 31 bit, 4TB or 8PT for 64 bit.
*/
#define TASK_SIZE_OF(tsk) (test_tsk_thread_flag(tsk, TIF_31BIT) ? \
#define TASK_SIZE (test_thread_flag(TIF_31BIT) ? \
_REGION3_SIZE : TASK_SIZE_MAX)
#define TASK_UNMAPPED_BASE (test_thread_flag(TIF_31BIT) ? \
(_REGION3_SIZE >> 1) : (_REGION2_SIZE >> 1))
#define TASK_SIZE TASK_SIZE_OF(current)
#define TASK_SIZE_MAX (-PAGE_SIZE)
#define STACK_TOP (test_thread_flag(TIF_31BIT) ? \


@ -18,7 +18,6 @@
#define QDIO_MAX_BUFFERS_MASK (QDIO_MAX_BUFFERS_PER_Q - 1)
#define QDIO_BUFNR(num) ((num) & QDIO_MAX_BUFFERS_MASK)
#define QDIO_MAX_ELEMENTS_PER_BUFFER 16
#define QDIO_SBAL_SIZE 256
#define QDIO_QETH_QFMT 0
#define QDIO_ZFCP_QFMT 1
@ -92,8 +91,8 @@ struct qdr {
* @pfmt: implementation dependent parameter format
* @rflags: QEBSM
* @ac: adapter characteristics
* @isliba: absolute address of first input SLIB
* @osliba: absolute address of first output SLIB
* @isliba: logical address of first input SLIB
* @osliba: logical address of first output SLIB
* @ebcnam: adapter identifier in EBCDIC
* @parm: implementation dependent parameters
*/
@ -313,7 +312,7 @@ typedef void qdio_handler_t(struct ccw_device *, unsigned int, int,
* @qib_rflags: rflags to set
* @no_input_qs: number of input queues
* @no_output_qs: number of output queues
* @input_handler: handler to be called for input queues
* @input_handler: handler to be called for input queues, and device-wide errors
* @output_handler: handler to be called for output queues
* @irq_poll: Data IRQ polling handler
* @scan_threshold: # of in-use buffers that triggers scan on output queue
@ -337,9 +336,6 @@ struct qdio_initialize {
struct qdio_buffer ***output_sbal_addr_array;
};
#define QDIO_FLAG_SYNC_INPUT 0x01
#define QDIO_FLAG_SYNC_OUTPUT 0x02
int qdio_alloc_buffers(struct qdio_buffer **buf, unsigned int count);
void qdio_free_buffers(struct qdio_buffer **buf, unsigned int count);
void qdio_reset_buffers(struct qdio_buffer **buf, unsigned int count);
@ -349,13 +345,18 @@ extern int qdio_allocate(struct ccw_device *cdev, unsigned int no_input_qs,
extern int qdio_establish(struct ccw_device *cdev,
struct qdio_initialize *init_data);
extern int qdio_activate(struct ccw_device *);
extern int do_QDIO(struct ccw_device *cdev, unsigned int callflags, int q_nr,
unsigned int bufnr, unsigned int count, struct qaob *aob);
extern int qdio_start_irq(struct ccw_device *cdev);
extern int qdio_stop_irq(struct ccw_device *cdev);
extern int qdio_inspect_queue(struct ccw_device *cdev, unsigned int nr,
bool is_input, unsigned int *bufnr,
unsigned int *error);
extern int qdio_inspect_input_queue(struct ccw_device *cdev, unsigned int nr,
unsigned int *bufnr, unsigned int *error);
extern int qdio_inspect_output_queue(struct ccw_device *cdev, unsigned int nr,
unsigned int *bufnr, unsigned int *error);
extern int qdio_add_bufs_to_input_queue(struct ccw_device *cdev,
unsigned int q_nr, unsigned int bufnr,
unsigned int count);
extern int qdio_add_bufs_to_output_queue(struct ccw_device *cdev,
unsigned int q_nr, unsigned int bufnr,
unsigned int count, struct qaob *aob);
extern int qdio_shutdown(struct ccw_device *, int);
extern int qdio_free(struct ccw_device *);
extern int qdio_get_ssqd_desc(struct ccw_device *, struct qdio_ssqd_desc *);
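The flag-based do_QDIO() entry point is replaced by direction-specific calls; a driver-side sketch of the conversion, following the signatures above (illustrative)::

	/* before */
	do_QDIO(cdev, QDIO_FLAG_SYNC_INPUT, 0, bufnr, count, NULL);
	/* after */
	qdio_add_bufs_to_input_queue(cdev, 0, bufnr, count);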


@ -372,6 +372,16 @@
MRXBOPC \hint, 0x36, v1, v3
.endm
/* VECTOR STORE */
.macro VST vr1, disp, index="%r0", base
VX_NUM v1, \vr1
GR_NUM x2, \index
GR_NUM b2, \base /* Base register */
.word 0xE700 | ((v1&15) << 4) | (x2&15)
.word (b2 << 12) | (\disp)
MRXBOPC 0, 0x0E, v1
.endm
/* VECTOR STORE MULTIPLE */
.macro VSTM vfrom, vto, disp, base, hint=3
VX_NUM v1, \vfrom
@ -411,6 +421,81 @@
VUPLL \vr1, \vr2, 2
.endm
/* VECTOR PERMUTE DOUBLEWORD IMMEDIATE */
.macro VPDI vr1, vr2, vr3, m4
VX_NUM v1, \vr1
VX_NUM v2, \vr2
VX_NUM v3, \vr3
.word 0xE700 | ((v1&15) << 4) | (v2&15)
.word ((v3&15) << 12)
MRXBOPC \m4, 0x84, v1, v2, v3
.endm
/* VECTOR REPLICATE */
.macro VREP vr1, vr3, imm2, m4
VX_NUM v1, \vr1
VX_NUM v3, \vr3
.word 0xE700 | ((v1&15) << 4) | (v3&15)
.word \imm2
MRXBOPC \m4, 0x4D, v1, v3
.endm
.macro VREPB vr1, vr3, imm2
VREP \vr1, \vr3, \imm2, 0
.endm
.macro VREPH vr1, vr3, imm2
VREP \vr1, \vr3, \imm2, 1
.endm
.macro VREPF vr1, vr3, imm2
VREP \vr1, \vr3, \imm2, 2
.endm
.macro VREPG vr1, vr3, imm2
VREP \vr1, \vr3, \imm2, 3
.endm
/* VECTOR MERGE HIGH */
.macro VMRH vr1, vr2, vr3, m4
VX_NUM v1, \vr1
VX_NUM v2, \vr2
VX_NUM v3, \vr3
.word 0xE700 | ((v1&15) << 4) | (v2&15)
.word ((v3&15) << 12)
MRXBOPC \m4, 0x61, v1, v2, v3
.endm
.macro VMRHB vr1, vr2, vr3
VMRH \vr1, \vr2, \vr3, 0
.endm
.macro VMRHH vr1, vr2, vr3
VMRH \vr1, \vr2, \vr3, 1
.endm
.macro VMRHF vr1, vr2, vr3
VMRH \vr1, \vr2, \vr3, 2
.endm
.macro VMRHG vr1, vr2, vr3
VMRH \vr1, \vr2, \vr3, 3
.endm
/* VECTOR MERGE LOW */
.macro VMRL vr1, vr2, vr3, m4
VX_NUM v1, \vr1
VX_NUM v2, \vr2
VX_NUM v3, \vr3
.word 0xE700 | ((v1&15) << 4) | (v2&15)
.word ((v3&15) << 12)
MRXBOPC \m4, 0x60, v1, v2, v3
.endm
.macro VMRLB vr1, vr2, vr3
VMRL \vr1, \vr2, \vr3, 0
.endm
.macro VMRLH vr1, vr2, vr3
VMRL \vr1, \vr2, \vr3, 1
.endm
.macro VMRLF vr1, vr2, vr3
VMRL \vr1, \vr2, \vr3, 2
.endm
.macro VMRLG vr1, vr2, vr3
VMRL \vr1, \vr2, \vr3, 3
.endm
/* Vector integer instructions */
@ -557,5 +642,37 @@
VESRAV \vr1, \vr2, \vr3, 3
.endm
/* VECTOR ELEMENT ROTATE LEFT LOGICAL */
.macro VERLL vr1, vr3, disp, base="%r0", m4
VX_NUM v1, \vr1
VX_NUM v3, \vr3
GR_NUM b2, \base
.word 0xE700 | ((v1&15) << 4) | (v3&15)
.word (b2 << 12) | (\disp)
MRXBOPC \m4, 0x33, v1, v3
.endm
.macro VERLLB vr1, vr3, disp, base="%r0"
VERLL \vr1, \vr3, \disp, \base, 0
.endm
.macro VERLLH vr1, vr3, disp, base="%r0"
VERLL \vr1, \vr3, \disp, \base, 1
.endm
.macro VERLLF vr1, vr3, disp, base="%r0"
VERLL \vr1, \vr3, \disp, \base, 2
.endm
.macro VERLLG vr1, vr3, disp, base="%r0"
VERLL \vr1, \vr3, \disp, \base, 3
.endm
/* VECTOR SHIFT LEFT DOUBLE BY BYTE */
.macro VSLDB vr1, vr2, vr3, imm4
VX_NUM v1, \vr1
VX_NUM v2, \vr2
VX_NUM v3, \vr3
.word 0xE700 | ((v1&15) << 4) | (v2&15)
.word ((v3&15) << 12) | (\imm4)
MRXBOPC 0, 0x77, v1, v2, v3
.endm
#endif /* __ASSEMBLY__ */
#endif /* __ASM_S390_VX_INSN_H */


@ -60,7 +60,7 @@ struct save_area * __init save_area_alloc(bool is_boot_cpu)
{
struct save_area *sa;
sa = (void *) memblock_phys_alloc(sizeof(*sa), 8);
sa = memblock_alloc(sizeof(*sa), 8);
if (!sa)
panic("Failed to allocate save area\n");


@ -278,6 +278,7 @@ static const unsigned char formats[][6] = {
[INSTR_SIL_RDI] = { D_20, B_16, I16_32, 0, 0, 0 },
[INSTR_SIL_RDU] = { D_20, B_16, U16_32, 0, 0, 0 },
[INSTR_SIY_IRD] = { D20_20, B_16, I8_8, 0, 0, 0 },
[INSTR_SIY_RD] = { D20_20, B_16, 0, 0, 0, 0 },
[INSTR_SIY_URD] = { D20_20, B_16, U8_8, 0, 0, 0 },
[INSTR_SI_RD] = { D_20, B_16, 0, 0, 0, 0 },
[INSTR_SI_URD] = { D_20, B_16, U8_8, 0, 0, 0 },


@ -86,7 +86,7 @@ static noinline void __machine_kdump(void *image)
continue;
}
/* Store status of the boot CPU */
mcesa = (struct mcesa *)(S390_lowcore.mcesad & MCESA_ORIGIN_MASK);
mcesa = __va(S390_lowcore.mcesad & MCESA_ORIGIN_MASK);
if (MACHINE_HAS_VX)
save_vx_regs((__vector128 *) mcesa->vector_save_area);
if (MACHINE_HAS_GS) {


@ -58,27 +58,27 @@ static inline unsigned long nmi_get_mcesa_size(void)
/*
* The initial machine check extended save area for the boot CPU.
* It will be replaced by nmi_init() with an allocated structure.
* The structure is required for machine check happening early in
* the boot process.
* It will be replaced on the boot CPU reinit with an allocated
* structure. The structure is required for machine check happening
* early in the boot process.
*/
static struct mcesa boot_mcesa __initdata __aligned(MCESA_MAX_SIZE);
void __init nmi_alloc_boot_cpu(struct lowcore *lc)
void __init nmi_alloc_mcesa_early(u64 *mcesad)
{
if (!nmi_needs_mcesa())
return;
lc->mcesad = (unsigned long) &boot_mcesa;
*mcesad = __pa(&boot_mcesa);
if (MACHINE_HAS_GS)
lc->mcesad |= ilog2(MCESA_MAX_SIZE);
*mcesad |= ilog2(MCESA_MAX_SIZE);
}
static int __init nmi_init(void)
static void __init nmi_alloc_cache(void)
{
unsigned long origin, cr0, size;
unsigned long size;
if (!nmi_needs_mcesa())
return 0;
return;
size = nmi_get_mcesa_size();
if (size > MCESA_MIN_SIZE)
mcesa_origin_lc = ilog2(size);
@ -86,40 +86,31 @@ static int __init nmi_init(void)
mcesa_cache = kmem_cache_create("nmi_save_areas", size, size, 0, NULL);
if (!mcesa_cache)
panic("Couldn't create nmi save area cache");
origin = (unsigned long) kmem_cache_alloc(mcesa_cache, GFP_KERNEL);
if (!origin)
panic("Couldn't allocate nmi save area");
/* The pointer is stored with mcesa_bits ORed in */
kmemleak_not_leak((void *) origin);
__ctl_store(cr0, 0, 0);
__ctl_clear_bit(0, 28); /* disable lowcore protection */
/* Replace boot_mcesa on the boot CPU */
S390_lowcore.mcesad = origin | mcesa_origin_lc;
__ctl_load(cr0, 0, 0);
return 0;
}
early_initcall(nmi_init);
int nmi_alloc_per_cpu(struct lowcore *lc)
int __ref nmi_alloc_mcesa(u64 *mcesad)
{
unsigned long origin;
*mcesad = 0;
if (!nmi_needs_mcesa())
return 0;
if (!mcesa_cache)
nmi_alloc_cache();
origin = (unsigned long) kmem_cache_alloc(mcesa_cache, GFP_KERNEL);
if (!origin)
return -ENOMEM;
/* The pointer is stored with mcesa_bits ORed in */
kmemleak_not_leak((void *) origin);
lc->mcesad = origin | mcesa_origin_lc;
*mcesad = __pa(origin) | mcesa_origin_lc;
return 0;
}
void nmi_free_per_cpu(struct lowcore *lc)
void nmi_free_mcesa(u64 *mcesad)
{
if (!nmi_needs_mcesa())
return;
kmem_cache_free(mcesa_cache, (void *)(lc->mcesad & MCESA_ORIGIN_MASK));
kmem_cache_free(mcesa_cache, __va(*mcesad & MCESA_ORIGIN_MASK));
}
static notrace void s390_handle_damage(void)
@ -246,7 +237,7 @@ static int notrace s390_validate_registers(union mci mci, int umode)
: "Q" (S390_lowcore.fpt_creg_save_area));
}
mcesa = (struct mcesa *)(S390_lowcore.mcesad & MCESA_ORIGIN_MASK);
mcesa = __va(S390_lowcore.mcesad & MCESA_ORIGIN_MASK);
if (!MACHINE_HAS_VX) {
/* Validate floating point registers */
asm volatile(


@ -139,7 +139,6 @@ int copy_thread(unsigned long clone_flags, unsigned long new_stackp,
(unsigned long)__ret_from_fork;
frame->childregs.gprs[9] = new_stackp; /* function */
frame->childregs.gprs[10] = arg;
frame->childregs.gprs[11] = (unsigned long)do_exit;
frame->childregs.orig_gpr2 = -1;
frame->childregs.last_break = 1;
return 0;


@ -445,7 +445,7 @@ static void __init setup_lowcore_dat_off(void)
lc->lpp = LPP_MAGIC;
lc->machine_flags = S390_lowcore.machine_flags;
lc->preempt_count = S390_lowcore.preempt_count;
nmi_alloc_boot_cpu(lc);
nmi_alloc_mcesa_early(&lc->mcesad);
lc->sys_enter_timer = S390_lowcore.sys_enter_timer;
lc->exit_timer = S390_lowcore.exit_timer;
lc->user_timer = S390_lowcore.user_timer;


@ -212,7 +212,7 @@ static int pcpu_alloc_lowcore(struct pcpu *pcpu, int cpu)
lc->return_lpswe = gen_lpswe(__LC_RETURN_PSW);
lc->return_mcck_lpswe = gen_lpswe(__LC_RETURN_MCCK_PSW);
lc->preempt_count = PREEMPT_DISABLED;
if (nmi_alloc_per_cpu(lc))
if (nmi_alloc_mcesa(&lc->mcesad))
goto out;
lowcore_ptr[cpu] = lc;
pcpu_sigp_retry(pcpu, SIGP_SET_PREFIX, (u32)(unsigned long) lc);
@ -239,7 +239,7 @@ static void pcpu_free_lowcore(struct pcpu *pcpu)
mcck_stack = lc->mcck_stack - STACK_INIT_OFFSET;
pcpu_sigp_retry(pcpu, SIGP_SET_PREFIX, 0);
lowcore_ptr[cpu] = NULL;
nmi_free_per_cpu(lc);
nmi_free_mcesa(&lc->mcesad);
stack_free(async_stack);
stack_free(mcck_stack);
free_pages(nodat_stack, THREAD_SIZE_ORDER);
@ -622,7 +622,7 @@ int smp_store_status(int cpu)
return -EIO;
if (!MACHINE_HAS_VX && !MACHINE_HAS_GS)
return 0;
pa = __pa(lc->mcesad & MCESA_ORIGIN_MASK);
pa = lc->mcesad & MCESA_ORIGIN_MASK;
if (MACHINE_HAS_GS)
pa |= lc->mcesad & MCESA_LC_MASK;
if (__pcpu_sigp_relax(pcpu->address, SIGP_STORE_ADDITIONAL_STATUS,
@ -658,26 +658,22 @@ int smp_store_status(int cpu)
* deactivates the elfcorehdr= kernel parameter
*/
static __init void smp_save_cpu_vxrs(struct save_area *sa, u16 addr,
bool is_boot_cpu, unsigned long page)
bool is_boot_cpu, __vector128 *vxrs)
{
__vector128 *vxrs = (__vector128 *) page;
if (is_boot_cpu)
vxrs = boot_cpu_vector_save_area;
else
__pcpu_sigp_relax(addr, SIGP_STORE_ADDITIONAL_STATUS, page);
__pcpu_sigp_relax(addr, SIGP_STORE_ADDITIONAL_STATUS, __pa(vxrs));
save_area_add_vxrs(sa, vxrs);
}
static __init void smp_save_cpu_regs(struct save_area *sa, u16 addr,
bool is_boot_cpu, unsigned long page)
bool is_boot_cpu, void *regs)
{
void *regs = (void *) page;
if (is_boot_cpu)
copy_oldmem_kernel(regs, (void *) __LC_FPREGS_SAVE_AREA, 512);
else
__pcpu_sigp_relax(addr, SIGP_STORE_STATUS_AT_ADDRESS, page);
__pcpu_sigp_relax(addr, SIGP_STORE_STATUS_AT_ADDRESS, __pa(regs));
save_area_add_regs(sa, regs);
}
@ -685,14 +681,14 @@ void __init smp_save_dump_cpus(void)
{
int addr, boot_cpu_addr, max_cpu_addr;
struct save_area *sa;
unsigned long page;
bool is_boot_cpu;
void *page;
if (!(oldmem_data.start || is_ipl_type_dump()))
/* No previous system present, normal boot. */
return;
/* Allocate a page as dumping area for the store status sigps */
page = memblock_phys_alloc_range(PAGE_SIZE, PAGE_SIZE, 0, 1UL << 31);
page = memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
if (!page)
panic("ERROR: Failed to allocate %lx bytes below %lx\n",
PAGE_SIZE, 1UL << 31);
@ -723,7 +719,7 @@ void __init smp_save_dump_cpus(void)
/* Get the CPU registers */
smp_save_cpu_regs(sa, addr, is_boot_cpu, page);
}
memblock_phys_free(page, PAGE_SIZE);
memblock_free(page, PAGE_SIZE);
diag_amode31_ops.diag308_reset();
pcpu_set_smt(0);
}
@ -880,7 +876,7 @@ void __init smp_detect_cpus(void)
/* Add CPUs present at boot */
__smp_rescan_cpus(info, true);
memblock_phys_free((unsigned long)info, sizeof(*info));
memblock_free(info, sizeof(*info));
}
/*
@ -1271,14 +1267,15 @@ static int __init smp_reinit_ipl_cpu(void)
{
unsigned long async_stack, nodat_stack, mcck_stack;
struct lowcore *lc, *lc_ipl;
unsigned long flags;
unsigned long flags, cr0;
u64 mcesad;
lc_ipl = lowcore_ptr[0];
lc = (struct lowcore *) __get_free_pages(GFP_KERNEL | GFP_DMA, LC_ORDER);
nodat_stack = __get_free_pages(GFP_KERNEL, THREAD_SIZE_ORDER);
async_stack = stack_alloc();
mcck_stack = stack_alloc();
if (!lc || !nodat_stack || !async_stack || !mcck_stack)
if (!lc || !nodat_stack || !async_stack || !mcck_stack || nmi_alloc_mcesa(&mcesad))
panic("Couldn't allocate memory");
local_irq_save(flags);
@ -1287,6 +1284,10 @@ static int __init smp_reinit_ipl_cpu(void)
S390_lowcore.nodat_stack = nodat_stack + STACK_INIT_OFFSET;
S390_lowcore.async_stack = async_stack + STACK_INIT_OFFSET;
S390_lowcore.mcck_stack = mcck_stack + STACK_INIT_OFFSET;
__ctl_store(cr0, 0, 0);
__ctl_clear_bit(0, 28); /* disable lowcore protection */
S390_lowcore.mcesad = mcesad;
__ctl_load(cr0, 0, 0);
lowcore_ptr[0] = lc;
local_mcck_enable();
local_irq_restore(flags);

@ -30,7 +30,7 @@ int __bootdata_preserved(prot_virt_host);
EXPORT_SYMBOL(prot_virt_host);
EXPORT_SYMBOL(uv_info);
static int __init uv_init(unsigned long stor_base, unsigned long stor_len)
static int __init uv_init(phys_addr_t stor_base, unsigned long stor_len)
{
struct uv_cb_init uvcb = {
.header.cmd = UVC_CMD_INIT_UV,
@ -49,12 +49,12 @@ static int __init uv_init(unsigned long stor_base, unsigned long stor_len)
void __init setup_uv(void)
{
unsigned long uv_stor_base;
void *uv_stor_base;
if (!is_prot_virt_host())
return;
uv_stor_base = (unsigned long)memblock_alloc_try_nid(
uv_stor_base = memblock_alloc_try_nid(
uv_info.uv_base_stor_len, SZ_1M, SZ_2G,
MEMBLOCK_ALLOC_ACCESSIBLE, NUMA_NO_NODE);
if (!uv_stor_base) {
@ -63,8 +63,8 @@ void __init setup_uv(void)
goto fail;
}
if (uv_init(uv_stor_base, uv_info.uv_base_stor_len)) {
memblock_phys_free(uv_stor_base, uv_info.uv_base_stor_len);
if (uv_init(__pa(uv_stor_base), uv_info.uv_base_stor_len)) {
memblock_free(uv_stor_base, uv_info.uv_base_stor_len);
goto fail;
}

@ -90,7 +90,7 @@ static long cmm_alloc_pages(long nr, long *counter,
} else
free_page((unsigned long) npa);
}
diag10_range(addr >> PAGE_SHIFT, 1);
diag10_range(virt_to_pfn(addr), 1);
pa->pages[pa->index++] = addr;
(*counter)++;
spin_unlock(&cmm_lock);

@ -115,7 +115,7 @@ static void dump_pagetable(unsigned long asce, unsigned long address)
pr_cont("R1:%016lx ", *table);
if (*table & _REGION_ENTRY_INVALID)
goto out;
table = (unsigned long *)(*table & _REGION_ENTRY_ORIGIN);
table = __va(*table & _REGION_ENTRY_ORIGIN);
fallthrough;
case _ASCE_TYPE_REGION2:
table += (address & _REGION2_INDEX) >> _REGION2_SHIFT;
@ -124,7 +124,7 @@ static void dump_pagetable(unsigned long asce, unsigned long address)
pr_cont("R2:%016lx ", *table);
if (*table & _REGION_ENTRY_INVALID)
goto out;
table = (unsigned long *)(*table & _REGION_ENTRY_ORIGIN);
table = __va(*table & _REGION_ENTRY_ORIGIN);
fallthrough;
case _ASCE_TYPE_REGION3:
table += (address & _REGION3_INDEX) >> _REGION3_SHIFT;
@ -133,7 +133,7 @@ static void dump_pagetable(unsigned long asce, unsigned long address)
pr_cont("R3:%016lx ", *table);
if (*table & (_REGION_ENTRY_INVALID | _REGION3_ENTRY_LARGE))
goto out;
table = (unsigned long *)(*table & _REGION_ENTRY_ORIGIN);
table = __va(*table & _REGION_ENTRY_ORIGIN);
fallthrough;
case _ASCE_TYPE_SEGMENT:
table += (address & _SEGMENT_INDEX) >> _SEGMENT_SHIFT;
@ -142,7 +142,7 @@ static void dump_pagetable(unsigned long asce, unsigned long address)
pr_cont("S:%016lx ", *table);
if (*table & (_SEGMENT_ENTRY_INVALID | _SEGMENT_ENTRY_LARGE))
goto out;
table = (unsigned long *)(*table & _SEGMENT_ENTRY_ORIGIN);
table = __va(*table & _SEGMENT_ENTRY_ORIGIN);
}
table += (address & _PAGE_INDEX) >> _PAGE_SHIFT;
if (bad_address(table))

@ -215,6 +215,9 @@ void free_initmem(void)
__set_memory((unsigned long)_sinittext,
(unsigned long)(_einittext - _sinittext) >> PAGE_SHIFT,
SET_MEMORY_RW | SET_MEMORY_NX);
free_reserved_area(sclp_early_sccb,
sclp_early_sccb + EXT_SCCB_READ_SCP,
POISON_FREE_INITMEM, "unused early sccb");
free_initmem_default(POISON_FREE_INITMEM);
}

@ -176,7 +176,75 @@ void page_table_free_pgste(struct page *page)
#endif /* CONFIG_PGSTE */
/*
* page table entry allocation/free routines.
* A 2KB-pgtable is either upper or lower half of a normal page.
* The second half of the page may be unused or used as another
* 2KB-pgtable.
*
* Whenever possible the parent page for a new 2KB-pgtable is picked
* from the list of partially allocated pages mm_context_t::pgtable_list.
* In case the list is empty a new parent page is allocated and added to
* the list.
*
* When a parent page gets fully allocated it contains 2KB-pgtables in both
* upper and lower halves and is removed from mm_context_t::pgtable_list.
*
* When a 2KB-pgtable is freed from a fully allocated parent page, that
* page turns partially allocated and is added to mm_context_t::pgtable_list.
*
* If a 2KB-pgtable is freed from a partially allocated parent page, that
* page turns unused and gets removed from mm_context_t::pgtable_list.
* Furthermore, the unused parent page is released.
*
* As follows from the above, no unallocated or fully allocated parent
* pages are contained in mm_context_t::pgtable_list.
*
* The upper byte (bits 24-31) of the parent page _refcount is used
* for tracking contained 2KB-pgtables and has the following format:
*
* PP AA
* 01234567 upper byte (bits 24-31) of struct page::_refcount
* || ||
* || |+--- upper 2KB-pgtable is allocated
* || +---- lower 2KB-pgtable is allocated
* |+------- upper 2KB-pgtable is pending for removal
* +-------- lower 2KB-pgtable is pending for removal
*
* (See commit 620b4e903179 ("s390: use _refcount for pgtables") on why
* using _refcount is possible).
*
* When 2KB-pgtable is allocated the corresponding AA bit is set to 1.
* The parent page is either:
* - added to mm_context_t::pgtable_list in case the second half of the
* parent page is still unallocated;
* - removed from mm_context_t::pgtable_list in case both halves of the
* parent page are allocated;
* These operations are protected with mm_context_t::lock.
*
* When 2KB-pgtable is deallocated the corresponding AA bit is set to 0
* and the corresponding PP bit is set to 1 in a single atomic operation.
* Thus, PP and AA bits corresponding to the same 2KB-pgtable are mutually
* exclusive and may never be both set to 1!
* The parent page is either:
* - added to mm_context_t::pgtable_list in case the second half of the
* parent page is still allocated;
* - removed from mm_context_t::pgtable_list in case the second half of
* the parent page is unallocated;
* These operations are protected with mm_context_t::lock.
*
* It is important to understand that mm_context_t::lock only protects
* mm_context_t::pgtable_list and AA bits, but not the parent page itself
* and PP bits.
*
* Releasing the parent page happens whenever the PP bit turns from 1 to 0,
* while both AA bits and the second PP bit are already unset. Then the
* parent page does not contain any 2KB-pgtable fragment anymore, and it has
* also been removed from mm_context_t::pgtable_list. It is therefore
* safe to release the page.
*
* PGSTE memory spaces use full 4KB-pgtables and do not need most of the
* logic described above. Both AA bits are set to 1 to denote a 4KB-pgtable
* while the PP bits are never used, nor is such a page ever added to or
* removed from mm_context_t::pgtable_list.
*/
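To make the encoding just described concrete, here is a hypothetical pair of read-only helpers (not part of the patch) that decode the tracking byte; half is 0 for the lower and 1 for the upper 2KB-pgtable, so the AA bits are 0x01/0x02 and the PP bits 0x10/0x20:

static inline unsigned int pgtable_tracking_byte(struct page *page)
{
	return atomic_read(&page->_refcount) >> 24;
}

static inline bool pgtable_half_allocated(struct page *page, int half)
{
	return pgtable_tracking_byte(page) & (0x01U << half);	/* AA bit */
}

static inline bool pgtable_half_pending(struct page *page, int half)
{
	return pgtable_tracking_byte(page) & (0x10U << half);	/* PP bit */
}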
unsigned long *page_table_alloc(struct mm_struct *mm)
{
@ -192,14 +260,23 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
page = list_first_entry(&mm->context.pgtable_list,
struct page, lru);
mask = atomic_read(&page->_refcount) >> 24;
mask = (mask | (mask >> 4)) & 3;
if (mask != 3) {
/*
* The pending removal bits must also be checked.
* Failure to do so might lead to an impossible
* value (i.e. 0x13 or 0x23) being written to _refcount.
* Such values violate the assumption that pending and
* allocation bits are mutually exclusive, and the rest
* of the code goes off the rails as a result. That could
* lead to a whole bunch of races and corruptions.
*/
mask = (mask | (mask >> 4)) & 0x03U;
if (mask != 0x03U) {
table = (unsigned long *) page_to_virt(page);
bit = mask & 1; /* =1 -> second 2K */
if (bit)
table += PTRS_PER_PTE;
atomic_xor_bits(&page->_refcount,
1U << (bit + 24));
0x01U << (bit + 24));
list_del(&page->lru);
}
}
@ -220,12 +297,12 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
table = (unsigned long *) page_to_virt(page);
if (mm_alloc_pgste(mm)) {
/* Return 4K page table with PGSTEs */
atomic_xor_bits(&page->_refcount, 3 << 24);
atomic_xor_bits(&page->_refcount, 0x03U << 24);
memset64((u64 *)table, _PAGE_INVALID, PTRS_PER_PTE);
memset64((u64 *)table + PTRS_PER_PTE, 0, PTRS_PER_PTE);
} else {
/* Return the first 2K fragment of the page */
atomic_xor_bits(&page->_refcount, 1 << 24);
atomic_xor_bits(&page->_refcount, 0x01U << 24);
memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE);
spin_lock_bh(&mm->context.lock);
list_add(&page->lru, &mm->context.pgtable_list);
@ -234,29 +311,53 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
return table;
}
static void page_table_release_check(struct page *page, void *table,
unsigned int half, unsigned int mask)
{
char msg[128];
if (!IS_ENABLED(CONFIG_DEBUG_VM) || !mask)
return;
snprintf(msg, sizeof(msg),
"Invalid pgtable %p release half 0x%02x mask 0x%02x",
table, half, mask);
dump_page(page, msg);
}
void page_table_free(struct mm_struct *mm, unsigned long *table)
{
unsigned int mask, bit, half;
struct page *page;
unsigned int bit, mask;
page = virt_to_page(table);
if (!mm_alloc_pgste(mm)) {
/* Free 2K page table fragment of a 4K page */
bit = ((unsigned long) table & ~PAGE_MASK)/(PTRS_PER_PTE*sizeof(pte_t));
spin_lock_bh(&mm->context.lock);
mask = atomic_xor_bits(&page->_refcount, 1U << (bit + 24));
/*
* Mark the page for delayed release. The actual release
* will happen outside of the critical section from this
* function or from __tlb_remove_table()
*/
mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
mask >>= 24;
if (mask & 3)
if (mask & 0x03U)
list_add(&page->lru, &mm->context.pgtable_list);
else
list_del(&page->lru);
spin_unlock_bh(&mm->context.lock);
if (mask != 0)
mask = atomic_xor_bits(&page->_refcount, 0x10U << (bit + 24));
mask >>= 24;
if (mask != 0x00U)
return;
half = 0x01U << bit;
} else {
atomic_xor_bits(&page->_refcount, 3U << 24);
half = 0x03U;
mask = atomic_xor_bits(&page->_refcount, 0x03U << 24);
mask >>= 24;
}
page_table_release_check(page, table, half, mask);
pgtable_pte_page_dtor(page);
__free_page(page);
}
@ -272,47 +373,54 @@ void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
page = virt_to_page(table);
if (mm_alloc_pgste(mm)) {
gmap_unlink(mm, table, vmaddr);
table = (unsigned long *) ((unsigned long)table | 3);
table = (unsigned long *) ((unsigned long)table | 0x03U);
tlb_remove_table(tlb, table);
return;
}
bit = ((unsigned long) table & ~PAGE_MASK) / (PTRS_PER_PTE*sizeof(pte_t));
spin_lock_bh(&mm->context.lock);
/*
* Mark the page for delayed release. The actual release will happen
* outside of the critical section from __tlb_remove_table() or from
* page_table_free()
*/
mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
mask >>= 24;
if (mask & 3)
if (mask & 0x03U)
list_add_tail(&page->lru, &mm->context.pgtable_list);
else
list_del(&page->lru);
spin_unlock_bh(&mm->context.lock);
table = (unsigned long *) ((unsigned long) table | (1U << bit));
table = (unsigned long *) ((unsigned long) table | (0x01U << bit));
tlb_remove_table(tlb, table);
}
void __tlb_remove_table(void *_table)
{
unsigned int mask = (unsigned long) _table & 3;
unsigned int mask = (unsigned long) _table & 0x03U, half = mask;
void *table = (void *)((unsigned long) _table ^ mask);
struct page *page = virt_to_page(table);
switch (mask) {
case 0: /* pmd, pud, or p4d */
switch (half) {
case 0x00U: /* pmd, pud, or p4d */
free_pages((unsigned long) table, 2);
break;
case 1: /* lower 2K of a 4K page table */
case 2: /* higher 2K of a 4K page table */
return;
case 0x01U: /* lower 2K of a 4K page table */
case 0x02U: /* higher 2K of a 4K page table */
mask = atomic_xor_bits(&page->_refcount, mask << (4 + 24));
mask >>= 24;
if (mask != 0)
break;
fallthrough;
case 3: /* 4K page table with pgstes */
if (mask & 3)
atomic_xor_bits(&page->_refcount, 3 << 24);
pgtable_pte_page_dtor(page);
__free_page(page);
if (mask != 0x00U)
return;
break;
case 0x03U: /* 4K page table with pgstes */
mask = atomic_xor_bits(&page->_refcount, 0x03U << 24);
mask >>= 24;
break;
}
page_table_release_check(page, table, half, mask);
pgtable_pte_page_dtor(page);
__free_page(page);
}
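Together, page_table_free(), page_table_free_rcu() and __tlb_remove_table() implement a two-phase release: phase one atomically flips a half from allocated (AA) to pending (PP) under mm_context_t::lock, phase two clears PP again, and whichever side observes the tracking byte reaching zero frees the parent page. A toy model of that handshake, reusing the file's atomic_xor_bits() helper; the function names are made up:

static void pgtable_half_begin_release(struct page *page, int bit)
{
	/* AA -> 0, PP -> 1 for this half, in one atomic operation */
	atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
}

static bool pgtable_half_finish_release(struct page *page, int bit)
{
	unsigned int mask;

	/* PP -> 0; the byte is zero once no half is allocated or pending */
	mask = atomic_xor_bits(&page->_refcount, 0x10U << (bit + 24));
	return (mask >> 24) == 0x00U;	/* true: caller may free the page */
}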
/*
@ -322,34 +430,34 @@ void __tlb_remove_table(void *_table)
static struct kmem_cache *base_pgt_cache;
static unsigned long base_pgt_alloc(void)
static unsigned long *base_pgt_alloc(void)
{
u64 *table;
unsigned long *table;
table = kmem_cache_alloc(base_pgt_cache, GFP_KERNEL);
if (table)
memset64(table, _PAGE_INVALID, PTRS_PER_PTE);
return (unsigned long) table;
}
static void base_pgt_free(unsigned long table)
{
kmem_cache_free(base_pgt_cache, (void *) table);
}
static unsigned long base_crst_alloc(unsigned long val)
{
unsigned long table;
table = __get_free_pages(GFP_KERNEL, CRST_ALLOC_ORDER);
if (table)
crst_table_init((unsigned long *)table, val);
memset64((u64 *)table, _PAGE_INVALID, PTRS_PER_PTE);
return table;
}
static void base_crst_free(unsigned long table)
static void base_pgt_free(unsigned long *table)
{
free_pages(table, CRST_ALLOC_ORDER);
kmem_cache_free(base_pgt_cache, table);
}
static unsigned long *base_crst_alloc(unsigned long val)
{
unsigned long *table;
table = (unsigned long *)__get_free_pages(GFP_KERNEL, CRST_ALLOC_ORDER);
if (table)
crst_table_init(table, val);
return table;
}
static void base_crst_free(unsigned long *table)
{
free_pages((unsigned long)table, CRST_ALLOC_ORDER);
}
#define BASE_ADDR_END_FUNC(NAME, SIZE) \
@ -377,14 +485,14 @@ static inline unsigned long base_lra(unsigned long address)
return real;
}
static int base_page_walk(unsigned long origin, unsigned long addr,
static int base_page_walk(unsigned long *origin, unsigned long addr,
unsigned long end, int alloc)
{
unsigned long *pte, next;
if (!alloc)
return 0;
pte = (unsigned long *) origin;
pte = origin;
pte += (addr & _PAGE_INDEX) >> _PAGE_SHIFT;
do {
next = base_page_addr_end(addr, end);
@ -393,13 +501,13 @@ static int base_page_walk(unsigned long origin, unsigned long addr,
return 0;
}
static int base_segment_walk(unsigned long origin, unsigned long addr,
static int base_segment_walk(unsigned long *origin, unsigned long addr,
unsigned long end, int alloc)
{
unsigned long *ste, next, table;
unsigned long *ste, next, *table;
int rc;
ste = (unsigned long *) origin;
ste = origin;
ste += (addr & _SEGMENT_INDEX) >> _SEGMENT_SHIFT;
do {
next = base_segment_addr_end(addr, end);
@ -409,9 +517,9 @@ static int base_segment_walk(unsigned long origin, unsigned long addr,
table = base_pgt_alloc();
if (!table)
return -ENOMEM;
*ste = table | _SEGMENT_ENTRY;
*ste = __pa(table) | _SEGMENT_ENTRY;
}
table = *ste & _SEGMENT_ENTRY_ORIGIN;
table = __va(*ste & _SEGMENT_ENTRY_ORIGIN);
rc = base_page_walk(table, addr, next, alloc);
if (rc)
return rc;
@ -422,13 +530,13 @@ static int base_segment_walk(unsigned long origin, unsigned long addr,
return 0;
}
static int base_region3_walk(unsigned long origin, unsigned long addr,
static int base_region3_walk(unsigned long *origin, unsigned long addr,
unsigned long end, int alloc)
{
unsigned long *rtte, next, table;
unsigned long *rtte, next, *table;
int rc;
rtte = (unsigned long *) origin;
rtte = origin;
rtte += (addr & _REGION3_INDEX) >> _REGION3_SHIFT;
do {
next = base_region3_addr_end(addr, end);
@ -438,9 +546,9 @@ static int base_region3_walk(unsigned long origin, unsigned long addr,
table = base_crst_alloc(_SEGMENT_ENTRY_EMPTY);
if (!table)
return -ENOMEM;
*rtte = table | _REGION3_ENTRY;
*rtte = __pa(table) | _REGION3_ENTRY;
}
table = *rtte & _REGION_ENTRY_ORIGIN;
table = __va(*rtte & _REGION_ENTRY_ORIGIN);
rc = base_segment_walk(table, addr, next, alloc);
if (rc)
return rc;
@ -450,13 +558,13 @@ static int base_region3_walk(unsigned long origin, unsigned long addr,
return 0;
}
static int base_region2_walk(unsigned long origin, unsigned long addr,
static int base_region2_walk(unsigned long *origin, unsigned long addr,
unsigned long end, int alloc)
{
unsigned long *rste, next, table;
unsigned long *rste, next, *table;
int rc;
rste = (unsigned long *) origin;
rste = origin;
rste += (addr & _REGION2_INDEX) >> _REGION2_SHIFT;
do {
next = base_region2_addr_end(addr, end);
@ -466,9 +574,9 @@ static int base_region2_walk(unsigned long origin, unsigned long addr,
table = base_crst_alloc(_REGION3_ENTRY_EMPTY);
if (!table)
return -ENOMEM;
*rste = table | _REGION2_ENTRY;
*rste = __pa(table) | _REGION2_ENTRY;
}
table = *rste & _REGION_ENTRY_ORIGIN;
table = __va(*rste & _REGION_ENTRY_ORIGIN);
rc = base_region3_walk(table, addr, next, alloc);
if (rc)
return rc;
@ -478,13 +586,13 @@ static int base_region2_walk(unsigned long origin, unsigned long addr,
return 0;
}
static int base_region1_walk(unsigned long origin, unsigned long addr,
static int base_region1_walk(unsigned long *origin, unsigned long addr,
unsigned long end, int alloc)
{
unsigned long *rfte, next, table;
unsigned long *rfte, next, *table;
int rc;
rfte = (unsigned long *) origin;
rfte = origin;
rfte += (addr & _REGION1_INDEX) >> _REGION1_SHIFT;
do {
next = base_region1_addr_end(addr, end);
@ -494,9 +602,9 @@ static int base_region1_walk(unsigned long origin, unsigned long addr,
table = base_crst_alloc(_REGION2_ENTRY_EMPTY);
if (!table)
return -ENOMEM;
*rfte = table | _REGION1_ENTRY;
*rfte = __pa(table) | _REGION1_ENTRY;
}
table = *rfte & _REGION_ENTRY_ORIGIN;
table = __va(*rfte & _REGION_ENTRY_ORIGIN);
rc = base_region2_walk(table, addr, next, alloc);
if (rc)
return rc;
@ -515,7 +623,7 @@ static int base_region1_walk(unsigned long origin, unsigned long addr,
*/
void base_asce_free(unsigned long asce)
{
unsigned long table = asce & _ASCE_ORIGIN;
unsigned long *table = __va(asce & _ASCE_ORIGIN);
if (!asce)
return;
@ -567,7 +675,7 @@ static int base_pgt_cache_init(void)
*/
unsigned long base_asce_alloc(unsigned long addr, unsigned long num_pages)
{
unsigned long asce, table, end;
unsigned long asce, *table, end;
int rc;
if (base_pgt_cache_init())
@ -578,25 +686,25 @@ unsigned long base_asce_alloc(unsigned long addr, unsigned long num_pages)
if (!table)
return 0;
rc = base_segment_walk(table, addr, end, 1);
asce = table | _ASCE_TYPE_SEGMENT | _ASCE_TABLE_LENGTH;
asce = __pa(table) | _ASCE_TYPE_SEGMENT | _ASCE_TABLE_LENGTH;
} else if (end <= _REGION2_SIZE) {
table = base_crst_alloc(_REGION3_ENTRY_EMPTY);
if (!table)
return 0;
rc = base_region3_walk(table, addr, end, 1);
asce = table | _ASCE_TYPE_REGION3 | _ASCE_TABLE_LENGTH;
asce = __pa(table) | _ASCE_TYPE_REGION3 | _ASCE_TABLE_LENGTH;
} else if (end <= _REGION1_SIZE) {
table = base_crst_alloc(_REGION2_ENTRY_EMPTY);
if (!table)
return 0;
rc = base_region2_walk(table, addr, end, 1);
asce = table | _ASCE_TYPE_REGION2 | _ASCE_TABLE_LENGTH;
asce = __pa(table) | _ASCE_TYPE_REGION2 | _ASCE_TABLE_LENGTH;
} else {
table = base_crst_alloc(_REGION1_ENTRY_EMPTY);
if (!table)
return 0;
rc = base_region1_walk(table, addr, end, 1);
asce = table | _ASCE_TYPE_REGION1 | _ASCE_TABLE_LENGTH;
asce = __pa(table) | _ASCE_TYPE_REGION1 | _ASCE_TABLE_LENGTH;
}
if (rc) {
base_asce_free(asce);

@ -771,7 +771,7 @@ int zpci_hot_reset_device(struct zpci_dev *zdev)
if (zdev->dma_table)
rc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
(u64)zdev->dma_table);
virt_to_phys(zdev->dma_table));
else
rc = zpci_dma_init_device(zdev);
if (rc) {

@ -74,7 +74,7 @@ static unsigned long *dma_get_seg_table_origin(unsigned long *entry)
if (!sto)
return NULL;
set_rt_sto(entry, sto);
set_rt_sto(entry, virt_to_phys(sto));
validate_rt_entry(entry);
entry_clr_protected(entry);
}
@ -91,7 +91,7 @@ static unsigned long *dma_get_page_table_origin(unsigned long *entry)
pto = dma_alloc_page_table();
if (!pto)
return NULL;
set_st_pto(entry, pto);
set_st_pto(entry, virt_to_phys(pto));
validate_st_entry(entry);
entry_clr_protected(entry);
}
@ -117,7 +117,7 @@ unsigned long *dma_walk_cpu_trans(unsigned long *rto, dma_addr_t dma_addr)
return &pto[px];
}
void dma_update_cpu_trans(unsigned long *entry, void *page_addr, int flags)
void dma_update_cpu_trans(unsigned long *entry, phys_addr_t page_addr, int flags)
{
if (flags & ZPCI_PTE_INVALID) {
invalidate_pt_entry(entry);
@ -132,11 +132,11 @@ void dma_update_cpu_trans(unsigned long *entry, void *page_addr, int flags)
entry_clr_protected(entry);
}
static int __dma_update_trans(struct zpci_dev *zdev, unsigned long pa,
static int __dma_update_trans(struct zpci_dev *zdev, phys_addr_t pa,
dma_addr_t dma_addr, size_t size, int flags)
{
unsigned int nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
u8 *page_addr = (u8 *) (pa & PAGE_MASK);
phys_addr_t page_addr = (pa & PAGE_MASK);
unsigned long irq_flags;
unsigned long *entry;
int i, rc = 0;
@ -217,7 +217,7 @@ static int __dma_purge_tlb(struct zpci_dev *zdev, dma_addr_t dma_addr,
return ret;
}
static int dma_update_trans(struct zpci_dev *zdev, unsigned long pa,
static int dma_update_trans(struct zpci_dev *zdev, phys_addr_t pa,
dma_addr_t dma_addr, size_t size, int flags)
{
int rc;
@ -400,7 +400,7 @@ static void *s390_dma_alloc(struct device *dev, size_t size,
{
struct zpci_dev *zdev = to_zpci(to_pci_dev(dev));
struct page *page;
unsigned long pa;
phys_addr_t pa;
dma_addr_t map;
size = PAGE_ALIGN(size);
@ -411,18 +411,18 @@ static void *s390_dma_alloc(struct device *dev, size_t size,
pa = page_to_phys(page);
map = s390_dma_map_pages(dev, page, 0, size, DMA_BIDIRECTIONAL, 0);
if (dma_mapping_error(dev, map)) {
free_pages(pa, get_order(size));
__free_pages(page, get_order(size));
return NULL;
}
atomic64_add(size / PAGE_SIZE, &zdev->allocated_pages);
if (dma_handle)
*dma_handle = map;
return (void *) pa;
return phys_to_virt(pa);
}
static void s390_dma_free(struct device *dev, size_t size,
void *pa, dma_addr_t dma_handle,
void *vaddr, dma_addr_t dma_handle,
unsigned long attrs)
{
struct zpci_dev *zdev = to_zpci(to_pci_dev(dev));
@ -430,7 +430,7 @@ static void s390_dma_free(struct device *dev, size_t size,
size = PAGE_ALIGN(size);
atomic64_sub(size / PAGE_SIZE, &zdev->allocated_pages);
s390_dma_unmap_pages(dev, dma_handle, size, DMA_BIDIRECTIONAL, 0);
free_pages((unsigned long) pa, get_order(size));
free_pages((unsigned long)vaddr, get_order(size));
}
/* Map a segment into a contiguous dma address area */
@ -443,7 +443,7 @@ static int __s390_dma_map_sg(struct device *dev, struct scatterlist *sg,
dma_addr_t dma_addr_base, dma_addr;
int flags = ZPCI_PTE_VALID;
struct scatterlist *s;
unsigned long pa = 0;
phys_addr_t pa = 0;
int ret;
dma_addr_base = dma_alloc_address(dev, nr_pages);
@ -598,7 +598,7 @@ int zpci_dma_init_device(struct zpci_dev *zdev)
}
if (zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
(u64)zdev->dma_table)) {
virt_to_phys(zdev->dma_table))) {
rc = -EIO;
goto free_bitmap;
}

@ -365,10 +365,7 @@ EXPORT_SYMBOL_GPL(zpci_write_block);
static inline void __pciwb_mio(void)
{
unsigned long unused = 0;
asm volatile (".insn rre,0xb9d50000,%[op],%[op]\n"
: [op] "+d" (unused));
asm volatile (".insn rre,0xb9d50000,0,0\n");
}
void zpci_barrier(void)

@ -45,9 +45,9 @@ static int zpci_set_airq(struct zpci_dev *zdev)
fib.fmt0.isc = PCI_ISC;
fib.fmt0.sum = 1; /* enable summary notifications */
fib.fmt0.noi = airq_iv_end(zdev->aibv);
fib.fmt0.aibv = (unsigned long) zdev->aibv->vector;
fib.fmt0.aibv = virt_to_phys(zdev->aibv->vector);
fib.fmt0.aibvo = 0; /* each zdev has its own interrupt vector */
fib.fmt0.aisb = (unsigned long) zpci_sbv->vector + (zdev->aisb/64)*8;
fib.fmt0.aisb = virt_to_phys(zpci_sbv->vector) + (zdev->aisb / 64) * 8;
fib.fmt0.aisbo = zdev->aisb & 63;
return zpci_mod_fc(req, &fib, &status) ? -EIO : 0;
@ -422,7 +422,7 @@ static int __init zpci_directed_irq_init(void)
iib.diib.isc = PCI_ISC;
iib.diib.nr_cpus = num_possible_cpus();
iib.diib.disb_addr = (u64) zpci_sbv->vector;
iib.diib.disb_addr = virt_to_phys(zpci_sbv->vector);
__zpci_set_irq_ctrl(SIC_IRQ_MODE_DIRECT, 0, &iib);
zpci_ibv = kcalloc(num_possible_cpus(), sizeof(*zpci_ibv),

@ -276,6 +276,7 @@ b285 lpctl S_RD
b286 qsi S_RD
b287 lsctl S_RD
b28e qctri S_RD
b28f qpaci S_RD
b299 srnm S_RD
b29c stfpc S_RD
b29d lfpc S_RD
@ -1098,7 +1099,7 @@ eb61 stric RSY_RDRU
eb62 mric RSY_RDRU
eb6a asi SIY_IRD
eb6e alsi SIY_IRD
eb71 lpswey SIY_URD
eb71 lpswey SIY_RD
eb7a agsi SIY_IRD
eb7e algsi SIY_IRD
eb80 icmh RSY_RURD

@ -269,6 +269,7 @@ config X86
select HAVE_ARCH_KCSAN if X86_64
select X86_FEATURE_NAMES if PROC_FS
select PROC_PID_ARCH_STATUS if PROC_FS
select HAVE_ARCH_NODE_DEV_GROUP if X86_SGX
imply IMA_SECURE_AND_OR_TRUSTED_BOOT if EFI
config INSTRUCTION_DECODER
@ -1523,16 +1524,20 @@ config X86_CPA_STATISTICS
helps to determine the effectiveness of preserving large and huge
page mappings when mapping protections are changed.
config X86_MEM_ENCRYPT
select ARCH_HAS_FORCE_DMA_UNENCRYPTED
select DYNAMIC_PHYSICAL_MASK
select ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
def_bool n
config AMD_MEM_ENCRYPT
bool "AMD Secure Memory Encryption (SME) support"
depends on X86_64 && CPU_SUP_AMD
select DMA_COHERENT_POOL
select DYNAMIC_PHYSICAL_MASK
select ARCH_USE_MEMREMAP_PROT
select ARCH_HAS_FORCE_DMA_UNENCRYPTED
select INSTRUCTION_DECODER
select ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
select ARCH_HAS_CC_PLATFORM
select X86_MEM_ENCRYPT
help
Say yes to enable support for the encryption of system memory.
This requires an AMD processor that supports Secure Memory
@ -1917,6 +1922,7 @@ config X86_SGX
select SRCU
select MMU_NOTIFIER
select NUMA_KEEP_MEMINFO if NUMA
select XARRAY_MULTI
help
Intel(R) Software Guard eXtensions (SGX) is a set of CPU instructions
that can be used by applications to set aside private regions of code

@ -28,7 +28,11 @@ KCOV_INSTRUMENT := n
targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4 vmlinux.bin.zst
KBUILD_CFLAGS := -m$(BITS) -O2
# CLANG_FLAGS must come before any cc-disable-warning or cc-option calls in
# case of cross compiling, as it has the '--target=' flag, which is needed to
# avoid errors with '-march=i386', and future flags may depend on the target to
# be valid.
KBUILD_CFLAGS := -m$(BITS) -O2 $(CLANG_FLAGS)
KBUILD_CFLAGS += -fno-strict-aliasing -fPIE
KBUILD_CFLAGS += -Wundef
KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
@ -47,7 +51,6 @@ KBUILD_CFLAGS += -D__DISABLE_EXPORTS
# Disable relocation relaxation in case the link is not PIE.
KBUILD_CFLAGS += $(call as-option,-Wa$(comma)-mrelax-relocations=no)
KBUILD_CFLAGS += -include $(srctree)/include/linux/hidden.h
KBUILD_CFLAGS += $(CLANG_FLAGS)
# sev.c indirectly includes inat-table.h which is generated during
# compilation and stored in $(objtree). Add the directory to the includes so

@ -122,7 +122,7 @@ static enum es_result vc_read_mem(struct es_em_ctxt *ctxt,
static bool early_setup_sev_es(void)
{
if (!sev_es_negotiate_protocol())
sev_es_terminate(GHCB_SEV_ES_REASON_PROTOCOL_UNSUPPORTED);
sev_es_terminate(GHCB_SEV_ES_PROT_UNSUPPORTED);
if (set_page_decrypted((unsigned long)&boot_ghcb_page))
return false;
@ -175,7 +175,7 @@ void do_boot_stage2_vc(struct pt_regs *regs, unsigned long exit_code)
enum es_result result;
if (!boot_ghcb && !early_setup_sev_es())
sev_es_terminate(GHCB_SEV_ES_REASON_GENERAL_REQUEST);
sev_es_terminate(GHCB_SEV_ES_GEN_REQ);
vc_ghcb_invalidate(boot_ghcb);
result = vc_init_em_ctxt(&ctxt, regs, exit_code);
@ -202,5 +202,5 @@ void do_boot_stage2_vc(struct pt_regs *regs, unsigned long exit_code)
if (result == ES_OK)
vc_finish_insn(&ctxt);
else if (result != ES_RETRY)
sev_es_terminate(GHCB_SEV_ES_REASON_GENERAL_REQUEST);
sev_es_terminate(GHCB_SEV_ES_GEN_REQ);
}

@ -8,8 +8,10 @@
#undef memcmp
void *memcpy(void *dst, const void *src, size_t len);
void *memmove(void *dst, const void *src, size_t len);
void *memset(void *dst, int c, size_t len);
int memcmp(const void *s1, const void *s2, size_t len);
int bcmp(const void *s1, const void *s2, size_t len);
/* Access builtin version by default. */
#define memcpy(d,s,l) __builtin_memcpy(d,s,l)
@ -25,6 +27,7 @@ extern size_t strnlen(const char *s, size_t maxlen);
extern unsigned int atou(const char *s);
extern unsigned long long simple_strtoull(const char *cp, char **endp,
unsigned int base);
long simple_strtol(const char *cp, char **endp, unsigned int base);
int kstrtoull(const char *s, unsigned int base, unsigned long long *res);
int boot_kstrtoul(const char *s, unsigned int base, unsigned long *res);

@ -62,7 +62,9 @@ obj-$(CONFIG_CRYPTO_SHA512_SSSE3) += sha512-ssse3.o
sha512-ssse3-y := sha512-ssse3-asm.o sha512-avx-asm.o sha512-avx2-asm.o sha512_ssse3_glue.o
obj-$(CONFIG_CRYPTO_BLAKE2S_X86) += blake2s-x86_64.o
blake2s-x86_64-y := blake2s-core.o blake2s-glue.o
blake2s-x86_64-y := blake2s-shash.o
obj-$(if $(CONFIG_CRYPTO_BLAKE2S_X86),y) += libblake2s-x86_64.o
libblake2s-x86_64-y := blake2s-core.o blake2s-glue.o
obj-$(CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL) += ghash-clmulni-intel.o
ghash-clmulni-intel-y := ghash-clmulni-intel_asm.o ghash-clmulni-intel_glue.o

@ -5,7 +5,6 @@
#include <crypto/internal/blake2s.h>
#include <crypto/internal/simd.h>
#include <crypto/internal/hash.h>
#include <linux/types.h>
#include <linux/jump_label.h>
@ -28,9 +27,8 @@ asmlinkage void blake2s_compress_avx512(struct blake2s_state *state,
static __ro_after_init DEFINE_STATIC_KEY_FALSE(blake2s_use_ssse3);
static __ro_after_init DEFINE_STATIC_KEY_FALSE(blake2s_use_avx512);
void blake2s_compress_arch(struct blake2s_state *state,
const u8 *block, size_t nblocks,
const u32 inc)
void blake2s_compress(struct blake2s_state *state, const u8 *block,
size_t nblocks, const u32 inc)
{
/* SIMD disables preemption, so relax after processing each page. */
BUILD_BUG_ON(SZ_4K / BLAKE2S_BLOCK_SIZE < 8);
@ -56,49 +54,12 @@ void blake2s_compress_arch(struct blake2s_state *state,
block += blocks * BLAKE2S_BLOCK_SIZE;
} while (nblocks);
}
EXPORT_SYMBOL(blake2s_compress_arch);
static int crypto_blake2s_update_x86(struct shash_desc *desc,
const u8 *in, unsigned int inlen)
{
return crypto_blake2s_update(desc, in, inlen, blake2s_compress_arch);
}
static int crypto_blake2s_final_x86(struct shash_desc *desc, u8 *out)
{
return crypto_blake2s_final(desc, out, blake2s_compress_arch);
}
#define BLAKE2S_ALG(name, driver_name, digest_size) \
{ \
.base.cra_name = name, \
.base.cra_driver_name = driver_name, \
.base.cra_priority = 200, \
.base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, \
.base.cra_blocksize = BLAKE2S_BLOCK_SIZE, \
.base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), \
.base.cra_module = THIS_MODULE, \
.digestsize = digest_size, \
.setkey = crypto_blake2s_setkey, \
.init = crypto_blake2s_init, \
.update = crypto_blake2s_update_x86, \
.final = crypto_blake2s_final_x86, \
.descsize = sizeof(struct blake2s_state), \
}
static struct shash_alg blake2s_algs[] = {
BLAKE2S_ALG("blake2s-128", "blake2s-128-x86", BLAKE2S_128_HASH_SIZE),
BLAKE2S_ALG("blake2s-160", "blake2s-160-x86", BLAKE2S_160_HASH_SIZE),
BLAKE2S_ALG("blake2s-224", "blake2s-224-x86", BLAKE2S_224_HASH_SIZE),
BLAKE2S_ALG("blake2s-256", "blake2s-256-x86", BLAKE2S_256_HASH_SIZE),
};
EXPORT_SYMBOL(blake2s_compress);
static int __init blake2s_mod_init(void)
{
if (!boot_cpu_has(X86_FEATURE_SSSE3))
return 0;
static_branch_enable(&blake2s_use_ssse3);
if (boot_cpu_has(X86_FEATURE_SSSE3))
static_branch_enable(&blake2s_use_ssse3);
if (IS_ENABLED(CONFIG_AS_AVX512) &&
boot_cpu_has(X86_FEATURE_AVX) &&
@ -109,26 +70,9 @@ static int __init blake2s_mod_init(void)
XFEATURE_MASK_AVX512, NULL))
static_branch_enable(&blake2s_use_avx512);
return IS_REACHABLE(CONFIG_CRYPTO_HASH) ?
crypto_register_shashes(blake2s_algs,
ARRAY_SIZE(blake2s_algs)) : 0;
}
static void __exit blake2s_mod_exit(void)
{
if (IS_REACHABLE(CONFIG_CRYPTO_HASH) && boot_cpu_has(X86_FEATURE_SSSE3))
crypto_unregister_shashes(blake2s_algs, ARRAY_SIZE(blake2s_algs));
return 0;
}
module_init(blake2s_mod_init);
module_exit(blake2s_mod_exit);
MODULE_ALIAS_CRYPTO("blake2s-128");
MODULE_ALIAS_CRYPTO("blake2s-128-x86");
MODULE_ALIAS_CRYPTO("blake2s-160");
MODULE_ALIAS_CRYPTO("blake2s-160-x86");
MODULE_ALIAS_CRYPTO("blake2s-224");
MODULE_ALIAS_CRYPTO("blake2s-224-x86");
MODULE_ALIAS_CRYPTO("blake2s-256");
MODULE_ALIAS_CRYPTO("blake2s-256-x86");
MODULE_LICENSE("GPL v2");

@ -0,0 +1,77 @@
// SPDX-License-Identifier: GPL-2.0 OR MIT
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#include <crypto/internal/blake2s.h>
#include <crypto/internal/simd.h>
#include <crypto/internal/hash.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/sizes.h>
#include <asm/cpufeature.h>
#include <asm/processor.h>
static int crypto_blake2s_update_x86(struct shash_desc *desc,
const u8 *in, unsigned int inlen)
{
return crypto_blake2s_update(desc, in, inlen, blake2s_compress);
}
static int crypto_blake2s_final_x86(struct shash_desc *desc, u8 *out)
{
return crypto_blake2s_final(desc, out, blake2s_compress);
}
#define BLAKE2S_ALG(name, driver_name, digest_size) \
{ \
.base.cra_name = name, \
.base.cra_driver_name = driver_name, \
.base.cra_priority = 200, \
.base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, \
.base.cra_blocksize = BLAKE2S_BLOCK_SIZE, \
.base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), \
.base.cra_module = THIS_MODULE, \
.digestsize = digest_size, \
.setkey = crypto_blake2s_setkey, \
.init = crypto_blake2s_init, \
.update = crypto_blake2s_update_x86, \
.final = crypto_blake2s_final_x86, \
.descsize = sizeof(struct blake2s_state), \
}
static struct shash_alg blake2s_algs[] = {
BLAKE2S_ALG("blake2s-128", "blake2s-128-x86", BLAKE2S_128_HASH_SIZE),
BLAKE2S_ALG("blake2s-160", "blake2s-160-x86", BLAKE2S_160_HASH_SIZE),
BLAKE2S_ALG("blake2s-224", "blake2s-224-x86", BLAKE2S_224_HASH_SIZE),
BLAKE2S_ALG("blake2s-256", "blake2s-256-x86", BLAKE2S_256_HASH_SIZE),
};
static int __init blake2s_mod_init(void)
{
if (IS_REACHABLE(CONFIG_CRYPTO_HASH) && boot_cpu_has(X86_FEATURE_SSSE3))
return crypto_register_shashes(blake2s_algs, ARRAY_SIZE(blake2s_algs));
return 0;
}
static void __exit blake2s_mod_exit(void)
{
if (IS_REACHABLE(CONFIG_CRYPTO_HASH) && boot_cpu_has(X86_FEATURE_SSSE3))
crypto_unregister_shashes(blake2s_algs, ARRAY_SIZE(blake2s_algs));
}
module_init(blake2s_mod_init);
module_exit(blake2s_mod_exit);
MODULE_ALIAS_CRYPTO("blake2s-128");
MODULE_ALIAS_CRYPTO("blake2s-128-x86");
MODULE_ALIAS_CRYPTO("blake2s-160");
MODULE_ALIAS_CRYPTO("blake2s-160-x86");
MODULE_ALIAS_CRYPTO("blake2s-224");
MODULE_ALIAS_CRYPTO("blake2s-224-x86");
MODULE_ALIAS_CRYPTO("blake2s-256");
MODULE_ALIAS_CRYPTO("blake2s-256-x86");
MODULE_LICENSE("GPL v2");

@ -172,7 +172,7 @@ $(obj)/vdso32.so.dbg: $(obj)/vdso32/vdso32.lds $(vobjs32) FORCE
# The DSO images are built using a special linker script.
#
quiet_cmd_vdso = VDSO $@
cmd_vdso = $(LD) -nostdlib -o $@ \
cmd_vdso = $(LD) -o $@ \
$(VDSO_LDFLAGS) $(VDSO_LDFLAGS_$(filter %.lds,$(^F))) \
-T $(filter %.lds,$^) $(filter %.o,$^) && \
sh $(srctree)/$(src)/checkundef.sh '$(NM)' '$@'

@ -161,7 +161,7 @@ static int get_next_avail_iommu_bnk_cntr(struct perf_event *event)
raw_spin_lock_irqsave(&piommu->lock, flags);
for (bank = 0, shift = 0; bank < max_banks; bank++) {
for (bank = 0; bank < max_banks; bank++) {
for (cntr = 0; cntr < max_cntrs; cntr++) {
shift = bank + (bank*3) + cntr;
if (piommu->cntr_assign_mask & BIT_ULL(shift)) {

@ -24,7 +24,6 @@ extern int amd_set_subcaches(int, unsigned long);
extern int amd_smn_read(u16 node, u32 address, u32 *value);
extern int amd_smn_write(u16 node, u32 address, u32 value);
extern int amd_df_indirect_read(u16 node, u8 func, u16 reg, u8 instance_id, u32 *lo);
struct amd_l3_cache {
unsigned indices;

@ -41,7 +41,4 @@ extern void fpu__clear_user_states(struct fpu *fpu);
extern bool fpu__restore_sig(void __user *buf, int ia32_frame);
extern void restore_fpregs_from_fpstate(struct fpstate *fpstate, u64 mask);
extern bool copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size);
#endif /* _ASM_X86_FPU_SIGNAL_H */

@ -19,6 +19,7 @@ bool insn_has_rep_prefix(struct insn *insn);
void __user *insn_get_addr_ref(struct insn *insn, struct pt_regs *regs);
int insn_get_modrm_rm_off(struct insn *insn, struct pt_regs *regs);
int insn_get_modrm_reg_off(struct insn *insn, struct pt_regs *regs);
unsigned long *insn_get_modrm_reg_ptr(struct insn *insn, struct pt_regs *regs);
unsigned long insn_get_seg_base(struct pt_regs *regs, int seg_reg_idx);
int insn_get_code_seg_params(struct pt_regs *regs);
int insn_get_effective_ip(struct pt_regs *regs, unsigned long *ip);
@ -29,4 +30,16 @@ int insn_fetch_from_user_inatomic(struct pt_regs *regs,
bool insn_decode_from_regs(struct insn *insn, struct pt_regs *regs,
unsigned char buf[MAX_INSN_SIZE], int buf_size);
enum mmio_type {
MMIO_DECODE_FAILED,
MMIO_WRITE,
MMIO_WRITE_IMM,
MMIO_READ,
MMIO_READ_ZERO_EXTEND,
MMIO_READ_SIGN_EXTEND,
MMIO_MOVS,
};
enum mmio_type insn_decode_mmio(struct insn *insn, int *bytes);
#endif /* _ASM_X86_INSN_EVAL_H */
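A hypothetical caller of the new decoder might look like the sketch below; insn_fetch_from_user() and insn_decode_from_regs() are existing helpers declared in this header, while handle_mmio_fault() and the emulate_mmio_*() calls are made-up names for illustration:

static int handle_mmio_fault(struct pt_regs *regs)
{
	unsigned char buf[MAX_INSN_SIZE];
	enum mmio_type type;
	struct insn insn;
	int nr, bytes;

	nr = insn_fetch_from_user(regs, buf);
	if (!nr || !insn_decode_from_regs(&insn, regs, buf, nr))
		return -EFAULT;

	type = insn_decode_mmio(&insn, &bytes);	/* bytes = access size */
	switch (type) {
	case MMIO_WRITE:
	case MMIO_WRITE_IMM:
		return emulate_mmio_write(regs, &insn, bytes);	/* made up */
	case MMIO_READ:
	case MMIO_READ_ZERO_EXTEND:
	case MMIO_READ_SIGN_EXTEND:
		return emulate_mmio_read(regs, &insn, bytes);	/* made up */
	default:
		return -EINVAL;	/* MMIO_DECODE_FAILED, or MMIO_MOVS here */
	}
}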

@ -40,6 +40,7 @@
#include <linux/string.h>
#include <linux/compiler.h>
#include <linux/cc_platform.h>
#include <asm/page.h>
#include <asm/early_ioremap.h>
#include <asm/pgtable_types.h>
@ -256,21 +257,6 @@ static inline void slow_down_io(void)
#endif
#ifdef CONFIG_AMD_MEM_ENCRYPT
#include <linux/jump_label.h>
extern struct static_key_false sev_enable_key;
static inline bool sev_key_active(void)
{
return static_branch_unlikely(&sev_enable_key);
}
#else /* !CONFIG_AMD_MEM_ENCRYPT */
static inline bool sev_key_active(void) { return false; }
#endif /* CONFIG_AMD_MEM_ENCRYPT */
#define BUILDIO(bwl, bw, type) \
static inline void out##bwl(unsigned type value, int port) \
{ \
@ -301,7 +287,7 @@ static inline unsigned type in##bwl##_p(int port) \
\
static inline void outs##bwl(int port, const void *addr, unsigned long count) \
{ \
if (sev_key_active()) { \
if (cc_platform_has(CC_ATTR_GUEST_UNROLL_STRING_IO)) { \
unsigned type *value = (unsigned type *)addr; \
while (count) { \
out##bwl(*value, port); \
@ -317,7 +303,7 @@ static inline void outs##bwl(int port, const void *addr, unsigned long count) \
\
static inline void ins##bwl(int port, void *addr, unsigned long count) \
{ \
if (sev_key_active()) { \
if (cc_platform_has(CC_ATTR_GUEST_UNROLL_STRING_IO)) { \
unsigned type *value = (unsigned type *)addr; \
while (count) { \
*value = in##bwl(port); \

@ -114,8 +114,6 @@ static __always_inline unsigned long arch_local_irq_save(void)
#define SAVE_FLAGS pushfq; popq %rax
#endif
#define INTERRUPT_RETURN jmp native_iret
#endif
#endif /* __ASSEMBLY__ */
@ -143,8 +141,13 @@ static __always_inline void arch_local_irq_restore(unsigned long flags)
#ifdef CONFIG_X86_64
#ifdef CONFIG_XEN_PV
#define SWAPGS ALTERNATIVE "swapgs", "", X86_FEATURE_XENPV
#define INTERRUPT_RETURN \
ANNOTATE_RETPOLINE_SAFE; \
ALTERNATIVE_TERNARY("jmp *paravirt_iret(%rip);", \
X86_FEATURE_XENPV, "jmp xen_iret;", "jmp native_iret;")
#else
#define SWAPGS swapgs
#define INTERRUPT_RETURN jmp native_iret
#endif
#endif
#endif /* !__ASSEMBLY__ */

@ -313,31 +313,22 @@ enum smca_bank_types {
SMCA_SMU, /* System Management Unit */
SMCA_SMU_V2,
SMCA_MP5, /* Microprocessor 5 Unit */
SMCA_MPDMA, /* MPDMA Unit */
SMCA_NBIO, /* Northbridge IO Unit */
SMCA_PCIE, /* PCI Express Unit */
SMCA_PCIE_V2,
SMCA_XGMI_PCS, /* xGMI PCS Unit */
SMCA_NBIF, /* NBIF Unit */
SMCA_SHUB, /* System HUB Unit */
SMCA_SATA, /* SATA Unit */
SMCA_USB, /* USB Unit */
SMCA_GMI_PCS, /* GMI PCS Unit */
SMCA_XGMI_PHY, /* xGMI PHY Unit */
SMCA_WAFL_PHY, /* WAFL PHY Unit */
SMCA_GMI_PHY, /* GMI PHY Unit */
N_SMCA_BANK_TYPES
};
#define HWID_MCATYPE(hwid, mcatype) (((hwid) << 16) | (mcatype))
struct smca_hwid {
unsigned int bank_type; /* Use with smca_bank_types for easy indexing. */
u32 hwid_mcatype; /* (hwid,mcatype) tuple */
u8 count; /* Number of instances. */
};
struct smca_bank {
struct smca_hwid *hwid;
u32 id; /* Value of MCA_IPID[InstanceId]. */
u8 sysfs_id; /* Value used for sysfs name. */
};
extern struct smca_bank smca_banks[MAX_NR_BANKS];
extern const char *smca_get_long_name(enum smca_bank_types t);
extern bool amd_mce_is_memory_error(struct mce *m);
@ -345,16 +336,13 @@ extern int mce_threshold_create_device(unsigned int cpu);
extern int mce_threshold_remove_device(unsigned int cpu);
void mce_amd_feature_init(struct cpuinfo_x86 *c);
int umc_normaddr_to_sysaddr(u64 norm_addr, u16 nid, u8 umc, u64 *sys_addr);
enum smca_bank_types smca_get_bank_type(unsigned int bank);
enum smca_bank_types smca_get_bank_type(unsigned int cpu, unsigned int bank);
#else
static inline int mce_threshold_create_device(unsigned int cpu) { return 0; };
static inline int mce_threshold_remove_device(unsigned int cpu) { return 0; };
static inline bool amd_mce_is_memory_error(struct mce *m) { return false; };
static inline void mce_amd_feature_init(struct cpuinfo_x86 *c) { }
static inline int
umc_normaddr_to_sysaddr(u64 norm_addr, u16 nid, u8 umc, u64 *sys_addr) { return -EINVAL; };
#endif
static inline void mce_hygon_feature_init(struct cpuinfo_x86 *c) { return mce_amd_feature_init(c); }

@ -24,8 +24,8 @@
#define _ASM_X86_MTRR_H
#include <uapi/asm/mtrr.h>
#include <asm/memtype.h>
void mtrr_bp_init(void);
/*
* The following functions are for use by other drivers that cannot use
@ -43,7 +43,6 @@ extern int mtrr_del(int reg, unsigned long base, unsigned long size);
extern int mtrr_del_page(int reg, unsigned long base, unsigned long size);
extern void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi);
extern void mtrr_ap_init(void);
extern void mtrr_bp_init(void);
extern void set_mtrr_aps_delayed_init(void);
extern void mtrr_aps_init(void);
extern void mtrr_bp_restore(void);
@ -84,11 +83,6 @@ static inline int mtrr_trim_uncached_memory(unsigned long end_pfn)
static inline void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi)
{
}
static inline void mtrr_bp_init(void)
{
pat_disable("PAT support disabled because CONFIG_MTRR is disabled in the kernel.");
}
#define mtrr_ap_init() do {} while (0)
#define set_mtrr_aps_delayed_init() do {} while (0)
#define mtrr_aps_init() do {} while (0)

@ -5,6 +5,7 @@
#include <asm/page_64_types.h>
#ifndef __ASSEMBLY__
#include <asm/cpufeatures.h>
#include <asm/alternative.h>
/* duplicated to the one in bootmem.h */

@ -752,11 +752,6 @@ extern void default_banner(void);
#define PARA_SITE(ptype, ops) _PVSITE(ptype, ops, .quad, 8)
#define PARA_INDIRECT(addr) *addr(%rip)
#define INTERRUPT_RETURN \
ANNOTATE_RETPOLINE_SAFE; \
ALTERNATIVE_TERNARY("jmp *paravirt_iret(%rip);", \
X86_FEATURE_XENPV, "jmp xen_iret;", "jmp native_iret;")
#ifdef CONFIG_DEBUG_ENTRY
.macro PARA_IRQ_save_fl
PARA_SITE(PARA_PATCH(PV_IRQ_save_fl),

@ -22,6 +22,7 @@
#define pgprot_decrypted(prot) __pgprot(__sme_clr(pgprot_val(prot)))
#ifndef __ASSEMBLY__
#include <linux/spinlock.h>
#include <asm/x86_init.h>
#include <asm/pkru.h>
#include <asm/fpu/api.h>

@ -855,4 +855,12 @@ enum mds_mitigations {
MDS_MITIGATION_VMWERV,
};
#ifdef CONFIG_X86_SGX
int arch_memory_failure(unsigned long pfn, int flags);
#define arch_memory_failure arch_memory_failure
bool arch_is_platform_page(u64 paddr);
#define arch_is_platform_page arch_is_platform_page
#endif
#endif /* _ASM_X86_PROCESSOR_H */
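Outside this hunk, the usual pattern for such arch hooks is a define-overrides-default fallback in the generic headers, so non-SGX builds keep working unchanged; a sketch of what that generic side looks like (the exact mm/ wiring is not shown in this patch):

#ifndef arch_memory_failure
static inline int arch_memory_failure(unsigned long pfn, int flags)
{
	return -ENXIO;	/* pfn not handled by the architecture */
}
#endif

#ifndef arch_is_platform_page
static inline bool arch_is_platform_page(u64 paddr)
{
	return false;	/* no arch-owned pages outside the direct map */
}
#endif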

@ -89,6 +89,7 @@ static inline void set_real_mode_mem(phys_addr_t mem)
}
void reserve_real_mode(void);
void load_trampoline_pgtable(void);
#endif /* __ASSEMBLY__ */

@ -2,6 +2,7 @@
#ifndef _ASM_X86_SET_MEMORY_H
#define _ASM_X86_SET_MEMORY_H
#include <linux/mm.h>
#include <asm/page.h>
#include <asm-generic/set_memory.h>
@ -99,6 +100,9 @@ static inline int set_mce_nospec(unsigned long pfn, bool unmap)
unsigned long decoy_addr;
int rc;
/* SGX pages are not in the 1:1 map */
if (arch_is_platform_page(pfn << PAGE_SHIFT))
return 0;
/*
* We would like to just call:
* set_memory_XX((unsigned long)pfn_to_kaddr(pfn), 1);

@ -18,20 +18,19 @@
/* SEV Information Request/Response */
#define GHCB_MSR_SEV_INFO_RESP 0x001
#define GHCB_MSR_SEV_INFO_REQ 0x002
#define GHCB_MSR_VER_MAX_POS 48
#define GHCB_MSR_VER_MAX_MASK 0xffff
#define GHCB_MSR_VER_MIN_POS 32
#define GHCB_MSR_VER_MIN_MASK 0xffff
#define GHCB_MSR_CBIT_POS 24
#define GHCB_MSR_CBIT_MASK 0xff
#define GHCB_MSR_SEV_INFO(_max, _min, _cbit) \
((((_max) & GHCB_MSR_VER_MAX_MASK) << GHCB_MSR_VER_MAX_POS) | \
(((_min) & GHCB_MSR_VER_MIN_MASK) << GHCB_MSR_VER_MIN_POS) | \
(((_cbit) & GHCB_MSR_CBIT_MASK) << GHCB_MSR_CBIT_POS) | \
#define GHCB_MSR_SEV_INFO(_max, _min, _cbit) \
/* GHCBData[63:48] */ \
((((_max) & 0xffff) << 48) | \
/* GHCBData[47:32] */ \
(((_min) & 0xffff) << 32) | \
/* GHCBData[31:24] */ \
(((_cbit) & 0xff) << 24) | \
GHCB_MSR_SEV_INFO_RESP)
#define GHCB_MSR_INFO(v) ((v) & 0xfffUL)
#define GHCB_MSR_PROTO_MAX(v) (((v) >> GHCB_MSR_VER_MAX_POS) & GHCB_MSR_VER_MAX_MASK)
#define GHCB_MSR_PROTO_MIN(v) (((v) >> GHCB_MSR_VER_MIN_POS) & GHCB_MSR_VER_MIN_MASK)
#define GHCB_MSR_PROTO_MAX(v) (((v) >> 48) & 0xffff)
#define GHCB_MSR_PROTO_MIN(v) (((v) >> 32) & 0xffff)
/* CPUID Request/Response */
#define GHCB_MSR_CPUID_REQ 0x004
@ -46,30 +45,36 @@
#define GHCB_CPUID_REQ_EBX 1
#define GHCB_CPUID_REQ_ECX 2
#define GHCB_CPUID_REQ_EDX 3
#define GHCB_CPUID_REQ(fn, reg) \
(GHCB_MSR_CPUID_REQ | \
(((unsigned long)reg & GHCB_MSR_CPUID_REG_MASK) << GHCB_MSR_CPUID_REG_POS) | \
(((unsigned long)fn) << GHCB_MSR_CPUID_FUNC_POS))
#define GHCB_CPUID_REQ(fn, reg) \
/* GHCBData[11:0] */ \
(GHCB_MSR_CPUID_REQ | \
/* GHCBData[31:12] */ \
(((unsigned long)(reg) & 0x3) << 30) | \
/* GHCBData[63:32] */ \
(((unsigned long)fn) << 32))
/* AP Reset Hold */
#define GHCB_MSR_AP_RESET_HOLD_REQ 0x006
#define GHCB_MSR_AP_RESET_HOLD_RESP 0x007
#define GHCB_MSR_AP_RESET_HOLD_REQ 0x006
#define GHCB_MSR_AP_RESET_HOLD_RESP 0x007
/* GHCB Hypervisor Feature Request/Response */
#define GHCB_MSR_HV_FT_REQ 0x080
#define GHCB_MSR_HV_FT_RESP 0x081
#define GHCB_MSR_HV_FT_REQ 0x080
#define GHCB_MSR_HV_FT_RESP 0x081
#define GHCB_MSR_TERM_REQ 0x100
#define GHCB_MSR_TERM_REASON_SET_POS 12
#define GHCB_MSR_TERM_REASON_SET_MASK 0xf
#define GHCB_MSR_TERM_REASON_POS 16
#define GHCB_MSR_TERM_REASON_MASK 0xff
#define GHCB_SEV_TERM_REASON(reason_set, reason_val) \
(((((u64)reason_set) & GHCB_MSR_TERM_REASON_SET_MASK) << GHCB_MSR_TERM_REASON_SET_POS) | \
((((u64)reason_val) & GHCB_MSR_TERM_REASON_MASK) << GHCB_MSR_TERM_REASON_POS))
#define GHCB_SEV_ES_REASON_GENERAL_REQUEST 0
#define GHCB_SEV_ES_REASON_PROTOCOL_UNSUPPORTED 1
#define GHCB_SEV_TERM_REASON(reason_set, reason_val) \
/* GHCBData[15:12] */ \
(((((u64)reason_set) & 0xf) << 12) | \
/* GHCBData[23:16] */ \
((((u64)reason_val) & 0xff) << 16))
#define GHCB_SEV_ES_GEN_REQ 0
#define GHCB_SEV_ES_PROT_UNSUPPORTED 1
#define GHCB_RESP_CODE(v) ((v) & GHCB_MSR_INFO_MASK)
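Since the rewritten macros spell out the GHCBData bit positions inline, a quick worked example shows the round trip; ghcb_msr_example() is a made-up name, and ULL arguments are used so the shifts are performed in 64 bits:

static void ghcb_msr_example(void)
{
	u64 val = GHCB_MSR_SEV_INFO(2ULL, 1ULL, 51ULL);

	BUG_ON(GHCB_MSR_INFO(val) != GHCB_MSR_SEV_INFO_RESP);	/* 0x001 */
	BUG_ON(GHCB_MSR_PROTO_MAX(val) != 2);	/* GHCBData[63:48] */
	BUG_ON(GHCB_MSR_PROTO_MIN(val) != 1);	/* GHCBData[47:32] */
}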

@ -261,4 +261,9 @@ extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
#endif /* !MODULE */
static inline void __native_tlb_flush_global(unsigned long cr4)
{
native_write_cr4(cr4 ^ X86_CR4_PGE);
native_write_cr4(cr4);
}
#endif /* _ASM_X86_TLBFLUSH_H */

@ -314,11 +314,12 @@ do { \
do { \
__chk_user_ptr(ptr); \
switch (size) { \
unsigned char x_u8__; \
case 1: \
case 1: { \
unsigned char x_u8__; \
__get_user_asm(x_u8__, ptr, "b", "=q", label); \
(x) = x_u8__; \
break; \
} \
case 2: \
__get_user_asm(x, ptr, "w", "=r", label); \
break; \

@ -29,7 +29,7 @@
#define PCI_DEVICE_ID_AMD_19H_M40H_DF_F4 0x167d
#define PCI_DEVICE_ID_AMD_19H_M50H_DF_F4 0x166e
/* Protect the PCI config register pairs used for SMN and DF indirect access. */
/* Protect the PCI config register pairs used for SMN. */
static DEFINE_MUTEX(smn_mutex);
static u32 *flush_words;
@ -182,53 +182,6 @@ int amd_smn_write(u16 node, u32 address, u32 value)
}
EXPORT_SYMBOL_GPL(amd_smn_write);
/*
* Data Fabric Indirect Access uses FICAA/FICAD.
*
* Fabric Indirect Configuration Access Address (FICAA): Constructed based
* on the device's Instance Id and the PCI function and register offset of
* the desired register.
*
* Fabric Indirect Configuration Access Data (FICAD): There are FICAD LO
* and FICAD HI registers but so far we only need the LO register.
*/
int amd_df_indirect_read(u16 node, u8 func, u16 reg, u8 instance_id, u32 *lo)
{
struct pci_dev *F4;
u32 ficaa;
int err = -ENODEV;
if (node >= amd_northbridges.num)
goto out;
F4 = node_to_amd_nb(node)->link;
if (!F4)
goto out;
ficaa = 1;
ficaa |= reg & 0x3FC;
ficaa |= (func & 0x7) << 11;
ficaa |= instance_id << 16;
mutex_lock(&smn_mutex);
err = pci_write_config_dword(F4, 0x5C, ficaa);
if (err) {
pr_warn("Error writing DF Indirect FICAA, FICAA=0x%x\n", ficaa);
goto out_unlock;
}
err = pci_read_config_dword(F4, 0x98, lo);
if (err)
pr_warn("Error reading DF Indirect FICAD LO, FICAA=0x%x.\n", ficaa);
out_unlock:
mutex_unlock(&smn_mutex);
out:
return err;
}
EXPORT_SYMBOL_GPL(amd_df_indirect_read);
int amd_cache_northbridges(void)
{

@ -50,6 +50,14 @@ static bool amd_cc_platform_has(enum cc_attr attr)
case CC_ATTR_GUEST_STATE_ENCRYPT:
return sev_status & MSR_AMD64_SEV_ES_ENABLED;
/*
* With SEV, the rep string I/O instructions need to be unrolled
* but SEV-ES supports them through the #VC handler.
*/
case CC_ATTR_GUEST_UNROLL_STRING_IO:
return (sev_status & MSR_AMD64_SEV_ENABLED) &&
!(sev_status & MSR_AMD64_SEV_ES_ENABLED);
default:
return false;
}

@ -384,7 +384,7 @@ void native_write_cr0(unsigned long val)
}
EXPORT_SYMBOL(native_write_cr0);
void native_write_cr4(unsigned long val)
void __no_profile native_write_cr4(unsigned long val)
{
unsigned long bits_changed = 0;
@ -1787,6 +1787,17 @@ EXPORT_PER_CPU_SYMBOL(__preempt_count);
DEFINE_PER_CPU(unsigned long, cpu_current_top_of_stack) = TOP_OF_INIT_STACK;
static void wrmsrl_cstar(unsigned long val)
{
/*
* Intel CPUs do not support 32-bit SYSCALL. Writing to MSR_CSTAR
* is so far ignored by the CPU, but raises a #VE trap in a TDX
* guest. Avoid the pointless write on all Intel CPUs.
*/
if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
wrmsrl(MSR_CSTAR, val);
}
/* May not be marked __init: used by software suspend */
void syscall_init(void)
{
@ -1794,7 +1805,7 @@ void syscall_init(void)
wrmsrl(MSR_LSTAR, (unsigned long)entry_SYSCALL_64);
#ifdef CONFIG_IA32_EMULATION
wrmsrl(MSR_CSTAR, (unsigned long)entry_SYSCALL_compat);
wrmsrl_cstar((unsigned long)entry_SYSCALL_compat);
/*
* This only works on Intel CPUs.
* On AMD CPUs these MSRs are 32-bit, CPU truncates MSR_IA32_SYSENTER_EIP.
@ -1806,7 +1817,7 @@ void syscall_init(void)
(unsigned long)(cpu_entry_stack(smp_processor_id()) + 1));
wrmsrl_safe(MSR_IA32_SYSENTER_EIP, (u64)entry_SYSENTER_compat);
#else
wrmsrl(MSR_CSTAR, (unsigned long)ignore_sysret);
wrmsrl_cstar((unsigned long)ignore_sysret);
wrmsrl_safe(MSR_IA32_SYSENTER_CS, (u64)GDT_ENTRY_INVALID_SEG);
wrmsrl_safe(MSR_IA32_SYSENTER_ESP, 0ULL);
wrmsrl_safe(MSR_IA32_SYSENTER_EIP, 0ULL);

@ -71,6 +71,22 @@ static const char * const smca_umc_block_names[] = {
"misc_umc"
};
#define HWID_MCATYPE(hwid, mcatype) (((hwid) << 16) | (mcatype))
struct smca_hwid {
unsigned int bank_type; /* Use with smca_bank_types for easy indexing. */
u32 hwid_mcatype; /* (hwid,mcatype) tuple */
};
struct smca_bank {
const struct smca_hwid *hwid;
u32 id; /* Value of MCA_IPID[InstanceId]. */
u8 sysfs_id; /* Value used for sysfs name. */
};
static DEFINE_PER_CPU_READ_MOSTLY(struct smca_bank[MAX_NR_BANKS], smca_banks);
static DEFINE_PER_CPU_READ_MOSTLY(u8[N_SMCA_BANK_TYPES], smca_bank_counts);
struct smca_bank_name {
const char *name; /* Short name for sysfs */
const char *long_name; /* Long name for pretty-printing */
@ -95,11 +111,18 @@ static struct smca_bank_name smca_names[] = {
[SMCA_PSP ... SMCA_PSP_V2] = { "psp", "Platform Security Processor" },
[SMCA_SMU ... SMCA_SMU_V2] = { "smu", "System Management Unit" },
[SMCA_MP5] = { "mp5", "Microprocessor 5 Unit" },
[SMCA_MPDMA] = { "mpdma", "MPDMA Unit" },
[SMCA_NBIO] = { "nbio", "Northbridge IO Unit" },
[SMCA_PCIE ... SMCA_PCIE_V2] = { "pcie", "PCI Express Unit" },
[SMCA_XGMI_PCS] = { "xgmi_pcs", "Ext Global Memory Interconnect PCS Unit" },
[SMCA_NBIF] = { "nbif", "NBIF Unit" },
[SMCA_SHUB] = { "shub", "System Hub Unit" },
[SMCA_SATA] = { "sata", "SATA Unit" },
[SMCA_USB] = { "usb", "USB Unit" },
[SMCA_GMI_PCS] = { "gmi_pcs", "Global Memory Interconnect PCS Unit" },
[SMCA_XGMI_PHY] = { "xgmi_phy", "Ext Global Memory Interconnect PHY Unit" },
[SMCA_WAFL_PHY] = { "wafl_phy", "WAFL PHY Unit" },
[SMCA_GMI_PHY] = { "gmi_phy", "Global Memory Interconnect PHY Unit" },
};
static const char *smca_get_name(enum smca_bank_types t)
@ -119,14 +142,14 @@ const char *smca_get_long_name(enum smca_bank_types t)
}
EXPORT_SYMBOL_GPL(smca_get_long_name);
enum smca_bank_types smca_get_bank_type(unsigned int bank)
enum smca_bank_types smca_get_bank_type(unsigned int cpu, unsigned int bank)
{
struct smca_bank *b;
if (bank >= MAX_NR_BANKS)
return N_SMCA_BANK_TYPES;
b = &smca_banks[bank];
b = &per_cpu(smca_banks, cpu)[bank];
if (!b->hwid)
return N_SMCA_BANK_TYPES;
@ -134,7 +157,7 @@ enum smca_bank_types smca_get_bank_type(unsigned int bank)
}
EXPORT_SYMBOL_GPL(smca_get_bank_type);
static struct smca_hwid smca_hwid_mcatypes[] = {
static const struct smca_hwid smca_hwid_mcatypes[] = {
/* { bank_type, hwid_mcatype } */
/* Reserved type */
@ -174,6 +197,9 @@ static struct smca_hwid smca_hwid_mcatypes[] = {
/* Microprocessor 5 Unit MCA type */
{ SMCA_MP5, HWID_MCATYPE(0x01, 0x2) },
/* MPDMA MCA type */
{ SMCA_MPDMA, HWID_MCATYPE(0x01, 0x3) },
/* Northbridge IO Unit MCA type */
{ SMCA_NBIO, HWID_MCATYPE(0x18, 0x0) },
@ -181,19 +207,17 @@ static struct smca_hwid smca_hwid_mcatypes[] = {
{ SMCA_PCIE, HWID_MCATYPE(0x46, 0x0) },
{ SMCA_PCIE_V2, HWID_MCATYPE(0x46, 0x1) },
/* xGMI PCS MCA type */
{ SMCA_XGMI_PCS, HWID_MCATYPE(0x50, 0x0) },
/* xGMI PHY MCA type */
{ SMCA_NBIF, HWID_MCATYPE(0x6C, 0x0) },
{ SMCA_SHUB, HWID_MCATYPE(0x80, 0x0) },
{ SMCA_SATA, HWID_MCATYPE(0xA8, 0x0) },
{ SMCA_USB, HWID_MCATYPE(0xAA, 0x0) },
{ SMCA_GMI_PCS, HWID_MCATYPE(0x241, 0x0) },
{ SMCA_XGMI_PHY, HWID_MCATYPE(0x259, 0x0) },
/* WAFL PHY MCA type */
{ SMCA_WAFL_PHY, HWID_MCATYPE(0x267, 0x0) },
{ SMCA_GMI_PHY, HWID_MCATYPE(0x269, 0x0) },
};
struct smca_bank smca_banks[MAX_NR_BANKS];
EXPORT_SYMBOL_GPL(smca_banks);
/*
* In SMCA enabled processors, we can have multiple banks for a given IP type.
* So to define a unique name for each bank, we use a temp c-string to append
* the MCA_IPID[InstanceId] to the type's name in get_name().
*/
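For example, with a hypothetical bank layout:

/*
 * A CPU reporting two UMC banks gets the sysfs names "umc_0" and
 * "umc_1"; a bank type with a single instance keeps the bare name
 * "umc" (see the count check in get_name() further down).
 */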
@ -249,8 +273,9 @@ static void smca_set_misc_banks_map(unsigned int bank, unsigned int cpu)
static void smca_configure(unsigned int bank, unsigned int cpu)
{
u8 *bank_counts = this_cpu_ptr(smca_bank_counts);
const struct smca_hwid *s_hwid;
unsigned int i, hwid_mcatype;
struct smca_hwid *s_hwid;
u32 high, low;
u32 smca_config = MSR_AMD64_SMCA_MCx_CONFIG(bank);
@ -286,10 +311,6 @@ static void smca_configure(unsigned int bank, unsigned int cpu)
smca_set_misc_banks_map(bank, cpu);
/* Return early if this bank was already initialized. */
if (smca_banks[bank].hwid && smca_banks[bank].hwid->hwid_mcatype != 0)
return;
if (rdmsr_safe(MSR_AMD64_SMCA_MCx_IPID(bank), &low, &high)) {
pr_warn("Failed to read MCA_IPID for bank %d\n", bank);
return;
@ -300,10 +321,11 @@ static void smca_configure(unsigned int bank, unsigned int cpu)
for (i = 0; i < ARRAY_SIZE(smca_hwid_mcatypes); i++) {
s_hwid = &smca_hwid_mcatypes[i];
if (hwid_mcatype == s_hwid->hwid_mcatype) {
smca_banks[bank].hwid = s_hwid;
smca_banks[bank].id = low;
smca_banks[bank].sysfs_id = s_hwid->count++;
this_cpu_ptr(smca_banks)[bank].hwid = s_hwid;
this_cpu_ptr(smca_banks)[bank].id = low;
this_cpu_ptr(smca_banks)[bank].sysfs_id = bank_counts[s_hwid->bank_type]++;
break;
}
}
@ -589,7 +611,7 @@ prepare_threshold_block(unsigned int bank, unsigned int block, u32 addr,
bool amd_filter_mce(struct mce *m)
{
enum smca_bank_types bank_type = smca_get_bank_type(m->bank);
enum smca_bank_types bank_type = smca_get_bank_type(m->extcpu, m->bank);
struct cpuinfo_x86 *c = &boot_cpu_data;
/* See Family 17h Models 10h-2Fh Erratum #1114. */
@ -627,7 +649,7 @@ static void disable_err_thresholding(struct cpuinfo_x86 *c, unsigned int bank)
} else if (c->x86 == 0x17 &&
(c->x86_model >= 0x10 && c->x86_model <= 0x2F)) {
if (smca_get_bank_type(bank) != SMCA_IF)
if (smca_get_bank_type(smp_processor_id(), bank) != SMCA_IF)
return;
msrs[0] = MSR_AMD64_SMCA_MCx_MISC(bank);
@ -689,213 +711,13 @@ void mce_amd_feature_init(struct cpuinfo_x86 *c)
deferred_error_interrupt_enable(c);
}
int umc_normaddr_to_sysaddr(u64 norm_addr, u16 nid, u8 umc, u64 *sys_addr)
{
u64 dram_base_addr, dram_limit_addr, dram_hole_base;
/* We start from the normalized address */
u64 ret_addr = norm_addr;
u32 tmp;
u8 die_id_shift, die_id_mask, socket_id_shift, socket_id_mask;
u8 intlv_num_dies, intlv_num_chan, intlv_num_sockets;
u8 intlv_addr_sel, intlv_addr_bit;
u8 num_intlv_bits, hashed_bit;
u8 lgcy_mmio_hole_en, base = 0;
u8 cs_mask, cs_id = 0;
bool hash_enabled = false;
/* Read D18F0x1B4 (DramOffset), check if base 1 is used. */
if (amd_df_indirect_read(nid, 0, 0x1B4, umc, &tmp))
goto out_err;
/* Remove HiAddrOffset from normalized address, if enabled: */
if (tmp & BIT(0)) {
u64 hi_addr_offset = (tmp & GENMASK_ULL(31, 20)) << 8;
if (norm_addr >= hi_addr_offset) {
ret_addr -= hi_addr_offset;
base = 1;
}
}
/* Read D18F0x110 (DramBaseAddress). */
if (amd_df_indirect_read(nid, 0, 0x110 + (8 * base), umc, &tmp))
goto out_err;
/* Check if address range is valid. */
if (!(tmp & BIT(0))) {
pr_err("%s: Invalid DramBaseAddress range: 0x%x.\n",
__func__, tmp);
goto out_err;
}
lgcy_mmio_hole_en = tmp & BIT(1);
intlv_num_chan = (tmp >> 4) & 0xF;
intlv_addr_sel = (tmp >> 8) & 0x7;
dram_base_addr = (tmp & GENMASK_ULL(31, 12)) << 16;
/* {0, 1, 2, 3} map to address bits {8, 9, 10, 11} respectively */
if (intlv_addr_sel > 3) {
pr_err("%s: Invalid interleave address select %d.\n",
__func__, intlv_addr_sel);
goto out_err;
}
/* Read D18F0x114 (DramLimitAddress). */
if (amd_df_indirect_read(nid, 0, 0x114 + (8 * base), umc, &tmp))
goto out_err;
intlv_num_sockets = (tmp >> 8) & 0x1;
intlv_num_dies = (tmp >> 10) & 0x3;
dram_limit_addr = ((tmp & GENMASK_ULL(31, 12)) << 16) | GENMASK_ULL(27, 0);
intlv_addr_bit = intlv_addr_sel + 8;
/* Re-use intlv_num_chan by setting it equal to log2(#channels) */
switch (intlv_num_chan) {
case 0: intlv_num_chan = 0; break;
case 1: intlv_num_chan = 1; break;
case 3: intlv_num_chan = 2; break;
case 5: intlv_num_chan = 3; break;
case 7: intlv_num_chan = 4; break;
case 8: intlv_num_chan = 1;
hash_enabled = true;
break;
default:
pr_err("%s: Invalid number of interleaved channels %d.\n",
__func__, intlv_num_chan);
goto out_err;
}
num_intlv_bits = intlv_num_chan;
if (intlv_num_dies > 2) {
pr_err("%s: Invalid number of interleaved nodes/dies %d.\n",
__func__, intlv_num_dies);
goto out_err;
}
num_intlv_bits += intlv_num_dies;
/* Add a bit if sockets are interleaved. */
num_intlv_bits += intlv_num_sockets;
/* Assert num_intlv_bits <= 4 */
if (num_intlv_bits > 4) {
pr_err("%s: Invalid interleave bits %d.\n",
__func__, num_intlv_bits);
goto out_err;
}
if (num_intlv_bits > 0) {
u64 temp_addr_x, temp_addr_i, temp_addr_y;
u8 die_id_bit, sock_id_bit, cs_fabric_id;
/*
* Read FabricBlockInstanceInformation3_CS[BlockFabricID].
* This is the fabric id for this coherent slave. Use
* umc/channel# as instance id of the coherent slave
* for FICAA.
*/
if (amd_df_indirect_read(nid, 0, 0x50, umc, &tmp))
goto out_err;
cs_fabric_id = (tmp >> 8) & 0xFF;
die_id_bit = 0;
/* If interleaved over more than 1 channel: */
if (intlv_num_chan) {
die_id_bit = intlv_num_chan;
cs_mask = (1 << die_id_bit) - 1;
cs_id = cs_fabric_id & cs_mask;
}
sock_id_bit = die_id_bit;
/* Read D18F1x208 (SystemFabricIdMask). */
if (intlv_num_dies || intlv_num_sockets)
if (amd_df_indirect_read(nid, 1, 0x208, umc, &tmp))
goto out_err;
/* If interleaved over more than 1 die. */
if (intlv_num_dies) {
sock_id_bit = die_id_bit + intlv_num_dies;
die_id_shift = (tmp >> 24) & 0xF;
die_id_mask = (tmp >> 8) & 0xFF;
cs_id |= ((cs_fabric_id & die_id_mask) >> die_id_shift) << die_id_bit;
}
/* If interleaved over more than 1 socket. */
if (intlv_num_sockets) {
socket_id_shift = (tmp >> 28) & 0xF;
socket_id_mask = (tmp >> 16) & 0xFF;
cs_id |= ((cs_fabric_id & socket_id_mask) >> socket_id_shift) << sock_id_bit;
}
/*
* The pre-interleaved address consists of XXXXXXIIIYYYYY
* where III is the ID for this CS, and XXXXXXYYYYY are the
* address bits from the post-interleaved address.
* "num_intlv_bits" has been calculated to tell us how many "I"
* bits there are. "intlv_addr_bit" tells us how many "Y" bits
* there are (where "I" starts).
*/
temp_addr_y = ret_addr & GENMASK_ULL(intlv_addr_bit-1, 0);
temp_addr_i = (cs_id << intlv_addr_bit);
temp_addr_x = (ret_addr & GENMASK_ULL(63, intlv_addr_bit)) << num_intlv_bits;
ret_addr = temp_addr_x | temp_addr_i | temp_addr_y;
}
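A worked instance of the reconstruction above, using made-up values:

/*
 * Hypothetical: intlv_addr_bit = 8, num_intlv_bits = 2, cs_id = 0x3,
 * ret_addr = 0x12345:
 *
 *   temp_addr_y = 0x12345 & GENMASK_ULL(7, 0)          = 0x45
 *   temp_addr_i = 0x3 << 8                              = 0x300
 *   temp_addr_x = (0x12345 & GENMASK_ULL(63, 8)) << 2   = 0x48c00
 *   ret_addr    = 0x48c00 | 0x300 | 0x45                = 0x48f45
 */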
/* Add dram base address */
ret_addr += dram_base_addr;
/* If legacy MMIO hole enabled */
if (lgcy_mmio_hole_en) {
if (amd_df_indirect_read(nid, 0, 0x104, umc, &tmp))
goto out_err;
dram_hole_base = tmp & GENMASK(31, 24);
if (ret_addr >= dram_hole_base)
ret_addr += (BIT_ULL(32) - dram_hole_base);
}
if (hash_enabled) {
/* Save some parentheses and grab ls-bit at the end. */
hashed_bit = (ret_addr >> 12) ^
(ret_addr >> 18) ^
(ret_addr >> 21) ^
(ret_addr >> 30) ^
cs_id;
hashed_bit &= BIT(0);
if (hashed_bit != ((ret_addr >> intlv_addr_bit) & BIT(0)))
ret_addr ^= BIT(intlv_addr_bit);
}
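And a worked pass through the hash fix-up, again with made-up values:

/*
 * Hypothetical: cs_id = 0, intlv_addr_bit = 8, ret_addr = 0x40041000.
 * Bits 12, 18, 21 and 30 of ret_addr are 1, 1, 0 and 1, so:
 *   hashed_bit = 1 ^ 1 ^ 0 ^ 1 ^ 0 = 1
 * Bit 8 of ret_addr is 0, which differs from hashed_bit, so the
 * address is corrected: ret_addr ^= BIT(8) -> 0x40041100.
 */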
/* Is the calculated system address above the DRAM limit address? */
if (ret_addr > dram_limit_addr)
goto out_err;
*sys_addr = ret_addr;
return 0;
out_err:
return -EINVAL;
}
EXPORT_SYMBOL_GPL(umc_normaddr_to_sysaddr);
bool amd_mce_is_memory_error(struct mce *m)
{
/* ErrCodeExt[20:16] */
u8 xec = (m->status >> 16) & 0x1f;
if (mce_flags.smca)
return smca_get_bank_type(m->bank) == SMCA_UMC && xec == 0x0;
return smca_get_bank_type(m->extcpu, m->bank) == SMCA_UMC && xec == 0x0;
return m->bank == 4 && xec == 0x8;
}
@ -1211,7 +1033,7 @@ static struct kobj_type threshold_ktype = {
.release = threshold_block_release,
};
static const char *get_name(unsigned int bank, struct threshold_block *b)
static const char *get_name(unsigned int cpu, unsigned int bank, struct threshold_block *b)
{
enum smca_bank_types bank_type;
@ -1222,7 +1044,7 @@ static const char *get_name(unsigned int bank, struct threshold_block *b)
return th_names[bank];
}
bank_type = smca_get_bank_type(bank);
bank_type = smca_get_bank_type(cpu, bank);
if (bank_type >= N_SMCA_BANK_TYPES)
return NULL;
@ -1232,12 +1054,12 @@ static const char *get_name(unsigned int bank, struct threshold_block *b)
return NULL;
}
if (smca_banks[bank].hwid->count == 1)
if (per_cpu(smca_bank_counts, cpu)[bank_type] == 1)
return smca_get_name(bank_type);
snprintf(buf_mcatype, MAX_MCATYPE_NAME_LEN,
"%s_%x", smca_get_name(bank_type),
smca_banks[bank].sysfs_id);
"%s_%u", smca_get_name(bank_type),
per_cpu(smca_banks, cpu)[bank].sysfs_id);
return buf_mcatype;
}
@ -1293,7 +1115,7 @@ static int allocate_threshold_blocks(unsigned int cpu, struct threshold_bank *tb
else
tb->blocks = b;
err = kobject_init_and_add(&b->kobj, &threshold_ktype, tb->kobj, get_name(bank, b));
err = kobject_init_and_add(&b->kobj, &threshold_ktype, tb->kobj, get_name(cpu, bank, b));
if (err)
goto out_free;
recurse:
@ -1348,7 +1170,7 @@ static int threshold_create_bank(struct threshold_bank **bp, unsigned int cpu,
struct device *dev = this_cpu_read(mce_device);
struct amd_northbridge *nb = NULL;
struct threshold_bank *b = NULL;
const char *name = get_name(bank, NULL);
const char *name = get_name(cpu, bank, NULL);
int err = 0;
if (!dev)

Some files were not shown because too many files have changed in this diff.