Merge 5e8919170a ("Merge tag 'edac_updates_for_v5.18_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras") into android-mainline

Steps on the way to 5.18-rc1

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I312172ada4ba8ea8e9962ec254b22a1aec91fa18
@@ -662,6 +662,7 @@ Description: Preferred MTE tag checking mode

		================  ==============================================
		"sync"            Prefer synchronous mode
		"asymm"           Prefer asymmetric mode
		"async"           Prefer asynchronous mode
		================  ==============================================
@@ -494,6 +494,14 @@ architecture which is used to lookup the page-tables for the Virtual
addresses in the higher VA range (refer to ARMv8 ARM document for
more details).

MODULES_VADDR|MODULES_END|VMALLOC_START|VMALLOC_END|VMEMMAP_START|VMEMMAP_END
-----------------------------------------------------------------------------

Used to get the correct ranges:

  MODULES_VADDR ~ MODULES_END-1 : Kernel module space.
  VMALLOC_START ~ VMALLOC_END-1 : vmalloc() / ioremap() space.
  VMEMMAP_START ~ VMEMMAP_END-1 : vmemmap region, used for struct page array.

arm
===
@@ -10,9 +10,9 @@ This document is based on the ARM booting document by Russell King and
is relevant to all public releases of the AArch64 Linux kernel.

The AArch64 exception model is made up of a number of exception levels
(EL0 - EL3), with EL0 and EL1 having a secure and a non-secure
counterpart. EL2 is the hypervisor level and exists only in non-secure
mode. EL3 is the highest priority level and exists only in secure mode.
(EL0 - EL3), with EL0, EL1 and EL2 having a secure and a non-secure
counterpart. EL2 is the hypervisor level, EL3 is the highest priority
level and exists only in secure mode. Both are architecturally optional.

For the purposes of this document, we will use the term `boot loader`
simply to define all software that executes on the CPU(s) before control

@@ -167,8 +167,8 @@ Before jumping into the kernel, the following conditions must be met:

  All forms of interrupts must be masked in PSTATE.DAIF (Debug, SError,
  IRQ and FIQ).
  The CPU must be in either EL2 (RECOMMENDED in order to have access to
  the virtualisation extensions) or non-secure EL1.
  The CPU must be in non-secure state, either in EL2 (RECOMMENDED in order
  to have access to the virtualisation extensions), or in EL1.

- Caches, MMUs
@@ -259,6 +259,11 @@ HWCAP2_RPRES

    Functionality implied by ID_AA64ISAR2_EL1.RPRES == 0b0001.

HWCAP2_MTE3

    Functionality implied by ID_AA64PFR1_EL1.MTE == 0b0011, as described
    by Documentation/arm64/memory-tagging-extension.rst.

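As a minimal sketch (not part of the patch), userspace could probe the new capability with getauxval(); the HWCAP2_MTE3 value (1 << 22) is taken from the uapi hwcap.h hunk later in this diff::

    #include <stdio.h>
    #include <sys/auxv.h>

    #ifndef HWCAP2_MTE3
    #define HWCAP2_MTE3 (1UL << 22) /* value from the uapi hunk below */
    #endif

    int main(void)
    {
        unsigned long hwcap2 = getauxval(AT_HWCAP2);

        printf("MTE3 (asymmetric mode) %s\n",
               (hwcap2 & HWCAP2_MTE3) ? "reported" : "not reported");
        return 0;
    }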
4. Unused AT_HWCAP bits
-----------------------
@@ -76,6 +76,9 @@ configurable behaviours:
  with ``.si_code = SEGV_MTEAERR`` and ``.si_addr = 0`` (the faulting
  address is unknown).

- *Asymmetric* - Reads are handled as for synchronous mode while writes
  are handled as for asynchronous mode.

The user can select the above modes, per thread, using the
``prctl(PR_SET_TAGGED_ADDR_CTRL, flags, 0, 0, 0)`` system call where ``flags``
contains any number of the following values in the ``PR_MTE_TCF_MASK``
@@ -91,8 +94,9 @@ mode is specified, the program will run in that mode. If multiple
modes are specified, the mode is selected as described in the "Per-CPU
preferred tag checking modes" section below.

The current tag check fault mode can be read using the
``prctl(PR_GET_TAGGED_ADDR_CTRL, 0, 0, 0, 0)`` system call.
The current tag check fault configuration can be read using the
``prctl(PR_GET_TAGGED_ADDR_CTRL, 0, 0, 0, 0)`` system call. If
multiple modes were requested then all will be reported.

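A hedged userspace sketch of the set/get round trip (not from the patch; it assumes kernel uapi headers new enough to define the PR_MTE_* constants)::

    #include <stdio.h>
    #include <sys/prctl.h>

    int main(void)
    {
        unsigned long flags = PR_TAGGED_ADDR_ENABLE |
                              PR_MTE_TCF_SYNC | PR_MTE_TCF_ASYNC;
        long ctrl;

        /* Request both modes; the kernel picks one (see next section). */
        if (prctl(PR_SET_TAGGED_ADDR_CTRL, flags, 0, 0, 0))
            return 1;

        ctrl = prctl(PR_GET_TAGGED_ADDR_CTRL, 0, 0, 0, 0);
        if (ctrl < 0)
            return 1;
        if (ctrl & PR_MTE_TCF_SYNC)
            printf("sync requested\n");
        if (ctrl & PR_MTE_TCF_ASYNC)
            printf("async requested\n");
        return 0;
    }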
Tag checking can also be disabled for a user thread by setting the
``PSTATE.TCO`` bit with ``MSR TCO, #1``.

@@ -139,18 +143,25 @@ tag checking mode as the CPU's preferred tag checking mode.

The preferred tag checking mode for each CPU is controlled by
``/sys/devices/system/cpu/cpu<N>/mte_tcf_preferred``, to which a
privileged user may write the value ``async`` or ``sync``. The default
preferred mode for each CPU is ``async``.
privileged user may write the value ``async``, ``sync`` or ``asymm``. The
default preferred mode for each CPU is ``async``.

To allow a program to potentially run in the CPU's preferred tag
checking mode, the user program may set multiple tag check fault mode
bits in the ``flags`` argument to the ``prctl(PR_SET_TAGGED_ADDR_CTRL,
flags, 0, 0, 0)`` system call. If the CPU's preferred tag checking
mode is in the task's set of provided tag checking modes (this will
always be the case at present because the kernel only supports two
tag checking modes, but future kernels may support more modes), that
mode will be selected. Otherwise, one of the modes in the task's mode
set will be selected in a currently unspecified manner.
flags, 0, 0, 0)`` system call. If both synchronous and asynchronous
modes are requested then asymmetric mode may also be selected by the
kernel. If the CPU's preferred tag checking mode is in the task's set
of provided tag checking modes, that mode will be selected. Otherwise,
one of the modes in the task's mode set will be selected by the kernel
using the preference order:

1. Asynchronous
2. Asymmetric
3. Synchronous

Note that there is no way for userspace to request multiple modes and
also disable asymmetric mode.

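For illustration only, a hypothetical helper (not from the patch) could read a CPU's preferred mode from the sysfs path named above::

    #include <stdio.h>

    /* Read cpu<N>'s preferred tag check fault mode ("async", "sync"
     * or "asymm") from sysfs; returns 0 on success. */
    static int read_mte_tcf_preferred(int cpu, char *buf, int len)
    {
        char path[96];
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/mte_tcf_preferred", cpu);
        f = fopen(path, "r");
        if (!f)
            return -1;
        if (!fgets(buf, len, f)) {
            fclose(f);
            return -1;
        }
        fclose(f);
        return 0;
    }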
Initial process state
---------------------

@@ -213,6 +224,29 @@ address ABI control and MTE configuration of a process as per the
Documentation/arm64/tagged-address-abi.rst and above. The corresponding
``regset`` is 1 element of 8 bytes (``sizeof(long)``).
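A hedged sketch of using that regset from a tracer (not part of the patch; it assumes ``NT_ARM_TAGGED_ADDR_CTRL`` from linux/elf.h)::

    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/uio.h>
    #include <linux/elf.h>

    /* Fetch the tracee's tagged-address/MTE control word: the regset is
     * a single 8-byte element, so one iovec covering a long suffices. */
    static long read_tagged_addr_ctrl(pid_t pid, long *ctrl)
    {
        struct iovec iov = { .iov_base = ctrl, .iov_len = sizeof(*ctrl) };

        return ptrace(PTRACE_GETREGSET, pid, NT_ARM_TAGGED_ADDR_CTRL, &iov);
    }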
Core dump support
-----------------

The allocation tags for user memory mapped with ``PROT_MTE`` are dumped
in the core file as additional ``PT_ARM_MEMTAG_MTE`` segments. The
program header for such segment is defined as:

:``p_type``:   ``PT_ARM_MEMTAG_MTE``
:``p_flags``:  0
:``p_offset``: segment file offset
:``p_vaddr``:  segment virtual address, same as the corresponding
               ``PT_LOAD`` segment
:``p_paddr``:  0
:``p_filesz``: segment size in file, calculated as ``p_mem_sz / 32``
               (two 4-bit tags cover 32 bytes of memory)
:``p_memsz``:  segment size in memory, same as the corresponding
               ``PT_LOAD`` segment
:``p_align``:  0

The tags are stored in the core file at ``p_offset`` as two 4-bit tags
in a byte. With the tag granule of 16 bytes, a 4K page requires 128
bytes in the core file.
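The arithmetic above can be checked with a small sketch (editorial, not from the patch)::

    #include <stdio.h>

    /* Two 4-bit tags per byte, one tag per 16-byte granule: 32 bytes of
     * memory need 1 byte of tag storage, hence p_filesz = p_memsz / 32. */
    static unsigned long mte_tag_storage(unsigned long p_memsz)
    {
        return p_memsz / 32;
    }

    int main(void)
    {
        printf("%lu\n", mte_tag_storage(4096)); /* prints 128 */
        return 0;
    }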
Example of correct usage
========================

@@ -136,7 +136,7 @@ stable kernels.
+----------------+-----------------+-----------------+-----------------------------+
| Cavium         | ThunderX ITS    | #23144          | CAVIUM_ERRATUM_23144        |
+----------------+-----------------+-----------------+-----------------------------+
| Cavium         | ThunderX GICv3  | #23154          | CAVIUM_ERRATUM_23154        |
| Cavium         | ThunderX GICv3  | #23154,38545    | CAVIUM_ERRATUM_23154        |
+----------------+-----------------+-----------------+-----------------------------+
| Cavium         | ThunderX GICv3  | #38539          | N/A                         |
+----------------+-----------------+-----------------+-----------------------------+
@@ -130,14 +130,13 @@ denoting a range of code via ``SYM_*_START/END`` annotations.
In fact, this kind of annotation corresponds to the now deprecated ``ENTRY``
and ``ENDPROC`` macros.

* ``SYM_FUNC_START_ALIAS`` and ``SYM_FUNC_START_LOCAL_ALIAS`` serve for those
  who decided to have two or more names for one function. The typical use is::
* ``SYM_FUNC_ALIAS``, ``SYM_FUNC_ALIAS_LOCAL``, and ``SYM_FUNC_ALIAS_WEAK`` can
  be used to define multiple names for a function. The typical use is::

      SYM_FUNC_START_ALIAS(__memset)
      SYM_FUNC_START(memset)
      SYM_FUNC_START(__memset)
          ... asm insns ...
      SYM_FUNC_END(memset)
      SYM_FUNC_END_ALIAS(__memset)
      SYM_FUNC_END(__memset)
      SYM_FUNC_ALIAS(memset, __memset)

  In this example, one can call ``__memset`` or ``memset`` with the same
  result, except the debug information for the instructions is generated to
@@ -20,6 +20,8 @@ properties:
    items:
      - enum:
          - apm,potenza-pmu
          - apple,firestorm-pmu
          - apple,icestorm-pmu
          - arm,armv8-pmuv3 # Only for s/w models
          - arm,arm1136-pmu
          - arm,arm1176-pmu
@@ -56,6 +56,8 @@ properties:
          - 1: virtual HV timer
          - 2: physical guest timer
          - 3: virtual guest timer
          - 4: 'efficient' CPU PMU
          - 5: 'performance' CPU PMU

        The 3rd cell contains the interrupt flags. This is normally
        IRQ_TYPE_LEVEL_HIGH (4).
@@ -68,6 +70,35 @@ properties:
  power-domains:
    maxItems: 1

  affinities:
    type: object
    additionalProperties: false
    description:
      FIQ affinity can be expressed as a single "affinities" node,
      containing a set of sub-nodes, one per FIQ with a non-default
      affinity.
    patternProperties:
      "^.+-affinity$":
        type: object
        additionalProperties: false
        properties:
          apple,fiq-index:
            description:
              The interrupt number specified as a FIQ, and for which
              the affinity is not the default.
            $ref: /schemas/types.yaml#/definitions/uint32
            maximum: 5

          cpus:
            $ref: /schemas/types.yaml#/definitions/phandle-array
            description:
              Should be a list of phandles to CPU nodes (as described in
              Documentation/devicetree/bindings/arm/cpus.yaml).

        required:
          - apple,fiq-index
          - cpus

required:
  - compatible
  - '#interrupt-cells'
@@ -0,0 +1,37 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/perf/marvell-cn10k-ddr.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Marvell CN10K DDR performance monitor

maintainers:
  - Bharat Bhushan <bbhushan2@marvell.com>

properties:
  compatible:
    items:
      - enum:
          - marvell,cn10k-ddr-pmu

  reg:
    maxItems: 1

required:
  - compatible
  - reg

additionalProperties: false

examples:
  - |
    bus {
        #address-cells = <2>;
        #size-cells = <2>;

        pmu@87e1c0000000 {
            compatible = "marvell,cn10k-ddr-pmu";
            reg = <0x87e1 0xc0000000 0x0 0x10000>;
        };
    };
@@ -164,47 +164,6 @@ phys_addr_t __init arm_memblock_steal(phys_addr_t size, phys_addr_t align)
        return phys;
}

static void __init arm_initrd_init(void)
{
#ifdef CONFIG_BLK_DEV_INITRD
        phys_addr_t start;
        unsigned long size;

        initrd_start = initrd_end = 0;

        if (!phys_initrd_size)
                return;

        /*
         * Round the memory region to page boundaries as per free_initrd_mem()
         * This allows us to detect whether the pages overlapping the initrd
         * are in use, but more importantly, reserves the entire set of pages
         * as we don't want these pages allocated for other purposes.
         */
        start = round_down(phys_initrd_start, PAGE_SIZE);
        size = phys_initrd_size + (phys_initrd_start - start);
        size = round_up(size, PAGE_SIZE);

        if (!memblock_is_region_memory(start, size)) {
                pr_err("INITRD: 0x%08llx+0x%08lx is not a memory region - disabling initrd\n",
                       (u64)start, size);
                return;
        }

        if (memblock_is_region_reserved(start, size)) {
                pr_err("INITRD: 0x%08llx+0x%08lx overlaps in-use memory region - disabling initrd\n",
                       (u64)start, size);
                return;
        }

        memblock_reserve(start, size);

        /* Now convert initrd to virtual addresses */
        initrd_start = __phys_to_virt(phys_initrd_start);
        initrd_end = initrd_start + phys_initrd_size;
#endif
}

#ifdef CONFIG_CPU_ICACHE_MISMATCH_WORKAROUND
void check_cpu_icache_size(int cpuid)
{
@@ -226,7 +185,7 @@ void __init arm_memblock_init(const struct machine_desc *mdesc)
        /* Register the kernel text, kernel data and initrd with memblock. */
        memblock_reserve(__pa(KERNEL_START), KERNEL_END - KERNEL_START);

        arm_initrd_init();
        reserve_initrd_mem();

        arm_mm_memblock_reserve();
@@ -18,7 +18,7 @@ ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO32

ldflags-$(CONFIG_CPU_ENDIAN_BE8) := --be8
ldflags-y := -Bsymbolic --no-undefined -soname=linux-vdso.so.1 \
             -z max-page-size=4096 -nostdlib -shared $(ldflags-y) \
             -z max-page-size=4096 -shared $(ldflags-y) \
             --hash-style=sysv --build-id=sha1 \
             -T
@@ -10,6 +10,7 @@ config ARM64
        select ACPI_SPCR_TABLE if ACPI
        select ACPI_PPTT if ACPI
        select ARCH_HAS_DEBUG_WX
        select ARCH_BINFMT_ELF_EXTRA_PHDRS
        select ARCH_BINFMT_ELF_STATE
        select ARCH_CORRECT_STACKTRACE_ON_KRETPROBE
        select ARCH_ENABLE_HUGEPAGE_MIGRATION if HUGETLB_PAGE && MIGRATION
@@ -891,13 +892,17 @@ config CAVIUM_ERRATUM_23144
          If unsure, say Y.

config CAVIUM_ERRATUM_23154
        bool "Cavium erratum 23154: Access to ICC_IAR1_EL1 is not sync'ed"
        bool "Cavium errata 23154 and 38545: GICv3 lacks HW synchronisation"
        default y
        help
          The gicv3 of ThunderX requires a modified version for
          The ThunderX GICv3 implementation requires a modified version for
          reading the IAR status to ensure data synchronization
          (access to icc_iar1_el1 is not sync'ed before and after).

          It also suffers from erratum 38545 (also present on Marvell's
          OcteonTX and OcteonTX2), resulting in deactivated interrupts being
          spuriously presented to the CPU interface.

          If unsure, say Y.

config CAVIUM_ERRATUM_27456
@@ -97,6 +97,18 @@ timer {
                              <AIC_FIQ AIC_TMR_HV_VIRT IRQ_TYPE_LEVEL_HIGH>;
        };

        pmu-e {
                compatible = "apple,icestorm-pmu";
                interrupt-parent = <&aic>;
                interrupts = <AIC_FIQ AIC_CPU_PMU_E IRQ_TYPE_LEVEL_HIGH>;
        };

        pmu-p {
                compatible = "apple,firestorm-pmu";
                interrupt-parent = <&aic>;
                interrupts = <AIC_FIQ AIC_CPU_PMU_P IRQ_TYPE_LEVEL_HIGH>;
        };

        clkref: clock-ref {
                compatible = "fixed-clock";
                #clock-cells = <0>;
@@ -213,6 +225,18 @@ aic: interrupt-controller@23b100000 {
                        interrupt-controller;
                        reg = <0x2 0x3b100000 0x0 0x8000>;
                        power-domains = <&ps_aic>;

                        affinities {
                                e-core-pmu-affinity {
                                        apple,fiq-index = <AIC_CPU_PMU_E>;
                                        cpus = <&cpu0 &cpu1 &cpu2 &cpu3>;
                                };

                                p-core-pmu-affinity {
                                        apple,fiq-index = <AIC_CPU_PMU_P>;
                                        cpus = <&cpu4 &cpu5 &cpu6 &cpu7>;
                                };
                        };
                };

                pmgr: power-management@23b700000 {
arch/arm64/include/asm/apple_m1_pmu.h (new file, 64 lines)
@@ -0,0 +1,64 @@
// SPDX-License-Identifier: GPL-2.0

#ifndef __ASM_APPLE_M1_PMU_h
#define __ASM_APPLE_M1_PMU_h

#include <linux/bits.h>
#include <asm/sysreg.h>

/* Counters */
#define SYS_IMP_APL_PMC0_EL1 sys_reg(3, 2, 15, 0, 0)
#define SYS_IMP_APL_PMC1_EL1 sys_reg(3, 2, 15, 1, 0)
#define SYS_IMP_APL_PMC2_EL1 sys_reg(3, 2, 15, 2, 0)
#define SYS_IMP_APL_PMC3_EL1 sys_reg(3, 2, 15, 3, 0)
#define SYS_IMP_APL_PMC4_EL1 sys_reg(3, 2, 15, 4, 0)
#define SYS_IMP_APL_PMC5_EL1 sys_reg(3, 2, 15, 5, 0)
#define SYS_IMP_APL_PMC6_EL1 sys_reg(3, 2, 15, 6, 0)
#define SYS_IMP_APL_PMC7_EL1 sys_reg(3, 2, 15, 7, 0)
#define SYS_IMP_APL_PMC8_EL1 sys_reg(3, 2, 15, 9, 0)
#define SYS_IMP_APL_PMC9_EL1 sys_reg(3, 2, 15, 10, 0)

/* Core PMC control register */
#define SYS_IMP_APL_PMCR0_EL1 sys_reg(3, 1, 15, 0, 0)
#define PMCR0_CNT_ENABLE_0_7 GENMASK(7, 0)
#define PMCR0_IMODE GENMASK(10, 8)
#define PMCR0_IMODE_OFF 0
#define PMCR0_IMODE_PMI 1
#define PMCR0_IMODE_AIC 2
#define PMCR0_IMODE_HALT 3
#define PMCR0_IMODE_FIQ 4
#define PMCR0_IACT BIT(11)
#define PMCR0_PMI_ENABLE_0_7 GENMASK(19, 12)
#define PMCR0_STOP_CNT_ON_PMI BIT(20)
#define PMCR0_CNT_GLOB_L2C_EVT BIT(21)
#define PMCR0_DEFER_PMI_TO_ERET BIT(22)
#define PMCR0_ALLOW_CNT_EN_EL0 BIT(30)
#define PMCR0_CNT_ENABLE_8_9 GENMASK(33, 32)
#define PMCR0_PMI_ENABLE_8_9 GENMASK(45, 44)

#define SYS_IMP_APL_PMCR1_EL1 sys_reg(3, 1, 15, 1, 0)
#define PMCR1_COUNT_A64_EL0_0_7 GENMASK(15, 8)
#define PMCR1_COUNT_A64_EL1_0_7 GENMASK(23, 16)
#define PMCR1_COUNT_A64_EL0_8_9 GENMASK(41, 40)
#define PMCR1_COUNT_A64_EL1_8_9 GENMASK(49, 48)

#define SYS_IMP_APL_PMCR2_EL1 sys_reg(3, 1, 15, 2, 0)
#define SYS_IMP_APL_PMCR3_EL1 sys_reg(3, 1, 15, 3, 0)
#define SYS_IMP_APL_PMCR4_EL1 sys_reg(3, 1, 15, 4, 0)

#define SYS_IMP_APL_PMESR0_EL1 sys_reg(3, 1, 15, 5, 0)
#define PMESR0_EVT_CNT_2 GENMASK(7, 0)
#define PMESR0_EVT_CNT_3 GENMASK(15, 8)
#define PMESR0_EVT_CNT_4 GENMASK(23, 16)
#define PMESR0_EVT_CNT_5 GENMASK(31, 24)

#define SYS_IMP_APL_PMESR1_EL1 sys_reg(3, 1, 15, 6, 0)
#define PMESR1_EVT_CNT_6 GENMASK(7, 0)
#define PMESR1_EVT_CNT_7 GENMASK(15, 8)
#define PMESR1_EVT_CNT_8 GENMASK(23, 16)
#define PMESR1_EVT_CNT_9 GENMASK(31, 24)

#define SYS_IMP_APL_PMSR_EL1 sys_reg(3, 1, 15, 13, 0)
#define PMSR_OVERFLOW GENMASK(9, 0)

#endif /* __ASM_APPLE_M1_PMU_h */
@@ -53,17 +53,36 @@ static inline u64 gic_read_iar_common(void)
 * The gicv3 of ThunderX requires a modified version for reading the
 * IAR status to ensure data synchronization (access to icc_iar1_el1
 * is not sync'ed before and after).
 *
 * Erratum 38545
 *
 * When an IAR register read races with a GIC interrupt RELEASE event,
 * the GIC-CPU interface could wrongly return a valid INTID to the CPU
 * for an interrupt that is already released (not activated) instead of 0x3ff.
 *
 * To work around this, return a valid interrupt ID only if there is a change
 * in the active priority list after the IAR read.
 *
 * A common function is used for both workarounds since:
 * 1. On ThunderX 88xx 1.x both errata are applicable.
 * 2. Having extra nops doesn't add any side effects on silicon where
 *    erratum 23154 is not applicable.
 */
static inline u64 gic_read_iar_cavium_thunderx(void)
{
        u64 irqstat;
        u64 irqstat, apr;

        apr = read_sysreg_s(SYS_ICC_AP1R0_EL1);
        nops(8);
        irqstat = read_sysreg_s(SYS_ICC_IAR1_EL1);
        nops(4);
        mb();

        /* Max priority groups implemented is only 32 */
        if (likely(apr != read_sysreg_s(SYS_ICC_AP1R0_EL1)))
                return irqstat;

        return 0x3ff;
}

static inline void gic_write_ctlr(u32 val)
@@ -42,13 +42,47 @@ static inline bool __arm64_rndr(unsigned long *v)
        return ok;
}

static inline bool __arm64_rndrrs(unsigned long *v)
{
        bool ok;

        /*
         * Reads of RNDRRS set PSTATE.NZCV to 0b0000 on success,
         * and set PSTATE.NZCV to 0b0100 otherwise.
         */
        asm volatile(
                __mrs_s("%0", SYS_RNDRRS_EL0) "\n"
        "       cset %w1, ne\n"
        : "=r" (*v), "=r" (ok)
        :
        : "cc");

        return ok;
}

static inline bool __must_check arch_get_random_long(unsigned long *v)
{
        /*
         * Only support the generic interface after we have detected
         * the system wide capability, avoiding complexity with the
         * cpufeature code and with potential scheduling between CPUs
         * with and without the feature.
         */
        if (cpus_have_const_cap(ARM64_HAS_RNG) && __arm64_rndr(v))
                return true;
        return false;
}

static inline bool __must_check arch_get_random_int(unsigned int *v)
{
        if (cpus_have_const_cap(ARM64_HAS_RNG)) {
                unsigned long val;

                if (__arm64_rndr(&val)) {
                        *v = val;
                        return true;
                }
        }
        return false;
}

@@ -71,12 +105,11 @@ static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
        }

        /*
         * Only support the generic interface after we have detected
         * the system wide capability, avoiding complexity with the
         * cpufeature code and with potential scheduling between CPUs
         * with and without the feature.
         * RNDRRS is not backed by an entropy source but by a DRBG that is
         * reseeded after each invocation. This is not a 100% fit but good
         * enough to implement this API if no other entropy source exists.
         */
        if (cpus_have_const_cap(ARM64_HAS_RNG) && __arm64_rndr(v))
        if (cpus_have_const_cap(ARM64_HAS_RNG) && __arm64_rndrrs(v))
                return true;

        return false;
@@ -96,7 +129,7 @@ static inline bool __must_check arch_get_random_seed_int(unsigned int *v)
        }

        if (cpus_have_const_cap(ARM64_HAS_RNG)) {
                if (__arm64_rndr(&val)) {
                if (__arm64_rndrrs(&val)) {
                        *v = val;
                        return true;
                }
@@ -60,6 +60,9 @@ alternative_else_nop_endif
        .macro __ptrauth_keys_init_cpu tsk, tmp1, tmp2, tmp3
        mrs     \tmp1, id_aa64isar1_el1
        ubfx    \tmp1, \tmp1, #ID_AA64ISAR1_APA_SHIFT, #8
        mrs_s   \tmp2, SYS_ID_AA64ISAR2_EL1
        ubfx    \tmp2, \tmp2, #ID_AA64ISAR2_APA3_SHIFT, #4
        orr     \tmp1, \tmp1, \tmp2
        cbz     \tmp1, .Lno_addr_auth\@
        mov_q   \tmp1, (SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
                        SCTLR_ELx_ENDA | SCTLR_ELx_ENDB)
@@ -542,11 +542,6 @@ alternative_endif
#define EXPORT_SYMBOL_NOKASAN(name) EXPORT_SYMBOL(name)
#endif

#ifdef CONFIG_KASAN_HW_TAGS
#define EXPORT_SYMBOL_NOHWKASAN(name)
#else
#define EXPORT_SYMBOL_NOHWKASAN(name) EXPORT_SYMBOL_NOKASAN(name)
#endif
/*
 * Emit a 64-bit absolute little endian symbol reference in a way that
 * ensures that it will be resolved at build time, even when building a
@@ -356,6 +356,7 @@ struct arm64_cpu_capabilities {
        struct { /* Feature register checking */
                u32 sys_reg;
                u8 field_pos;
                u8 field_width;
                u8 min_field_value;
                u8 hwcap_type;
                bool sign;
@@ -576,6 +577,8 @@ static inline u64 arm64_ftr_reg_user_value(const struct arm64_ftr_reg *reg)
static inline int __attribute_const__
cpuid_feature_extract_field_width(u64 features, int field, int width, bool sign)
{
        if (WARN_ON_ONCE(!width))
                width = 4;
        return (sign) ?
                cpuid_feature_extract_signed_field_width(features, field, width) :
                cpuid_feature_extract_unsigned_field_width(features, field, width);
@@ -883,6 +886,7 @@ static inline unsigned int get_vmid_bits(u64 mmfr1)
extern struct arm64_ftr_override id_aa64mmfr1_override;
extern struct arm64_ftr_override id_aa64pfr1_override;
extern struct arm64_ftr_override id_aa64isar1_override;
extern struct arm64_ftr_override id_aa64isar2_override;

u32 get_kvm_ipa_limit(void);
void dump_cpu_features(void);
@@ -88,6 +88,13 @@
#define CAVIUM_CPU_PART_THUNDERX_81XX 0x0A2
#define CAVIUM_CPU_PART_THUNDERX_83XX 0x0A3
#define CAVIUM_CPU_PART_THUNDERX2 0x0AF
/* OcteonTx2 series */
#define CAVIUM_CPU_PART_OCTX2_98XX 0x0B1
#define CAVIUM_CPU_PART_OCTX2_96XX 0x0B2
#define CAVIUM_CPU_PART_OCTX2_95XX 0x0B3
#define CAVIUM_CPU_PART_OCTX2_95XXN 0x0B4
#define CAVIUM_CPU_PART_OCTX2_95XXMM 0x0B5
#define CAVIUM_CPU_PART_OCTX2_95XXO 0x0B6

#define BRCM_CPU_PART_BRAHMA_B53 0x100
#define BRCM_CPU_PART_VULCAN 0x516
@@ -132,6 +139,12 @@
#define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
#define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
#define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
#define MIDR_OCTX2_98XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_OCTX2_98XX)
#define MIDR_OCTX2_96XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_OCTX2_96XX)
#define MIDR_OCTX2_95XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_OCTX2_95XX)
#define MIDR_OCTX2_95XXN MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_OCTX2_95XXN)
#define MIDR_OCTX2_95XXMM MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_OCTX2_95XXMM)
#define MIDR_OCTX2_95XXO MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_OCTX2_95XXO)
#define MIDR_CAVIUM_THUNDERX2 MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX2)
#define MIDR_BRAHMA_B53 MIDR_CPU_MODEL(ARM_CPU_IMP_BRCM, BRCM_CPU_PART_BRAHMA_B53)
#define MIDR_BRCM_VULCAN MIDR_CPU_MODEL(ARM_CPU_IMP_BRCM, BRCM_CPU_PART_VULCAN)
@@ -34,18 +34,6 @@
 */
#define BREAK_INSTR_SIZE AARCH64_INSN_SIZE

/*
 * BRK instruction encoding
 * The #imm16 value should be placed at bits[20:5] within BRK ins
 */
#define AARCH64_BREAK_MON 0xd4200000

/*
 * BRK instruction for provoking a fault on purpose
 * Unlike kgdb, #imm16 value with unallocated handler is used for faulting.
 */
#define AARCH64_BREAK_FAULT (AARCH64_BREAK_MON | (FAULT_BRK_IMM << 5))

#define AARCH64_BREAK_KGDB_DYN_DBG \
        (AARCH64_BREAK_MON | (KGDB_DYN_DBG_BRK_IMM << 5))
@@ -108,6 +108,7 @@
#define KERNEL_HWCAP_ECV __khwcap2_feature(ECV)
#define KERNEL_HWCAP_AFP __khwcap2_feature(AFP)
#define KERNEL_HWCAP_RPRES __khwcap2_feature(RPRES)
#define KERNEL_HWCAP_MTE3 __khwcap2_feature(MTE3)

/*
 * This yields a mask that user programs can use to figure out what
@@ -3,7 +3,21 @@
#ifndef __ASM_INSN_DEF_H
#define __ASM_INSN_DEF_H

#include <asm/brk-imm.h>

/* A64 instructions are always 32 bits. */
#define AARCH64_INSN_SIZE 4

/*
 * BRK instruction encoding
 * The #imm16 value should be placed at bits[20:5] within BRK ins
 */
#define AARCH64_BREAK_MON 0xd4200000

/*
 * BRK instruction for provoking a fault on purpose
 * Unlike kgdb, #imm16 value with unallocated handler is used for faulting.
 */
#define AARCH64_BREAK_FAULT (AARCH64_BREAK_MON | (FAULT_BRK_IMM << 5))

#endif /* __ASM_INSN_DEF_H */
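A worked example of the relocated encoding (editorial sketch, assuming FAULT_BRK_IMM is 0x100 as in brk-imm.h): with the #imm16 field at bits [20:5],

/* BRK #1:                  0xd4200000 | (1 << 5)     = 0xd4200020
 * BRK #FAULT_BRK_IMM:      0xd4200000 | (0x100 << 5) = 0xd4202000
 */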
@@ -206,7 +206,9 @@ enum aarch64_insn_ldst_type {
        AARCH64_INSN_LDST_LOAD_PAIR_POST_INDEX,
        AARCH64_INSN_LDST_STORE_PAIR_POST_INDEX,
        AARCH64_INSN_LDST_LOAD_EX,
        AARCH64_INSN_LDST_LOAD_ACQ_EX,
        AARCH64_INSN_LDST_STORE_EX,
        AARCH64_INSN_LDST_STORE_REL_EX,
};

enum aarch64_insn_adsb_type {
@@ -281,6 +283,36 @@ enum aarch64_insn_adr_type {
        AARCH64_INSN_ADR_TYPE_ADR,
};

enum aarch64_insn_mem_atomic_op {
        AARCH64_INSN_MEM_ATOMIC_ADD,
        AARCH64_INSN_MEM_ATOMIC_CLR,
        AARCH64_INSN_MEM_ATOMIC_EOR,
        AARCH64_INSN_MEM_ATOMIC_SET,
        AARCH64_INSN_MEM_ATOMIC_SWP,
};

enum aarch64_insn_mem_order_type {
        AARCH64_INSN_MEM_ORDER_NONE,
        AARCH64_INSN_MEM_ORDER_ACQ,
        AARCH64_INSN_MEM_ORDER_REL,
        AARCH64_INSN_MEM_ORDER_ACQREL,
};

enum aarch64_insn_mb_type {
        AARCH64_INSN_MB_SY,
        AARCH64_INSN_MB_ST,
        AARCH64_INSN_MB_LD,
        AARCH64_INSN_MB_ISH,
        AARCH64_INSN_MB_ISHST,
        AARCH64_INSN_MB_ISHLD,
        AARCH64_INSN_MB_NSH,
        AARCH64_INSN_MB_NSHST,
        AARCH64_INSN_MB_NSHLD,
        AARCH64_INSN_MB_OSH,
        AARCH64_INSN_MB_OSHST,
        AARCH64_INSN_MB_OSHLD,
};

#define __AARCH64_INSN_FUNCS(abbr, mask, val) \
static __always_inline bool aarch64_insn_is_##abbr(u32 code) \
{ \
@@ -304,6 +336,11 @@ __AARCH64_INSN_FUNCS(store_post, 0x3FE00C00, 0x38000400)
__AARCH64_INSN_FUNCS(load_post, 0x3FE00C00, 0x38400400)
__AARCH64_INSN_FUNCS(str_reg, 0x3FE0EC00, 0x38206800)
__AARCH64_INSN_FUNCS(ldadd, 0x3F20FC00, 0x38200000)
__AARCH64_INSN_FUNCS(ldclr, 0x3F20FC00, 0x38201000)
__AARCH64_INSN_FUNCS(ldeor, 0x3F20FC00, 0x38202000)
__AARCH64_INSN_FUNCS(ldset, 0x3F20FC00, 0x38203000)
__AARCH64_INSN_FUNCS(swp, 0x3F20FC00, 0x38208000)
__AARCH64_INSN_FUNCS(cas, 0x3FA07C00, 0x08A07C00)
__AARCH64_INSN_FUNCS(ldr_reg, 0x3FE0EC00, 0x38606800)
__AARCH64_INSN_FUNCS(ldr_lit, 0xBF000000, 0x18000000)
__AARCH64_INSN_FUNCS(ldrsw_lit, 0xFF000000, 0x98000000)
@@ -475,13 +512,6 @@ u32 aarch64_insn_gen_load_store_ex(enum aarch64_insn_register reg,
                                   enum aarch64_insn_register state,
                                   enum aarch64_insn_size_type size,
                                   enum aarch64_insn_ldst_type type);
u32 aarch64_insn_gen_ldadd(enum aarch64_insn_register result,
                           enum aarch64_insn_register address,
                           enum aarch64_insn_register value,
                           enum aarch64_insn_size_type size);
u32 aarch64_insn_gen_stadd(enum aarch64_insn_register address,
                           enum aarch64_insn_register value,
                           enum aarch64_insn_size_type size);
u32 aarch64_insn_gen_add_sub_imm(enum aarch64_insn_register dst,
                                 enum aarch64_insn_register src,
                                 int imm, enum aarch64_insn_variant variant,
@@ -542,6 +572,42 @@ u32 aarch64_insn_gen_prefetch(enum aarch64_insn_register base,
                              enum aarch64_insn_prfm_type type,
                              enum aarch64_insn_prfm_target target,
                              enum aarch64_insn_prfm_policy policy);
#ifdef CONFIG_ARM64_LSE_ATOMICS
u32 aarch64_insn_gen_atomic_ld_op(enum aarch64_insn_register result,
                                  enum aarch64_insn_register address,
                                  enum aarch64_insn_register value,
                                  enum aarch64_insn_size_type size,
                                  enum aarch64_insn_mem_atomic_op op,
                                  enum aarch64_insn_mem_order_type order);
u32 aarch64_insn_gen_cas(enum aarch64_insn_register result,
                         enum aarch64_insn_register address,
                         enum aarch64_insn_register value,
                         enum aarch64_insn_size_type size,
                         enum aarch64_insn_mem_order_type order);
#else
static inline
u32 aarch64_insn_gen_atomic_ld_op(enum aarch64_insn_register result,
                                  enum aarch64_insn_register address,
                                  enum aarch64_insn_register value,
                                  enum aarch64_insn_size_type size,
                                  enum aarch64_insn_mem_atomic_op op,
                                  enum aarch64_insn_mem_order_type order)
{
        return AARCH64_BREAK_FAULT;
}

static inline
u32 aarch64_insn_gen_cas(enum aarch64_insn_register result,
                         enum aarch64_insn_register address,
                         enum aarch64_insn_register value,
                         enum aarch64_insn_size_type size,
                         enum aarch64_insn_mem_order_type order)
{
        return AARCH64_BREAK_FAULT;
}
#endif
u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type);

s32 aarch64_get_branch_offset(u32 insn);
u32 aarch64_set_branch_offset(u32 insn, s32 offset);
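For illustration only (an editor's sketch using the prototypes above, not code from the patch), a kernel-side caller such as an instruction JIT could encode an acquire-semantics LDADD like this:

#include <asm/insn.h>

static u32 example_gen_ldadd_acq(void)
{
        /* ldadda x1, x2, [x0]: result x2, address x0, value x1 */
        return aarch64_insn_gen_atomic_ld_op(AARCH64_INSN_REG_2,
                                             AARCH64_INSN_REG_0,
                                             AARCH64_INSN_REG_1,
                                             AARCH64_INSN_SIZE_64,
                                             AARCH64_INSN_MEM_ATOMIC_ADD,
                                             AARCH64_INSN_MEM_ORDER_ACQ);
}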
@@ -355,8 +355,8 @@
        ECN(SOFTSTP_CUR), ECN(WATCHPT_LOW), ECN(WATCHPT_CUR), \
        ECN(BKPT32), ECN(VECTOR32), ECN(BRK64)

#define CPACR_EL1_FPEN (3 << 20)
#define CPACR_EL1_TTA (1 << 28)
#define CPACR_EL1_DEFAULT (CPACR_EL1_FPEN | CPACR_EL1_ZEN_EL1EN)
#define CPACR_EL1_DEFAULT (CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN |\
                           CPACR_EL1_ZEN_EL1EN)

#endif /* __ARM64_KVM_ARM_H__ */
@@ -118,6 +118,7 @@ extern u64 kvm_nvhe_sym(id_aa64pfr0_el1_sys_val);
extern u64 kvm_nvhe_sym(id_aa64pfr1_el1_sys_val);
extern u64 kvm_nvhe_sym(id_aa64isar0_el1_sys_val);
extern u64 kvm_nvhe_sym(id_aa64isar1_el1_sys_val);
extern u64 kvm_nvhe_sym(id_aa64isar2_el1_sys_val);
extern u64 kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val);
extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val);
extern u64 kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val);
@@ -39,28 +39,4 @@
        SYM_START(name, SYM_L_WEAK, SYM_A_NONE) \
        bti c ;

/*
 * Annotate a function as position independent, i.e., safe to be called before
 * the kernel virtual mapping is activated.
 */
#define SYM_FUNC_START_PI(x) \
        SYM_FUNC_START_ALIAS(__pi_##x); \
        SYM_FUNC_START(x)

#define SYM_FUNC_START_WEAK_PI(x) \
        SYM_FUNC_START_ALIAS(__pi_##x); \
        SYM_FUNC_START_WEAK(x)

#define SYM_FUNC_START_WEAK_ALIAS_PI(x) \
        SYM_FUNC_START_ALIAS(__pi_##x); \
        SYM_START(x, SYM_L_WEAK, SYM_A_ALIGN)

#define SYM_FUNC_END_PI(x) \
        SYM_FUNC_END(x); \
        SYM_FUNC_END_ALIAS(__pi_##x)

#define SYM_FUNC_END_ALIAS_PI(x) \
        SYM_FUNC_END_ALIAS(x); \
        SYM_FUNC_END_ALIAS(__pi_##x)

#endif
@@ -17,12 +17,10 @@

#include <asm/cpucaps.h>

extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];
extern struct static_key_false arm64_const_caps_ready;

static inline bool system_uses_lse_atomics(void)
static __always_inline bool system_uses_lse_atomics(void)
{
        return (static_branch_likely(&arm64_const_caps_ready)) &&
                static_branch_likely(&cpu_hwcap_keys[ARM64_HAS_LSE_ATOMICS]);
        return static_branch_likely(&cpu_hwcap_keys[ARM64_HAS_LSE_ATOMICS]);
}

#define __lse_ll_sc_body(op, ...) \
@@ -1,8 +1,8 @@
SECTIONS {
#ifdef CONFIG_ARM64_MODULE_PLTS
        .plt 0 (NOLOAD) : { BYTE(0) }
        .init.plt 0 (NOLOAD) : { BYTE(0) }
        .text.ftrace_trampoline 0 (NOLOAD) : { BYTE(0) }
        .plt 0 : { BYTE(0) }
        .init.plt 0 : { BYTE(0) }
        .text.ftrace_trampoline 0 : { BYTE(0) }
#endif

#ifdef CONFIG_KASAN_SW_TAGS
@@ -11,6 +11,7 @@
#define MTE_TAG_SHIFT 56
#define MTE_TAG_SIZE 4
#define MTE_TAG_MASK GENMASK((MTE_TAG_SHIFT + (MTE_TAG_SIZE - 1)), MTE_TAG_SHIFT)
#define MTE_PAGE_TAG_STORAGE (MTE_GRANULES_PER_PAGE * MTE_TAG_SIZE / 8)

#define __MTE_PREAMBLE ARM64_ASM_PREAMBLE ".arch_extension memtag\n"
@@ -11,7 +11,9 @@

#ifndef __ASSEMBLY__

#include <linux/bitfield.h>
#include <linux/kasan-enabled.h>
#include <linux/page-flags.h>
#include <linux/sched.h>
#include <linux/types.h>

#include <asm/pgtable-types.h>
@@ -86,6 +88,26 @@ static inline int mte_ptrace_copy_tags(struct task_struct *child,

#endif /* CONFIG_ARM64_MTE */

static inline void mte_disable_tco_entry(struct task_struct *task)
{
        if (!system_supports_mte())
                return;

        /*
         * Re-enable tag checking (TCO set on exception entry). This is only
         * necessary if MTE is enabled in either the kernel or the userspace
         * task in synchronous or asymmetric mode (SCTLR_EL1.TCF0 bit 0 is set
         * for both). With MTE disabled in the kernel and disabled or
         * asynchronous in userspace, tag check faults (including in uaccesses)
         * are not reported, therefore there is no need to re-enable checking.
         * This is beneficial on microarchitectures where re-enabling TCO is
         * expensive.
         */
        if (kasan_hw_tags_enabled() ||
            (task->thread.sctlr_user & (1UL << SCTLR_EL1_TCF0_SHIFT)))
                asm volatile(SET_PSTATE_TCO(0));
}

#ifdef CONFIG_KASAN_HW_TAGS
/* Whether the MTE asynchronous mode is enabled. */
DECLARE_STATIC_KEY_FALSE(mte_async_or_asymm_mode);
@@ -15,70 +15,70 @@
/*
 * Common architectural and microarchitectural event numbers.
 */
#define ARMV8_PMUV3_PERFCTR_SW_INCR 0x00
#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL 0x01
#define ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL 0x02
#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL 0x03
#define ARMV8_PMUV3_PERFCTR_L1D_CACHE 0x04
#define ARMV8_PMUV3_PERFCTR_L1D_TLB_REFILL 0x05
#define ARMV8_PMUV3_PERFCTR_LD_RETIRED 0x06
#define ARMV8_PMUV3_PERFCTR_ST_RETIRED 0x07
#define ARMV8_PMUV3_PERFCTR_INST_RETIRED 0x08
#define ARMV8_PMUV3_PERFCTR_EXC_TAKEN 0x09
#define ARMV8_PMUV3_PERFCTR_EXC_RETURN 0x0A
#define ARMV8_PMUV3_PERFCTR_CID_WRITE_RETIRED 0x0B
#define ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED 0x0C
#define ARMV8_PMUV3_PERFCTR_BR_IMMED_RETIRED 0x0D
#define ARMV8_PMUV3_PERFCTR_BR_RETURN_RETIRED 0x0E
#define ARMV8_PMUV3_PERFCTR_UNALIGNED_LDST_RETIRED 0x0F
#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED 0x10
#define ARMV8_PMUV3_PERFCTR_CPU_CYCLES 0x11
#define ARMV8_PMUV3_PERFCTR_BR_PRED 0x12
#define ARMV8_PMUV3_PERFCTR_MEM_ACCESS 0x13
#define ARMV8_PMUV3_PERFCTR_L1I_CACHE 0x14
#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_WB 0x15
#define ARMV8_PMUV3_PERFCTR_L2D_CACHE 0x16
#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_REFILL 0x17
#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_WB 0x18
#define ARMV8_PMUV3_PERFCTR_BUS_ACCESS 0x19
#define ARMV8_PMUV3_PERFCTR_MEMORY_ERROR 0x1A
#define ARMV8_PMUV3_PERFCTR_INST_SPEC 0x1B
#define ARMV8_PMUV3_PERFCTR_TTBR_WRITE_RETIRED 0x1C
#define ARMV8_PMUV3_PERFCTR_BUS_CYCLES 0x1D
#define ARMV8_PMUV3_PERFCTR_CHAIN 0x1E
#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_ALLOCATE 0x1F
#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_ALLOCATE 0x20
#define ARMV8_PMUV3_PERFCTR_BR_RETIRED 0x21
#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED_RETIRED 0x22
#define ARMV8_PMUV3_PERFCTR_STALL_FRONTEND 0x23
#define ARMV8_PMUV3_PERFCTR_STALL_BACKEND 0x24
#define ARMV8_PMUV3_PERFCTR_L1D_TLB 0x25
#define ARMV8_PMUV3_PERFCTR_L1I_TLB 0x26
#define ARMV8_PMUV3_PERFCTR_L2I_CACHE 0x27
#define ARMV8_PMUV3_PERFCTR_L2I_CACHE_REFILL 0x28
#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_ALLOCATE 0x29
#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_REFILL 0x2A
#define ARMV8_PMUV3_PERFCTR_L3D_CACHE 0x2B
#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_WB 0x2C
#define ARMV8_PMUV3_PERFCTR_L2D_TLB_REFILL 0x2D
#define ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL 0x2E
#define ARMV8_PMUV3_PERFCTR_L2D_TLB 0x2F
#define ARMV8_PMUV3_PERFCTR_L2I_TLB 0x30
#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS 0x31
#define ARMV8_PMUV3_PERFCTR_LL_CACHE 0x32
#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS 0x33
#define ARMV8_PMUV3_PERFCTR_DTLB_WALK 0x34
#define ARMV8_PMUV3_PERFCTR_ITLB_WALK 0x35
#define ARMV8_PMUV3_PERFCTR_LL_CACHE_RD 0x36
#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS_RD 0x37
#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS_RD 0x38
#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_LMISS_RD 0x39
#define ARMV8_PMUV3_PERFCTR_OP_RETIRED 0x3A
#define ARMV8_PMUV3_PERFCTR_OP_SPEC 0x3B
#define ARMV8_PMUV3_PERFCTR_STALL 0x3C
#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND 0x3D
#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND 0x3E
#define ARMV8_PMUV3_PERFCTR_STALL_SLOT 0x3F
#define ARMV8_PMUV3_PERFCTR_SW_INCR 0x0000
#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL 0x0001
#define ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL 0x0002
#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL 0x0003
#define ARMV8_PMUV3_PERFCTR_L1D_CACHE 0x0004
#define ARMV8_PMUV3_PERFCTR_L1D_TLB_REFILL 0x0005
#define ARMV8_PMUV3_PERFCTR_LD_RETIRED 0x0006
#define ARMV8_PMUV3_PERFCTR_ST_RETIRED 0x0007
#define ARMV8_PMUV3_PERFCTR_INST_RETIRED 0x0008
#define ARMV8_PMUV3_PERFCTR_EXC_TAKEN 0x0009
#define ARMV8_PMUV3_PERFCTR_EXC_RETURN 0x000A
#define ARMV8_PMUV3_PERFCTR_CID_WRITE_RETIRED 0x000B
#define ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED 0x000C
#define ARMV8_PMUV3_PERFCTR_BR_IMMED_RETIRED 0x000D
#define ARMV8_PMUV3_PERFCTR_BR_RETURN_RETIRED 0x000E
#define ARMV8_PMUV3_PERFCTR_UNALIGNED_LDST_RETIRED 0x000F
#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED 0x0010
#define ARMV8_PMUV3_PERFCTR_CPU_CYCLES 0x0011
#define ARMV8_PMUV3_PERFCTR_BR_PRED 0x0012
#define ARMV8_PMUV3_PERFCTR_MEM_ACCESS 0x0013
#define ARMV8_PMUV3_PERFCTR_L1I_CACHE 0x0014
#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_WB 0x0015
#define ARMV8_PMUV3_PERFCTR_L2D_CACHE 0x0016
#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_REFILL 0x0017
#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_WB 0x0018
#define ARMV8_PMUV3_PERFCTR_BUS_ACCESS 0x0019
#define ARMV8_PMUV3_PERFCTR_MEMORY_ERROR 0x001A
#define ARMV8_PMUV3_PERFCTR_INST_SPEC 0x001B
#define ARMV8_PMUV3_PERFCTR_TTBR_WRITE_RETIRED 0x001C
#define ARMV8_PMUV3_PERFCTR_BUS_CYCLES 0x001D
#define ARMV8_PMUV3_PERFCTR_CHAIN 0x001E
#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_ALLOCATE 0x001F
#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_ALLOCATE 0x0020
#define ARMV8_PMUV3_PERFCTR_BR_RETIRED 0x0021
#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED_RETIRED 0x0022
#define ARMV8_PMUV3_PERFCTR_STALL_FRONTEND 0x0023
#define ARMV8_PMUV3_PERFCTR_STALL_BACKEND 0x0024
#define ARMV8_PMUV3_PERFCTR_L1D_TLB 0x0025
#define ARMV8_PMUV3_PERFCTR_L1I_TLB 0x0026
#define ARMV8_PMUV3_PERFCTR_L2I_CACHE 0x0027
#define ARMV8_PMUV3_PERFCTR_L2I_CACHE_REFILL 0x0028
#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_ALLOCATE 0x0029
#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_REFILL 0x002A
#define ARMV8_PMUV3_PERFCTR_L3D_CACHE 0x002B
#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_WB 0x002C
#define ARMV8_PMUV3_PERFCTR_L2D_TLB_REFILL 0x002D
#define ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL 0x002E
#define ARMV8_PMUV3_PERFCTR_L2D_TLB 0x002F
#define ARMV8_PMUV3_PERFCTR_L2I_TLB 0x0030
#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS 0x0031
#define ARMV8_PMUV3_PERFCTR_LL_CACHE 0x0032
#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS 0x0033
#define ARMV8_PMUV3_PERFCTR_DTLB_WALK 0x0034
#define ARMV8_PMUV3_PERFCTR_ITLB_WALK 0x0035
#define ARMV8_PMUV3_PERFCTR_LL_CACHE_RD 0x0036
#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS_RD 0x0037
#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS_RD 0x0038
#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_LMISS_RD 0x0039
#define ARMV8_PMUV3_PERFCTR_OP_RETIRED 0x003A
#define ARMV8_PMUV3_PERFCTR_OP_SPEC 0x003B
#define ARMV8_PMUV3_PERFCTR_STALL 0x003C
#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND 0x003D
#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND 0x003E
#define ARMV8_PMUV3_PERFCTR_STALL_SLOT 0x003F

/* Statistical profiling extension microarchitectural events */
#define ARMV8_SPE_PERFCTR_SAMPLE_POP 0x4000
@@ -96,6 +96,20 @@
#define ARMV8_PMUV3_PERFCTR_L2I_CACHE_LMISS 0x400A
#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_LMISS_RD 0x400B

/* Trace buffer events */
#define ARMV8_PMUV3_PERFCTR_TRB_WRAP 0x400C
#define ARMV8_PMUV3_PERFCTR_TRB_TRIG 0x400E

/* Trace unit events */
#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT0 0x4010
#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT1 0x4011
#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT2 0x4012
#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT3 0x4013
#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT4 0x4018
#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT5 0x4019
#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT6 0x401A
#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT7 0x401B

/* additional latency from alignment events */
#define ARMV8_PMUV3_PERFCTR_LDST_ALIGN_LAT 0x4020
#define ARMV8_PMUV3_PERFCTR_LD_ALIGN_LAT 0x4021
@@ -107,91 +121,91 @@
#define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_WR 0x4026

/* ARMv8 recommended implementation defined event types */
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD 0x40
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR 0x41
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD 0x42
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR 0x43
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_INNER 0x44
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_OUTER 0x45
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_VICTIM 0x46
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_CLEAN 0x47
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_INVAL 0x48
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD 0x0040
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR 0x0041
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD 0x0042
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR 0x0043
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_INNER 0x0044
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_OUTER 0x0045
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_VICTIM 0x0046
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_CLEAN 0x0047
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_INVAL 0x0048

#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD 0x4C
#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR 0x4D
#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD 0x4E
#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR 0x4F
#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_RD 0x50
#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WR 0x51
#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_RD 0x52
#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_WR 0x53
#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD 0x004C
#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR 0x004D
#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD 0x004E
#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR 0x004F
#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_RD 0x0050
#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WR 0x0051
#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_RD 0x0052
#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_WR 0x0053

#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_VICTIM 0x56
#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_CLEAN 0x57
#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_INVAL 0x58
#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_VICTIM 0x0056
#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_CLEAN 0x0057
#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_INVAL 0x0058

#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_RD 0x5C
#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_WR 0x5D
#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_RD 0x5E
#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_WR 0x5F
#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD 0x60
#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR 0x61
#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_SHARED 0x62
#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NOT_SHARED 0x63
#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NORMAL 0x64
#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_PERIPH 0x65
#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_RD 0x66
#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_WR 0x67
#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LD_SPEC 0x68
#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_ST_SPEC 0x69
#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LDST_SPEC 0x6A
#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_RD 0x005C
#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_WR 0x005D
#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_RD 0x005E
#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_WR 0x005F
#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD 0x0060
#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR 0x0061
#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_SHARED 0x0062
#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NOT_SHARED 0x0063
#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NORMAL 0x0064
#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_PERIPH 0x0065
#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_RD 0x0066
#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_WR 0x0067
#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LD_SPEC 0x0068
#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_ST_SPEC 0x0069
#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LDST_SPEC 0x006A

#define ARMV8_IMPDEF_PERFCTR_LDREX_SPEC 0x6C
#define ARMV8_IMPDEF_PERFCTR_STREX_PASS_SPEC 0x6D
#define ARMV8_IMPDEF_PERFCTR_STREX_FAIL_SPEC 0x6E
#define ARMV8_IMPDEF_PERFCTR_STREX_SPEC 0x6F
#define ARMV8_IMPDEF_PERFCTR_LD_SPEC 0x70
#define ARMV8_IMPDEF_PERFCTR_ST_SPEC 0x71
#define ARMV8_IMPDEF_PERFCTR_LDST_SPEC 0x72
#define ARMV8_IMPDEF_PERFCTR_DP_SPEC 0x73
#define ARMV8_IMPDEF_PERFCTR_ASE_SPEC 0x74
#define ARMV8_IMPDEF_PERFCTR_VFP_SPEC 0x75
#define ARMV8_IMPDEF_PERFCTR_PC_WRITE_SPEC 0x76
#define ARMV8_IMPDEF_PERFCTR_CRYPTO_SPEC 0x77
#define ARMV8_IMPDEF_PERFCTR_BR_IMMED_SPEC 0x78
#define ARMV8_IMPDEF_PERFCTR_BR_RETURN_SPEC 0x79
#define ARMV8_IMPDEF_PERFCTR_BR_INDIRECT_SPEC 0x7A
#define ARMV8_IMPDEF_PERFCTR_LDREX_SPEC 0x006C
#define ARMV8_IMPDEF_PERFCTR_STREX_PASS_SPEC 0x006D
#define ARMV8_IMPDEF_PERFCTR_STREX_FAIL_SPEC 0x006E
#define ARMV8_IMPDEF_PERFCTR_STREX_SPEC 0x006F
#define ARMV8_IMPDEF_PERFCTR_LD_SPEC 0x0070
#define ARMV8_IMPDEF_PERFCTR_ST_SPEC 0x0071
#define ARMV8_IMPDEF_PERFCTR_LDST_SPEC 0x0072
#define ARMV8_IMPDEF_PERFCTR_DP_SPEC 0x0073
#define ARMV8_IMPDEF_PERFCTR_ASE_SPEC 0x0074
#define ARMV8_IMPDEF_PERFCTR_VFP_SPEC 0x0075
#define ARMV8_IMPDEF_PERFCTR_PC_WRITE_SPEC 0x0076
#define ARMV8_IMPDEF_PERFCTR_CRYPTO_SPEC 0x0077
#define ARMV8_IMPDEF_PERFCTR_BR_IMMED_SPEC 0x0078
#define ARMV8_IMPDEF_PERFCTR_BR_RETURN_SPEC 0x0079
#define ARMV8_IMPDEF_PERFCTR_BR_INDIRECT_SPEC 0x007A

#define ARMV8_IMPDEF_PERFCTR_ISB_SPEC 0x7C
#define ARMV8_IMPDEF_PERFCTR_DSB_SPEC 0x7D
#define ARMV8_IMPDEF_PERFCTR_DMB_SPEC 0x7E
#define ARMV8_IMPDEF_PERFCTR_ISB_SPEC 0x007C
#define ARMV8_IMPDEF_PERFCTR_DSB_SPEC 0x007D
#define ARMV8_IMPDEF_PERFCTR_DMB_SPEC 0x007E

#define ARMV8_IMPDEF_PERFCTR_EXC_UNDEF 0x81
#define ARMV8_IMPDEF_PERFCTR_EXC_SVC 0x82
#define ARMV8_IMPDEF_PERFCTR_EXC_PABORT 0x83
#define ARMV8_IMPDEF_PERFCTR_EXC_DABORT 0x84
#define ARMV8_IMPDEF_PERFCTR_EXC_UNDEF 0x0081
#define ARMV8_IMPDEF_PERFCTR_EXC_SVC 0x0082
#define ARMV8_IMPDEF_PERFCTR_EXC_PABORT 0x0083
#define ARMV8_IMPDEF_PERFCTR_EXC_DABORT 0x0084

#define ARMV8_IMPDEF_PERFCTR_EXC_IRQ 0x86
#define ARMV8_IMPDEF_PERFCTR_EXC_FIQ 0x87
#define ARMV8_IMPDEF_PERFCTR_EXC_SMC 0x88
#define ARMV8_IMPDEF_PERFCTR_EXC_IRQ 0x0086
#define ARMV8_IMPDEF_PERFCTR_EXC_FIQ 0x0087
#define ARMV8_IMPDEF_PERFCTR_EXC_SMC 0x0088

#define ARMV8_IMPDEF_PERFCTR_EXC_HVC 0x8A
#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_PABORT 0x8B
#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_DABORT 0x8C
#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_OTHER 0x8D
#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_IRQ 0x8E
#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_FIQ 0x8F
#define ARMV8_IMPDEF_PERFCTR_RC_LD_SPEC 0x90
#define ARMV8_IMPDEF_PERFCTR_RC_ST_SPEC 0x91
#define ARMV8_IMPDEF_PERFCTR_EXC_HVC 0x008A
#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_PABORT 0x008B
#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_DABORT 0x008C
#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_OTHER 0x008D
#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_IRQ 0x008E
#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_FIQ 0x008F
#define ARMV8_IMPDEF_PERFCTR_RC_LD_SPEC 0x0090
#define ARMV8_IMPDEF_PERFCTR_RC_ST_SPEC 0x0091

#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_RD 0xA0
#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WR 0xA1
#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_RD 0xA2
#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_WR 0xA3
#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_RD 0x00A0
#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WR 0x00A1
#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_RD 0x00A2
#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_WR 0x00A3

#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_VICTIM 0xA6
#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_CLEAN 0xA7
#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_INVAL 0xA8
#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_VICTIM 0x00A6
#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_CLEAN 0x00A7
#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_INVAL 0x00A8

/*
 * Per-CPU PMCR: config reg
@@ -273,6 +273,8 @@
#define TCR_NFD1 (UL(1) << 54)
#define TCR_E0PD0 (UL(1) << 55)
#define TCR_E0PD1 (UL(1) << 56)
#define TCR_TCMA0 (UL(1) << 57)
#define TCR_TCMA1 (UL(1) << 58)

/*
 * TTBR.
@@ -21,6 +21,7 @@

#define MTE_CTRL_TCF_SYNC (1UL << 16)
#define MTE_CTRL_TCF_ASYNC (1UL << 17)
#define MTE_CTRL_TCF_ASYMM (1UL << 18)

#ifndef __ASSEMBLY__
@@ -67,7 +67,8 @@ struct bp_hardening_data {

DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);

static inline void arm64_apply_bp_hardening(void)
/* Called during entry so must be __always_inline */
static __always_inline void arm64_apply_bp_hardening(void)
{
        struct bp_hardening_data *d;
@@ -12,13 +12,11 @@ extern char *strrchr(const char *, int c);
#define __HAVE_ARCH_STRCHR
extern char *strchr(const char *, int c);

#ifndef CONFIG_KASAN_HW_TAGS
#define __HAVE_ARCH_STRCMP
extern int strcmp(const char *, const char *);

#define __HAVE_ARCH_STRNCMP
extern int strncmp(const char *, const char *, __kernel_size_t);
#endif

#define __HAVE_ARCH_STRLEN
extern __kernel_size_t strlen(const char *);
@@ -774,6 +774,8 @@

/* id_aa64isar2 */
#define ID_AA64ISAR2_CLEARBHB_SHIFT 28
#define ID_AA64ISAR2_APA3_SHIFT 12
#define ID_AA64ISAR2_GPA3_SHIFT 8
#define ID_AA64ISAR2_RPRES_SHIFT 4
#define ID_AA64ISAR2_WFXT_SHIFT 0

@@ -787,6 +789,16 @@
#define ID_AA64ISAR2_WFXT_NI 0x0
#define ID_AA64ISAR2_WFXT_SUPPORTED 0x2

#define ID_AA64ISAR2_APA3_NI 0x0
#define ID_AA64ISAR2_APA3_ARCHITECTED 0x1
#define ID_AA64ISAR2_APA3_ARCH_EPAC 0x2
#define ID_AA64ISAR2_APA3_ARCH_EPAC2 0x3
#define ID_AA64ISAR2_APA3_ARCH_EPAC2_FPAC 0x4
#define ID_AA64ISAR2_APA3_ARCH_EPAC2_FPAC_CMB 0x5

#define ID_AA64ISAR2_GPA3_NI 0x0
#define ID_AA64ISAR2_GPA3_ARCHITECTED 0x1

/* id_aa64pfr0 */
#define ID_AA64PFR0_CSV3_SHIFT 60
#define ID_AA64PFR0_CSV2_SHIFT 56
@@ -1099,13 +1111,11 @@
#define ZCR_ELx_LEN_SIZE 9
#define ZCR_ELx_LEN_MASK 0x1ff

#define CPACR_EL1_FPEN_EL1EN (BIT(20)) /* enable EL1 access */
#define CPACR_EL1_FPEN_EL0EN (BIT(21)) /* enable EL0 access, if EL1EN set */

#define CPACR_EL1_ZEN_EL1EN (BIT(16)) /* enable EL1 access */
#define CPACR_EL1_ZEN_EL0EN (BIT(17)) /* enable EL0 access, if EL1EN set */
#define CPACR_EL1_ZEN (CPACR_EL1_ZEN_EL1EN | CPACR_EL1_ZEN_EL0EN)

/* TCR EL1 Bit Definitions */
#define SYS_TCR_EL1_TCMA1 (BIT(58))
#define SYS_TCR_EL1_TCMA0 (BIT(57))

/* GCR_EL1 Definitions */
#define SYS_GCR_EL1_RRND (BIT(16))
@@ -78,5 +78,6 @@
#define HWCAP2_ECV (1 << 19)
#define HWCAP2_AFP (1 << 20)
#define HWCAP2_RPRES (1 << 21)
#define HWCAP2_MTE3 (1 << 22)

#endif /* _UAPI__ASM_HWCAP_H */

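As a usage sketch (not part of this patch set): once the HWCAP2_MTE3 bit above lands in the uapi header, userspace can probe it via the auxiliary vector. The define below mirrors the value added above in case the installed libc headers predate it; getauxval() and AT_HWCAP2 come from the C library.

#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP2_MTE3
#define HWCAP2_MTE3 (1 << 22)	/* mirrors the uapi define added above */
#endif

int main(void)
{
	/* On arm64, AT_HWCAP2 carries the HWCAP2_* bits. */
	if (getauxval(AT_HWCAP2) & HWCAP2_MTE3)
		printf("mte3: asymmetric MTE mode available\n");
	return 0;
}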
@@ -61,6 +61,7 @@ obj-$(CONFIG_ARM64_ACPI_PARKING_PROTOCOL) += acpi_parking_protocol.o
obj-$(CONFIG_PARAVIRT) += paravirt.o
obj-$(CONFIG_RANDOMIZE_BASE) += kaslr.o
obj-$(CONFIG_HIBERNATION) += hibernate.o hibernate-asm.o
obj-$(CONFIG_ELF_CORE) += elfcore.o
obj-$(CONFIG_KEXEC_CORE) += machine_kexec.o relocate_kernel.o \
cpu-reset.o
obj-$(CONFIG_KEXEC_FILE) += machine_kexec_file.o kexec_image.o

@@ -214,6 +214,21 @@ static const struct arm64_cpu_capabilities arm64_repeat_tlbi_list[] = {
};
#endif

#ifdef CONFIG_CAVIUM_ERRATUM_23154
const struct midr_range cavium_erratum_23154_cpus[] = {
MIDR_ALL_VERSIONS(MIDR_THUNDERX),
MIDR_ALL_VERSIONS(MIDR_THUNDERX_81XX),
MIDR_ALL_VERSIONS(MIDR_THUNDERX_83XX),
MIDR_ALL_VERSIONS(MIDR_OCTX2_98XX),
MIDR_ALL_VERSIONS(MIDR_OCTX2_96XX),
MIDR_ALL_VERSIONS(MIDR_OCTX2_95XX),
MIDR_ALL_VERSIONS(MIDR_OCTX2_95XXN),
MIDR_ALL_VERSIONS(MIDR_OCTX2_95XXMM),
MIDR_ALL_VERSIONS(MIDR_OCTX2_95XXO),
{},
};
#endif

#ifdef CONFIG_CAVIUM_ERRATUM_27456
const struct midr_range cavium_erratum_27456_cpus[] = {
/* Cavium ThunderX, T88 pass 1.x - 2.1 */
@@ -425,10 +440,10 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
#endif
#ifdef CONFIG_CAVIUM_ERRATUM_23154
{
/* Cavium ThunderX, pass 1.x */
.desc = "Cavium erratum 23154",
.desc = "Cavium errata 23154 and 38545",
.capability = ARM64_WORKAROUND_CAVIUM_23154,
ERRATA_MIDR_REV_RANGE(MIDR_THUNDERX, 0, 0, 1),
.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
ERRATA_MIDR_RANGE_LIST(cavium_erratum_23154_cpus),
},
#endif
#ifdef CONFIG_CAVIUM_ERRATUM_27456

@@ -232,6 +232,10 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {

static const struct arm64_ftr_bits ftr_id_aa64isar2[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_AA64ISAR2_CLEARBHB_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
FTR_STRICT, FTR_EXACT, ID_AA64ISAR2_APA3_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR2_GPA3_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR2_RPRES_SHIFT, 4, 0),
ARM64_FTR_END,
};
@@ -602,6 +606,7 @@ static const struct arm64_ftr_bits ftr_raz[] = {
struct arm64_ftr_override __ro_after_init id_aa64mmfr1_override;
struct arm64_ftr_override __ro_after_init id_aa64pfr1_override;
struct arm64_ftr_override __ro_after_init id_aa64isar1_override;
struct arm64_ftr_override __ro_after_init id_aa64isar2_override;

static const struct __ftr_reg_entry {
u32 sys_id;
@@ -650,6 +655,8 @@ static const struct __ftr_reg_entry {
ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64ISAR1_EL1, ftr_id_aa64isar1,
&id_aa64isar1_override),
ARM64_FTR_REG(SYS_ID_AA64ISAR2_EL1, ftr_id_aa64isar2),
ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64ISAR2_EL1, ftr_id_aa64isar2,
&id_aa64isar2_override),

/* Op1 = 0, CRn = 0, CRm = 7 */
ARM64_FTR_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0),
@@ -1313,7 +1320,9 @@ u64 __read_sysreg_by_encoding(u32 sys_id)
static bool
feature_matches(u64 reg, const struct arm64_cpu_capabilities *entry)
{
int val = cpuid_feature_extract_field(reg, entry->field_pos, entry->sign);
int val = cpuid_feature_extract_field_width(reg, entry->field_pos,
entry->field_width,
entry->sign);

return val >= entry->min_field_value;
}
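The hunk above widens feature matching from an implicit 4-bit field to an explicit field_width. A minimal userspace model of such an extraction, assuming ordinary two's-complement sign extension (an illustration, not the kernel's cpuid_feature_extract_field_width()):

#include <stdint.h>

/* Toy model: pull a 'width'-bit field at 'shift' out of a 64-bit ID register. */
static int64_t extract_field_width(uint64_t reg, unsigned int shift,
				   unsigned int width, int is_signed)
{
	uint64_t mask = (width < 64) ? (((uint64_t)1 << width) - 1) : ~(uint64_t)0;
	uint64_t val = (reg >> shift) & mask;

	if (is_signed && width < 64 && (val & ((uint64_t)1 << (width - 1))))
		val |= ~mask;	/* sign-extend negative fields */
	return (int64_t)val;
}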
@@ -1787,14 +1796,6 @@ static void cpu_copy_el2regs(const struct arm64_cpu_capabilities *__unused)
write_sysreg(read_sysreg(tpidr_el1), tpidr_el2);
}

static void cpu_has_fwb(const struct arm64_cpu_capabilities *__unused)
{
u64 val = read_sysreg_s(SYS_CLIDR_EL1);

/* Check that CLIDR_EL1.LOU{U,IS} are both 0 */
WARN_ON(CLIDR_LOUU(val) || CLIDR_LOUIS(val));
}

#ifdef CONFIG_ARM64_PAN
static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
{
@@ -1841,21 +1842,27 @@ static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry,
/* Now check for the secondary CPUs with SCOPE_LOCAL_CPU scope */
sec_val = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
entry->field_pos, entry->sign);
return sec_val == boot_val;
return (sec_val >= entry->min_field_value) && (sec_val == boot_val);
}

static bool has_address_auth_metacap(const struct arm64_cpu_capabilities *entry,
int scope)
{
return has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH], scope) ||
has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_IMP_DEF], scope);
bool api = has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_IMP_DEF], scope);
bool apa = has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH_QARMA5], scope);
bool apa3 = has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH_QARMA3], scope);

return apa || apa3 || api;
}

static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
int __unused)
{
return __system_matches_cap(ARM64_HAS_GENERIC_AUTH_ARCH) ||
__system_matches_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
bool gpi = __system_matches_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
bool gpa = __system_matches_cap(ARM64_HAS_GENERIC_AUTH_ARCH_QARMA5);
bool gpa3 = __system_matches_cap(ARM64_HAS_GENERIC_AUTH_ARCH_QARMA3);

return gpa || gpa3 || gpi;
}
#endif /* CONFIG_ARM64_PTR_AUTH */

@@ -1957,6 +1964,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_useable_gicv3_cpuif,
.sys_reg = SYS_ID_AA64PFR0_EL1,
.field_pos = ID_AA64PFR0_GIC_SHIFT,
.field_width = 4,
.sign = FTR_UNSIGNED,
.min_field_value = 1,
},
@@ -1967,6 +1975,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_cpuid_feature,
.sys_reg = SYS_ID_AA64MMFR0_EL1,
.field_pos = ID_AA64MMFR0_ECV_SHIFT,
.field_width = 4,
.sign = FTR_UNSIGNED,
.min_field_value = 1,
},
@@ -1978,6 +1987,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_cpuid_feature,
.sys_reg = SYS_ID_AA64MMFR1_EL1,
.field_pos = ID_AA64MMFR1_PAN_SHIFT,
.field_width = 4,
.sign = FTR_UNSIGNED,
.min_field_value = 1,
.cpu_enable = cpu_enable_pan,
@@ -1991,6 +2001,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_cpuid_feature,
.sys_reg = SYS_ID_AA64MMFR1_EL1,
.field_pos = ID_AA64MMFR1_PAN_SHIFT,
.field_width = 4,
.sign = FTR_UNSIGNED,
.min_field_value = 3,
},
@@ -2003,6 +2014,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_cpuid_feature,
.sys_reg = SYS_ID_AA64ISAR0_EL1,
.field_pos = ID_AA64ISAR0_ATOMICS_SHIFT,
.field_width = 4,
.sign = FTR_UNSIGNED,
.min_field_value = 2,
},
@@ -2027,6 +2039,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sys_reg = SYS_ID_AA64PFR0_EL1,
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64PFR0_EL0_SHIFT,
.field_width = 4,
.min_field_value = ID_AA64PFR0_ELx_32BIT_64BIT,
},
#ifdef CONFIG_KVM
@@ -2038,6 +2051,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sys_reg = SYS_ID_AA64PFR0_EL1,
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64PFR0_EL1_SHIFT,
.field_width = 4,
.min_field_value = ID_AA64PFR0_ELx_32BIT_64BIT,
},
{
@@ -2058,6 +2072,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
*/
.sys_reg = SYS_ID_AA64PFR0_EL1,
.field_pos = ID_AA64PFR0_CSV3_SHIFT,
.field_width = 4,
.min_field_value = 1,
.matches = unmap_kernel_at_el0,
.cpu_enable = kpti_install_ng_mappings,
@@ -2077,6 +2092,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_cpuid_feature,
.sys_reg = SYS_ID_AA64ISAR1_EL1,
.field_pos = ID_AA64ISAR1_DPB_SHIFT,
.field_width = 4,
.min_field_value = 1,
},
{
@@ -2087,6 +2103,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sys_reg = SYS_ID_AA64ISAR1_EL1,
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64ISAR1_DPB_SHIFT,
.field_width = 4,
.min_field_value = 2,
},
#endif
@@ -2098,6 +2115,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sys_reg = SYS_ID_AA64PFR0_EL1,
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64PFR0_SVE_SHIFT,
.field_width = 4,
.min_field_value = ID_AA64PFR0_SVE,
.matches = has_cpuid_feature,
.cpu_enable = sve_kernel_enable,
@@ -2112,6 +2130,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sys_reg = SYS_ID_AA64PFR0_EL1,
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64PFR0_RAS_SHIFT,
.field_width = 4,
.min_field_value = ID_AA64PFR0_RAS_V1,
.cpu_enable = cpu_clear_disr,
},
@@ -2130,6 +2149,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sys_reg = SYS_ID_AA64PFR0_EL1,
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64PFR0_AMU_SHIFT,
.field_width = 4,
.min_field_value = ID_AA64PFR0_AMU,
.cpu_enable = cpu_amu_enable,
},
@@ -2154,9 +2174,9 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sys_reg = SYS_ID_AA64MMFR2_EL1,
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64MMFR2_FWB_SHIFT,
.field_width = 4,
.min_field_value = 1,
.matches = has_cpuid_feature,
.cpu_enable = cpu_has_fwb,
},
{
.desc = "ARMv8.4 Translation Table Level",
@@ -2165,6 +2185,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sys_reg = SYS_ID_AA64MMFR2_EL1,
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64MMFR2_TTL_SHIFT,
.field_width = 4,
.min_field_value = 1,
.matches = has_cpuid_feature,
},
@@ -2175,6 +2196,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_cpuid_feature,
.sys_reg = SYS_ID_AA64ISAR0_EL1,
.field_pos = ID_AA64ISAR0_TLB_SHIFT,
.field_width = 4,
.sign = FTR_UNSIGNED,
.min_field_value = ID_AA64ISAR0_TLB_RANGE,
},
@@ -2193,6 +2215,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sys_reg = SYS_ID_AA64MMFR1_EL1,
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64MMFR1_HADBS_SHIFT,
.field_width = 4,
.min_field_value = 2,
.matches = has_hw_dbm,
.cpu_enable = cpu_enable_hw_dbm,
@@ -2205,6 +2228,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_cpuid_feature,
.sys_reg = SYS_ID_AA64ISAR0_EL1,
.field_pos = ID_AA64ISAR0_CRC32_SHIFT,
.field_width = 4,
.min_field_value = 1,
},
{
@@ -2214,6 +2238,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_cpuid_feature,
.sys_reg = SYS_ID_AA64PFR1_EL1,
.field_pos = ID_AA64PFR1_SSBS_SHIFT,
.field_width = 4,
.sign = FTR_UNSIGNED,
.min_field_value = ID_AA64PFR1_SSBS_PSTATE_ONLY,
},
@@ -2226,6 +2251,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sys_reg = SYS_ID_AA64MMFR2_EL1,
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64MMFR2_CNP_SHIFT,
.field_width = 4,
.min_field_value = 1,
.cpu_enable = cpu_enable_cnp,
},
@@ -2237,20 +2263,33 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_cpuid_feature,
.sys_reg = SYS_ID_AA64ISAR1_EL1,
.field_pos = ID_AA64ISAR1_SB_SHIFT,
.field_width = 4,
.sign = FTR_UNSIGNED,
.min_field_value = 1,
},
#ifdef CONFIG_ARM64_PTR_AUTH
{
.desc = "Address authentication (architected algorithm)",
.capability = ARM64_HAS_ADDRESS_AUTH_ARCH,
.desc = "Address authentication (architected QARMA5 algorithm)",
.capability = ARM64_HAS_ADDRESS_AUTH_ARCH_QARMA5,
.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
.sys_reg = SYS_ID_AA64ISAR1_EL1,
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64ISAR1_APA_SHIFT,
.field_width = 4,
.min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
.matches = has_address_auth_cpucap,
},
{
.desc = "Address authentication (architected QARMA3 algorithm)",
.capability = ARM64_HAS_ADDRESS_AUTH_ARCH_QARMA3,
.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
.sys_reg = SYS_ID_AA64ISAR2_EL1,
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64ISAR2_APA3_SHIFT,
.field_width = 4,
.min_field_value = ID_AA64ISAR2_APA3_ARCHITECTED,
.matches = has_address_auth_cpucap,
},
{
.desc = "Address authentication (IMP DEF algorithm)",
.capability = ARM64_HAS_ADDRESS_AUTH_IMP_DEF,
@@ -2258,6 +2297,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sys_reg = SYS_ID_AA64ISAR1_EL1,
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64ISAR1_API_SHIFT,
.field_width = 4,
.min_field_value = ID_AA64ISAR1_API_IMP_DEF,
.matches = has_address_auth_cpucap,
},
@@ -2267,15 +2307,27 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_address_auth_metacap,
},
{
.desc = "Generic authentication (architected algorithm)",
.capability = ARM64_HAS_GENERIC_AUTH_ARCH,
.desc = "Generic authentication (architected QARMA5 algorithm)",
.capability = ARM64_HAS_GENERIC_AUTH_ARCH_QARMA5,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.sys_reg = SYS_ID_AA64ISAR1_EL1,
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64ISAR1_GPA_SHIFT,
.field_width = 4,
.min_field_value = ID_AA64ISAR1_GPA_ARCHITECTED,
.matches = has_cpuid_feature,
},
{
.desc = "Generic authentication (architected QARMA3 algorithm)",
.capability = ARM64_HAS_GENERIC_AUTH_ARCH_QARMA3,
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.sys_reg = SYS_ID_AA64ISAR2_EL1,
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64ISAR2_GPA3_SHIFT,
.field_width = 4,
.min_field_value = ID_AA64ISAR2_GPA3_ARCHITECTED,
.matches = has_cpuid_feature,
},
{
.desc = "Generic authentication (IMP DEF algorithm)",
.capability = ARM64_HAS_GENERIC_AUTH_IMP_DEF,
@@ -2283,6 +2335,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sys_reg = SYS_ID_AA64ISAR1_EL1,
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64ISAR1_GPI_SHIFT,
.field_width = 4,
.min_field_value = ID_AA64ISAR1_GPI_IMP_DEF,
.matches = has_cpuid_feature,
},
@@ -2303,6 +2356,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = can_use_gic_priorities,
.sys_reg = SYS_ID_AA64PFR0_EL1,
.field_pos = ID_AA64PFR0_GIC_SHIFT,
.field_width = 4,
.sign = FTR_UNSIGNED,
.min_field_value = 1,
},
@@ -2314,6 +2368,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.sys_reg = SYS_ID_AA64MMFR2_EL1,
.sign = FTR_UNSIGNED,
.field_width = 4,
.field_pos = ID_AA64MMFR2_E0PD_SHIFT,
.matches = has_cpuid_feature,
.min_field_value = 1,
@@ -2328,6 +2383,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_cpuid_feature,
.sys_reg = SYS_ID_AA64ISAR0_EL1,
.field_pos = ID_AA64ISAR0_RNDR_SHIFT,
.field_width = 4,
.sign = FTR_UNSIGNED,
.min_field_value = 1,
},
@@ -2345,6 +2401,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.cpu_enable = bti_enable,
.sys_reg = SYS_ID_AA64PFR1_EL1,
.field_pos = ID_AA64PFR1_BT_SHIFT,
.field_width = 4,
.min_field_value = ID_AA64PFR1_BT_BTI,
.sign = FTR_UNSIGNED,
},
@@ -2357,6 +2414,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_cpuid_feature,
.sys_reg = SYS_ID_AA64PFR1_EL1,
.field_pos = ID_AA64PFR1_MTE_SHIFT,
.field_width = 4,
.min_field_value = ID_AA64PFR1_MTE,
.sign = FTR_UNSIGNED,
.cpu_enable = cpu_enable_mte,
@@ -2368,6 +2426,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_cpuid_feature,
.sys_reg = SYS_ID_AA64PFR1_EL1,
.field_pos = ID_AA64PFR1_MTE_SHIFT,
.field_width = 4,
.min_field_value = ID_AA64PFR1_MTE_ASYMM,
.sign = FTR_UNSIGNED,
},
@@ -2379,16 +2438,18 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sys_reg = SYS_ID_AA64ISAR1_EL1,
.sign = FTR_UNSIGNED,
.field_pos = ID_AA64ISAR1_LRCPC_SHIFT,
.field_width = 4,
.matches = has_cpuid_feature,
.min_field_value = 1,
},
{},
};

#define HWCAP_CPUID_MATCH(reg, field, s, min_value) \
#define HWCAP_CPUID_MATCH(reg, field, width, s, min_value) \
.matches = has_cpuid_feature, \
.sys_reg = reg, \
.field_pos = field, \
.field_width = width, \
.sign = s, \
.min_field_value = min_value,

@@ -2398,10 +2459,10 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.hwcap_type = cap_type, \
.hwcap = cap, \

#define HWCAP_CAP(reg, field, s, min_value, cap_type, cap) \
#define HWCAP_CAP(reg, field, width, s, min_value, cap_type, cap) \
{ \
__HWCAP_CAP(#cap, cap_type, cap) \
HWCAP_CPUID_MATCH(reg, field, s, min_value) \
HWCAP_CPUID_MATCH(reg, field, width, s, min_value) \
}

#define HWCAP_MULTI_CAP(list, cap_type, cap) \
@@ -2421,11 +2482,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
static const struct arm64_cpu_capabilities ptr_auth_hwcap_addr_matches[] = {
{
HWCAP_CPUID_MATCH(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_APA_SHIFT,
FTR_UNSIGNED, ID_AA64ISAR1_APA_ARCHITECTED)
4, FTR_UNSIGNED,
ID_AA64ISAR1_APA_ARCHITECTED)
},
{
HWCAP_CPUID_MATCH(SYS_ID_AA64ISAR2_EL1, ID_AA64ISAR2_APA3_SHIFT,
4, FTR_UNSIGNED, ID_AA64ISAR2_APA3_ARCHITECTED)
},
{
HWCAP_CPUID_MATCH(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_API_SHIFT,
FTR_UNSIGNED, ID_AA64ISAR1_API_IMP_DEF)
4, FTR_UNSIGNED, ID_AA64ISAR1_API_IMP_DEF)
},
{},
};
@@ -2433,77 +2499,82 @@ static const struct arm64_cpu_capabilities ptr_auth_hwcap_addr_matches[] = {
static const struct arm64_cpu_capabilities ptr_auth_hwcap_gen_matches[] = {
{
HWCAP_CPUID_MATCH(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_GPA_SHIFT,
FTR_UNSIGNED, ID_AA64ISAR1_GPA_ARCHITECTED)
4, FTR_UNSIGNED, ID_AA64ISAR1_GPA_ARCHITECTED)
},
{
HWCAP_CPUID_MATCH(SYS_ID_AA64ISAR2_EL1, ID_AA64ISAR2_GPA3_SHIFT,
4, FTR_UNSIGNED, ID_AA64ISAR2_GPA3_ARCHITECTED)
},
{
HWCAP_CPUID_MATCH(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_GPI_SHIFT,
FTR_UNSIGNED, ID_AA64ISAR1_GPI_IMP_DEF)
4, FTR_UNSIGNED, ID_AA64ISAR1_GPI_IMP_DEF)
},
{},
};
#endif

static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_PMULL),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_AES),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA1_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SHA1),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SHA2),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_SHA512),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_CRC32_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_CRC32),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_ATOMICS_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_ATOMICS),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_RDM_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDRDM),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA3_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SHA3),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM3_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SM3),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM4_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SM4),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_DP_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDDP),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_FHM_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDFHM),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_TS_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FLAGM),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_TS_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_FLAGM2),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_RNDR_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_RNG),
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, KERNEL_HWCAP_FP),
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FPHP),
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, KERNEL_HWCAP_ASIMD),
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDHP),
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_DIT_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DIT),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DPB_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DCPOP),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DPB_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_DCPODP),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_JSCVT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_JSCVT),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_FCMA_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FCMA),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_LRCPC),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_ILRCPC),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_FRINTTS_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FRINT),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_SB_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SB),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_BF16_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_BF16),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DGH_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DGH),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_I8MM_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_I8MM),
HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_AT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_USCAT),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, 4, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_PMULL),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_AES),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA1_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SHA1),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SHA2),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, 4, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_SHA512),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_CRC32_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_CRC32),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_ATOMICS_SHIFT, 4, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_ATOMICS),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_RDM_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDRDM),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA3_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SHA3),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM3_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SM3),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM4_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SM4),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_DP_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDDP),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_FHM_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDFHM),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_TS_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FLAGM),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_TS_SHIFT, 4, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_FLAGM2),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_RNDR_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_RNG),
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, 4, FTR_SIGNED, 0, CAP_HWCAP, KERNEL_HWCAP_FP),
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, 4, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FPHP),
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, 4, FTR_SIGNED, 0, CAP_HWCAP, KERNEL_HWCAP_ASIMD),
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, 4, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDHP),
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_DIT_SHIFT, 4, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DIT),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DPB_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DCPOP),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DPB_SHIFT, 4, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_DCPODP),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_JSCVT_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_JSCVT),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_FCMA_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FCMA),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_LRCPC),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, 4, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_ILRCPC),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_FRINTTS_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FRINT),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_SB_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SB),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_BF16_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_BF16),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DGH_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DGH),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_I8MM_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_I8MM),
HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_AT_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_USCAT),
#ifdef CONFIG_ARM64_SVE
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, KERNEL_HWCAP_SVE),
HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_SVEVER_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_SVEVER_SVE2, CAP_HWCAP, KERNEL_HWCAP_SVE2),
HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_AES_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_AES, CAP_HWCAP, KERNEL_HWCAP_SVEAES),
HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_AES_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_AES_PMULL, CAP_HWCAP, KERNEL_HWCAP_SVEPMULL),
HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_BITPERM_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_BITPERM, CAP_HWCAP, KERNEL_HWCAP_SVEBITPERM),
HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_BF16_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_BF16, CAP_HWCAP, KERNEL_HWCAP_SVEBF16),
HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_SHA3_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_SHA3, CAP_HWCAP, KERNEL_HWCAP_SVESHA3),
HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_SM4_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_SM4, CAP_HWCAP, KERNEL_HWCAP_SVESM4),
HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_I8MM_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_I8MM, CAP_HWCAP, KERNEL_HWCAP_SVEI8MM),
HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_F32MM_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_F32MM, CAP_HWCAP, KERNEL_HWCAP_SVEF32MM),
HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_F64MM_SHIFT, FTR_UNSIGNED, ID_AA64ZFR0_F64MM, CAP_HWCAP, KERNEL_HWCAP_SVEF64MM),
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, 4, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, KERNEL_HWCAP_SVE),
HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_SVEVER_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_SVEVER_SVE2, CAP_HWCAP, KERNEL_HWCAP_SVE2),
HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_AES_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_AES, CAP_HWCAP, KERNEL_HWCAP_SVEAES),
HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_AES_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_AES_PMULL, CAP_HWCAP, KERNEL_HWCAP_SVEPMULL),
HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_BITPERM_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_BITPERM, CAP_HWCAP, KERNEL_HWCAP_SVEBITPERM),
HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_BF16_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_BF16, CAP_HWCAP, KERNEL_HWCAP_SVEBF16),
HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_SHA3_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_SHA3, CAP_HWCAP, KERNEL_HWCAP_SVESHA3),
HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_SM4_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_SM4, CAP_HWCAP, KERNEL_HWCAP_SVESM4),
HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_I8MM_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_I8MM, CAP_HWCAP, KERNEL_HWCAP_SVEI8MM),
HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_F32MM_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_F32MM, CAP_HWCAP, KERNEL_HWCAP_SVEF32MM),
HWCAP_CAP(SYS_ID_AA64ZFR0_EL1, ID_AA64ZFR0_F64MM_SHIFT, 4, FTR_UNSIGNED, ID_AA64ZFR0_F64MM, CAP_HWCAP, KERNEL_HWCAP_SVEF64MM),
#endif
HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, KERNEL_HWCAP_SSBS),
HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, 4, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, KERNEL_HWCAP_SSBS),
#ifdef CONFIG_ARM64_BTI
HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_BT_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_BT_BTI, CAP_HWCAP, KERNEL_HWCAP_BTI),
HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_BT_SHIFT, 4, FTR_UNSIGNED, ID_AA64PFR1_BT_BTI, CAP_HWCAP, KERNEL_HWCAP_BTI),
#endif
#ifdef CONFIG_ARM64_PTR_AUTH
HWCAP_MULTI_CAP(ptr_auth_hwcap_addr_matches, CAP_HWCAP, KERNEL_HWCAP_PACA),
HWCAP_MULTI_CAP(ptr_auth_hwcap_gen_matches, CAP_HWCAP, KERNEL_HWCAP_PACG),
#endif
#ifdef CONFIG_ARM64_MTE
HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_MTE_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_MTE, CAP_HWCAP, KERNEL_HWCAP_MTE),
HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_MTE_SHIFT, 4, FTR_UNSIGNED, ID_AA64PFR1_MTE, CAP_HWCAP, KERNEL_HWCAP_MTE),
HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_MTE_SHIFT, 4, FTR_UNSIGNED, ID_AA64PFR1_MTE_ASYMM, CAP_HWCAP, KERNEL_HWCAP_MTE3),
#endif /* CONFIG_ARM64_MTE */
HWCAP_CAP(SYS_ID_AA64MMFR0_EL1, ID_AA64MMFR0_ECV_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ECV),
HWCAP_CAP(SYS_ID_AA64MMFR1_EL1, ID_AA64MMFR1_AFP_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_AFP),
HWCAP_CAP(SYS_ID_AA64ISAR2_EL1, ID_AA64ISAR2_RPRES_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_RPRES),
HWCAP_CAP(SYS_ID_AA64MMFR0_EL1, ID_AA64MMFR0_ECV_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ECV),
HWCAP_CAP(SYS_ID_AA64MMFR1_EL1, ID_AA64MMFR1_AFP_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_AFP),
HWCAP_CAP(SYS_ID_AA64ISAR2_EL1, ID_AA64ISAR2_RPRES_SHIFT, 4, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_RPRES),
{},
};

@@ -2532,15 +2603,15 @@ static bool compat_has_neon(const struct arm64_cpu_capabilities *cap, int scope)
static const struct arm64_cpu_capabilities compat_elf_hwcaps[] = {
#ifdef CONFIG_COMPAT
HWCAP_CAP_MATCH(compat_has_neon, CAP_COMPAT_HWCAP, COMPAT_HWCAP_NEON),
HWCAP_CAP(SYS_MVFR1_EL1, MVFR1_SIMDFMAC_SHIFT, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP, COMPAT_HWCAP_VFPv4),
HWCAP_CAP(SYS_MVFR1_EL1, MVFR1_SIMDFMAC_SHIFT, 4, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP, COMPAT_HWCAP_VFPv4),
/* Arm v8 mandates MVFR0.FPDP == {0, 2}. So, piggy back on this for the presence of VFP support */
HWCAP_CAP(SYS_MVFR0_EL1, MVFR0_FPDP_SHIFT, FTR_UNSIGNED, 2, CAP_COMPAT_HWCAP, COMPAT_HWCAP_VFP),
HWCAP_CAP(SYS_MVFR0_EL1, MVFR0_FPDP_SHIFT, FTR_UNSIGNED, 2, CAP_COMPAT_HWCAP, COMPAT_HWCAP_VFPv3),
HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_AES_SHIFT, FTR_UNSIGNED, 2, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_PMULL),
HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_AES_SHIFT, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_AES),
HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_SHA1_SHIFT, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_SHA1),
HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_SHA2_SHIFT, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_SHA2),
HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_CRC32_SHIFT, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_CRC32),
HWCAP_CAP(SYS_MVFR0_EL1, MVFR0_FPDP_SHIFT, 4, FTR_UNSIGNED, 2, CAP_COMPAT_HWCAP, COMPAT_HWCAP_VFP),
HWCAP_CAP(SYS_MVFR0_EL1, MVFR0_FPDP_SHIFT, 4, FTR_UNSIGNED, 2, CAP_COMPAT_HWCAP, COMPAT_HWCAP_VFPv3),
HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_AES_SHIFT, 4, FTR_UNSIGNED, 2, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_PMULL),
HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_AES_SHIFT, 4, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_AES),
HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_SHA1_SHIFT, 4, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_SHA1),
HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_SHA2_SHIFT, 4, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_SHA2),
HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_CRC32_SHIFT, 4, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_CRC32),
#endif
{},
};
@@ -97,6 +97,7 @@ static const char *const hwcap_str[] = {
[KERNEL_HWCAP_ECV] = "ecv",
[KERNEL_HWCAP_AFP] = "afp",
[KERNEL_HWCAP_RPRES] = "rpres",
[KERNEL_HWCAP_MTE3] = "mte3",
};

#ifdef CONFIG_COMPAT
@@ -20,6 +20,12 @@ void arch_crash_save_vmcoreinfo(void)
{
VMCOREINFO_NUMBER(VA_BITS);
/* Please note VMCOREINFO_NUMBER() uses "%d", not "%x" */
vmcoreinfo_append_str("NUMBER(MODULES_VADDR)=0x%lx\n", MODULES_VADDR);
vmcoreinfo_append_str("NUMBER(MODULES_END)=0x%lx\n", MODULES_END);
vmcoreinfo_append_str("NUMBER(VMALLOC_START)=0x%lx\n", VMALLOC_START);
vmcoreinfo_append_str("NUMBER(VMALLOC_END)=0x%lx\n", VMALLOC_END);
vmcoreinfo_append_str("NUMBER(VMEMMAP_START)=0x%lx\n", VMEMMAP_START);
vmcoreinfo_append_str("NUMBER(VMEMMAP_END)=0x%lx\n", VMEMMAP_END);
vmcoreinfo_append_str("NUMBER(kimage_voffset)=0x%llx\n",
kimage_voffset);
vmcoreinfo_append_str("NUMBER(PHYS_OFFSET)=0x%llx\n",
arch/arm64/kernel/elfcore.c (new file, 134 lines)
@@ -0,0 +1,134 @@
// SPDX-License-Identifier: GPL-2.0-only

#include <linux/coredump.h>
#include <linux/elfcore.h>
#include <linux/kernel.h>
#include <linux/mm.h>

#include <asm/cpufeature.h>
#include <asm/mte.h>

#ifndef VMA_ITERATOR
#define VMA_ITERATOR(name, mm, addr) \
	struct mm_struct *name = mm
#define for_each_vma(vmi, vma) \
	for (vma = vmi->mmap; vma; vma = vma->vm_next)
#endif

#define for_each_mte_vma(vmi, vma) \
	if (system_supports_mte()) \
		for_each_vma(vmi, vma) \
			if (vma->vm_flags & VM_MTE)

static unsigned long mte_vma_tag_dump_size(struct vm_area_struct *vma)
{
	if (vma->vm_flags & VM_DONTDUMP)
		return 0;

	return vma_pages(vma) * MTE_PAGE_TAG_STORAGE;
}

/* Derived from dump_user_range(); start/end must be page-aligned */
static int mte_dump_tag_range(struct coredump_params *cprm,
			      unsigned long start, unsigned long end)
{
	unsigned long addr;

	for (addr = start; addr < end; addr += PAGE_SIZE) {
		char tags[MTE_PAGE_TAG_STORAGE];
		struct page *page = get_dump_page(addr);

		/*
		 * get_dump_page() returns NULL when encountering an empty
		 * page table entry that would otherwise have been filled with
		 * the zero page. Skip the equivalent tag dump which would
		 * have been all zeros.
		 */
		if (!page) {
			dump_skip(cprm, MTE_PAGE_TAG_STORAGE);
			continue;
		}

		/*
		 * Pages mapped in user space as !pte_access_permitted() (e.g.
		 * PROT_EXEC only) may not have the PG_mte_tagged flag set.
		 */
		if (!test_bit(PG_mte_tagged, &page->flags)) {
			put_page(page);
			dump_skip(cprm, MTE_PAGE_TAG_STORAGE);
			continue;
		}

		mte_save_page_tags(page_address(page), tags);
		put_page(page);
		if (!dump_emit(cprm, tags, MTE_PAGE_TAG_STORAGE))
			return 0;
	}

	return 1;
}

Elf_Half elf_core_extra_phdrs(void)
{
	struct vm_area_struct *vma;
	int vma_count = 0;
	VMA_ITERATOR(vmi, current->mm, 0);

	for_each_mte_vma(vmi, vma)
		vma_count++;

	return vma_count;
}

int elf_core_write_extra_phdrs(struct coredump_params *cprm, loff_t offset)
{
	struct vm_area_struct *vma;
	VMA_ITERATOR(vmi, current->mm, 0);

	for_each_mte_vma(vmi, vma) {
		struct elf_phdr phdr;

		phdr.p_type = PT_ARM_MEMTAG_MTE;
		phdr.p_offset = offset;
		phdr.p_vaddr = vma->vm_start;
		phdr.p_paddr = 0;
		phdr.p_filesz = mte_vma_tag_dump_size(vma);
		phdr.p_memsz = vma->vm_end - vma->vm_start;
		offset += phdr.p_filesz;
		phdr.p_flags = 0;
		phdr.p_align = 0;

		if (!dump_emit(cprm, &phdr, sizeof(phdr)))
			return 0;
	}

	return 1;
}

size_t elf_core_extra_data_size(void)
{
	struct vm_area_struct *vma;
	size_t data_size = 0;
	VMA_ITERATOR(vmi, current->mm, 0);

	for_each_mte_vma(vmi, vma)
		data_size += mte_vma_tag_dump_size(vma);

	return data_size;
}

int elf_core_write_extra_data(struct coredump_params *cprm)
{
	struct vm_area_struct *vma;
	VMA_ITERATOR(vmi, current->mm, 0);

	for_each_mte_vma(vmi, vma) {
		if (vma->vm_flags & VM_DONTDUMP)
			continue;

		if (!mte_dump_tag_range(cprm, vma->vm_start, vma->vm_end))
			return 0;
	}

	return 1;
}
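With the file above in place, a core dump of an MTE-using process carries one PT_ARM_MEMTAG_MTE program header per tagged VMA. A hedged userspace sketch that lists those segments; the phdr type value mirrors the uapi ELF header (defined locally in case libc headers lack it), and error handling is minimal:

#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#ifndef PT_ARM_MEMTAG_MTE
#define PT_ARM_MEMTAG_MTE 0x70000002	/* mirrors include/uapi/linux/elf.h */
#endif

int main(int argc, char **argv)
{
	Elf64_Ehdr ehdr;
	int fd = open(argv[1], O_RDONLY);	/* argv[1]: path to a core file */

	if (fd < 0 || read(fd, &ehdr, sizeof(ehdr)) != sizeof(ehdr))
		return 1;

	for (int i = 0; i < ehdr.e_phnum; i++) {
		Elf64_Phdr phdr;

		if (pread(fd, &phdr, sizeof(phdr),
			  ehdr.e_phoff + (off_t)i * ehdr.e_phentsize) != sizeof(phdr))
			return 1;
		if (phdr.p_type == PT_ARM_MEMTAG_MTE)
			printf("MTE tags for [%#llx, %#llx): %llu bytes in file\n",
			       (unsigned long long)phdr.p_vaddr,
			       (unsigned long long)(phdr.p_vaddr + phdr.p_memsz),
			       (unsigned long long)phdr.p_filesz);
	}
	close(fd);
	return 0;
}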
@@ -6,6 +6,7 @@
*/

#include <linux/context_tracking.h>
#include <linux/kasan.h>
#include <linux/linkage.h>
#include <linux/lockdep.h>
#include <linux/ptrace.h>
@@ -56,6 +57,7 @@ static void noinstr enter_from_kernel_mode(struct pt_regs *regs)
{
__enter_from_kernel_mode(regs);
mte_check_tfsr_entry();
mte_disable_tco_entry(current);
}

/*
@@ -103,6 +105,7 @@ static __always_inline void __enter_from_user_mode(void)
CT_WARN_ON(ct_state() != CONTEXT_USER);
user_exit_irqoff();
trace_hardirqs_off_finish();
mte_disable_tco_entry(current);
}

static __always_inline void enter_from_user_mode(struct pt_regs *regs)
@@ -307,6 +307,7 @@ alternative_else_nop_endif
str w21, [sp, #S_SYSCALLNO]
.endif

#ifdef CONFIG_ARM64_PSEUDO_NMI
/* Save pmr */
alternative_if ARM64_HAS_IRQ_PRIO_MASKING
mrs_s x20, SYS_ICC_PMR_EL1
@@ -314,12 +315,6 @@ alternative_if ARM64_HAS_IRQ_PRIO_MASKING
mov x20, #GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET
msr_s SYS_ICC_PMR_EL1, x20
alternative_else_nop_endif

/* Re-enable tag checking (TCO set on exception entry) */
#ifdef CONFIG_ARM64_MTE
alternative_if ARM64_MTE
SET_PSTATE_TCO(0)
alternative_else_nop_endif
#endif

/*
@@ -337,6 +332,7 @@ alternative_else_nop_endif
disable_daif
.endif

#ifdef CONFIG_ARM64_PSEUDO_NMI
/* Restore pmr */
alternative_if ARM64_HAS_IRQ_PRIO_MASKING
ldr x20, [sp, #S_PMR_SAVE]
@@ -346,6 +342,7 @@ alternative_if ARM64_HAS_IRQ_PRIO_MASKING
dsb sy // Ensure priority change is seen by redistributor
.L__skip_pmr_sync\@:
alternative_else_nop_endif
#endif

ldp x21, x22, [sp, #S_PC] // load ELR, SPSR

@@ -17,7 +17,7 @@
#define FTR_DESC_NAME_LEN 20
#define FTR_DESC_FIELD_LEN 10
#define FTR_ALIAS_NAME_LEN 30
#define FTR_ALIAS_OPTION_LEN 80
#define FTR_ALIAS_OPTION_LEN 116

struct ftr_set_desc {
char name[FTR_DESC_NAME_LEN];
@@ -71,6 +71,16 @@ static const struct ftr_set_desc isar1 __initconst = {
},
};

static const struct ftr_set_desc isar2 __initconst = {
.name = "id_aa64isar2",
.override = &id_aa64isar2_override,
.fields = {
{ "gpa3", ID_AA64ISAR2_GPA3_SHIFT },
{ "apa3", ID_AA64ISAR2_APA3_SHIFT },
{}
},
};

extern struct arm64_ftr_override kaslr_feature_override;

static const struct ftr_set_desc kaslr __initconst = {
@@ -88,6 +98,7 @@ static const struct ftr_set_desc * const regs[] __initconst = {
&mmfr1,
&pfr1,
&isar1,
&isar2,
&kaslr,
};

@@ -100,7 +111,8 @@ static const struct {
{ "arm64.nobti", "id_aa64pfr1.bt=0" },
{ "arm64.nopauth",
"id_aa64isar1.gpi=0 id_aa64isar1.gpa=0 "
"id_aa64isar1.api=0 id_aa64isar1.apa=0" },
"id_aa64isar1.api=0 id_aa64isar1.apa=0 "
"id_aa64isar2.gpa3=0 id_aa64isar2.apa3=0" },
{ "arm64.nomte", "id_aa64pfr1.mte=0" },
{ "nokaslr", "kaslr.disabled=1" },
};
|
@ -186,6 +186,11 @@ void mte_check_tfsr_el1(void)
|
||||
}
|
||||
#endif
|
||||
|
||||
/*
|
||||
* This is where we actually resolve the system and process MTE mode
|
||||
* configuration into an actual value in SCTLR_EL1 that affects
|
||||
* userspace.
|
||||
*/
|
||||
static void mte_update_sctlr_user(struct task_struct *task)
|
||||
{
|
||||
/*
|
||||
@ -199,9 +204,20 @@ static void mte_update_sctlr_user(struct task_struct *task)
|
||||
unsigned long pref, resolved_mte_tcf;
|
||||
|
||||
pref = __this_cpu_read(mte_tcf_preferred);
|
||||
/*
|
||||
* If there is no overlap between the system preferred and
|
||||
* program requested values go with what was requested.
|
||||
*/
|
||||
resolved_mte_tcf = (mte_ctrl & pref) ? pref : mte_ctrl;
|
||||
sctlr &= ~SCTLR_EL1_TCF0_MASK;
|
||||
if (resolved_mte_tcf & MTE_CTRL_TCF_ASYNC)
|
||||
/*
|
||||
* Pick an actual setting. The order in which we check for
|
||||
* set bits and map into register values determines our
|
||||
* default order.
|
||||
*/
|
||||
if (resolved_mte_tcf & MTE_CTRL_TCF_ASYMM)
|
||||
sctlr |= SCTLR_EL1_TCF0_ASYMM;
|
||||
else if (resolved_mte_tcf & MTE_CTRL_TCF_ASYNC)
|
||||
sctlr |= SCTLR_EL1_TCF0_ASYNC;
|
||||
else if (resolved_mte_tcf & MTE_CTRL_TCF_SYNC)
|
||||
sctlr |= SCTLR_EL1_TCF0_SYNC;
|
||||
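A toy model (plain userspace C, not the kernel code) of the resolution rule described in the comments above: the CPU's preferred mode wins only when the task also asked for it, and asymm outranks async outranks sync when the result is mapped to a register setting.

#define TCF_SYNC	(1UL << 16)	/* mirror the MTE_CTRL_TCF_* bits */
#define TCF_ASYNC	(1UL << 17)
#define TCF_ASYMM	(1UL << 18)

static unsigned long resolve_tcf(unsigned long mte_ctrl, unsigned long pref)
{
	/* Honour the CPU's preference only if the task requested it too. */
	return (mte_ctrl & pref) ? pref : mte_ctrl;
}
/* resolve_tcf(TCF_SYNC | TCF_ASYNC, TCF_ASYNC) == TCF_ASYNC;
 * resolve_tcf(TCF_SYNC, TCF_ASYMM) == TCF_SYNC (preference ignored). */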
@@ -253,6 +269,9 @@ void mte_thread_switch(struct task_struct *next)
mte_update_sctlr_user(next);
mte_update_gcr_excl(next);

/* TCO may not have been disabled on exception entry for the current task. */
mte_disable_tco_entry(next);

/*
 * Check if an async tag exception occurred at EL1.
 *
@@ -293,6 +312,17 @@ long set_mte_ctrl(struct task_struct *task, unsigned long arg)
if (arg & PR_MTE_TCF_SYNC)
mte_ctrl |= MTE_CTRL_TCF_SYNC;

/*
 * If the system supports it and both sync and async modes are
 * specified then implicitly enable asymmetric mode.
 * Userspace could see a mix of both sync and async anyway due
 * to differing or changing defaults on CPUs.
 */
if (cpus_have_cap(ARM64_MTE_ASYMM) &&
(arg & PR_MTE_TCF_ASYNC) &&
(arg & PR_MTE_TCF_SYNC))
mte_ctrl |= MTE_CTRL_TCF_ASYMM;

task->thread.mte_ctrl = mte_ctrl;
if (task == current) {
preempt_disable();
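Seen from userspace, the implicit-asymm path above is exercised with prctl(). A sketch; the PR_* constants are mirrored from linux/prctl.h in case the installed headers predate them:

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_MTE_TCF_SYNC
#define PR_SET_TAGGED_ADDR_CTRL	55
#define PR_TAGGED_ADDR_ENABLE	(1UL << 0)
#define PR_MTE_TCF_SYNC		(1UL << 1)
#define PR_MTE_TCF_ASYNC	(1UL << 2)
#endif

int main(void)
{
	/*
	 * Request both modes; on hardware with ARM64_MTE_ASYMM the kernel
	 * may now resolve this to asymmetric mode, per set_mte_ctrl() above.
	 */
	unsigned long flags = PR_TAGGED_ADDR_ENABLE |
			      PR_MTE_TCF_SYNC | PR_MTE_TCF_ASYNC;

	if (prctl(PR_SET_TAGGED_ADDR_CTRL, flags, 0, 0, 0))
		perror("prctl");
	return 0;
}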
@@ -467,6 +497,8 @@ static ssize_t mte_tcf_preferred_show(struct device *dev,
return sysfs_emit(buf, "async\n");
case MTE_CTRL_TCF_SYNC:
return sysfs_emit(buf, "sync\n");
case MTE_CTRL_TCF_ASYMM:
return sysfs_emit(buf, "asymm\n");
default:
return sysfs_emit(buf, "???\n");
}
@@ -482,6 +514,8 @@ static ssize_t mte_tcf_preferred_store(struct device *dev,
tcf = MTE_CTRL_TCF_ASYNC;
else if (sysfs_streq(buf, "sync"))
tcf = MTE_CTRL_TCF_SYNC;
else if (cpus_have_cap(ARM64_MTE_ASYMM) && sysfs_streq(buf, "asymm"))
tcf = MTE_CTRL_TCF_ASYMM;
else
return -EINVAL;

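The extended knob can be driven as below; the per-CPU path is an assumption based on where this device attribute is registered, and writes of "asymm" are rejected with EINVAL on CPUs without ARM64_MTE_ASYMM:

#include <stdio.h>

int main(void)
{
	/* Assumed per-CPU sysfs path for the attribute extended above. */
	FILE *f = fopen("/sys/devices/system/cpu/cpu0/mte_tcf_preferred", "w");

	if (!f)
		return 1;
	fputs("asymm\n", f);
	return fclose(f) ? 1 : 0;	/* fclose() reports the failed write */
}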
@@ -242,6 +242,16 @@ static struct attribute *armv8_pmuv3_event_attrs[] = {
ARMV8_EVENT_ATTR(l2d_cache_lmiss_rd, ARMV8_PMUV3_PERFCTR_L2D_CACHE_LMISS_RD),
ARMV8_EVENT_ATTR(l2i_cache_lmiss, ARMV8_PMUV3_PERFCTR_L2I_CACHE_LMISS),
ARMV8_EVENT_ATTR(l3d_cache_lmiss_rd, ARMV8_PMUV3_PERFCTR_L3D_CACHE_LMISS_RD),
ARMV8_EVENT_ATTR(trb_wrap, ARMV8_PMUV3_PERFCTR_TRB_WRAP),
ARMV8_EVENT_ATTR(trb_trig, ARMV8_PMUV3_PERFCTR_TRB_TRIG),
ARMV8_EVENT_ATTR(trcextout0, ARMV8_PMUV3_PERFCTR_TRCEXTOUT0),
ARMV8_EVENT_ATTR(trcextout1, ARMV8_PMUV3_PERFCTR_TRCEXTOUT1),
ARMV8_EVENT_ATTR(trcextout2, ARMV8_PMUV3_PERFCTR_TRCEXTOUT2),
ARMV8_EVENT_ATTR(trcextout3, ARMV8_PMUV3_PERFCTR_TRCEXTOUT3),
ARMV8_EVENT_ATTR(cti_trigout4, ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT4),
ARMV8_EVENT_ATTR(cti_trigout5, ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT5),
ARMV8_EVENT_ATTR(cti_trigout6, ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT6),
ARMV8_EVENT_ATTR(cti_trigout7, ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT7),
ARMV8_EVENT_ATTR(ldst_align_lat, ARMV8_PMUV3_PERFCTR_LDST_ALIGN_LAT),
ARMV8_EVENT_ATTR(ld_align_lat, ARMV8_PMUV3_PERFCTR_LD_ALIGN_LAT),
ARMV8_EVENT_ATTR(st_align_lat, ARMV8_PMUV3_PERFCTR_ST_ALIGN_LAT),
@@ -646,7 +646,8 @@ long set_tagged_addr_ctrl(struct task_struct *task, unsigned long arg)
return -EINVAL;

if (system_supports_mte())
valid_mask |= PR_MTE_TCF_MASK | PR_MTE_TAG_MASK;
valid_mask |= PR_MTE_TCF_SYNC | PR_MTE_TCF_ASYNC \
| PR_MTE_TAG_MASK;

if (arg & ~valid_mask)
return -EINVAL;

@@ -233,17 +233,20 @@ static void install_bp_hardening_cb(bp_hardening_cb_t fn)
__this_cpu_write(bp_hardening_data.slot, HYP_VECTOR_SPECTRE_DIRECT);
}

static void call_smc_arch_workaround_1(void)
/* Called during entry so must be noinstr */
static noinstr void call_smc_arch_workaround_1(void)
{
arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
}

static void call_hvc_arch_workaround_1(void)
/* Called during entry so must be noinstr */
static noinstr void call_hvc_arch_workaround_1(void)
{
arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
}

static void qcom_link_stack_sanitisation(void)
/* Called during entry so must be noinstr */
static noinstr void qcom_link_stack_sanitisation(void)
{
u64 tmp;

@@ -11,7 +11,6 @@
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/signal.h>
#include <linux/personality.h>
#include <linux/freezer.h>
#include <linux/stddef.h>
#include <linux/uaccess.h>
@@ -577,10 +576,12 @@ static int setup_sigframe_layout(struct rt_sigframe_user_layout *user,
{
int err;

if (system_supports_fpsimd()) {
err = sigframe_alloc(user, &user->fpsimd_offset,
sizeof(struct fpsimd_context));
if (err)
return err;
}

/* fault information, if valid */
if (add_all || current->thread.fault_code) {
@@ -9,7 +9,6 @@

#include <linux/compat.h>
#include <linux/cpufeature.h>
#include <linux/personality.h>
#include <linux/sched.h>
#include <linux/sched/signal.h>
#include <linux/slab.h>
@@ -9,7 +9,6 @@
#include <linux/bug.h>
#include <linux/context_tracking.h>
#include <linux/signal.h>
#include <linux/personality.h>
#include <linux/kallsyms.h>
#include <linux/kprobes.h>
#include <linux/spinlock.h>
@@ -1867,6 +1867,7 @@ static int kvm_hyp_init_protection(u32 hyp_va_bits)
kvm_nvhe_sym(id_aa64pfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1);
kvm_nvhe_sym(id_aa64isar0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64ISAR0_EL1);
kvm_nvhe_sym(id_aa64isar1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64ISAR1_EL1);
kvm_nvhe_sym(id_aa64isar2_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64ISAR2_EL1);
kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1);
@@ -174,9 +174,9 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)

/* Valid trap. Switch the context: */
if (has_vhe()) {
reg = CPACR_EL1_FPEN;
reg = CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN;
if (sve_guest)
reg |= CPACR_EL1_ZEN;
reg |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN;

sysreg_clear_set(cpacr_el1, 0, reg);
} else {
@@ -192,6 +192,11 @@
ARM64_FEATURE_MASK(ID_AA64ISAR1_I8MM) \
)

#define PVM_ID_AA64ISAR2_ALLOW (\
ARM64_FEATURE_MASK(ID_AA64ISAR2_GPA3) | \
ARM64_FEATURE_MASK(ID_AA64ISAR2_APA3) \
)

u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
bool kvm_handle_pvm_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code);
bool kvm_handle_pvm_restricted(struct kvm_vcpu *vcpu, u64 *exit_code);
@@ -7,7 +7,8 @@
#include <asm/assembler.h>
#include <asm/alternative.h>

SYM_FUNC_START_PI(dcache_clean_inval_poc)
SYM_FUNC_START(__pi_dcache_clean_inval_poc)
dcache_by_line_op civac, sy, x0, x1, x2, x3
ret
SYM_FUNC_END_PI(dcache_clean_inval_poc)
SYM_FUNC_END(__pi_dcache_clean_inval_poc)
SYM_FUNC_ALIAS(dcache_clean_inval_poc, __pi_dcache_clean_inval_poc)
@@ -22,6 +22,7 @@ u64 id_aa64pfr0_el1_sys_val;
u64 id_aa64pfr1_el1_sys_val;
u64 id_aa64isar0_el1_sys_val;
u64 id_aa64isar1_el1_sys_val;
u64 id_aa64isar2_el1_sys_val;
u64 id_aa64mmfr0_el1_sys_val;
u64 id_aa64mmfr1_el1_sys_val;
u64 id_aa64mmfr2_el1_sys_val;
@@ -183,6 +184,17 @@ static u64 get_pvm_id_aa64isar1(const struct kvm_vcpu *vcpu)
return id_aa64isar1_el1_sys_val & allow_mask;
}

static u64 get_pvm_id_aa64isar2(const struct kvm_vcpu *vcpu)
{
u64 allow_mask = PVM_ID_AA64ISAR2_ALLOW;

if (!vcpu_has_ptrauth(vcpu))
allow_mask &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR2_APA3) |
ARM64_FEATURE_MASK(ID_AA64ISAR2_GPA3));

return id_aa64isar2_el1_sys_val & allow_mask;
}

static u64 get_pvm_id_aa64mmfr0(const struct kvm_vcpu *vcpu)
{
u64 set_mask;
@@ -225,6 +237,8 @@ u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
return get_pvm_id_aa64isar0(vcpu);
case SYS_ID_AA64ISAR1_EL1:
return get_pvm_id_aa64isar1(vcpu);
case SYS_ID_AA64ISAR2_EL1:
return get_pvm_id_aa64isar2(vcpu);
case SYS_ID_AA64MMFR0_EL1:
return get_pvm_id_aa64mmfr0(vcpu);
case SYS_ID_AA64MMFR1_EL1:
@@ -41,7 +41,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)

val = read_sysreg(cpacr_el1);
val |= CPACR_EL1_TTA;
val &= ~CPACR_EL1_ZEN;
val &= ~(CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN);

/*
 * With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to
@@ -56,9 +56,9 @@ static void __activate_traps(struct kvm_vcpu *vcpu)

if (update_fp_enabled(vcpu)) {
if (vcpu_has_sve(vcpu))
val |= CPACR_EL1_ZEN;
val |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN;
} else {
val &= ~CPACR_EL1_FPEN;
val &= ~(CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN);
__activate_traps_fpsimd32(vcpu);
}

@@ -1097,6 +1097,11 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
ARM64_FEATURE_MASK(ID_AA64ISAR1_GPA) |
ARM64_FEATURE_MASK(ID_AA64ISAR1_GPI));
break;
case SYS_ID_AA64ISAR2_EL1:
if (!vcpu_has_ptrauth(vcpu))
val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR2_APA3) |
ARM64_FEATURE_MASK(ID_AA64ISAR2_GPA3));
break;
case SYS_ID_AA64DFR0_EL1:
/* Limit debug to ARMv8.0 */
val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER);
@@ -14,7 +14,7 @@
* Parameters:
* x0 - dest
*/
SYM_FUNC_START_PI(clear_page)
SYM_FUNC_START(__pi_clear_page)
mrs x1, dczid_el0
tbnz x1, #4, 2f /* Branch if DC ZVA is prohibited */
and w1, w1, #0xf
@@ -35,5 +35,6 @@ SYM_FUNC_START_PI(clear_page)
tst x0, #(PAGE_SIZE - 1)
b.ne 2b
ret
SYM_FUNC_END_PI(clear_page)
SYM_FUNC_END(__pi_clear_page)
SYM_FUNC_ALIAS(clear_page, __pi_clear_page)
EXPORT_SYMBOL(clear_page)
@@ -17,7 +17,7 @@
* x0 - dest
* x1 - src
*/
SYM_FUNC_START_PI(copy_page)
SYM_FUNC_START(__pi_copy_page)
alternative_if ARM64_HAS_NO_HW_PREFETCH
// Prefetch three cache lines ahead.
prfm pldl1strm, [x1, #128]
@@ -75,5 +75,6 @@ alternative_else_nop_endif
stnp x16, x17, [x0, #112 - 256]

ret
SYM_FUNC_END_PI(copy_page)
SYM_FUNC_END(__pi_copy_page)
SYM_FUNC_ALIAS(copy_page, __pi_copy_page)
EXPORT_SYMBOL(copy_page)
@ -578,10 +578,16 @@ u32 aarch64_insn_gen_load_store_ex(enum aarch64_insn_register reg,

	switch (type) {
	case AARCH64_INSN_LDST_LOAD_EX:
	case AARCH64_INSN_LDST_LOAD_ACQ_EX:
		insn = aarch64_insn_get_load_ex_value();
		if (type == AARCH64_INSN_LDST_LOAD_ACQ_EX)
			insn |= BIT(15);
		break;
	case AARCH64_INSN_LDST_STORE_EX:
	case AARCH64_INSN_LDST_STORE_REL_EX:
		insn = aarch64_insn_get_store_ex_value();
		if (type == AARCH64_INSN_LDST_STORE_REL_EX)
			insn |= BIT(15);
		break;
	default:
		pr_err("%s: unknown load/store exclusive encoding %d\n", __func__, type);
@ -603,12 +609,65 @@ u32 aarch64_insn_gen_load_store_ex(enum aarch64_insn_register reg,
					    state);
}

u32 aarch64_insn_gen_ldadd(enum aarch64_insn_register result,
#ifdef CONFIG_ARM64_LSE_ATOMICS
static u32 aarch64_insn_encode_ldst_order(enum aarch64_insn_mem_order_type type,
					  u32 insn)
{
	u32 order;

	switch (type) {
	case AARCH64_INSN_MEM_ORDER_NONE:
		order = 0;
		break;
	case AARCH64_INSN_MEM_ORDER_ACQ:
		order = 2;
		break;
	case AARCH64_INSN_MEM_ORDER_REL:
		order = 1;
		break;
	case AARCH64_INSN_MEM_ORDER_ACQREL:
		order = 3;
		break;
	default:
		pr_err("%s: unknown mem order %d\n", __func__, type);
		return AARCH64_BREAK_FAULT;
	}

	insn &= ~GENMASK(23, 22);
	insn |= order << 22;

	return insn;
}
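
For the LSE load/store atomics handled above, the acquire/release choice lives in a two-bit field at bits [23:22] of the opcode, with the values from the switch (none = 0, release = 1, acquire = 2, both = 3). A minimal sketch of that step, assuming 0xb8200000 as the 32-bit LDADD base opcode:

#include <stdint.h>

/* Toy model of aarch64_insn_encode_ldst_order() above. */
static uint32_t encode_ldst_order(uint32_t insn, uint32_t order)
{
	insn &= ~(3u << 22);	/* insn &= ~GENMASK(23, 22); */
	insn |= order << 22;	/* insn |= order << 22; */
	return insn;
}

/* e.g. encode_ldst_order(0xb8200000, 3) yields an LDADDAL encoding. */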

u32 aarch64_insn_gen_atomic_ld_op(enum aarch64_insn_register result,
				  enum aarch64_insn_register address,
				  enum aarch64_insn_register value,
				  enum aarch64_insn_size_type size)
				  enum aarch64_insn_size_type size,
				  enum aarch64_insn_mem_atomic_op op,
				  enum aarch64_insn_mem_order_type order)
{
	u32 insn = aarch64_insn_get_ldadd_value();
	u32 insn;

	switch (op) {
	case AARCH64_INSN_MEM_ATOMIC_ADD:
		insn = aarch64_insn_get_ldadd_value();
		break;
	case AARCH64_INSN_MEM_ATOMIC_CLR:
		insn = aarch64_insn_get_ldclr_value();
		break;
	case AARCH64_INSN_MEM_ATOMIC_EOR:
		insn = aarch64_insn_get_ldeor_value();
		break;
	case AARCH64_INSN_MEM_ATOMIC_SET:
		insn = aarch64_insn_get_ldset_value();
		break;
	case AARCH64_INSN_MEM_ATOMIC_SWP:
		insn = aarch64_insn_get_swp_value();
		break;
	default:
		pr_err("%s: unimplemented mem atomic op %d\n", __func__, op);
		return AARCH64_BREAK_FAULT;
	}

	switch (size) {
	case AARCH64_INSN_SIZE_32:
@ -621,6 +680,8 @@ u32 aarch64_insn_gen_ldadd(enum aarch64_insn_register result,

	insn = aarch64_insn_encode_ldst_size(size, insn);

	insn = aarch64_insn_encode_ldst_order(order, insn);

	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RT, insn,
					    result);

@ -631,18 +692,69 @@ u32 aarch64_insn_gen_ldadd(enum aarch64_insn_register result,
					    value);
}
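
A hypothetical caller of the widened helper, following the new signature visible in the hunk above (the register choices here are purely illustrative):

/* Build a 64-bit LDADDAL: atomic add with acquire+release semantics. */
u32 insn = aarch64_insn_gen_atomic_ld_op(AARCH64_INSN_REG_0,	/* result */
					 AARCH64_INSN_REG_1,	/* address */
					 AARCH64_INSN_REG_2,	/* value */
					 AARCH64_INSN_SIZE_64,
					 AARCH64_INSN_MEM_ATOMIC_ADD,
					 AARCH64_INSN_MEM_ORDER_ACQREL);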

u32 aarch64_insn_gen_stadd(enum aarch64_insn_register address,
			   enum aarch64_insn_register value,
			   enum aarch64_insn_size_type size)
static u32 aarch64_insn_encode_cas_order(enum aarch64_insn_mem_order_type type,
					 u32 insn)
{
	/*
	 * STADD is simply encoded as an alias for LDADD with XZR as
	 * the destination register.
	 */
	return aarch64_insn_gen_ldadd(AARCH64_INSN_REG_ZR, address,
				      value, size);
	u32 order;

	switch (type) {
	case AARCH64_INSN_MEM_ORDER_NONE:
		order = 0;
		break;
	case AARCH64_INSN_MEM_ORDER_ACQ:
		order = BIT(22);
		break;
	case AARCH64_INSN_MEM_ORDER_REL:
		order = BIT(15);
		break;
	case AARCH64_INSN_MEM_ORDER_ACQREL:
		order = BIT(15) | BIT(22);
		break;
	default:
		pr_err("%s: unknown mem order %d\n", __func__, type);
		return AARCH64_BREAK_FAULT;
	}

	insn &= ~(BIT(15) | BIT(22));
	insn |= order;

	return insn;
}
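
Note the contrast with the field-based encoding used for the LDADD family: for CAS the acquire and release flags are two independent bits, L at bit 22 and o0 at bit 15, which is exactly what the switch above writes. A one-function C model:

#include <stdint.h>

/* Toy model of aarch64_insn_encode_cas_order() above. */
static uint32_t encode_cas_order(uint32_t insn, int acquire, int release)
{
	insn &= ~((1u << 15) | (1u << 22));
	if (acquire)
		insn |= 1u << 22;	/* CASA... */
	if (release)
		insn |= 1u << 15;	/* CASL... */
	return insn;
}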

u32 aarch64_insn_gen_cas(enum aarch64_insn_register result,
			 enum aarch64_insn_register address,
			 enum aarch64_insn_register value,
			 enum aarch64_insn_size_type size,
			 enum aarch64_insn_mem_order_type order)
{
	u32 insn;

	switch (size) {
	case AARCH64_INSN_SIZE_32:
	case AARCH64_INSN_SIZE_64:
		break;
	default:
		pr_err("%s: unimplemented size encoding %d\n", __func__, size);
		return AARCH64_BREAK_FAULT;
	}

	insn = aarch64_insn_get_cas_value();

	insn = aarch64_insn_encode_ldst_size(size, insn);

	insn = aarch64_insn_encode_cas_order(order, insn);

	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RT, insn,
					    result);

	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn,
					    address);

	return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RS, insn,
					    value);
}
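
A sketch of how a JIT might call the new generator, again with illustrative register choices only:

/* A 64-bit compare-and-swap with both acquire and release semantics. */
u32 cas = aarch64_insn_gen_cas(AARCH64_INSN_REG_0, AARCH64_INSN_REG_1,
			       AARCH64_INSN_REG_2, AARCH64_INSN_SIZE_64,
			       AARCH64_INSN_MEM_ORDER_ACQREL);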
#endif

static u32 aarch64_insn_encode_prfm_imm(enum aarch64_insn_prfm_type type,
					enum aarch64_insn_prfm_target target,
					enum aarch64_insn_prfm_policy policy,
@ -1379,7 +1491,7 @@ static u32 aarch64_encode_immediate(u64 imm,
		 * Compute the rotation to get a continuous set of
		 * ones, with the first bit set at position 0
		 */
		ror = fls(~imm);
		ror = fls64(~imm);
	}
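
The one-character fix above matters because imm is a u64: fls() only considers the low 32 bits of its int argument, so for immediates whose inverted value has set bits only in the upper half the computed rotation was simply wrong, while fls64() sees the whole value. A standalone C illustration of the difference:

#include <stdint.h>

/* Sketch: fls-style helpers returning the 1-based index of the most
 * significant set bit, 0 when no bit is set. */
static int toy_fls(uint32_t x)   { return x ? 32 - __builtin_clz(x) : 0; }
static int toy_fls64(uint64_t x) { return x ? 64 - __builtin_clzll(x) : 0; }

/* For imm = 0x00000000ffffffff, ~imm = 0xffffffff00000000:
 * toy_fls((uint32_t)~imm) == 0, losing the rotation entirely, while
 * toy_fls64(~imm) == 64, which is what the encoder needs. */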

	/*
@ -1456,3 +1568,48 @@ u32 aarch64_insn_gen_extr(enum aarch64_insn_variant variant,
	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, Rn);
	return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RM, insn, Rm);
}

u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type)
{
	u32 opt;
	u32 insn;

	switch (type) {
	case AARCH64_INSN_MB_SY:
		opt = 0xf;
		break;
	case AARCH64_INSN_MB_ST:
		opt = 0xe;
		break;
	case AARCH64_INSN_MB_LD:
		opt = 0xd;
		break;
	case AARCH64_INSN_MB_ISH:
		opt = 0xb;
		break;
	case AARCH64_INSN_MB_ISHST:
		opt = 0xa;
		break;
	case AARCH64_INSN_MB_ISHLD:
		opt = 0x9;
		break;
	case AARCH64_INSN_MB_NSH:
		opt = 0x7;
		break;
	case AARCH64_INSN_MB_NSHST:
		opt = 0x6;
		break;
	case AARCH64_INSN_MB_NSHLD:
		opt = 0x5;
		break;
	default:
		pr_err("%s: unknown dmb type %d\n", __func__, type);
		return AARCH64_BREAK_FAULT;
	}

	insn = aarch64_insn_get_dmb_value();
	insn &= ~GENMASK(11, 8);
	insn |= (opt << 8);

	return insn;
}
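
The opt values above are the CRm "option" field of the DMB instruction, placed into bits [11:8] of the opcode by the GENMASK/shift pair. Hypothetical usage, e.g. from a JIT that needs an inner-shareable full barrier:

/* DMB ISH, the barrier conventionally paired with smp_mb(). */
u32 dmb_ish = aarch64_insn_gen_dmb(AARCH64_INSN_MB_ISH);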

@ -38,7 +38,7 @@

	.p2align 4
	nop
SYM_FUNC_START_WEAK_PI(memchr)
SYM_FUNC_START(__pi_memchr)
	and	chrin, chrin, #0xff
	lsr	wordcnt, cntin, #3
	cbz	wordcnt, L(byte_loop)
@ -71,5 +71,6 @@ CPU_LE( rev	tmp, tmp)

L(not_found):
	mov	result, #0
	ret
SYM_FUNC_END_PI(memchr)
SYM_FUNC_END(__pi_memchr)
SYM_FUNC_ALIAS_WEAK(memchr, __pi_memchr)
EXPORT_SYMBOL_NOKASAN(memchr)

@ -32,7 +32,7 @@
#define tmp1	x7
#define tmp2	x8

SYM_FUNC_START_WEAK_PI(memcmp)
SYM_FUNC_START(__pi_memcmp)
	subs	limit, limit, 8
	b.lo	L(less8)

@ -134,6 +134,6 @@ L(byte_loop):
	b.eq	L(byte_loop)
	sub	result, data1w, data2w
	ret

SYM_FUNC_END_PI(memcmp)
SYM_FUNC_END(__pi_memcmp)
SYM_FUNC_ALIAS_WEAK(memcmp, __pi_memcmp)
EXPORT_SYMBOL_NOKASAN(memcmp)

@ -57,10 +57,7 @@
   The loop tail is handled by always copying 64 bytes from the end.
*/

SYM_FUNC_START_ALIAS(__memmove)
SYM_FUNC_START_WEAK_ALIAS_PI(memmove)
SYM_FUNC_START_ALIAS(__memcpy)
SYM_FUNC_START_WEAK_PI(memcpy)
SYM_FUNC_START(__pi_memcpy)
	add	srcend, src, count
	add	dstend, dstin, count
	cmp	count, 128
@ -241,12 +238,16 @@ L(copy64_from_start):
	stp	B_l, B_h, [dstin, 16]
	stp	C_l, C_h, [dstin]
	ret
SYM_FUNC_END(__pi_memcpy)

SYM_FUNC_END_PI(memcpy)
EXPORT_SYMBOL(memcpy)
SYM_FUNC_END_ALIAS(__memcpy)
SYM_FUNC_ALIAS(__memcpy, __pi_memcpy)
EXPORT_SYMBOL(__memcpy)
SYM_FUNC_END_ALIAS_PI(memmove)
EXPORT_SYMBOL(memmove)
SYM_FUNC_END_ALIAS(__memmove)
SYM_FUNC_ALIAS_WEAK(memcpy, __memcpy)
EXPORT_SYMBOL(memcpy)

SYM_FUNC_ALIAS(__pi_memmove, __pi_memcpy)

SYM_FUNC_ALIAS(__memmove, __pi_memmove)
EXPORT_SYMBOL(__memmove)
SYM_FUNC_ALIAS_WEAK(memmove, __memmove)
EXPORT_SYMBOL(memmove)

@ -42,8 +42,7 @@ dst		.req	x8
tmp3w		.req	w9
tmp3		.req	x9

SYM_FUNC_START_ALIAS(__memset)
SYM_FUNC_START_WEAK_PI(memset)
SYM_FUNC_START(__pi_memset)
	mov	dst, dstin	/* Preserve return value. */
	and	A_lw, val, #255
	orr	A_lw, A_lw, A_lw, lsl #8
@ -202,7 +201,10 @@ SYM_FUNC_START_WEAK_PI(memset)
	ands	count, count, zva_bits_x
	b.ne	.Ltail_maybe_long
	ret
SYM_FUNC_END_PI(memset)
EXPORT_SYMBOL(memset)
SYM_FUNC_END_ALIAS(__memset)
SYM_FUNC_END(__pi_memset)

SYM_FUNC_ALIAS(__memset, __pi_memset)
EXPORT_SYMBOL(__memset)

SYM_FUNC_ALIAS_WEAK(memset, __pi_memset)
EXPORT_SYMBOL(memset)

@ -134,7 +134,7 @@ SYM_FUNC_END(mte_copy_tags_to_user)
/*
 * Save the tags in a page
 *   x0 - page address
 *   x1 - tag storage
 *   x1 - tag storage, MTE_PAGE_TAG_STORAGE bytes
 */
SYM_FUNC_START(mte_save_page_tags)
	multitag_transfer_size x7, x5
@ -158,7 +158,7 @@ SYM_FUNC_END(mte_save_page_tags)
/*
 * Restore the tags in a page
 *   x0 - page address
 *   x1 - tag storage
 *   x1 - tag storage, MTE_PAGE_TAG_STORAGE bytes
 */
SYM_FUNC_START(mte_restore_page_tags)
	multitag_transfer_size x7, x5

@ -18,7 +18,7 @@
 * Returns:
 *	x0 - address of first occurrence of 'c' or 0
 */
SYM_FUNC_START_WEAK(strchr)
SYM_FUNC_START(__pi_strchr)
	and	w1, w1, #0xff
1:	ldrb	w2, [x0], #1
	cmp	w2, w1
@ -28,5 +28,7 @@ SYM_FUNC_START_WEAK(strchr)
	cmp	w2, w1
	csel	x0, x0, xzr, eq
	ret
SYM_FUNC_END(strchr)
SYM_FUNC_END(__pi_strchr)

SYM_FUNC_ALIAS_WEAK(strchr, __pi_strchr)
EXPORT_SYMBOL_NOKASAN(strchr)

@ -1,9 +1,9 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * Copyright (c) 2012-2021, Arm Limited.
 * Copyright (c) 2012-2022, Arm Limited.
 *
 * Adapted from the original at:
 * https://github.com/ARM-software/optimized-routines/blob/afd6244a1f8d9229/string/aarch64/strcmp.S
 * https://github.com/ARM-software/optimized-routines/blob/189dfefe37d54c5b/string/aarch64/strcmp.S
 */

#include <linux/linkage.h>
@ -11,166 +11,180 @@

/* Assumptions:
 *
 * ARMv8-a, AArch64
 * ARMv8-a, AArch64.
 * MTE compatible.
 */

#define L(label) .L ## label

#define REP8_01 0x0101010101010101
#define REP8_7f 0x7f7f7f7f7f7f7f7f
#define REP8_80 0x8080808080808080

/* Parameters and result. */
#define src1		x0
#define src2		x1
#define result		x0

/* Internal variables. */
#define data1		x2
#define data1w		w2
#define data2		x3
#define data2w		w3
#define has_nul		x4
#define diff		x5
#define off1		x5
#define syndrome	x6
#define tmp1		x7
#define tmp2		x8
#define tmp3		x9
#define zeroones	x10
#define pos		x11
#define tmp		x6
#define data3		x7
#define zeroones	x8
#define shift		x9
#define off2		x10

	/* Start of performance-critical section  -- one 64B cache line. */
	.align 6
SYM_FUNC_START_WEAK_PI(strcmp)
	eor	tmp1, src1, src2
	mov	zeroones, #REP8_01
	tst	tmp1, #7
	b.ne	L(misaligned8)
	ands	tmp1, src1, #7
	b.ne	L(mutual_align)
	/* NUL detection works on the principle that (X - 1) & (~X) & 0x80
/* On big-endian early bytes are at MSB and on little-endian LSB.
   LS_FW means shifting towards early bytes. */
#ifdef __AARCH64EB__
# define LS_FW lsl
#else
# define LS_FW lsr
#endif

/* NUL detection works on the principle that (X - 1) & (~X) & 0x80
   (=> (X - 1) & ~(X | 0x7f)) is non-zero iff a byte is zero, and
   can be done in parallel across the entire word.  */
L(loop_aligned):
	ldr	data1, [src1], #8
	ldr	data2, [src2], #8
L(start_realigned):
	sub	tmp1, data1, zeroones
	orr	tmp2, data1, #REP8_7f
	eor	diff, data1, data2	/* Non-zero if differences found.  */
	bic	has_nul, tmp1, tmp2	/* Non-zero if NUL terminator.  */
	orr	syndrome, diff, has_nul
	cbz	syndrome, L(loop_aligned)
	/* End of performance-critical section  -- one 64B cache line.  */
   can be done in parallel across the entire word.
   Since carry propagation makes 0x1 bytes before a NUL byte appear
   NUL too in big-endian, byte-reverse the data before the NUL check.  */
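
The bit trick in that comment is worth seeing outside assembly. A minimal C sketch of the whole-word NUL test, exactly as the strings code uses it:

#include <stdint.h>

#define REP8_01 0x0101010101010101ull
#define REP8_7F 0x7f7f7f7f7f7f7f7full

/* Non-zero iff any byte of x is zero: (x - 0x01..01) & ~(x | 0x7f..7f)
 * sets bit 7 of each byte that was zero, and only of those bytes
 * (barring the big-endian carry caveat the comment above describes). */
static inline uint64_t has_nul_byte(uint64_t x)
{
	return (x - REP8_01) & ~(x | REP8_7F);
}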


SYM_FUNC_START(__pi_strcmp)
	sub	off2, src2, src1
	mov	zeroones, REP8_01
	and	tmp, src1, 7
	tst	off2, 7
	b.ne	L(misaligned8)
	cbnz	tmp, L(mutual_align)

	.p2align 4

L(loop_aligned):
	ldr	data2, [src1, off2]
	ldr	data1, [src1], 8
L(start_realigned):
#ifdef __AARCH64EB__
	rev	tmp, data1
	sub	has_nul, tmp, zeroones
	orr	tmp, tmp, REP8_7f
#else
	sub	has_nul, data1, zeroones
	orr	tmp, data1, REP8_7f
#endif
	bics	has_nul, has_nul, tmp	/* Non-zero if NUL terminator.  */
	ccmp	data1, data2, 0, eq
	b.eq	L(loop_aligned)
#ifdef __AARCH64EB__
	rev	has_nul, has_nul
#endif
	eor	diff, data1, data2
	orr	syndrome, diff, has_nul
L(end):
#ifndef __AARCH64EB__
	rev	syndrome, syndrome
	rev	data1, data1
	/* The MS-non-zero bit of the syndrome marks either the first bit
	   that is different, or the top bit of the first zero byte.
	   Shifting left now will bring the critical information into the
	   top bits.  */
	clz	pos, syndrome
	rev	data2, data2
	lsl	data1, data1, pos
	lsl	data2, data2, pos
	/* But we need to zero-extend (char is unsigned) the value and then
	   perform a signed 32-bit subtraction.  */
	lsr	data1, data1, #56
	sub	result, data1, data2, lsr #56
	ret
#else
	/* For big-endian we cannot use the trick with the syndrome value
	   as carry-propagation can corrupt the upper bits if the trailing
	   bytes in the string contain 0x01.  */
	/* However, if there is no NUL byte in the dword, we can generate
	   the result directly.  We can't just subtract the bytes as the
	   MSB might be significant.  */
	cbnz	has_nul, 1f
	cmp	data1, data2
	cset	result, ne
	cneg	result, result, lo
	ret
1:
	/* Re-compute the NUL-byte detection, using a byte-reversed value.  */
	rev	tmp3, data1
	sub	tmp1, tmp3, zeroones
	orr	tmp2, tmp3, #REP8_7f
	bic	has_nul, tmp1, tmp2
	rev	has_nul, has_nul
	orr	syndrome, diff, has_nul
	clz	pos, syndrome
	/* The MS-non-zero bit of the syndrome marks either the first bit
	   that is different, or the top bit of the first zero byte.
#endif
	clz	shift, syndrome
	/* The most-significant-non-zero bit of the syndrome marks either the
	   first bit that is different, or the top bit of the first zero byte.
	   Shifting left now will bring the critical information into the
	   top bits.  */
	lsl	data1, data1, pos
	lsl	data2, data2, pos
	lsl	data1, data1, shift
	lsl	data2, data2, shift
	/* But we need to zero-extend (char is unsigned) the value and then
	   perform a signed 32-bit subtraction.  */
	lsr	data1, data1, #56
	sub	result, data1, data2, lsr #56
	lsr	data1, data1, 56
	sub	result, data1, data2, lsr 56
	ret
#endif
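
The little-endian L(end) path above compresses into a few lines of C. This is a sketch of the idea, not the kernel code; it assumes syndrome is non-zero, as it is whenever L(end) is reached:

#include <stdint.h>

static int strcmp_tail(uint64_t data1, uint64_t data2, uint64_t syndrome)
{
	int shift;

	syndrome = __builtin_bswap64(syndrome);	/* rev syndrome, syndrome */
	data1 = __builtin_bswap64(data1);	/* rev data1, data1 */
	data2 = __builtin_bswap64(data2);	/* rev data2, data2 */
	shift = __builtin_clzll(syndrome);	/* clz shift, syndrome */
	data1 <<= shift;			/* lsl data1, data1, shift */
	data2 <<= shift;			/* lsl data2, data2, shift */
	/* Zero-extend the top (first differing or NUL) byte of each word,
	 * then a signed subtraction gives the classic strcmp result. */
	return (int)(data1 >> 56) - (int)(data2 >> 56);
}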

	.p2align 4

L(mutual_align):
	/* Sources are mutually aligned, but are not currently at an
	   alignment boundary.  Round down the addresses and then mask off
	   the bytes that preceed the start point.  */
	bic	src1, src1, #7
	bic	src2, src2, #7
	lsl	tmp1, tmp1, #3		/* Bytes beyond alignment -> bits.  */
	ldr	data1, [src1], #8
	neg	tmp1, tmp1		/* Bits to alignment -64.  */
	ldr	data2, [src2], #8
	mov	tmp2, #~0
#ifdef __AARCH64EB__
	/* Big-endian.  Early bytes are at MSB.  */
	lsl	tmp2, tmp2, tmp1	/* Shift (tmp1 & 63).  */
#else
	/* Little-endian.  Early bytes are at LSB.  */
	lsr	tmp2, tmp2, tmp1	/* Shift (tmp1 & 63).  */
#endif
	orr	data1, data1, tmp2
	orr	data2, data2, tmp2
	   the bytes that precede the start point.  */
	bic	src1, src1, 7
	ldr	data2, [src1, off2]
	ldr	data1, [src1], 8
	neg	shift, src2, lsl 3	/* Bits to alignment -64.  */
	mov	tmp, -1
	LS_FW	tmp, tmp, shift
	orr	data1, data1, tmp
	orr	data2, data2, tmp
	b	L(start_realigned)

L(misaligned8):
	/* Align SRC1 to 8 bytes and then compare 8 bytes at a time, always
	   checking to make sure that we don't access beyond page boundary in
	   SRC2.  */
	tst	src1, #7
	b.eq	L(loop_misaligned)
	   checking to make sure that we don't access beyond the end of SRC2.  */
	cbz	tmp, L(src1_aligned)
L(do_misaligned):
	ldrb	data1w, [src1], #1
	ldrb	data2w, [src2], #1
	cmp	data1w, #1
	ccmp	data1w, data2w, #0, cs	/* NZCV = 0b0000.  */
	ldrb	data1w, [src1], 1
	ldrb	data2w, [src2], 1
	cmp	data1w, 0
	ccmp	data1w, data2w, 0, ne	/* NZCV = 0b0000.  */
	b.ne	L(done)
	tst	src1, #7
	tst	src1, 7
	b.ne	L(do_misaligned)

L(loop_misaligned):
	/* Test if we are within the last dword of the end of a 4K page.  If
	   yes then jump back to the misaligned loop to copy a byte at a time.  */
	and	tmp1, src2, #0xff8
	eor	tmp1, tmp1, #0xff8
	cbz	tmp1, L(do_misaligned)
	ldr	data1, [src1], #8
	ldr	data2, [src2], #8
L(src1_aligned):
	neg	shift, src2, lsl 3
	bic	src2, src2, 7
	ldr	data3, [src2], 8
#ifdef __AARCH64EB__
	rev	data3, data3
#endif
	lsr	tmp, zeroones, shift
	orr	data3, data3, tmp
	sub	has_nul, data3, zeroones
	orr	tmp, data3, REP8_7f
	bics	has_nul, has_nul, tmp
	b.ne	L(tail)

	sub	tmp1, data1, zeroones
	orr	tmp2, data1, #REP8_7f
	eor	diff, data1, data2	/* Non-zero if differences found.  */
	bic	has_nul, tmp1, tmp2	/* Non-zero if NUL terminator.  */
	sub	off1, src2, src1

	.p2align 4

L(loop_unaligned):
	ldr	data3, [src1, off1]
	ldr	data2, [src1, off2]
#ifdef __AARCH64EB__
	rev	data3, data3
#endif
	sub	has_nul, data3, zeroones
	orr	tmp, data3, REP8_7f
	ldr	data1, [src1], 8
	bics	has_nul, has_nul, tmp
	ccmp	data1, data2, 0, eq
	b.eq	L(loop_unaligned)

	lsl	tmp, has_nul, shift
#ifdef __AARCH64EB__
	rev	tmp, tmp
#endif
	eor	diff, data1, data2
	orr	syndrome, diff, tmp
	cbnz	syndrome, L(end)
L(tail):
	ldr	data1, [src1]
	neg	shift, shift
	lsr	data2, data3, shift
	lsr	has_nul, has_nul, shift
#ifdef __AARCH64EB__
	rev	data2, data2
	rev	has_nul, has_nul
#endif
	eor	diff, data1, data2
	orr	syndrome, diff, has_nul
	cbz	syndrome, L(loop_misaligned)
	b	L(end)

L(done):
	sub	result, data1, data2
	ret

SYM_FUNC_END_PI(strcmp)
EXPORT_SYMBOL_NOHWKASAN(strcmp)
SYM_FUNC_END(__pi_strcmp)
SYM_FUNC_ALIAS_WEAK(strcmp, __pi_strcmp)
EXPORT_SYMBOL_NOKASAN(strcmp)
@ -79,7 +79,7 @@
	   whether the first fetch, which may be misaligned, crosses a page
	   boundary.  */

SYM_FUNC_START_WEAK_PI(strlen)
SYM_FUNC_START(__pi_strlen)
	and	tmp1, srcin, MIN_PAGE_SIZE - 1
	mov	zeroones, REP8_01
	cmp	tmp1, MIN_PAGE_SIZE - 16
@ -208,6 +208,6 @@ L(page_cross):
	csel	data1, data1, tmp4, eq
	csel	data2, data2, tmp2, eq
	b	L(page_cross_entry)

SYM_FUNC_END_PI(strlen)
SYM_FUNC_END(__pi_strlen)
SYM_FUNC_ALIAS_WEAK(strlen, __pi_strlen)
EXPORT_SYMBOL_NOKASAN(strlen)

@ -1,9 +1,9 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * Copyright (c) 2013-2021, Arm Limited.
 * Copyright (c) 2013-2022, Arm Limited.
 *
 * Adapted from the original at:
 * https://github.com/ARM-software/optimized-routines/blob/e823e3abf5f89ecb/string/aarch64/strncmp.S
 * https://github.com/ARM-software/optimized-routines/blob/189dfefe37d54c5b/string/aarch64/strncmp.S
 */

#include <linux/linkage.h>
@ -11,14 +11,14 @@

/* Assumptions:
 *
 * ARMv8-a, AArch64
 * ARMv8-a, AArch64.
 * MTE compatible.
 */

#define L(label) .L ## label

#define REP8_01 0x0101010101010101
#define REP8_7f 0x7f7f7f7f7f7f7f7f
#define REP8_80 0x8080808080808080

/* Parameters and result. */
#define src1		x0
@ -39,12 +39,26 @@
#define tmp3		x10
#define zeroones	x11
#define pos		x12
#define limit_wd	x13
#define mask		x14
#define endloop		x15
#define mask		x13
#define endloop		x14
#define count		mask
#define offset		pos
#define neg_offset	x15

SYM_FUNC_START_WEAK_PI(strncmp)
/* Define endian dependent shift operations.
   On big-endian early bytes are at MSB and on little-endian LSB.
   LS_FW means shifting towards early bytes.
   LS_BK means shifting towards later bytes.
 */
#ifdef __AARCH64EB__
#define LS_FW lsl
#define LS_BK lsr
#else
#define LS_FW lsr
#define LS_BK lsl
#endif

SYM_FUNC_START(__pi_strncmp)
	cbz	limit, L(ret0)
	eor	tmp1, src1, src2
	mov	zeroones, #REP8_01
@ -52,9 +66,6 @@ SYM_FUNC_START_WEAK_PI(strncmp)
	and	count, src1, #7
	b.ne	L(misaligned8)
	cbnz	count, L(mutual_align)
	/* Calculate the number of full and partial words -1.  */
	sub	limit_wd, limit, #1	/* limit != 0, so no underflow.  */
	lsr	limit_wd, limit_wd, #3	/* Convert to Dwords.  */

	/* NUL detection works on the principle that (X - 1) & (~X) & 0x80
	   (=> (X - 1) & ~(X | 0x7f)) is non-zero iff a byte is zero, and
@ -64,30 +75,45 @@ L(loop_aligned):
	ldr	data1, [src1], #8
	ldr	data2, [src2], #8
L(start_realigned):
	subs	limit_wd, limit_wd, #1
	subs	limit, limit, #8
	sub	tmp1, data1, zeroones
	orr	tmp2, data1, #REP8_7f
	eor	diff, data1, data2	/* Non-zero if differences found.  */
	csinv	endloop, diff, xzr, pl	/* Last Dword or differences.  */
	csinv	endloop, diff, xzr, hi	/* Last Dword or differences.  */
	bics	has_nul, tmp1, tmp2	/* Non-zero if NUL terminator.  */
	ccmp	endloop, #0, #0, eq
	b.eq	L(loop_aligned)
	/* End of main loop */

	/* Not reached the limit, must have found the end or a diff.  */
	tbz	limit_wd, #63, L(not_limit)

	/* Limit % 8 == 0 => all bytes significant.  */
	ands	limit, limit, #7
	b.eq	L(not_limit)

	lsl	limit, limit, #3	/* Bits -> bytes.  */
	mov	mask, #~0
#ifdef __AARCH64EB__
	lsr	mask, mask, limit
L(full_check):
#ifndef __AARCH64EB__
	orr	syndrome, diff, has_nul
	add	limit, limit, 8	/* Rewind limit to before last subs.  */
L(syndrome_check):
	/* Limit was reached.  Check if the NUL byte or the difference
	   is before the limit.  */
	rev	syndrome, syndrome
	rev	data1, data1
	clz	pos, syndrome
	rev	data2, data2
	lsl	data1, data1, pos
	cmp	limit, pos, lsr #3
	lsl	data2, data2, pos
	/* But we need to zero-extend (char is unsigned) the value and then
	   perform a signed 32-bit subtraction.  */
	lsr	data1, data1, #56
	sub	result, data1, data2, lsr #56
	csel	result, result, xzr, hi
	ret
#else
	lsl	mask, mask, limit
#endif
	/* Not reached the limit, must have found the end or a diff.  */
	tbz	limit, #63, L(not_limit)
	add	tmp1, limit, 8
	cbz	limit, L(not_limit)

	lsl	limit, tmp1, #3	/* Bits -> bytes.  */
	mov	mask, #~0
	lsr	mask, mask, limit
	bic	data1, data1, mask
	bic	data2, data2, mask

@ -95,25 +121,6 @@ L(start_realigned):
	orr	has_nul, has_nul, mask

L(not_limit):
	orr	syndrome, diff, has_nul

#ifndef __AARCH64EB__
	rev	syndrome, syndrome
	rev	data1, data1
	/* The MS-non-zero bit of the syndrome marks either the first bit
	   that is different, or the top bit of the first zero byte.
	   Shifting left now will bring the critical information into the
	   top bits.  */
	clz	pos, syndrome
	rev	data2, data2
	lsl	data1, data1, pos
	lsl	data2, data2, pos
	/* But we need to zero-extend (char is unsigned) the value and then
	   perform a signed 32-bit subtraction.  */
	lsr	data1, data1, #56
	sub	result, data1, data2, lsr #56
	ret
#else
	/* For big-endian we cannot use the trick with the syndrome value
	   as carry-propagation can corrupt the upper bits if the trailing
	   bytes in the string contain 0x01.  */
@ -134,10 +141,11 @@ L(not_limit):
	rev	has_nul, has_nul
	orr	syndrome, diff, has_nul
	clz	pos, syndrome
	/* The MS-non-zero bit of the syndrome marks either the first bit
	   that is different, or the top bit of the first zero byte.
	/* The most-significant-non-zero bit of the syndrome marks either the
	   first bit that is different, or the top bit of the first zero byte.
	   Shifting left now will bring the critical information into the
	   top bits.  */
L(end_quick):
	lsl	data1, data1, pos
	lsl	data2, data2, pos
	/* But we need to zero-extend (char is unsigned) the value and then
@ -159,22 +167,12 @@ L(mutual_align):
	neg	tmp3, count, lsl #3	/* 64 - bits(bytes beyond align).  */
	ldr	data2, [src2], #8
	mov	tmp2, #~0
	sub	limit_wd, limit, #1	/* limit != 0, so no underflow.  */
#ifdef __AARCH64EB__
	/* Big-endian.  Early bytes are at MSB.  */
	lsl	tmp2, tmp2, tmp3	/* Shift (count & 63).  */
#else
	/* Little-endian.  Early bytes are at LSB.  */
	lsr	tmp2, tmp2, tmp3	/* Shift (count & 63).  */
#endif
	and	tmp3, limit_wd, #7
	lsr	limit_wd, limit_wd, #3
	/* Adjust the limit. Only low 3 bits used, so overflow irrelevant.  */
	add	limit, limit, count
	add	tmp3, tmp3, count
	LS_FW	tmp2, tmp2, tmp3	/* Shift (count & 63).  */
	/* Adjust the limit and ensure it doesn't overflow.  */
	adds	limit, limit, count
	csinv	limit, limit, xzr, lo
	orr	data1, data1, tmp2
	orr	data2, data2, tmp2
	add	limit_wd, limit_wd, tmp3, lsr #3
	b	L(start_realigned)

	.p2align 4
@ -197,13 +195,11 @@ L(done):
	/* Align the SRC1 to a dword by doing a bytewise compare and then do
	   the dword loop.  */
L(try_misaligned_words):
	lsr	limit_wd, limit, #3
	cbz	count, L(do_misaligned)
	cbz	count, L(src1_aligned)

	neg	count, count
	and	count, count, #7
	sub	limit, limit, count
	lsr	limit_wd, limit, #3

L(page_end_loop):
	ldrb	data1w, [src1], #1
@ -214,48 +210,101 @@ L(page_end_loop):
	subs	count, count, #1
	b.hi	L(page_end_loop)

L(do_misaligned):
	/* Prepare ourselves for the next page crossing.  Unlike the aligned
	   loop, we fetch 1 less dword because we risk crossing bounds on
	   SRC2.  */
	mov	count, #8
	subs	limit_wd, limit_wd, #1
	b.lo	L(done_loop)
	/* The following diagram explains the comparison of misaligned strings.
	   The bytes are shown in natural order. For little-endian, it is
	   reversed in the registers. The "x" bytes are before the string.
	   The "|" separates data that is loaded at one time.
	   src1 | a a a a a a a a | b b b c c c c c | . . .
	   src2 | x x x x x a a a a a a a a b b b | c c c c c . . .

	   After shifting in each step, the data looks like this:
	   STEP_A STEP_B STEP_C
	   data1 a a a a a a a a b b b c c c c c b b b c c c c c
	   data2 a a a a a a a a b b b 0 0 0 0 0 0 0 0 c c c c c

	   The bytes with "0" are eliminated from the syndrome via mask.

	   Align SRC2 down to 16 bytes. This way we can read 16 bytes at a
	   time from SRC2. The comparison happens in 3 steps. After each step
	   the loop can exit, or read from SRC1 or SRC2. */
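
On little-endian, the shift-and-combine that the diagram describes can be modelled in a few lines of C. A sketch under the assumption that offset is strictly between 0 and 64 bits, which holds here because the two sources are mutually misaligned:

#include <stdint.h>

/* Rebuild 8 bytes of SRC2 from the two halves of an aligned 16-byte
 * fetch (tmp1, tmp2). LS_FW is lsr and LS_BK is lsl on little-endian. */
static uint64_t combine_src2(uint64_t tmp1, uint64_t tmp2,
			     unsigned int offset /* bits, 0 < offset < 64 */)
{
	uint64_t lo = tmp1 >> offset;		/* LS_FW data2, tmp1, offset */
	uint64_t hi = tmp2 << (64 - offset);	/* LS_BK tmp1, tmp2, neg_offset */
	return lo | hi;				/* orr data2, data2, tmp1 */
}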
L(src1_aligned):
	/* Calculate offset from 8 byte alignment to string start in bits. No
	   need to mask offset since shifts are ignoring upper bits.  */
	lsl	offset, src2, #3
	bic	src2, src2, #0xf
	mov	mask, -1
	neg	neg_offset, offset
	ldr	data1, [src1], #8
	ldp	tmp1, tmp2, [src2], #16
	LS_BK	mask, mask, neg_offset
	and	neg_offset, neg_offset, #63	/* Need actual value for cmp later.  */
	/* Skip the first compare if data in tmp1 is irrelevant.  */
	tbnz	offset, 6, L(misaligned_mid_loop)

L(loop_misaligned):
	and	tmp2, src2, #0xff8
	eor	tmp2, tmp2, #0xff8
	cbz	tmp2, L(page_end_loop)
	/* STEP_A: Compare full 8 bytes when there is enough data from SRC2.  */
	LS_FW	data2, tmp1, offset
	LS_BK	tmp1, tmp2, neg_offset
	subs	limit, limit, #8
	orr	data2, data2, tmp1	/* 8 bytes from SRC2 combined from two regs.  */
	sub	has_nul, data1, zeroones
	eor	diff, data1, data2	/* Non-zero if differences found.  */
	orr	tmp3, data1, #REP8_7f
	csinv	endloop, diff, xzr, hi	/* If limit, set to all ones.  */
	bic	has_nul, has_nul, tmp3	/* Non-zero if NUL byte found in SRC1.  */
	orr	tmp3, endloop, has_nul
	cbnz	tmp3, L(full_check)

	ldr	data1, [src1], #8
	ldr	data2, [src2], #8
	sub	tmp1, data1, zeroones
	orr	tmp2, data1, #REP8_7f
	eor	diff, data1, data2	/* Non-zero if differences found.  */
	bics	has_nul, tmp1, tmp2	/* Non-zero if NUL terminator.  */
	ccmp	diff, #0, #0, eq
	b.ne	L(not_limit)
	subs	limit_wd, limit_wd, #1
	b.pl	L(loop_misaligned)
L(misaligned_mid_loop):
	/* STEP_B: Compare first part of data1 to second part of tmp2.  */
	LS_FW	data2, tmp2, offset
#ifdef __AARCH64EB__
	/* For big-endian we do a byte reverse to avoid carry-propagation
	   problem described above. This way we can reuse the has_nul in the
	   next step and also use syndrome value trick at the end.  */
	rev	tmp3, data1
	#define data1_fixed tmp3
#else
	#define data1_fixed data1
#endif
	sub	has_nul, data1_fixed, zeroones
	orr	tmp3, data1_fixed, #REP8_7f
	eor	diff, data2, data1	/* Non-zero if differences found.  */
	bic	has_nul, has_nul, tmp3	/* Non-zero if NUL terminator.  */
#ifdef __AARCH64EB__
	rev	has_nul, has_nul
#endif
	cmp	limit, neg_offset, lsr #3
	orr	syndrome, diff, has_nul
	bic	syndrome, syndrome, mask	/* Ignore later bytes.  */
	csinv	tmp3, syndrome, xzr, hi	/* If limit, set to all ones.  */
	cbnz	tmp3, L(syndrome_check)

L(done_loop):
	/* We found a difference or a NULL before the limit was reached.  */
	and	limit, limit, #7
	cbz	limit, L(not_limit)
	/* Read the last word.  */
	sub	src1, src1, 8
	sub	src2, src2, 8
	ldr	data1, [src1, limit]
	ldr	data2, [src2, limit]
	sub	tmp1, data1, zeroones
	orr	tmp2, data1, #REP8_7f
	eor	diff, data1, data2	/* Non-zero if differences found.  */
	bics	has_nul, tmp1, tmp2	/* Non-zero if NUL terminator.  */
	ccmp	diff, #0, #0, eq
	b.ne	L(not_limit)
	/* STEP_C: Compare second part of data1 to first part of tmp1.  */
	ldp	tmp1, tmp2, [src2], #16
	cmp	limit, #8
	LS_BK	data2, tmp1, neg_offset
	eor	diff, data2, data1	/* Non-zero if differences found.  */
	orr	syndrome, diff, has_nul
	and	syndrome, syndrome, mask	/* Ignore earlier bytes.  */
	csinv	tmp3, syndrome, xzr, hi	/* If limit, set to all ones.  */
	cbnz	tmp3, L(syndrome_check)

	ldr	data1, [src1], #8
	sub	limit, limit, #8
	b	L(loop_misaligned)

#ifdef __AARCH64EB__
L(syndrome_check):
	clz	pos, syndrome
	cmp	pos, limit, lsl #3
	b.lo	L(end_quick)
#endif

L(ret0):
	mov	result, #0
	ret

SYM_FUNC_END_PI(strncmp)
EXPORT_SYMBOL_NOHWKASAN(strncmp)
SYM_FUNC_END(__pi_strncmp)
SYM_FUNC_ALIAS_WEAK(strncmp, __pi_strncmp)
EXPORT_SYMBOL_NOKASAN(strncmp)
@ -47,7 +47,7 @@ limit_wd	.req	x14
#define REP8_7f 0x7f7f7f7f7f7f7f7f
#define REP8_80 0x8080808080808080

SYM_FUNC_START_WEAK_PI(strnlen)
SYM_FUNC_START(__pi_strnlen)
	cbz	limit, .Lhit_limit
	mov	zeroones, #REP8_01
	bic	src, srcin, #15
@ -156,5 +156,7 @@ CPU_LE( lsr	tmp2, tmp2, tmp4 )	/* Shift (tmp1 & 63).  */
.Lhit_limit:
	mov	len, limit
	ret
SYM_FUNC_END_PI(strnlen)
SYM_FUNC_END(__pi_strnlen)

SYM_FUNC_ALIAS_WEAK(strnlen, __pi_strnlen)
EXPORT_SYMBOL_NOKASAN(strnlen)

@ -18,7 +18,7 @@
 * Returns:
 *	x0 - address of last occurrence of 'c' or 0
 */
SYM_FUNC_START_WEAK_PI(strrchr)
SYM_FUNC_START(__pi_strrchr)
	mov	x3, #0
	and	w1, w1, #0xff
1:	ldrb	w2, [x0], #1
@ -29,5 +29,6 @@ SYM_FUNC_START_WEAK_PI(strrchr)
	b	1b
2:	mov	x0, x3
	ret
SYM_FUNC_END_PI(strrchr)
SYM_FUNC_END(__pi_strrchr)
SYM_FUNC_ALIAS_WEAK(strrchr, __pi_strrchr)
EXPORT_SYMBOL_NOKASAN(strrchr)

@ -107,10 +107,11 @@ SYM_FUNC_END(icache_inval_pou)
 *	- start   - virtual start address of region
 *	- end     - virtual end address of region
 */
SYM_FUNC_START_PI(dcache_clean_inval_poc)
SYM_FUNC_START(__pi_dcache_clean_inval_poc)
	dcache_by_line_op civac, sy, x0, x1, x2, x3
	ret
SYM_FUNC_END_PI(dcache_clean_inval_poc)
SYM_FUNC_END(__pi_dcache_clean_inval_poc)
SYM_FUNC_ALIAS(dcache_clean_inval_poc, __pi_dcache_clean_inval_poc)

/*
 *	dcache_clean_pou(start, end)
@ -140,7 +141,7 @@ SYM_FUNC_END(dcache_clean_pou)
 *	- start   - kernel start address of region
 *	- end     - kernel end address of region
 */
SYM_FUNC_START_PI(dcache_inval_poc)
SYM_FUNC_START(__pi_dcache_inval_poc)
	dcache_line_size x2, x3
	sub	x3, x2, #1
	tst	x1, x3				// end cache line aligned?
@ -158,7 +159,8 @@ SYM_FUNC_START_PI(dcache_inval_poc)
	b.lo	2b
	dsb	sy
	ret
SYM_FUNC_END_PI(dcache_inval_poc)
SYM_FUNC_END(__pi_dcache_inval_poc)
SYM_FUNC_ALIAS(dcache_inval_poc, __pi_dcache_inval_poc)

/*
 *	dcache_clean_poc(start, end)
@ -169,10 +171,11 @@ SYM_FUNC_END_PI(dcache_inval_poc)
 *	- start   - virtual start address of region
 *	- end     - virtual end address of region
 */
SYM_FUNC_START_PI(dcache_clean_poc)
SYM_FUNC_START(__pi_dcache_clean_poc)
	dcache_by_line_op cvac, sy, x0, x1, x2, x3
	ret
SYM_FUNC_END_PI(dcache_clean_poc)
SYM_FUNC_END(__pi_dcache_clean_poc)
SYM_FUNC_ALIAS(dcache_clean_poc, __pi_dcache_clean_poc)

/*
 *	dcache_clean_pop(start, end)
@ -183,13 +186,14 @@ SYM_FUNC_END_PI(dcache_clean_poc)
 *	- start   - virtual start address of region
 *	- end     - virtual end address of region
 */
SYM_FUNC_START_PI(dcache_clean_pop)
SYM_FUNC_START(__pi_dcache_clean_pop)
alternative_if_not ARM64_HAS_DCPOP
	b	dcache_clean_poc
alternative_else_nop_endif
	dcache_by_line_op cvap, sy, x0, x1, x2, x3
	ret
SYM_FUNC_END_PI(dcache_clean_pop)
SYM_FUNC_END(__pi_dcache_clean_pop)
SYM_FUNC_ALIAS(dcache_clean_pop, __pi_dcache_clean_pop)

/*
 *	__dma_flush_area(start, size)
@ -199,11 +203,12 @@ SYM_FUNC_END_PI(dcache_clean_pop)
 *	- start   - virtual start address of region
 *	- size    - size in question
 */
SYM_FUNC_START_PI(__dma_flush_area)
SYM_FUNC_START(__pi___dma_flush_area)
	add	x1, x0, x1
	dcache_by_line_op civac, sy, x0, x1, x2, x3
	ret
SYM_FUNC_END_PI(__dma_flush_area)
SYM_FUNC_END(__pi___dma_flush_area)
SYM_FUNC_ALIAS(__dma_flush_area, __pi___dma_flush_area)

/*
 *	__dma_map_area(start, size, dir)
@ -211,12 +216,13 @@ SYM_FUNC_END_PI(__dma_flush_area)
 *	- size    - size of region
 *	- dir     - DMA direction
 */
SYM_FUNC_START_PI(__dma_map_area)
SYM_FUNC_START(__pi___dma_map_area)
	add	x1, x0, x1
	cmp	w2, #DMA_FROM_DEVICE
	b.eq	__pi_dcache_inval_poc
	b	__pi_dcache_clean_poc
SYM_FUNC_END_PI(__dma_map_area)
SYM_FUNC_END(__pi___dma_map_area)
SYM_FUNC_ALIAS(__dma_map_area, __pi___dma_map_area)

/*
 *	__dma_unmap_area(start, size, dir)
@ -224,9 +230,10 @@ SYM_FUNC_END_PI(__dma_map_area)
 *	- size    - size of region
 *	- dir     - DMA direction
 */
SYM_FUNC_START_PI(__dma_unmap_area)
SYM_FUNC_START(__pi___dma_unmap_area)
	add	x1, x0, x1
	cmp	w2, #DMA_TO_DEVICE
	b.ne	__pi_dcache_inval_poc
	ret
SYM_FUNC_END_PI(__dma_unmap_area)
SYM_FUNC_END(__pi___dma_unmap_area)
SYM_FUNC_ALIAS(__dma_unmap_area, __pi___dma_unmap_area)

@ -52,6 +52,13 @@ void __sync_icache_dcache(pte_t pte)
{
	struct page *page = pte_page(pte);

	/*
	 * HugeTLB pages are always fully mapped, so only setting head page's
	 * PG_dcache_clean flag is enough.
	 */
	if (PageHuge(page))
		page = compound_head(page);

	if (!test_bit(PG_dcache_clean, &page->flags)) {
		sync_icache_aliases((unsigned long)page_address(page),
				    (unsigned long)page_address(page) +

@ -56,24 +56,33 @@ void __init arm64_hugetlb_cma_reserve(void)
}
#endif /* CONFIG_CMA */

static bool __hugetlb_valid_size(unsigned long size)
{
	switch (size) {
#ifndef __PAGETABLE_PMD_FOLDED
	case PUD_SIZE:
		return pud_sect_supported();
#endif
	case CONT_PMD_SIZE:
	case PMD_SIZE:
	case CONT_PTE_SIZE:
		return true;
	}

	return false;
}

#ifdef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
bool arch_hugetlb_migration_supported(struct hstate *h)
{
	size_t pagesize = huge_page_size(h);

	switch (pagesize) {
#ifndef __PAGETABLE_PMD_FOLDED
	case PUD_SIZE:
		return pud_sect_supported();
#endif
	case PMD_SIZE:
	case CONT_PMD_SIZE:
	case CONT_PTE_SIZE:
		return true;
	}
	if (!__hugetlb_valid_size(pagesize)) {
		pr_warn("%s: unrecognized huge page size 0x%lx\n",
			__func__, pagesize);
		return false;
	}
	return true;
}
#endif

@ -506,16 +515,5 @@ arch_initcall(hugetlbpage_init);

bool __init arch_hugetlb_valid_size(unsigned long size)
{
	switch (size) {
#ifndef __PAGETABLE_PMD_FOLDED
	case PUD_SIZE:
		return pud_sect_supported();
#endif
	case CONT_PMD_SIZE:
	case PMD_SIZE:
	case CONT_PTE_SIZE:
		return true;
	}

	return false;
	return __hugetlb_valid_size(size);
}

@ -61,8 +61,34 @@ EXPORT_SYMBOL(memstart_addr);
 * unless restricted on specific platforms (e.g. 30-bit on Raspberry Pi 4).
 * In such case, ZONE_DMA32 covers the rest of the 32-bit addressable memory,
 * otherwise it is empty.
 *
 * Memory reservation for crash kernel is either done early or deferred
 * depending on DMA memory zones configs (ZONE_DMA) --
 *
 * In absence of ZONE_DMA configs arm64_dma_phys_limit is initialized
 * here instead of max_zone_phys(). This lets early reservation of
 * crash kernel memory which has a dependency on arm64_dma_phys_limit.
 * Reserving memory early for crash kernel allows linear creation of block
 * mappings (greater than page-granularity) for all the memory bank ranges.
 * In this scheme a comparatively quicker boot is observed.
 *
 * If ZONE_DMA configs are defined, crash kernel memory reservation
 * is delayed until DMA zone memory range size initialization performed in
 * zone_sizes_init(). The defer is necessary to steer clear of DMA zone
 * memory range to avoid overlap allocation. So crash kernel memory boundaries
 * are not known when mapping all bank memory ranges, which otherwise means
 * not possible to exclude crash kernel range from creating block mappings
 * so page-granularity mappings are created for the entire memory range.
 * Hence a slightly slower boot is observed.
 *
 * Note: Page-granularity mappings are necessary for crash kernel memory
 * range for shrinking its size via /sys/kernel/kexec_crash_size interface.
 */
phys_addr_t arm64_dma_phys_limit __ro_after_init;
#if IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32)
phys_addr_t __ro_after_init arm64_dma_phys_limit;
#else
phys_addr_t __ro_after_init arm64_dma_phys_limit = PHYS_MASK + 1;
#endif

#ifdef CONFIG_KEXEC_CORE
/*
@ -153,8 +179,6 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
	if (!arm64_dma_phys_limit)
		arm64_dma_phys_limit = dma32_phys_limit;
#endif
	if (!arm64_dma_phys_limit)
		arm64_dma_phys_limit = PHYS_MASK + 1;
	max_zone_pfns[ZONE_NORMAL] = max;

	free_area_init(max_zone_pfns);
@ -315,6 +339,9 @@ void __init arm64_memblock_init(void)

	early_init_fdt_scan_reserved_mem();

	if (!IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32))
		reserve_crashkernel();

	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
}

@ -361,6 +388,7 @@ void __init bootmem_init(void)
	 * request_standard_resources() depends on crashkernel's memory being
	 * reserved, so do it here.
	 */
	if (IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32))
		reserve_crashkernel();

	memblock_dump_all();

@ -63,6 +63,7 @@ static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss __maybe_unused;
static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss __maybe_unused;

static DEFINE_SPINLOCK(swapper_pgdir_lock);
static DEFINE_MUTEX(fixmap_lock);

void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd)
{
@ -294,18 +295,6 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
	} while (addr = next, addr != end);
}

static inline bool use_1G_block(unsigned long addr, unsigned long next,
				unsigned long phys)
{
	if (PAGE_SHIFT != 12)
		return false;

	if (((addr | next | phys) & ~PUD_MASK) != 0)
		return false;

	return true;
}

static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
			   phys_addr_t phys, pgprot_t prot,
			   phys_addr_t (*pgtable_alloc)(int),
@ -329,6 +318,12 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
	}
	BUG_ON(p4d_bad(p4d));

	/*
	 * No need for locking during early boot. And it doesn't work as
	 * expected with KASLR enabled.
	 */
	if (system_state != SYSTEM_BOOTING)
		mutex_lock(&fixmap_lock);
	pudp = pud_set_fixmap_offset(p4dp, addr);
	do {
		pud_t old_pud = READ_ONCE(*pudp);
@ -338,7 +333,8 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
		/*
		 * For 4K granule only, attempt to put down a 1GB block
		 */
		if (use_1G_block(addr, next, phys) &&
		if (pud_sect_supported() &&
		   ((addr | next | phys) & ~PUD_MASK) == 0 &&
		    (flags & NO_BLOCK_MAPPINGS) == 0) {
			pud_set_huge(pudp, phys, prot);

@ -359,6 +355,8 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
	} while (pudp++, addr = next, addr != end);

	pud_clear_fixmap();
	if (system_state != SYSTEM_BOOTING)
		mutex_unlock(&fixmap_lock);
}

static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
@ -517,7 +515,7 @@ static void __init map_mem(pgd_t *pgdp)
	 */
	BUILD_BUG_ON(pgd_index(direct_map_end - 1) == pgd_index(direct_map_end));

	if (can_set_direct_map() || crash_mem_map || IS_ENABLED(CONFIG_KFENCE))
	if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;

	/*
@ -528,6 +526,17 @@ static void __init map_mem(pgd_t *pgdp)
	 */
	memblock_mark_nomap(kernel_start, kernel_end - kernel_start);

#ifdef CONFIG_KEXEC_CORE
	if (crash_mem_map) {
		if (IS_ENABLED(CONFIG_ZONE_DMA) ||
		    IS_ENABLED(CONFIG_ZONE_DMA32))
			flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
		else if (crashk_res.end)
			memblock_mark_nomap(crashk_res.start,
					    resource_size(&crashk_res));
	}
#endif

	/* map all the memory banks */
	for_each_mem_range(i, &start, &end) {
		if (start >= end)
@ -554,6 +563,25 @@ static void __init map_mem(pgd_t *pgdp)
	__map_memblock(pgdp, kernel_start, kernel_end,
		       PAGE_KERNEL, NO_CONT_MAPPINGS);
	memblock_clear_nomap(kernel_start, kernel_end - kernel_start);

	/*
	 * Use page-level mappings here so that we can shrink the region
	 * in page granularity and put back unused memory to buddy system
	 * through /sys/kernel/kexec_crash_size interface.
	 */
#ifdef CONFIG_KEXEC_CORE
	if (crash_mem_map &&
	    !IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32)) {
		if (crashk_res.end) {
			__map_memblock(pgdp, crashk_res.start,
				       crashk_res.end + 1,
				       PAGE_KERNEL,
				       NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
			memblock_clear_nomap(crashk_res.start,
					     resource_size(&crashk_res));
		}
	}
#endif
}

void mark_rodata_ro(void)

@ -12,7 +12,7 @@ static DEFINE_XARRAY(mte_pages);
void *mte_allocate_tag_storage(void)
{
	/* tags granule is 16 bytes, 2 tags stored per byte */
	return kmalloc(PAGE_SIZE / 16 / 2, GFP_KERNEL);
	return kmalloc(MTE_PAGE_TAG_STORAGE, GFP_KERNEL);
}
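
The constant swap above is purely cosmetic; the arithmetic it replaces is easy to check. With a 16-byte tag granule and two 4-bit tags packed per byte, a 4K page needs 4096 / 16 / 2 = 128 bytes, which is what MTE_PAGE_TAG_STORAGE works out to in the 4K-page case:

/* Sketch of the arithmetic behind MTE_PAGE_TAG_STORAGE (4K pages). */
#define TOY_PAGE_SIZE		4096
#define TOY_MTE_GRANULE_SIZE	16
#define TOY_TAG_STORAGE		(TOY_PAGE_SIZE / TOY_MTE_GRANULE_SIZE / 2)	/* 128 */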

void mte_free_tag_storage(char *storage)

@ -46,7 +46,7 @@
#endif

#ifdef CONFIG_KASAN_HW_TAGS
#define TCR_MTE_FLAGS SYS_TCR_EL1_TCMA1 | TCR_TBI1 | TCR_TBID1
#define TCR_MTE_FLAGS TCR_TCMA1 | TCR_TBI1 | TCR_TBID1
#else
/*
 * The mte_zero_clear_page_tags() implementation uses DC GZVA, which relies on
@ -89,9 +89,16 @@
#define A64_STXR(sf, Rt, Rn, Rs) \
	A64_LSX(sf, Rt, Rn, Rs, STORE_EX)

/* LSE atomics */
/*
 * LSE atomics
 *
 * STADD is simply encoded as an alias for LDADD with XZR as
 * the destination register.
 */
#define A64_STADD(sf, Rn, Rs) \
	aarch64_insn_gen_stadd(Rn, Rs, A64_SIZE(sf))
	aarch64_insn_gen_atomic_ld_op(A64_ZR, Rn, Rs, \
				      A64_SIZE(sf), AARCH64_INSN_MEM_ATOMIC_ADD, \
				      AARCH64_INSN_MEM_ORDER_NONE)
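
After this change the macro routes through the generic atomic generator instead of the removed aarch64_insn_gen_stadd() helper; for a 64-bit STADD it is equivalent to the following sketch (register operands illustrative):

/* STADD is LDADD with XZR as the destination and no ordering. */
u32 stadd = aarch64_insn_gen_atomic_ld_op(A64_ZR,
					  AARCH64_INSN_REG_1,	/* Rn */
					  AARCH64_INSN_REG_2,	/* Rs */
					  AARCH64_INSN_SIZE_64,
					  AARCH64_INSN_MEM_ATOMIC_ADD,
					  AARCH64_INSN_MEM_ORDER_NONE);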

/* Add/subtract (immediate) */
#define A64_ADDSUB_IMM(sf, Rd, Rn, imm12, type) \

@ -5,18 +5,14 @@ kapi := $(gen)/asm

kapi-hdrs-y := $(kapi)/cpucaps.h

targets += $(addprefix ../../../,$(gen-y) $(kapi-hdrs-y))
targets += $(addprefix ../../../, $(kapi-hdrs-y))

PHONY += kapi

kapi: $(kapi-hdrs-y) $(gen-y)

# Create output directory if not already present
_dummy := $(shell [ -d '$(kapi)' ] || mkdir -p '$(kapi)')
kapi: $(kapi-hdrs-y)

quiet_cmd_gen_cpucaps = GEN     $@
      cmd_gen_cpucaps = mkdir -p $(dir $@) && \
                        $(AWK) -f $(filter-out $(PHONY),$^) > $@
      cmd_gen_cpucaps = mkdir -p $(dir $@); $(AWK) -f $(real-prereqs) > $@

$(kapi)/cpucaps.h: $(src)/gen-cpucaps.awk $(src)/cpucaps FORCE
	$(call if_changed,gen_cpucaps)

@ -7,7 +7,8 @@ BTI
HAS_32BIT_EL0_DO_NOT_USE
HAS_32BIT_EL1
HAS_ADDRESS_AUTH
HAS_ADDRESS_AUTH_ARCH
HAS_ADDRESS_AUTH_ARCH_QARMA3
HAS_ADDRESS_AUTH_ARCH_QARMA5
HAS_ADDRESS_AUTH_IMP_DEF
HAS_AMU_EXTN
HAS_ARMv8_4_TTL
@ -21,7 +22,8 @@ HAS_E0PD
HAS_ECV
HAS_EPAN
HAS_GENERIC_AUTH
HAS_GENERIC_AUTH_ARCH
HAS_GENERIC_AUTH_ARCH_QARMA3
HAS_GENERIC_AUTH_ARCH_QARMA5
HAS_GENERIC_AUTH_IMP_DEF
HAS_IRQ_PRIO_MASKING
HAS_LDAPR

@ -8,6 +8,7 @@ menu "Processor type and features"

config IA64
	bool
	select ARCH_BINFMT_ELF_EXTRA_PHDRS
	select ARCH_HAS_DMA_MARK_CLEAN
	select ARCH_HAS_STRNCPY_FROM_USER
	select ARCH_HAS_STRNLEN_USER

@ -37,6 +37,7 @@
#include <asm/irq.h>
#include <asm/machdep.h>
#include <asm/io.h>
#include <asm/config.h>

static unsigned long amiga_model;

@ -16,6 +16,7 @@
#include <asm/apollohw.h>
#include <asm/irq.h>
#include <asm/machdep.h>
#include <asm/config.h>

u_long sio01_physaddr;
u_long sio23_physaddr;

@ -46,6 +46,7 @@
#include <asm/machdep.h>
#include <asm/hwtest.h>
#include <asm/io.h>
#include <asm/config.h>

u_long atari_mch_cookie;
EXPORT_SYMBOL(atari_mch_cookie);

@ -36,6 +36,7 @@
#include <asm/traps.h>
#include <asm/machdep.h>
#include <asm/bvme6000hw.h>
#include <asm/config.h>

static void bvme6000_get_model(char *model);
extern void bvme6000_sched_init(void);
@ -104,7 +104,6 @@ CONFIG_NF_TABLES_NETDEV=y
|
||||
CONFIG_NFT_NUMGEN=m
|
||||
CONFIG_NFT_CT=m
|
||||
CONFIG_NFT_FLOW_OFFLOAD=m
|
||||
CONFIG_NFT_COUNTER=m
|
||||
CONFIG_NFT_CONNLIMIT=m
|
||||
CONFIG_NFT_LOG=m
|
||||
CONFIG_NFT_LIMIT=m
|
||||
@ -204,7 +203,6 @@ CONFIG_IP_SET_LIST_SET=m
|
||||
CONFIG_NFT_DUP_IPV4=m
|
||||
CONFIG_NFT_FIB_IPV4=m
|
||||
CONFIG_NF_TABLES_ARP=y
|
||||
CONFIG_NF_FLOW_TABLE_IPV4=m
|
||||
CONFIG_NF_LOG_ARP=m
|
||||
CONFIG_NF_LOG_IPV4=m
|
||||
CONFIG_IP_NF_IPTABLES=m
|
||||
@ -229,7 +227,6 @@ CONFIG_IP_NF_ARPFILTER=m
|
||||
CONFIG_IP_NF_ARP_MANGLE=m
|
||||
CONFIG_NFT_DUP_IPV6=m
|
||||
CONFIG_NFT_FIB_IPV6=m
|
||||
CONFIG_NF_FLOW_TABLE_IPV6=m
|
||||
CONFIG_IP6_NF_IPTABLES=m
|
||||
CONFIG_IP6_NF_MATCH_AH=m
|
||||
CONFIG_IP6_NF_MATCH_EUI64=m
|
||||
@ -435,6 +432,7 @@ CONFIG_FB_AMIGA_ECS=y
|
||||
CONFIG_FB_AMIGA_AGA=y
|
||||
CONFIG_FB_FM2=y
|
||||
CONFIG_FRAMEBUFFER_CONSOLE=y
|
||||
CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION=y
|
||||
CONFIG_LOGO=y
|
||||
CONFIG_SOUND=m
|
||||
CONFIG_DMASOUND_PAULA=m
|
||||
@ -643,7 +641,7 @@ CONFIG_TEST_UUID=m
|
||||
CONFIG_TEST_XARRAY=m
|
||||
CONFIG_TEST_OVERFLOW=m
|
||||
CONFIG_TEST_RHASHTABLE=m
|
||||
CONFIG_TEST_HASH=m
|
||||
CONFIG_TEST_SIPHASH=m
|
||||
CONFIG_TEST_IDA=m
|
||||
CONFIG_TEST_BITOPS=m
|
||||
CONFIG_TEST_VMALLOC=m
|
||||
|
@ -100,7 +100,6 @@ CONFIG_NF_TABLES_NETDEV=y
|
||||
CONFIG_NFT_NUMGEN=m
|
||||
CONFIG_NFT_CT=m
|
||||
CONFIG_NFT_FLOW_OFFLOAD=m
|
||||
CONFIG_NFT_COUNTER=m
|
||||
CONFIG_NFT_CONNLIMIT=m
|
||||
CONFIG_NFT_LOG=m
|
||||
CONFIG_NFT_LIMIT=m
|
||||
@ -200,7 +199,6 @@ CONFIG_IP_SET_LIST_SET=m
|
||||
CONFIG_NFT_DUP_IPV4=m
|
||||
CONFIG_NFT_FIB_IPV4=m
|
||||
CONFIG_NF_TABLES_ARP=y
|
||||
CONFIG_NF_FLOW_TABLE_IPV4=m
|
||||
CONFIG_NF_LOG_ARP=m
|
||||
CONFIG_NF_LOG_IPV4=m
|
||||
CONFIG_IP_NF_IPTABLES=m
|
||||
@ -225,7 +223,6 @@ CONFIG_IP_NF_ARPFILTER=m
|
||||
CONFIG_IP_NF_ARP_MANGLE=m
|
||||
CONFIG_NFT_DUP_IPV6=m
|
||||
CONFIG_NFT_FIB_IPV6=m
|
||||
CONFIG_NF_FLOW_TABLE_IPV6=m
|
||||
CONFIG_IP6_NF_IPTABLES=m
|
||||
CONFIG_IP6_NF_MATCH_AH=m
|
||||
CONFIG_IP6_NF_MATCH_EUI64=m
|
||||
@ -393,6 +390,7 @@ CONFIG_PTP_1588_CLOCK=m
|
||||
# CONFIG_HWMON is not set
|
||||
CONFIG_FB=y
|
||||
CONFIG_FRAMEBUFFER_CONSOLE=y
|
||||
CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION=y
|
||||
CONFIG_LOGO=y
|
||||
# CONFIG_LOGO_LINUX_VGA16 is not set
|
||||
# CONFIG_LOGO_LINUX_CLUT224 is not set
|
||||
@ -599,7 +597,7 @@ CONFIG_TEST_UUID=m
|
||||
CONFIG_TEST_XARRAY=m
|
||||
CONFIG_TEST_OVERFLOW=m
|
||||
CONFIG_TEST_RHASHTABLE=m
|
||||
CONFIG_TEST_HASH=m
|
||||
CONFIG_TEST_SIPHASH=m
|
||||
CONFIG_TEST_IDA=m
|
||||
CONFIG_TEST_BITOPS=m
|
||||
CONFIG_TEST_VMALLOC=m
|
||||
|
@ -107,7 +107,6 @@ CONFIG_NF_TABLES_NETDEV=y
|
||||
CONFIG_NFT_NUMGEN=m
|
||||
CONFIG_NFT_CT=m
|
||||
CONFIG_NFT_FLOW_OFFLOAD=m
|
||||
CONFIG_NFT_COUNTER=m
|
||||
CONFIG_NFT_CONNLIMIT=m
|
||||
CONFIG_NFT_LOG=m
|
||||
CONFIG_NFT_LIMIT=m
|
||||
@ -207,7 +206,6 @@ CONFIG_IP_SET_LIST_SET=m
|
||||
CONFIG_NFT_DUP_IPV4=m
|
||||
CONFIG_NFT_FIB_IPV4=m
|
||||
CONFIG_NF_TABLES_ARP=y
|
||||
CONFIG_NF_FLOW_TABLE_IPV4=m
|
||||
CONFIG_NF_LOG_ARP=m
|
||||
CONFIG_NF_LOG_IPV4=m
|
||||
CONFIG_IP_NF_IPTABLES=m
|
||||
@ -232,7 +230,6 @@ CONFIG_IP_NF_ARPFILTER=m
|
||||
CONFIG_IP_NF_ARP_MANGLE=m
|
||||
CONFIG_NFT_DUP_IPV6=m
|
||||
CONFIG_NFT_FIB_IPV6=m
|
||||
CONFIG_NF_FLOW_TABLE_IPV6=m
|
||||
CONFIG_IP6_NF_IPTABLES=m
|
||||
CONFIG_IP6_NF_MATCH_AH=m
|
||||
CONFIG_IP6_NF_MATCH_EUI64=m
|
||||
@ -621,7 +618,7 @@ CONFIG_TEST_UUID=m
|
||||
CONFIG_TEST_XARRAY=m
|
||||
CONFIG_TEST_OVERFLOW=m
|
||||
CONFIG_TEST_RHASHTABLE=m
|
||||
CONFIG_TEST_HASH=m
|
||||
CONFIG_TEST_SIPHASH=m
|
||||
CONFIG_TEST_IDA=m
|
||||
CONFIG_TEST_BITOPS=m
|
||||
CONFIG_TEST_VMALLOC=m
|
||||
|
@ -97,7 +97,6 @@ CONFIG_NF_TABLES_NETDEV=y
|
||||
CONFIG_NFT_NUMGEN=m
|
||||
CONFIG_NFT_CT=m
|
||||
CONFIG_NFT_FLOW_OFFLOAD=m
|
||||
CONFIG_NFT_COUNTER=m
|
||||
CONFIG_NFT_CONNLIMIT=m
|
||||
CONFIG_NFT_LOG=m
|
||||
CONFIG_NFT_LIMIT=m
|
||||
@ -197,7 +196,6 @@ CONFIG_IP_SET_LIST_SET=m
|
||||
CONFIG_NFT_DUP_IPV4=m
|
||||
CONFIG_NFT_FIB_IPV4=m
|
||||
CONFIG_NF_TABLES_ARP=y
|
||||
CONFIG_NF_FLOW_TABLE_IPV4=m
|
||||
CONFIG_NF_LOG_ARP=m
|
||||
CONFIG_NF_LOG_IPV4=m
|
||||
CONFIG_IP_NF_IPTABLES=m
|
||||
@ -222,7 +220,6 @@ CONFIG_IP_NF_ARPFILTER=m
|
||||
CONFIG_IP_NF_ARP_MANGLE=m
|
||||
CONFIG_NFT_DUP_IPV6=m
|
||||
CONFIG_NFT_FIB_IPV6=m
|
||||
CONFIG_NF_FLOW_TABLE_IPV6=m
|
||||
CONFIG_IP6_NF_IPTABLES=m
|
||||
CONFIG_IP6_NF_MATCH_AH=m
|
||||
CONFIG_IP6_NF_MATCH_EUI64=m
|
||||
@ -592,7 +589,7 @@ CONFIG_TEST_UUID=m
|
||||
CONFIG_TEST_XARRAY=m
|
||||
CONFIG_TEST_OVERFLOW=m
|
||||
CONFIG_TEST_RHASHTABLE=m
|
||||
CONFIG_TEST_HASH=m
|
||||
CONFIG_TEST_SIPHASH=m
|
||||
CONFIG_TEST_IDA=m
|
||||
CONFIG_TEST_BITOPS=m
|
||||
CONFIG_TEST_VMALLOC=m
|
||||
|

@ -99,7 +99,6 @@ CONFIG_NF_TABLES_NETDEV=y
CONFIG_NFT_NUMGEN=m
CONFIG_NFT_CT=m
CONFIG_NFT_FLOW_OFFLOAD=m
CONFIG_NFT_COUNTER=m
CONFIG_NFT_CONNLIMIT=m
CONFIG_NFT_LOG=m
CONFIG_NFT_LIMIT=m
@ -199,7 +198,6 @@ CONFIG_IP_SET_LIST_SET=m
CONFIG_NFT_DUP_IPV4=m
CONFIG_NFT_FIB_IPV4=m
CONFIG_NF_TABLES_ARP=y
CONFIG_NF_FLOW_TABLE_IPV4=m
CONFIG_NF_LOG_ARP=m
CONFIG_NF_LOG_IPV4=m
CONFIG_IP_NF_IPTABLES=m
@ -224,7 +222,6 @@ CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m
CONFIG_NFT_DUP_IPV6=m
CONFIG_NFT_FIB_IPV6=m
CONFIG_NF_FLOW_TABLE_IPV6=m
CONFIG_IP6_NF_IPTABLES=m
CONFIG_IP6_NF_MATCH_AH=m
CONFIG_IP6_NF_MATCH_EUI64=m
@ -395,6 +392,7 @@ CONFIG_PTP_1588_CLOCK=m
# CONFIG_HWMON is not set
CONFIG_FB=y
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION=y
CONFIG_LOGO=y
# CONFIG_LOGO_LINUX_MONO is not set
# CONFIG_LOGO_LINUX_VGA16 is not set
@ -601,7 +599,7 @@ CONFIG_TEST_UUID=m
CONFIG_TEST_XARRAY=m
CONFIG_TEST_OVERFLOW=m
CONFIG_TEST_RHASHTABLE=m
CONFIG_TEST_HASH=m
CONFIG_TEST_SIPHASH=m
CONFIG_TEST_IDA=m
CONFIG_TEST_BITOPS=m
CONFIG_TEST_VMALLOC=m

@ -98,7 +98,6 @@ CONFIG_NF_TABLES_NETDEV=y
CONFIG_NFT_NUMGEN=m
CONFIG_NFT_CT=m
CONFIG_NFT_FLOW_OFFLOAD=m
CONFIG_NFT_COUNTER=m
CONFIG_NFT_CONNLIMIT=m
CONFIG_NFT_LOG=m
CONFIG_NFT_LIMIT=m
@ -198,7 +197,6 @@ CONFIG_IP_SET_LIST_SET=m
CONFIG_NFT_DUP_IPV4=m
CONFIG_NFT_FIB_IPV4=m
CONFIG_NF_TABLES_ARP=y
CONFIG_NF_FLOW_TABLE_IPV4=m
CONFIG_NF_LOG_ARP=m
CONFIG_NF_LOG_IPV4=m
CONFIG_IP_NF_IPTABLES=m
@ -223,7 +221,6 @@ CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m
CONFIG_NFT_DUP_IPV6=m
CONFIG_NFT_FIB_IPV6=m
CONFIG_NF_FLOW_TABLE_IPV6=m
CONFIG_IP6_NF_IPTABLES=m
CONFIG_IP6_NF_MATCH_AH=m
CONFIG_IP6_NF_MATCH_EUI64=m
@ -623,7 +620,7 @@ CONFIG_TEST_UUID=m
CONFIG_TEST_XARRAY=m
CONFIG_TEST_OVERFLOW=m
CONFIG_TEST_RHASHTABLE=m
CONFIG_TEST_HASH=m
CONFIG_TEST_SIPHASH=m
CONFIG_TEST_IDA=m
CONFIG_TEST_BITOPS=m
CONFIG_TEST_VMALLOC=m

@ -118,7 +118,6 @@ CONFIG_NF_TABLES_NETDEV=y
CONFIG_NFT_NUMGEN=m
CONFIG_NFT_CT=m
CONFIG_NFT_FLOW_OFFLOAD=m
CONFIG_NFT_COUNTER=m
CONFIG_NFT_CONNLIMIT=m
CONFIG_NFT_LOG=m
CONFIG_NFT_LIMIT=m
@ -218,7 +217,6 @@ CONFIG_IP_SET_LIST_SET=m
CONFIG_NFT_DUP_IPV4=m
CONFIG_NFT_FIB_IPV4=m
CONFIG_NF_TABLES_ARP=y
CONFIG_NF_FLOW_TABLE_IPV4=m
CONFIG_NF_LOG_ARP=m
CONFIG_NF_LOG_IPV4=m
CONFIG_IP_NF_IPTABLES=m
@ -243,7 +241,6 @@ CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m
CONFIG_NFT_DUP_IPV6=m
CONFIG_NFT_FIB_IPV6=m
CONFIG_NF_FLOW_TABLE_IPV6=m
CONFIG_IP6_NF_IPTABLES=m
CONFIG_IP6_NF_MATCH_AH=m
CONFIG_IP6_NF_MATCH_EUI64=m
@ -497,6 +494,7 @@ CONFIG_FB_ATARI=y
CONFIG_FB_VALKYRIE=y
CONFIG_FB_MAC=y
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION=y
CONFIG_LOGO=y
CONFIG_SOUND=m
CONFIG_DMASOUND_ATARI=m
@ -708,7 +706,7 @@ CONFIG_TEST_UUID=m
CONFIG_TEST_XARRAY=m
CONFIG_TEST_OVERFLOW=m
CONFIG_TEST_RHASHTABLE=m
CONFIG_TEST_HASH=m
CONFIG_TEST_SIPHASH=m
CONFIG_TEST_IDA=m
CONFIG_TEST_BITOPS=m
CONFIG_TEST_VMALLOC=m

@ -96,7 +96,6 @@ CONFIG_NF_TABLES_NETDEV=y
CONFIG_NFT_NUMGEN=m
CONFIG_NFT_CT=m
CONFIG_NFT_FLOW_OFFLOAD=m
CONFIG_NFT_COUNTER=m
CONFIG_NFT_CONNLIMIT=m
CONFIG_NFT_LOG=m
CONFIG_NFT_LIMIT=m
@ -196,7 +195,6 @@ CONFIG_IP_SET_LIST_SET=m
CONFIG_NFT_DUP_IPV4=m
CONFIG_NFT_FIB_IPV4=m
CONFIG_NF_TABLES_ARP=y
CONFIG_NF_FLOW_TABLE_IPV4=m
CONFIG_NF_LOG_ARP=m
CONFIG_NF_LOG_IPV4=m
CONFIG_IP_NF_IPTABLES=m
@ -221,7 +219,6 @@ CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m
CONFIG_NFT_DUP_IPV6=m
CONFIG_NFT_FIB_IPV6=m
CONFIG_NF_FLOW_TABLE_IPV6=m
CONFIG_IP6_NF_IPTABLES=m
CONFIG_IP6_NF_MATCH_AH=m
CONFIG_IP6_NF_MATCH_EUI64=m
@ -591,7 +588,7 @@ CONFIG_TEST_UUID=m
CONFIG_TEST_XARRAY=m
CONFIG_TEST_OVERFLOW=m
CONFIG_TEST_RHASHTABLE=m
CONFIG_TEST_HASH=m
CONFIG_TEST_SIPHASH=m
CONFIG_TEST_IDA=m
CONFIG_TEST_BITOPS=m
CONFIG_TEST_VMALLOC=m

@ -97,7 +97,6 @@ CONFIG_NF_TABLES_NETDEV=y
CONFIG_NFT_NUMGEN=m
CONFIG_NFT_CT=m
CONFIG_NFT_FLOW_OFFLOAD=m
CONFIG_NFT_COUNTER=m
CONFIG_NFT_CONNLIMIT=m
CONFIG_NFT_LOG=m
CONFIG_NFT_LIMIT=m
@ -197,7 +196,6 @@ CONFIG_IP_SET_LIST_SET=m
CONFIG_NFT_DUP_IPV4=m
CONFIG_NFT_FIB_IPV4=m
CONFIG_NF_TABLES_ARP=y
CONFIG_NF_FLOW_TABLE_IPV4=m
CONFIG_NF_LOG_ARP=m
CONFIG_NF_LOG_IPV4=m
CONFIG_IP_NF_IPTABLES=m
@ -222,7 +220,6 @@ CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m
CONFIG_NFT_DUP_IPV6=m
CONFIG_NFT_FIB_IPV6=m
CONFIG_NF_FLOW_TABLE_IPV6=m
CONFIG_IP6_NF_IPTABLES=m
CONFIG_IP6_NF_MATCH_AH=m
CONFIG_IP6_NF_MATCH_EUI64=m
@ -592,7 +589,7 @@ CONFIG_TEST_UUID=m
CONFIG_TEST_XARRAY=m
CONFIG_TEST_OVERFLOW=m
CONFIG_TEST_RHASHTABLE=m
CONFIG_TEST_HASH=m
CONFIG_TEST_SIPHASH=m
CONFIG_TEST_IDA=m
CONFIG_TEST_BITOPS=m
CONFIG_TEST_VMALLOC=m
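
Note: the hunks above are context-only excerpts from several defconfig files; the diff viewer elided the file names, though options such as CONFIG_FB_ATARI and CONFIG_DMASOUND_ATARI suggest the m68k defconfigs. For readers who want to mirror settings like these in a local build, a minimal sketch using the kernel's in-tree scripts/config helper follows; the option names are taken from the hunks above, and a stock mainline checkout with an existing .config is assumed:

  # Illustrative only, not part of this commit: toggle a few of the
  # options seen in the hunks above, then let Kconfig reconcile any
  # dependencies the edits may have disturbed.
  scripts/config --enable CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION
  scripts/config --module CONFIG_NFT_CT
  scripts/config --module CONFIG_TEST_VMALLOC
  make olddefconfig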