Merge "Merge tag 'android11-5.4.242_r00' into android11-5.4" into android11-5.4-lts

Contains the following commits from android11-5.4:

*   304bc0bbaf21 Merge "Merge tag 'android11-5.4.242_r00' into android11-5.4" into android11-5.4-lts
|\
| * 3b9196f720 Merge tag 'android11-5.4.242_r00' into android11-5.4
* | 02293ac0e1 UPSTREAM: ext4: avoid a potential slab-out-of-bounds in ext4_group_desc_csum
* | 419a816b34 UPSTREAM: ext4: fix invalid free tracking in ext4_xattr_move_to_block()
|/
* 8177b9d924 Revert "Revert "mm/rmap: Fix anon_vma->degree ambiguity leading to double-reuse""
* 6d5e0c7837 FROMLIST: binder: fix UAF caused by faulty buffer cleanup
* 0779e2b555 UPSTREAM: usb: musb: mediatek: don't unregister something that wasn't registered
* e3280136fb UPSTREAM: net: fix NULL pointer in skb_segment_list
* 7fcf9449ae UPSTREAM: xfrm/compat: prevent potential spectre v1 gadget in xfrm_xlate32_attr()
* 79f59b0d69 UPSTREAM: xfrm: compat: change expression for switch in xfrm_xlate64
* 35093f38c4 UPSTREAM: perf/core: Call LSM hook after copying perf_event_attr
* edbf3d0e06 UPSTREAM: ext4: fix use-after-free in ext4_xattr_set_entry
* ed2290e360 UPSTREAM: ext4: remove duplicate definition of ext4_xattr_ibody_inline_set()
* df0f6ba0c5 UPSTREAM: Revert "ext4: fix use-after-free in ext4_xattr_set_entry"
* 4716ccc31d UPSTREAM: media: rc: Fix use-after-free bugs caused by ene_tx_irqsim()
* ae46662afa ANDROID: incremental fs: Evict inodes before freeing mount data
* 5da4c29d97 UPSTREAM: ext4: fix kernel BUG in 'ext4_write_inline_data_end()'
* 02a09d4145 UPSTREAM: hid: bigben_probe(): validate report count
* 63c050ea59 UPSTREAM: HID: bigben: use spinlock to safely schedule workers
* ce095c7400 BACKPORT: of: base: Skip CPU nodes with "fail"/"fail-..." status
* cdbef3965f UPSTREAM: HID: bigben_worker() remove unneeded check on report_field
* dfeba42a9c UPSTREAM: HID: bigben: use spinlock to protect concurrent accesses
* 13e656b3b5 UPSTREAM: hwrng: virtio - add an internal buffer

Change-Id: Iaa4d5779bef81fd1afaab0fd1490752f04041a07
Cc: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Author: Todd Kjos, 2023-05-26 17:07:00 +00:00 (committed by Greg Kroah-Hartman)
Commit: 05fe88d1c8
703 changed files with 86432 additions and 88719 deletions


@ -82,6 +82,8 @@ Brief summary of control files.
memory.swappiness set/show swappiness parameter of vmscan
(See sysctl's vm.swappiness)
memory.move_charge_at_immigrate set/show controls of moving charges
This knob is deprecated and shouldn't be
used.
memory.oom_control set/show oom controls.
memory.numa_stat show the number of memory usage per numa
node
@ -745,8 +747,15 @@ NOTE2:
It is recommended to set the soft limit always below the hard limit,
otherwise the hard limit will take precedence.
8. Move charges at task migration
=================================
8. Move charges at task migration (DEPRECATED!)
===============================================
THIS IS DEPRECATED!
It's expensive and unreliable! It's better practice to launch workload
tasks directly from inside their target cgroup. Use dedicated workload
cgroups to allow fine-grained policy adjustments without having to
move physical pages between control domains.
Users can move charges associated with a task along with task migration, that
is, uncharge task's pages from the old cgroup and charge them to the new cgroup.
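As the deprecation note above recommends, the replacement for charge moving is to
start the workload inside its destination cgroup from the beginning. A hedged
userspace sketch (the cgroup-v1 path and the workload binary are illustrative only):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            /* Join the target memory cgroup before exec'ing the workload,
             * so every page is charged to it from the start and no charges
             * ever need to be moved between control domains. */
            FILE *f = fopen("/sys/fs/cgroup/memory/workload/cgroup.procs", "w");

            if (!f)
                    return 1;
            fprintf(f, "%d\n", (int)getpid());
            fclose(f);

            execl("/usr/bin/workload", "workload", (char *)NULL);
            return 1; /* only reached if exec fails */
    }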


@ -479,8 +479,16 @@ Spectre variant 2
On Intel Skylake-era systems the mitigation covers most, but not all,
cases. See :ref:`[3] <spec_ref3>` for more details.
On CPUs with hardware mitigation for Spectre variant 2 (e.g. Enhanced
IBRS on x86), retpoline is automatically disabled at run time.
On CPUs with hardware mitigation for Spectre variant 2 (e.g. IBRS
or enhanced IBRS on x86), retpoline is automatically disabled at run time.
Systems which support enhanced IBRS (eIBRS) enable IBRS protection once at
boot, by setting the IBRS bit, and they're automatically protected against
Spectre v2 variant attacks, including cross-thread branch target injections
on SMT systems (STIBP). In other words, eIBRS enables STIBP too.
Legacy IBRS systems clear the IBRS bit on exit to userspace, so the implicit
STIBP protection is lost there; STIBP must be enabled explicitly for
userspace tasks that need it (see the prctl() interface below).
The retpoline mitigation is turned on by default on vulnerable
CPUs. It can be forced on or off by the administrator
@ -504,9 +512,12 @@ Spectre variant 2
For Spectre variant 2 mitigation, individual user programs
can be compiled with return trampolines for indirect branches.
This protects them from consuming poisoned entries in the branch
target buffer left by malicious software. Alternatively, the
programs can disable their indirect branch speculation via prctl()
(See :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
target buffer left by malicious software.
On legacy IBRS systems, at return to userspace, implicit STIBP is disabled
because the kernel clears the IBRS bit. In this case, the userspace programs
can disable indirect branch speculation via prctl() (See
:ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
On x86, this will turn on STIBP to guard against attacks from the
sibling thread when the user program is running, and use IBPB to
flush the branch target buffer when switching to/from the program.
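The prctl() interface mentioned above is the per-task speculation control API.
A minimal sketch of a task opting out of indirect branch speculation (which,
per the text, enables STIBP while it runs and IBPB around switches on x86):

    #include <stdio.h>
    #include <sys/prctl.h> /* PR_* speculation constants come via linux/prctl.h */

    int main(void)
    {
            /* Disable indirect branch speculation for this task. */
            if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
                      PR_SPEC_DISABLE, 0, 0))
                    perror("PR_SET_SPECULATION_CTRL");

            /* Query the state; the return value is a PR_SPEC_* bitmask. */
            printf("spec state: %ld\n",
                   (long)prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
                               0, 0, 0));
            return 0;
    }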


@ -1944,24 +1944,57 @@
ivrs_ioapic [HW,X86_64]
Provide an override to the IOAPIC-ID<->DEVICE-ID
mapping provided in the IVRS ACPI table. For
example, to map IOAPIC-ID decimal 10 to
PCI device 00:14.0 write the parameter as:
mapping provided in the IVRS ACPI table.
By default, PCI segment is 0, and can be omitted.
For example, to map IOAPIC-ID decimal 10 to
PCI segment 0x1 and PCI device 00:14.0,
write the parameter as:
ivrs_ioapic=10@0001:00:14.0
Deprecated formats:
* To map IOAPIC-ID decimal 10 to PCI device 00:14.0
write the parameter as:
ivrs_ioapic[10]=00:14.0
* To map IOAPIC-ID decimal 10 to PCI segment 0x1 and
PCI device 00:14.0 write the parameter as:
ivrs_ioapic[10]=0001:00:14.0
ivrs_hpet [HW,X86_64]
Provide an override to the HPET-ID<->DEVICE-ID
mapping provided in the IVRS ACPI table. For
example, to map HPET-ID decimal 0 to
PCI device 00:14.0 write the parameter as:
mapping provided in the IVRS ACPI table.
By default, PCI segment is 0, and can be omitted.
For example, to map HPET-ID decimal 10 to
PCI segment 0x1 and PCI device 00:14.0,
write the parameter as:
ivrs_hpet=10@0001:00:14.0
Deprecated formats:
* To map HPET-ID decimal 0 to PCI device 00:14.0
write the parameter as:
ivrs_hpet[0]=00:14.0
* To map HPET-ID decimal 10 to PCI segment 0x1 and
PCI device 00:14.0 write the parameter as:
ivrs_hpet[10]=0001:00:14.0
ivrs_acpihid [HW,X86_64]
Provide an override to the ACPI-HID:UID<->DEVICE-ID
mapping provided in the IVRS ACPI table. For
example, to map UART-HID:UID AMD0020:0 to
PCI device 00:14.5 write the parameter as:
mapping provided in the IVRS ACPI table.
By default, PCI segment is 0, and can be omitted.
For example, to map UART-HID:UID AMD0020:0 to
PCI segment 0x1 and PCI device ID 00:14.5,
write the parameter as:
ivrs_acpihid=AMD0020:0@0001:00:14.5
Deprecated formats:
* To map UART-HID:UID AMD0020:0 to PCI segment 0 and
PCI device ID 00:14.5, write the parameter as:
ivrs_acpihid[00:14.5]=AMD0020:0
* To map UART-HID:UID AMD0020:0 to PCI segment 0x1 and
PCI device ID 00:14.5, write the parameter as:
ivrs_acpihid[0001:00:14.5]=AMD0020:0
js= [HW,JOY] Analog joystick
See Documentation/input/joydev/joystick.rst.


@ -39,6 +39,10 @@ Setup
this mode. In this case, you should build the kernel with
CONFIG_RANDOMIZE_BASE disabled if the architecture supports KASLR.
- Build the gdb scripts (required on kernels v5.1 and above)::
make scripts_gdb
- Enable the gdb stub of QEMU/KVM, either
- at VM startup time by appending "-s" to the QEMU command line


@ -128,7 +128,6 @@ required:
- compatible
- reg
- interrupts
- clocks
- clock-output-names
additionalProperties: false


@ -1173,7 +1173,7 @@ defined:
return
-ECHILD and it will be called again in ref-walk mode.
``_weak_revalidate``
``d_weak_revalidate``
called when the VFS needs to revalidate a "jumped" dentry. This
is called when a path-walk ends at dentry that was not acquired
by doing a lookup in the parent directory. This includes "/",


@ -702,7 +702,7 @@ ref
no-jd
BIOS setup but without jack-detection
intel
Intel DG45* mobos
Intel D*45* mobos
dell-m6-amic
Dell desktops/laptops with analog mics
dell-m6-dmic


@ -3615,6 +3615,18 @@ Type: vm ioctl
Parameters: struct kvm_s390_cmma_log (in, out)
Returns: 0 on success, a negative value on error
Errors:
====== =============================================================
ENOMEM not enough memory can be allocated to complete the task
ENXIO if CMMA is not enabled
EINVAL if KVM_S390_CMMA_PEEK is not set but migration mode was not enabled
EINVAL if KVM_S390_CMMA_PEEK is not set but dirty tracking has been
disabled (and thus migration mode was automatically disabled)
EFAULT if the userspace address is invalid or if no page table is
present for the addresses (e.g. when using hugepages).
====== =============================================================
This ioctl is used to get the values of the CMMA bits on the s390
architecture. It is meant to be used in two scenarios:
- During live migration to save the CMMA values. Live migration needs
@ -3691,12 +3703,6 @@ mask is unused.
values points to the userspace buffer where the result will be stored.
This ioctl can fail with -ENOMEM if not enough memory can be allocated to
complete the task, with -ENXIO if CMMA is not enabled, with -EINVAL if
KVM_S390_CMMA_PEEK is not set but migration mode was not enabled, with
-EFAULT if the userspace address is invalid or if no page table is
present for the addresses (e.g. when using hugepages).
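For illustration, a hedged sketch of calling this ioctl from userspace; vm_fd is
assumed to be an open KVM VM file descriptor and buf must have room for count
bytes (one CMMA value per guest page):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    static int get_cmma(int vm_fd, __u64 start_gfn, __u32 count, __u8 *buf)
    {
            struct kvm_s390_cmma_log log = {
                    .start_gfn = start_gfn,
                    .count = count,
                    .flags = 0, /* or KVM_S390_CMMA_PEEK outside migration mode */
                    .values = (__u64)(unsigned long)buf,
            };

            if (ioctl(vm_fd, KVM_S390_GET_CMMA_BITS, &log) < 0)
                    return -1;
            /* On success, log.count holds the number of values written and
             * log.remaining the number of dirty values still pending. */
            return (int)log.count;
    }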
4.108 KVM_S390_SET_CMMA_BITS
Capability: KVM_CAP_S390_CMMA_MIGRATION


@ -254,6 +254,10 @@ Allows userspace to start migration mode, needed for PGSTE migration.
Setting this attribute when migration mode is already active will have
no effects.
Dirty tracking must be enabled on all memslots, else -EINVAL is returned. When
dirty tracking is disabled on any memslot, migration mode is automatically
stopped.
Parameters: none
Returns: -ENOMEM if there is not enough free memory to start migration mode
-EINVAL if the state of the VM is invalid (e.g. no memory defined)
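Migration mode is toggled from userspace through the VM attribute interface; a
minimal hedged sketch of starting it (vm_fd again being an open KVM VM file
descriptor):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    static int start_migration_mode(int vm_fd)
    {
            struct kvm_device_attr attr = {
                    .group = KVM_S390_VM_MIGRATION,
                    .attr = KVM_S390_VM_MIGRATION_START,
            };

            /* Per the text above, this fails with EINVAL unless dirty
             * tracking is enabled on all memslots. */
            return ioctl(vm_fd, KVM_SET_DEVICE_ATTR, &attr);
    }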


@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 4
SUBLEVEL = 233
SUBLEVEL = 242
EXTRAVERSION =
NAME = Kleptomaniac Octopus
@ -89,9 +89,16 @@ endif
# If the user is running make -s (silent mode), suppress echoing of
# commands
# make-4.0 (and later) keep single letter options in the 1st word of MAKEFLAGS.
ifneq ($(findstring s,$(filter-out --%,$(MAKEFLAGS))),)
quiet=silent_
ifeq ($(filter 3.%,$(MAKE_VERSION)),)
silence:=$(findstring s,$(firstword -$(MAKEFLAGS)))
else
silence:=$(findstring s,$(filter-out --%,$(MAKEFLAGS)))
endif
ifeq ($(silence),s)
quiet=silent_
endif
export quiet Q KBUILD_VERBOSE

File diff suppressed because it is too large.


@ -146,10 +146,8 @@ apply_relocate_add(Elf64_Shdr *sechdrs, const char *strtab,
base = (void *)sechdrs[sechdrs[relsec].sh_info].sh_addr;
symtab = (Elf64_Sym *)sechdrs[symindex].sh_addr;
/* The small sections were sorted to the end of the segment.
The following should definitely cover them. */
gp = (u64)me->core_layout.base + me->core_layout.size - 0x8000;
got = sechdrs[me->arch.gotsecindex].sh_addr;
gp = got + 0x8000;
for (i = 0; i < n; i++) {
unsigned long r_sym = ELF64_R_SYM (rela[i].r_info);


@ -235,7 +235,21 @@ do_entIF(unsigned long type, struct pt_regs *regs)
{
int signo, code;
if ((regs->ps & ~IPL_MAX) == 0) {
if (type == 3) { /* FEN fault */
/* Irritating users can call PAL_clrfen to disable the
FPU for the process. The kernel will then trap in
do_switch_stack and undo_switch_stack when we try
to save and restore the FP registers.
Given that GCC by default generates code that uses the
FP registers, PAL_clrfen is not useful except for DoS
attacks. So turn the bleeding FPU back on and be done
with it. */
current_thread_info()->pcb.flags |= 1;
__reload_thread(&current_thread_info()->pcb);
return;
}
if (!user_mode(regs)) {
if (type == 1) {
const unsigned int *data
= (const unsigned int *) regs->pc;
@ -368,20 +382,6 @@ do_entIF(unsigned long type, struct pt_regs *regs)
}
break;
case 3: /* FEN fault */
/* Irritating users can call PAL_clrfen to disable the
FPU for the process. The kernel will then trap in
do_switch_stack and undo_switch_stack when we try
to save and restore the FP registers.
Given that GCC by default generates code that uses the
FP registers, PAL_clrfen is not useful except for DoS
attacks. So turn the bleeding FPU back on and be done
with it. */
current_thread_info()->pcb.flags |= 1;
__reload_thread(&current_thread_info()->pcb);
return;
case 5: /* illoc */
default: /* unexpected instruction-fault type */
;


@ -48,8 +48,8 @@ void arch_dma_prep_coherent(struct page *page, size_t size)
* upper layer functions (in include/linux/dma-mapping.h)
*/
void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
switch (dir) {
case DMA_TO_DEVICE:
@ -69,8 +69,8 @@ void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
}
}
void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
switch (dir) {
case DMA_TO_DEVICE:


@ -239,7 +239,7 @@
i80-if-timings {
cs-setup = <0>;
wr-setup = <0>;
wr-act = <1>;
wr-active = <1>;
wr-hold = <0>;
};
};


@ -10,7 +10,7 @@
/ {
thermal-zones {
cpu_thermal: cpu-thermal {
thermal-sensors = <&tmu 0>;
thermal-sensors = <&tmu>;
polling-delay-passive = <0>;
polling-delay = <0>;
trips {


@ -605,7 +605,7 @@
status = "disabled";
hdmi_i2c_phy: hdmiphy@38 {
compatible = "exynos4210-hdmiphy";
compatible = "samsung,exynos4210-hdmiphy";
reg = <0x38>;
};
};


@ -116,7 +116,6 @@
};
&cpu0_thermal {
thermal-sensors = <&tmu_cpu0 0>;
polling-delay-passive = <0>;
polling-delay = <0>;


@ -539,7 +539,7 @@
};
mipi_phy: mipi-video-phy {
compatible = "samsung,s5pv210-mipi-video-phy";
compatible = "samsung,exynos5420-mipi-video-phy";
syscon = <&pmu_system_controller>;
#phy-cells = <1>;
};


@ -504,7 +504,7 @@
mux: mux-controller {
compatible = "mmio-mux";
#mux-control-cells = <0>;
#mux-control-cells = <1>;
mux-reg-masks = <0x14 0x00000010>;
};


@ -942,7 +942,7 @@
status = "disabled";
};
spdif: sound@ff88b0000 {
spdif: sound@ff8b0000 {
compatible = "rockchip,rk3288-spdif", "rockchip,rk3066-spdif";
reg = <0x0 0xff8b0000 0x0 0x10000>;
#sound-dai-cells = <0>;
@ -1188,6 +1188,7 @@
clock-names = "dp", "pclk";
phys = <&edp_phy>;
phy-names = "dp";
power-domains = <&power RK3288_PD_VIO>;
resets = <&cru SRST_EDP>;
reset-names = "dp";
rockchip,grf = <&grf>;


@ -242,7 +242,7 @@
irq-trigger = <0x1>;
stmpegpio: stmpe-gpio {
compatible = "stmpe,gpio";
compatible = "st,stmpe-gpio";
reg = <0>;
gpio-controller;
#gpio-cells = <2>;


@ -99,6 +99,7 @@ struct mmdc_pmu {
cpumask_t cpu;
struct hrtimer hrtimer;
unsigned int active_events;
int id;
struct device *dev;
struct perf_event *mmdc_events[MMDC_NUM_COUNTERS];
struct hlist_node node;
@ -433,8 +434,6 @@ static enum hrtimer_restart mmdc_pmu_timer_handler(struct hrtimer *hrtimer)
static int mmdc_pmu_init(struct mmdc_pmu *pmu_mmdc,
void __iomem *mmdc_base, struct device *dev)
{
int mmdc_num;
*pmu_mmdc = (struct mmdc_pmu) {
.pmu = (struct pmu) {
.task_ctx_nr = perf_invalid_context,
@ -452,15 +451,16 @@ static int mmdc_pmu_init(struct mmdc_pmu *pmu_mmdc,
.active_events = 0,
};
mmdc_num = ida_simple_get(&mmdc_ida, 0, 0, GFP_KERNEL);
pmu_mmdc->id = ida_simple_get(&mmdc_ida, 0, 0, GFP_KERNEL);
return mmdc_num;
return pmu_mmdc->id;
}
static int imx_mmdc_remove(struct platform_device *pdev)
{
struct mmdc_pmu *pmu_mmdc = platform_get_drvdata(pdev);
ida_simple_remove(&mmdc_ida, pmu_mmdc->id);
cpuhp_state_remove_instance_nocalls(cpuhp_mmdc_state, &pmu_mmdc->node);
perf_pmu_unregister(&pmu_mmdc->pmu);
iounmap(pmu_mmdc->mmdc_base);
@ -474,7 +474,6 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b
{
struct mmdc_pmu *pmu_mmdc;
char *name;
int mmdc_num;
int ret;
const struct of_device_id *of_id =
of_match_device(imx_mmdc_dt_ids, &pdev->dev);
@ -497,14 +496,14 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b
cpuhp_mmdc_state = ret;
}
mmdc_num = mmdc_pmu_init(pmu_mmdc, mmdc_base, &pdev->dev);
pmu_mmdc->mmdc_ipg_clk = mmdc_ipg_clk;
if (mmdc_num == 0)
name = "mmdc";
else
name = devm_kasprintf(&pdev->dev,
GFP_KERNEL, "mmdc%d", mmdc_num);
ret = mmdc_pmu_init(pmu_mmdc, mmdc_base, &pdev->dev);
if (ret < 0)
goto pmu_free;
name = devm_kasprintf(&pdev->dev,
GFP_KERNEL, "mmdc%d", ret);
pmu_mmdc->mmdc_ipg_clk = mmdc_ipg_clk;
pmu_mmdc->devtype_data = (struct fsl_mmdc_devtype_data *)of_id->data;
hrtimer_init(&pmu_mmdc->hrtimer, CLOCK_MONOTONIC,
@ -525,6 +524,7 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b
pmu_register_err:
pr_warn("MMDC Perf PMU failed (%d), disabled\n", ret);
ida_simple_remove(&mmdc_ida, pmu_mmdc->id);
cpuhp_state_remove_instance_nocalls(cpuhp_mmdc_state, &pmu_mmdc->node);
hrtimer_cancel(&pmu_mmdc->hrtimer);
pmu_free:
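The fix stores the ida-allocated instance id in struct mmdc_pmu so both the
error path and the remove path can release it. The underlying allocate/release
pattern, as a self-contained sketch against the 5.4-era ida API:

    #include <linux/idr.h>
    #include <linux/gfp.h>

    static DEFINE_IDA(inst_ida);

    struct inst {
            int id;
    };

    static int inst_init(struct inst *p)
    {
            /* Allocate a unique instance id; negative errno on failure. */
            p->id = ida_simple_get(&inst_ida, 0, 0, GFP_KERNEL);
            return p->id < 0 ? p->id : 0;
    }

    static void inst_destroy(struct inst *p)
    {
            /* Return the id so a later probe can reuse it. */
            ida_simple_remove(&inst_ida, p->id);
    }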


@ -165,7 +165,7 @@ static int __init omap1_dm_timer_init(void)
kfree(pdata);
err_free_pdev:
platform_device_unregister(pdev);
platform_device_put(pdev);
return ret;
}


@ -649,6 +649,7 @@ static void __init realtime_counter_init(void)
}
rate = clk_get_rate(sys_clk);
clk_put(sys_clk);
if (soc_is_dra7xx()) {
/*


@ -213,6 +213,7 @@ int __init zynq_early_slcr_init(void)
zynq_slcr_regmap = syscon_regmap_lookup_by_compatible("xlnx,zynq-slcr");
if (IS_ERR(zynq_slcr_regmap)) {
pr_err("%s: failed to find zynq-slcr\n", __func__);
of_node_put(np);
return -ENODEV;
}
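The added of_node_put() drops the reference taken when the node was looked up
earlier in the function, which this error path previously leaked. The general
pattern, as a sketch (setup() is a hypothetical stand-in):

    #include <linux/of.h>
    #include <linux/errno.h>

    extern int setup(struct device_node *np); /* hypothetical consumer */

    static int probe_like(void)
    {
            struct device_node *np;
            int ret;

            np = of_find_compatible_node(NULL, NULL, "xlnx,zynq-slcr");
            if (!np)
                    return -ENODEV;

            ret = setup(np);
            /* Success or failure, the lookup reference must be dropped. */
            of_node_put(np);
            return ret;
    }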


@ -2333,15 +2333,15 @@ void arch_teardown_dma_ops(struct device *dev)
}
#ifdef CONFIG_SWIOTLB
void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
__dma_page_cpu_to_dev(phys_to_page(paddr), paddr & (PAGE_SIZE - 1),
size, dir);
}
void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
__dma_page_dev_to_cpu(phys_to_page(paddr), paddr & (PAGE_SIZE - 1),
size, dir);


@ -70,20 +70,20 @@ static void dma_cache_maint(dma_addr_t handle, size_t size, u32 op)
* pfn_valid returns true the pages is local and we can use the native
* dma-direct functions, otherwise we call the Xen specific version.
*/
void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle,
phys_addr_t paddr, size_t size, enum dma_data_direction dir)
void xen_dma_sync_for_cpu(dma_addr_t handle, phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
if (pfn_valid(PFN_DOWN(handle)))
arch_sync_dma_for_cpu(dev, paddr, size, dir);
arch_sync_dma_for_cpu(paddr, size, dir);
else if (dir != DMA_TO_DEVICE)
dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
}
void xen_dma_sync_for_device(struct device *dev, dma_addr_t handle,
phys_addr_t paddr, size_t size, enum dma_data_direction dir)
void xen_dma_sync_for_device(dma_addr_t handle, phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
if (pfn_valid(PFN_DOWN(handle)))
arch_sync_dma_for_device(dev, paddr, size, dir);
arch_sync_dma_for_device(paddr, size, dir);
else if (dir == DMA_FROM_DEVICE)
dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
else


@ -150,7 +150,7 @@
scpi_clocks: clocks {
compatible = "arm,scpi-clocks";
scpi_dvfs: clock-controller {
scpi_dvfs: clocks-0 {
compatible = "arm,scpi-dvfs-clocks";
#clock-cells = <1>;
clock-indices = <0>;
@ -159,7 +159,7 @@
};
scpi_sensors: sensors {
compatible = "amlogic,meson-gxbb-scpi-sensors";
compatible = "amlogic,meson-gxbb-scpi-sensors", "arm,scpi-sensors";
#thermal-sensor-cells = <1>;
};
};


@ -1376,10 +1376,9 @@
dmc: bus@38000 {
compatible = "simple-bus";
reg = <0x0 0x38000 0x0 0x400>;
#address-cells = <2>;
#size-cells = <2>;
ranges = <0x0 0x0 0x0 0x38000 0x0 0x400>;
ranges = <0x0 0x0 0x0 0x38000 0x0 0x2000>;
canvas: video-lut@48 {
compatible = "amlogic,canvas";
@ -1783,7 +1782,7 @@
#address-cells = <1>;
#size-cells = <0>;
internal_ephy: ethernet_phy@8 {
internal_ephy: ethernet-phy@8 {
compatible = "ethernet-phy-id0180.3301",
"ethernet-phy-ieee802.3-c22";
interrupts = <GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>;


@ -54,26 +54,6 @@
compatible = "operating-points-v2";
opp-shared;
opp-100000000 {
opp-hz = /bits/ 64 <100000000>;
opp-microvolt = <731000>;
};
opp-250000000 {
opp-hz = /bits/ 64 <250000000>;
opp-microvolt = <731000>;
};
opp-500000000 {
opp-hz = /bits/ 64 <500000000>;
opp-microvolt = <731000>;
};
opp-667000000 {
opp-hz = /bits/ 64 <666666666>;
opp-microvolt = <731000>;
};
opp-1000000000 {
opp-hz = /bits/ 64 <1000000000>;
opp-microvolt = <731000>;


@ -172,7 +172,7 @@
reg = <0x14 0x10>;
};
eth_mac: eth_mac@34 {
eth_mac: eth-mac@34 {
reg = <0x34 0x10>;
};
@ -189,7 +189,7 @@
scpi_clocks: clocks {
compatible = "arm,scpi-clocks";
scpi_dvfs: scpi_clocks@0 {
scpi_dvfs: clocks-0 {
compatible = "arm,scpi-dvfs-clocks";
#clock-cells = <1>;
clock-indices = <0>;
@ -464,7 +464,7 @@
#size-cells = <2>;
ranges = <0x0 0x0 0x0 0xc8834000 0x0 0x2000>;
hwrng: rng {
hwrng: rng@0 {
compatible = "amlogic,meson-rng";
reg = <0x0 0x0 0x0 0x4>;
};


@ -18,7 +18,7 @@
leds {
compatible = "gpio-leds";
status {
led {
label = "n1:white:status";
gpios = <&gpio_ao GPIOAO_9 GPIO_ACTIVE_HIGH>;
default-state = "on";


@ -700,7 +700,7 @@
};
};
eth-phy-mux {
eth-phy-mux@55c {
compatible = "mdio-mux-mmioreg", "mdio-mux";
#address-cells = <1>;
#size-cells = <0>;


@ -428,6 +428,7 @@
pwm: pwm@11006000 {
compatible = "mediatek,mt7622-pwm";
reg = <0 0x11006000 0 0x1000>;
#pwm-cells = <2>;
interrupts = <GIC_SPI 77 IRQ_TYPE_LEVEL_LOW>;
clocks = <&topckgen CLK_TOP_PWM_SEL>,
<&pericfg CLK_PERI_PWM_PD>,


@ -533,7 +533,7 @@
clocks = <&gcc GCC_PCIE_0_PIPE_CLK>;
resets = <&gcc GCC_PCIEPHY_0_PHY_BCR>,
<&gcc 21>;
<&gcc GCC_PCIE_0_PIPE_ARES>;
reset-names = "phy", "pipe";
clock-output-names = "pcie_0_pipe_clk";
@ -991,12 +991,12 @@
<&gcc GCC_PCIE_0_SLV_AXI_CLK>;
clock-names = "iface", "aux", "master_bus", "slave_bus";
resets = <&gcc 18>,
<&gcc 17>,
<&gcc 15>,
<&gcc 19>,
resets = <&gcc GCC_PCIE_0_AXI_MASTER_ARES>,
<&gcc GCC_PCIE_0_AXI_SLAVE_ARES>,
<&gcc GCC_PCIE_0_AXI_MASTER_STICKY_ARES>,
<&gcc GCC_PCIE_0_CORE_STICKY_ARES>,
<&gcc GCC_PCIE_0_BCR>,
<&gcc 16>;
<&gcc GCC_PCIE_0_AHB_ARES>;
reset-names = "axi_m",
"axi_s",
"axi_m_sticky",


@ -90,7 +90,6 @@
linux,default-trigger = "heartbeat";
gpios = <&rk805 1 GPIO_ACTIVE_LOW>;
default-state = "on";
mode = <0x23>;
};
user {
@ -98,7 +97,6 @@
linux,default-trigger = "mmc1";
gpios = <&rk805 0 GPIO_ACTIVE_LOW>;
default-state = "off";
mode = <0x05>;
};
};
};


@ -13,14 +13,14 @@
#include <asm/cacheflush.h>
void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
__dma_map_area(phys_to_virt(paddr), size, dir);
}
void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
__dma_unmap_area(phys_to_virt(paddr), size, dir);
}
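This arm64 hunk, like several other architecture hunks in this diff (arm, c6x,
csky, hexagon, ia64, m68k, microblaze, mips, openrisc, parisc, sh, sparc), is
part of the tree-wide change that drops the unused struct device argument from
the arch DMA sync hooks. For reference, a sketch of the resulting prototypes,
assumed to match include/linux/dma-noncoherent.h in this kernel generation:

    #include <linux/types.h>
    #include <linux/dma-direction.h>

    void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
                                  enum dma_data_direction dir);
    void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
                               enum dma_data_direction dir);
    void arch_sync_dma_for_cpu_all(void);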


@ -140,7 +140,7 @@ void __init coherent_mem_init(phys_addr_t start, u32 size)
sizeof(long));
}
static void c6x_dma_sync(struct device *dev, phys_addr_t paddr, size_t size,
static void c6x_dma_sync(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
BUG_ON(!valid_dma_direction(dir));
@ -160,14 +160,14 @@ static void c6x_dma_sync(struct device *dev, phys_addr_t paddr, size_t size,
}
}
void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
return c6x_dma_sync(dev, paddr, size, dir);
return c6x_dma_sync(paddr, size, dir);
}
void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
return c6x_dma_sync(dev, paddr, size, dir);
return c6x_dma_sync(paddr, size, dir);
}


@ -58,8 +58,8 @@ void arch_dma_prep_coherent(struct page *page, size_t size)
cache_op(page_to_phys(page), size, dma_wbinv_set_zero_range);
}
void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
switch (dir) {
case DMA_TO_DEVICE:
@ -74,8 +74,8 @@ void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
}
}
void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
switch (dir) {
case DMA_TO_DEVICE:


@ -55,8 +55,8 @@ void arch_dma_free(struct device *dev, size_t size, void *vaddr,
gen_pool_free(coherent_pool, (unsigned long) vaddr, size);
}
void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
void *addr = phys_to_virt(paddr);


@ -73,8 +73,8 @@ __ia64_sync_icache_dcache (pte_t pte)
* DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
* flush them when they get mapped into an executable vm-area.
*/
void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
unsigned long pfn = PHYS_PFN(paddr);


@ -47,6 +47,8 @@ do_trace:
jbsr syscall_trace_enter
RESTORE_SWITCH_STACK
addql #4,%sp
addql #1,%d0
jeq ret_from_exception
movel %sp@(PT_OFF_ORIG_D0),%d1
movel #-ENOSYS,%d0
cmpl #NR_syscalls,%d1


@ -19,6 +19,7 @@ config HEARTBEAT
# We have a dedicated heartbeat LED. :-)
config PROC_HARDWARE
bool "/proc/hardware support"
depends on PROC_FS
help
Say Y here to support the /proc/hardware file, which gives you
access to information about the machine you're running on,


@ -92,6 +92,8 @@ ENTRY(system_call)
jbsr syscall_trace_enter
RESTORE_SWITCH_STACK
addql #4,%sp
addql #1,%d0
jeq ret_from_exception
movel %d3,%a0
jbsr %a0@
movel %d0,%sp@(PT_OFF_D0) /* save the return value */


@ -61,8 +61,8 @@ void arch_dma_free(struct device *dev, size_t size, void *vaddr,
#endif /* CONFIG_MMU && !CONFIG_COLDFIRE */
void arch_sync_dma_for_device(struct device *dev, phys_addr_t handle,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_device(phys_addr_t handle, size_t size,
enum dma_data_direction dir)
{
switch (dir) {
case DMA_BIDIRECTIONAL:


@ -160,9 +160,12 @@ do_trace_entry:
jbsr syscall_trace
RESTORE_SWITCH_STACK
addql #4,%sp
addql #1,%d0 | optimization for cmpil #-1,%d0
jeq ret_from_syscall
movel %sp@(PT_OFF_ORIG_D0),%d0
cmpl #NR_syscalls,%d0
jcs syscall
jra ret_from_syscall
badsys:
movel #-ENOSYS,%sp@(PT_OFF_D0)
jra ret_from_syscall


@ -30,6 +30,7 @@
#include <linux/init.h>
#include <linux/ptrace.h>
#include <linux/kallsyms.h>
#include <linux/extable.h>
#include <asm/setup.h>
#include <asm/fpu.h>
@ -550,7 +551,8 @@ static inline void bus_error030 (struct frame *fp)
errorcode |= 2;
if (mmusr & (MMU_I | MMU_WP)) {
if (ssw & 4) {
/* We might have an exception table for this PC */
if (ssw & 4 && !search_exception_tables(fp->ptregs.pc)) {
pr_err("Data %s fault at %#010lx in %s (pc=%#lx)\n",
ssw & RW ? "read" : "write",
fp->un.fmtb.daddr,


@ -15,7 +15,7 @@
#include <linux/bug.h>
#include <asm/cacheflush.h>
static void __dma_sync(struct device *dev, phys_addr_t paddr, size_t size,
static void __dma_sync(phys_addr_t paddr, size_t size,
enum dma_data_direction direction)
{
switch (direction) {
@ -31,14 +31,14 @@ static void __dma_sync(struct device *dev, phys_addr_t paddr, size_t size,
}
}
void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
__dma_sync(dev, paddr, size, dir);
__dma_sync(paddr, size, dir);
}
void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
__dma_sync(dev, paddr, size, dir);
__dma_sync(paddr, size, dir);
}


@ -64,7 +64,9 @@ phys_addr_t __dma_to_phys(struct device *dev, dma_addr_t dma_addr)
return dma_addr;
}
void arch_sync_dma_for_cpu_all(struct device *dev)
bool bmips_rac_flush_disable;
void arch_sync_dma_for_cpu_all(void)
{
void __iomem *cbr = BMIPS_GET_CBR();
u32 cfg;
@ -74,6 +76,9 @@ void arch_sync_dma_for_cpu_all(struct device *dev)
boot_cpu_type() != CPU_BMIPS4380)
return;
if (unlikely(bmips_rac_flush_disable))
return;
/* Flush stale data out of the readahead cache */
cfg = __raw_readl(cbr + BMIPS_RAC_CONFIG);
__raw_writel(cfg | 0x100, cbr + BMIPS_RAC_CONFIG);


@ -34,6 +34,8 @@
#define REG_BCM6328_OTP ((void __iomem *)CKSEG1ADDR(0x1000062c))
#define BCM6328_TP1_DISABLED BIT(9)
extern bool bmips_rac_flush_disable;
static const unsigned long kbase = VMLINUX_LOAD_ADDRESS & 0xfff00000;
struct bmips_quirk {
@ -103,6 +105,12 @@ static void bcm6358_quirks(void)
* disable SMP for now
*/
bmips_smp_enabled = 0;
/*
* RAC flush causes kernel panics on BCM6358 when booting from TP1
* because the bootloader is not initializing it properly.
*/
bmips_rac_flush_disable = !!(read_c0_brcm_cmt_local() & (1 << 31));
}
static void bcm6368_quirks(void)


@ -377,7 +377,7 @@ struct pci_msu {
PCI_CFG04_STAT_SSE | \
PCI_CFG04_STAT_PE)
#define KORINA_CNFG1 ((KORINA_STAT<<16)|KORINA_CMD)
#define KORINA_CNFG1 (KORINA_STAT | KORINA_CMD)
#define KORINA_REVID 0
#define KORINA_CLASS_CODE 0


@ -38,7 +38,7 @@ static inline bool mips_syscall_is_indirect(struct task_struct *task,
static inline long syscall_get_nr(struct task_struct *task,
struct pt_regs *regs)
{
return current_thread_info()->syscall;
return task_thread_info(task)->syscall;
}
static inline void mips_syscall_update_nr(struct task_struct *task,


@ -104,7 +104,6 @@ struct vpe_control {
struct list_head tc_list; /* Thread contexts */
};
extern unsigned long physical_memsize;
extern struct vpe_control vpecontrol;
extern const struct file_operations vpe_fops;


@ -592,7 +592,7 @@ static dma_addr_t jazz_dma_map_page(struct device *dev, struct page *page,
phys_addr_t phys = page_to_phys(page) + offset;
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
arch_sync_dma_for_device(dev, phys, size, dir);
arch_sync_dma_for_device(phys, size, dir);
return vdma_alloc(phys, size);
}
@ -600,7 +600,7 @@ static void jazz_dma_unmap_page(struct device *dev, dma_addr_t dma_addr,
size_t size, enum dma_data_direction dir, unsigned long attrs)
{
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
arch_sync_dma_for_cpu(dev, vdma_log2phys(dma_addr), size, dir);
arch_sync_dma_for_cpu(vdma_log2phys(dma_addr), size, dir);
vdma_free(dma_addr);
}
@ -612,7 +612,7 @@ static int jazz_dma_map_sg(struct device *dev, struct scatterlist *sglist,
for_each_sg(sglist, sg, nents, i) {
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
arch_sync_dma_for_device(dev, sg_phys(sg), sg->length,
arch_sync_dma_for_device(sg_phys(sg), sg->length,
dir);
sg->dma_address = vdma_alloc(sg_phys(sg), sg->length);
if (sg->dma_address == DMA_MAPPING_ERROR)
@ -631,8 +631,7 @@ static void jazz_dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
for_each_sg(sglist, sg, nents, i) {
if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
arch_sync_dma_for_cpu(dev, sg_phys(sg), sg->length,
dir);
arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
vdma_free(sg->dma_address);
}
}
@ -640,13 +639,13 @@ static void jazz_dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
static void jazz_dma_sync_single_for_device(struct device *dev,
dma_addr_t addr, size_t size, enum dma_data_direction dir)
{
arch_sync_dma_for_device(dev, vdma_log2phys(addr), size, dir);
arch_sync_dma_for_device(vdma_log2phys(addr), size, dir);
}
static void jazz_dma_sync_single_for_cpu(struct device *dev,
dma_addr_t addr, size_t size, enum dma_data_direction dir)
{
arch_sync_dma_for_cpu(dev, vdma_log2phys(addr), size, dir);
arch_sync_dma_for_cpu(vdma_log2phys(addr), size, dir);
}
static void jazz_dma_sync_sg_for_device(struct device *dev,
@ -656,7 +655,7 @@ static void jazz_dma_sync_sg_for_device(struct device *dev,
int i;
for_each_sg(sgl, sg, nents, i)
arch_sync_dma_for_device(dev, sg_phys(sg), sg->length, dir);
arch_sync_dma_for_device(sg_phys(sg), sg->length, dir);
}
static void jazz_dma_sync_sg_for_cpu(struct device *dev,
@ -666,7 +665,7 @@ static void jazz_dma_sync_sg_for_cpu(struct device *dev,
int i;
for_each_sg(sgl, sg, nents, i)
arch_sync_dma_for_cpu(dev, sg_phys(sg), sg->length, dir);
arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
}
const struct dma_map_ops jazz_dma_ops = {


@ -423,9 +423,11 @@ static void cps_shutdown_this_cpu(enum cpu_death death)
wmb();
}
} else {
pr_debug("Gating power to core %d\n", core);
/* Power down the core */
cps_pm_enter_state(CPS_PM_POWER_GATED);
if (IS_ENABLED(CONFIG_HOTPLUG_CPU)) {
pr_debug("Gating power to core %d\n", core);
/* Power down the core */
cps_pm_enter_state(CPS_PM_POWER_GATED);
}
}
}


@ -10,6 +10,8 @@
*/
#define BSS_FIRST_SECTIONS *(.bss..swapper_pg_dir)
#define RUNTIME_DISCARD_EXIT
#include <asm-generic/vmlinux.lds.h>
#undef mips


@ -92,12 +92,11 @@ int vpe_run(struct vpe *v)
write_tc_c0_tchalt(read_tc_c0_tchalt() & ~TCHALT_H);
/*
* The sde-kit passes 'memsize' to __start in $a3, so set something
* here... Or set $a3 to zero and define DFLT_STACK_SIZE and
* DFLT_HEAP_SIZE when you compile your program
* We don't pass the memsize here, so VPE programs need to be
* compiled with DFLT_STACK_SIZE and DFLT_HEAP_SIZE defined.
*/
mttgpr(7, 0);
mttgpr(6, v->ntcs);
mttgpr(7, physical_memsize);
/* set up VPE1 */
/*


@ -22,12 +22,6 @@
DEFINE_SPINLOCK(ebu_lock);
EXPORT_SYMBOL_GPL(ebu_lock);
/*
* This is needed by the VPE loader code, just set it to 0 and assume
* that the firmware hardcodes this value to something useful.
*/
unsigned long physical_memsize = 0L;
/*
* this struct is filled by the soc specific detection code and holds
* information about the specific soc type, revision and name


@ -39,7 +39,7 @@ static void pvc_display(unsigned long data)
pvc_write_string(pvc_lines[i], 0, i);
}
static DECLARE_TASKLET(pvc_display_tasklet, &pvc_display, 0);
static DECLARE_TASKLET_OLD(pvc_display_tasklet, &pvc_display);
static int pvc_line_proc_show(struct seq_file *m, void *v)
{


@ -27,7 +27,7 @@
* R10000 and R12000 are used in such systems, the SGI IP28 Indigo² rsp.
* SGI IP32 aka O2.
*/
static inline bool cpu_needs_post_dma_flush(struct device *dev)
static inline bool cpu_needs_post_dma_flush(void)
{
switch (boot_cpu_type()) {
case CPU_R10000:
@ -118,17 +118,17 @@ static inline void dma_sync_phys(phys_addr_t paddr, size_t size,
} while (left);
}
void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
dma_sync_phys(paddr, size, dir);
}
#ifdef CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU
void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
if (cpu_needs_post_dma_flush(dev))
if (cpu_needs_post_dma_flush())
dma_sync_phys(paddr, size, dir);
}
#endif


@ -46,8 +46,8 @@ static inline void cache_op(phys_addr_t paddr, size_t size,
} while (left);
}
void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
switch (dir) {
case DMA_FROM_DEVICE:
@ -61,8 +61,8 @@ void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
}
}
void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
switch (dir) {
case DMA_TO_DEVICE:


@ -18,8 +18,8 @@
#include <linux/cache.h>
#include <asm/cacheflush.h>
void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
void *vaddr = phys_to_virt(paddr);
@ -42,8 +42,8 @@ void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
}
}
void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
void *vaddr = phys_to_virt(paddr);


@ -125,7 +125,7 @@ arch_dma_free(struct device *dev, size_t size, void *vaddr,
free_pages_exact(vaddr, size);
}
void arch_sync_dma_for_device(struct device *dev, phys_addr_t addr, size_t size,
void arch_sync_dma_for_device(phys_addr_t addr, size_t size,
enum dma_data_direction dir)
{
unsigned long cl;


@ -439,14 +439,14 @@ void arch_dma_free(struct device *dev, size_t size, void *vaddr,
free_pages((unsigned long)__va(dma_handle), order);
}
void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
flush_kernel_dcache_range((unsigned long)phys_to_virt(paddr), size);
}
void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
flush_kernel_dcache_range((unsigned long)phys_to_virt(paddr), size);
}


@ -93,7 +93,7 @@ aflags-$(CONFIG_CPU_LITTLE_ENDIAN) += -mlittle-endian
ifeq ($(HAS_BIARCH),y)
KBUILD_CFLAGS += -m$(BITS)
KBUILD_AFLAGS += -m$(BITS) -Wl,-a$(BITS)
KBUILD_AFLAGS += -m$(BITS)
KBUILD_LDFLAGS += -m elf$(BITS)$(LDEMULATION)
endif


@ -1072,45 +1072,46 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
}
pr_info("EEH: Recovery successful.\n");
} else {
/*
* About 90% of all real-life EEH failures in the field
* are due to poorly seated PCI cards. Only 10% or so are
* due to actual, failed cards.
*/
pr_err("EEH: Unable to recover from failure from PHB#%x-PE#%x.\n"
"Please try reseating or replacing it\n",
pe->phb->global_number, pe->addr);
goto out;
}
eeh_slot_error_detail(pe, EEH_LOG_PERM);
/*
* About 90% of all real-life EEH failures in the field
* are due to poorly seated PCI cards. Only 10% or so are
* due to actual, failed cards.
*/
pr_err("EEH: Unable to recover from failure from PHB#%x-PE#%x.\n"
"Please try reseating or replacing it\n",
pe->phb->global_number, pe->addr);
/* Notify all devices that they're about to go down. */
eeh_set_channel_state(pe, pci_channel_io_perm_failure);
eeh_set_irq_state(pe, false);
eeh_pe_report("error_detected(permanent failure)", pe,
eeh_report_failure, NULL);
eeh_slot_error_detail(pe, EEH_LOG_PERM);
/* Mark the PE to be removed permanently */
eeh_pe_state_mark(pe, EEH_PE_REMOVED);
/* Notify all devices that they're about to go down. */
eeh_set_irq_state(pe, false);
eeh_pe_report("error_detected(permanent failure)", pe,
eeh_report_failure, NULL);
eeh_set_channel_state(pe, pci_channel_io_perm_failure);
/*
* Shut down the device drivers for good. We mark
* all removed devices correctly to avoid accessing
* their PCI config any more.
*/
if (pe->type & EEH_PE_VF) {
eeh_pe_dev_traverse(pe, eeh_rmv_device, NULL);
eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);
} else {
eeh_pe_state_clear(pe, EEH_PE_PRI_BUS, true);
eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);
/* Mark the PE to be removed permanently */
eeh_pe_state_mark(pe, EEH_PE_REMOVED);
pci_lock_rescan_remove();
pci_hp_remove_devices(bus);
pci_unlock_rescan_remove();
/* The passed PE should no longer be used */
return;
}
/*
* Shut down the device drivers for good. We mark
* all removed devices correctly to avoid accessing
* their PCI config any more.
*/
if (pe->type & EEH_PE_VF) {
eeh_pe_dev_traverse(pe, eeh_rmv_device, NULL);
eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);
} else {
eeh_pe_state_clear(pe, EEH_PE_PRI_BUS, true);
eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);
pci_lock_rescan_remove();
pci_hp_remove_devices(bus);
pci_unlock_rescan_remove();
/* The passed PE should no longer be used */
return;
}
out:
@ -1206,10 +1207,10 @@ void eeh_handle_special_event(void)
/* Notify all devices to be down */
eeh_pe_state_clear(pe, EEH_PE_PRI_BUS, true);
eeh_set_channel_state(pe, pci_channel_io_perm_failure);
eeh_pe_report(
"error_detected(permanent failure)", pe,
eeh_report_failure, NULL);
eeh_set_channel_state(pe, pci_channel_io_perm_failure);
pci_lock_rescan_remove();
list_for_each_entry(hose, &hose_list, list_node) {


@ -51,10 +51,10 @@ struct rtas_t rtas = {
EXPORT_SYMBOL(rtas);
DEFINE_SPINLOCK(rtas_data_buf_lock);
EXPORT_SYMBOL(rtas_data_buf_lock);
EXPORT_SYMBOL_GPL(rtas_data_buf_lock);
char rtas_data_buf[RTAS_DATA_BUF_SIZE] __cacheline_aligned;
EXPORT_SYMBOL(rtas_data_buf);
char rtas_data_buf[RTAS_DATA_BUF_SIZE] __aligned(SZ_4K);
EXPORT_SYMBOL_GPL(rtas_data_buf);
unsigned long rtas_rmo_buf;
@ -63,7 +63,7 @@ unsigned long rtas_rmo_buf;
* This is done like this so rtas_flash can be a module.
*/
void (*rtas_flash_term_hook)(int);
EXPORT_SYMBOL(rtas_flash_term_hook);
EXPORT_SYMBOL_GPL(rtas_flash_term_hook);
/* RTAS use home made raw locking instead of spin_lock_irqsave
* because those can be called from within really nasty contexts
@ -311,7 +311,7 @@ void rtas_progress(char *s, unsigned short hex)
spin_unlock(&progress_lock);
}
EXPORT_SYMBOL(rtas_progress); /* needed by rtas_flash module */
EXPORT_SYMBOL_GPL(rtas_progress); /* needed by rtas_flash module */
int rtas_token(const char *service)
{
@ -321,7 +321,7 @@ int rtas_token(const char *service)
tokp = of_get_property(rtas.dev, service, NULL);
return tokp ? be32_to_cpu(*tokp) : RTAS_UNKNOWN_SERVICE;
}
EXPORT_SYMBOL(rtas_token);
EXPORT_SYMBOL_GPL(rtas_token);
int rtas_service_present(const char *service)
{
@ -481,7 +481,7 @@ int rtas_call(int token, int nargs, int nret, int *outputs, ...)
}
return ret;
}
EXPORT_SYMBOL(rtas_call);
EXPORT_SYMBOL_GPL(rtas_call);
/* For RTAS_BUSY (-2), delay for 1 millisecond. For an extended busy status
* code of 990n, perform the hinted delay of 10^n (last digit) milliseconds.
@ -516,7 +516,7 @@ unsigned int rtas_busy_delay(int status)
return ms;
}
EXPORT_SYMBOL(rtas_busy_delay);
EXPORT_SYMBOL_GPL(rtas_busy_delay);
static int rtas_error_rc(int rtas_rc)
{
@ -562,7 +562,7 @@ int rtas_get_power_level(int powerdomain, int *level)
return rtas_error_rc(rc);
return rc;
}
EXPORT_SYMBOL(rtas_get_power_level);
EXPORT_SYMBOL_GPL(rtas_get_power_level);
int rtas_set_power_level(int powerdomain, int level, int *setlevel)
{
@ -580,7 +580,7 @@ int rtas_set_power_level(int powerdomain, int level, int *setlevel)
return rtas_error_rc(rc);
return rc;
}
EXPORT_SYMBOL(rtas_set_power_level);
EXPORT_SYMBOL_GPL(rtas_set_power_level);
int rtas_get_sensor(int sensor, int index, int *state)
{
@ -598,7 +598,7 @@ int rtas_get_sensor(int sensor, int index, int *state)
return rtas_error_rc(rc);
return rc;
}
EXPORT_SYMBOL(rtas_get_sensor);
EXPORT_SYMBOL_GPL(rtas_get_sensor);
int rtas_get_sensor_fast(int sensor, int index, int *state)
{
@ -659,7 +659,7 @@ int rtas_set_indicator(int indicator, int index, int new_value)
return rtas_error_rc(rc);
return rc;
}
EXPORT_SYMBOL(rtas_set_indicator);
EXPORT_SYMBOL_GPL(rtas_set_indicator);
/*
* Ignoring RTAS extended delay


@ -6,6 +6,7 @@
#endif
#define BSS_FIRST_SECTIONS *(.bss.prominit)
#define RUNTIME_DISCARD_EXIT
#include <asm/page.h>
#include <asm-generic/vmlinux.lds.h>
@ -394,9 +395,12 @@ SECTIONS
DISCARDS
/DISCARD/ : {
*(*.EMB.apuinfo)
*(.glink .iplt .plt .rela* .comment)
*(.glink .iplt .plt .comment)
*(.gnu.version*)
*(.gnu.attributes)
*(.eh_frame)
#ifndef CONFIG_RELOCATABLE
*(.rela*)
#endif
}
}


@ -104,14 +104,14 @@ static void __dma_sync_page(phys_addr_t paddr, size_t size, int dir)
#endif
}
void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
__dma_sync_page(paddr, size, dir);
}
void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
__dma_sync_page(paddr, size, dir);
}


@ -3008,7 +3008,8 @@ static void pnv_ioda_setup_pe_res(struct pnv_ioda_pe *pe,
int index;
int64_t rc;
if (!res || !res->flags || res->start > res->end)
if (!res || !res->flags || res->start > res->end ||
res->flags & IORESOURCE_UNSET)
return;
if (res->flags & IORESOURCE_IO) {


@ -1416,22 +1416,22 @@ static inline void __init check_lp_set_hblkrm(unsigned int lp,
void __init pseries_lpar_read_hblkrm_characteristics(void)
{
const s32 token = rtas_token("ibm,get-system-parameter");
unsigned char local_buffer[SPLPAR_TLB_BIC_MAXLENGTH];
int call_status, len, idx, bpsize;
if (!firmware_has_feature(FW_FEATURE_BLOCK_REMOVE))
return;
spin_lock(&rtas_data_buf_lock);
memset(rtas_data_buf, 0, RTAS_DATA_BUF_SIZE);
call_status = rtas_call(rtas_token("ibm,get-system-parameter"), 3, 1,
NULL,
SPLPAR_TLB_BIC_TOKEN,
__pa(rtas_data_buf),
RTAS_DATA_BUF_SIZE);
memcpy(local_buffer, rtas_data_buf, SPLPAR_TLB_BIC_MAXLENGTH);
local_buffer[SPLPAR_TLB_BIC_MAXLENGTH - 1] = '\0';
spin_unlock(&rtas_data_buf_lock);
do {
spin_lock(&rtas_data_buf_lock);
memset(rtas_data_buf, 0, RTAS_DATA_BUF_SIZE);
call_status = rtas_call(token, 3, 1, NULL, SPLPAR_TLB_BIC_TOKEN,
__pa(rtas_data_buf), RTAS_DATA_BUF_SIZE);
memcpy(local_buffer, rtas_data_buf, SPLPAR_TLB_BIC_MAXLENGTH);
local_buffer[SPLPAR_TLB_BIC_MAXLENGTH - 1] = '\0';
spin_unlock(&rtas_data_buf_lock);
} while (rtas_busy_delay(call_status));
if (call_status != 0) {
pr_warn("%s %s Error calling get-system-parameter (0x%x)\n",


@ -289,6 +289,7 @@ static void parse_mpp_x_data(struct seq_file *m)
*/
static void parse_system_parameter_string(struct seq_file *m)
{
const s32 token = rtas_token("ibm,get-system-parameter");
int call_status;
unsigned char *local_buffer = kmalloc(SPLPAR_MAXLENGTH, GFP_KERNEL);
@ -298,16 +299,15 @@ static void parse_system_parameter_string(struct seq_file *m)
return;
}
spin_lock(&rtas_data_buf_lock);
memset(rtas_data_buf, 0, SPLPAR_MAXLENGTH);
call_status = rtas_call(rtas_token("ibm,get-system-parameter"), 3, 1,
NULL,
SPLPAR_CHARACTERISTICS_TOKEN,
__pa(rtas_data_buf),
RTAS_DATA_BUF_SIZE);
memcpy(local_buffer, rtas_data_buf, SPLPAR_MAXLENGTH);
local_buffer[SPLPAR_MAXLENGTH - 1] = '\0';
spin_unlock(&rtas_data_buf_lock);
do {
spin_lock(&rtas_data_buf_lock);
memset(rtas_data_buf, 0, SPLPAR_MAXLENGTH);
call_status = rtas_call(token, 3, 1, NULL, SPLPAR_CHARACTERISTICS_TOKEN,
__pa(rtas_data_buf), RTAS_DATA_BUF_SIZE);
memcpy(local_buffer, rtas_data_buf, SPLPAR_MAXLENGTH);
local_buffer[SPLPAR_MAXLENGTH - 1] = '\0';
spin_unlock(&rtas_data_buf_lock);
} while (rtas_busy_delay(call_status));
if (call_status != 0) {
printk(KERN_INFO


@ -0,0 +1,8 @@
/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
#ifndef _UAPI_ASM_RISCV_SETUP_H
#define _UAPI_ASM_RISCV_SETUP_H
#define COMMAND_LINE_SIZE 1024
#endif /* _UAPI_ASM_RISCV_SETUP_H */


@ -89,7 +89,7 @@ void notrace walk_stackframe(struct task_struct *task,
while (!kstack_end(ksp)) {
if (__kernel_text_address(pc) && unlikely(fn(pc, arg)))
break;
pc = (*ksp++) - 0x4;
pc = READ_ONCE_NOCHECK(*ksp++) - 0x4;
}
}


@ -5,6 +5,7 @@
*/
#include <linux/of_clk.h>
#include <linux/clockchips.h>
#include <linux/clocksource.h>
#include <linux/delay.h>
#include <asm/sbi.h>
@ -28,4 +29,6 @@ void __init time_init(void)
of_clk_init(NULL);
timer_probe();
tick_setup_hrtimer_broadcast();
}


@ -57,11 +57,19 @@ static unsigned long find_bootdata_space(struct ipl_rb_components *comps,
if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && INITRD_START && INITRD_SIZE &&
intersects(INITRD_START, INITRD_SIZE, safe_addr, size))
safe_addr = INITRD_START + INITRD_SIZE;
if (intersects(safe_addr, size, (unsigned long)comps, comps->len)) {
safe_addr = (unsigned long)comps + comps->len;
goto repeat;
}
for_each_rb_entry(comp, comps)
if (intersects(safe_addr, size, comp->addr, comp->len)) {
safe_addr = comp->addr + comp->len;
goto repeat;
}
if (intersects(safe_addr, size, (unsigned long)certs, certs->len)) {
safe_addr = (unsigned long)certs + certs->len;
goto repeat;
}
for_each_rb_entry(cert, certs)
if (intersects(safe_addr, size, cert->addr, cert->len)) {
safe_addr = cert->addr + cert->len;


@ -255,6 +255,7 @@ static void pop_kprobe(struct kprobe_ctlblk *kcb)
{
__this_cpu_write(current_kprobe, kcb->prev_kprobe.kp);
kcb->kprobe_status = kcb->prev_kprobe.status;
kcb->prev_kprobe.kp = NULL;
}
NOKPROBE_SYMBOL(pop_kprobe);
@ -509,12 +510,11 @@ static int post_kprobe_handler(struct pt_regs *regs)
if (!p)
return 0;
resume_execution(p, regs);
if (kcb->kprobe_status != KPROBE_REENTER && p->post_handler) {
kcb->kprobe_status = KPROBE_HIT_SSDONE;
p->post_handler(p, regs, 0);
}
resume_execution(p, regs);
pop_kprobe(kcb);
preempt_enable_no_resched();


@ -502,9 +502,7 @@ long arch_ptrace(struct task_struct *child, long request,
}
return 0;
case PTRACE_GET_LAST_BREAK:
put_user(child->thread.last_break,
(unsigned long __user *) data);
return 0;
return put_user(child->thread.last_break, (unsigned long __user *)data);
case PTRACE_ENABLE_TE:
if (!MACHINE_HAS_TE)
return -EIO;
@ -856,9 +854,7 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
}
return 0;
case PTRACE_GET_LAST_BREAK:
put_user(child->thread.last_break,
(unsigned int __user *) data);
return 0;
return put_user(child->thread.last_break, (unsigned int __user *)data);
}
return compat_ptrace_request(child, request, addr, data);
}

View File

@ -15,6 +15,8 @@
/* Handle ro_after_init data on our own. */
#define RO_AFTER_INIT_DATA
#define RUNTIME_DISCARD_EXIT
#include <asm-generic/vmlinux.lds.h>
#include <asm/vmlinux.lds.h>
@ -189,5 +191,6 @@ SECTIONS
DISCARDS
/DISCARD/ : {
*(.eh_frame)
*(.interp)
}
}


@ -4527,6 +4527,22 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
if (mem->guest_phys_addr + mem->memory_size > kvm->arch.mem_limit)
return -EINVAL;
if (!kvm->arch.migration_mode)
return 0;
/*
* Turn off migration mode when:
* - userspace creates a new memslot with dirty logging off,
* - userspace modifies an existing memslot (MOVE or FLAGS_ONLY) and
* dirty logging is turned off.
* Migration mode expects dirty page logging being enabled to store
* its dirty bitmap.
*/
if (change != KVM_MR_DELETE &&
!(mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
WARN(kvm_s390_vm_stop_migration(kvm),
"Failed to stop migration mode");
return 0;
}


@ -339,7 +339,7 @@ static inline unsigned long clear_user_mvcos(void __user *to, unsigned long size
"4: slgr %0,%0\n"
"5:\n"
EX_TABLE(0b,2b) EX_TABLE(3b,5b)
: "+a" (size), "+a" (to), "+a" (tmp1), "=a" (tmp2)
: "+&a" (size), "+&a" (to), "+a" (tmp1), "=&a" (tmp2)
: "a" (empty_zero_page), "d" (reg0) : "cc", "memory");
return size;
}


@ -51,6 +51,7 @@
#define SR_FD 0x00008000
#define SR_MD 0x40000000
#define SR_USER_MASK 0x00000303 // M, Q, S, T bits
/*
* DSP structure and data
*/


@ -25,7 +25,7 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
* Pages from the page allocator may have data present in
* cache. So flush the cache before using uncached memory.
*/
arch_sync_dma_for_device(dev, virt_to_phys(ret), size,
arch_sync_dma_for_device(virt_to_phys(ret), size,
DMA_BIDIRECTIONAL);
ret_nocache = (void __force *)ioremap_nocache(virt_to_phys(ret), size);
@ -59,8 +59,8 @@ void arch_dma_free(struct device *dev, size_t size, void *vaddr,
iounmap(vaddr);
}
void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
void *addr = sh_cacheop_vaddr(phys_to_virt(paddr));


@ -116,6 +116,7 @@ static int
restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, int *r0_p)
{
unsigned int err = 0;
unsigned int sr = regs->sr & ~SR_USER_MASK;
#define COPY(x) err |= __get_user(regs->x, &sc->sc_##x)
COPY(regs[1]);
@ -131,6 +132,8 @@ restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, int *r0_p
COPY(sr); COPY(pc);
#undef COPY
regs->sr = (regs->sr & SR_USER_MASK) | sr;
#ifdef CONFIG_SH_FPU
if (boot_cpu_data.flags & CPU_HAS_FPU) {
int owned_fp;


@ -10,6 +10,7 @@ OUTPUT_ARCH(sh:sh5)
#define LOAD_OFFSET 0
OUTPUT_ARCH(sh)
#endif
#define RUNTIME_DISCARD_EXIT
#include <asm/thread_info.h>
#include <asm/cache.h>


@ -321,7 +321,7 @@ config FORCE_MAX_ZONEORDER
This config option is actually maximum order plus one. For example,
a value of 13 means that the largest free memory block is 2^12 pages.
if SPARC64
if SPARC64 || COMPILE_TEST
source "kernel/power/Kconfig"
endif


@ -368,8 +368,8 @@ void arch_dma_free(struct device *dev, size_t size, void *cpu_addr,
/* IIep is write-through, not flushing on cpu to device transfer. */
void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
if (dir != PCI_DMA_TODEVICE)
dma_make_coherent(paddr, PAGE_ALIGN(size));


@ -746,6 +746,7 @@ static int vector_config(char *str, char **error_out)
if (parsed == NULL) {
*error_out = "vector_config failed to parse parameters";
kfree(params);
return -EINVAL;
}


@ -1,4 +1,4 @@
#define RUNTIME_DISCARD_EXIT
KERNEL_STACK_SIZE = 4096 * (1 << CONFIG_KERNEL_STACK_ORDER);
#ifdef CONFIG_LD_SCRIPT_STATIC


@ -19,6 +19,7 @@
#include <crypto/internal/simd.h>
#include <asm/cpu_device_id.h>
#include <asm/simd.h>
#include <asm/unaligned.h>
#define GHASH_BLOCK_SIZE 16
#define GHASH_DIGEST_SIZE 16
@ -54,7 +55,6 @@ static int ghash_setkey(struct crypto_shash *tfm,
const u8 *key, unsigned int keylen)
{
struct ghash_ctx *ctx = crypto_shash_ctx(tfm);
be128 *x = (be128 *)key;
u64 a, b;
if (keylen != GHASH_BLOCK_SIZE) {
@ -63,8 +63,8 @@ static int ghash_setkey(struct crypto_shash *tfm,
}
/* perform multiplication by 'x' in GF(2^128) */
a = be64_to_cpu(x->a);
b = be64_to_cpu(x->b);
a = get_unaligned_be64(key);
b = get_unaligned_be64(key + 8);
ctx->shash.a = (b << 1) | (a >> 63);
ctx->shash.b = (a << 1) | (b >> 63);
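The bug being fixed is an unaligned dereference: casting the caller's key
buffer to be128 assumes 8-byte alignment the crypto API does not guarantee.
A minimal sketch of the safe pattern the fix switches to:

    #include <asm/unaligned.h>
    #include <linux/types.h>

    static u64 load_be64_any_alignment(const u8 *p)
    {
            /* Unsafe when p is not 8-byte aligned:
             *     return be64_to_cpu(*(const __be64 *)p);
             * Safe for any alignment: */
            return get_unaligned_be64(p);
    }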


@ -131,7 +131,7 @@ static inline unsigned int x86_cpuid_family(void)
int __init microcode_init(void);
extern void __init load_ucode_bsp(void);
extern void load_ucode_ap(void);
void reload_early_microcode(void);
void reload_early_microcode(unsigned int cpu);
extern bool get_builtin_firmware(struct cpio_data *cd, const char *name);
extern bool initrd_gone;
void microcode_bsp_resume(void);
@ -139,7 +139,7 @@ void microcode_bsp_resume(void);
static inline int __init microcode_init(void) { return 0; };
static inline void __init load_ucode_bsp(void) { }
static inline void load_ucode_ap(void) { }
static inline void reload_early_microcode(void) { }
static inline void reload_early_microcode(unsigned int cpu) { }
static inline void microcode_bsp_resume(void) { }
static inline bool
get_builtin_firmware(struct cpio_data *cd, const char *name) { return false; }


@ -47,12 +47,12 @@ struct microcode_amd {
extern void __init load_ucode_amd_bsp(unsigned int family);
extern void load_ucode_amd_ap(unsigned int family);
extern int __init save_microcode_in_initrd_amd(unsigned int family);
void reload_ucode_amd(void);
void reload_ucode_amd(unsigned int cpu);
#else
static inline void __init load_ucode_amd_bsp(unsigned int family) {}
static inline void load_ucode_amd_ap(unsigned int family) {}
static inline int __init
save_microcode_in_initrd_amd(unsigned int family) { return -EINVAL; }
static inline void reload_ucode_amd(void) {}
static inline void reload_ucode_amd(unsigned int cpu) {}
#endif
#endif /* _ASM_X86_MICROCODE_AMD_H */

View File

@@ -50,6 +50,10 @@
#define SPEC_CTRL_RRSBA_DIS_S_SHIFT 6 /* Disable RRSBA behavior */
#define SPEC_CTRL_RRSBA_DIS_S BIT(SPEC_CTRL_RRSBA_DIS_S_SHIFT)
/* A mask for bits which the kernel toggles when controlling mitigations */
#define SPEC_CTRL_MITIGATIONS_MASK (SPEC_CTRL_IBRS | SPEC_CTRL_STIBP | SPEC_CTRL_SSBD \
| SPEC_CTRL_RRSBA_DIS_S)
#define MSR_IA32_PRED_CMD 0x00000049 /* Prediction Command */
#define PRED_CMD_IBPB BIT(0) /* Indirect Branch Prediction Barrier */

View File

@@ -25,6 +25,8 @@ void __noreturn machine_real_restart(unsigned int type);
#define MRR_BIOS 0
#define MRR_APM 1
void cpu_emergency_disable_virtualization(void);
typedef void (*nmi_shootdown_cb)(int, struct pt_regs*);
void nmi_panic_self_stop(struct pt_regs *regs);
void nmi_shootdown_cpus(nmi_shootdown_cb callback);

View File

@@ -51,24 +51,27 @@ DECLARE_STATIC_KEY_FALSE(rdt_mon_enable_key);
* simple as possible.
* Must be called with preemption disabled.
*/
static void __resctrl_sched_in(void)
static inline void __resctrl_sched_in(struct task_struct *tsk)
{
struct resctrl_pqr_state *state = this_cpu_ptr(&pqr_state);
u32 closid = state->default_closid;
u32 rmid = state->default_rmid;
u32 tmp;
/*
* If this task has a closid/rmid assigned, use it.
* Else use the closid/rmid assigned to this cpu.
*/
if (static_branch_likely(&rdt_alloc_enable_key)) {
if (current->closid)
closid = current->closid;
tmp = READ_ONCE(tsk->closid);
if (tmp)
closid = tmp;
}
if (static_branch_likely(&rdt_mon_enable_key)) {
if (current->rmid)
rmid = current->rmid;
tmp = READ_ONCE(tsk->rmid);
if (tmp)
rmid = tmp;
}
if (closid != state->cur_closid || rmid != state->cur_rmid) {
@@ -78,15 +81,15 @@ static void __resctrl_sched_in(void)
}
}
static inline void resctrl_sched_in(void)
static inline void resctrl_sched_in(struct task_struct *tsk)
{
if (static_branch_likely(&rdt_enable_key))
__resctrl_sched_in();
__resctrl_sched_in(tsk);
}
#else
static inline void resctrl_sched_in(void) {}
static inline void resctrl_sched_in(struct task_struct *tsk) {}
#endif /* CONFIG_X86_CPU_RESCTRL */
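
The resctrl hunk reads tsk->closid and tsk->rmid through READ_ONCE() into a local, so each field is sampled exactly once; a plain read could be reloaded by the compiler between the test and the use, racing with a concurrent writer. A minimal sketch of the sample-once idiom, with READ_ONCE() reduced to its volatile-cast core:

    #include <stdio.h>

    /* Core of the kernel's READ_ONCE(): the volatile access forces a
       single load and forbids the compiler from re-reading x later. */
    #define READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

    static unsigned int task_closid; /* may be updated by another thread */

    static unsigned int pick_closid(unsigned int default_closid)
    {
        unsigned int tmp = READ_ONCE(task_closid); /* one sample */
        return tmp ? tmp : default_closid;         /* test and use agree */
    }

    int main(void)
    {
        printf("closid = %u\n", pick_closid(7)); /* 0 -> falls back to 7 */
        task_closid = 3;
        printf("closid = %u\n", pick_closid(7)); /* uses the sampled 3 */
        return 0;
    }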

View File

@@ -120,7 +120,21 @@ static inline void cpu_svm_disable(void)
wrmsrl(MSR_VM_HSAVE_PA, 0);
rdmsrl(MSR_EFER, efer);
wrmsrl(MSR_EFER, efer & ~EFER_SVME);
if (efer & EFER_SVME) {
/*
* Force GIF=1 prior to disabling SVM to ensure INIT and NMI
* aren't blocked, e.g. if a fatal error occurred between CLGI
* and STGI. Note, STGI may #UD if SVM is disabled from NMI
* context between reading EFER and executing STGI. In that
* case, GIF must already be set, otherwise the NMI would have
* been blocked, so just eat the fault.
*/
asm_volatile_goto("1: stgi\n\t"
_ASM_EXTABLE(1b, %l[fault])
::: "memory" : fault);
fault:
wrmsrl(MSR_EFER, efer & ~EFER_SVME);
}
}
/** Makes sure SVM is disabled, if it is supported on the CPU
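
The cpu_svm_disable() hunk pairs asm_volatile_goto() with an exception-table entry so a #UD from STGI branches to the fault label instead of killing the CPU. The control-flow shape can be sketched with plain GCC/Clang asm goto in userspace; the unconditional jump below merely stands in for the faulting instruction, since real fault fixup needs the kernel's _ASM_EXTABLE:

    #include <stdio.h>

    /* asm goto lets inline assembly jump to a C label (x86 assembly
       here). The kernel's version executes STGI and only reaches the
       label if the instruction faults; we take the branch always,
       purely to show the shape. */
    static int try_stgi_like(void)
    {
        asm goto("jmp %l[fault]" : : : : fault);
        return 0;  /* instruction succeeded */
    fault:
        return 1;  /* instruction faulted; eat it and carry on */
    }

    int main(void)
    {
        printf("fault path taken: %d\n", try_stgi_like());
        return 0;
    }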

View File

@@ -205,6 +205,15 @@ static void init_amd_k6(struct cpuinfo_x86 *c)
return;
}
#endif
/*
* Work around Erratum 1386. The XSAVES instruction malfunctions in
* certain circumstances on Zen1/2 uarch, and not all parts have had
* updated microcode at the time of writing (March 2023).
*
* Affected parts all have no supervisor XSAVE states, meaning that
* the XSAVEC instruction (which works fine) is equivalent.
*/
clear_cpu_cap(c, X86_FEATURE_XSAVES);
}
static void init_amd_k7(struct cpuinfo_x86 *c)

View File

@@ -135,9 +135,17 @@ void __init check_bugs(void)
* have unknown values. AMD64_LS_CFG MSR is cached in the early AMD
* init code as it is not enumerated and depends on the family.
*/
if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
if (cpu_feature_enabled(X86_FEATURE_MSR_SPEC_CTRL)) {
rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
/*
* Previously running kernel (kexec), may have some controls
* turned ON. Clear them and let the mitigations setup below
* rediscover them based on configuration.
*/
x86_spec_ctrl_base &= ~SPEC_CTRL_MITIGATIONS_MASK;
}
/* Select the proper CPU mitigations before patching alternatives: */
spectre_v1_select_mitigation();
spectre_v2_select_mitigation();
@@ -975,14 +983,18 @@ spectre_v2_parse_user_cmdline(void)
return SPECTRE_V2_USER_CMD_AUTO;
}
static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode)
{
return mode == SPECTRE_V2_IBRS ||
mode == SPECTRE_V2_EIBRS ||
return mode == SPECTRE_V2_EIBRS ||
mode == SPECTRE_V2_EIBRS_RETPOLINE ||
mode == SPECTRE_V2_EIBRS_LFENCE;
}
static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
{
return spectre_v2_in_eibrs_mode(mode) || mode == SPECTRE_V2_IBRS;
}
static void __init
spectre_v2_user_select_mitigation(void)
{
@@ -1045,12 +1057,19 @@ spectre_v2_user_select_mitigation(void)
}
/*
* If no STIBP, IBRS or enhanced IBRS is enabled, or SMT impossible,
* STIBP is not required.
* If no STIBP, enhanced IBRS is enabled, or SMT impossible, STIBP
* is not required.
*
* Enhanced IBRS also protects against cross-thread branch target
* injection in user-mode as the IBRS bit remains always set which
* implicitly enables cross-thread protections. However, in legacy IBRS
* mode, the IBRS bit is set only on kernel entry and cleared on return
* to userspace. This disables the implicit cross-thread protection,
* so allow for STIBP to be selected in that case.
*/
if (!boot_cpu_has(X86_FEATURE_STIBP) ||
!smt_possible ||
spectre_v2_in_ibrs_mode(spectre_v2_enabled))
spectre_v2_in_eibrs_mode(spectre_v2_enabled))
return;
/*
@@ -2113,7 +2132,7 @@ static ssize_t mmio_stale_data_show_state(char *buf)
static char *stibp_state(void)
{
if (spectre_v2_in_ibrs_mode(spectre_v2_enabled))
if (spectre_v2_in_eibrs_mode(spectre_v2_enabled))
return "";
switch (spectre_v2_user_stibp) {
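
One detail worth pulling out of the check_bugs() hunk above: after kexec the new kernel inherits whatever SPEC_CTRL bits its predecessor left set, so the cached base value is sanitized by clearing only the bits the kernel itself manages. A tiny model of that masking (bit positions match the MSR definitions added earlier in this diff):

    #include <stdint.h>
    #include <stdio.h>

    #define SPEC_CTRL_IBRS        (1ULL << 0)
    #define SPEC_CTRL_STIBP       (1ULL << 1)
    #define SPEC_CTRL_SSBD        (1ULL << 2)
    #define SPEC_CTRL_RRSBA_DIS_S (1ULL << 6)
    #define SPEC_CTRL_MITIGATIONS_MASK \
            (SPEC_CTRL_IBRS | SPEC_CTRL_STIBP | SPEC_CTRL_SSBD | \
             SPEC_CTRL_RRSBA_DIS_S)

    int main(void)
    {
        /* Pretend the previous kernel left IBRS+SSBD plus an unrelated
           bit enabled; only the mitigation bits get cleared. */
        uint64_t inherited = SPEC_CTRL_IBRS | SPEC_CTRL_SSBD | (1ULL << 10);
        uint64_t base = inherited & ~SPEC_CTRL_MITIGATIONS_MASK;
        printf("base after mask: %#llx\n", (unsigned long long)base);
        return 0;
    }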

View File

@@ -55,7 +55,9 @@ struct cont_desc {
};
static u32 ucode_new_rev;
static u8 amd_ucode_patch[PATCH_MAX_SIZE];
/* One blob per node. */
static u8 amd_ucode_patch[MAX_NUMNODES][PATCH_MAX_SIZE];
/*
* Microcode patch container file is prepended to the initrd in cpio
@@ -429,7 +431,7 @@ apply_microcode_early_amd(u32 cpuid_1_eax, void *ucode, size_t size, bool save_p
patch = (u8 (*)[PATCH_MAX_SIZE])__pa_nodebug(&amd_ucode_patch);
#else
new_rev = &ucode_new_rev;
patch = &amd_ucode_patch;
patch = &amd_ucode_patch[0];
#endif
desc.cpuid_1_eax = cpuid_1_eax;
@@ -548,8 +550,7 @@ void load_ucode_amd_ap(unsigned int cpuid_1_eax)
apply_microcode_early_amd(cpuid_1_eax, cp.data, cp.size, false);
}
static enum ucode_state
load_microcode_amd(bool save, u8 family, const u8 *data, size_t size);
static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size);
int __init save_microcode_in_initrd_amd(unsigned int cpuid_1_eax)
{
@@ -567,19 +568,19 @@ int __init save_microcode_in_initrd_amd(unsigned int cpuid_1_eax)
if (!desc.mc)
return -EINVAL;
ret = load_microcode_amd(true, x86_family(cpuid_1_eax), desc.data, desc.size);
ret = load_microcode_amd(x86_family(cpuid_1_eax), desc.data, desc.size);
if (ret > UCODE_UPDATED)
return -EINVAL;
return 0;
}
void reload_ucode_amd(void)
void reload_ucode_amd(unsigned int cpu)
{
struct microcode_amd *mc;
u32 rev, dummy;
struct microcode_amd *mc;
mc = (struct microcode_amd *)amd_ucode_patch;
mc = (struct microcode_amd *)amd_ucode_patch[cpu_to_node(cpu)];
rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
@@ -845,9 +846,10 @@ static enum ucode_state __load_microcode_amd(u8 family, const u8 *data,
return UCODE_OK;
}
static enum ucode_state
load_microcode_amd(bool save, u8 family, const u8 *data, size_t size)
static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size)
{
struct cpuinfo_x86 *c;
unsigned int nid, cpu;
struct ucode_patch *p;
enum ucode_state ret;
@@ -860,23 +862,23 @@ load_microcode_amd(bool save, u8 family, const u8 *data, size_t size)
return ret;
}
p = find_patch(0);
if (!p) {
return ret;
} else {
if (boot_cpu_data.microcode >= p->patch_id)
return ret;
for_each_node(nid) {
cpu = cpumask_first(cpumask_of_node(nid));
c = &cpu_data(cpu);
p = find_patch(cpu);
if (!p)
continue;
if (c->microcode >= p->patch_id)
continue;
ret = UCODE_NEW;
memset(&amd_ucode_patch[nid], 0, PATCH_MAX_SIZE);
memcpy(&amd_ucode_patch[nid], p->data, min_t(u32, p->size, PATCH_MAX_SIZE));
}
/* save BSP's matching patch for early load */
if (!save)
return ret;
memset(amd_ucode_patch, 0, PATCH_MAX_SIZE);
memcpy(amd_ucode_patch, p->data, min_t(u32, p->size, PATCH_MAX_SIZE));
return ret;
}
@@ -901,12 +903,11 @@ static enum ucode_state request_microcode_amd(int cpu, struct device *device,
{
char fw_name[36] = "amd-ucode/microcode_amd.bin";
struct cpuinfo_x86 *c = &cpu_data(cpu);
bool bsp = c->cpu_index == boot_cpu_data.cpu_index;
enum ucode_state ret = UCODE_NFOUND;
const struct firmware *fw;
/* reload ucode container only on the boot cpu */
if (!refresh_fw || !bsp)
if (!refresh_fw)
return UCODE_OK;
if (c->x86 >= 0x15)
@@ -921,7 +922,7 @@ static enum ucode_state request_microcode_amd(int cpu, struct device *device,
if (!verify_container(fw->data, fw->size, false))
goto fw_release;
ret = load_microcode_amd(bsp, c->x86, fw->data, fw->size);
ret = load_microcode_amd(c->x86, fw->data, fw->size);
fw_release:
release_firmware(fw);
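
Taken together, the microcode hunks above replace the single cached patch blob with one blob per NUMA node, indexed through cpu_to_node(), so each node of a mixed-stepping system can stash the patch matching its own parts. A toy model of that per-node cache (the names and the cpu-to-node mapping are made up for illustration):

    #include <stdio.h>
    #include <string.h>

    #define MAX_NODES      4
    #define PATCH_MAX_SIZE 32

    /* One cached blob per node, mirroring amd_ucode_patch[MAX_NUMNODES]. */
    static unsigned char patch_cache[MAX_NODES][PATCH_MAX_SIZE];

    /* Stand-in for cpu_to_node(): pretend two CPUs per node. */
    static int cpu_to_node_model(int cpu) { return cpu / 2; }

    static void save_patch_for_cpu(int cpu, const void *data, size_t len)
    {
        int nid = cpu_to_node_model(cpu);

        if (len > PATCH_MAX_SIZE)
            len = PATCH_MAX_SIZE;
        memset(patch_cache[nid], 0, PATCH_MAX_SIZE);
        memcpy(patch_cache[nid], data, len);
    }

    int main(void)
    {
        save_patch_for_cpu(0, "patch-for-node0", 16);
        save_patch_for_cpu(3, "patch-for-node1", 16);
        printf("node0: %s\nnode1: %s\n", patch_cache[0], patch_cache[1]);
        return 0;
    }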

Some files were not shown because too many files have changed in this diff