Merge 0ecca62beb ("Merge tag 'ceph-for-5.16-rc1' of git://github.com/ceph/ceph-client") into android-mainline

Steps on the way to 5.16-rc1

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I5e3593fdc24a02373dbce7cdbc4125075af0c2ad
Committed by Greg Kroah-Hartman on 2021-11-16 19:18:14 +01:00 (commit db58b6aabb).
173 changed files with 3748 additions and 1881 deletions

View File

@ -512,3 +512,19 @@ Date: July 2021
Contact: "Daeho Jeong" <daehojeong@google.com>
Description: You can control the multiplier value of bdi device readahead window size
between 2 (default) and 256 for POSIX_FADV_SEQUENTIAL advise option.
What: /sys/fs/f2fs/<disk>/max_fragment_chunk
Date: August 2021
Contact: "Daeho Jeong" <daehojeong@google.com>
Description: With the "mode=fragment:block" mount option, we can scatter block allocation.
f2fs will allocate 1..<max_fragment_chunk> blocks in a chunk and make a hole
of length 1..<max_fragment_hole> by turns. This value can be set
between 1..512 and the default value is 4.
What: /sys/fs/f2fs/<disk>/max_fragment_hole
Date: August 2021
Contact: "Daeho Jeong" <daehojeong@google.com>
Description: With the "mode=fragment:block" mount option, we can scatter block allocation.
f2fs will allocate 1..<max_fragment_chunk> blocks in a chunk and make a hole
of length 1..<max_fragment_hole> by turns. This value can be set
between 1..512 and the default value is 4.
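
The two nodes above can be driven from user space with ordinary writes. A minimal sketch, not part of this diff, assuming a hypothetical disk name "sdb1" and arbitrary chunk/hole values:

/* Hypothetical example: tune f2fs fragment simulation from user space. */
#include <stdio.h>

static int write_sysfs(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f)
                return -1;
        fputs(val, f);
        return fclose(f);
}

int main(void)
{
        /* chunks of up to 8 blocks, separated by holes of up to 2 blocks */
        write_sysfs("/sys/fs/f2fs/sdb1/max_fragment_chunk", "8");
        write_sysfs("/sys/fs/f2fs/sdb1/max_fragment_hole", "2");
        return 0;
}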

View File

@ -197,10 +197,29 @@ fault_type=%d Support configuring fault injection type, should be
FAULT_DISCARD 0x000002000
FAULT_WRITE_IO 0x000004000
FAULT_SLAB_ALLOC 0x000008000
FAULT_DQUOT_INIT 0x000010000
=================== ===========
mode=%s Control block allocation mode which supports "adaptive"
and "lfs". In "lfs" mode, there should be no random
writes towards main area.
"fragment:segment" and "fragment:block" are newly added here.
These are developer options for experiments that simulate filesystem
fragmentation and the state left behind after GC. Developers use these
modes to better understand fragmented/after-GC conditions and eventually
gain insights into handling them.
In "fragment:segment", f2fs allocates each new segment at a random
position. With this, we can simulate the after-GC condition.
In "fragment:block", we can scatter block allocation with the
"max_fragment_chunk" and "max_fragment_hole" sysfs nodes.
Some randomness is added to both the chunk and hole sizes to make the
result closer to a realistic IO pattern. So, in this mode, f2fs will
allocate 1..<max_fragment_chunk> blocks in a chunk and make a hole of
length 1..<max_fragment_hole> by turns. With this, the newly
allocated blocks will be scattered throughout the whole partition.
Note that "fragment:block" implicitly enables the "fragment:segment"
option for more randomness.
Please use these options only for experiments; we strongly recommend
re-formatting the filesystem after using them. A usage sketch follows
this excerpt.
io_bits=%u Set the bit size of write IO requests. It should be set
with "mode=lfs".
usrquota Enable plain user disk quota accounting.
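
A minimal sketch of exercising the fragment mode from user space, assuming a scratch device /dev/sdb1 and mount point /mnt/f2fs (both placeholders); this is not part of the patch itself:

/* Hypothetical example: mount an f2fs scratch partition in fragment mode. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
        /* "mode=fragment:block" also implies segment-level randomization */
        if (mount("/dev/sdb1", "/mnt/f2fs", "f2fs", 0, "mode=fragment:block")) {
                perror("mount");
                return 1;
        }
        return 0;
}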

View File

@ -15,7 +15,10 @@ For security module support, three SCTP specific hooks have been implemented::
security_sctp_assoc_request()
security_sctp_bind_connect()
security_sctp_sk_clone()
security_sctp_assoc_established()
Also the following security hook has been utilised::
security_inet_conn_established()
The usage of these hooks is described below, with the SELinux implementation
described in the `SCTP SELinux Support`_ chapter.
@ -119,12 +122,11 @@ calls **sctp_peeloff**\(3).
@newsk - pointer to new sock structure.
security_sctp_assoc_established()
security_inet_conn_established()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Called when a COOKIE ACK is received, and the peer secid will be
saved into ``@asoc->peer_secid`` for client::
Called when a COOKIE ACK is received::
@asoc - pointer to sctp association structure.
@sk - pointer to sock structure.
@skb - pointer to skbuff of the COOKIE ACK packet.
@ -132,7 +134,7 @@ Security Hooks used for Association Establishment
-------------------------------------------------
The following diagram shows the use of ``security_sctp_bind_connect()``,
``security_sctp_assoc_request()``, ``security_sctp_assoc_established()`` when
``security_sctp_assoc_request()``, ``security_inet_conn_established()`` when
establishing an association.
::
@ -170,7 +172,7 @@ establishing an association.
<------------------------------------------- COOKIE ACK
| |
sctp_sf_do_5_1E_ca |
Call security_sctp_assoc_established() |
Call security_inet_conn_established() |
to set the peer label. |
| |
| If SCTP_SOCKET_TCP or peeled off
@ -196,7 +198,7 @@ hooks with the SELinux specifics expanded below::
security_sctp_assoc_request()
security_sctp_bind_connect()
security_sctp_sk_clone()
security_sctp_assoc_established()
security_inet_conn_established()
security_sctp_assoc_request()
@ -269,12 +271,12 @@ sockets sid and peer sid to that contained in the ``@asoc sid`` and
@newsk - pointer to new sock structure.
security_sctp_assoc_established()
security_inet_conn_established()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Called when a COOKIE ACK is received where it sets the connection's peer sid
to that in ``@skb``::
@asoc - pointer to sctp association structure.
@sk - pointer to sock structure.
@skb - pointer to skbuff of the COOKIE ACK packet.
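
For orientation only, a rough sketch of how an LSM wires up the security_inet_conn_established() hook that this change falls back to; the "example" names are invented and the hook body is left empty:

/* Hypothetical sketch of registering the hook; not taken from this diff. */
#include <linux/lsm_hooks.h>
#include <net/sock.h>

static void example_inet_conn_established(struct sock *sk, struct sk_buff *skb)
{
        /* e.g. derive the peer label for @sk from the COOKIE ACK carried in @skb */
}

static struct security_hook_list example_hooks[] __lsm_ro_after_init = {
        LSM_HOOK_INIT(inet_conn_established, example_inet_conn_established),
};

static int __init example_lsm_init(void)
{
        security_add_hooks(example_hooks, ARRAY_SIZE(example_hooks), "example");
        return 0;
}

DEFINE_LSM(example) = {
        .name = "example",
        .init = example_lsm_init,
};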

View File

@ -6911,6 +6911,20 @@ MAP_SHARED mmap will result in an -EINVAL return.
When enabled the VMM may make use of the ``KVM_ARM_MTE_COPY_TAGS`` ioctl to
perform a bulk copy of tags to/from the guest.
7.29 KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM
-------------------------------------
Architectures: x86 SEV enabled
Type: vm
Parameters: args[0] is the fd of the source vm
Returns: 0 on success
This capability enables userspace to migrate the encryption context from the VM
indicated by the fd to the VM this is called on.
This is intended to support intra-host migration of VMs between userspace VMMs,
upgrading the VMM process without interrupting the guest.
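
A hedged user-space sketch of how a VMM might use this capability, assuming kernel headers that already define KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM; the fd names are placeholders:

/* Hypothetical VMM-side sketch; not part of the documentation patch. */
#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

static int move_enc_context(int dst_vm_fd, int src_vm_fd)
{
        struct kvm_enable_cap cap;

        memset(&cap, 0, sizeof(cap));
        cap.cap = KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM;
        cap.args[0] = src_vm_fd;        /* args[0] is the fd of the source vm */

        /* on success the encryption context now belongs to dst_vm_fd */
        return ioctl(dst_vm_fd, KVM_ENABLE_CAP, &cap);
}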
8. Other capabilities.
======================

View File

@ -4676,11 +4676,10 @@ COCCINELLE/Semantic Patches (SmPL)
M: Julia Lawall <Julia.Lawall@inria.fr>
M: Gilles Muller <Gilles.Muller@inria.fr>
M: Nicolas Palix <nicolas.palix@imag.fr>
M: Michal Marek <michal.lkml@markovi.net>
L: cocci@systeme.lip6.fr (moderated for non-subscribers)
L: cocci@inria.fr (moderated for non-subscribers)
S: Supported
W: http://coccinelle.lip6.fr/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild.git misc
W: https://coccinelle.gitlabpages.inria.fr/website/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jlawall/linux.git
F: Documentation/dev-tools/coccinelle.rst
F: scripts/coccicheck
F: scripts/coccinelle/

View File

@ -68,6 +68,7 @@
#define ESR_ELx_EC_MAX (0x3F)
#define ESR_ELx_EC_SHIFT (26)
#define ESR_ELx_EC_WIDTH (6)
#define ESR_ELx_EC_MASK (UL(0x3F) << ESR_ELx_EC_SHIFT)
#define ESR_ELx_EC(esr) (((esr) & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT)
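
To make the new width/mask definitions concrete, here is a small host-side sketch of the same 6-bit exception-class extraction; the sample ESR value is made up:

/* Illustrative only: decode the exception class (EC) field from an ESR value. */
#include <stdint.h>
#include <stdio.h>

#define ESR_ELx_EC_SHIFT 26
#define ESR_ELx_EC_WIDTH 6
#define ESR_ELx_EC_MASK  (0x3FUL << ESR_ELx_EC_SHIFT)
#define ESR_ELx_EC(esr)  (((esr) & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT)

int main(void)
{
        uint64_t esr = 0x96000045;      /* example syndrome value */

        /* prints EC = 0x25 */
        printf("EC = 0x%02x\n", (unsigned int)ESR_ELx_EC(esr));
        return 0;
}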

View File

@ -584,7 +584,7 @@ struct kvm_vcpu_stat {
u64 exits;
};
int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init);
void kvm_vcpu_preferred_target(struct kvm_vcpu_init *init);
unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);

View File

@ -1389,12 +1389,9 @@ long kvm_arch_vm_ioctl(struct file *filp,
return kvm_vm_ioctl_set_device_addr(kvm, &dev_addr);
}
case KVM_ARM_PREFERRED_TARGET: {
int err;
struct kvm_vcpu_init init;
err = kvm_vcpu_preferred_target(&init);
if (err)
return err;
kvm_vcpu_preferred_target(&init);
if (copy_to_user(argp, &init, sizeof(init)))
return -EFAULT;

View File

@ -869,13 +869,10 @@ u32 __attribute_const__ kvm_target_cpu(void)
return KVM_ARM_TARGET_GENERIC_V8;
}
int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init)
void kvm_vcpu_preferred_target(struct kvm_vcpu_init *init)
{
u32 target = kvm_target_cpu();
if (target < 0)
return -ENODEV;
memset(init, 0, sizeof(*init));
/*
@ -885,8 +882,6 @@ int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init)
* target type.
*/
init->target = (__u32)target;
return 0;
}
int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)

View File

@ -44,7 +44,7 @@
el1_sync: // Guest trapped into EL2
mrs x0, esr_el2
lsr x0, x0, #ESR_ELx_EC_SHIFT
ubfx x0, x0, #ESR_ELx_EC_SHIFT, #ESR_ELx_EC_WIDTH
cmp x0, #ESR_ELx_EC_HVC64
ccmp x0, #ESR_ELx_EC_HVC32, #4, ne
b.ne el1_trap

View File

@ -141,7 +141,7 @@ SYM_FUNC_END(__host_hvc)
.L__vect_start\@:
stp x0, x1, [sp, #-16]!
mrs x0, esr_el2
lsr x0, x0, #ESR_ELx_EC_SHIFT
ubfx x0, x0, #ESR_ELx_EC_SHIFT, #ESR_ELx_EC_WIDTH
cmp x0, #ESR_ELx_EC_HVC64
b.eq __host_hvc
b __host_exit

View File

@ -178,7 +178,7 @@ static int finalize_host_mappings_walker(u64 addr, u64 end, u32 level,
phys = kvm_pte_to_phys(pte);
if (!addr_is_memory(phys))
return 0;
return -EINVAL;
/*
* Adjust the host stage-2 mappings to match the ownership attributes
@ -207,8 +207,18 @@ static int finalize_host_mappings(void)
.cb = finalize_host_mappings_walker,
.flags = KVM_PGTABLE_WALK_LEAF,
};
int i, ret;
return kvm_pgtable_walk(&pkvm_pgtable, 0, BIT(pkvm_pgtable.ia_bits), &walker);
for (i = 0; i < hyp_memblock_nr; i++) {
struct memblock_region *reg = &hyp_memory[i];
u64 start = (u64)hyp_phys_to_virt(reg->base);
ret = kvm_pgtable_walk(&pkvm_pgtable, start, reg->size, &walker);
if (ret)
return ret;
}
return 0;
}
void __noreturn __pkvm_init_finalise(void)

View File

@ -474,7 +474,7 @@ bool kvm_handle_pvm_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
return true;
}
/**
/*
* Handler for protected VM restricted exceptions.
*
* Inject an undefined exception into the guest and return true to indicate that

View File

@ -37,4 +37,4 @@ platform-$(CONFIG_MACH_TX49XX) += txx9/
platform-$(CONFIG_MACH_VR41XX) += vr41xx/
# include the platform specific files
include $(patsubst %, $(srctree)/arch/mips/%/Platform, $(platform-y))
include $(patsubst %/, $(srctree)/arch/mips/%/Platform, $(platform-y))

View File

@ -292,6 +292,8 @@ config BMIPS_GENERIC
select USB_OHCI_BIG_ENDIAN_DESC if CPU_BIG_ENDIAN
select USB_OHCI_BIG_ENDIAN_MMIO if CPU_BIG_ENDIAN
select HARDIRQS_SW_RESEND
select HAVE_PCI
select PCI_DRIVERS_GENERIC
help
Build a generic DT-based kernel image that boots on select
BCM33xx cable modem chips, BCM63xx DSL chips, and BCM7xxx set-top
@ -333,6 +335,9 @@ config BCM63XX
select SYS_SUPPORTS_32BIT_KERNEL
select SYS_SUPPORTS_BIG_ENDIAN
select SYS_HAS_EARLY_PRINTK
select SYS_HAS_CPU_BMIPS32_3300
select SYS_HAS_CPU_BMIPS4350
select SYS_HAS_CPU_BMIPS4380
select SWAP_IO_SPACE
select GPIOLIB
select MIPS_L1_CACHE_SHIFT_4

View File

@ -253,7 +253,9 @@ endif
#
# Board-dependent options and extra files
#
ifdef need-compiler
include $(srctree)/arch/mips/Kbuild.platforms
endif
ifdef CONFIG_PHYSICAL_START
load-y = $(CONFIG_PHYSICAL_START)

View File

@ -1,3 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
ashldi3.c
bswapsi.c

View File

@ -50,19 +50,9 @@ vmlinuzobjs-$(CONFIG_MIPS_ALCHEMY) += $(obj)/uart-alchemy.o
vmlinuzobjs-$(CONFIG_ATH79) += $(obj)/uart-ath79.o
endif
extra-y += uart-ath79.c
$(obj)/uart-ath79.c: $(srctree)/arch/mips/ath79/early_printk.c
$(call cmd,shipped)
vmlinuzobjs-$(CONFIG_KERNEL_XZ) += $(obj)/ashldi3.o
extra-y += ashldi3.c
$(obj)/ashldi3.c: $(obj)/%.c: $(srctree)/lib/%.c FORCE
$(call if_changed,shipped)
extra-y += bswapsi.c
$(obj)/bswapsi.c: $(obj)/%.c: $(srctree)/arch/mips/lib/%.c FORCE
$(call if_changed,shipped)
vmlinuzobjs-$(CONFIG_KERNEL_ZSTD) += $(obj)/bswapdi.o
targets := $(notdir $(vmlinuzobjs-y))

View File

@ -0,0 +1,2 @@
// SPDX-License-Identifier: GPL-2.0-only
#include "../../../../lib/ashldi3.c"

View File

@ -0,0 +1,2 @@
// SPDX-License-Identifier: GPL-2.0-only
#include "../../lib/bswapdi.c"

View File

@ -0,0 +1,2 @@
// SPDX-License-Identifier: GPL-2.0-only
#include "../../lib/bswapsi.c"

View File

@ -0,0 +1,2 @@
// SPDX-License-Identifier: GPL-2.0-only
#include "../../ath79/early_printk.c"

View File

@ -1,6 +1,7 @@
# CONFIG_LOCALVERSION_AUTO is not set
# CONFIG_SWAP is not set
CONFIG_NO_HZ=y
CONFIG_HZ=1000
CONFIG_BLK_DEV_INITRD=y
CONFIG_EXPERT=y
# CONFIG_VM_EVENT_COUNTERS is not set
@ -8,17 +9,34 @@ CONFIG_EXPERT=y
CONFIG_BMIPS_GENERIC=y
CONFIG_CPU_LITTLE_ENDIAN=y
CONFIG_HIGHMEM=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_SMP=y
CONFIG_NR_CPUS=4
CONFIG_CC_STACKPROTECTOR_STRONG=y
# CONFIG_SECCOMP is not set
CONFIG_MIPS_O32_FP64_SUPPORT=y
# CONFIG_RD_GZIP is not set
# CONFIG_RD_BZIP2 is not set
# CONFIG_RD_LZMA is not set
CONFIG_RD_XZ=y
# CONFIG_RD_LZO is not set
# CONFIG_RD_LZ4 is not set
# CONFIG_IOSCHED_DEADLINE is not set
# CONFIG_IOSCHED_CFQ is not set
CONFIG_PCI=y
CONFIG_PCI_MSI=y
CONFIG_PCIEASPM_POWERSAVE=y
CONFIG_PCIEPORTBUS=y
CONFIG_PCIE_BRCMSTB=y
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_STAT=y
CONFIG_CPU_FREQ_STAT_DETAILS=y
CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y
CONFIG_BMIPS_CPUFREQ=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_NET=y
@ -32,32 +50,99 @@ CONFIG_INET=y
# CONFIG_INET_DIAG is not set
CONFIG_CFG80211=y
CONFIG_NL80211_TESTMODE=y
CONFIG_WIRELESS=y
CONFIG_MAC80211=y
CONFIG_NL80211=y
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
# CONFIG_STANDALONE is not set
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
CONFIG_BRCMSTB_GISB_ARB=y
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODVERSIONS=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_IP_PNP_RARP=y
CONFIG_IP_MROUTE=y
CONFIG_IP_PIMSM_V1=y
CONFIG_IP_PIMSM_V2=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_INET_LRO is not set
CONFIG_INET_UDP_DIAG=y
CONFIG_TCP_CONG_ADVANCED=y
CONFIG_TCP_CONG_BIC=y
# CONFIG_TCP_CONG_WESTWOOD is not set
# CONFIG_TCP_CONG_HTCP is not set
# CONFIG_IPV6 is not set
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_FILTER=y
CONFIG_NETFILTER=y
CONFIG_NETFILTER_XTABLES=y
CONFIG_BRIDGE=y
CONFIG_BRIDGE_NETFILTER=m
CONFIG_BRIDGE_NF_EBTABLES=m
CONFIG_BRIDGE_EBT_BROUTE=m
CONFIG_NET_DSA=y
CONFIG_NET_SWITCHDEV=y
CONFIG_DMA_CMA=y
CONFIG_CMA_ALIGNMENT=12
CONFIG_SPI=y
CONFIG_SPI_BRCMSTB=y
CONFIG_MTD=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_BLOCK=y
CONFIG_MTD_CFI=y
CONFIG_MTD_JEDECPROBE=y
CONFIG_MTD_CFI_INTELEXT=y
CONFIG_MTD_CFI_AMDSTD=y
CONFIG_MTD_PHYSMAP=y
CONFIG_MTD_CFI_STAA=y
CONFIG_MTD_ROM=y
CONFIG_MTD_ABSENT=y
CONFIG_MTD_PHYSMAP_OF=y
CONFIG_MTD_M25P80=y
CONFIG_MTD_NAND=y
CONFIG_MTD_NAND_BRCMNAND=y
CONFIG_MTD_SPI_NOR=y
# CONFIG_MTD_SPI_NOR_USE_4K_SECTORS is not set
CONFIG_MTD_UBI=y
CONFIG_MTD_UBI_GLUEBI=y
CONFIG_PROC_DEVICETREE=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_SIZE=8192
# CONFIG_BLK_DEV is not set
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_SG=y
CONFIG_SCSI_MULTI_LUN=y
# CONFIG_SCSI_LOWLEVEL is not set
CONFIG_NETDEVICES=y
CONFIG_VLAN_8021Q=y
CONFIG_MACVLAN=y
CONFIG_BCMGENET=y
CONFIG_USB_USBNET=y
# CONFIG_INPUT is not set
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_KEYBOARD is not set
# CONFIG_INPUT_MOUSE is not set
CONFIG_INPUT_MISC=y
CONFIG_INPUT_UINPUT=y
# CONFIG_SERIO is not set
# CONFIG_VT is not set
CONFIG_VT=y
CONFIG_VT_HW_CONSOLE_BINDING=y
# CONFIG_DEVKMEM is not set
CONFIG_SERIAL_8250=y
# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_OF_PLATFORM=y
# CONFIG_HW_RANDOM is not set
CONFIG_POWER_RESET=y
CONFIG_POWER_RESET_BRCMSTB=y
CONFIG_POWER_RESET_SYSCON=y
CONFIG_POWER_SUPPLY=y
# CONFIG_HWMON is not set
@ -69,22 +154,76 @@ CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_HCD_PLATFORM=y
CONFIG_USB_STORAGE=y
CONFIG_SOC_BRCMSTB=y
CONFIG_MMC=y
CONFIG_MMC_BLOCK_MINORS=16
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
CONFIG_EXT4_FS=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_DNOTIFY is not set
CONFIG_FUSE_FS=y
CONFIG_VFAT_FS=y
CONFIG_PROC_KCORE=y
CONFIG_TMPFS=y
CONFIG_NFS_FS=y
CONFIG_CIFS=y
CONFIG_JBD2_DEBUG=y
CONFIG_FUSE_FS=y
CONFIG_FHANDLE=y
CONFIG_CGROUPS=y
CONFIG_CUSE=y
CONFIG_ISO9660_FS=y
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
CONFIG_UDF_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_TMPFS=y
CONFIG_JFFS2_FS=y
CONFIG_UBIFS_FS=y
CONFIG_SQUASHFS=y
CONFIG_SQUASHFS_LZO=y
CONFIG_SQUASHFS_XZ=y
CONFIG_NFS_FS=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=y
CONFIG_NFS_V4_1=y
CONFIG_NFS_V4_2=y
CONFIG_ROOT_NFS=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
# CONFIG_CRYPTO_HW is not set
CONFIG_PRINTK_TIME=y
CONFIG_DYNAMIC_DEBUG=y
# CONFIG_DEBUG_INFO is not set
# CONFIG_DEBUG_INFO_REDUCED is not set
CONFIG_DEBUG_FS=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_LOCKUP_DETECTOR=y
CONFIG_DEBUG_USER=y
CONFIG_CMDLINE_BOOL=y
CONFIG_CMDLINE="earlycon"
# CONFIG_MIPS_CMDLINE_FROM_DTB is not set
CONFIG_MIPS_CMDLINE_DTB_EXTEND=y
# CONFIG_MIPS_CMDLINE_FROM_BOOTLOADER is not set
# CONFIG_CRYPTO_HW is not set
CONFIG_DT_BCM974XX=y
CONFIG_FW_CFE=y
CONFIG_ATA=y
CONFIG_SATA_AHCI_PLATFORM=y
CONFIG_AHCI_BRCMSTB=y
CONFIG_GENERIC_PHY=y
CONFIG_GPIOLIB=y
CONFIG_GPIO_SYSFS=y
CONFIG_PHY_BRCM_USB=y
CONFIG_PHY_BRCM_SATA=y
CONFIG_PM_RUNTIME=y
CONFIG_PM_DEBUG=y
CONFIG_SYSVIPC=y
CONFIG_FUNCTION_GRAPH_TRACER=y
CONFIG_DYNAMIC_FTRACE=y
CONFIG_FUNCTION_TRACER=y
CONFIG_FUNCTION_PROFILER=y
CONFIG_IRQSOFF_TRACER=y
CONFIG_SCHED_TRACER=y
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_FTRACE_SYSCALLS=y
CONFIG_TRACER_SNAPSHOT=y
CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP=y
CONFIG_STACK_TRACER=y

View File

@ -117,21 +117,21 @@ static void __init dec_be_init(void)
{
switch (mips_machtype) {
case MACH_DS23100: /* DS2100/DS3100 Pmin/Pmax */
board_be_handler = dec_kn01_be_handler;
mips_set_be_handler(dec_kn01_be_handler);
busirq_handler = dec_kn01_be_interrupt;
busirq_flags |= IRQF_SHARED;
dec_kn01_be_init();
break;
case MACH_DS5000_1XX: /* DS5000/1xx 3min */
case MACH_DS5000_XX: /* DS5000/xx Maxine */
board_be_handler = dec_kn02xa_be_handler;
mips_set_be_handler(dec_kn02xa_be_handler);
busirq_handler = dec_kn02xa_be_interrupt;
dec_kn02xa_be_init();
break;
case MACH_DS5000_200: /* DS5000/200 3max */
case MACH_DS5000_2X0: /* DS5000/240 3max+ */
case MACH_DS5900: /* DS5900 bigmax */
board_be_handler = dec_ecc_be_handler;
mips_set_be_handler(dec_ecc_be_handler);
busirq_handler = dec_ecc_be_interrupt;
dec_ecc_be_init();
break;

View File

@ -15,7 +15,7 @@
#define MIPS_BE_FATAL 2 /* treat as an unrecoverable error */
extern void (*board_be_init)(void);
extern int (*board_be_handler)(struct pt_regs *regs, int is_fixup);
void mips_set_be_handler(int (*handler)(struct pt_regs *reg, int is_fixup));
extern void (*board_nmi_handler_setup)(void);
extern void (*board_ejtag_handler_setup)(void);

View File

@ -103,13 +103,19 @@ extern asmlinkage void handle_reserved(void);
extern void tlb_do_page_fault_0(void);
void (*board_be_init)(void);
int (*board_be_handler)(struct pt_regs *regs, int is_fixup);
static int (*board_be_handler)(struct pt_regs *regs, int is_fixup);
void (*board_nmi_handler_setup)(void);
void (*board_ejtag_handler_setup)(void);
void (*board_bind_eic_interrupt)(int irq, int regset);
void (*board_ebase_setup)(void);
void(*board_cache_error_setup)(void);
void mips_set_be_handler(int (*handler)(struct pt_regs *regs, int is_fixup))
{
board_be_handler = handler;
}
EXPORT_SYMBOL_GPL(mips_set_be_handler);
static void show_raw_backtrace(unsigned long reg29, const char *loglvl,
bool user)
{

View File

@ -112,5 +112,5 @@ static int ip22_be_handler(struct pt_regs *regs, int is_fixup)
void __init ip22_be_init(void)
{
board_be_handler = ip22_be_handler;
mips_set_be_handler(ip22_be_handler);
}

View File

@ -468,7 +468,7 @@ static int ip28_be_handler(struct pt_regs *regs, int is_fixup)
void __init ip22_be_init(void)
{
board_be_handler = ip28_be_handler;
mips_set_be_handler(ip28_be_handler);
}
int ip28_show_be_info(struct seq_file *m)

View File

@ -85,7 +85,7 @@ void __init ip27_be_init(void)
int cpu = LOCAL_HUB_L(PI_CPU_NUM);
int cpuoff = cpu << 8;
board_be_handler = ip27_be_handler;
mips_set_be_handler(ip27_be_handler);
LOCAL_HUB_S(PI_ERR_INT_PEND,
cpu ? PI_ERR_CLEAR_ALL_B : PI_ERR_CLEAR_ALL_A);

View File

@ -34,5 +34,5 @@ static int ip32_be_handler(struct pt_regs *regs, int is_fixup)
void __init ip32_be_init(void)
{
board_be_handler = ip32_be_handler;
mips_set_be_handler(ip32_be_handler);
}

View File

@ -122,7 +122,7 @@ void __init plat_mem_setup(void)
#error invalid SiByte board configuration
#endif
board_be_handler = swarm_be_handler;
mips_set_be_handler(swarm_be_handler);
if (xicor_probe())
swarm_rtc_type = RTC_XICOR;

View File

@ -80,7 +80,7 @@ static int tx4927_be_handler(struct pt_regs *regs, int is_fixup)
}
static void __init tx4927_be_init(void)
{
board_be_handler = tx4927_be_handler;
mips_set_be_handler(tx4927_be_handler);
}
static struct resource tx4927_sdram_resource[4];

View File

@ -82,7 +82,7 @@ static int tx4938_be_handler(struct pt_regs *regs, int is_fixup)
}
static void __init tx4938_be_init(void)
{
board_be_handler = tx4938_be_handler;
mips_set_be_handler(tx4938_be_handler);
}
static struct resource tx4938_sdram_resource[4];

View File

@ -86,7 +86,7 @@ static int tx4939_be_handler(struct pt_regs *regs, int is_fixup)
}
static void __init tx4939_be_init(void)
{
board_be_handler = tx4939_be_handler;
mips_set_be_handler(tx4939_be_handler);
}
static struct resource tx4939_sdram_resource[4];

View File

@ -57,7 +57,7 @@ endif
# VDSO linker flags.
ldflags-y := -Bsymbolic --no-undefined -soname=linux-vdso.so.1 \
$(filter -E%,$(KBUILD_CFLAGS)) -nostdlib -shared \
$(filter -E%,$(KBUILD_CFLAGS)) -shared \
-G 0 --eh-frame-hdr --hash-style=sysv --build-id=sha1 -T
CFLAGS_REMOVE_vdso.o = $(CC_FLAGS_FTRACE)

View File

@ -62,6 +62,7 @@ config RISCV
select GENERIC_SCHED_CLOCK
select GENERIC_SMP_IDLE_THREAD
select GENERIC_TIME_VSYSCALL if MMU && 64BIT
select GENERIC_VDSO_TIME_NS if HAVE_GENERIC_VDSO
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL
select HAVE_ARCH_JUMP_LABEL_RELATIVE if !XIP_KERNEL

View File

@ -136,3 +136,13 @@ zinstall: install-image = Image.gz
install zinstall:
$(CONFIG_SHELL) $(srctree)/$(boot)/install.sh $(KERNELRELEASE) \
$(boot)/$(install-image) System.map "$(INSTALL_PATH)"
PHONY += rv32_randconfig
rv32_randconfig:
$(Q)$(MAKE) KCONFIG_ALLCONFIG=$(srctree)/arch/riscv/configs/32-bit.config \
-f $(srctree)/Makefile randconfig
PHONY += rv64_randconfig
rv64_randconfig:
$(Q)$(MAKE) KCONFIG_ALLCONFIG=$(srctree)/arch/riscv/configs/64-bit.config \
-f $(srctree)/Makefile randconfig

View File

@ -9,10 +9,8 @@
#define RTCCLK_FREQ 1000000
/ {
#address-cells = <2>;
#size-cells = <2>;
model = "Microchip PolarFire-SoC Icicle Kit";
compatible = "microchip,mpfs-icicle-kit";
compatible = "microchip,mpfs-icicle-kit", "microchip,mpfs";
aliases {
ethernet0 = &emac1;
@ -35,9 +33,6 @@ memory@80000000 {
reg = <0x0 0x80000000 0x0 0x40000000>;
clocks = <&clkcfg 26>;
};
soc {
};
};
&serial0 {
@ -56,8 +51,17 @@ &serial3 {
status = "okay";
};
&sdcard {
&mmc {
status = "okay";
bus-width = <4>;
disable-wp;
cap-sd-highspeed;
card-detect-delay = <200>;
sd-uhs-sdr12;
sd-uhs-sdr25;
sd-uhs-sdr50;
sd-uhs-sdr104;
};
&emac0 {

View File

@ -6,8 +6,8 @@
/ {
#address-cells = <2>;
#size-cells = <2>;
model = "Microchip MPFS Icicle Kit";
compatible = "microchip,mpfs-icicle-kit";
model = "Microchip PolarFire SoC";
compatible = "microchip,mpfs";
chosen {
};
@ -161,7 +161,7 @@ cache-controller@2010000 {
};
clint@2000000 {
compatible = "sifive,clint0";
compatible = "sifive,fu540-c000-clint", "sifive,clint0";
reg = <0x0 0x2000000 0x0 0xC000>;
interrupts-extended = <&cpu0_intc 3 &cpu0_intc 7
&cpu1_intc 3 &cpu1_intc 7
@ -172,7 +172,7 @@ &cpu3_intc 3 &cpu3_intc 7
plic: interrupt-controller@c000000 {
#interrupt-cells = <1>;
compatible = "sifive,plic-1.0.0";
compatible = "sifive,fu540-c000-plic", "sifive,plic-1.0.0";
reg = <0x0 0xc000000 0x0 0x4000000>;
riscv,ndev = <186>;
interrupt-controller;
@ -262,39 +262,13 @@ serial3: serial@20104000 {
status = "disabled";
};
emmc: mmc@20008000 {
compatible = "cdns,sd4hc";
/* Common node entry for emmc/sd */
mmc: mmc@20008000 {
compatible = "microchip,mpfs-sd4hc", "cdns,sd4hc";
reg = <0x0 0x20008000 0x0 0x1000>;
interrupt-parent = <&plic>;
interrupts = <88 89>;
pinctrl-names = "default";
clocks = <&clkcfg 6>;
bus-width = <4>;
cap-mmc-highspeed;
mmc-ddr-3_3v;
max-frequency = <200000000>;
non-removable;
no-sd;
no-sdio;
voltage-ranges = <3300 3300>;
status = "disabled";
};
sdcard: sdhc@20008000 {
compatible = "cdns,sd4hc";
reg = <0x0 0x20008000 0x0 0x1000>;
interrupt-parent = <&plic>;
interrupts = <88>;
pinctrl-names = "default";
clocks = <&clkcfg 6>;
bus-width = <4>;
disable-wp;
cap-sd-highspeed;
card-detect-delay = <200>;
sd-uhs-sdr12;
sd-uhs-sdr25;
sd-uhs-sdr50;
sd-uhs-sdr104;
max-frequency = <200000000>;
status = "disabled";
};

View File

@ -141,7 +141,7 @@ soc {
ranges;
plic0: interrupt-controller@c000000 {
#interrupt-cells = <1>;
compatible = "sifive,plic-1.0.0";
compatible = "sifive,fu540-c000-plic", "sifive,plic-1.0.0";
reg = <0x0 0xc000000 0x0 0x4000000>;
riscv,ndev = <53>;
interrupt-controller;

View File

@ -8,10 +8,9 @@
#define RTCCLK_FREQ 1000000
/ {
#address-cells = <2>;
#size-cells = <2>;
model = "SiFive HiFive Unleashed A00";
compatible = "sifive,hifive-unleashed-a00", "sifive,fu540-c000";
compatible = "sifive,hifive-unleashed-a00", "sifive,fu540-c000",
"sifive,fu540";
chosen {
stdout-path = "serial0";
@ -26,9 +25,6 @@ memory@80000000 {
reg = <0x0 0x80000000 0x2 0x00000000>;
};
soc {
};
hfclk: hfclk {
#clock-cells = <0>;
compatible = "fixed-clock";
@ -63,7 +59,7 @@ &i2c0 {
&qspi0 {
status = "okay";
flash@0 {
compatible = "issi,is25wp256", "jedec,spi-nor";
compatible = "jedec,spi-nor";
reg = <0>;
spi-max-frequency = <50000000>;
m25p,fast-read;

View File

@ -8,8 +8,6 @@
#define RTCCLK_FREQ 1000000
/ {
#address-cells = <2>;
#size-cells = <2>;
model = "SiFive HiFive Unmatched A00";
compatible = "sifive,hifive-unmatched-a00", "sifive,fu740-c000",
"sifive,fu740";
@ -27,9 +25,6 @@ memory@80000000 {
reg = <0x0 0x80000000 0x4 0x00000000>;
};
soc {
};
hfclk: hfclk {
#clock-cells = <0>;
compatible = "fixed-clock";
@ -211,7 +206,7 @@ vdd_ldo11: ldo11 {
&qspi0 {
status = "okay";
flash@0 {
compatible = "issi,is25wp256", "jedec,spi-nor";
compatible = "jedec,spi-nor";
reg = <0>;
spi-max-frequency = <50000000>;
m25p,fast-read;

View File

@ -0,0 +1,2 @@
CONFIG_ARCH_RV32I=y
CONFIG_32BIT=y

View File

@ -0,0 +1,2 @@
CONFIG_ARCH_RV64I=y
CONFIG_64BIT=y

View File

@ -72,9 +72,10 @@ CONFIG_GPIOLIB=y
CONFIG_GPIO_SIFIVE=y
# CONFIG_PTP_1588_CLOCK is not set
CONFIG_POWER_RESET=y
CONFIG_DRM=y
CONFIG_DRM_RADEON=y
CONFIG_DRM_VIRTIO_GPU=y
CONFIG_DRM=m
CONFIG_DRM_RADEON=m
CONFIG_DRM_NOUVEAU=m
CONFIG_DRM_VIRTIO_GPU=m
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_USB=y
CONFIG_USB_XHCI_HCD=y

View File

@ -157,6 +157,8 @@ extern phys_addr_t __phys_addr_symbol(unsigned long x);
#define page_to_bus(page) (page_to_phys(page))
#define phys_to_page(paddr) (pfn_to_page(phys_to_pfn(paddr)))
#define sym_to_pfn(x) __phys_to_pfn(__pa_symbol(x))
#ifdef CONFIG_FLATMEM
#define pfn_valid(pfn) \
(((pfn) >= ARCH_PFN_OFFSET) && (((pfn) - ARCH_PFN_OFFSET) < max_mapnr))

View File

@ -75,7 +75,8 @@
#endif
#ifdef CONFIG_XIP_KERNEL
#define XIP_OFFSET SZ_8M
#define XIP_OFFSET SZ_32M
#define XIP_OFFSET_MASK (SZ_32M - 1)
#else
#define XIP_OFFSET 0
#endif
@ -97,7 +98,8 @@
#ifdef CONFIG_XIP_KERNEL
#define XIP_FIXUP(addr) ({ \
uintptr_t __a = (uintptr_t)(addr); \
(__a >= CONFIG_XIP_PHYS_ADDR && __a < CONFIG_XIP_PHYS_ADDR + SZ_16M) ? \
(__a >= CONFIG_XIP_PHYS_ADDR && \
__a < CONFIG_XIP_PHYS_ADDR + XIP_OFFSET * 2) ? \
__a - CONFIG_XIP_PHYS_ADDR + CONFIG_PHYS_RAM_BASE - XIP_OFFSET :\
__a; \
})

View File

@ -8,30 +8,19 @@
#ifndef _ASM_RISCV_VDSO_H
#define _ASM_RISCV_VDSO_H
/*
* All systems with an MMU have a VDSO, but systems without an MMU don't
* support shared libraries and therefore don't have one.
*/
#ifdef CONFIG_MMU
#include <linux/types.h>
/*
* All systems with an MMU have a VDSO, but systems without an MMU don't
* support shared libraries and therefore don't have one.
*/
#ifdef CONFIG_MMU
#define __VVAR_PAGES 1
#define __VVAR_PAGES 2
#ifndef __ASSEMBLY__
#include <generated/vdso-offsets.h>
#define VDSO_SYMBOL(base, name) \
(void __user *)((unsigned long)(base) + __vdso_##name##_offset)
#endif /* CONFIG_MMU */
#endif /* !__ASSEMBLY__ */
#endif /* CONFIG_MMU */

View File

@ -76,6 +76,13 @@ static __always_inline const struct vdso_data *__arch_get_vdso_data(void)
return _vdso_data;
}
#ifdef CONFIG_TIME_NS
static __always_inline
const struct vdso_data *__arch_get_timens_vdso_data(const struct vdso_data *vd)
{
return _timens_data;
}
#endif
#endif /* !__ASSEMBLY__ */
#endif /* __ASM_VDSO_GETTIMEOFDAY_H */

View File

@ -20,10 +20,20 @@
REG_L t0, _xip_fixup
add \reg, \reg, t0
.endm
.macro XIP_FIXUP_FLASH_OFFSET reg
la t1, __data_loc
li t0, XIP_OFFSET_MASK
and t1, t1, t0
li t1, XIP_OFFSET
sub t0, t0, t1
sub \reg, \reg, t0
.endm
_xip_fixup: .dword CONFIG_PHYS_RAM_BASE - CONFIG_XIP_PHYS_ADDR - XIP_OFFSET
#else
.macro XIP_FIXUP_OFFSET reg
.endm
.macro XIP_FIXUP_FLASH_OFFSET reg
.endm
#endif /* CONFIG_XIP_KERNEL */
__HEAD
@ -267,6 +277,7 @@ pmp_done:
la a3, hart_lottery
mv a2, a3
XIP_FIXUP_OFFSET a2
XIP_FIXUP_FLASH_OFFSET a3
lw t1, (a3)
amoswap.w t0, t1, (a2)
/* first time here if hart_lottery in RAM is not set */
@ -305,6 +316,7 @@ clear_bss_done:
XIP_FIXUP_OFFSET sp
#ifdef CONFIG_BUILTIN_DTB
la a0, __dtb_start
XIP_FIXUP_OFFSET a0
#else
mv a0, s1
#endif /* CONFIG_BUILTIN_DTB */

View File

@ -12,7 +12,7 @@ static void default_power_off(void)
wait_for_interrupt();
}
void (*pm_power_off)(void) = default_power_off;
void (*pm_power_off)(void) = NULL;
EXPORT_SYMBOL(pm_power_off);
void machine_restart(char *cmd)
@ -23,10 +23,16 @@ void machine_restart(char *cmd)
void machine_halt(void)
{
pm_power_off();
if (pm_power_off != NULL)
pm_power_off();
else
default_power_off();
}
void machine_power_off(void)
{
pm_power_off();
if (pm_power_off != NULL)
pm_power_off();
else
default_power_off();
}

View File

@ -13,6 +13,7 @@
#include <linux/err.h>
#include <asm/page.h>
#include <asm/vdso.h>
#include <linux/time_namespace.h>
#ifdef CONFIG_GENERIC_TIME_VSYSCALL
#include <vdso/datapage.h>
@ -25,14 +26,12 @@ extern char vdso_start[], vdso_end[];
enum vvar_pages {
VVAR_DATA_PAGE_OFFSET,
VVAR_TIMENS_PAGE_OFFSET,
VVAR_NR_PAGES,
};
#define VVAR_SIZE (VVAR_NR_PAGES << PAGE_SHIFT)
static unsigned int vdso_pages __ro_after_init;
static struct page **vdso_pagelist __ro_after_init;
/*
* The vDSO data page.
*/
@ -42,83 +41,228 @@ static union {
} vdso_data_store __page_aligned_data;
struct vdso_data *vdso_data = &vdso_data_store.data;
static int __init vdso_init(void)
struct __vdso_info {
const char *name;
const char *vdso_code_start;
const char *vdso_code_end;
unsigned long vdso_pages;
/* Data Mapping */
struct vm_special_mapping *dm;
/* Code Mapping */
struct vm_special_mapping *cm;
};
static struct __vdso_info vdso_info __ro_after_init = {
.name = "vdso",
.vdso_code_start = vdso_start,
.vdso_code_end = vdso_end,
};
static int vdso_mremap(const struct vm_special_mapping *sm,
struct vm_area_struct *new_vma)
{
unsigned int i;
vdso_pages = (vdso_end - vdso_start) >> PAGE_SHIFT;
vdso_pagelist =
kcalloc(vdso_pages + VVAR_NR_PAGES, sizeof(struct page *), GFP_KERNEL);
if (unlikely(vdso_pagelist == NULL)) {
pr_err("vdso: pagelist allocation failed\n");
return -ENOMEM;
}
for (i = 0; i < vdso_pages; i++) {
struct page *pg;
pg = virt_to_page(vdso_start + (i << PAGE_SHIFT));
vdso_pagelist[i] = pg;
}
vdso_pagelist[i] = virt_to_page(vdso_data);
current->mm->context.vdso = (void *)new_vma->vm_start;
return 0;
}
static int __init __vdso_init(void)
{
unsigned int i;
struct page **vdso_pagelist;
unsigned long pfn;
if (memcmp(vdso_info.vdso_code_start, "\177ELF", 4)) {
pr_err("vDSO is not a valid ELF object!\n");
return -EINVAL;
}
vdso_info.vdso_pages = (
vdso_info.vdso_code_end -
vdso_info.vdso_code_start) >>
PAGE_SHIFT;
vdso_pagelist = kcalloc(vdso_info.vdso_pages,
sizeof(struct page *),
GFP_KERNEL);
if (vdso_pagelist == NULL)
return -ENOMEM;
/* Grab the vDSO code pages. */
pfn = sym_to_pfn(vdso_info.vdso_code_start);
for (i = 0; i < vdso_info.vdso_pages; i++)
vdso_pagelist[i] = pfn_to_page(pfn + i);
vdso_info.cm->pages = vdso_pagelist;
return 0;
}
#ifdef CONFIG_TIME_NS
struct vdso_data *arch_get_vdso_data(void *vvar_page)
{
return (struct vdso_data *)(vvar_page);
}
/*
* The vvar mapping contains data for a specific time namespace, so when a task
* changes namespace we must unmap its vvar data for the old namespace.
* Subsequent faults will map in data for the new namespace.
*
* For more details see timens_setup_vdso_data().
*/
int vdso_join_timens(struct task_struct *task, struct time_namespace *ns)
{
struct mm_struct *mm = task->mm;
struct vm_area_struct *vma;
mmap_read_lock(mm);
for (vma = mm->mmap; vma; vma = vma->vm_next) {
unsigned long size = vma->vm_end - vma->vm_start;
if (vma_is_special_mapping(vma, vdso_info.dm))
zap_page_range(vma, vma->vm_start, size);
}
mmap_read_unlock(mm);
return 0;
}
static struct page *find_timens_vvar_page(struct vm_area_struct *vma)
{
if (likely(vma->vm_mm == current->mm))
return current->nsproxy->time_ns->vvar_page;
/*
* VM_PFNMAP | VM_IO protect .fault() handler from being called
* through interfaces like /proc/$pid/mem or
* process_vm_{readv,writev}() as long as there's no .access()
* in special_mapping_vmops.
* For more details see check_vma_flags() and __access_remote_vm()
*/
WARN(1, "vvar_page accessed remotely");
return NULL;
}
#else
static struct page *find_timens_vvar_page(struct vm_area_struct *vma)
{
return NULL;
}
#endif
static vm_fault_t vvar_fault(const struct vm_special_mapping *sm,
struct vm_area_struct *vma, struct vm_fault *vmf)
{
struct page *timens_page = find_timens_vvar_page(vma);
unsigned long pfn;
switch (vmf->pgoff) {
case VVAR_DATA_PAGE_OFFSET:
if (timens_page)
pfn = page_to_pfn(timens_page);
else
pfn = sym_to_pfn(vdso_data);
break;
#ifdef CONFIG_TIME_NS
case VVAR_TIMENS_PAGE_OFFSET:
/*
* If a task belongs to a time namespace then a namespace
* specific VVAR is mapped with the VVAR_DATA_PAGE_OFFSET and
* the real VVAR page is mapped with the VVAR_TIMENS_PAGE_OFFSET
* offset.
* See also the comment near timens_setup_vdso_data().
*/
if (!timens_page)
return VM_FAULT_SIGBUS;
pfn = sym_to_pfn(vdso_data);
break;
#endif /* CONFIG_TIME_NS */
default:
return VM_FAULT_SIGBUS;
}
return vmf_insert_pfn(vma, vmf->address, pfn);
}
enum rv_vdso_map {
RV_VDSO_MAP_VVAR,
RV_VDSO_MAP_VDSO,
};
static struct vm_special_mapping rv_vdso_maps[] __ro_after_init = {
[RV_VDSO_MAP_VVAR] = {
.name = "[vvar]",
.fault = vvar_fault,
},
[RV_VDSO_MAP_VDSO] = {
.name = "[vdso]",
.mremap = vdso_mremap,
},
};
static int __init vdso_init(void)
{
vdso_info.dm = &rv_vdso_maps[RV_VDSO_MAP_VVAR];
vdso_info.cm = &rv_vdso_maps[RV_VDSO_MAP_VDSO];
return __vdso_init();
}
arch_initcall(vdso_init);
int arch_setup_additional_pages(struct linux_binprm *bprm,
int uses_interp)
static int __setup_additional_pages(struct mm_struct *mm,
struct linux_binprm *bprm,
int uses_interp)
{
struct mm_struct *mm = current->mm;
unsigned long vdso_base, vdso_len;
int ret;
unsigned long vdso_base, vdso_text_len, vdso_mapping_len;
void *ret;
BUILD_BUG_ON(VVAR_NR_PAGES != __VVAR_PAGES);
vdso_len = (vdso_pages + VVAR_NR_PAGES) << PAGE_SHIFT;
vdso_text_len = vdso_info.vdso_pages << PAGE_SHIFT;
/* Be sure to map the data page */
vdso_mapping_len = vdso_text_len + VVAR_SIZE;
vdso_base = get_unmapped_area(NULL, 0, vdso_mapping_len, 0, 0);
if (IS_ERR_VALUE(vdso_base)) {
ret = ERR_PTR(vdso_base);
goto up_fail;
}
ret = _install_special_mapping(mm, vdso_base, VVAR_SIZE,
(VM_READ | VM_MAYREAD | VM_PFNMAP), vdso_info.dm);
if (IS_ERR(ret))
goto up_fail;
vdso_base += VVAR_SIZE;
mm->context.vdso = (void *)vdso_base;
ret =
_install_special_mapping(mm, vdso_base, vdso_text_len,
(VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC),
vdso_info.cm);
if (IS_ERR(ret))
goto up_fail;
return 0;
up_fail:
mm->context.vdso = NULL;
return PTR_ERR(ret);
}
int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
{
struct mm_struct *mm = current->mm;
int ret;
if (mmap_write_lock_killable(mm))
return -EINTR;
vdso_base = get_unmapped_area(NULL, 0, vdso_len, 0, 0);
if (IS_ERR_VALUE(vdso_base)) {
ret = vdso_base;
goto end;
}
mm->context.vdso = NULL;
ret = install_special_mapping(mm, vdso_base, VVAR_SIZE,
(VM_READ | VM_MAYREAD), &vdso_pagelist[vdso_pages]);
if (unlikely(ret))
goto end;
ret =
install_special_mapping(mm, vdso_base + VVAR_SIZE,
vdso_pages << PAGE_SHIFT,
(VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC),
vdso_pagelist);
if (unlikely(ret))
goto end;
/*
* Put vDSO base into mm struct. We need to do this before calling
* install_special_mapping or the perf counter mmap tracking code
* will fail to recognise it as a vDSO (since arch_vma_name fails).
*/
mm->context.vdso = (void *)vdso_base + VVAR_SIZE;
end:
ret = __setup_additional_pages(mm, bprm, uses_interp);
mmap_write_unlock(mm);
return ret;
}
const char *arch_vma_name(struct vm_area_struct *vma)
{
if (vma->vm_mm && (vma->vm_start == (long)vma->vm_mm->context.vdso))
return "[vdso]";
if (vma->vm_mm && (vma->vm_start ==
(long)vma->vm_mm->context.vdso - VVAR_SIZE))
return "[vdso_data]";
return NULL;
}

View File

@ -10,6 +10,9 @@ OUTPUT_ARCH(riscv)
SECTIONS
{
PROVIDE(_vdso_data = . - __VVAR_PAGES * PAGE_SIZE);
#ifdef CONFIG_TIME_NS
PROVIDE(_timens_data = _vdso_data + PAGE_SIZE);
#endif
. = SIZEOF_HEADERS;
.hash : { *(.hash) } :text

View File

@ -64,8 +64,11 @@ SECTIONS
/*
* From this point, stuff is considered writable and will be copied to RAM
*/
__data_loc = ALIGN(16); /* location in file */
. = LOAD_OFFSET + XIP_OFFSET; /* location in memory */
__data_loc = ALIGN(PAGE_SIZE); /* location in file */
. = KERNEL_LINK_ADDR + XIP_OFFSET; /* location in memory */
#undef LOAD_OFFSET
#define LOAD_OFFSET (KERNEL_LINK_ADDR + XIP_OFFSET - (__data_loc & XIP_OFFSET_MASK))
_sdata = .; /* Start of data section */
_data = .;
@ -96,7 +99,6 @@ SECTIONS
KEEP(*(__soc_builtin_dtb_table))
__soc_builtin_dtb_table_end = .;
}
PERCPU_SECTION(L1_CACHE_BYTES)
. = ALIGN(8);
.alternative : {
@ -122,6 +124,8 @@ SECTIONS
BSS_SECTION(PAGE_SIZE, PAGE_SIZE, 0)
PERCPU_SECTION(L1_CACHE_BYTES)
.rel.dyn : AT(ADDR(.rel.dyn) - LOAD_OFFSET) {
*(.rel.dyn*)
}

View File

@ -233,8 +233,10 @@ static int __init asids_init(void)
local_flush_tlb_all();
/* Pre-compute ASID details */
num_asids = 1 << asid_bits;
asid_mask = num_asids - 1;
if (asid_bits) {
num_asids = 1 << asid_bits;
asid_mask = num_asids - 1;
}
/*
* Use ASID allocator only if number of HW ASIDs are
@ -255,7 +257,7 @@ static int __init asids_init(void)
pr_info("ASID allocator using %lu bits (%lu entries)\n",
asid_bits, num_asids);
} else {
pr_info("ASID allocator disabled\n");
pr_info("ASID allocator disabled (%lu bits)\n", asid_bits);
}
return 0;

View File

@ -41,7 +41,7 @@ phys_addr_t phys_ram_base __ro_after_init;
EXPORT_SYMBOL(phys_ram_base);
#ifdef CONFIG_XIP_KERNEL
extern char _xiprom[], _exiprom[];
extern char _xiprom[], _exiprom[], __data_loc;
#endif
unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
@ -454,10 +454,9 @@ static uintptr_t __init best_map_size(phys_addr_t base, phys_addr_t size)
/* called from head.S with MMU off */
asmlinkage void __init __copy_data(void)
{
void *from = (void *)(&_sdata);
void *end = (void *)(&_end);
void *from = (void *)(&__data_loc);
void *to = (void *)CONFIG_PHYS_RAM_BASE;
size_t sz = (size_t)(end - from + 1);
size_t sz = (size_t)((uintptr_t)(&_end) - (uintptr_t)(&_sdata));
memcpy(to, from, sz);
}

View File

@ -210,9 +210,11 @@ int zpci_deconfigure_device(struct zpci_dev *zdev);
void zpci_device_reserved(struct zpci_dev *zdev);
bool zpci_is_device_configured(struct zpci_dev *zdev);
int zpci_hot_reset_device(struct zpci_dev *zdev);
int zpci_register_ioat(struct zpci_dev *, u8, u64, u64, u64);
int zpci_unregister_ioat(struct zpci_dev *, u8);
void zpci_remove_reserved_devices(void);
void zpci_update_fh(struct zpci_dev *zdev, u32 fh);
/* CLP */
int clp_setup_writeback_mio(void);
@ -294,8 +296,10 @@ void zpci_debug_exit(void);
void zpci_debug_init_device(struct zpci_dev *, const char *);
void zpci_debug_exit_device(struct zpci_dev *);
/* Error reporting */
/* Error handling */
int zpci_report_error(struct pci_dev *, struct zpci_report_error_header *);
int zpci_clear_error_state(struct zpci_dev *zdev);
int zpci_reset_load_store_blocked(struct zpci_dev *zdev);
#ifdef CONFIG_NUMA

View File

@ -687,8 +687,10 @@ static void cpumf_pmu_stop(struct perf_event *event, int flags)
false);
if (cfdiag_diffctr(cpuhw, event->hw.config_base))
cfdiag_push_sample(event, cpuhw);
} else
} else if (cpuhw->flags & PMU_F_RESERVED) {
/* Only update when PMU not hotplugged off */
hw_perf_event_update(event);
}
hwc->state |= PERF_HES_UPTODATE;
}
}

View File

@ -481,6 +481,34 @@ static void zpci_free_iomap(struct zpci_dev *zdev, int entry)
spin_unlock(&zpci_iomap_lock);
}
static void zpci_do_update_iomap_fh(struct zpci_dev *zdev, u32 fh)
{
int bar, idx;
spin_lock(&zpci_iomap_lock);
for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
if (!zdev->bars[bar].size)
continue;
idx = zdev->bars[bar].map_idx;
if (!zpci_iomap_start[idx].count)
continue;
WRITE_ONCE(zpci_iomap_start[idx].fh, zdev->fh);
}
spin_unlock(&zpci_iomap_lock);
}
void zpci_update_fh(struct zpci_dev *zdev, u32 fh)
{
if (!fh || zdev->fh == fh)
return;
zdev->fh = fh;
if (zpci_use_mio(zdev))
return;
if (zdev->has_resources && zdev_enabled(zdev))
zpci_do_update_iomap_fh(zdev, fh);
}
static struct resource *__alloc_res(struct zpci_dev *zdev, unsigned long start,
unsigned long size, unsigned long flags)
{
@ -668,7 +696,7 @@ int zpci_enable_device(struct zpci_dev *zdev)
if (clp_enable_fh(zdev, &fh, ZPCI_NR_DMA_SPACES))
rc = -EIO;
else
zdev->fh = fh;
zpci_update_fh(zdev, fh);
return rc;
}
@ -679,14 +707,14 @@ int zpci_disable_device(struct zpci_dev *zdev)
cc = clp_disable_fh(zdev, &fh);
if (!cc) {
zdev->fh = fh;
zpci_update_fh(zdev, fh);
} else if (cc == CLP_RC_SETPCIFN_ALRDY) {
pr_info("Disabling PCI function %08x had no effect as it was already disabled\n",
zdev->fid);
/* Function is already disabled - update handle */
rc = clp_refresh_fh(zdev->fid, &fh);
if (!rc) {
zdev->fh = fh;
zpci_update_fh(zdev, fh);
rc = -EINVAL;
}
} else {
@ -695,6 +723,65 @@ int zpci_disable_device(struct zpci_dev *zdev)
return rc;
}
/**
* zpci_hot_reset_device - perform a reset of the given zPCI function
* @zdev: the slot which should be reset
*
* Performs a low level reset of the zPCI function. The reset is low level in
* the sense that the zPCI function can be reset without detaching it from the
* common PCI subsystem. The reset may be performed while under control of
* either DMA or IOMMU APIs in which case the existing DMA/IOMMU translation
* table is reinstated at the end of the reset.
*
* After the reset the function's internal state is reset to an initial state
* equivalent to its state during boot when first probing a driver.
* Consequently, after reset the PCI function requires re-initialization via the
* common PCI code, including re-enabling IRQs via pci_alloc_irq_vectors()
* and enabling the function via e.g. pci_enable_device_flags(). The caller
* must guard against concurrent reset attempts.
*
* In most cases this function should not be called directly but through
* pci_reset_function() or pci_reset_bus() which handle the save/restore and
* locking.
*
* Return: 0 on success and an error value otherwise
*/
int zpci_hot_reset_device(struct zpci_dev *zdev)
{
int rc;
zpci_dbg(3, "rst fid:%x, fh:%x\n", zdev->fid, zdev->fh);
if (zdev_enabled(zdev)) {
/* Disables device access, DMAs and IRQs (reset state) */
rc = zpci_disable_device(zdev);
/*
* Due to a z/VM vs LPAR inconsistency in the error state, the
* FH may indicate an enabled device while disable says the
* device is already disabled; don't treat this as an error here.
*/
if (rc == -EINVAL)
rc = 0;
if (rc)
return rc;
}
rc = zpci_enable_device(zdev);
if (rc)
return rc;
if (zdev->dma_table)
rc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
(u64)zdev->dma_table);
else
rc = zpci_dma_init_device(zdev);
if (rc) {
zpci_disable_device(zdev);
return rc;
}
return 0;
}
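
As a hedged aside (not part of this patch), the kernel-doc above points callers at the generic PCI core rather than this function; a driver-side sketch, with invented "example" names, would look roughly like:

/* Hypothetical driver-side sketch; "example" names are placeholders. */
#include <linux/pci.h>

static int example_try_recover(struct pci_dev *pdev)
{
        /*
         * pci_reset_function() handles save/restore of config state and
         * locking; the comment above names it (and pci_reset_bus()) as the
         * intended entry points for triggering a function reset.
         */
        return pci_reset_function(pdev);
}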
/**
* zpci_create_device() - Create a new zpci_dev and add it to the zbus
* @fid: Function ID of the device to be created
@ -776,7 +863,7 @@ int zpci_scan_configured_device(struct zpci_dev *zdev, u32 fh)
{
int rc;
zdev->fh = fh;
zpci_update_fh(zdev, fh);
/* the PCI function will be scanned once function 0 appears */
if (!zdev->zbus->bus)
return 0;
@ -903,6 +990,59 @@ int zpci_report_error(struct pci_dev *pdev,
}
EXPORT_SYMBOL(zpci_report_error);
/**
* zpci_clear_error_state() - Clears the zPCI error state of the device
* @zdev: The zdev for which the zPCI error state should be reset
*
* Clear the zPCI error state of the device. If clearing the zPCI error state
* fails the device is left in the error state. In this case it may make sense
* to call zpci_io_perm_failure() on the associated pdev if it exists.
*
* Returns: 0 on success, -EIO otherwise
*/
int zpci_clear_error_state(struct zpci_dev *zdev)
{
u64 req = ZPCI_CREATE_REQ(zdev->fh, 0, ZPCI_MOD_FC_RESET_ERROR);
struct zpci_fib fib = {0};
u8 status;
int cc;
cc = zpci_mod_fc(req, &fib, &status);
if (cc) {
zpci_dbg(3, "ces fid:%x, cc:%d, status:%x\n", zdev->fid, cc, status);
return -EIO;
}
return 0;
}
/**
* zpci_reset_load_store_blocked() - Re-enables L/S from error state
* @zdev: The zdev for which to unblock load/store access
*
* Re-enables load/store access for a PCI function in the error state while
* keeping DMA blocked. In this state drivers can poke MMIO space to determine
* if error recovery is possible while catching any rogue DMA access from the
* device.
*
* Returns: 0 on success, -EIO otherwise
*/
int zpci_reset_load_store_blocked(struct zpci_dev *zdev)
{
u64 req = ZPCI_CREATE_REQ(zdev->fh, 0, ZPCI_MOD_FC_RESET_BLOCK);
struct zpci_fib fib = {0};
u8 status;
int cc;
cc = zpci_mod_fc(req, &fib, &status);
if (cc) {
zpci_dbg(3, "rls fid:%x, cc:%d, status:%x\n", zdev->fid, cc, status);
return -EIO;
}
return 0;
}
static int zpci_mem_init(void)
{
BUILD_BUG_ON(!is_power_of_2(__alignof__(struct zpci_fmb)) ||

View File

@ -47,18 +47,223 @@ struct zpci_ccdf_avail {
u16 pec; /* PCI event code */
} __packed;
static inline bool ers_result_indicates_abort(pci_ers_result_t ers_res)
{
switch (ers_res) {
case PCI_ERS_RESULT_CAN_RECOVER:
case PCI_ERS_RESULT_RECOVERED:
case PCI_ERS_RESULT_NEED_RESET:
return false;
default:
return true;
}
}
static bool is_passed_through(struct zpci_dev *zdev)
{
return zdev->s390_domain;
}
static bool is_driver_supported(struct pci_driver *driver)
{
if (!driver || !driver->err_handler)
return false;
if (!driver->err_handler->error_detected)
return false;
if (!driver->err_handler->slot_reset)
return false;
if (!driver->err_handler->resume)
return false;
return true;
}
static pci_ers_result_t zpci_event_notify_error_detected(struct pci_dev *pdev,
struct pci_driver *driver)
{
pci_ers_result_t ers_res = PCI_ERS_RESULT_DISCONNECT;
ers_res = driver->err_handler->error_detected(pdev, pdev->error_state);
if (ers_result_indicates_abort(ers_res))
pr_info("%s: Automatic recovery failed after initial reporting\n", pci_name(pdev));
else if (ers_res == PCI_ERS_RESULT_NEED_RESET)
pr_debug("%s: Driver needs reset to recover\n", pci_name(pdev));
return ers_res;
}
static pci_ers_result_t zpci_event_do_error_state_clear(struct pci_dev *pdev,
struct pci_driver *driver)
{
pci_ers_result_t ers_res = PCI_ERS_RESULT_DISCONNECT;
struct zpci_dev *zdev = to_zpci(pdev);
int rc;
pr_info("%s: Unblocking device access for examination\n", pci_name(pdev));
rc = zpci_reset_load_store_blocked(zdev);
if (rc) {
pr_err("%s: Unblocking device access failed\n", pci_name(pdev));
/* Let's try a full reset instead */
return PCI_ERS_RESULT_NEED_RESET;
}
if (driver->err_handler->mmio_enabled) {
ers_res = driver->err_handler->mmio_enabled(pdev);
if (ers_result_indicates_abort(ers_res)) {
pr_info("%s: Automatic recovery failed after MMIO re-enable\n",
pci_name(pdev));
return ers_res;
} else if (ers_res == PCI_ERS_RESULT_NEED_RESET) {
pr_debug("%s: Driver needs reset to recover\n", pci_name(pdev));
return ers_res;
}
}
pr_debug("%s: Unblocking DMA\n", pci_name(pdev));
rc = zpci_clear_error_state(zdev);
if (!rc) {
pdev->error_state = pci_channel_io_normal;
} else {
pr_err("%s: Unblocking DMA failed\n", pci_name(pdev));
/* Let's try a full reset instead */
return PCI_ERS_RESULT_NEED_RESET;
}
return ers_res;
}
static pci_ers_result_t zpci_event_do_reset(struct pci_dev *pdev,
struct pci_driver *driver)
{
pci_ers_result_t ers_res = PCI_ERS_RESULT_DISCONNECT;
pr_info("%s: Initiating reset\n", pci_name(pdev));
if (zpci_hot_reset_device(to_zpci(pdev))) {
pr_err("%s: The reset request failed\n", pci_name(pdev));
return ers_res;
}
pdev->error_state = pci_channel_io_normal;
ers_res = driver->err_handler->slot_reset(pdev);
if (ers_result_indicates_abort(ers_res)) {
pr_info("%s: Automatic recovery failed after slot reset\n", pci_name(pdev));
return ers_res;
}
return ers_res;
}
/* zpci_event_attempt_error_recovery - Try to recover the given PCI function
* @pdev: PCI function to recover currently in the error state
*
* We follow the scheme outlined in Documentation/PCI/pci-error-recovery.rst.
* With the simplification that recovery always happens per function
* and the platform determines which functions are affected for
* multi-function devices.
*/
static pci_ers_result_t zpci_event_attempt_error_recovery(struct pci_dev *pdev)
{
pci_ers_result_t ers_res = PCI_ERS_RESULT_DISCONNECT;
struct pci_driver *driver;
/*
* Ensure that the PCI function is not removed concurrently, no driver
* is unbound or probed and that userspace can't access its
* configuration space while we perform recovery.
*/
pci_dev_lock(pdev);
if (pdev->error_state == pci_channel_io_perm_failure) {
ers_res = PCI_ERS_RESULT_DISCONNECT;
goto out_unlock;
}
pdev->error_state = pci_channel_io_frozen;
if (is_passed_through(to_zpci(pdev))) {
pr_info("%s: Cannot be recovered in the host because it is a pass-through device\n",
pci_name(pdev));
goto out_unlock;
}
driver = to_pci_driver(pdev->dev.driver);
if (!is_driver_supported(driver)) {
if (!driver)
pr_info("%s: Cannot be recovered because no driver is bound to the device\n",
pci_name(pdev));
else
pr_info("%s: The %s driver bound to the device does not support error recovery\n",
pci_name(pdev),
driver->name);
goto out_unlock;
}
ers_res = zpci_event_notify_error_detected(pdev, driver);
if (ers_result_indicates_abort(ers_res))
goto out_unlock;
if (ers_res == PCI_ERS_RESULT_CAN_RECOVER) {
ers_res = zpci_event_do_error_state_clear(pdev, driver);
if (ers_result_indicates_abort(ers_res))
goto out_unlock;
}
if (ers_res == PCI_ERS_RESULT_NEED_RESET)
ers_res = zpci_event_do_reset(pdev, driver);
if (ers_res != PCI_ERS_RESULT_RECOVERED) {
pr_err("%s: Automatic recovery failed; operator intervention is required\n",
pci_name(pdev));
goto out_unlock;
}
pr_info("%s: The device is ready to resume operations\n", pci_name(pdev));
if (driver->err_handler->resume)
driver->err_handler->resume(pdev);
out_unlock:
pci_dev_unlock(pdev);
return ers_res;
}
/* zpci_event_io_failure - Report PCI channel failure state to driver
* @pdev: PCI function for which to report
* @es: PCI channel failure state to report
*/
static void zpci_event_io_failure(struct pci_dev *pdev, pci_channel_state_t es)
{
struct pci_driver *driver;
pci_dev_lock(pdev);
pdev->error_state = es;
/**
* While vfio-pci's error_detected callback notifies user-space, QEMU
* reacts to this by freezing the guest. In an s390 environment PCI
* errors are rarely fatal so this is overkill. Instead in the future
* we will inject the error event and let the guest recover the device
* itself.
*/
if (is_passed_through(to_zpci(pdev)))
goto out;
driver = to_pci_driver(pdev->dev.driver);
if (driver && driver->err_handler && driver->err_handler->error_detected)
driver->err_handler->error_detected(pdev, pdev->error_state);
out:
pci_dev_unlock(pdev);
}
static void __zpci_event_error(struct zpci_ccdf_err *ccdf)
{
struct zpci_dev *zdev = get_zdev_by_fid(ccdf->fid);
struct pci_dev *pdev = NULL;
pci_ers_result_t ers_res;
zpci_dbg(3, "err fid:%x, fh:%x, pec:%x\n",
ccdf->fid, ccdf->fh, ccdf->pec);
zpci_err("error CCDF:\n");
zpci_err_hex(ccdf, sizeof(*ccdf));
if (zdev)
pdev = pci_get_slot(zdev->zbus->bus, zdev->devfn);
if (zdev) {
zpci_update_fh(zdev, ccdf->fh);
if (zdev->zbus->bus)
pdev = pci_get_slot(zdev->zbus->bus, zdev->devfn);
}
pr_err("%s: Event 0x%x reports an error for PCI function 0x%x\n",
pdev ? pci_name(pdev) : "n/a", ccdf->pec, ccdf->fid);
@ -66,7 +271,20 @@ static void __zpci_event_error(struct zpci_ccdf_err *ccdf)
if (!pdev)
return;
pdev->error_state = pci_channel_io_perm_failure;
switch (ccdf->pec) {
case 0x003a: /* Service Action or Error Recovery Successful */
ers_res = zpci_event_attempt_error_recovery(pdev);
if (ers_res != PCI_ERS_RESULT_RECOVERED)
zpci_event_io_failure(pdev, pci_channel_io_perm_failure);
break;
default:
/*
* Mark as frozen not permanently failed because the device
* could be subsequently recovered by the platform.
*/
zpci_event_io_failure(pdev, pci_channel_io_frozen);
break;
}
pci_dev_put(pdev);
}
@ -78,7 +296,7 @@ void zpci_event_error(void *data)
static void zpci_event_hard_deconfigured(struct zpci_dev *zdev, u32 fh)
{
zdev->fh = fh;
zpci_update_fh(zdev, fh);
/* Give the driver a hint that the function is
* already unusable.
*/
@ -121,7 +339,7 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
if (!zdev)
zpci_create_device(ccdf->fid, ccdf->fh, ZPCI_FN_STATE_STANDBY);
else
zdev->fh = ccdf->fh;
zpci_update_fh(zdev, ccdf->fh);
break;
case 0x0303: /* Deconfiguration requested */
if (zdev) {
@ -130,7 +348,7 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
*/
if (zdev->state != ZPCI_FN_STATE_CONFIGURED)
break;
zdev->fh = ccdf->fh;
zpci_update_fh(zdev, ccdf->fh);
zpci_deconfigure_device(zdev);
}
break;

View File

@ -163,7 +163,7 @@ static inline int zpci_load_fh(u64 *data, const volatile void __iomem *addr,
unsigned long len)
{
struct zpci_iomap_entry *entry = &zpci_iomap_start[ZPCI_IDX(addr)];
u64 req = ZPCI_CREATE_REQ(entry->fh, entry->bar, len);
u64 req = ZPCI_CREATE_REQ(READ_ONCE(entry->fh), entry->bar, len);
return __zpci_load(data, req, ZPCI_OFFSET(addr));
}
@ -244,7 +244,7 @@ static inline int zpci_store_fh(const volatile void __iomem *addr, u64 data,
unsigned long len)
{
struct zpci_iomap_entry *entry = &zpci_iomap_start[ZPCI_IDX(addr)];
u64 req = ZPCI_CREATE_REQ(entry->fh, entry->bar, len);
u64 req = ZPCI_CREATE_REQ(READ_ONCE(entry->fh), entry->bar, len);
return __zpci_store(data, req, ZPCI_OFFSET(addr));
}

View File

@ -387,6 +387,15 @@ void arch_teardown_msi_irqs(struct pci_dev *pdev)
airq_iv_free(zpci_ibv[0], zdev->msi_first_bit, zdev->msi_nr_irqs);
}
void arch_restore_msi_irqs(struct pci_dev *pdev)
{
struct zpci_dev *zdev = to_zpci(pdev);
if (!zdev->irqs_registered)
zpci_set_irq(zdev);
default_restore_msi_irqs(pdev);
}
static struct airq_struct zpci_airq = {
.handler = zpci_floating_irq_handler,
.isc = PCI_ISC,

View File

@ -38,7 +38,6 @@
#define __KVM_HAVE_ARCH_VCPU_DEBUGFS
#define KVM_MAX_VCPUS 1024
#define KVM_SOFT_MAX_VCPUS 710
/*
* In x86, the VCPU ID corresponds to the APIC ID, and APIC IDs
@ -725,6 +724,7 @@ struct kvm_vcpu_arch {
int cpuid_nent;
struct kvm_cpuid_entry2 *cpuid_entries;
u32 kvm_cpuid_base;
u64 reserved_gpa_bits;
int maxphyaddr;
@ -748,7 +748,7 @@ struct kvm_vcpu_arch {
u8 preempted;
u64 msr_val;
u64 last_steal;
struct gfn_to_pfn_cache cache;
struct gfn_to_hva_cache cache;
} st;
u64 l1_tsc_offset;
@ -1034,6 +1034,7 @@ struct kvm_x86_msr_filter {
#define APICV_INHIBIT_REASON_IRQWIN 3
#define APICV_INHIBIT_REASON_PIT_REINJ 4
#define APICV_INHIBIT_REASON_X2APIC 5
#define APICV_INHIBIT_REASON_BLOCKIRQ 6
struct kvm_arch {
unsigned long n_used_mmu_pages;
@ -1476,6 +1477,7 @@ struct kvm_x86_ops {
int (*mem_enc_reg_region)(struct kvm *kvm, struct kvm_enc_region *argp);
int (*mem_enc_unreg_region)(struct kvm *kvm, struct kvm_enc_region *argp);
int (*vm_copy_enc_context_from)(struct kvm *kvm, unsigned int source_fd);
int (*vm_move_enc_context_from)(struct kvm *kvm, unsigned int source_fd);
int (*get_msr_feature)(struct kvm_msr_entry *entry);

View File

@ -83,6 +83,18 @@ static inline long kvm_hypercall4(unsigned int nr, unsigned long p1,
return ret;
}
static inline long kvm_sev_hypercall3(unsigned int nr, unsigned long p1,
unsigned long p2, unsigned long p3)
{
long ret;
asm volatile("vmmcall"
: "=a"(ret)
: "a"(nr), "b"(p1), "c"(p2), "d"(p3)
: "memory");
return ret;
}
#ifdef CONFIG_KVM_GUEST
void kvmclock_init(void);
void kvmclock_disable(void);
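
For context, a hedged sketch of how guest code might use the new helper together with the KVM_HC_MAP_GPA_RANGE hypercall; the flag names are assumed to come from include/uapi/linux/kvm_para.h and the function itself is illustrative only:

/* Hypothetical guest-side sketch; not taken from this diff. */
#include <linux/kvm_para.h>

static void example_mark_range_decrypted(unsigned long gpa, unsigned long npages)
{
        /* tell the hypervisor these guest pages are now shared (decrypted) */
        kvm_sev_hypercall3(KVM_HC_MAP_GPA_RANGE, gpa, npages,
                           KVM_MAP_GPA_RANGE_DECRYPTED |
                           KVM_MAP_GPA_RANGE_PAGE_SZ_4K);
}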

View File

@ -44,6 +44,8 @@ void __init sme_enable(struct boot_params *bp);
int __init early_set_memory_decrypted(unsigned long vaddr, unsigned long size);
int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size);
void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages,
bool enc);
void __init mem_encrypt_free_decrypted_mem(void);
@ -78,6 +80,8 @@ static inline int __init
early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 0; }
static inline int __init
early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0; }
static inline void __init
early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, bool enc) {}
static inline void mem_encrypt_free_decrypted_mem(void) { }

View File

@ -97,6 +97,12 @@ static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
PVOP_VCALL1(mmu.exit_mmap, mm);
}
static inline void notify_page_enc_status_changed(unsigned long pfn,
int npages, bool enc)
{
PVOP_VCALL3(mmu.notify_page_enc_status_changed, pfn, npages, enc);
}
#ifdef CONFIG_PARAVIRT_XXL
static inline void load_sp0(unsigned long sp0)
{

View File

@ -168,6 +168,7 @@ struct pv_mmu_ops {
/* Hook for intercepting the destruction of an mm_struct. */
void (*exit_mmap)(struct mm_struct *mm);
void (*notify_page_enc_status_changed)(unsigned long pfn, int npages, bool enc);
#ifdef CONFIG_PARAVIRT_XXL
struct paravirt_callee_save read_cr2;

View File

@ -806,11 +806,14 @@ static inline u32 amd_get_nodes_per_socket(void) { return 0; }
static inline u32 amd_get_highest_perf(void) { return 0; }
#endif
#define for_each_possible_hypervisor_cpuid_base(function) \
for (function = 0x40000000; function < 0x40010000; function += 0x100)
static inline uint32_t hypervisor_cpuid_base(const char *sig, uint32_t leaves)
{
uint32_t base, eax, signature[3];
for (base = 0x40000000; base < 0x40010000; base += 0x100) {
for_each_possible_hypervisor_cpuid_base(base) {
cpuid(base, &eax, &signature[0], &signature[1], &signature[2]);
if (!memcmp(sig, signature, 12) &&

View File

@ -83,6 +83,7 @@ int set_pages_rw(struct page *page, int numpages);
int set_direct_map_invalid_noflush(struct page *page);
int set_direct_map_default_noflush(struct page *page);
bool kernel_page_present(struct page *page);
void notify_range_enc_status_changed(unsigned long vaddr, int npages, bool enc);
extern int kernel_set_to_readonly;

View File

@ -8,6 +8,7 @@
* should be used to determine that a VM is running under KVM.
*/
#define KVM_CPUID_SIGNATURE 0x40000000
#define KVM_SIGNATURE "KVMKVMKVM\0\0\0"
/* This CPUID returns two feature bitmaps in eax, edx. Before enabling
* a particular paravirtualization, the appropriate feature bit should

View File

@ -28,6 +28,7 @@
#include <linux/swait.h>
#include <linux/syscore_ops.h>
#include <linux/cc_platform.h>
#include <linux/efi.h>
#include <asm/timer.h>
#include <asm/cpu.h>
#include <asm/traps.h>
@ -41,6 +42,7 @@
#include <asm/ptrace.h>
#include <asm/reboot.h>
#include <asm/svm.h>
#include <asm/e820/api.h>
DEFINE_STATIC_KEY_FALSE(kvm_async_pf_enabled);
@ -434,6 +436,8 @@ static void kvm_guest_cpu_offline(bool shutdown)
kvm_disable_steal_time();
if (kvm_para_has_feature(KVM_FEATURE_PV_EOI))
wrmsrl(MSR_KVM_PV_EOI_EN, 0);
if (kvm_para_has_feature(KVM_FEATURE_MIGRATION_CONTROL))
wrmsrl(MSR_KVM_MIGRATION_CONTROL, 0);
kvm_pv_disable_apf();
if (!shutdown)
apf_task_wake_all();
@ -548,6 +552,55 @@ static void kvm_send_ipi_mask_allbutself(const struct cpumask *mask, int vector)
__send_ipi_mask(local_mask, vector);
}
static int __init setup_efi_kvm_sev_migration(void)
{
efi_char16_t efi_sev_live_migration_enabled[] = L"SevLiveMigrationEnabled";
efi_guid_t efi_variable_guid = AMD_SEV_MEM_ENCRYPT_GUID;
efi_status_t status;
unsigned long size;
bool enabled;
if (!cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) ||
!kvm_para_has_feature(KVM_FEATURE_MIGRATION_CONTROL))
return 0;
if (!efi_enabled(EFI_BOOT))
return 0;
if (!efi_enabled(EFI_RUNTIME_SERVICES)) {
pr_info("%s : EFI runtime services are not enabled\n", __func__);
return 0;
}
size = sizeof(enabled);
/* Get variable contents into buffer */
status = efi.get_variable(efi_sev_live_migration_enabled,
&efi_variable_guid, NULL, &size, &enabled);
if (status == EFI_NOT_FOUND) {
pr_info("%s : EFI live migration variable not found\n", __func__);
return 0;
}
if (status != EFI_SUCCESS) {
pr_info("%s : EFI variable retrieval failed\n", __func__);
return 0;
}
if (enabled == 0) {
pr_info("%s: live migration disabled in EFI\n", __func__);
return 0;
}
pr_info("%s : live migration enabled in EFI\n", __func__);
wrmsrl(MSR_KVM_MIGRATION_CONTROL, KVM_MIGRATION_READY);
return 1;
}
late_initcall(setup_efi_kvm_sev_migration);
/*
* Set the IPI entry points
*/
@ -756,7 +809,7 @@ static noinline uint32_t __kvm_cpuid_base(void)
return 0; /* So we don't blow up on old processors */
if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
return hypervisor_cpuid_base("KVMKVMKVM\0\0\0", 0);
return hypervisor_cpuid_base(KVM_SIGNATURE, 0);
return 0;
}
@ -806,8 +859,62 @@ static bool __init kvm_msi_ext_dest_id(void)
return kvm_para_has_feature(KVM_FEATURE_MSI_EXT_DEST_ID);
}
static void kvm_sev_hc_page_enc_status(unsigned long pfn, int npages, bool enc)
{
kvm_sev_hypercall3(KVM_HC_MAP_GPA_RANGE, pfn << PAGE_SHIFT, npages,
KVM_MAP_GPA_RANGE_ENC_STAT(enc) | KVM_MAP_GPA_RANGE_PAGE_SZ_4K);
}
static void __init kvm_init_platform(void)
{
if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) &&
kvm_para_has_feature(KVM_FEATURE_MIGRATION_CONTROL)) {
unsigned long nr_pages;
int i;
pv_ops.mmu.notify_page_enc_status_changed =
kvm_sev_hc_page_enc_status;
/*
* Reset the host's shared pages list related to kernel
* specific page encryption status settings before we load a
* new kernel by kexec. Reset the page encryption status
* during early boot instead of just before kexec to avoid SMP
* races during kvm_pv_guest_cpu_reboot().
* NOTE: We cannot reset the complete shared pages list
* here as we need to retain the UEFI/OVMF firmware
* specific settings.
*/
for (i = 0; i < e820_table->nr_entries; i++) {
struct e820_entry *entry = &e820_table->entries[i];
if (entry->type != E820_TYPE_RAM)
continue;
nr_pages = DIV_ROUND_UP(entry->size, PAGE_SIZE);
kvm_sev_hypercall3(KVM_HC_MAP_GPA_RANGE, entry->addr,
nr_pages,
KVM_MAP_GPA_RANGE_ENCRYPTED | KVM_MAP_GPA_RANGE_PAGE_SZ_4K);
}
/*
* Ensure that _bss_decrypted section is marked as decrypted in the
* shared pages list.
*/
nr_pages = DIV_ROUND_UP(__end_bss_decrypted - __start_bss_decrypted,
PAGE_SIZE);
early_set_mem_enc_dec_hypercall((unsigned long)__start_bss_decrypted,
nr_pages, 0);
/*
* If not booted using EFI, enable Live migration support.
*/
if (!efi_enabled(EFI_BOOT))
wrmsrl(MSR_KVM_MIGRATION_CONTROL,
KVM_MIGRATION_READY);
}
kvmclock_init();
x86_platform.apic_post_init = kvm_apic_init;
}

View File

@ -337,6 +337,7 @@ struct paravirt_patch_template pv_ops = {
(void (*)(struct mmu_gather *, void *))tlb_remove_page,
.mmu.exit_mmap = paravirt_nop,
.mmu.notify_page_enc_status_changed = paravirt_nop,
#ifdef CONFIG_PARAVIRT_XXL
.mmu.read_cr2 = __PV_IS_CALLEE_SAVE(pv_native_read_cr2),

View File

@ -106,7 +106,7 @@ void save_v86_state(struct kernel_vm86_regs *regs, int retval)
*/
local_irq_enable();
BUG_ON(!vm86 || !vm86->user_vm86);
BUG_ON(!vm86);
set_flags(regs->pt.flags, VEFLAGS, X86_EFLAGS_VIF | vm86->veflags_mask);
user = vm86->user_vm86;

View File

@ -99,11 +99,45 @@ static int kvm_check_cpuid(struct kvm_cpuid_entry2 *entries, int nent)
return 0;
}
static void kvm_update_kvm_cpuid_base(struct kvm_vcpu *vcpu)
{
u32 function;
struct kvm_cpuid_entry2 *entry;
vcpu->arch.kvm_cpuid_base = 0;
for_each_possible_hypervisor_cpuid_base(function) {
entry = kvm_find_cpuid_entry(vcpu, function, 0);
if (entry) {
u32 signature[3];
signature[0] = entry->ebx;
signature[1] = entry->ecx;
signature[2] = entry->edx;
BUILD_BUG_ON(sizeof(signature) > sizeof(KVM_SIGNATURE));
if (!memcmp(signature, KVM_SIGNATURE, sizeof(signature))) {
vcpu->arch.kvm_cpuid_base = function;
break;
}
}
}
}
struct kvm_cpuid_entry2 *kvm_find_kvm_cpuid_features(struct kvm_vcpu *vcpu)
{
u32 base = vcpu->arch.kvm_cpuid_base;
if (!base)
return NULL;
return kvm_find_cpuid_entry(vcpu, base | KVM_CPUID_FEATURES, 0);
}
void kvm_update_pv_runtime(struct kvm_vcpu *vcpu)
{
struct kvm_cpuid_entry2 *best;
best = kvm_find_cpuid_entry(vcpu, KVM_CPUID_FEATURES, 0);
struct kvm_cpuid_entry2 *best = kvm_find_kvm_cpuid_features(vcpu);
/*
* save the feature bitmap to avoid cpuid lookup for every PV
@ -142,7 +176,7 @@ void kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu)
cpuid_entry_has(best, X86_FEATURE_XSAVEC)))
best->ebx = xstate_required_size(vcpu->arch.xcr0, true);
best = kvm_find_cpuid_entry(vcpu, KVM_CPUID_FEATURES, 0);
best = kvm_find_kvm_cpuid_features(vcpu);
if (kvm_hlt_in_guest(vcpu->kvm) && best &&
(best->eax & (1 << KVM_FEATURE_PV_UNHALT)))
best->eax &= ~(1 << KVM_FEATURE_PV_UNHALT);
@ -239,6 +273,26 @@ u64 kvm_vcpu_reserved_gpa_bits_raw(struct kvm_vcpu *vcpu)
return rsvd_bits(cpuid_maxphyaddr(vcpu), 63);
}
static int kvm_set_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2 *e2,
int nent)
{
int r;
r = kvm_check_cpuid(e2, nent);
if (r)
return r;
kvfree(vcpu->arch.cpuid_entries);
vcpu->arch.cpuid_entries = e2;
vcpu->arch.cpuid_nent = nent;
kvm_update_kvm_cpuid_base(vcpu);
kvm_update_cpuid_runtime(vcpu);
kvm_vcpu_after_set_cpuid(vcpu);
return 0;
}
/* when an old userspace process fills a new kernel module */
int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu,
struct kvm_cpuid *cpuid,
@ -275,18 +329,9 @@ int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu,
e2[i].padding[2] = 0;
}
r = kvm_check_cpuid(e2, cpuid->nent);
if (r) {
r = kvm_set_cpuid(vcpu, e2, cpuid->nent);
if (r)
kvfree(e2);
goto out_free_cpuid;
}
kvfree(vcpu->arch.cpuid_entries);
vcpu->arch.cpuid_entries = e2;
vcpu->arch.cpuid_nent = cpuid->nent;
kvm_update_cpuid_runtime(vcpu);
kvm_vcpu_after_set_cpuid(vcpu);
out_free_cpuid:
kvfree(e);
@ -310,20 +355,11 @@ int kvm_vcpu_ioctl_set_cpuid2(struct kvm_vcpu *vcpu,
return PTR_ERR(e2);
}
r = kvm_check_cpuid(e2, cpuid->nent);
if (r) {
r = kvm_set_cpuid(vcpu, e2, cpuid->nent);
if (r)
kvfree(e2);
return r;
}
kvfree(vcpu->arch.cpuid_entries);
vcpu->arch.cpuid_entries = e2;
vcpu->arch.cpuid_nent = cpuid->nent;
kvm_update_cpuid_runtime(vcpu);
kvm_vcpu_after_set_cpuid(vcpu);
return 0;
return r;
}
int kvm_vcpu_ioctl_get_cpuid2(struct kvm_vcpu *vcpu,
@ -871,8 +907,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
}
break;
case KVM_CPUID_SIGNATURE: {
static const char signature[12] = "KVMKVMKVM\0\0";
const u32 *sigptr = (const u32 *)signature;
const u32 *sigptr = (const u32 *)KVM_SIGNATURE;
entry->eax = KVM_CPUID_FEATURES;
entry->ebx = sigptr[0];
entry->ecx = sigptr[1];

View File

@ -1472,7 +1472,7 @@ static int kvm_hv_set_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host)
if (!(data & HV_X64_MSR_VP_ASSIST_PAGE_ENABLE)) {
hv_vcpu->hv_vapic = data;
if (kvm_lapic_enable_pv_eoi(vcpu, 0, 0))
if (kvm_lapic_set_pv_eoi(vcpu, 0, 0))
return 1;
break;
}
@ -1490,7 +1490,7 @@ static int kvm_hv_set_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data, bool host)
return 1;
hv_vcpu->hv_vapic = data;
kvm_vcpu_mark_page_dirty(vcpu, gfn);
if (kvm_lapic_enable_pv_eoi(vcpu,
if (kvm_lapic_set_pv_eoi(vcpu,
gfn_to_gpa(gfn) | KVM_MSR_ENABLED,
sizeof(struct hv_vp_assist_page)))
return 1;

View File

@ -2856,25 +2856,30 @@ int kvm_hv_vapic_msr_read(struct kvm_vcpu *vcpu, u32 reg, u64 *data)
return 0;
}
int kvm_lapic_enable_pv_eoi(struct kvm_vcpu *vcpu, u64 data, unsigned long len)
int kvm_lapic_set_pv_eoi(struct kvm_vcpu *vcpu, u64 data, unsigned long len)
{
u64 addr = data & ~KVM_MSR_ENABLED;
struct gfn_to_hva_cache *ghc = &vcpu->arch.pv_eoi.data;
unsigned long new_len;
int ret;
if (!IS_ALIGNED(addr, 4))
return 1;
if (data & KVM_MSR_ENABLED) {
if (addr == ghc->gpa && len <= ghc->len)
new_len = ghc->len;
else
new_len = len;
ret = kvm_gfn_to_hva_cache_init(vcpu->kvm, ghc, addr, new_len);
if (ret)
return ret;
}
vcpu->arch.pv_eoi.msr_val = data;
if (!pv_eoi_enabled(vcpu))
return 0;
if (addr == ghc->gpa && len <= ghc->len)
new_len = ghc->len;
else
new_len = len;
return kvm_gfn_to_hva_cache_init(vcpu->kvm, ghc, addr, new_len);
return 0;
}
int kvm_apic_accept_events(struct kvm_vcpu *vcpu)

View File

@ -127,7 +127,7 @@ int kvm_x2apic_msr_read(struct kvm_vcpu *vcpu, u32 msr, u64 *data);
int kvm_hv_vapic_msr_write(struct kvm_vcpu *vcpu, u32 msr, u64 data);
int kvm_hv_vapic_msr_read(struct kvm_vcpu *vcpu, u32 msr, u64 *data);
int kvm_lapic_enable_pv_eoi(struct kvm_vcpu *vcpu, u64 data, unsigned long len);
int kvm_lapic_set_pv_eoi(struct kvm_vcpu *vcpu, u64 data, unsigned long len);
void kvm_lapic_exit(void);
#define VEC_POS(v) ((v) & (32 - 1))

View File

@ -3191,17 +3191,17 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
new_spte |= PT_WRITABLE_MASK;
/*
* Do not fix write-permission on the large spte. Since
* we only dirty the first page into the dirty-bitmap in
* Do not fix write-permission on the large spte when
* dirty logging is enabled. Since we only dirty the
* first page into the dirty-bitmap in
* fast_pf_fix_direct_spte(), other pages are missed
* if its slot has dirty logging enabled.
*
* Instead, we let the slow page fault path create a
* normal spte to fix the access.
*
* See the comments in kvm_arch_commit_memory_region().
*/
if (sp->role.level > PG_LEVEL_4K)
if (sp->role.level > PG_LEVEL_4K &&
kvm_slot_dirty_track_enabled(fault->slot))
break;
}

View File

@ -897,7 +897,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
struct kvm_page_fault *fault,
struct tdp_iter *iter)
{
struct kvm_mmu_page *sp = sptep_to_sp(iter->sptep);
struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(iter->sptep));
u64 new_spte;
int ret = RET_PF_FIXED;
bool wrprot = false;

View File

@ -319,7 +319,7 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
}
/* check if idx is a valid index to access PMU */
int kvm_pmu_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
bool kvm_pmu_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
{
return kvm_x86_ops.pmu_ops->is_valid_rdpmc_ecx(vcpu, idx);
}

View File

@ -32,7 +32,7 @@ struct kvm_pmu_ops {
struct kvm_pmc *(*rdpmc_ecx_to_pmc)(struct kvm_vcpu *vcpu,
unsigned int idx, u64 *mask);
struct kvm_pmc *(*msr_idx_to_pmc)(struct kvm_vcpu *vcpu, u32 msr);
int (*is_valid_rdpmc_ecx)(struct kvm_vcpu *vcpu, unsigned int idx);
bool (*is_valid_rdpmc_ecx)(struct kvm_vcpu *vcpu, unsigned int idx);
bool (*is_valid_msr)(struct kvm_vcpu *vcpu, u32 msr);
int (*get_msr)(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
int (*set_msr)(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
@ -149,7 +149,7 @@ void reprogram_counter(struct kvm_pmu *pmu, int pmc_idx);
void kvm_pmu_deliver_pmi(struct kvm_vcpu *vcpu);
void kvm_pmu_handle_event(struct kvm_vcpu *vcpu);
int kvm_pmu_rdpmc(struct kvm_vcpu *vcpu, unsigned pmc, u64 *data);
int kvm_pmu_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx);
bool kvm_pmu_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx);
bool kvm_pmu_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr);
int kvm_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info);

View File

@ -904,7 +904,8 @@ bool svm_check_apicv_inhibit_reasons(ulong bit)
BIT(APICV_INHIBIT_REASON_NESTED) |
BIT(APICV_INHIBIT_REASON_IRQWIN) |
BIT(APICV_INHIBIT_REASON_PIT_REINJ) |
BIT(APICV_INHIBIT_REASON_X2APIC);
BIT(APICV_INHIBIT_REASON_X2APIC) |
BIT(APICV_INHIBIT_REASON_BLOCKIRQ);
return supported & BIT(bit);
}

View File

@ -181,14 +181,13 @@ static struct kvm_pmc *amd_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
return get_gp_pmc_amd(pmu, base + pmc_idx, PMU_TYPE_COUNTER);
}
/* returns 0 if idx's corresponding MSR exists; otherwise returns 1. */
static int amd_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
static bool amd_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
{
struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
idx &= ~(3u << 30);
return (idx >= pmu->nr_arch_gp_counters);
return idx < pmu->nr_arch_gp_counters;
}
/* idx is the ECX register of RDPMC instruction */

View File

@ -120,16 +120,26 @@ static bool __sev_recycle_asids(int min_asid, int max_asid)
return true;
}
static int sev_misc_cg_try_charge(struct kvm_sev_info *sev)
{
enum misc_res_type type = sev->es_active ? MISC_CG_RES_SEV_ES : MISC_CG_RES_SEV;
return misc_cg_try_charge(type, sev->misc_cg, 1);
}
static void sev_misc_cg_uncharge(struct kvm_sev_info *sev)
{
enum misc_res_type type = sev->es_active ? MISC_CG_RES_SEV_ES : MISC_CG_RES_SEV;
misc_cg_uncharge(type, sev->misc_cg, 1);
}
static int sev_asid_new(struct kvm_sev_info *sev)
{
int asid, min_asid, max_asid, ret;
bool retry = true;
enum misc_res_type type;
type = sev->es_active ? MISC_CG_RES_SEV_ES : MISC_CG_RES_SEV;
WARN_ON(sev->misc_cg);
sev->misc_cg = get_current_misc_cg();
ret = misc_cg_try_charge(type, sev->misc_cg, 1);
ret = sev_misc_cg_try_charge(sev);
if (ret) {
put_misc_cg(sev->misc_cg);
sev->misc_cg = NULL;
@ -162,7 +172,7 @@ static int sev_asid_new(struct kvm_sev_info *sev)
return asid;
e_uncharge:
misc_cg_uncharge(type, sev->misc_cg, 1);
sev_misc_cg_uncharge(sev);
put_misc_cg(sev->misc_cg);
sev->misc_cg = NULL;
return ret;
@ -179,7 +189,6 @@ static void sev_asid_free(struct kvm_sev_info *sev)
{
struct svm_cpu_data *sd;
int cpu;
enum misc_res_type type;
mutex_lock(&sev_bitmap_lock);
@ -192,8 +201,7 @@ static void sev_asid_free(struct kvm_sev_info *sev)
mutex_unlock(&sev_bitmap_lock);
type = sev->es_active ? MISC_CG_RES_SEV_ES : MISC_CG_RES_SEV;
misc_cg_uncharge(type, sev->misc_cg, 1);
sev_misc_cg_uncharge(sev);
put_misc_cg(sev->misc_cg);
sev->misc_cg = NULL;
}
@ -590,7 +598,7 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
* traditional VMSA as it has been built so far (in prep
* for LAUNCH_UPDATE_VMSA) to be the initial SEV-ES state.
*/
memcpy(svm->vmsa, save, sizeof(*save));
memcpy(svm->sev_es.vmsa, save, sizeof(*save));
return 0;
}
@ -612,11 +620,11 @@ static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu,
* the VMSA memory content (i.e it will write the same memory region
* with the guest's key), so invalidate it first.
*/
clflush_cache_range(svm->vmsa, PAGE_SIZE);
clflush_cache_range(svm->sev_es.vmsa, PAGE_SIZE);
vmsa.reserved = 0;
vmsa.handle = to_kvm_svm(kvm)->sev_info.handle;
vmsa.address = __sme_pa(svm->vmsa);
vmsa.address = __sme_pa(svm->sev_es.vmsa);
vmsa.len = PAGE_SIZE;
ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error);
if (ret)
@ -1536,6 +1544,201 @@ static bool cmd_allowed_from_miror(u32 cmd_id)
return false;
}
static int sev_lock_for_migration(struct kvm *kvm)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
/*
* Bail if this VM is already involved in a migration to avoid deadlock
* between two VMs trying to migrate to/from each other.
*/
if (atomic_cmpxchg_acquire(&sev->migration_in_progress, 0, 1))
return -EBUSY;
mutex_lock(&kvm->lock);
return 0;
}
static void sev_unlock_after_migration(struct kvm *kvm)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
mutex_unlock(&kvm->lock);
atomic_set_release(&sev->migration_in_progress, 0);
}
static int sev_lock_vcpus_for_migration(struct kvm *kvm)
{
struct kvm_vcpu *vcpu;
int i, j;
kvm_for_each_vcpu(i, vcpu, kvm) {
if (mutex_lock_killable(&vcpu->mutex))
goto out_unlock;
}
return 0;
out_unlock:
kvm_for_each_vcpu(j, vcpu, kvm) {
if (i == j)
break;
mutex_unlock(&vcpu->mutex);
}
return -EINTR;
}
static void sev_unlock_vcpus_for_migration(struct kvm *kvm)
{
struct kvm_vcpu *vcpu;
int i;
kvm_for_each_vcpu(i, vcpu, kvm) {
mutex_unlock(&vcpu->mutex);
}
}
static void sev_migrate_from(struct kvm_sev_info *dst,
struct kvm_sev_info *src)
{
dst->active = true;
dst->asid = src->asid;
dst->handle = src->handle;
dst->pages_locked = src->pages_locked;
src->asid = 0;
src->active = false;
src->handle = 0;
src->pages_locked = 0;
INIT_LIST_HEAD(&dst->regions_list);
list_replace_init(&src->regions_list, &dst->regions_list);
}
static int sev_es_migrate_from(struct kvm *dst, struct kvm *src)
{
int i;
struct kvm_vcpu *dst_vcpu, *src_vcpu;
struct vcpu_svm *dst_svm, *src_svm;
if (atomic_read(&src->online_vcpus) != atomic_read(&dst->online_vcpus))
return -EINVAL;
kvm_for_each_vcpu(i, src_vcpu, src) {
if (!src_vcpu->arch.guest_state_protected)
return -EINVAL;
}
kvm_for_each_vcpu(i, src_vcpu, src) {
src_svm = to_svm(src_vcpu);
dst_vcpu = kvm_get_vcpu(dst, i);
dst_svm = to_svm(dst_vcpu);
/*
* Transfer VMSA and GHCB state to the destination. Nullify and
* clear source fields as appropriate; the state now belongs to
* the destination.
*/
memcpy(&dst_svm->sev_es, &src_svm->sev_es, sizeof(src_svm->sev_es));
dst_svm->vmcb->control.ghcb_gpa = src_svm->vmcb->control.ghcb_gpa;
dst_svm->vmcb->control.vmsa_pa = src_svm->vmcb->control.vmsa_pa;
dst_vcpu->arch.guest_state_protected = true;
memset(&src_svm->sev_es, 0, sizeof(src_svm->sev_es));
src_svm->vmcb->control.ghcb_gpa = INVALID_PAGE;
src_svm->vmcb->control.vmsa_pa = INVALID_PAGE;
src_vcpu->arch.guest_state_protected = false;
}
to_kvm_svm(src)->sev_info.es_active = false;
to_kvm_svm(dst)->sev_info.es_active = true;
return 0;
}
int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd)
{
struct kvm_sev_info *dst_sev = &to_kvm_svm(kvm)->sev_info;
struct kvm_sev_info *src_sev, *cg_cleanup_sev;
struct file *source_kvm_file;
struct kvm *source_kvm;
bool charged = false;
int ret;
ret = sev_lock_for_migration(kvm);
if (ret)
return ret;
if (sev_guest(kvm)) {
ret = -EINVAL;
goto out_unlock;
}
source_kvm_file = fget(source_fd);
if (!file_is_kvm(source_kvm_file)) {
ret = -EBADF;
goto out_fput;
}
source_kvm = source_kvm_file->private_data;
ret = sev_lock_for_migration(source_kvm);
if (ret)
goto out_fput;
if (!sev_guest(source_kvm)) {
ret = -EINVAL;
goto out_source;
}
src_sev = &to_kvm_svm(source_kvm)->sev_info;
dst_sev->misc_cg = get_current_misc_cg();
cg_cleanup_sev = dst_sev;
if (dst_sev->misc_cg != src_sev->misc_cg) {
ret = sev_misc_cg_try_charge(dst_sev);
if (ret)
goto out_dst_cgroup;
charged = true;
}
ret = sev_lock_vcpus_for_migration(kvm);
if (ret)
goto out_dst_cgroup;
ret = sev_lock_vcpus_for_migration(source_kvm);
if (ret)
goto out_dst_vcpu;
if (sev_es_guest(source_kvm)) {
ret = sev_es_migrate_from(kvm, source_kvm);
if (ret)
goto out_source_vcpu;
}
sev_migrate_from(dst_sev, src_sev);
kvm_vm_dead(source_kvm);
cg_cleanup_sev = src_sev;
ret = 0;
out_source_vcpu:
sev_unlock_vcpus_for_migration(source_kvm);
out_dst_vcpu:
sev_unlock_vcpus_for_migration(kvm);
out_dst_cgroup:
/* Operates on the source on success, on the destination on failure. */
if (charged)
sev_misc_cg_uncharge(cg_cleanup_sev);
put_misc_cg(cg_cleanup_sev->misc_cg);
cg_cleanup_sev->misc_cg = NULL;
out_source:
sev_unlock_after_migration(source_kvm);
out_fput:
if (source_kvm_file)
fput(source_kvm_file);
out_unlock:
sev_unlock_after_migration(kvm);
return ret;
}
int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
{
struct kvm_sev_cmd sev_cmd;
@ -2038,16 +2241,16 @@ void sev_free_vcpu(struct kvm_vcpu *vcpu)
svm = to_svm(vcpu);
if (vcpu->arch.guest_state_protected)
sev_flush_guest_memory(svm, svm->vmsa, PAGE_SIZE);
__free_page(virt_to_page(svm->vmsa));
sev_flush_guest_memory(svm, svm->sev_es.vmsa, PAGE_SIZE);
__free_page(virt_to_page(svm->sev_es.vmsa));
if (svm->ghcb_sa_free)
kfree(svm->ghcb_sa);
if (svm->sev_es.ghcb_sa_free)
kfree(svm->sev_es.ghcb_sa);
}
static void dump_ghcb(struct vcpu_svm *svm)
{
struct ghcb *ghcb = svm->ghcb;
struct ghcb *ghcb = svm->sev_es.ghcb;
unsigned int nbits;
/* Re-use the dump_invalid_vmcb module parameter */
@ -2073,7 +2276,7 @@ static void dump_ghcb(struct vcpu_svm *svm)
static void sev_es_sync_to_ghcb(struct vcpu_svm *svm)
{
struct kvm_vcpu *vcpu = &svm->vcpu;
struct ghcb *ghcb = svm->ghcb;
struct ghcb *ghcb = svm->sev_es.ghcb;
/*
* The GHCB protocol so far allows for the following data
@ -2093,7 +2296,7 @@ static void sev_es_sync_from_ghcb(struct vcpu_svm *svm)
{
struct vmcb_control_area *control = &svm->vmcb->control;
struct kvm_vcpu *vcpu = &svm->vcpu;
struct ghcb *ghcb = svm->ghcb;
struct ghcb *ghcb = svm->sev_es.ghcb;
u64 exit_code;
/*
@ -2140,7 +2343,7 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
struct ghcb *ghcb;
u64 exit_code = 0;
ghcb = svm->ghcb;
ghcb = svm->sev_es.ghcb;
/* Only GHCB Usage code 0 is supported */
if (ghcb->ghcb_usage)
@ -2258,33 +2461,34 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
void sev_es_unmap_ghcb(struct vcpu_svm *svm)
{
if (!svm->ghcb)
if (!svm->sev_es.ghcb)
return;
if (svm->ghcb_sa_free) {
if (svm->sev_es.ghcb_sa_free) {
/*
* The scratch area lives outside the GHCB, so there is a
* buffer that, depending on the operation performed, may
* need to be synced, then freed.
*/
if (svm->ghcb_sa_sync) {
if (svm->sev_es.ghcb_sa_sync) {
kvm_write_guest(svm->vcpu.kvm,
ghcb_get_sw_scratch(svm->ghcb),
svm->ghcb_sa, svm->ghcb_sa_len);
svm->ghcb_sa_sync = false;
ghcb_get_sw_scratch(svm->sev_es.ghcb),
svm->sev_es.ghcb_sa,
svm->sev_es.ghcb_sa_len);
svm->sev_es.ghcb_sa_sync = false;
}
kfree(svm->ghcb_sa);
svm->ghcb_sa = NULL;
svm->ghcb_sa_free = false;
kfree(svm->sev_es.ghcb_sa);
svm->sev_es.ghcb_sa = NULL;
svm->sev_es.ghcb_sa_free = false;
}
trace_kvm_vmgexit_exit(svm->vcpu.vcpu_id, svm->ghcb);
trace_kvm_vmgexit_exit(svm->vcpu.vcpu_id, svm->sev_es.ghcb);
sev_es_sync_to_ghcb(svm);
kvm_vcpu_unmap(&svm->vcpu, &svm->ghcb_map, true);
svm->ghcb = NULL;
kvm_vcpu_unmap(&svm->vcpu, &svm->sev_es.ghcb_map, true);
svm->sev_es.ghcb = NULL;
}
void pre_sev_run(struct vcpu_svm *svm, int cpu)
@ -2314,7 +2518,7 @@ void pre_sev_run(struct vcpu_svm *svm, int cpu)
static bool setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len)
{
struct vmcb_control_area *control = &svm->vmcb->control;
struct ghcb *ghcb = svm->ghcb;
struct ghcb *ghcb = svm->sev_es.ghcb;
u64 ghcb_scratch_beg, ghcb_scratch_end;
u64 scratch_gpa_beg, scratch_gpa_end;
void *scratch_va;
@ -2350,7 +2554,7 @@ static bool setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len)
return false;
}
scratch_va = (void *)svm->ghcb;
scratch_va = (void *)svm->sev_es.ghcb;
scratch_va += (scratch_gpa_beg - control->ghcb_gpa);
} else {
/*
@ -2380,12 +2584,12 @@ static bool setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len)
* the vCPU next time (i.e. a read was requested so the data
* must be written back to the guest memory).
*/
svm->ghcb_sa_sync = sync;
svm->ghcb_sa_free = true;
svm->sev_es.ghcb_sa_sync = sync;
svm->sev_es.ghcb_sa_free = true;
}
svm->ghcb_sa = scratch_va;
svm->ghcb_sa_len = len;
svm->sev_es.ghcb_sa = scratch_va;
svm->sev_es.ghcb_sa_len = len;
return true;
}
@ -2504,15 +2708,15 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
return -EINVAL;
}
if (kvm_vcpu_map(vcpu, ghcb_gpa >> PAGE_SHIFT, &svm->ghcb_map)) {
if (kvm_vcpu_map(vcpu, ghcb_gpa >> PAGE_SHIFT, &svm->sev_es.ghcb_map)) {
/* Unable to map GHCB from guest */
vcpu_unimpl(vcpu, "vmgexit: error mapping GHCB [%#llx] from guest\n",
ghcb_gpa);
return -EINVAL;
}
svm->ghcb = svm->ghcb_map.hva;
ghcb = svm->ghcb_map.hva;
svm->sev_es.ghcb = svm->sev_es.ghcb_map.hva;
ghcb = svm->sev_es.ghcb_map.hva;
trace_kvm_vmgexit_enter(vcpu->vcpu_id, ghcb);
@ -2535,7 +2739,7 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
ret = kvm_sev_es_mmio_read(vcpu,
control->exit_info_1,
control->exit_info_2,
svm->ghcb_sa);
svm->sev_es.ghcb_sa);
break;
case SVM_VMGEXIT_MMIO_WRITE:
if (!setup_vmgexit_scratch(svm, false, control->exit_info_2))
@ -2544,7 +2748,7 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
ret = kvm_sev_es_mmio_write(vcpu,
control->exit_info_1,
control->exit_info_2,
svm->ghcb_sa);
svm->sev_es.ghcb_sa);
break;
case SVM_VMGEXIT_NMI_COMPLETE:
ret = svm_invoke_exit_handler(vcpu, SVM_EXIT_IRET);
@ -2604,7 +2808,8 @@ int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in)
if (!setup_vmgexit_scratch(svm, in, bytes))
return -EINVAL;
return kvm_sev_es_string_io(&svm->vcpu, size, port, svm->ghcb_sa, count, in);
return kvm_sev_es_string_io(&svm->vcpu, size, port, svm->sev_es.ghcb_sa,
count, in);
}
void sev_es_init_vmcb(struct vcpu_svm *svm)
@ -2619,7 +2824,7 @@ void sev_es_init_vmcb(struct vcpu_svm *svm)
* VMCB page. Do not include the encryption mask on the VMSA physical
* address since hardware will access it using the guest key.
*/
svm->vmcb->control.vmsa_pa = __pa(svm->vmsa);
svm->vmcb->control.vmsa_pa = __pa(svm->sev_es.vmsa);
/* Can't intercept CR register access, HV can't modify CR registers */
svm_clr_intercept(svm, INTERCEPT_CR0_READ);
@ -2691,8 +2896,8 @@ void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
struct vcpu_svm *svm = to_svm(vcpu);
/* First SIPI: Use the values as initially set by the VMM */
if (!svm->received_first_sipi) {
svm->received_first_sipi = true;
if (!svm->sev_es.received_first_sipi) {
svm->sev_es.received_first_sipi = true;
return;
}
@ -2701,8 +2906,8 @@ void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
* the guest will set the CS and RIP. Set SW_EXIT_INFO_2 to a
* non-zero value.
*/
if (!svm->ghcb)
if (!svm->sev_es.ghcb)
return;
ghcb_set_sw_exit_info_2(svm->ghcb, 1);
ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, 1);
}
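
The sev_lock_for_migration()/sev_unlock_after_migration() pair added above takes an atomic "migration in progress" flag before acquiring kvm->lock, so two VMs migrating to/from each other fail fast with -EBUSY instead of deadlocking on each other's lock. A minimal stand-alone sketch of that pattern, using placeholder types and names rather than kernel API (illustration only, not part of this diff):

#include <errno.h>
#include <pthread.h>
#include <stdatomic.h>

struct vm {
	pthread_mutex_t lock;             /* stands in for kvm->lock */
	atomic_int migration_in_progress;
};

/* Fail fast instead of blocking if the VM is already part of a migration,
 * so two cross-migrating VMs can never end up waiting on each other. */
static int lock_for_migration(struct vm *vm)
{
	int expected = 0;

	if (!atomic_compare_exchange_strong(&vm->migration_in_progress,
					    &expected, 1))
		return -EBUSY;

	pthread_mutex_lock(&vm->lock);
	return 0;
}

static void unlock_after_migration(struct vm *vm)
{
	pthread_mutex_unlock(&vm->lock);
	atomic_store(&vm->migration_in_progress, 0);
}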

View File

@ -1452,7 +1452,7 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
svm_switch_vmcb(svm, &svm->vmcb01);
if (vmsa_page)
svm->vmsa = page_address(vmsa_page);
svm->sev_es.vmsa = page_address(vmsa_page);
svm->guest_state_loaded = false;
@ -2835,11 +2835,11 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
static int svm_complete_emulated_msr(struct kvm_vcpu *vcpu, int err)
{
struct vcpu_svm *svm = to_svm(vcpu);
if (!err || !sev_es_guest(vcpu->kvm) || WARN_ON_ONCE(!svm->ghcb))
if (!err || !sev_es_guest(vcpu->kvm) || WARN_ON_ONCE(!svm->sev_es.ghcb))
return kvm_complete_insn_gp(vcpu, err);
ghcb_set_sw_exit_info_1(svm->ghcb, 1);
ghcb_set_sw_exit_info_2(svm->ghcb,
ghcb_set_sw_exit_info_1(svm->sev_es.ghcb, 1);
ghcb_set_sw_exit_info_2(svm->sev_es.ghcb,
X86_TRAP_GP |
SVM_EVTINJ_TYPE_EXEPT |
SVM_EVTINJ_VALID);
@ -3121,11 +3121,6 @@ static int invpcid_interception(struct kvm_vcpu *vcpu)
type = svm->vmcb->control.exit_info_2;
gva = svm->vmcb->control.exit_info_1;
if (type > 3) {
kvm_inject_gp(vcpu, 0);
return 1;
}
return kvm_handle_invpcid(vcpu, type, gva);
}
@ -4701,6 +4696,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.mem_enc_unreg_region = svm_unregister_enc_region,
.vm_copy_enc_context_from = svm_vm_copy_asid_from,
.vm_move_enc_context_from = svm_vm_migrate_from,
.can_emulate_instruction = svm_can_emulate_instruction,

View File

@ -80,6 +80,7 @@ struct kvm_sev_info {
u64 ap_jump_table; /* SEV-ES AP Jump Table address */
struct kvm *enc_context_owner; /* Owner of copied encryption context */
struct misc_cg *misc_cg; /* For misc cgroup accounting */
atomic_t migration_in_progress;
};
struct kvm_svm {
@ -123,6 +124,20 @@ struct svm_nested_state {
bool initialized;
};
struct vcpu_sev_es_state {
/* SEV-ES support */
struct vmcb_save_area *vmsa;
struct ghcb *ghcb;
struct kvm_host_map ghcb_map;
bool received_first_sipi;
/* SEV-ES scratch area support */
void *ghcb_sa;
u32 ghcb_sa_len;
bool ghcb_sa_sync;
bool ghcb_sa_free;
};
struct vcpu_svm {
struct kvm_vcpu vcpu;
/* vmcb always points at current_vmcb->ptr, it's purely a shorthand. */
@ -186,17 +201,7 @@ struct vcpu_svm {
DECLARE_BITMAP(write, MAX_DIRECT_ACCESS_MSRS);
} shadow_msr_intercept;
/* SEV-ES support */
struct vmcb_save_area *vmsa;
struct ghcb *ghcb;
struct kvm_host_map ghcb_map;
bool received_first_sipi;
/* SEV-ES scratch area support */
void *ghcb_sa;
u32 ghcb_sa_len;
bool ghcb_sa_sync;
bool ghcb_sa_free;
struct vcpu_sev_es_state sev_es;
bool guest_state_loaded;
};
@ -558,6 +563,7 @@ int svm_register_enc_region(struct kvm *kvm,
int svm_unregister_enc_region(struct kvm *kvm,
struct kvm_enc_region *range);
int svm_vm_copy_asid_from(struct kvm *kvm, unsigned int source_fd);
int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd);
void pre_sev_run(struct vcpu_svm *svm, int cpu);
void __init sev_set_cpu_caps(void);
void __init sev_hardware_setup(void);

View File

@ -525,67 +525,19 @@ static int nested_vmx_check_tpr_shadow_controls(struct kvm_vcpu *vcpu,
}
/*
* Check if MSR is intercepted for L01 MSR bitmap.
* For x2APIC MSRs, ignore the vmcs01 bitmap. L1 can enable x2APIC without L1
* itself utilizing x2APIC. All MSRs were previously set to be intercepted;
* only the "disable intercept" case needs to be handled.
*/
static bool msr_write_intercepted_l01(struct kvm_vcpu *vcpu, u32 msr)
static void nested_vmx_disable_intercept_for_x2apic_msr(unsigned long *msr_bitmap_l1,
unsigned long *msr_bitmap_l0,
u32 msr, int type)
{
unsigned long *msr_bitmap;
int f = sizeof(unsigned long);
if (type & MSR_TYPE_R && !vmx_test_msr_bitmap_read(msr_bitmap_l1, msr))
vmx_clear_msr_bitmap_read(msr_bitmap_l0, msr);
if (!cpu_has_vmx_msr_bitmap())
return true;
msr_bitmap = to_vmx(vcpu)->vmcs01.msr_bitmap;
if (msr <= 0x1fff) {
return !!test_bit(msr, msr_bitmap + 0x800 / f);
} else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff)) {
msr &= 0x1fff;
return !!test_bit(msr, msr_bitmap + 0xc00 / f);
}
return true;
}
/*
* If a msr is allowed by L0, we should check whether it is allowed by L1.
* The corresponding bit will be cleared unless both of L0 and L1 allow it.
*/
static void nested_vmx_disable_intercept_for_msr(unsigned long *msr_bitmap_l1,
unsigned long *msr_bitmap_nested,
u32 msr, int type)
{
int f = sizeof(unsigned long);
/*
* See Intel PRM Vol. 3, 20.6.9 (MSR-Bitmap Address). Early manuals
* have the write-low and read-high bitmap offsets the wrong way round.
* We can control MSRs 0x00000000-0x00001fff and 0xc0000000-0xc0001fff.
*/
if (msr <= 0x1fff) {
if (type & MSR_TYPE_R &&
!test_bit(msr, msr_bitmap_l1 + 0x000 / f))
/* read-low */
__clear_bit(msr, msr_bitmap_nested + 0x000 / f);
if (type & MSR_TYPE_W &&
!test_bit(msr, msr_bitmap_l1 + 0x800 / f))
/* write-low */
__clear_bit(msr, msr_bitmap_nested + 0x800 / f);
} else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff)) {
msr &= 0x1fff;
if (type & MSR_TYPE_R &&
!test_bit(msr, msr_bitmap_l1 + 0x400 / f))
/* read-high */
__clear_bit(msr, msr_bitmap_nested + 0x400 / f);
if (type & MSR_TYPE_W &&
!test_bit(msr, msr_bitmap_l1 + 0xc00 / f))
/* write-high */
__clear_bit(msr, msr_bitmap_nested + 0xc00 / f);
}
if (type & MSR_TYPE_W && !vmx_test_msr_bitmap_write(msr_bitmap_l1, msr))
vmx_clear_msr_bitmap_write(msr_bitmap_l0, msr);
}
static inline void enable_x2apic_msr_intercepts(unsigned long *msr_bitmap)
@ -600,6 +552,34 @@ static inline void enable_x2apic_msr_intercepts(unsigned long *msr_bitmap)
}
}
#define BUILD_NVMX_MSR_INTERCEPT_HELPER(rw) \
static inline \
void nested_vmx_set_msr_##rw##_intercept(struct vcpu_vmx *vmx, \
unsigned long *msr_bitmap_l1, \
unsigned long *msr_bitmap_l0, u32 msr) \
{ \
if (vmx_test_msr_bitmap_##rw(vmx->vmcs01.msr_bitmap, msr) || \
vmx_test_msr_bitmap_##rw(msr_bitmap_l1, msr)) \
vmx_set_msr_bitmap_##rw(msr_bitmap_l0, msr); \
else \
vmx_clear_msr_bitmap_##rw(msr_bitmap_l0, msr); \
}
BUILD_NVMX_MSR_INTERCEPT_HELPER(read)
BUILD_NVMX_MSR_INTERCEPT_HELPER(write)
static inline void nested_vmx_set_intercept_for_msr(struct vcpu_vmx *vmx,
unsigned long *msr_bitmap_l1,
unsigned long *msr_bitmap_l0,
u32 msr, int types)
{
if (types & MSR_TYPE_R)
nested_vmx_set_msr_read_intercept(vmx, msr_bitmap_l1,
msr_bitmap_l0, msr);
if (types & MSR_TYPE_W)
nested_vmx_set_msr_write_intercept(vmx, msr_bitmap_l1,
msr_bitmap_l0, msr);
}
/*
* Merge L0's and L1's MSR bitmap, return false to indicate that
* we do not use the hardware.
@ -607,10 +587,11 @@ static inline void enable_x2apic_msr_intercepts(unsigned long *msr_bitmap)
static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
struct vmcs12 *vmcs12)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
int msr;
unsigned long *msr_bitmap_l1;
unsigned long *msr_bitmap_l0 = to_vmx(vcpu)->nested.vmcs02.msr_bitmap;
struct kvm_host_map *map = &to_vmx(vcpu)->nested.msr_bitmap_map;
unsigned long *msr_bitmap_l0 = vmx->nested.vmcs02.msr_bitmap;
struct kvm_host_map *map = &vmx->nested.msr_bitmap_map;
/* Nothing to do if the MSR bitmap is not in use. */
if (!cpu_has_vmx_msr_bitmap() ||
@ -625,7 +606,7 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
/*
* To keep the control flow simple, pay eight 8-byte writes (sixteen
* 4-byte writes on 32-bit systems) up front to enable intercepts for
* the x2APIC MSR range and selectively disable them below.
* the x2APIC MSR range and selectively toggle those relevant to L2.
*/
enable_x2apic_msr_intercepts(msr_bitmap_l0);
@ -644,61 +625,44 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
}
}
nested_vmx_disable_intercept_for_msr(
nested_vmx_disable_intercept_for_x2apic_msr(
msr_bitmap_l1, msr_bitmap_l0,
X2APIC_MSR(APIC_TASKPRI),
MSR_TYPE_R | MSR_TYPE_W);
if (nested_cpu_has_vid(vmcs12)) {
nested_vmx_disable_intercept_for_msr(
nested_vmx_disable_intercept_for_x2apic_msr(
msr_bitmap_l1, msr_bitmap_l0,
X2APIC_MSR(APIC_EOI),
MSR_TYPE_W);
nested_vmx_disable_intercept_for_msr(
nested_vmx_disable_intercept_for_x2apic_msr(
msr_bitmap_l1, msr_bitmap_l0,
X2APIC_MSR(APIC_SELF_IPI),
MSR_TYPE_W);
}
}
/* KVM unconditionally exposes the FS/GS base MSRs to L1. */
#ifdef CONFIG_X86_64
nested_vmx_disable_intercept_for_msr(msr_bitmap_l1, msr_bitmap_l0,
MSR_FS_BASE, MSR_TYPE_RW);
nested_vmx_disable_intercept_for_msr(msr_bitmap_l1, msr_bitmap_l0,
MSR_GS_BASE, MSR_TYPE_RW);
nested_vmx_disable_intercept_for_msr(msr_bitmap_l1, msr_bitmap_l0,
MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
#endif
/*
* Checking the L0->L1 bitmap is trying to verify two things:
*
* 1. L0 gave a permission to L1 to actually passthrough the MSR. This
* ensures that we do not accidentally generate an L02 MSR bitmap
* from the L12 MSR bitmap that is too permissive.
* 2. That L1 or L2s have actually used the MSR. This avoids
* unnecessarily merging of the bitmap if the MSR is unused. This
* works properly because we only update the L01 MSR bitmap lazily.
* So even if L0 should pass L1 these MSRs, the L01 bitmap is only
* updated to reflect this when L1 (or its L2s) actually write to
* the MSR.
* Always check vmcs01's bitmap to honor userspace MSR filters and any
* other runtime changes to vmcs01's bitmap, e.g. dynamic pass-through.
*/
if (!msr_write_intercepted_l01(vcpu, MSR_IA32_SPEC_CTRL))
nested_vmx_disable_intercept_for_msr(
msr_bitmap_l1, msr_bitmap_l0,
MSR_IA32_SPEC_CTRL,
MSR_TYPE_R | MSR_TYPE_W);
#ifdef CONFIG_X86_64
nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
MSR_FS_BASE, MSR_TYPE_RW);
if (!msr_write_intercepted_l01(vcpu, MSR_IA32_PRED_CMD))
nested_vmx_disable_intercept_for_msr(
msr_bitmap_l1, msr_bitmap_l0,
MSR_IA32_PRED_CMD,
MSR_TYPE_W);
nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
MSR_GS_BASE, MSR_TYPE_RW);
kvm_vcpu_unmap(vcpu, &to_vmx(vcpu)->nested.msr_bitmap_map, false);
nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
#endif
nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
MSR_IA32_SPEC_CTRL, MSR_TYPE_RW);
nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
MSR_IA32_PRED_CMD, MSR_TYPE_W);
kvm_vcpu_unmap(vcpu, &vmx->nested.msr_bitmap_map, false);
return true;
}
@ -5379,7 +5343,7 @@ static int handle_invept(struct kvm_vcpu *vcpu)
struct {
u64 eptp, gpa;
} operand;
int i, r;
int i, r, gpr_index;
if (!(vmx->nested.msrs.secondary_ctls_high &
SECONDARY_EXEC_ENABLE_EPT) ||
@ -5392,7 +5356,8 @@ static int handle_invept(struct kvm_vcpu *vcpu)
return 1;
vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
type = kvm_register_read(vcpu, (vmx_instruction_info >> 28) & 0xf);
gpr_index = vmx_get_instr_info_reg2(vmx_instruction_info);
type = kvm_register_read(vcpu, gpr_index);
types = (vmx->nested.msrs.ept_caps >> VMX_EPT_EXTENT_SHIFT) & 6;
@ -5459,7 +5424,7 @@ static int handle_invvpid(struct kvm_vcpu *vcpu)
u64 gla;
} operand;
u16 vpid02;
int r;
int r, gpr_index;
if (!(vmx->nested.msrs.secondary_ctls_high &
SECONDARY_EXEC_ENABLE_VPID) ||
@ -5472,7 +5437,8 @@ static int handle_invvpid(struct kvm_vcpu *vcpu)
return 1;
vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
type = kvm_register_read(vcpu, (vmx_instruction_info >> 28) & 0xf);
gpr_index = vmx_get_instr_info_reg2(vmx_instruction_info);
type = kvm_register_read(vcpu, gpr_index);
types = (vmx->nested.msrs.vpid_caps &
VMX_VPID_EXTENT_SUPPORTED_MASK) >> 8;

View File

@ -118,16 +118,15 @@ static struct kvm_pmc *intel_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
}
}
/* returns 0 if idx's corresponding MSR exists; otherwise returns 1. */
static int intel_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
static bool intel_is_valid_rdpmc_ecx(struct kvm_vcpu *vcpu, unsigned int idx)
{
struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
bool fixed = idx & (1u << 30);
idx &= ~(3u << 30);
return (!fixed && idx >= pmu->nr_arch_gp_counters) ||
(fixed && idx >= pmu->nr_arch_fixed_counters);
return fixed ? idx < pmu->nr_arch_fixed_counters
: idx < pmu->nr_arch_gp_counters;
}
static struct kvm_pmc *intel_rdpmc_ecx_to_pmc(struct kvm_vcpu *vcpu,

View File

@ -769,24 +769,13 @@ void vmx_update_exception_bitmap(struct kvm_vcpu *vcpu)
/*
* Check if MSR is intercepted for currently loaded MSR bitmap.
*/
static bool msr_write_intercepted(struct kvm_vcpu *vcpu, u32 msr)
static bool msr_write_intercepted(struct vcpu_vmx *vmx, u32 msr)
{
unsigned long *msr_bitmap;
int f = sizeof(unsigned long);
if (!cpu_has_vmx_msr_bitmap())
if (!(exec_controls_get(vmx) & CPU_BASED_USE_MSR_BITMAPS))
return true;
msr_bitmap = to_vmx(vcpu)->loaded_vmcs->msr_bitmap;
if (msr <= 0x1fff) {
return !!test_bit(msr, msr_bitmap + 0x800 / f);
} else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff)) {
msr &= 0x1fff;
return !!test_bit(msr, msr_bitmap + 0xc00 / f);
}
return true;
return vmx_test_msr_bitmap_write(vmx->loaded_vmcs->msr_bitmap,
MSR_IA32_SPEC_CTRL);
}
static void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx,
@ -3697,46 +3686,6 @@ void free_vpid(int vpid)
spin_unlock(&vmx_vpid_lock);
}
static void vmx_clear_msr_bitmap_read(ulong *msr_bitmap, u32 msr)
{
int f = sizeof(unsigned long);
if (msr <= 0x1fff)
__clear_bit(msr, msr_bitmap + 0x000 / f);
else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff))
__clear_bit(msr & 0x1fff, msr_bitmap + 0x400 / f);
}
static void vmx_clear_msr_bitmap_write(ulong *msr_bitmap, u32 msr)
{
int f = sizeof(unsigned long);
if (msr <= 0x1fff)
__clear_bit(msr, msr_bitmap + 0x800 / f);
else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff))
__clear_bit(msr & 0x1fff, msr_bitmap + 0xc00 / f);
}
static void vmx_set_msr_bitmap_read(ulong *msr_bitmap, u32 msr)
{
int f = sizeof(unsigned long);
if (msr <= 0x1fff)
__set_bit(msr, msr_bitmap + 0x000 / f);
else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff))
__set_bit(msr & 0x1fff, msr_bitmap + 0x400 / f);
}
static void vmx_set_msr_bitmap_write(ulong *msr_bitmap, u32 msr)
{
int f = sizeof(unsigned long);
if (msr <= 0x1fff)
__set_bit(msr, msr_bitmap + 0x800 / f);
else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff))
__set_bit(msr & 0x1fff, msr_bitmap + 0xc00 / f);
}
void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
@ -5494,6 +5443,7 @@ static int handle_invpcid(struct kvm_vcpu *vcpu)
u64 pcid;
u64 gla;
} operand;
int gpr_index;
if (!guest_cpuid_has(vcpu, X86_FEATURE_INVPCID)) {
kvm_queue_exception(vcpu, UD_VECTOR);
@ -5501,12 +5451,8 @@ static int handle_invpcid(struct kvm_vcpu *vcpu)
}
vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO);
type = kvm_register_read(vcpu, (vmx_instruction_info >> 28) & 0xf);
if (type > 3) {
kvm_inject_gp(vcpu, 0);
return 1;
}
gpr_index = vmx_get_instr_info_reg2(vmx_instruction_info);
type = kvm_register_read(vcpu, gpr_index);
/* According to the Intel instruction reference, the memory operand
* is read even if it isn't needed (e.g., for type==all)
@ -6749,7 +6695,7 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
* If the L02 MSR bitmap does not intercept the MSR, then we need to
* save it.
*/
if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
if (unlikely(!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL)))
vmx->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
x86_spec_ctrl_restore_host(vmx->spec_ctrl, 0);
@ -7563,7 +7509,8 @@ static void hardware_unsetup(void)
static bool vmx_check_apicv_inhibit_reasons(ulong bit)
{
ulong supported = BIT(APICV_INHIBIT_REASON_DISABLE) |
BIT(APICV_INHIBIT_REASON_HYPERV);
BIT(APICV_INHIBIT_REASON_HYPERV) |
BIT(APICV_INHIBIT_REASON_BLOCKIRQ);
return supported & BIT(bit);
}

View File

@ -400,6 +400,34 @@ static inline void vmx_set_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr,
void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu);
/*
* Note, early Intel manuals have the write-low and read-high bitmap offsets
* the wrong way round. The bitmaps control MSRs 0x00000000-0x00001fff and
* 0xc0000000-0xc0001fff. The former (low) uses bytes 0-0x3ff for reads and
* 0x800-0xbff for writes. The latter (high) uses 0x400-0x7ff for reads and
* 0xc00-0xfff for writes. MSRs not covered by either of the ranges always
* VM-Exit.
*/
#define __BUILD_VMX_MSR_BITMAP_HELPER(rtype, action, bitop, access, base) \
static inline rtype vmx_##action##_msr_bitmap_##access(unsigned long *bitmap, \
u32 msr) \
{ \
int f = sizeof(unsigned long); \
\
if (msr <= 0x1fff) \
return bitop##_bit(msr, bitmap + base / f); \
else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff)) \
return bitop##_bit(msr & 0x1fff, bitmap + (base + 0x400) / f); \
return (rtype)true; \
}
#define BUILD_VMX_MSR_BITMAP_HELPERS(ret_type, action, bitop) \
__BUILD_VMX_MSR_BITMAP_HELPER(ret_type, action, bitop, read, 0x0) \
__BUILD_VMX_MSR_BITMAP_HELPER(ret_type, action, bitop, write, 0x800)
BUILD_VMX_MSR_BITMAP_HELPERS(bool, test, test)
BUILD_VMX_MSR_BITMAP_HELPERS(void, clear, __clear)
BUILD_VMX_MSR_BITMAP_HELPERS(void, set, __set)
static inline u8 vmx_get_rvi(void)
{
return vmcs_read16(GUEST_INTR_STATUS) & 0xff;
@ -522,4 +550,9 @@ static inline bool vmx_guest_state_valid(struct kvm_vcpu *vcpu)
void dump_vmcs(struct kvm_vcpu *vcpu);
static inline int vmx_get_instr_info_reg2(u32 vmx_instr_info)
{
return (vmx_instr_info >> 28) & 0xf;
}
#endif /* __KVM_X86_VMX_H */
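
For reference, the BUILD_VMX_MSR_BITMAP_HELPERS() invocations above generate six accessors (test/clear/set, for read and write). One of them, vmx_test_msr_bitmap_write(), expands to roughly the following; this is only an illustration of the macro expansion, not an additional change in this diff:

static inline bool vmx_test_msr_bitmap_write(unsigned long *bitmap, u32 msr)
{
	int f = sizeof(unsigned long);

	if (msr <= 0x1fff)
		/* low MSRs: write bits live at byte offset 0x800 */
		return test_bit(msr, bitmap + 0x800 / f);
	else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff))
		/* high MSRs: write bits live at byte offset 0xc00 */
		return test_bit(msr & 0x1fff, bitmap + (0x800 + 0x400) / f);
	return (bool)true;
}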

View File

@ -3260,8 +3260,11 @@ static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
static void record_steal_time(struct kvm_vcpu *vcpu)
{
struct kvm_host_map map;
struct kvm_steal_time *st;
struct gfn_to_hva_cache *ghc = &vcpu->arch.st.cache;
struct kvm_steal_time __user *st;
struct kvm_memslots *slots;
u64 steal;
u32 version;
if (kvm_xen_msr_enabled(vcpu->kvm)) {
kvm_xen_runstate_set_running(vcpu);
@ -3271,47 +3274,86 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
return;
/* -EAGAIN is returned in atomic context so we can just return. */
if (kvm_map_gfn(vcpu, vcpu->arch.st.msr_val >> PAGE_SHIFT,
&map, &vcpu->arch.st.cache, false))
if (WARN_ON_ONCE(current->mm != vcpu->kvm->mm))
return;
st = map.hva +
offset_in_page(vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS);
slots = kvm_memslots(vcpu->kvm);
if (unlikely(slots->generation != ghc->generation ||
kvm_is_error_hva(ghc->hva) || !ghc->memslot)) {
gfn_t gfn = vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS;
/* We rely on the fact that it fits in a single page. */
BUILD_BUG_ON((sizeof(*st) - 1) & KVM_STEAL_VALID_BITS);
if (kvm_gfn_to_hva_cache_init(vcpu->kvm, ghc, gfn, sizeof(*st)) ||
kvm_is_error_hva(ghc->hva) || !ghc->memslot)
return;
}
st = (struct kvm_steal_time __user *)ghc->hva;
/*
* Doing a TLB flush here, on the guest's behalf, can avoid
* expensive IPIs.
*/
if (guest_pv_has(vcpu, KVM_FEATURE_PV_TLB_FLUSH)) {
u8 st_preempted = xchg(&st->preempted, 0);
u8 st_preempted = 0;
int err = -EFAULT;
if (!user_access_begin(st, sizeof(*st)))
return;
asm volatile("1: xchgb %0, %2\n"
"xor %1, %1\n"
"2:\n"
_ASM_EXTABLE_UA(1b, 2b)
: "+r" (st_preempted),
"+&r" (err)
: "m" (st->preempted));
if (err)
goto out;
user_access_end();
vcpu->arch.st.preempted = 0;
trace_kvm_pv_tlb_flush(vcpu->vcpu_id,
st_preempted & KVM_VCPU_FLUSH_TLB);
if (st_preempted & KVM_VCPU_FLUSH_TLB)
kvm_vcpu_flush_tlb_guest(vcpu);
if (!user_access_begin(st, sizeof(*st)))
goto dirty;
} else {
st->preempted = 0;
if (!user_access_begin(st, sizeof(*st)))
return;
unsafe_put_user(0, &st->preempted, out);
vcpu->arch.st.preempted = 0;
}
vcpu->arch.st.preempted = 0;
unsafe_get_user(version, &st->version, out);
if (version & 1)
version += 1; /* first time write, random junk */
if (st->version & 1)
st->version += 1; /* first time write, random junk */
st->version += 1;
version += 1;
unsafe_put_user(version, &st->version, out);
smp_wmb();
st->steal += current->sched_info.run_delay -
unsafe_get_user(steal, &st->steal, out);
steal += current->sched_info.run_delay -
vcpu->arch.st.last_steal;
vcpu->arch.st.last_steal = current->sched_info.run_delay;
unsafe_put_user(steal, &st->steal, out);
smp_wmb();
version += 1;
unsafe_put_user(version, &st->version, out);
st->version += 1;
kvm_unmap_gfn(vcpu, &map, &vcpu->arch.st.cache, true, false);
out:
user_access_end();
dirty:
mark_page_dirty_in_slot(vcpu->kvm, ghc->memslot, gpa_to_gfn(ghc->gpa));
}
int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
@ -3517,7 +3559,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
if (!guest_pv_has(vcpu, KVM_FEATURE_PV_EOI))
return 1;
if (kvm_lapic_enable_pv_eoi(vcpu, data, sizeof(u8)))
if (kvm_lapic_set_pv_eoi(vcpu, data, sizeof(u8)))
return 1;
break;
@ -4137,7 +4179,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
r = !static_call(kvm_x86_cpu_has_accelerated_tpr)();
break;
case KVM_CAP_NR_VCPUS:
r = KVM_SOFT_MAX_VCPUS;
r = num_online_cpus();
break;
case KVM_CAP_MAX_VCPUS:
r = KVM_MAX_VCPUS;
@ -4351,8 +4393,10 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
{
struct kvm_host_map map;
struct kvm_steal_time *st;
struct gfn_to_hva_cache *ghc = &vcpu->arch.st.cache;
struct kvm_steal_time __user *st;
struct kvm_memslots *slots;
static const u8 preempted = KVM_VCPU_PREEMPTED;
if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
return;
@ -4360,16 +4404,23 @@ static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
if (vcpu->arch.st.preempted)
return;
if (kvm_map_gfn(vcpu, vcpu->arch.st.msr_val >> PAGE_SHIFT, &map,
&vcpu->arch.st.cache, true))
/* This happens on process exit */
if (unlikely(current->mm != vcpu->kvm->mm))
return;
st = map.hva +
offset_in_page(vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS);
slots = kvm_memslots(vcpu->kvm);
st->preempted = vcpu->arch.st.preempted = KVM_VCPU_PREEMPTED;
if (unlikely(slots->generation != ghc->generation ||
kvm_is_error_hva(ghc->hva) || !ghc->memslot))
return;
kvm_unmap_gfn(vcpu, &map, &vcpu->arch.st.cache, true, true);
st = (struct kvm_steal_time __user *)ghc->hva;
BUILD_BUG_ON(sizeof(st->preempted) != sizeof(preempted));
if (!copy_to_user_nofault(&st->preempted, &preempted, sizeof(preempted)))
vcpu->arch.st.preempted = KVM_VCPU_PREEMPTED;
mark_page_dirty_in_slot(vcpu->kvm, ghc->memslot, gpa_to_gfn(ghc->gpa));
}
void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
@ -5728,6 +5779,12 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
if (kvm_x86_ops.vm_copy_enc_context_from)
r = kvm_x86_ops.vm_copy_enc_context_from(kvm, cap->args[0]);
return r;
case KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM:
r = -EINVAL;
if (kvm_x86_ops.vm_move_enc_context_from)
r = kvm_x86_ops.vm_move_enc_context_from(
kvm, cap->args[0]);
return r;
case KVM_CAP_EXIT_HYPERCALL:
if (cap->args[0] & ~KVM_EXIT_HYPERCALL_VALID_MASK) {
r = -EINVAL;
@ -7328,7 +7385,9 @@ static void emulator_set_smbase(struct x86_emulate_ctxt *ctxt, u64 smbase)
static int emulator_check_pmc(struct x86_emulate_ctxt *ctxt,
u32 pmc)
{
return kvm_pmu_is_valid_rdpmc_ecx(emul_to_vcpu(ctxt), pmc);
if (kvm_pmu_is_valid_rdpmc_ecx(emul_to_vcpu(ctxt), pmc))
return 0;
return -EINVAL;
}
static int emulator_read_pmc(struct x86_emulate_ctxt *ctxt,
@ -9552,7 +9611,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
}
if (kvm_request_pending(vcpu)) {
if (kvm_check_request(KVM_REQ_VM_BUGGED, vcpu)) {
if (kvm_check_request(KVM_REQ_VM_DEAD, vcpu)) {
r = -EIO;
goto out;
}
@ -10564,6 +10623,24 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
return ret;
}
static void kvm_arch_vcpu_guestdbg_update_apicv_inhibit(struct kvm *kvm)
{
bool inhibit = false;
struct kvm_vcpu *vcpu;
int i;
down_write(&kvm->arch.apicv_update_lock);
kvm_for_each_vcpu(i, vcpu, kvm) {
if (vcpu->guest_debug & KVM_GUESTDBG_BLOCKIRQ) {
inhibit = true;
break;
}
}
__kvm_request_apicv_update(kvm, !inhibit, APICV_INHIBIT_REASON_BLOCKIRQ);
up_write(&kvm->arch.apicv_update_lock);
}
int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
struct kvm_guest_debug *dbg)
{
@ -10616,6 +10693,8 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
static_call(kvm_x86_update_exception_bitmap)(vcpu);
kvm_arch_vcpu_guestdbg_update_apicv_inhibit(vcpu->kvm);
r = 0;
out:
@ -10859,11 +10938,8 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
{
struct gfn_to_pfn_cache *cache = &vcpu->arch.st.cache;
int idx;
kvm_release_pfn(cache->pfn, cache->dirty, cache);
kvmclock_reset(vcpu);
static_call(kvm_x86_vcpu_free)(vcpu);
@ -12275,7 +12351,8 @@ int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva)
return kvm_skip_emulated_instruction(vcpu);
default:
BUG(); /* We have already checked above that type <= 3 */
kvm_inject_gp(vcpu, 0);
return 1;
}
}
EXPORT_SYMBOL_GPL(kvm_handle_invpcid);
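
As a rough sketch of how userspace might exercise the KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM capability wired up above: the destination VM enables the capability with the source VM's file descriptor in args[0]. The helper below is an assumption for illustration (the VM fds, includes and error handling are not part of this diff); it only uses the standard KVM_ENABLE_CAP VM ioctl:

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* dst_vm_fd / src_vm_fd are VM file descriptors obtained from KVM_CREATE_VM;
 * the source is assumed to be an SEV guest, the destination must not be. */
static int move_enc_context_from(int dst_vm_fd, int src_vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM,
		.args[0] = src_vm_fd,	/* source VM fd, see cap->args[0] above */
	};

	/* 0 on success; on failure errno is e.g. EINVAL (destination already
	 * an SEV guest, or source is not one) or EBUSY (a migration is
	 * already in progress, see sev_lock_for_migration()). */
	return ioctl(dst_vm_fd, KVM_ENABLE_CAP, &cap);
}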

View File

@ -229,28 +229,75 @@ void __init sev_setup_arch(void)
swiotlb_adjust_size(size);
}
static unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot)
{
unsigned long pfn = 0;
pgprot_t prot;
switch (level) {
case PG_LEVEL_4K:
pfn = pte_pfn(*kpte);
prot = pte_pgprot(*kpte);
break;
case PG_LEVEL_2M:
pfn = pmd_pfn(*(pmd_t *)kpte);
prot = pmd_pgprot(*(pmd_t *)kpte);
break;
case PG_LEVEL_1G:
pfn = pud_pfn(*(pud_t *)kpte);
prot = pud_pgprot(*(pud_t *)kpte);
break;
default:
WARN_ONCE(1, "Invalid level for kpte\n");
return 0;
}
if (ret_prot)
*ret_prot = prot;
return pfn;
}
void notify_range_enc_status_changed(unsigned long vaddr, int npages, bool enc)
{
#ifdef CONFIG_PARAVIRT
unsigned long sz = npages << PAGE_SHIFT;
unsigned long vaddr_end = vaddr + sz;
while (vaddr < vaddr_end) {
int psize, pmask, level;
unsigned long pfn;
pte_t *kpte;
kpte = lookup_address(vaddr, &level);
if (!kpte || pte_none(*kpte)) {
WARN_ONCE(1, "kpte lookup for vaddr\n");
return;
}
pfn = pg_level_to_pfn(level, kpte, NULL);
if (!pfn)
continue;
psize = page_level_size(level);
pmask = page_level_mask(level);
notify_page_enc_status_changed(pfn, psize >> PAGE_SHIFT, enc);
vaddr = (vaddr & pmask) + psize;
}
#endif
}
static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
{
pgprot_t old_prot, new_prot;
unsigned long pfn, pa, size;
pte_t new_pte;
switch (level) {
case PG_LEVEL_4K:
pfn = pte_pfn(*kpte);
old_prot = pte_pgprot(*kpte);
break;
case PG_LEVEL_2M:
pfn = pmd_pfn(*(pmd_t *)kpte);
old_prot = pmd_pgprot(*(pmd_t *)kpte);
break;
case PG_LEVEL_1G:
pfn = pud_pfn(*(pud_t *)kpte);
old_prot = pud_pgprot(*(pud_t *)kpte);
break;
default:
pfn = pg_level_to_pfn(level, kpte, &old_prot);
if (!pfn)
return;
}
new_prot = old_prot;
if (enc)
@ -286,12 +333,13 @@ static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
static int __init early_set_memory_enc_dec(unsigned long vaddr,
unsigned long size, bool enc)
{
unsigned long vaddr_end, vaddr_next;
unsigned long vaddr_end, vaddr_next, start;
unsigned long psize, pmask;
int split_page_size_mask;
int level, ret;
pte_t *kpte;
start = vaddr;
vaddr_next = vaddr;
vaddr_end = vaddr + size;
@ -346,6 +394,7 @@ static int __init early_set_memory_enc_dec(unsigned long vaddr,
ret = 0;
notify_range_enc_status_changed(start, PAGE_ALIGN(size) >> PAGE_SHIFT, enc);
out:
__flush_tlb_all();
return ret;
@ -361,6 +410,11 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size)
return early_set_memory_enc_dec(vaddr, size, true);
}
void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages, bool enc)
{
notify_range_enc_status_changed(vaddr, npages, enc);
}
/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
bool force_dma_unencrypted(struct device *dev)
{

View File

@ -2023,6 +2023,12 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
*/
cpa_flush(&cpa, 0);
/*
* Notify hypervisor that a given memory range is mapped encrypted
* or decrypted.
*/
notify_range_enc_status_changed(addr, numpages, enc);
return ret;
}

View File

@ -284,6 +284,8 @@ static struct crypto_larval *__crypto_register_alg(struct crypto_alg *alg)
if (larval)
list_add(&larval->alg.cra_list, &crypto_alg_list);
else
alg->cra_flags |= CRYPTO_ALG_TESTED;
crypto_stats_init(alg);

View File

@ -485,7 +485,7 @@ static int __init brcmstb_gisb_arb_probe(struct platform_device *pdev)
list_add_tail(&gdev->next, &brcmstb_gisb_arb_device_list);
#ifdef CONFIG_MIPS
board_be_handler = brcmstb_bus_error_handler;
mips_set_be_handler(brcmstb_bus_error_handler);
#endif
if (list_is_singular(&brcmstb_gisb_arb_device_list)) {

View File

@ -270,7 +270,8 @@ config VMD
config PCIE_BRCMSTB
tristate "Broadcom Brcmstb PCIe host controller"
depends on ARCH_BRCMSTB || ARCH_BCM2835 || ARCH_BCM4908 || COMPILE_TEST
depends on ARCH_BRCMSTB || ARCH_BCM2835 || ARCH_BCM4908 || \
BMIPS_GENERIC || COMPILE_TEST
depends on OF
depends on PCI_MSI_IRQ_DOMAIN
default ARCH_BRCMSTB

View File

@ -57,6 +57,29 @@ static int disable_slot(struct hotplug_slot *hotplug_slot)
return zpci_deconfigure_device(zdev);
}
static int reset_slot(struct hotplug_slot *hotplug_slot, bool probe)
{
struct zpci_dev *zdev = container_of(hotplug_slot, struct zpci_dev,
hotplug_slot);
if (zdev->state != ZPCI_FN_STATE_CONFIGURED)
return -EIO;
/*
* We can't take the zdev->lock as reset_slot may be called during
* probing and/or device removal which already happens under the
* zdev->lock. Instead the user should use the higher level
* pci_reset_function() or pci_bus_reset() which hold the PCI device
* lock preventing concurrent removal. If not using these functions
* holding the PCI device lock is required.
*/
/* As long as the function is configured we can reset */
if (probe)
return 0;
return zpci_hot_reset_device(zdev);
}
static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
struct zpci_dev *zdev = container_of(hotplug_slot, struct zpci_dev,
@ -76,6 +99,7 @@ static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
static const struct hotplug_slot_ops s390_hotplug_slot_ops = {
.enable_slot = enable_slot,
.disable_slot = disable_slot,
.reset_slot = reset_slot,
.get_power_status = get_power_status,
.get_adapter_status = get_adapter_status,
};

View File

@ -5106,12 +5106,13 @@ static int pci_reset_bus_function(struct pci_dev *dev, bool probe)
return pci_parent_bus_reset(dev, probe);
}
static void pci_dev_lock(struct pci_dev *dev)
void pci_dev_lock(struct pci_dev *dev)
{
pci_cfg_access_lock(dev);
/* block PM suspend, driver probe, etc. */
device_lock(&dev->dev);
}
EXPORT_SYMBOL_GPL(pci_dev_lock);
/* Return 1 on successful lock, 0 on contention */
int pci_dev_trylock(struct pci_dev *dev)

View File

@ -53,7 +53,6 @@ int
tape_std_assign(struct tape_device *device)
{
int rc;
struct timer_list timeout;
struct tape_request *request;
request = tape_alloc_request(2, 11);
@ -70,7 +69,7 @@ tape_std_assign(struct tape_device *device)
* So we set up a timeout for this call.
*/
timer_setup(&request->timer, tape_std_assign_timeout, 0);
mod_timer(&timeout, jiffies + 2 * HZ);
mod_timer(&request->timer, jiffies + msecs_to_jiffies(2000));
rc = tape_do_io_interruptible(device, request);

View File

@ -437,8 +437,8 @@ static ssize_t dev_busid_show(struct device *dev,
struct subchannel *sch = to_subchannel(dev);
struct pmcw *pmcw = &sch->schib.pmcw;
if ((pmcw->st == SUBCHANNEL_TYPE_IO ||
pmcw->st == SUBCHANNEL_TYPE_MSG) && pmcw->dnv)
if ((pmcw->st == SUBCHANNEL_TYPE_IO && pmcw->dnv) ||
(pmcw->st == SUBCHANNEL_TYPE_MSG && pmcw->w))
return sysfs_emit(buf, "0.%x.%04x\n", sch->schid.ssid,
pmcw->dev);
else
