
Commit 31c70b9

Author: Marc Zyngier (committed)

Merge branch arm64/for-next/cpufeature into kvmarm-master/next

Merge arm64/for-next/cpufeature to resolve conflicts resulting from the
removal of CONFIG_PAN.

* arm64/for-next/cpufeature:
  arm64: Add support for FEAT_{LS64, LS64_V}
  KVM: arm64: Enable FEAT_{LS64, LS64_V} in the supported guest
  arm64: Provide basic EL2 setup for FEAT_{LS64, LS64_V} usage at EL0/1
  KVM: arm64: Handle DABT caused by LS64* instructions on unsupported memory
  KVM: arm64: Add documentation for KVM_EXIT_ARM_LDST64B
  KVM: arm64: Add exit to userspace on {LD,ST}64B* outside of memslots
  arm64: Unconditionally enable PAN support
  arm64: Unconditionally enable LSE support
  arm64: Add support for TSV110 Spectre-BHB mitigation

Signed-off-by: Marc Zyngier <maz@kernel.org>

2 parents 9ace475 + 58ce786, commit 31c70b9

25 files changed

Lines changed: 193 additions & 105 deletions


Documentation/arch/arm64/booting.rst

Lines changed: 12 additions & 0 deletions
@@ -556,6 +556,18 @@ Before jumping into the kernel, the following conditions must be met:

    - MDCR_EL3.TPM (bit 6) must be initialized to 0b0

+  For CPUs with support for 64-byte loads and stores without status (FEAT_LS64):
+
+  - If the kernel is entered at EL1 and EL2 is present:
+
+    - HCRX_EL2.EnALS (bit 1) must be initialised to 0b1.
+
+  For CPUs with support for 64-byte stores with status (FEAT_LS64_V):
+
+  - If the kernel is entered at EL1 and EL2 is present:
+
+    - HCRX_EL2.EnASR (bit 2) must be initialised to 0b1.
+
 The requirements described above for CPU mode, caches, MMUs, architected
 timers, coherency and system registers apply to all CPUs. All CPUs must
 enter the kernel in the same exception level. Where the values documented

Documentation/arch/arm64/elf_hwcaps.rst

Lines changed: 7 additions & 0 deletions
@@ -444,6 +444,13 @@ HWCAP3_MTE_STORE_ONLY

 HWCAP3_LSFE
     Functionality implied by ID_AA64ISAR3_EL1.LSFE == 0b0001

+HWCAP3_LS64
+    Functionality implied by ID_AA64ISAR1_EL1.LS64 == 0b0001. Note that
+    the ld64b/st64b instructions require support from the CPU, the system,
+    and the target (device) memory location; HWCAP3_LS64 only indicates
+    CPU support. Userspace should only use ld64b/st64b on supported target
+    (device) memory locations, and otherwise fall back to the non-atomic
+    alternatives.

 4. Unused AT_HWCAP bits
 -----------------------

Documentation/virt/kvm/api.rst

Lines changed: 36 additions & 7 deletions
@@ -1303,12 +1303,13 @@
 userspace, for example because of missing instruction syndrome decode
 information or because there is no device mapped at the accessed IPA, then
 userspace can ask the kernel to inject an external abort using the address
 from the exiting fault on the VCPU. It is a programming error to set
-ext_dabt_pending after an exit which was not either KVM_EXIT_MMIO or
-KVM_EXIT_ARM_NISV. This feature is only available if the system supports
-KVM_CAP_ARM_INJECT_EXT_DABT. This is a helper which provides commonality in
-how userspace reports accesses for the above cases to guests, across different
-userspace implementations. Nevertheless, userspace can still emulate all Arm
-exceptions by manipulating individual registers using the KVM_SET_ONE_REG API.
+ext_dabt_pending after an exit which was not either KVM_EXIT_MMIO,
+KVM_EXIT_ARM_NISV, or KVM_EXIT_ARM_LDST64B. This feature is only available if
+the system supports KVM_CAP_ARM_INJECT_EXT_DABT. This is a helper which
+provides commonality in how userspace reports accesses for the above cases to
+guests, across different userspace implementations. Nevertheless, userspace
+can still emulate all Arm exceptions by manipulating individual registers
+using the KVM_SET_ONE_REG API.

 See KVM_GET_VCPU_EVENTS for the data structure.

@@ -7050,12 +7051,14 @@ in send_page or recv a buffer to recv_page).

 ::

-	/* KVM_EXIT_ARM_NISV */
+	/* KVM_EXIT_ARM_NISV / KVM_EXIT_ARM_LDST64B */
 	struct {
 		__u64 esr_iss;
 		__u64 fault_ipa;
 	} arm_nisv;

+- KVM_EXIT_ARM_NISV:
+
 Used on arm64 systems. If a guest accesses memory not in a memslot,
 KVM will typically return to userspace and ask it to do MMIO emulation on its
 behalf. However, for certain classes of instructions, no instruction decode

@@ -7089,6 +7092,32 @@ Note that although KVM_CAP_ARM_NISV_TO_USER will be reported if
 queried outside of a protected VM context, the feature will not be
 exposed if queried on a protected VM file descriptor.

+- KVM_EXIT_ARM_LDST64B:
+
+Used on arm64 systems. When a guest uses a LD64B, ST64B, ST64BV, or ST64BV0
+instruction outside of a memslot, KVM will return to userspace with
+KVM_EXIT_ARM_LDST64B, exposing the relevant ESR_EL2 information and the
+faulting IPA, similarly to KVM_EXIT_ARM_NISV.
+
+Userspace is expected to fully emulate the instruction, which includes:
+
+- fetching the operands for a store, including ACCDATA_EL1 in the case
+  of a ST64BV0 instruction
+- dealing with endianness if the guest is big-endian
+- emulating the access, including the delivery of an exception if the
+  access didn't succeed
+- providing a return value in the case of ST64BV/ST64BV0
+- returning the data in the case of a load
+- incrementing the PC if the instruction was successfully executed
+
+Note that there is no expectation of performance for this emulation, as it
+involves a large number of interactions with the guest state. It is, however,
+expected that the instruction's semantics are preserved, especially the
+single-copy atomicity property of the 64-byte access.
+
+This exit reason must be handled if userspace sets ID_AA64ISAR1_EL1.LS64 to a
+non-zero value, indicating that FEAT_LS64* is enabled.

 ::

	/* KVM_EXIT_X86_RDMSR / KVM_EXIT_X86_WRMSR */

arch/arm64/Kconfig

Lines changed: 0 additions & 33 deletions
@@ -1680,7 +1680,6 @@ config MITIGATE_SPECTRE_BRANCH_HISTORY
 config ARM64_SW_TTBR0_PAN
 	bool "Emulate Privileged Access Never using TTBR0_EL1 switching"
 	depends on !KCSAN
-	select ARM64_PAN
 	help
 	  Enabling this option prevents the kernel from accessing
 	  user-space memory directly by pointing TTBR0_EL1 to a reserved

@@ -1859,36 +1858,6 @@ config ARM64_HW_AFDBM
 	  to work on pre-ARMv8.1 hardware and the performance impact is
 	  minimal. If unsure, say Y.

-config ARM64_PAN
-	bool "Enable support for Privileged Access Never (PAN)"
-	default y
-	help
-	  Privileged Access Never (PAN; part of the ARMv8.1 Extensions)
-	  prevents the kernel or hypervisor from accessing user-space (EL0)
-	  memory directly.
-
-	  Choosing this option will cause any unprotected (not using
-	  copy_to_user et al) memory access to fail with a permission fault.
-
-	  The feature is detected at runtime, and will remain as a 'nop'
-	  instruction if the cpu does not implement the feature.
-
-config ARM64_LSE_ATOMICS
-	bool
-	default ARM64_USE_LSE_ATOMICS
-
-config ARM64_USE_LSE_ATOMICS
-	bool "Atomic instructions"
-	default y
-	help
-	  As part of the Large System Extensions, ARMv8.1 introduces new
-	  atomic instructions that are designed specifically to scale in
-	  very large systems.
-
-	  Say Y here to make use of these instructions for the in-kernel
-	  atomic routines. This incurs a small overhead on CPUs that do
-	  not support these instructions.
-
 endmenu # "ARMv8.1 architectural features"

 menu "ARMv8.2 architectural features"

@@ -2125,7 +2094,6 @@ config ARM64_MTE
 	depends on ARM64_AS_HAS_MTE && ARM64_TAGGED_ADDR_ABI
 	depends on AS_HAS_ARMV8_5
 	# Required for tag checking in the uaccess routines
-	select ARM64_PAN
 	select ARCH_HAS_SUBPAGE_FAULTS
 	select ARCH_USES_HIGH_VMA_FLAGS
 	select ARCH_USES_PG_ARCH_2

@@ -2157,7 +2125,6 @@ menu "ARMv8.7 architectural features"
 config ARM64_EPAN
 	bool "Enable support for Enhanced Privileged Access Never (EPAN)"
 	default y
-	depends on ARM64_PAN
 	help
 	  Enhanced Privileged Access Never (EPAN) allows Privileged
 	  Access Never to be used with Execute-only mappings.

arch/arm64/include/asm/cpucaps.h

Lines changed: 0 additions & 2 deletions
@@ -19,8 +19,6 @@ cpucap_is_possible(const unsigned int cap)
 		     "cap must be < ARM64_NCAPS");

 	switch (cap) {
-	case ARM64_HAS_PAN:
-		return IS_ENABLED(CONFIG_ARM64_PAN);
 	case ARM64_HAS_EPAN:
 		return IS_ENABLED(CONFIG_ARM64_EPAN);
 	case ARM64_SVE:

arch/arm64/include/asm/el2_setup.h

Lines changed: 11 additions & 1 deletion
@@ -83,9 +83,19 @@
 	/* Enable GCS if supported */
 	mrs_s	x1, SYS_ID_AA64PFR1_EL1
 	ubfx	x1, x1, #ID_AA64PFR1_EL1_GCS_SHIFT, #4
-	cbz	x1, .Lset_hcrx_\@
+	cbz	x1, .Lskip_gcs_hcrx_\@
 	orr	x0, x0, #HCRX_EL2_GCSEn

+.Lskip_gcs_hcrx_\@:
+	/* Enable LS64, LS64_V if supported */
+	mrs_s	x1, SYS_ID_AA64ISAR1_EL1
+	ubfx	x1, x1, #ID_AA64ISAR1_EL1_LS64_SHIFT, #4
+	cbz	x1, .Lset_hcrx_\@
+	orr	x0, x0, #HCRX_EL2_EnALS
+	cmp	x1, #ID_AA64ISAR1_EL1_LS64_LS64_V
+	b.lt	.Lset_hcrx_\@
+	orr	x0, x0, #HCRX_EL2_EnASR
+
 .Lset_hcrx_\@:
 	msr_s	SYS_HCRX_EL2, x0
 .Lskip_hcrx_\@:

arch/arm64/include/asm/esr.h

Lines changed: 8 additions & 0 deletions
@@ -124,6 +124,7 @@
 #define ESR_ELx_FSC_SEA_TTW(n)	(0x14 + (n))
 #define ESR_ELx_FSC_SECC	(0x18)
 #define ESR_ELx_FSC_SECC_TTW(n)	(0x1c + (n))
+#define ESR_ELx_FSC_EXCL_ATOMIC	(0x35)
 #define ESR_ELx_FSC_ADDRSZ	(0x00)

 /*

@@ -488,6 +489,13 @@ static inline bool esr_fsc_is_access_flag_fault(unsigned long esr)
 	       (esr == ESR_ELx_FSC_ACCESS_L(0));
 }

+static inline bool esr_fsc_is_excl_atomic_fault(unsigned long esr)
+{
+	esr = esr & ESR_ELx_FSC;
+
+	return esr == ESR_ELx_FSC_EXCL_ATOMIC;
+}
+
 static inline bool esr_fsc_is_addr_sz_fault(unsigned long esr)
 {
 	esr &= ESR_ELx_FSC;

arch/arm64/include/asm/hwcap.h

Lines changed: 1 addition & 0 deletions
@@ -179,6 +179,7 @@
 #define KERNEL_HWCAP_MTE_FAR		__khwcap3_feature(MTE_FAR)
 #define KERNEL_HWCAP_MTE_STORE_ONLY	__khwcap3_feature(MTE_STORE_ONLY)
 #define KERNEL_HWCAP_LSFE		__khwcap3_feature(LSFE)
+#define KERNEL_HWCAP_LS64		__khwcap3_feature(LS64)

 /*
  * This yields a mask that user programs can use to figure out what

arch/arm64/include/asm/insn.h

Lines changed: 0 additions & 23 deletions
@@ -671,7 +671,6 @@ u32 aarch64_insn_gen_extr(enum aarch64_insn_variant variant,
 			  enum aarch64_insn_register Rn,
 			  enum aarch64_insn_register Rd,
 			  u8 lsb);
-#ifdef CONFIG_ARM64_LSE_ATOMICS
 u32 aarch64_insn_gen_atomic_ld_op(enum aarch64_insn_register result,
 				  enum aarch64_insn_register address,
 				  enum aarch64_insn_register value,

@@ -683,28 +682,6 @@ u32 aarch64_insn_gen_cas(enum aarch64_insn_register result,
 			 enum aarch64_insn_register value,
 			 enum aarch64_insn_size_type size,
 			 enum aarch64_insn_mem_order_type order);
-#else
-static inline
-u32 aarch64_insn_gen_atomic_ld_op(enum aarch64_insn_register result,
-				  enum aarch64_insn_register address,
-				  enum aarch64_insn_register value,
-				  enum aarch64_insn_size_type size,
-				  enum aarch64_insn_mem_atomic_op op,
-				  enum aarch64_insn_mem_order_type order)
-{
-	return AARCH64_BREAK_FAULT;
-}
-
-static inline
-u32 aarch64_insn_gen_cas(enum aarch64_insn_register result,
-			 enum aarch64_insn_register address,
-			 enum aarch64_insn_register value,
-			 enum aarch64_insn_size_type size,
-			 enum aarch64_insn_mem_order_type order)
-{
-	return AARCH64_BREAK_FAULT;
-}
-#endif
 u32 aarch64_insn_gen_dmb(enum aarch64_insn_mb_type type);
 u32 aarch64_insn_gen_dsb(enum aarch64_insn_mb_type type);
 u32 aarch64_insn_gen_mrs(enum aarch64_insn_register result,

arch/arm64/include/asm/kvm_emulate.h

Lines changed: 7 additions & 0 deletions
@@ -47,6 +47,7 @@ void kvm_skip_instr32(struct kvm_vcpu *vcpu);
 void kvm_inject_undefined(struct kvm_vcpu *vcpu);
 int kvm_inject_serror_esr(struct kvm_vcpu *vcpu, u64 esr);
 int kvm_inject_sea(struct kvm_vcpu *vcpu, bool iabt, u64 addr);
+int kvm_inject_dabt_excl_atomic(struct kvm_vcpu *vcpu, u64 addr);
 void kvm_inject_size_fault(struct kvm_vcpu *vcpu);

 static inline int kvm_inject_sea_dabt(struct kvm_vcpu *vcpu, u64 addr)

@@ -694,6 +695,12 @@ static inline void vcpu_set_hcrx(struct kvm_vcpu *vcpu)

 		if (kvm_has_sctlr2(kvm))
 			vcpu->arch.hcrx_el2 |= HCRX_EL2_SCTLR2En;
+
+		if (kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64))
+			vcpu->arch.hcrx_el2 |= HCRX_EL2_EnALS;
+
+		if (kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_V))
+			vcpu->arch.hcrx_el2 |= HCRX_EL2_EnASR;
 	}
 }
 #endif /* __ARM64_KVM_EMULATE_H__ */
