Commit 7e1b232

Author: Marc Zyngier
KVM: arm64: nvhe: Synchronise with page table walker on TLBI
A TLBI from EL2 impacting EL1 involves messing with the EL1&0 translation regime, and the page table walker may still be performing speculative walks.

Piggyback on the existing DSBs to always have a DSB ISH that will synchronise all load/store operations that the PTW may still have.

Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
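A note on the barrier options, as a minimal sketch assuming the dsb() definition from arch/arm64/include/asm/barrier.h: the option token selects both the shareability domain and the access types the barrier waits on. That is why a store-only dsb(ishst) cannot, on its own, synchronise with the walker's speculative accesses, while a full dsb(ish) subsumes both the old dsb(ishst) and the dsb(nsh) needed to complete those walks.

	/*
	 * Minimal sketch, assuming the arm64 barrier macro from
	 * arch/arm64/include/asm/barrier.h; the option token is
	 * pasted straight into the instruction.
	 */
	#define dsb(opt)	asm volatile("dsb " #opt : : : "memory")

	/*
	 * dsb(ishst): completes prior stores only, Inner Shareable
	 *             domain; enough to publish page table updates,
	 *             but not to wait out a speculative walk.
	 * dsb(nsh):   completes all prior loads and stores on this
	 *             CPU; enough to drain the local walker before
	 *             messing with the MM registers.
	 * dsb(ish):   completes all prior loads and stores, Inner
	 *             Shareable; the composition of the two
	 *             requirements above.
	 */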
Parent: 55b5bac

1 file changed: 29 additions & 9 deletions

arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -15,8 +15,31 @@ struct tlb_inv_context {
 };
 
 static void __tlb_switch_to_guest(struct kvm_s2_mmu *mmu,
-				  struct tlb_inv_context *cxt)
+				  struct tlb_inv_context *cxt,
+				  bool nsh)
 {
+	/*
+	 * We have two requirements:
+	 *
+	 * - ensure that the page table updates are visible to all
+	 *   CPUs, for which a dsb(DOMAIN-st) is what we need, DOMAIN
+	 *   being either ish or nsh, depending on the invalidation
+	 *   type.
+	 *
+	 * - complete any speculative page table walk started before
+	 *   we trapped to EL2 so that we can mess with the MM
+	 *   registers out of context, for which dsb(nsh) is enough
+	 *
+	 * The composition of these two barriers is a dsb(DOMAIN), and
+	 * the 'nsh' parameter tracks the distinction between
+	 * Inner-Shareable and Non-Shareable, as specified by the
+	 * callers.
+	 */
+	if (nsh)
+		dsb(nsh);
+	else
+		dsb(ish);
+
 	if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
 		u64 val;
 
@@ -60,10 +83,8 @@ void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu,
 {
 	struct tlb_inv_context cxt;
 
-	dsb(ishst);
-
 	/* Switch to requested VMID */
-	__tlb_switch_to_guest(mmu, &cxt);
+	__tlb_switch_to_guest(mmu, &cxt, false);
 
 	/*
 	 * We could do so much better if we had the VA as well.
@@ -113,10 +134,8 @@ void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu)
 {
 	struct tlb_inv_context cxt;
 
-	dsb(ishst);
-
 	/* Switch to requested VMID */
-	__tlb_switch_to_guest(mmu, &cxt);
+	__tlb_switch_to_guest(mmu, &cxt, false);
 
 	__tlbi(vmalls12e1is);
 	dsb(ish);
@@ -130,7 +149,7 @@ void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu)
 	struct tlb_inv_context cxt;
 
 	/* Switch to requested VMID */
-	__tlb_switch_to_guest(mmu, &cxt);
+	__tlb_switch_to_guest(mmu, &cxt, false);
 
 	__tlbi(vmalle1);
 	asm volatile("ic iallu");
@@ -142,7 +161,8 @@ void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu)
 
 void __kvm_flush_vm_context(void)
 {
-	dsb(ishst);
+	/* Same remark as in __tlb_switch_to_guest() */
+	dsb(ish);
 	__tlbi(alle1is);
 
 	/*
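Every call site converted here passes nsh = false, since these invalidations are broadcast with Inner-Shareable TLBIs. To illustrate what the true path is for, here is a hypothetical sketch of a CPU-local flush; the function name, body and TLBI flavours are illustrative, not part of this commit:

	/*
	 * Hypothetical sketch only: a local (non-broadcast) Stage-2
	 * invalidation, where the dsb(nsh) taken inside
	 * __tlb_switch_to_guest() is sufficient.
	 */
	void __kvm_tlb_flush_vmid_ipa_nsh(struct kvm_s2_mmu *mmu,
					  phys_addr_t ipa, int level)
	{
		struct tlb_inv_context cxt;

		/* Switch to requested VMID, local synchronisation only */
		__tlb_switch_to_guest(mmu, &cxt, true);

		/* Non-shareable counterparts of the broadcast sequence */
		ipa >>= 12;
		__tlbi_level(ipas2e1, ipa, level);
		dsb(nsh);
		__tlbi(vmalle1);
		dsb(nsh);
		isb();

		__tlb_switch_to_host(&cxt);
	}

Pairing nsh barriers with non-IS TLBI operations keeps both the publication of the page table updates and the completion of the invalidation local to the CPU, which is the point of threading the parameter through.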
