Commit c9b6256
KVM: x86/mmu: Dedup logic for detecting TLB flushes on leaf SPTE changes
Now that the shadow MMU and TDP MMU have identical logic for detecting
required TLB flushes when updating SPTEs, move said logic to a helper so
that the TDP MMU code can benefit from the comments that are currently
exclusive to the shadow MMU.

No functional change intended.

Link: https://lore.kernel.org/r/20241011021051.1557902-16-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

Parent: 51192eb

3 files changed: 30 additions & 20 deletions

arch/x86/kvm/mmu/mmu.c (1 addition & 18 deletions)

@@ -488,23 +488,6 @@ static void mmu_spte_set(u64 *sptep, u64 new_spte)
 /* Rules for using mmu_spte_update:
  * Update the state bits, it means the mapped pfn is not changed.
  *
- * If the MMU-writable flag is cleared, i.e. the SPTE is write-protected for
- * write-tracking, remote TLBs must be flushed, even if the SPTE was read-only,
- * as KVM allows stale Writable TLB entries to exist.  When dirty logging, KVM
- * flushes TLBs based on whether or not dirty bitmap/ring entries were reaped,
- * not whether or not SPTEs were modified, i.e. only the write-tracking case
- * needs to flush at the time the SPTEs is modified, before dropping mmu_lock.
- *
- * Don't flush if the Accessed bit is cleared, as access tracking tolerates
- * false negatives, and the one path that does care about TLB flushes,
- * kvm_mmu_notifier_clear_flush_young(), flushes if a young SPTE is found, i.e.
- * doesn't rely on lower helpers to detect the need to flush.
- *
- * Lastly, don't flush if the Dirty bit is cleared, as KVM unconditionally
- * flushes when enabling dirty logging (see kvm_mmu_slot_apply_flags()), and
- * when clearing dirty logs, KVM flushes based on whether or not dirty entries
- * were reaped from the bitmap/ring, not whether or not dirty SPTEs were found.
- *
  * Returns true if the TLB needs to be flushed
  */
 static bool mmu_spte_update(u64 *sptep, u64 new_spte)
@@ -527,7 +510,7 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
 	WARN_ON_ONCE(!is_shadow_present_pte(old_spte) ||
 		     spte_to_pfn(old_spte) != spte_to_pfn(new_spte));
 
-	return is_mmu_writable_spte(old_spte) && !is_mmu_writable_spte(new_spte);
+	return leaf_spte_change_needs_tlb_flush(old_spte, new_spte);
 }
 
 /*

arch/x86/kvm/mmu/spte.h (28 additions & 0 deletions)

@@ -467,6 +467,34 @@ static inline bool is_mmu_writable_spte(u64 spte)
 	return spte & shadow_mmu_writable_mask;
 }
 
+/*
+ * If the MMU-writable flag is cleared, i.e. the SPTE is write-protected for
+ * write-tracking, remote TLBs must be flushed, even if the SPTE was read-only,
+ * as KVM allows stale Writable TLB entries to exist.  When dirty logging, KVM
+ * flushes TLBs based on whether or not dirty bitmap/ring entries were reaped,
+ * not whether or not SPTEs were modified, i.e. only the write-tracking case
+ * needs to flush at the time the SPTEs is modified, before dropping mmu_lock.
+ *
+ * Don't flush if the Accessed bit is cleared, as access tracking tolerates
+ * false negatives, and the one path that does care about TLB flushes,
+ * kvm_mmu_notifier_clear_flush_young(), flushes if a young SPTE is found, i.e.
+ * doesn't rely on lower helpers to detect the need to flush.
+ *
+ * Lastly, don't flush if the Dirty bit is cleared, as KVM unconditionally
+ * flushes when enabling dirty logging (see kvm_mmu_slot_apply_flags()), and
+ * when clearing dirty logs, KVM flushes based on whether or not dirty entries
+ * were reaped from the bitmap/ring, not whether or not dirty SPTEs were found.
+ *
+ * Note, this logic only applies to shadow-present leaf SPTEs.  The caller is
+ * responsible for checking that the old SPTE is shadow-present, and is also
+ * responsible for determining whether or not a TLB flush is required when
+ * modifying a shadow-present non-leaf SPTE.
+ */
+static inline bool leaf_spte_change_needs_tlb_flush(u64 old_spte, u64 new_spte)
+{
+	return is_mmu_writable_spte(old_spte) && !is_mmu_writable_spte(new_spte);
+}
+
 static inline u64 get_mmio_spte_generation(u64 spte)
 {
 	u64 gen;

arch/x86/kvm/mmu/tdp_mmu.c (1 addition & 2 deletions)

@@ -1034,8 +1034,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 		return RET_PF_RETRY;
 	else if (is_shadow_present_pte(iter->old_spte) &&
 		 (!is_last_spte(iter->old_spte, iter->level) ||
-		  WARN_ON_ONCE(is_mmu_writable_spte(iter->old_spte) &&
-			       !is_mmu_writable_spte(new_spte))))
+		  WARN_ON_ONCE(leaf_spte_change_needs_tlb_flush(iter->old_spte, new_spte))))
 		kvm_flush_remote_tlbs_gfn(vcpu->kvm, iter->gfn, iter->level);
 
 	/*
