
Commit e287e43

KVM: x86/mmu: Check yielded_gfn for forward progress iff resched is needed
Swap the order of the checks in tdp_mmu_iter_cond_resched() so that KVM checks to see if a resched is needed _before_ checking to see if yielding must be disallowed to guarantee forward progress. Iterating over TDP MMU SPTEs is a hot path, e.g. tearing down a root can touch millions of SPTEs, and not needing to reschedule is by far the common case. On the other hand, disallowing yielding because forward progress has not been made is a very rare case. Returning early for the common case (no resched) effectively reduces the number of checks from 2 to 1 for the common case, and should make the code slightly more predictable for the CPU.

To resolve a weird conundrum where the forward progress check currently returns false, but the need resched check subtly returns iter->yielded, which _should_ be false (enforced by a WARN), return false unconditionally (which might also help make the sequence more predictable). If KVM has a bug where iter->yielded is left dangling, continuing to yield is neither right nor wrong; it was simply an artifact of how the original code was written.

Unconditionally returning false when yielding is unnecessary or unwanted will also allow extracting the "should resched" logic to a separate helper in a future patch.

Cc: David Matlack <dmatlack@google.com>
Reviewed-by: James Houghton <jthoughton@google.com>
Link: https://lore.kernel.org/r/20241031170633.1502783-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
1 parent 35ef80e commit e287e43

1 file changed: arch/x86/kvm/mmu/tdp_mmu.c (14 additions & 14 deletions)
@@ -658,29 +658,29 @@ static inline bool __must_check tdp_mmu_iter_cond_resched(struct kvm *kvm,
 {
 	WARN_ON_ONCE(iter->yielded);
 
+	if (!need_resched() && !rwlock_needbreak(&kvm->mmu_lock))
+		return false;
+
 	/* Ensure forward progress has been made before yielding. */
 	if (iter->next_last_level_gfn == iter->yielded_gfn)
 		return false;
 
-	if (need_resched() || rwlock_needbreak(&kvm->mmu_lock)) {
-		if (flush)
-			kvm_flush_remote_tlbs(kvm);
-
-		rcu_read_unlock();
+	if (flush)
+		kvm_flush_remote_tlbs(kvm);
 
-		if (shared)
-			cond_resched_rwlock_read(&kvm->mmu_lock);
-		else
-			cond_resched_rwlock_write(&kvm->mmu_lock);
+	rcu_read_unlock();
 
-		rcu_read_lock();
+	if (shared)
+		cond_resched_rwlock_read(&kvm->mmu_lock);
+	else
+		cond_resched_rwlock_write(&kvm->mmu_lock);
 
-		WARN_ON_ONCE(iter->gfn > iter->next_last_level_gfn);
+	rcu_read_lock();
 
-		iter->yielded = true;
-	}
+	WARN_ON_ONCE(iter->gfn > iter->next_last_level_gfn);
 
-	return iter->yielded;
+	iter->yielded = true;
+	return true;
 }
 
 static inline gfn_t tdp_mmu_max_gfn_exclusive(void)
