Commit ee188de

KVM: Check for empty mask of harvested dirty ring entries in caller
When resetting a dirty ring, explicitly check that there is work to be
done before calling kvm_reset_dirty_gfn(), e.g. if no harvested entries
are found and/or on the loop's first iteration, and delete the extremely
misleading comment "This is only needed to make compilers happy".  KVM
absolutely relies on mask being zero-initialized, i.e. the comment is an
outright lie.  Furthermore, the compiler is right to complain that KVM is
calling a function with uninitialized data, as there are no guarantees
the implementation details of kvm_reset_dirty_gfn() will be visible to
kvm_dirty_ring_reset().

While the flaw could be fixed by simply deleting (or rewording) the
comment, and duplicating the check is unfortunate, checking mask in the
caller will allow for additional cleanups.

Opportunistically drop the zero-initialization of cur_slot and
cur_offset.  If a bug were introduced where either the slot or offset
was consumed before mask is set to a non-zero value, then it is highly
desirable for the compiler (or some other sanitizer) to yell.

Cc: Peter Xu <peterx@redhat.com>
Cc: Yan Zhao <yan.y.zhao@intel.com>
Cc: Maxim Levitsky <mlevitsk@redhat.com>
Cc: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Pankaj Gupta <pankaj.gupta@amd.com>
Reviewed-by: James Houghton <jthoughton@google.com>
Reviewed-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20250516213540.2546077-5-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
1 parent 1333c35 commit ee188de

1 file changed

Lines changed: 35 additions & 9 deletions

virt/kvm/dirty_ring.c

@@ -55,9 +55,6 @@ static void kvm_reset_dirty_gfn(struct kvm *kvm, u32 slot, u64 offset, u64 mask)
 	struct kvm_memory_slot *memslot;
 	int as_id, id;
 
-	if (!mask)
-		return;
-
 	as_id = slot >> 16;
 	id = (u16)slot;
 
@@ -108,15 +105,24 @@ static inline bool kvm_dirty_gfn_harvested(struct kvm_dirty_gfn *gfn)
 int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring,
 			 int *nr_entries_reset)
 {
+	/*
+	 * To minimize mmu_lock contention, batch resets for harvested entries
+	 * whose gfns are in the same slot, and are within N frame numbers of
+	 * each other, where N is the number of bits in an unsigned long.  For
+	 * simplicity, process the current set of entries when the next entry
+	 * can't be included in the batch.
+	 *
+	 * Track the current batch slot, the gfn offset into the slot for the
+	 * batch, and the bitmask of gfns that need to be reset (relative to
+	 * offset).  Note, the offset may be adjusted backwards, e.g. so that
+	 * a sequence of gfns X, X-1, ... X-N-1 can be batched.
+	 */
 	u32 cur_slot, next_slot;
 	u64 cur_offset, next_offset;
-	unsigned long mask;
+	unsigned long mask = 0;
 	struct kvm_dirty_gfn *entry;
 	bool first_round = true;
 
-	/* This is only needed to make compilers happy */
-	cur_slot = cur_offset = mask = 0;
-
 	while (likely((*nr_entries_reset) < INT_MAX)) {
 		if (signal_pending(current))
 			return -EINTR;
@@ -164,14 +170,34 @@ int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring,
 				continue;
 			}
 		}
-		kvm_reset_dirty_gfn(kvm, cur_slot, cur_offset, mask);
+
+		/*
+		 * Reset the slot for all the harvested entries that have been
+		 * gathered, but not yet fully processed.
+		 */
+		if (mask)
+			kvm_reset_dirty_gfn(kvm, cur_slot, cur_offset, mask);
+
+		/*
+		 * The current slot was reset or this is the first harvested
+		 * entry, (re)initialize the metadata.
+		 */
 		cur_slot = next_slot;
 		cur_offset = next_offset;
 		mask = 1;
 		first_round = false;
 	}
 
-	kvm_reset_dirty_gfn(kvm, cur_slot, cur_offset, mask);
+	/*
+	 * Perform a final reset if there are harvested entries that haven't
+	 * been processed, which is guaranteed if at least one harvested was
+	 * found.  The loop only performs a reset when the "next" entry can't
+	 * be batched with the "current" entry(s), and that reset processes the
+	 * _current_ entry(s); i.e. the last harvested entry, a.k.a. next, will
+	 * always be left pending.
+	 */
+	if (mask)
+		kvm_reset_dirty_gfn(kvm, cur_slot, cur_offset, mask);
 
 	/*
 	 * The request KVM_REQ_DIRTY_RING_SOFT_FULL will be cleared
