Commit 07a289a

zcxGGmu authored and avpatel committed
RISC-V: KVM: Avoid re-acquiring memslot in kvm_riscv_gstage_map()
The caller has already passed in the memslot, yet kvm_riscv_gstage_map() looks the memslot up again in two places, via kvm_faultin_pfn() and mark_page_dirty(). Replace these calls with __kvm_faultin_pfn() and mark_page_dirty_in_slot(), which take the memslot directly.

Signed-off-by: Quan Zhou <zhouquan@iscas.ac.cn>
Reviewed-by: Anup Patel <anup@brainfault.org>
Link: https://lore.kernel.org/r/50989f0a02790f9d7dc804c2ade6387c4e7fbdbc.1749634392.git.zhouquan@iscas.ac.cn
Signed-off-by: Anup Patel <anup@brainfault.org>
1 parent fce11b6 commit 07a289a

1 file changed: arch/riscv/kvm/mmu.c (3 additions & 2 deletions)
@@ -391,7 +391,8 @@ int kvm_riscv_mmu_map(struct kvm_vcpu *vcpu, struct kvm_memory_slot *memslot,
 		return -EFAULT;
 	}
 
-	hfn = kvm_faultin_pfn(vcpu, gfn, is_write, &writable, &page);
+	hfn = __kvm_faultin_pfn(memslot, gfn, is_write ? FOLL_WRITE : 0,
+				&writable, &page);
 	if (hfn == KVM_PFN_ERR_HWPOISON) {
 		send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva,
 				vma_pageshift, current);
@@ -413,7 +414,7 @@ int kvm_riscv_mmu_map(struct kvm_vcpu *vcpu, struct kvm_memory_slot *memslot,
 		goto out_unlock;
 
 	if (writable) {
-		mark_page_dirty(kvm, gfn);
+		mark_page_dirty_in_slot(kvm, memslot, gfn);
 		ret = kvm_riscv_gstage_map_page(&gstage, pcache, gpa, hfn << PAGE_SHIFT,
 						vma_pagesize, false, true, out_map);
 	} else {
