Commit c971013

yamahata authored and bonzini committed
KVM: x86/mmu: Pass full 64-bit error code when handling page faults
Plumb the full 64-bit error code throughout the page fault handling code
so that KVM can use the upper 32 bits, e.g. SNP's PFERR_GUEST_ENC_MASK
will be used to determine whether or not a fault is private vs. shared.

Note, passing the 64-bit error code to FNAME(walk_addr)() does NOT change
the behavior of permission_fault() when invoked in the page fault path, as
KVM explicitly clears PFERR_IMPLICIT_ACCESS in kvm_mmu_page_fault().

Continue passing '0' from the async #PF worker, as guest_memfd and thus
private memory doesn't support async page faults.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
[mdr: drop references/changes on rebase, update commit message]
Signed-off-by: Michael Roth <michael.roth@amd.com>
[sean: drop truncation in call to FNAME(walk_addr)(), rewrite changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-ID: <20240228024147.41573-5-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
1 parent dee281e commit c971013

3 files changed

Lines changed: 4 additions & 5 deletions

arch/x86/kvm/mmu/mmu.c

Lines changed: 1 addition & 2 deletions
@@ -5799,8 +5799,7 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
 	}
 
 	if (r == RET_PF_INVALID) {
-		r = kvm_mmu_do_page_fault(vcpu, cr2_or_gpa,
-					  lower_32_bits(error_code), false,
+		r = kvm_mmu_do_page_fault(vcpu, cr2_or_gpa, error_code, false,
 					  &emulation_type);
 		if (KVM_BUG_ON(r == RET_PF_INVALID, vcpu->kvm))
 			return -EIO;

arch/x86/kvm/mmu/mmu_internal.h

Lines changed: 2 additions & 2 deletions
@@ -190,7 +190,7 @@ static inline bool is_nx_huge_page_enabled(struct kvm *kvm)
 struct kvm_page_fault {
 	/* arguments to kvm_mmu_do_page_fault.  */
 	const gpa_t addr;
-	const u32 error_code;
+	const u64 error_code;
 	const bool prefetch;
 
 	/* Derived from error_code.  */
@@ -288,7 +288,7 @@ static inline void kvm_mmu_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
 }
 
 static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
-					u32 err, bool prefetch, int *emulation_type)
+					u64 err, bool prefetch, int *emulation_type)
 {
 	struct kvm_page_fault fault = {
 		.addr = cr2_or_gpa,

arch/x86/kvm/mmu/mmutrace.h

Lines changed: 1 addition & 1 deletion
@@ -260,7 +260,7 @@ TRACE_EVENT(
 	TP_STRUCT__entry(
 		__field(int, vcpu_id)
 		__field(gpa_t, cr2_or_gpa)
-		__field(u32, error_code)
+		__field(u64, error_code)
 		__field(u64 *, sptep)
 		__field(u64, old_spte)
 		__field(u64, new_spte)
