Commit b3d5dc6

sean-jc authored and bonzini committed
KVM: x86/mmu: Use synthetic page fault error code to indicate private faults

Add and use a synthetic, KVM-defined page fault error code to indicate
whether a fault is to private vs. shared memory. TDX and SNP have
different mechanisms for reporting private vs. shared, and KVM's
software-protected VMs have no mechanism at all. Usurp an error code
flag to avoid having to plumb another parameter to kvm_mmu_page_fault()
and friends.

Alternatively, KVM could borrow AMD's PFERR_GUEST_ENC_MASK, i.e. set it
for TDX and software-protected VMs as appropriate, but that would
require *clearing* the flag for SEV and SEV-ES VMs, which support
encrypted memory at the hardware layer, but don't utilize private
memory at the KVM layer.

Opportunistically add a comment to call out that the logic for
software-protected VMs is (and was before this commit) broken for
nested MMUs, i.e. for nested TDP, as the GPA is an L2 GPA. Punt on
trying to play nice with nested MMUs as there is a _lot_ of
functionality that simply doesn't work for software-protected VMs,
e.g. all of the paths where KVM accesses guest memory need to be
updated to be aware of private vs. shared memory.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20240228024147.41573-6-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
1 parent 7bdbb82 commit b3d5dc6

3 files changed: 21 additions & 2 deletions

arch/x86/include/asm/kvm_host.h (6 additions, 1 deletion)

```diff
@@ -273,7 +273,12 @@ enum x86_intercept_stage;
  * when emulating instructions that triggers implicit access.
  */
 #define PFERR_IMPLICIT_ACCESS	BIT_ULL(48)
-#define PFERR_SYNTHETIC_MASK	(PFERR_IMPLICIT_ACCESS)
+/*
+ * PRIVATE_ACCESS is a KVM-defined flag used to indicate that a fault occurred
+ * when the guest was accessing private memory.
+ */
+#define PFERR_PRIVATE_ACCESS	BIT_ULL(49)
+#define PFERR_SYNTHETIC_MASK	(PFERR_IMPLICIT_ACCESS | PFERR_PRIVATE_ACCESS)
 
 #define PFERR_NESTED_GUEST_PAGE	(PFERR_GUEST_PAGE_MASK |	\
 				 PFERR_WRITE_MASK |		\
```

arch/x86/kvm/mmu/mmu.c (14 additions, 0 deletions)

```diff
@@ -5798,6 +5798,20 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
 	if (WARN_ON_ONCE(!VALID_PAGE(vcpu->arch.mmu->root.hpa)))
 		return RET_PF_RETRY;
 
+	/*
+	 * Except for reserved faults (emulated MMIO is shared-only), set the
+	 * PFERR_PRIVATE_ACCESS flag for software-protected VMs based on the
+	 * gfn's current attributes, which are the source of truth for such
+	 * VMs.  Note, this is wrong for nested MMUs as the GPA is an L2 GPA,
+	 * but KVM doesn't currently support nested virtualization (among many
+	 * other things) for software-protected VMs.
+	 */
+	if (IS_ENABLED(CONFIG_KVM_SW_PROTECTED_VM) &&
+	    !(error_code & PFERR_RSVD_MASK) &&
+	    vcpu->kvm->arch.vm_type == KVM_X86_SW_PROTECTED_VM &&
+	    kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(cr2_or_gpa)))
+		error_code |= PFERR_PRIVATE_ACCESS;
+
 	r = RET_PF_INVALID;
 	if (unlikely(error_code & PFERR_RSVD_MASK)) {
 		r = handle_mmio_page_fault(vcpu, cr2_or_gpa, direct);
```

arch/x86/kvm/mmu/mmu_internal.h (1 addition, 1 deletion)

```diff
@@ -306,7 +306,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		.max_level = KVM_MAX_HUGEPAGE_LEVEL,
 		.req_level = PG_LEVEL_4K,
 		.goal_level = PG_LEVEL_4K,
-		.is_private = kvm_mem_is_private(vcpu->kvm, cr2_or_gpa >> PAGE_SHIFT),
+		.is_private = err & PFERR_PRIVATE_ACCESS,
 	};
 	int r;
 
```
