
Commit 01fb93a

ahunter6 authored and hansendc committed
x86/tdx: Skip clearing reclaimed pages unless X86_BUG_TDX_PW_MCE is present
Avoid clearing reclaimed TDX private pages unless the platform is affected
by the X86_BUG_TDX_PW_MCE erratum. This significantly reduces VM shutdown
time on unaffected systems.

Background

KVM currently clears reclaimed TDX private pages using MOVDIR64B, which:

  - Clears the TD Owner bit (which identifies TDX private memory) and
    integrity metadata without triggering integrity violations.
  - Clears poison from cache lines without consuming it, avoiding MCEs on
    access (refer to TDX Module Base spec. 348549-006US, section 6.5
    "Handling Machine Check Events during Guest TD Operation").

The TDX module also uses MOVDIR64B to initialize private pages before use.
If cache flushing is needed, it sets TDX_FEATURES.CLFLUSH_BEFORE_ALLOC.
However, KVM currently flushes unconditionally; refer to commit 94c477a
("x86/virt/tdx: Add SEAMCALL wrappers to add TD private pages").

In contrast, when private pages are reclaimed, the TDX module handles
flushing via the TDH.PHYMEM.CACHE.WB SEAMCALL.

Problem

Clearing all private pages during VM shutdown is costly. For guests with a
large amount of memory, it can take minutes.

Solution

The TDX Module Base Architecture spec. documents that private pages
reclaimed from a TD should be initialized using MOVDIR64B, in order to
avoid integrity violation or TD bit mismatch detection when later being
read using a shared HKID; refer to the April 2025 spec., "Page
Initialization" in section "8.6.2. Platforms not Using ACT: Required Cache
Flush and Initialization by the Host VMM".

That is an overstatement and will be clarified in coming versions of the
spec. In fact, as outlined in "Table 16.2: Non-ACT Platforms Checks on
Memory" and "Table 16.3: Non-ACT Platforms Checks on Memory Reads in Li
Mode" in the same spec, there is no issue accessing such reclaimed pages
using a shared key that does not have integrity enabled.

Linux always uses KeyID 0, which never has integrity enabled. KeyID 0 is
also the TME KeyID, which disallows integrity; refer to the "TME
Policy/Encryption Algorithm" bit description in the "Intel Architecture
Memory Encryption Technologies" spec, version 1.6, April 2025. So there is
no need to clear pages to avoid integrity violations.

There remains a risk of poison consumption. However, in the context of
TDX, it is expected that there would be a machine check associated with
the original poisoning. On some platforms that results in a panic.
However, platforms may support the "SEAM_NR" machine check capability, in
which case the Linux machine check handler marks the page as poisoned,
which prevents it from being allocated anymore; refer to commit 7911f14
("x86/mce: Implement recovery for errors in TDX/SEAM non-root mode").

Improvement

By skipping the clearing step on unaffected platforms, shutdown time can
improve by up to 40%. On platforms with the X86_BUG_TDX_PW_MCE erratum
(SPR and EMR), continue clearing, because these platforms may trigger
poison on partial writes to previously-private pages, even with KeyID 0;
refer to commit 1e536e1 ("x86/cpu: Detect TDX partial write machine check
erratum").

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Kirill A. Shutemov <kas@kernel.org>
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Binbin Wu <binbin.wu@linux.intel.com>
Acked-by: Kai Huang <kai.huang@intel.com>
Acked-by: Vishal Annapurve <vannapurve@google.com>
Link: https://lore.kernel.org/all/20250819155811.136099-4-adrian.hunter%40intel.com
1 parent a27b008 commit 01fb93a

1 file changed

Lines changed: 7 additions & 3 deletions


arch/x86/virt/vmx/tdx/tdx.c

@@ -633,15 +633,19 @@ static int tdmrs_set_up_pamt_all(struct tdmr_info_list *tdmr_list,
 }
 
 /*
- * Convert TDX private pages back to normal by using MOVDIR64B to
- * clear these pages. Note this function doesn't flush cache of
- * these TDX private pages. The caller should make sure of that.
+ * Convert TDX private pages back to normal by using MOVDIR64B to clear these
+ * pages. Typically, any write to the page will convert it from TDX private back
+ * to normal kernel memory. Systems with the X86_BUG_TDX_PW_MCE erratum need to
+ * do the conversion explicitly via MOVDIR64B.
  */
 static void tdx_quirk_reset_paddr(unsigned long base, unsigned long size)
 {
 	const void *zero_page = (const void *)page_address(ZERO_PAGE(0));
 	unsigned long phys, end;
 
+	if (!boot_cpu_has_bug(X86_BUG_TDX_PW_MCE))
+		return;
+
 	end = base + size;
 	for (phys = base; phys < end; phys += 64)
 		movdir64b(__va(phys), zero_page);
