Commit 16459fe

andyhhpakpm00 authored and committed
x86/kfence: fix booting on 32bit non-PAE systems
The original patch inverted the PTE unconditionally to avoid L1TF-vulnerable PTEs, but Linux doesn't make this adjustment in 2-level paging. Adjust the logic to use the flip_protnone_guard() helper, which is a nop on 2-level paging but inverts the address bits in all other paging modes.

This doesn't matter for the Xen aspect of the original change. Linux no longer supports running 32bit PV under Xen, and Xen doesn't support running any 32bit PV guests without using PAE paging.

Link: https://lkml.kernel.org/r/20260126211046.2096622-1-andrew.cooper3@citrix.com
Fixes: b505f19 ("x86/kfence: avoid writing L1TF-vulnerable PTEs")
Reported-by: Ryusuke Konishi <konishi.ryusuke@gmail.com>
Closes: https://lore.kernel.org/lkml/CAKFNMokwjw68ubYQM9WkzOuH51wLznHpEOMSqtMoV1Rn9JV_gw@mail.gmail.com/
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Tested-by: Ryusuke Konishi <konishi.ryusuke@gmail.com>
Tested-by: Borislav Petkov (AMD) <bp@alien8.de>
Cc: Alexander Potapenko <glider@google.com>
Cc: Marco Elver <elver@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Jann Horn <jannh@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
1 parent bd58782 commit 16459fe

1 file changed, 4 additions and 3 deletions

arch/x86/include/asm/kfence.h
@@ -42,7 +42,7 @@ static inline bool kfence_protect_page(unsigned long addr, bool protect)
 {
 	unsigned int level;
 	pte_t *pte = lookup_address(addr, &level);
-	pteval_t val;
+	pteval_t val, new;
 
 	if (WARN_ON(!pte || level != PG_LEVEL_4K))
 		return false;
@@ -57,11 +57,12 @@ static inline bool kfence_protect_page(unsigned long addr, bool protect)
 		return true;
 
 	/*
-	 * Otherwise, invert the entire PTE. This avoids writing out an
+	 * Otherwise, flip the Present bit, taking care to avoid writing an
 	 * L1TF-vulnerable PTE (not present, without the high address bits
 	 * set).
 	 */
-	set_pte(pte, __pte(~val));
+	new = val ^ _PAGE_PRESENT;
+	set_pte(pte, __pte(flip_protnone_guard(val, new, PTE_PFN_MASK)));
 
 	/*
 	 * If the page was protected (non-present) and we're making it
