Commit 7733bc7

Author: Russell King (Oracle)
ARM: fix hash_name() fault
Zizhi Wo reports: "During the execution of hash_name()->load_unaligned_zeropad(), a potential memory access beyond the PAGE boundary may occur, for example when the filename length is near the PAGE_SIZE boundary. This triggers a page fault, which leads to a call to do_page_fault()->mmap_read_trylock(). If we can't acquire the lock, we have to fall back to the mmap_read_lock() path, which calls might_sleep(). This breaks RCU semantics, because path lookup occurs under an RCU read-side critical section."

This is seen with CONFIG_DEBUG_ATOMIC_SLEEP=y and CONFIG_KFENCE=y.

Kernel addresses (with the exception of the vectors/kuser helper page) do not have VMAs associated with them. If the vectors/kuser helper page faults, then there are two possibilities:

1. If the fault happened while in kernel mode, then we're basically dead, because the CPU won't be able to vector through this page to handle the fault.

2. If the fault happened while in user mode, that means the page was protected from user access, and we want to fault anyway.

Thus, we can handle kernel addresses from any context entirely separately, without going anywhere near the mmap lock. This gives us an entirely non-sleeping path for all kernel-mode kernel-address faults.

As we handle the kernel address faults before interrupts are enabled, this change has the side effect of improving the branch predictor hardening, but does not completely solve the issue.

Reported-by: Zizhi Wo <wozizhi@huaweicloud.com>
Reported-by: Xie Yuanbin <xieyuanbin1@huawei.com>
Link: https://lore.kernel.org/r/20251126090505.3057219-1-wozizhi@huaweicloud.com
Reviewed-by: Xie Yuanbin <xieyuanbin1@huawei.com>
Tested-by: Xie Yuanbin <xieyuanbin1@huawei.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Commit: 7733bc7 (1 parent: 40b466d)

1 file changed: arch/arm/mm/fault.c (35 additions & 0 deletions)
@@ -261,6 +261,35 @@ static inline bool ttbr0_usermode_access_allowed(struct pt_regs *regs)
 }
 #endif
 
+static int __kprobes
+do_kernel_address_page_fault(struct mm_struct *mm, unsigned long addr,
+			     unsigned int fsr, struct pt_regs *regs)
+{
+	if (user_mode(regs)) {
+		/*
+		 * Fault from user mode for a kernel space address. User mode
+		 * should not be faulting in kernel space, which includes the
+		 * vector/khelper page. Send a SIGSEGV.
+		 */
+		__do_user_fault(addr, fsr, SIGSEGV, SEGV_MAPERR, regs);
+	} else {
+		/*
+		 * Fault from kernel mode. Enable interrupts if they were
+		 * enabled in the parent context. Section (upper page table)
+		 * translation faults are handled via do_translation_fault(),
+		 * so we will only get here for a non-present kernel space
+		 * PTE or PTE permission fault. This may happen in exceptional
+		 * circumstances and need the fixup tables to be walked.
+		 */
+		if (interrupts_enabled(regs))
+			local_irq_enable();
+
+		__do_kernel_fault(mm, addr, fsr, regs);
+	}
+
+	return 0;
+}
+
 static int __kprobes
 do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 {
@@ -274,6 +303,12 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	if (kprobe_page_fault(regs, fsr))
 		return 0;
 
+	/*
+	 * Handle kernel addresses faults separately, which avoids touching
+	 * the mmap lock from contexts that are not able to sleep.
+	 */
+	if (addr >= TASK_SIZE)
+		return do_kernel_address_page_fault(mm, addr, fsr, regs);
 
 	/* Enable interrupts if they were enabled in the parent context. */
 	if (interrupts_enabled(regs))
