
Commit 05b44ae

KAGA-KOKO authored and ingomolnar committed
rseq: Implement fast path for exit to user
Implement the actual logic for handling RSEQ updates in a fast path after handling the TIF work and at the point where the task is actually returning to user space. This is the right point to do that because at this point the CPU and the MM CID are stable and can no longer change due to yet another reschedule; that can still happen while the task handles it via TIF_NOTIFY_RESUME in resume_user_mode_work(), which is invoked from the exit to user mode work loop.

The function is invoked after the TIF work is handled and runs with interrupts disabled, which means it cannot resolve page faults. It therefore disables page faults, and in case the access to the user space memory faults, it:

  - notes the fail in the event struct
  - raises TIF_NOTIFY_RESUME
  - returns false to the caller

The caller has to go back to the TIF work, which runs with interrupts enabled and can therefore resolve the page faults. This happens mostly on fork() when the memory is marked COW.

If the user memory inspection finds invalid data, the function returns false as well and sets the fatal flag in the event struct along with TIF_NOTIFY_RESUME. The slow path notify handler has to evaluate that flag and terminate the task with SIGSEGV as documented.

The initial decision to invoke any of this is based on one flag in the event struct: @sched_switch. The decision in pseudo ASM:

    load  tsk::event::sched_switch
    jnz   inspect_user_space
    mov   $0, tsk::event::events
    ...
    leave

So for the common case where the task was not scheduled out, this really boils down to three instructions before going out, if the compiler is not completely stupid (and yes, some of them are).

If the condition is true, then it checks whether the CPU ID or the MM CID has changed. If so, the CPU/MM IDs have to be updated and are thereby cached for the next round. The update unconditionally retrieves the user space critical section address to spare another user*begin/end() pair. If that is not zero and tsk::event::user_irq is set, the critical section is analyzed and acted upon. If it is either zero or the entry came via syscall, the critical section analysis is skipped.

If the comparison is false, then the critical section has to be analyzed because the event flag is then only true when the entry from user space was via interrupt.

This is provided without the actual hookup to let reviewers focus on the implementation details. The hookup happens in the next step.

Note: As with quite a few other optimizations this depends on the generic entry infrastructure and is not enabled to be sucked into random architecture implementations.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://patch.msgid.link/20251027084307.638929615@linutronix.de
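The hookup is intentionally left out of this commit, but the calling contract described above can be sketched. The following is a hypothetical caller for illustration only: the function and macro names are borrowed from the generic entry code, and the actual wiring lands in the next patch.

    /* Hypothetical sketch - not part of this commit */
    static __always_inline void exit_to_user_mode_prepare(struct pt_regs *regs)
    {
            unsigned long ti_work;

            do {
                    ti_work = read_thread_flags();
                    if (unlikely(ti_work & EXIT_TO_USER_MODE_WORK))
                            ti_work = exit_to_user_mode_loop(regs, ti_work);

                    /*
                     * Interrupts are disabled from here on. If the rseq
                     * fast path faults or finds invalid data it raises
                     * TIF_NOTIFY_RESUME and returns true, so the TIF work
                     * loop above runs again with interrupts enabled and
                     * can resolve the page fault or terminate the task.
                     */
            } while (rseq_exit_to_user_mode_restart(regs));
    }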
1 parent 39a1675 commit 05b44ae

3 files changed

Lines changed: 135 additions & 3 deletions

File tree

include/linux/rseq_entry.h

Lines changed: 130 additions & 3 deletions
@@ -10,6 +10,7 @@ struct rseq_stats {
 	unsigned long	exit;
 	unsigned long	signal;
 	unsigned long	slowpath;
+	unsigned long	fastpath;
 	unsigned long	ids;
 	unsigned long	cs;
 	unsigned long	clear;
@@ -245,12 +246,13 @@ rseq_update_user_cs(struct task_struct *t, struct pt_regs *regs, unsigned long c
 {
 	struct rseq_cs __user *ucs = (struct rseq_cs __user *)(unsigned long)csaddr;
 	unsigned long ip = instruction_pointer(regs);
+	unsigned long tasksize = TASK_SIZE;
 	u64 start_ip, abort_ip, offset;
 	u32 usig, __user *uc_sig;
 
 	rseq_stat_inc(rseq_stats.cs);
 
-	if (unlikely(csaddr >= TASK_SIZE)) {
+	if (unlikely(csaddr >= tasksize)) {
 		t->rseq.event.fatal = true;
 		return false;
 	}
@@ -287,7 +289,7 @@ rseq_update_user_cs(struct task_struct *t, struct pt_regs *regs, unsigned long c
 	 * in TLS::rseq::rseq_cs. An RSEQ abort would then evade ROP
 	 * protection.
 	 */
-	if (abort_ip >= TASK_SIZE || abort_ip < sizeof(*uc_sig))
+	if (unlikely(abort_ip >= tasksize || abort_ip < sizeof(*uc_sig)))
 		goto die;
 
 	/* The address is guaranteed to be >= 0 and < TASK_SIZE */
@@ -397,6 +399,128 @@ static rseq_inline bool rseq_update_usr(struct task_struct *t, struct pt_regs *r
 	return rseq_update_user_cs(t, regs, csaddr);
 }
 
+/*
+ * If you want to use this then convert your architecture to the generic
+ * entry code. I'm tired of building workarounds for people who can't be
+ * bothered to make the maintenance of generic infrastructure less
+ * burdensome. Just sucking everything into the architecture code and
+ * thereby making others chase the horrible hacks and keep them working is
+ * neither acceptable nor sustainable.
+ */
+#ifdef CONFIG_GENERIC_ENTRY
+
+/*
+ * This is inlined into the exit path because:
+ *
+ * 1) It's a one time comparison in the fast path when there is no event to
+ *    handle
+ *
+ * 2) The access to the user space rseq memory (TLS) is unlikely to fault
+ *    so the straight inline operation is:
+ *
+ *	- Four 32-bit stores only if CPU ID/ MM CID need to be updated
+ *	- One 64-bit load to retrieve the critical section address
+ *
+ * 3) In the unlikely case that the critical section address is != NULL:
+ *
+ *	- One 64-bit load to retrieve the start IP
+ *	- One 64-bit load to retrieve the offset for calculating the end
+ *	- One 64-bit load to retrieve the abort IP
+ *	- One 64-bit load to retrieve the signature
+ *	- One store to clear the critical section address
+ *
+ * The non-debug case implements only the minimal required checking. It
+ * provides protection against a rogue abort IP in kernel space, which
+ * would be exploitable at least on x86, and also against a rogue CS
+ * descriptor by checking the signature at the abort IP. Any fallout from
+ * invalid critical section descriptors is a user space problem. The debug
+ * case provides the full set of checks and terminates the task if a
+ * condition is not met.
+ *
+ * In case of a fault or an invalid value, this sets TIF_NOTIFY_RESUME and
+ * tells the caller to loop back into exit_to_user_mode_loop(). The rseq
+ * slow path there will handle the failure.
+ */
+static __always_inline bool rseq_exit_user_update(struct pt_regs *regs, struct task_struct *t)
+{
+	/*
+	 * Page faults need to be disabled as this is called with
+	 * interrupts disabled
+	 */
+	guard(pagefault)();
+	if (likely(!t->rseq.event.ids_changed)) {
+		struct rseq __user *rseq = t->rseq.usrptr;
+		/*
+		 * If IDs have not changed rseq_event::user_irq must be true
+		 * See rseq_sched_switch_event().
+		 */
+		u64 csaddr;
+
+		if (unlikely(get_user_inline(csaddr, &rseq->rseq_cs)))
+			return false;
+
+		if (static_branch_unlikely(&rseq_debug_enabled) || unlikely(csaddr)) {
+			if (unlikely(!rseq_update_user_cs(t, regs, csaddr)))
+				return false;
+		}
+		return true;
+	}
+
+	struct rseq_ids ids = {
+		.cpu_id = task_cpu(t),
+		.mm_cid = task_mm_cid(t),
+	};
+	u32 node_id = cpu_to_node(ids.cpu_id);
+
+	return rseq_update_usr(t, regs, &ids, node_id);
+}
+
+static __always_inline bool __rseq_exit_to_user_mode_restart(struct pt_regs *regs)
+{
+	struct task_struct *t = current;
+
+	/*
+	 * If the task did not go through schedule or got the flag enforced
+	 * by the rseq syscall or execve, then nothing to do here.
+	 *
+	 * CPU ID and MM CID can only change when going through a context
+	 * switch.
+	 *
+	 * rseq_sched_switch_event() sets the rseq_event::sched_switch bit
+	 * only when rseq_event::has_rseq is true. That conditional is
+	 * required to avoid setting the TIF bit if RSEQ is not registered
+	 * for a task. rseq_event::sched_switch is cleared when RSEQ is
+	 * unregistered by a task so it's sufficient to check for the
+	 * sched_switch bit alone.
+	 *
+	 * A sane compiler requires three instructions for the nothing to do
+	 * case including clearing the events, but your mileage might vary.
+	 */
+	if (unlikely((t->rseq.event.sched_switch))) {
+		rseq_stat_inc(rseq_stats.fastpath);
+
+		if (unlikely(!rseq_exit_user_update(regs, t)))
+			return true;
+	}
+	/* Clear state so next entry starts from a clean slate */
+	t->rseq.event.events = 0;
+	return false;
+}
+
+static __always_inline bool rseq_exit_to_user_mode_restart(struct pt_regs *regs)
+{
+	if (unlikely(__rseq_exit_to_user_mode_restart(regs))) {
+		current->rseq.event.slowpath = true;
+		set_tsk_thread_flag(current, TIF_NOTIFY_RESUME);
+		return true;
+	}
+	return false;
+}
+
+#else /* CONFIG_GENERIC_ENTRY */
+static inline bool rseq_exit_to_user_mode_restart(struct pt_regs *regs) { return false; }
+#endif /* !CONFIG_GENERIC_ENTRY */
+
 static __always_inline void rseq_exit_to_user_mode(void)
 {
 	struct rseq_event *ev = &current->rseq.event;
@@ -421,9 +545,12 @@ static inline void rseq_debug_syscall_return(struct pt_regs *regs)
 	if (static_branch_unlikely(&rseq_debug_enabled))
 		__rseq_debug_syscall_return(regs);
 }
-
 #else /* CONFIG_RSEQ */
 static inline void rseq_note_user_irq_entry(void) { }
+static inline bool rseq_exit_to_user_mode_restart(struct pt_regs *regs)
+{
+	return false;
+}
 static inline void rseq_exit_to_user_mode(void) { }
 static inline void rseq_debug_syscall_return(struct pt_regs *regs) { }
 #endif /* !CONFIG_RSEQ */
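For context, the descriptor that rseq_update_user_cs() inspects is the rseq UAPI struct rseq_cs, and the checks described in the comment block above boil down to the following simplified sketch. This is illustrative only: the real code reads the fields with faults disabled under masked user access, and check_and_abort()/registered_sig are made-up names.

    /* From the rseq UAPI (include/uapi/linux/rseq.h) */
    struct rseq_cs {
            __u32 version;
            __u32 flags;
            __u64 start_ip;             /* First instruction of the critical section */
            __u64 post_commit_offset;   /* Length; end = start_ip + post_commit_offset */
            __u64 abort_ip;             /* Where the IP is moved on abort */
    };

    /* Simplified decision logic - illustrative, not the literal kernel code */
    static bool check_and_abort(struct pt_regs *regs, struct rseq_cs *cs, u32 registered_sig)
    {
            unsigned long ip = instruction_pointer(regs);

            /* One unsigned compare covers both ip < start_ip and ip >= end */
            if (ip - cs->start_ip >= cs->post_commit_offset)
                    return true;    /* Not inside the critical section */

            /* Reject abort targets in kernel space or without room for
               the signature word in front of them */
            if (cs->abort_ip >= TASK_SIZE || cs->abort_ip < sizeof(u32))
                    return false;   /* Fatal: rogue descriptor */

            /* The 32-bit word right before abort_ip must match the
               signature registered with sys_rseq() */
            if (*(u32 *)(unsigned long)(cs->abort_ip - sizeof(u32)) != registered_sig)
                    return false;   /* Fatal: signature mismatch */

            instruction_pointer_set(regs, cs->abort_ip);
            return true;
    }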

include/linux/rseq_types.h

Lines changed: 3 additions & 0 deletions
@@ -18,6 +18,8 @@ struct rseq;
  * @has_rseq:		True if the task has a rseq pointer installed
  * @error:		Compound error code for the slow path to analyze
  * @fatal:		User space data corrupted or invalid
+ * @slowpath:		Indicator that slow path processing via TIF_NOTIFY_RESUME
+ *			is required
  *
  * @sched_switch and @ids_changed must be adjacent and the combo must be
  * 16bit aligned to allow a single store, when both are set at the same
@@ -42,6 +44,7 @@ struct rseq_event {
 		u16		error;
 		struct {
 			u8	fatal;
+			u8	slowpath;
 		};
 	};
 };
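The adjacency and alignment requirement spelled out in the comment above exists so that both flags can be set with a single 16-bit store on the scheduler fast path. A hypothetical illustration of that trick follows (the actual setter lives in the scheduler hooks, not in this diff; the struct and function names here are invented, and little-endian byte order is assumed for the constants):

    struct rseq_switch_flags {
            union {
                    u16     switch_and_ids;         /* must be 16-bit aligned */
                    struct {
                            u8      sched_switch;
                            u8      ids_changed;
                    };
            };
    };

    static inline void set_sched_switch(struct rseq_switch_flags *f, bool ids_changed)
    {
            /* One store sets sched_switch = 1 and ids_changed = 0 or 1
             * (0x0101 vs 0x0001 on little endian) */
            f->switch_and_ids = ids_changed ? 0x0101 : 0x0001;
    }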

kernel/rseq.c

Lines changed: 2 additions & 0 deletions
@@ -133,6 +133,7 @@ static int rseq_stats_show(struct seq_file *m, void *p)
 		stats.exit += data_race(per_cpu(rseq_stats.exit, cpu));
 		stats.signal += data_race(per_cpu(rseq_stats.signal, cpu));
 		stats.slowpath += data_race(per_cpu(rseq_stats.slowpath, cpu));
+		stats.fastpath += data_race(per_cpu(rseq_stats.fastpath, cpu));
 		stats.ids += data_race(per_cpu(rseq_stats.ids, cpu));
 		stats.cs += data_race(per_cpu(rseq_stats.cs, cpu));
 		stats.clear += data_race(per_cpu(rseq_stats.clear, cpu));
@@ -142,6 +143,7 @@ static int rseq_stats_show(struct seq_file *m, void *p)
 	seq_printf(m, "exit:   %16lu\n", stats.exit);
 	seq_printf(m, "signal: %16lu\n", stats.signal);
 	seq_printf(m, "slowp:  %16lu\n", stats.slowpath);
+	seq_printf(m, "fastp:  %16lu\n", stats.fastpath);
 	seq_printf(m, "ids:    %16lu\n", stats.ids);
 	seq_printf(m, "cs:     %16lu\n", stats.cs);
 	seq_printf(m, "clear:  %16lu\n", stats.clear);
