
Commit 4463c7a

Thomas Gleixner authored and Peter Zijlstra committed
sched/mmcid: Optimize transitional CIDs when scheduling out
During the investigation of the various transition mode issues, instrumentation revealed that the number of bitmap operations can be reduced significantly when a task with a transitional CID schedules out after the fixup function has completed and disabled the transition mode.

At that point the mode is stable, so it is not required to drop the transitional CID back into the pool. As the fixup is complete, exhaustion of the CID pool is no longer possible, so the CID can be transferred to the scheduling-out task or to the CPU, depending on the current ownership mode.

The racy snapshot of mm_cid::mode, which contains both the ownership state and the transition bit, is valid because the runqueue lock is held and the fixup function of a concurrent mode switch is serialized against it.

Assigning the ownership right there not only spares the bitmap access for dropping the CID, it also avoids it when the task is scheduled back in, as it directly hits the fast path in both modes when the CID is within the optimal range. If the CID is outside that range, the next schedule-in will need to converge anyway, so dropping it right away is sensible. In the good case this also allows the next schedule-in to take the fast path.

With a thread pool benchmark configured to cross the mode switch boundaries frequently, this reduces the number of bitmap operations by about 30% and increases fast-path utilization in the low single-digit percentage range.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://patch.msgid.link/20260201192835.100194627@kernel.org
1 parent 007d842 commit 4463c7a

1 file changed

Lines changed: 21 additions & 2 deletions

File tree

kernel/sched/sched.h

@@ -3902,12 +3902,31 @@ static __always_inline void mm_cid_schedin(struct task_struct *next)
 
 static __always_inline void mm_cid_schedout(struct task_struct *prev)
 {
+	struct mm_struct *mm = prev->mm;
+	unsigned int mode, cid;
+
 	/* During mode transitions CIDs are temporary and need to be dropped */
 	if (likely(!cid_in_transit(prev->mm_cid.cid)))
 		return;
 
-	mm_drop_cid(prev->mm, cid_from_transit_cid(prev->mm_cid.cid));
-	prev->mm_cid.cid = MM_CID_UNSET;
+	mode = READ_ONCE(mm->mm_cid.mode);
+	cid = cid_from_transit_cid(prev->mm_cid.cid);
+
+	/*
+	 * If transition mode is done, transfer ownership when the CID is
+	 * within the convergence range to optimize the next schedule in.
+	 */
+	if (!cid_in_transit(mode) && cid < READ_ONCE(mm->mm_cid.max_cids)) {
+		if (cid_on_cpu(mode))
+			cid = cid_to_cpu_cid(cid);
+
+		/* Update both so that the next schedule in goes into the fast path */
+		mm_cid_update_pcpu_cid(mm, cid);
+		prev->mm_cid.cid = cid;
+	} else {
+		mm_drop_cid(mm, cid);
+		prev->mm_cid.cid = MM_CID_UNSET;
+	}
 }
 
 static inline void mm_cid_switch_to(struct task_struct *prev, struct task_struct *next)
