Commit a1dfb63

Marcelo Tosatti authored and Ingo Molnar committed
tick/nohz: Kick only _queued_ task whose tick dependency is updated
When the tick dependency of a task is updated, we want it to acknowledge the new state and restart the tick if needed. If the task is not running, we don't need to kick it because it will observe the new dependency upon scheduling in. But if the task is running, we may need to send an IPI to it so that it gets notified.

Unfortunately we don't have the means to check if a task is running in a race-free way. Checking p->on_cpu in a synchronized way against p->tick_dep_mask would imply adding a full barrier between prepare_task_switch() and tick_nohz_task_switch(), which we want to avoid in this fast path.

Therefore we blindly fire an IPI to the task's CPU.

Meanwhile we can check if the task is queued on the CPU rq, because p->on_rq is always set to TASK_ON_RQ_QUEUED _before_ schedule() and its full barrier that precedes tick_nohz_task_switch(). And if the task is queued on a nohz_full CPU, it also has a fair chance of being running, as the isolation constraints prescribe running single tasks on full dynticks CPUs.

So use this as a trick to check whether we can spare an IPI toward a non-running task.

NOTE: For the ordering to be correct, it is assumed that we never deactivate a task while it is running, the only exception being the task deactivating itself while scheduling out.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210512232924.150322-9-frederic@kernel.org
1 parent 1e4ca26 commit a1dfb63
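For context, the smp_mb() shown on the right-hand side of the ordering diagram in the patch is the full barrier implied by atomic_fetch_or() on the dependency-update side. A minimal sketch of that side, assuming the tick_nohz_kick_task() helper introduced earlier in this series, looks roughly like this (sketch only, not part of this diff):

	/* Sketch; the real code lives in kernel/time/tick-sched.c. */
	void tick_nohz_dep_set_task(struct task_struct *tsk, enum tick_dep_bits bit)
	{
		/*
		 * atomic_fetch_or() implies a full barrier, so the STORE to
		 * p->tick_dep_mask is ordered before the LOAD of p->on_rq
		 * performed by tick_nohz_kick_task() below.
		 */
		if (!atomic_fetch_or(BIT(bit), &tsk->tick_dep_mask))
			tick_nohz_kick_task(tsk);
	}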

3 files changed

Lines changed: 24 additions & 2 deletions

include/linux/sched.h

Lines changed: 2 additions & 0 deletions
@@ -2011,6 +2011,8 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
 
 #endif /* CONFIG_SMP */
 
+extern bool sched_task_on_rq(struct task_struct *p);
+
 /*
  * In order to reduce various lock holder preemption latencies provide an
  * interface to see if a vCPU is currently running or not.

kernel/sched/core.c

Lines changed: 5 additions & 0 deletions
@@ -1590,6 +1590,11 @@ static inline void uclamp_post_fork(struct task_struct *p) { }
 static inline void init_uclamp(void) { }
 #endif /* CONFIG_UCLAMP_TASK */
 
+bool sched_task_on_rq(struct task_struct *p)
+{
+	return task_on_rq_queued(p);
+}
+
 static inline void enqueue_task(struct rq *rq, struct task_struct *p, int flags)
 {
 	if (!(flags & ENQUEUE_NOCLOCK))
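Note that task_on_rq_queued() is defined in the scheduler-private header kernel/sched/sched.h, so this tiny wrapper is what lets code outside kernel/sched/ (here, the tick code) perform the test. A hypothetical out-of-scheduler caller would use it along these lines (illustration only, not part of this commit):

	#include <linux/sched.h>

	/* Hypothetical helper: true if @p is currently queued on a runqueue. */
	static bool task_is_queued(struct task_struct *p)
	{
		return sched_task_on_rq(p);	/* wraps task_on_rq_queued(p) */
	}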

kernel/time/tick-sched.c

Lines changed: 17 additions & 2 deletions
@@ -324,14 +324,28 @@ void tick_nohz_full_kick_cpu(int cpu)
 
 static void tick_nohz_kick_task(struct task_struct *tsk)
 {
-	int cpu = task_cpu(tsk);
+	int cpu;
+
+	/*
+	 * If the task is not running, run_posix_cpu_timers()
+	 * has nothing to elapse, IPI can then be spared.
+	 *
+	 * activate_task()                       STORE p->tick_dep_mask
+	 *   STORE p->on_rq
+	 * __schedule() (switch to task 'p')     smp_mb() (atomic_fetch_or())
+	 *   LOCK rq->lock                       LOAD p->on_rq
+	 *   smp_mb__after_spin_lock()
+	 *   tick_nohz_task_switch()
+	 *     LOAD p->tick_dep_mask
+	 */
+	if (!sched_task_on_rq(tsk))
+		return;
 
 	/*
 	 * If the task concurrently migrates to another CPU,
 	 * we guarantee it sees the new tick dependency upon
 	 * schedule.
 	 *
-	 *
 	 * set_task_cpu(p, cpu);
 	 *   STORE p->cpu = @cpu
 	 * __schedule() (switch to task 'p')
@@ -340,6 +354,7 @@ static void tick_nohz_kick_task(struct task_struct *tsk)
 	 * tick_nohz_task_switch()               smp_mb() (atomic_fetch_or())
 	 *     LOAD p->tick_dep_mask             LOAD p->cpu
 	 */
+	cpu = task_cpu(tsk);
 
 	preempt_disable();
 	if (cpu_online(cpu))
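For readability, the resulting tick_nohz_kick_task() after both hunks reads roughly as follows; the tail of the function (the actual kick) is unchanged by this patch and is reproduced here as an assumption based on the surrounding context lines:

	static void tick_nohz_kick_task(struct task_struct *tsk)
	{
		int cpu;

		/*
		 * Spare the IPI entirely when the task is not even queued:
		 * it will observe the new tick_dep_mask when scheduling in.
		 */
		if (!sched_task_on_rq(tsk))
			return;

		/*
		 * Read the CPU only now; per the migration ordering above, a
		 * concurrently migrating task still sees the new dependency
		 * upon schedule, so a stale CPU here is harmless.
		 */
		cpu = task_cpu(tsk);

		preempt_disable();
		if (cpu_online(cpu))
			tick_nohz_full_kick_cpu(cpu);
		preempt_enable();
	}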
