
Commit 48eb3f4

Gregory Haskins authored and ingomolnar committed
locking/rtmutex: Implement equal priority lock stealing
The current logic only allows lock stealing to occur if the current task is of higher priority than the pending owner.

Significant throughput improvements can be gained by allowing the lock stealing to include tasks of equal priority when the contended lock is a spin_lock or a rw_lock and the tasks are not in a RT scheduling task. The assumption was that the system will make faster progress by allowing the task already on the CPU to take the lock rather than waiting for the system to wake up a different task.

This does add a degree of unfairness, but in reality no negative side effects have been observed in the many years that this has been used in the RT kernel.

[ tglx: Refactored and rewritten several times by Steve Rostedt, Sebastian Siewior and myself ]

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20210815211305.857240222@linutronix.de
1 parent 015680a commit 48eb3f4

1 file changed: kernel/locking/rtmutex.c

35 additions & 17 deletions
@@ -338,6 +338,26 @@ static __always_inline int rt_mutex_waiter_equal(struct rt_mutex_waiter *left,
 	return 1;
 }
 
+static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter,
+				  struct rt_mutex_waiter *top_waiter)
+{
+	if (rt_mutex_waiter_less(waiter, top_waiter))
+		return true;
+
+#ifdef RT_MUTEX_BUILD_SPINLOCKS
+	/*
+	 * Note that RT tasks are excluded from same priority (lateral)
+	 * steals to prevent the introduction of an unbounded latency.
+	 */
+	if (rt_prio(waiter->prio) || dl_prio(waiter->prio))
+		return false;
+
+	return rt_mutex_waiter_equal(waiter, top_waiter);
+#else
+	return false;
+#endif
+}
+
 #define __node_2_waiter(node) \
 	rb_entry((node), struct rt_mutex_waiter, tree_entry)
 
@@ -932,19 +952,21 @@ try_to_take_rt_mutex(struct rt_mutex_base *lock, struct task_struct *task,
 	 * trylock attempt.
 	 */
 	if (waiter) {
-		/*
-		 * If waiter is not the highest priority waiter of
-		 * @lock, give up.
-		 */
-		if (waiter != rt_mutex_top_waiter(lock))
-			return 0;
+		struct rt_mutex_waiter *top_waiter = rt_mutex_top_waiter(lock);
 
 		/*
-		 * We can acquire the lock. Remove the waiter from the
-		 * lock waiters tree.
+		 * If waiter is the highest priority waiter of @lock,
+		 * or allowed to steal it, take it over.
 		 */
-		rt_mutex_dequeue(lock, waiter);
-
+		if (waiter == top_waiter || rt_mutex_steal(waiter, top_waiter)) {
+			/*
+			 * We can acquire the lock. Remove the waiter from the
+			 * lock waiters tree.
+			 */
+			rt_mutex_dequeue(lock, waiter);
+		} else {
+			return 0;
+		}
 	} else {
 		/*
 		 * If the lock has waiters already we check whether @task is
@@ -955,13 +977,9 @@ try_to_take_rt_mutex(struct rt_mutex_base *lock, struct task_struct *task,
 		 * not need to be dequeued.
 		 */
 		if (rt_mutex_has_waiters(lock)) {
-			/*
-			 * If @task->prio is greater than or equal to
-			 * the top waiter priority (kernel view),
-			 * @task lost.
-			 */
-			if (!rt_mutex_waiter_less(task_to_waiter(task),
-						  rt_mutex_top_waiter(lock)))
+			/* Check whether the trylock can steal it. */
+			if (!rt_mutex_steal(task_to_waiter(task),
+					    rt_mutex_top_waiter(lock)))
 				return 0;
 
 			/*
