
Commit f80fe66

Merge branches 'doc.2021.11.30c', 'exp.2021.12.07a', 'fastnohz.2021.11.30c', 'fixes.2021.11.30c', 'nocb.2021.12.09a', 'nolibc.2021.11.30c', 'tasks.2021.12.09a', 'torture.2021.12.07a' and 'torturescript.2021.11.30c' into HEAD
doc.2021.11.30c: Documentation updates.
exp.2021.12.07a: Expedited-grace-period fixes.
fastnohz.2021.11.30c: Remove CONFIG_RCU_FAST_NO_HZ.
fixes.2021.11.30c: Miscellaneous fixes.
nocb.2021.12.09a: No-CB CPU updates.
nolibc.2021.11.30c: Tiny in-kernel library updates.
tasks.2021.12.09a: RCU-tasks updates, including update-side scalability.
torture.2021.12.07a: Torture-test in-kernel module updates.
torturescript.2021.11.30c: Torture-test scripting updates.
9 parents: 5861dad + 81f6d49 + bc849e9 + 1f8da40 + 10d4703 + b0fe9de + fd796e4 + 53b541f + 90b21bc

51 files changed: 1089 additions & 719 deletions


Documentation/RCU/stallwarn.rst

Lines changed: 0 additions & 11 deletions
@@ -254,17 +254,6 @@ period (in this case 2603), the grace-period sequence number (7075), and
 an estimate of the total number of RCU callbacks queued across all CPUs
 (625 in this case).
 
-In kernels with CONFIG_RCU_FAST_NO_HZ, more information is printed
-for each CPU::
-
-	0: (64628 ticks this GP) idle=dd5/3fffffffffffffff/0 softirq=82/543 last_accelerate: a345/d342 dyntick_enabled: 1
-
-The "last_accelerate:" prints the low-order 16 bits (in hex) of the
-jiffies counter when this CPU last invoked rcu_try_advance_all_cbs()
-from rcu_needs_cpu() or last invoked rcu_accelerate_cbs() from
-rcu_prepare_for_idle(). "dyntick_enabled: 1" indicates that dyntick-idle
-processing is enabled.
-
 If the grace period ends just as the stall warning starts printing,
 there will be a spurious stall-warning message, which will include
 the following::

Documentation/admin-guide/kernel-parameters.txt

Lines changed: 52 additions & 18 deletions
@@ -4343,19 +4343,30 @@
 			Disable the Correctable Errors Collector,
 			see CONFIG_RAS_CEC help text.
 
-	rcu_nocbs=	[KNL]
-			The argument is a cpu list, as described above.
-
-			In kernels built with CONFIG_RCU_NOCB_CPU=y, set
-			the specified list of CPUs to be no-callback CPUs.
-			Invocation of these CPUs' RCU callbacks will be
-			offloaded to "rcuox/N" kthreads created for that
-			purpose, where "x" is "p" for RCU-preempt, and
-			"s" for RCU-sched, and "N" is the CPU number.
-			This reduces OS jitter on the offloaded CPUs,
-			which can be useful for HPC and real-time
-			workloads. It can also improve energy efficiency
-			for asymmetric multiprocessors.
+	rcu_nocbs[=cpu-list]
+			[KNL] The optional argument is a cpu list,
+			as described above.
+
+			In kernels built with CONFIG_RCU_NOCB_CPU=y,
+			enable the no-callback CPU mode, which prevents
+			such CPUs' callbacks from being invoked in
+			softirq context. Invocation of such CPUs' RCU
+			callbacks will instead be offloaded to "rcuox/N"
+			kthreads created for that purpose, where "x" is
+			"p" for RCU-preempt, "s" for RCU-sched, and "g"
+			for the kthreads that mediate grace periods; and
+			"N" is the CPU number. This reduces OS jitter on
+			the offloaded CPUs, which can be useful for HPC
+			and real-time workloads. It can also improve
+			energy efficiency for asymmetric multiprocessors.
+
+			If a cpulist is passed as an argument, the specified
+			list of CPUs is set to no-callback mode from boot.
+
+			Otherwise, if the '=' sign and the cpulist
+			arguments are omitted, no CPU will be set to
+			no-callback mode from boot but the mode may be
+			toggled at runtime via cpusets.
 
 	rcu_nocb_poll	[KNL]
 			Rather than requiring that offloaded CPUs
@@ -4489,10 +4500,6 @@
 			on rcutree.qhimark at boot time and to zero to
 			disable more aggressive help enlistment.
 
-	rcutree.rcu_idle_gp_delay= [KNL]
-			Set wakeup interval for idle CPUs that have
-			RCU callbacks (RCU_FAST_NO_HZ=y).
-
 	rcutree.rcu_kick_kthreads= [KNL]
 			Cause the grace-period kthread to get an extra
 			wake_up() if it sleeps three times longer than
@@ -4603,8 +4610,12 @@
 			in seconds.
 
 	rcutorture.fwd_progress= [KNL]
-			Enable RCU grace-period forward-progress testing
+			Specifies the number of kthreads to be used
+			for RCU grace-period forward-progress testing
 			for the types of RCU supporting this notion.
+			Defaults to 1 kthread, values less than zero or
+			greater than the number of CPUs cause the number
+			of CPUs to be used.
 
 	rcutorture.fwd_progress_div= [KNL]
 			Specify the fraction of a CPU-stall-warning
@@ -4805,6 +4816,29 @@
 			period to instead use normal non-expedited
 			grace-period processing.
 
+	rcupdate.rcu_task_collapse_lim= [KNL]
+			Set the maximum number of callbacks present
+			at the beginning of a grace period that allows
+			the RCU Tasks flavors to collapse back to using
+			a single callback queue. This switching only
+			occurs when rcupdate.rcu_task_enqueue_lim is
+			set to the default value of -1.
+
+	rcupdate.rcu_task_contend_lim= [KNL]
+			Set the minimum number of callback-queuing-time
+			lock-contention events per jiffy required to
+			cause the RCU Tasks flavors to switch to per-CPU
+			callback queuing. This switching only occurs
+			when rcupdate.rcu_task_enqueue_lim is set to
+			the default value of -1.
+
+	rcupdate.rcu_task_enqueue_lim= [KNL]
+			Set the number of callback queues to use for the
+			RCU Tasks family of RCU flavors. The default
+			of -1 allows this to be automatically (and
+			dynamically) adjusted. This parameter is intended
+			for use in testing.
+
 	rcupdate.rcu_task_ipi_delay= [KNL]
 			Set time in jiffies during which RCU tasks will
 			avoid sending IPIs, starting with the beginning

Documentation/timers/no_hz.rst

Lines changed: 3 additions & 7 deletions
@@ -184,16 +184,12 @@ There are situations in which idle CPUs cannot be permitted to
 enter either dyntick-idle mode or adaptive-tick mode, the most
 common being when that CPU has RCU callbacks pending.
 
-The CONFIG_RCU_FAST_NO_HZ=y Kconfig option may be used to cause such CPUs
-to enter dyntick-idle mode or adaptive-tick mode anyway. In this case,
-a timer will awaken these CPUs every four jiffies in order to ensure
-that the RCU callbacks are processed in a timely fashion.
-
-Another approach is to offload RCU callback processing to "rcuo" kthreads
+Avoid this by offloading RCU callback processing to "rcuo" kthreads
 using the CONFIG_RCU_NOCB_CPU=y Kconfig option. The specific CPUs to
 offload may be selected using The "rcu_nocbs=" kernel boot parameter,
 which takes a comma-separated list of CPUs and CPU ranges, for example,
-"1,3-5" selects CPUs 1, 3, 4, and 5.
+"1,3-5" selects CPUs 1, 3, 4, and 5. Note that CPUs specified by
+the "nohz_full" kernel boot parameter are also offloaded.
 
 The offloaded CPUs will never queue RCU callbacks, and therefore RCU
 never prevents offloaded CPUs from entering either dyntick-idle mode

include/linux/rcu_segcblist.h

Lines changed: 37 additions & 14 deletions
@@ -69,15 +69,15 @@ struct rcu_cblist {
  *
  *
  * ----------------------------------------------------------------------------
- * | SEGCBLIST_SOFTIRQ_ONLY                                                   |
+ * | SEGCBLIST_RCU_CORE                                                       |
  * |                                                                          |
  * | Callbacks processed by rcu_core() from softirqs or local                 |
  * | rcuc kthread, without holding nocb_lock.                                 |
  * ----------------------------------------------------------------------------
  *                                    |
  *                                    v
  * ----------------------------------------------------------------------------
- * | SEGCBLIST_OFFLOADED                                                      |
+ * | SEGCBLIST_RCU_CORE | SEGCBLIST_LOCKING | SEGCBLIST_OFFLOADED             |
  * |                                                                          |
  * | Callbacks processed by rcu_core() from softirqs or local                 |
  * | rcuc kthread, while holding nocb_lock. Waking up CB and GP kthreads,     |
@@ -89,7 +89,9 @@ struct rcu_cblist {
  * |                                     |
  * v                                     v
  * ---------------------------------------  ---------------------------------|
- * | SEGCBLIST_OFFLOADED |               |  | SEGCBLIST_OFFLOADED |          |
+ * | SEGCBLIST_RCU_CORE |                |  | SEGCBLIST_RCU_CORE |           |
+ * | SEGCBLIST_LOCKING |                 |  | SEGCBLIST_LOCKING |            |
+ * | SEGCBLIST_OFFLOADED |               |  | SEGCBLIST_OFFLOADED |          |
  * | SEGCBLIST_KTHREAD_CB                |  | SEGCBLIST_KTHREAD_GP           |
  * |                                     |  |                                |
  * |                                     |  |                                |
@@ -104,9 +106,10 @@ struct rcu_cblist {
  *                                    |
  *                                    v
  * |--------------------------------------------------------------------------|
- * | SEGCBLIST_OFFLOADED |                                                    |
- * | SEGCBLIST_KTHREAD_CB |                                                   |
- * | SEGCBLIST_KTHREAD_GP                                                     |
+ * | SEGCBLIST_LOCKING |                                                      |
+ * | SEGCBLIST_OFFLOADED |                                                    |
+ * | SEGCBLIST_KTHREAD_GP |                                                   |
+ * | SEGCBLIST_KTHREAD_CB                                                     |
  * |                                                                          |
  * | Kthreads handle callbacks holding nocb_lock, local rcu_core() stops      |
  * | handling callbacks. Enable bypass queueing.                              |
@@ -120,7 +123,8 @@ struct rcu_cblist {
  *
  *
  * |--------------------------------------------------------------------------|
- * | SEGCBLIST_OFFLOADED |                                                    |
+ * | SEGCBLIST_LOCKING |                                                      |
+ * | SEGCBLIST_OFFLOADED |                                                    |
  * | SEGCBLIST_KTHREAD_CB |                                                   |
  * | SEGCBLIST_KTHREAD_GP                                                     |
  * |                                                                          |
@@ -130,6 +134,22 @@ struct rcu_cblist {
  *                                    |
  *                                    v
  * |--------------------------------------------------------------------------|
+ * | SEGCBLIST_RCU_CORE |                                                     |
+ * | SEGCBLIST_LOCKING |                                                      |
+ * | SEGCBLIST_OFFLOADED |                                                    |
+ * | SEGCBLIST_KTHREAD_CB |                                                   |
+ * | SEGCBLIST_KTHREAD_GP                                                     |
+ * |                                                                          |
+ * | CB/GP kthreads handle callbacks holding nocb_lock, local rcu_core()      |
+ * | handles callbacks concurrently. Bypass enqueue is enabled.               |
+ * | Invoke RCU core so we make sure not to preempt it in the middle with     |
+ * | leaving some urgent work unattended within a jiffy.                      |
+ * ----------------------------------------------------------------------------
+ *                                    |
+ *                                    v
+ * |--------------------------------------------------------------------------|
+ * | SEGCBLIST_RCU_CORE |                                                     |
+ * | SEGCBLIST_LOCKING |                                                      |
  * | SEGCBLIST_KTHREAD_CB |                                                   |
  * | SEGCBLIST_KTHREAD_GP                                                     |
  * |                                                                          |
@@ -143,7 +163,9 @@ struct rcu_cblist {
  * |                                     |
  * v                                     v
  * ---------------------------------------------------------------------------|
- * |                                     |                                    |
+ * |                                     |                                    |
+ * | SEGCBLIST_RCU_CORE |                |  | SEGCBLIST_RCU_CORE |            |
+ * | SEGCBLIST_LOCKING |                 |  | SEGCBLIST_LOCKING |             |
  * | SEGCBLIST_KTHREAD_CB                |  SEGCBLIST_KTHREAD_GP              |
  * |                                     |                                    |
  * | GP kthread woke up and              |  CB kthread woke up and            |
@@ -159,7 +181,7 @@ struct rcu_cblist {
  *                                    |
  *                                    v
  * ----------------------------------------------------------------------------
- * | 0                                                                        |
+ * | SEGCBLIST_RCU_CORE | SEGCBLIST_LOCKING                                   |
  * |                                                                          |
  * | Callbacks processed by rcu_core() from softirqs or local                 |
  * | rcuc kthread, while holding nocb_lock. Forbid nocb_timer to be armed.    |
@@ -168,17 +190,18 @@ struct rcu_cblist {
  *                                    |
  *                                    v
  * ----------------------------------------------------------------------------
- * | SEGCBLIST_SOFTIRQ_ONLY                                                   |
+ * | SEGCBLIST_RCU_CORE                                                       |
  * |                                                                          |
  * | Callbacks processed by rcu_core() from softirqs or local                 |
  * | rcuc kthread, without holding nocb_lock.                                 |
  * ----------------------------------------------------------------------------
  */
 #define SEGCBLIST_ENABLED	BIT(0)
-#define SEGCBLIST_SOFTIRQ_ONLY	BIT(1)
-#define SEGCBLIST_KTHREAD_CB	BIT(2)
-#define SEGCBLIST_KTHREAD_GP	BIT(3)
-#define SEGCBLIST_OFFLOADED	BIT(4)
+#define SEGCBLIST_RCU_CORE	BIT(1)
+#define SEGCBLIST_LOCKING	BIT(2)
+#define SEGCBLIST_KTHREAD_CB	BIT(3)
+#define SEGCBLIST_KTHREAD_GP	BIT(4)
+#define SEGCBLIST_OFFLOADED	BIT(5)
 
 struct rcu_segcblist {
 	struct rcu_head *head;

include/linux/rcupdate.h

Lines changed: 28 additions & 22 deletions
@@ -364,46 +364,48 @@ static inline void rcu_preempt_sleep_check(void) { }
 #define rcu_check_sparse(p, space)
 #endif /* #else #ifdef __CHECKER__ */
 
+#define __unrcu_pointer(p, local)					\
+({									\
+	typeof(*p) *local = (typeof(*p) *__force)(p);			\
+	rcu_check_sparse(p, __rcu);					\
+	((typeof(*p) __force __kernel *)(local));			\
+})
 /**
  * unrcu_pointer - mark a pointer as not being RCU protected
  * @p: pointer needing to lose its __rcu property
  *
  * Converts @p from an __rcu pointer to a __kernel pointer.
  * This allows an __rcu pointer to be used with xchg() and friends.
  */
-#define unrcu_pointer(p)						\
-({									\
-	typeof(*p) *_________p1 = (typeof(*p) *__force)(p);		\
-	rcu_check_sparse(p, __rcu);					\
-	((typeof(*p) __force __kernel *)(_________p1));			\
-})
+#define unrcu_pointer(p) __unrcu_pointer(p, __UNIQUE_ID(rcu))
 
-#define __rcu_access_pointer(p, space) \
+#define __rcu_access_pointer(p, local, space) \
 ({ \
-	typeof(*p) *_________p1 = (typeof(*p) *__force)READ_ONCE(p); \
+	typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
 	rcu_check_sparse(p, space); \
-	((typeof(*p) __force __kernel *)(_________p1)); \
+	((typeof(*p) __force __kernel *)(local)); \
 })
-#define __rcu_dereference_check(p, c, space) \
+#define __rcu_dereference_check(p, local, c, space) \
 ({ \
 	/* Dependency order vs. p above. */ \
-	typeof(*p) *________p1 = (typeof(*p) *__force)READ_ONCE(p); \
+	typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
 	RCU_LOCKDEP_WARN(!(c), "suspicious rcu_dereference_check() usage"); \
 	rcu_check_sparse(p, space); \
-	((typeof(*p) __force __kernel *)(________p1)); \
+	((typeof(*p) __force __kernel *)(local)); \
 })
-#define __rcu_dereference_protected(p, c, space) \
+#define __rcu_dereference_protected(p, local, c, space) \
 ({ \
 	RCU_LOCKDEP_WARN(!(c), "suspicious rcu_dereference_protected() usage"); \
 	rcu_check_sparse(p, space); \
 	((typeof(*p) __force __kernel *)(p)); \
 })
-#define rcu_dereference_raw(p) \
+#define __rcu_dereference_raw(p, local) \
 ({ \
 	/* Dependency order vs. p above. */ \
-	typeof(p) ________p1 = READ_ONCE(p); \
-	((typeof(*p) __force __kernel *)(________p1)); \
+	typeof(p) local = READ_ONCE(p); \
+	((typeof(*p) __force __kernel *)(local)); \
 })
+#define rcu_dereference_raw(p) __rcu_dereference_raw(p, __UNIQUE_ID(rcu))
 
 /**
  * RCU_INITIALIZER() - statically initialize an RCU-protected global variable
@@ -490,7 +492,7 @@ do { \
  * when tearing down multi-linked structures after a grace period
  * has elapsed.
  */
-#define rcu_access_pointer(p) __rcu_access_pointer((p), __rcu)
+#define rcu_access_pointer(p) __rcu_access_pointer((p), __UNIQUE_ID(rcu), __rcu)
 
 /**
  * rcu_dereference_check() - rcu_dereference with debug checking
@@ -526,7 +528,8 @@ do { \
  * annotated as __rcu.
  */
 #define rcu_dereference_check(p, c) \
-	__rcu_dereference_check((p), (c) || rcu_read_lock_held(), __rcu)
+	__rcu_dereference_check((p), __UNIQUE_ID(rcu), \
+				(c) || rcu_read_lock_held(), __rcu)
 
 /**
  * rcu_dereference_bh_check() - rcu_dereference_bh with debug checking
@@ -541,7 +544,8 @@ do { \
  * rcu_read_lock() but also rcu_read_lock_bh() into account.
  */
 #define rcu_dereference_bh_check(p, c) \
-	__rcu_dereference_check((p), (c) || rcu_read_lock_bh_held(), __rcu)
+	__rcu_dereference_check((p), __UNIQUE_ID(rcu), \
+				(c) || rcu_read_lock_bh_held(), __rcu)
 
 /**
  * rcu_dereference_sched_check() - rcu_dereference_sched with debug checking
@@ -556,7 +560,8 @@ do { \
  * only rcu_read_lock() but also rcu_read_lock_sched() into account.
  */
 #define rcu_dereference_sched_check(p, c) \
-	__rcu_dereference_check((p), (c) || rcu_read_lock_sched_held(), \
+	__rcu_dereference_check((p), __UNIQUE_ID(rcu), \
+				(c) || rcu_read_lock_sched_held(), \
 				__rcu)
 
 /*
@@ -566,7 +571,8 @@ do { \
  * The no-tracing version of rcu_dereference_raw() must not call
  * rcu_read_lock_held().
  */
-#define rcu_dereference_raw_check(p) __rcu_dereference_check((p), 1, __rcu)
+#define rcu_dereference_raw_check(p) \
+	__rcu_dereference_check((p), __UNIQUE_ID(rcu), 1, __rcu)
 
 /**
  * rcu_dereference_protected() - fetch RCU pointer when updates prevented
@@ -585,7 +591,7 @@ do { \
  * but very ugly failures.
  */
 #define rcu_dereference_protected(p, c) \
-	__rcu_dereference_protected((p), (c), __rcu)
+	__rcu_dereference_protected((p), __UNIQUE_ID(rcu), (c), __rcu)
 
 
 /**

include/linux/rcutiny.h

Lines changed: 1 addition & 1 deletion
@@ -85,7 +85,7 @@ static inline void rcu_irq_enter_irqson(void) { }
 static inline void rcu_irq_exit(void) { }
 static inline void rcu_irq_exit_check_preempt(void) { }
 #define rcu_is_idle_cpu(cpu) \
-	(is_idle_task(current) && !in_nmi() && !in_irq() && !in_serving_softirq())
+	(is_idle_task(current) && !in_nmi() && !in_hardirq() && !in_serving_softirq())
 static inline void exit_rcu(void) { }
 static inline bool rcu_preempt_need_deferred_qs(struct task_struct *t)
 {

include/linux/srcu.h

Lines changed: 2 additions & 1 deletion
@@ -117,7 +117,8 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
  * lockdep_is_held() calls.
  */
 #define srcu_dereference_check(p, ssp, c) \
-	__rcu_dereference_check((p), (c) || srcu_read_lock_held(ssp), __rcu)
+	__rcu_dereference_check((p), __UNIQUE_ID(rcu), \
+				(c) || srcu_read_lock_held(ssp), __rcu)
 
 /**
  * srcu_dereference - fetch SRCU-protected pointer for later dereferencing
