@@ -194,14 +194,13 @@ over a rather long period of time, but improvements are always welcome!
194194 when publicizing a pointer to a structure that can
195195 be traversed by an RCU read-side critical section.
196196
197- 5. If any of call_rcu(), call_srcu(), call_rcu_tasks(),
198- call_rcu_tasks_rude(), or call_rcu_tasks_trace() is used,
199- the callback function may be invoked from softirq context,
200- and in any case with bottom halves disabled. In particular,
201- this callback function cannot block. If you need the callback
202- to block, run that code in a workqueue handler scheduled from
203- the callback. The queue_rcu_work() function does this for you
204- in the case of call_rcu().
197+ 5. If any of call_rcu(), call_srcu(), call_rcu_tasks(), or
198+ call_rcu_tasks_trace() is used, the callback function may be
199+ invoked from softirq context, and in any case with bottom halves
200+ disabled. In particular, this callback function cannot block.
201+ If you need the callback to block, run that code in a workqueue
202+ handler scheduled from the callback. The queue_rcu_work()
203+ function does this for you in the case of call_rcu().
205204
206205 6. Since synchronize_rcu() can block, it cannot be called
207206 from any sort of irq context. The same rule applies
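The queue_rcu_work() pattern described in item 5 above can be sketched as follows. This is a hedged illustration only; struct my_obj, my_blocking_cleanup(), and my_obj_release() are hypothetical names, not part of the documented API.

```c
/* Sketch: a cleanup step that must block, deferred past a grace period
 * into process context via queue_rcu_work(). Names here are hypothetical. */
struct my_obj {
	struct rcu_work rwork;
	/* ... payload ... */
};

static void my_blocking_cleanup(struct work_struct *work)
{
	struct my_obj *obj =
		container_of(to_rcu_work(work), struct my_obj, rwork);

	/* Runs in workqueue (process) context, so sleeping is permitted. */
	kfree(obj);
}

static void my_obj_release(struct my_obj *obj)
{
	INIT_RCU_WORK(&obj->rwork, my_blocking_cleanup);
	queue_rcu_work(system_wq, &obj->rwork); /* grace period, then work item */
}
```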
@@ -254,10 +253,10 @@ over a rather long period of time, but improvements are always welcome!
254253 corresponding readers must use rcu_read_lock_trace()
255254 and rcu_read_unlock_trace().
256255
257- c. If an updater uses call_rcu_tasks_rude() or
258- synchronize_rcu_tasks_rude(), then the corresponding
259- readers must use anything that disables preemption,
260- for example, preempt_disable() and preempt_enable().
256+ c. If an updater uses synchronize_rcu_tasks_rude(),
257+ then the corresponding readers must use anything that
258+ disables preemption, for example, preempt_disable()
259+ and preempt_enable().
261260
262261 Mixing things up will result in confusion and broken kernels, and
263262 has even resulted in an exploitable security issue. Therefore,
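The flavor-matching rule above can be illustrated with vanilla RCU, the flavor whose readers use rcu_read_lock(). This is a hedged sketch; gp, newp, my_lock, and the surrounding locking discipline are hypothetical.

```c
/* Sketch: reader and updater using the *same* RCU flavor (vanilla RCU).
 * gp is a hypothetical RCU-protected pointer, my_lock a hypothetical
 * update-side lock. */

/* Reader side: rcu_read_lock() readers ... */
rcu_read_lock();
p = rcu_dereference(gp);
/* ... use p; no blocking here ... */
rcu_read_unlock();

/* ... must be paired with a vanilla-RCU updater: */
spin_lock(&my_lock);
old = rcu_replace_pointer(gp, newp, lockdep_is_held(&my_lock));
spin_unlock(&my_lock);
synchronize_rcu();	/* waits only for rcu_read_lock() readers */
kfree(old);
```

Pairing this updater with, say, rcu_read_lock_trace() readers would be exactly the kind of mix-up the text warns against.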
@@ -326,11 +325,9 @@ over a rather long period of time, but improvements are always welcome!
326325 d. Periodically invoke rcu_barrier(), permitting a limited
327326 number of updates per grace period.
328327
329- The same cautions apply to call_srcu(), call_rcu_tasks(),
330- call_rcu_tasks_rude(), and call_rcu_tasks_trace(). This is
331- why there is an srcu_barrier(), rcu_barrier_tasks(),
332- rcu_barrier_tasks_rude(), and rcu_barrier_tasks_rude(),
333- respectively.
328+ The same cautions apply to call_srcu(), call_rcu_tasks(), and
329+ call_rcu_tasks_trace(). This is why there is an srcu_barrier(),
330+ rcu_barrier_tasks(), and rcu_barrier_tasks_trace(), respectively.
334331
335332 Note that although these primitives do take action to avoid
336333 memory exhaustion when any given CPU has too many callbacks,
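Caution (d) above, periodically invoking rcu_barrier() to bound the number of in-flight callbacks, might be sketched like this. MAX_INFLIGHT, struct my_obj, and my_free_cb() are hypothetical names for illustration only.

```c
/* Sketch: throttle callback posting by draining with rcu_barrier()
 * once a hypothetical bound is reached. */
static unsigned long inflight;

static void my_obj_free(struct my_obj *obj)
{
	call_rcu(&obj->rcu, my_free_cb);
	if (++inflight >= MAX_INFLIGHT) {
		rcu_barrier();	/* all previously queued callbacks have run */
		inflight = 0;
	}
}
```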
@@ -383,17 +380,17 @@ over a rather long period of time, but improvements are always welcome!
383380 must use whatever locking or other synchronization is required
384381 to safely access and/or modify that data structure.
385382
386- Do not assume that RCU callbacks will be executed on
387- the same CPU that executed the corresponding call_rcu(),
388- call_srcu(), call_rcu_tasks(), call_rcu_tasks_rude(), or
389- call_rcu_tasks_trace(). For example, if a given CPU goes offline
390- while having an RCU callback pending, then that RCU callback
391- will execute on some surviving CPU. (If this was not the case,
392- a self-spawning RCU callback would prevent the victim CPU from
393- ever going offline.) Furthermore, CPUs designated by rcu_nocbs=
394- might well *always* have their RCU callbacks executed on some
395- other CPUs, in fact, for some real-time workloads, this is the
396- whole point of using the rcu_nocbs= kernel boot parameter.
383+ Do not assume that RCU callbacks will be executed on the same
384+ CPU that executed the corresponding call_rcu(), call_srcu(),
385+ call_rcu_tasks(), or call_rcu_tasks_trace(). For example, if
386+ a given CPU goes offline while having an RCU callback pending,
387+ then that RCU callback will execute on some surviving CPU.
388+ (If this was not the case, a self-spawning RCU callback would
389+ prevent the victim CPU from ever going offline.) Furthermore,
390+ CPUs designated by rcu_nocbs= might well *always* have their
391+ RCU callbacks executed on some other CPUs. In fact, for some
392+ real-time workloads, this is the whole point of using the
393+ rcu_nocbs= kernel boot parameter.
397394
398395 In addition, do not assume that callbacks queued in a given order
399396 will be invoked in that order, even if they all are queued on the
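The self-spawning callback mentioned above, the reason callbacks must be able to migrate off an outgoing CPU, can be sketched briefly. self_head and self_cb() are hypothetical.

```c
/* Sketch: a self-requeuing RCU callback. Nothing pins its execution to
 * the queueing CPU; each invocation may land on any surviving CPU, or on
 * an rcuo kthread when rcu_nocbs= offloading is in effect. */
static struct rcu_head self_head;

static void self_cb(struct rcu_head *rhp)
{
	/* Possibly a different CPU than the previous invocation. */
	call_rcu(rhp, self_cb);
}
```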
@@ -507,9 +504,9 @@ over a rather long period of time, but improvements are always welcome!
507504 These debugging aids can help you find problems that are
508505 otherwise extremely difficult to spot.
509506
510- 17. If you pass a callback function defined within a module to one of
511- call_rcu(), call_srcu(), call_rcu_tasks(), call_rcu_tasks_rude(),
512- or call_rcu_tasks_trace(), then it is necessary to wait for all
507+ 17. If you pass a callback function defined within a module
508+ to one of call_rcu(), call_srcu(), call_rcu_tasks(), or
509+ call_rcu_tasks_trace(), then it is necessary to wait for all
513510 pending callbacks to be invoked before unloading that module.
514511 Note that it is absolutely *not* sufficient to wait for a grace
515512 period! For example, synchronize_rcu() implementation is *not*
@@ -522,7 +519,6 @@ over a rather long period of time, but improvements are always welcome!
522519 - call_rcu() -> rcu_barrier()
523520 - call_srcu() -> srcu_barrier()
524521 - call_rcu_tasks() -> rcu_barrier_tasks()
525- - call_rcu_tasks_rude() -> rcu_barrier_tasks_rude()
526522 - call_rcu_tasks_trace() -> rcu_barrier_tasks_trace()
527523
528524 However, these barrier functions are absolutely *not* guaranteed
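The call_foo()-to-foo_barrier() mapping above typically ends up in a module's exit path. A hedged sketch, assuming my_srcu is a hypothetical srcu_struct and that posting of new callbacks has already been stopped:

```c
/* Sketch: flush all callbacks whose functions live in this module
 * before its text is unloaded. */
static void __exit my_module_exit(void)
{
	/* First prevent new callbacks from being queued (not shown), then: */
	rcu_barrier();			/* call_rcu() callbacks */
	srcu_barrier(&my_srcu);		/* call_srcu(&my_srcu, ...) callbacks */
	rcu_barrier_tasks_trace();	/* call_rcu_tasks_trace() callbacks */
}
module_exit(my_module_exit);
```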
@@ -539,7 +535,6 @@ over a rather long period of time, but improvements are always welcome!
539535 - Either synchronize_srcu() or synchronize_srcu_expedited(),
540536 together with srcu_barrier()
541537 - synchronize_rcu_tasks() and rcu_barrier_tasks()
542- - synchronize_tasks_rude() and rcu_barrier_tasks_rude()
543538 - synchronize_rcu_tasks_trace() and rcu_barrier_tasks_trace()
544539
545540 If necessary, you can use something like workqueues to execute