Commit 0e9e702

Merge branch 'doc.2025.08.20a' into HEAD

RCU documentation updates:

* Update whatisRCU.rst for recent RCU API additions
* Add RCU guards to checklist.rst
* Requirements.rst: Abide by conventions of kernel documentation
* Fix formatting/typo issues in torture.rst and index.rst
* Fix dead URLs in RTFP.txt

2 parents 61399e0 + de117fe

6 files changed, 169 insertions(+), 76 deletions(-)

Documentation/RCU/Design/Requirements/Requirements.rst

Lines changed: 24 additions & 28 deletions
@@ -1973,9 +1973,7 @@ code, and the FQS loop, all of which refer to or modify this bookkeeping.
 Note that grace period initialization (rcu_gp_init()) must carefully sequence
 CPU hotplug scanning with grace period state changes. For example, the
 following race could occur in rcu_gp_init() if rcu_seq_start() were to happen
-after the CPU hotplug scanning.
-
-.. code-block:: none
+after the CPU hotplug scanning::
 
 	CPU0 (rcu_gp_init)                   CPU1                          CPU2
 	---------------------                ----                          ----
@@ -2008,22 +2006,22 @@ after the CPU hotplug scanning.
 	kfree(r1);
 	r2 = *r0; // USE-AFTER-FREE!
 
-By incrementing gp_seq first, CPU1's RCU read-side critical section
+By incrementing ``gp_seq`` first, CPU1's RCU read-side critical section
 is guaranteed to not be missed by CPU2.
 
-**Concurrent Quiescent State Reporting for Offline CPUs**
+Concurrent Quiescent State Reporting for Offline CPUs
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 RCU must ensure that CPUs going offline report quiescent states to avoid
 blocking grace periods. This requires careful synchronization to handle
 race conditions
 
-**Race condition causing Offline CPU to hang GP**
-
-A race between CPU offlining and new GP initialization (gp_init) may occur
-because `rcu_report_qs_rnp()` in `rcutree_report_cpu_dead()` must temporarily
-release the `rcu_node` lock to wake the RCU grace-period kthread:
+Race condition causing Offline CPU to hang GP
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-.. code-block:: none
+A race between CPU offlining and new GP initialization (gp_init()) may occur
+because rcu_report_qs_rnp() in rcutree_report_cpu_dead() must temporarily
+release the ``rcu_node`` lock to wake the RCU grace-period kthread::
 
 	CPU1 (going offline)                     CPU0 (GP kthread)
 	--------------------                     -----------------
@@ -2044,15 +2042,14 @@ release the `rcu_node` lock to wake the RCU grace-period kthread:
 	                                         // Reacquire lock (but too late)
 	rnp->qsmaskinitnext &= ~mask             // Finally clears bit
 
-Without `ofl_lock`, the new grace period includes the offline CPU and waits
+Without ``ofl_lock``, the new grace period includes the offline CPU and waits
 forever for its quiescent state causing a GP hang.
 
-**A solution with ofl_lock**
+A solution with ofl_lock
+^^^^^^^^^^^^^^^^^^^^^^^^
 
-The `ofl_lock` (offline lock) prevents `rcu_gp_init()` from running during
-the vulnerable window when `rcu_report_qs_rnp()` has released `rnp->lock`:
-
-.. code-block:: none
+The ``ofl_lock`` (offline lock) prevents rcu_gp_init() from running during
+the vulnerable window when rcu_report_qs_rnp() has released ``rnp->lock``::
 
 	CPU0 (rcu_gp_init)                   CPU1 (rcutree_report_cpu_dead)
 	------------------                   ------------------------------
@@ -2065,21 +2062,20 @@ the vulnerable window when `rcu_report_qs_rnp()` has released `rnp->lock`:
 	arch_spin_unlock(&ofl_lock) --->     // Now CPU1 can proceed
 	}                                    // But snapshot already taken
 
-**Another race causing GP hangs in rcu_gpu_init(): Reporting QS for Now-offline CPUs**
+Another race causing GP hangs in rcu_gp_init(): Reporting QS for Now-offline CPUs
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 After the first loop takes an atomic snapshot of online CPUs, as shown above,
-the second loop in `rcu_gp_init()` detects CPUs that went offline between
-releasing `ofl_lock` and acquiring the per-node `rnp->lock`. This detection is
-crucial because:
+the second loop in rcu_gp_init() detects CPUs that went offline between
+releasing ``ofl_lock`` and acquiring the per-node ``rnp->lock``.
+This detection is crucial because:
 
 1. The CPU might have gone offline after the snapshot but before the second loop
 2. The offline CPU cannot report its own QS if it's already dead
 3. Without this detection, the grace period would wait forever for CPUs that
    are now offline.
 
-The second loop performs this detection safely:
-
-.. code-block:: none
+The second loop performs this detection safely::
 
 	rcu_for_each_node_breadth_first(rnp) {
 		raw_spin_lock_irqsave_rcu_node(rnp, flags);
@@ -2093,10 +2089,10 @@ The second loop performs this detection safely:
 	}
 
 This approach ensures atomicity: quiescent state reporting for offline CPUs
-happens either in `rcu_gp_init()` (second loop) or in `rcutree_report_cpu_dead()`,
-never both and never neither. The `rnp->lock` held throughout the sequence
-prevents races - `rcutree_report_cpu_dead()` also acquires this lock when
-clearing `qsmaskinitnext`, ensuring mutual exclusion.
+happens either in rcu_gp_init() (second loop) or in rcutree_report_cpu_dead(),
+never both and never neither. The ``rnp->lock`` held throughout the sequence
+prevents races - rcutree_report_cpu_dead() also acquires this lock when
+clearing ``qsmaskinitnext``, ensuring mutual exclusion.
 
 Scheduler and RCU
 ~~~~~~~~~~~~~~~~~

Documentation/RCU/RTFP.txt

Lines changed: 3 additions & 3 deletions
@@ -641,7 +641,7 @@ Orran Krieger and Rusty Russell and Dipankar Sarma and Maneesh Soni"
 ,Month="July"
 ,Year="2001"
 ,note="Available:
-\url{http://www.linuxsymposium.org/2001/abstracts/readcopy.php}
+\url{https://kernel.org/doc/ols/2001/read-copy.pdf}
 \url{http://www.rdrop.com/users/paulmck/RCU/rclock_OLS.2001.05.01c.pdf}
 [Viewed June 23, 2004]"
 ,annotation={
@@ -1480,7 +1480,7 @@ Suparna Bhattacharya"
 ,Year="2006"
 ,pages="v2 123-138"
 ,note="Available:
-\url{http://www.linuxsymposium.org/2006/view_abstract.php?content_key=184}
+\url{https://kernel.org/doc/ols/2006/ols2006v2-pages-131-146.pdf}
 \url{http://www.rdrop.com/users/paulmck/RCU/OLSrtRCU.2006.08.11a.pdf}
 [Viewed January 1, 2007]"
 ,annotation={
@@ -1511,7 +1511,7 @@ Canis Rufus and Zoicon5 and Anome and Hal Eisen"
 ,Year="2006"
 ,pages="v2 249-254"
 ,note="Available:
-\url{http://www.linuxsymposium.org/2006/view_abstract.php?content_key=184}
+\url{https://kernel.org/doc/ols/2006/ols2006v2-pages-249-262.pdf}
 [Viewed January 11, 2009]"
 ,annotation={
 	Uses RCU-protected radix tree for a lockless page cache.

Documentation/RCU/checklist.rst

Lines changed: 19 additions & 8 deletions
@@ -69,7 +69,13 @@ over a rather long period of time, but improvements are always welcome!
 	Explicit disabling of preemption (preempt_disable(), for example)
 	can serve as rcu_read_lock_sched(), but is less readable and
 	prevents lockdep from detecting locking issues. Acquiring a
-	spinlock also enters an RCU read-side critical section.
+	raw spinlock also enters an RCU read-side critical section.
+
+	The guard(rcu)() and scoped_guard(rcu) primitives designate
+	the remainder of the current scope or the next statement,
+	respectively, as the RCU read-side critical section. Use of
+	these guards can be less error-prone than rcu_read_lock(),
+	rcu_read_unlock(), and friends.
 
 	Please note that you *cannot* rely on code known to be built
 	only in non-preemptible kernels. Such code can and will break,
@@ -405,9 +411,11 @@ over a rather long period of time, but improvements are always welcome!
 13.	Unlike most flavors of RCU, it *is* permissible to block in an
 	SRCU read-side critical section (demarked by srcu_read_lock()
 	and srcu_read_unlock()), hence the "SRCU": "sleepable RCU".
-	Please note that if you don't need to sleep in read-side critical
-	sections, you should be using RCU rather than SRCU, because RCU
-	is almost always faster and easier to use than is SRCU.
+	As with RCU, guard(srcu)() and scoped_guard(srcu) forms are
+	available, and often provide greater ease of use. Please note
+	that if you don't need to sleep in read-side critical sections,
+	you should be using RCU rather than SRCU, because RCU is almost
+	always faster and easier to use than is SRCU.
 
 	Also unlike other forms of RCU, explicit initialization and
 	cleanup is required either at build time via DEFINE_SRCU()
@@ -443,10 +451,13 @@ over a rather long period of time, but improvements are always welcome!
 	real-time workloads than is synchronize_rcu_expedited().
 
 	It is also permissible to sleep in RCU Tasks Trace read-side
-	critical section, which are delimited by rcu_read_lock_trace() and
-	rcu_read_unlock_trace(). However, this is a specialized flavor
-	of RCU, and you should not use it without first checking with
-	its current users. In most cases, you should instead use SRCU.
+	critical sections, which are delimited by rcu_read_lock_trace()
+	and rcu_read_unlock_trace(). However, this is a specialized
+	flavor of RCU, and you should not use it without first checking
+	with its current users. In most cases, you should instead
+	use SRCU. As with RCU and SRCU, guard(rcu_tasks_trace)() and
+	scoped_guard(rcu_tasks_trace) are available, and often provide
+	greater ease of use.
 
 	Note that rcu_assign_pointer() relates to SRCU just as it does to
 	other forms of RCU, but instead of rcu_dereference() you should

Documentation/RCU/index.rst

Lines changed: 3 additions & 3 deletions
@@ -1,13 +1,13 @@
 .. SPDX-License-Identifier: GPL-2.0
 
-.. _rcu_concepts:
+.. _rcu_handbook:
 
 ============
-RCU concepts
+RCU Handbook
 ============
 
 .. toctree::
-   :maxdepth: 3
+   :maxdepth: 2
 
    checklist
    lockdep

Documentation/RCU/torture.rst

Lines changed: 2 additions & 2 deletions
@@ -344,7 +344,7 @@ painstaking and error-prone.
 
 And this is why the kvm-remote.sh script exists.
 
-If you the following command works::
+If the following command works::
 
 	ssh system0 date
 
@@ -364,7 +364,7 @@ systems must come first.
 The kvm.sh ``--dryrun scenarios`` argument is useful for working out
 how many scenarios may be run in one batch across a group of systems.
 
-You can also re-run a previous remote run in a manner similar to kvm.sh:
+You can also re-run a previous remote run in a manner similar to kvm.sh::
 
 	kvm-remote.sh "system0 system1 system2 system3 system4 system5" \
 		tools/testing/selftests/rcutorture/res/2022.11.03-11.26.28-remote \
