
Commit c438b7d

fbq authored and paulmckrcu committed
tools/memory-model: litmus: Add two tests for unlock(A)+lock(B) ordering
The memory model has been updated to provide a stronger ordering
guarantee for unlock(A)+lock(B) on the same CPU/thread. Therefore add
two litmus tests describing this new guarantee. These tests are simple
yet clearly show the usage of the new guarantee, and they can also
serve as self-tests for the modification to the model.

Co-developed-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
1 parent: b47c05e

3 files changed: 76 additions & 0 deletions
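
A usage note, not part of the commit: LKMM litmus tests such as these
are normally run with the herd7 tool from the kernel's
tools/memory-model directory (see tools/memory-model/README for the
tooling setup), for example:

	$ herd7 -conf linux-kernel.cfg litmus-tests/LB+unlocklockonceonce+poacquireonce.litmus

herd7 reports whether the "exists" clause at the end of a test is
reachable, which is what each test's "Result: Never" comment asserts.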

tools/memory-model/litmus-tests/LB+unlocklockonceonce+poacquireonce.litmus (new file)

Lines changed: 35 additions & 0 deletions
@@ -0,0 +1,35 @@
C LB+unlocklockonceonce+poacquireonce

(*
 * Result: Never
 *
 * If two locked critical sections execute on the same CPU, all accesses
 * in the first must execute before any accesses in the second, even if the
 * critical sections are protected by different locks.  Note: Even when a
 * write executes before a read, their memory effects can be reordered from
 * the viewpoint of another CPU (the kind of reordering allowed by TSO).
 *)

{}

P0(spinlock_t *s, spinlock_t *t, int *x, int *y)
{
	int r1;

	spin_lock(s);
	r1 = READ_ONCE(*x);
	spin_unlock(s);
	spin_lock(t);
	WRITE_ONCE(*y, 1);
	spin_unlock(t);
}

P1(int *x, int *y)
{
	int r2;

	r2 = smp_load_acquire(y);
	WRITE_ONCE(*x, 1);
}

exists (0:r1=1 /\ 1:r2=1)
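
The "Note:" in the comment above is worth a concrete illustration. The
following editorial sketch is not part of this commit; its name and
expected result are assumptions, inferred from that comment. It moves
the write before the unlock and the read after the lock, which is
exactly the write-then-read case the strengthened guarantee still
leaves unordered, so the store-buffering outcome should remain
reachable (Sometimes rather than Never):

C SB+unlocklockonceonce+fencembonceonce

(*
 * Expected result: Sometimes (hypothetical test, not in the kernel tree).
 *
 * P0's WRITE_ONCE() is separated from its READ_ONCE() only by the
 * unlock+lock pair, the one combination the new guarantee does not
 * order, so both processes can read the initial value zero.
 *)

{}

P0(spinlock_t *s, spinlock_t *t, int *x, int *y)
{
	int r1;

	spin_lock(s);
	WRITE_ONCE(*x, 1);
	spin_unlock(s);
	spin_lock(t);
	r1 = READ_ONCE(*y);
	spin_unlock(t);
}

P1(int *x, int *y)
{
	int r2;

	WRITE_ONCE(*y, 1);
	smp_mb();
	r2 = READ_ONCE(*x);
}

exists (0:r1=0 /\ 1:r2=0)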
tools/memory-model/litmus-tests/MP+unlocklockonceonce+fencermbonceonce.litmus (new file)

Lines changed: 33 additions & 0 deletions
@@ -0,0 +1,33 @@
C MP+unlocklockonceonce+fencermbonceonce

(*
 * Result: Never
 *
 * If two locked critical sections execute on the same CPU, stores in the
 * first must propagate to each CPU before stores in the second do, even if
 * the critical sections are protected by different locks.
 *)

{}

P0(spinlock_t *s, spinlock_t *t, int *x, int *y)
{
	spin_lock(s);
	WRITE_ONCE(*x, 1);
	spin_unlock(s);
	spin_lock(t);
	WRITE_ONCE(*y, 1);
	spin_unlock(t);
}

P1(int *x, int *y)
{
	int r1;
	int r2;

	r1 = READ_ONCE(*y);
	smp_rmb();
	r2 = READ_ONCE(*x);
}

exists (1:r1=1 /\ 1:r2=0)
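
A follow-on editorial sketch, again not part of this commit: when full
ordering across an unlock+lock pair is required, the kernel provides
smp_mb__after_unlock_lock(), which LKMM also models. Assuming that
fence is placed right after the second spin_lock(), the write-then-read
case from the previous sketch should become forbidden:

C SB+unlocklockmbonceonce+fencembonceonce

(*
 * Expected result: Never (hypothetical test, not in the kernel tree).
 *
 * smp_mb__after_unlock_lock() upgrades the preceding unlock+lock pair
 * to a full barrier, ordering P0's write before its read, so the
 * store-buffering outcome is ruled out.
 *)

{}

P0(spinlock_t *s, spinlock_t *t, int *x, int *y)
{
	int r1;

	spin_lock(s);
	WRITE_ONCE(*x, 1);
	spin_unlock(s);
	spin_lock(t);
	smp_mb__after_unlock_lock();
	r1 = READ_ONCE(*y);
	spin_unlock(t);
}

P1(int *x, int *y)
{
	int r2;

	WRITE_ONCE(*y, 1);
	smp_mb();
	r2 = READ_ONCE(*x);
}

exists (0:r1=0 /\ 1:r2=0)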

tools/memory-model/litmus-tests/README

Lines changed: 8 additions & 0 deletions
@@ -63,6 +63,10 @@ LB+poonceonces.litmus
 	As above, but with store-release replaced with WRITE_ONCE()
 	and load-acquire replaced with READ_ONCE().
 
+LB+unlocklockonceonce+poacquireonce.litmus
+	Does an unlock+lock pair provide an ordering guarantee between a
+	load and a store?
+
 MP+onceassign+derefonce.litmus
 	As below, but with rcu_assign_pointer() and an rcu_dereference().
 

@@ -90,6 +94,10 @@ MP+porevlocks.litmus
 	As below, but with the first access of the writer process
 	and the second access of reader process protected by a lock.
 
+MP+unlocklockonceonce+fencermbonceonce.litmus
+	Does an unlock+lock pair provide an ordering guarantee between a
+	store and another store?
+
 MP+fencewmbonceonce+fencermbonceonce.litmus
 	Does a smp_wmb() (between the stores) and an smp_rmb() (between
 	the loads) suffice for the message-passing litmus test, where one
