Commit 07252b0

Qi Zheng authored and akpm00 committed
Revert "mm: vmscan: remove shrinker_rwsem from synchronize_shrinkers()"
This reverts commit 1643db9.

Kernel test robot reports a -88.8% regression in the stress-ng.ramfs.ops_per_sec test case [1], which is caused by commit f95bdb7 ("mm: vmscan: make global slab shrink lockless"). The root cause is that SRCU has to be careful to not frequently check for SRCU read-side critical section exits. Therefore, even if no one is currently in the SRCU read-side critical section, synchronize_srcu() cannot return quickly. That is why unregister_shrinker() has become slower.

We will try to use the refcount+RCU method [2] proposed by Dave Chinner to continue to re-implement the lockless slab shrink. So we still need shrinker_rwsem in synchronize_shrinkers() after reverting the shrinker_srcu related changes.

[1] https://lore.kernel.org/lkml/202305230837.db2c233f-yujie.liu@intel.com/
[2] https://lore.kernel.org/lkml/ZIJhou1d55d4H1s0@dread.disaster.area/

Link: https://lkml.kernel.org/r/20230609081518.3039120-3-qi.zheng@linux.dev
Reported-by: kernel test robot <yujie.liu@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202305230837.db2c233f-yujie.liu@intel.com
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Kirill Tkhai <tkhai@ya.ru>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
1 parent 47a7c01 commit 07252b0

1 file changed: mm/vmscan.c
Lines changed: 6 additions & 2 deletions
@@ -831,11 +831,15 @@ EXPORT_SYMBOL(unregister_shrinker);
 /**
  * synchronize_shrinkers - Wait for all running shrinkers to complete.
  *
- * This is useful to guarantee that all shrinker invocations have seen an
- * update, before freeing memory.
+ * This is equivalent to calling unregister_shrink() and register_shrinker(),
+ * but atomically and with less overhead. This is useful to guarantee that all
+ * shrinker invocations have seen an update, before freeing memory, similar to
+ * rcu.
  */
 void synchronize_shrinkers(void)
 {
+	down_write(&shrinker_rwsem);
+	up_write(&shrinker_rwsem);
 	atomic_inc(&shrinker_srcu_generation);
 	synchronize_srcu(&shrinker_srcu);
 }
