Commit d2658f2

Authored by sergey-senozhatsky, committed by akpm00
zsmalloc: allow only one active pool compaction context
zsmalloc pool can be compacted concurrently by many contexts, e.g.

	cc1
	 handle_mm_fault()
	  do_anonymous_page()
	   __alloc_pages_slowpath()
	    try_to_free_pages()
	     do_try_to_free_pages()
	      lru_gen_shrink_node()
	       shrink_slab()
	        do_shrink_slab()
	         zs_shrinker_scan()
	          zs_compact()

Pool compaction is currently (basically) single-threaded as it is performed under pool->lock.  Having multiple compaction threads results in unnecessary contention, as each thread competes for pool->lock.  This, in turn, affects all zsmalloc operations such as zs_malloc(), zs_map_object(), zs_free(), etc.

Introduce the pool->compaction_in_progress atomic variable, which ensures that only one compaction context can run at a time.  This reduces overall pool->lock contention in (corner) cases when many contexts attempt to shrink zspool simultaneously.

Link: https://lkml.kernel.org/r/20230418074639.1903197-1-senozhatsky@chromium.org
Fixes: c0547d0 ("zsmalloc: consolidate zs_pool's migrate_lock and size_class's locks")
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Reviewed-by: Yosry Ahmed <yosryahmed@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
1 parent: 07115fc

1 file changed: mm/zsmalloc.c (12 additions, 0 deletions)
mm/zsmalloc.c

@@ -264,6 +264,7 @@ struct zs_pool {
 	struct work_struct free_work;
 #endif
 	spinlock_t lock;
+	atomic_t compaction_in_progress;
 };

 struct zspage {
@@ -2274,13 +2275,23 @@ unsigned long zs_compact(struct zs_pool *pool)
 	struct size_class *class;
 	unsigned long pages_freed = 0;

+	/*
+	 * Pool compaction is performed under pool->lock so it is basically
+	 * single-threaded. Having more than one thread in __zs_compact()
+	 * will increase pool->lock contention, which will impact other
+	 * zsmalloc operations that need pool->lock.
+	 */
+	if (atomic_xchg(&pool->compaction_in_progress, 1))
+		return 0;
+
 	for (i = ZS_SIZE_CLASSES - 1; i >= 0; i--) {
 		class = pool->size_class[i];
 		if (class->index != i)
 			continue;
 		pages_freed += __zs_compact(pool, class);
 	}
 	atomic_long_add(pages_freed, &pool->stats.pages_compacted);
+	atomic_set(&pool->compaction_in_progress, 0);

 	return pages_freed;
 }
@@ -2388,6 +2399,7 @@ struct zs_pool *zs_create_pool(const char *name)

 	init_deferred_free(pool);
 	spin_lock_init(&pool->lock);
+	atomic_set(&pool->compaction_in_progress, 0);

 	pool->name = kstrdup(name, GFP_KERNEL);
 	if (!pool->name)
