Commit 671776b

Xiongwei Song authored and tehcaster committed
mm/slub: unify all sl[au]b parameters with "slab_$param"
Since the SLAB allocator has been removed, we can clean up the sl[au]b_$params. With only one slab allocator left, it is better to use the generic "slab" term instead of "slub", which is an implementation detail, as pointed out by Vlastimil Babka. For more information please see [1]. Hence, we are going to use "slab_$param" as the primary prefix.

This patch changes the following slab parameters
- slub_max_order
- slub_min_order
- slub_min_objects
- slub_debug
to
- slab_max_order
- slab_min_order
- slab_min_objects
- slab_debug
as the primary slab parameters for all references to them in docs and comments. This patch does not change variables and functions inside slub, as a wider slub/slab rename will follow.

Meanwhile, "slub_$param" can still be passed on the command line to keep backward compatibility, and all "slub_$param" spellings are marked as legacy.

Remove the separate descriptions for slub_[no]merge and instead append a legacy note at the end of the descriptions of slab_[no]merge.

[1] https://lore.kernel.org/linux-mm/7512b350-4317-21a0-fab3-4101bc4d8f7a@suse.cz/

Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
1 parent f186816 commit 671776b

6 files changed: 62 additions & 64 deletions

Documentation/admin-guide/kernel-parameters.txt

Lines changed: 32 additions & 39 deletions
@@ -5891,65 +5891,58 @@
 	simeth=		[IA-64]
 	simscsi=
 
-	slram=		[HW,MTD]
-
-	slab_merge	[MM]
-			Enable merging of slabs with similar size when the
-			kernel is built without CONFIG_SLAB_MERGE_DEFAULT.
-
-	slab_nomerge	[MM]
-			Disable merging of slabs with similar size. May be
-			necessary if there is some reason to distinguish
-			allocs to different slabs, especially in hardened
-			environments where the risk of heap overflows and
-			layout control by attackers can usually be
-			frustrated by disabling merging. This will reduce
-			most of the exposure of a heap attack to a single
-			cache (risks via metadata attacks are mostly
-			unchanged). Debug options disable merging on their
-			own.
-			For more information see Documentation/mm/slub.rst.
-
-	slab_max_order=	[MM, SLAB]
-			Determines the maximum allowed order for slabs.
-			A high setting may cause OOMs due to memory
-			fragmentation. Defaults to 1 for systems with
-			more than 32MB of RAM, 0 otherwise.
-
-	slub_debug[=options[,slabs][;[options[,slabs]]...]	[MM, SLUB]
-			Enabling slub_debug allows one to determine the
+	slab_debug[=options[,slabs][;[options[,slabs]]...]	[MM]
+			Enabling slab_debug allows one to determine the
 			culprit if slab objects become corrupted. Enabling
-			slub_debug can create guard zones around objects and
+			slab_debug can create guard zones around objects and
 			may poison objects when not in use. Also tracks the
 			last alloc / free. For more information see
 			Documentation/mm/slub.rst.
+			(slub_debug legacy name also accepted for now)
 
-	slub_max_order=	[MM, SLUB]
+	slab_max_order=	[MM]
 			Determines the maximum allowed order for slabs.
 			A high setting may cause OOMs due to memory
 			fragmentation. For more information see
 			Documentation/mm/slub.rst.
+			(slub_max_order legacy name also accepted for now)
+
+	slab_merge	[MM]
+			Enable merging of slabs with similar size when the
+			kernel is built without CONFIG_SLAB_MERGE_DEFAULT.
+			(slub_merge legacy name also accepted for now)
 
-	slub_min_objects=	[MM, SLUB]
+	slab_min_objects=	[MM]
 			The minimum number of objects per slab. SLUB will
-			increase the slab order up to slub_max_order to
+			increase the slab order up to slab_max_order to
 			generate a sufficiently large slab able to contain
 			the number of objects indicated. The higher the number
 			of objects the smaller the overhead of tracking slabs
 			and the less frequently locks need to be acquired.
 			For more information see Documentation/mm/slub.rst.
+			(slub_min_objects legacy name also accepted for now)
 
-	slub_min_order=	[MM, SLUB]
+	slab_min_order=	[MM]
 			Determines the minimum page order for slabs. Must be
-			lower than slub_max_order.
-			For more information see Documentation/mm/slub.rst.
+			lower or equal to slab_max_order. For more information see
+			Documentation/mm/slub.rst.
+			(slub_min_order legacy name also accepted for now)
 
-	slub_merge	[MM, SLUB]
-			Same with slab_merge.
+	slab_nomerge	[MM]
+			Disable merging of slabs with similar size. May be
+			necessary if there is some reason to distinguish
+			allocs to different slabs, especially in hardened
+			environments where the risk of heap overflows and
+			layout control by attackers can usually be
+			frustrated by disabling merging. This will reduce
+			most of the exposure of a heap attack to a single
+			cache (risks via metadata attacks are mostly
+			unchanged). Debug options disable merging on their
+			own.
+			For more information see Documentation/mm/slub.rst.
+			(slub_nomerge legacy name also accepted for now)
 
-	slub_nomerge	[MM, SLUB]
-			Same with slab_nomerge. This is supported for legacy.
-			See slab_nomerge for more information.
+	slram=		[HW,MTD]
 
 	smart2=		[HW]
 			Format: <io1>[,<io2>[,...,<io8>]]
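The options[,slabs][;...] syntax documented above can be sketched in plain C. This is a hypothetical userspace illustration of how a value such as slab_debug=ZF,dentry;U,kmalloc-64 splits into blocks; it is not the kernel's actual parser, and the struct and buffer sizes are arbitrary choices:

```c
#include <stdio.h>
#include <string.h>

/* Illustrative only: blocks are separated by ';'; within a block the
 * option letters come before the first ',', followed by slab names. */
struct debug_block {
	char options[16];   /* option letters, e.g. "ZF" */
	char slabs[64];     /* comma-separated slab list, may be empty */
};

static int split_blocks(const char *value, struct debug_block *out, int max)
{
	char buf[256];
	int n = 0;

	snprintf(buf, sizeof(buf), "%s", value);
	for (char *blk = strtok(buf, ";"); blk && n < max;
	     blk = strtok(NULL, ";")) {
		char *comma = strchr(blk, ',');

		if (comma) {
			snprintf(out[n].slabs, sizeof(out[n].slabs),
				 "%s", comma + 1);
			*comma = '\0';
		} else {
			out[n].slabs[0] = '\0';
		}
		snprintf(out[n].options, sizeof(out[n].options), "%s", blk);
		n++;
	}
	return n;
}
```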

drivers/misc/lkdtm/heap.c

Lines changed: 1 addition & 1 deletion
@@ -48,7 +48,7 @@ static void lkdtm_VMALLOC_LINEAR_OVERFLOW(void)
  * correctly.
  *
  * This should get caught by either memory tagging, KASan, or by using
- * CONFIG_SLUB_DEBUG=y and slub_debug=ZF (or CONFIG_SLUB_DEBUG_ON=y).
+ * CONFIG_SLUB_DEBUG=y and slab_debug=ZF (or CONFIG_SLUB_DEBUG_ON=y).
  */
 static void lkdtm_SLAB_LINEAR_OVERFLOW(void)
 {
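The red zoning that slab_debug=Z enables (and that this lkdtm test exercises) can be illustrated with a userspace analogy. This sketch is not kernel code: it pads a malloc'd object with a known byte pattern and checks the pattern on free, which is the same idea a slab red zone uses to catch a linear overflow; the pattern byte and zone size here are made up:

```c
#include <stdlib.h>
#include <string.h>

#define REDZONE 8
#define REDZONE_BYTE 0xcc

static void *guarded_alloc(size_t size)
{
	unsigned char *p = malloc(size + REDZONE);

	if (!p)
		return NULL;
	/* Poison the bytes just past the object with a known pattern. */
	memset(p + size, REDZONE_BYTE, REDZONE);
	return p;
}

/* Returns 0 if the red zone is intact, -1 if it was overwritten. */
static int guarded_free(void *obj, size_t size)
{
	unsigned char *p = obj;
	int ok = 0;

	for (size_t i = 0; i < REDZONE; i++)
		if (p[size + i] != REDZONE_BYTE)
			ok = -1;
	free(p);
	return ok;
}
```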

mm/Kconfig.debug

Lines changed: 3 additions & 3 deletions
@@ -64,11 +64,11 @@ config SLUB_DEBUG_ON
 	help
 	  Boot with debugging on by default. SLUB boots by default with
 	  the runtime debug capabilities switched off. Enabling this is
-	  equivalent to specifying the "slub_debug" parameter on boot.
+	  equivalent to specifying the "slab_debug" parameter on boot.
 	  There is no support for more fine grained debug control like
-	  possible with slub_debug=xxx. SLUB debugging may be switched
+	  possible with slab_debug=xxx. SLUB debugging may be switched
 	  off in a kernel built with CONFIG_SLUB_DEBUG_ON by specifying
-	  "slub_debug=-".
+	  "slab_debug=-".
 
 config PAGE_OWNER
 	bool "Track page owner"

mm/slab.h

Lines changed: 1 addition & 1 deletion
@@ -528,7 +528,7 @@ static inline bool __slub_debug_enabled(void)
 #endif
 
 /*
- * Returns true if any of the specified slub_debug flags is enabled for the
+ * Returns true if any of the specified slab_debug flags is enabled for the
  * cache. Use only for flags parsed by setup_slub_debug() as it also enables
  * the static key.
 */

mm/slab_common.c

Lines changed: 2 additions & 2 deletions
@@ -282,7 +282,7 @@ kmem_cache_create_usercopy(const char *name,
 
 #ifdef CONFIG_SLUB_DEBUG
 	/*
-	 * If no slub_debug was enabled globally, the static key is not yet
+	 * If no slab_debug was enabled globally, the static key is not yet
 	 * enabled by setup_slub_debug(). Enable it if the cache is being
 	 * created with any of the debugging flags passed explicitly.
 	 * It's also possible that this is the first cache created with
@@ -766,7 +766,7 @@ EXPORT_SYMBOL(kmalloc_size_roundup);
 }
 
 /*
- * kmalloc_info[] is to make slub_debug=,kmalloc-xx option work at boot time.
+ * kmalloc_info[] is to make slab_debug=,kmalloc-xx option work at boot time.
  * kmalloc_index() supports up to 2^21=2MB, so the final entry of the table is
  * kmalloc-2M.
 */

mm/slub.c

Lines changed: 23 additions & 18 deletions
@@ -295,7 +295,7 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
 
 /*
  * Debugging flags that require metadata to be stored in the slab. These get
- * disabled when slub_debug=O is used and a cache's min order increases with
+ * disabled when slab_debug=O is used and a cache's min order increases with
  * metadata.
 */
 #define DEBUG_METADATA_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)
@@ -1616,7 +1616,7 @@ static inline int free_consistency_checks(struct kmem_cache *s,
 }
 
 /*
- * Parse a block of slub_debug options. Blocks are delimited by ';'
+ * Parse a block of slab_debug options. Blocks are delimited by ';'
 *
 * @str: start of block
 * @flags: returns parsed flags, or DEBUG_DEFAULT_FLAGS if none specified
@@ -1677,7 +1677,7 @@ parse_slub_debug_flags(char *str, slab_flags_t *flags, char **slabs, bool init)
 			break;
 		default:
 			if (init)
-				pr_err("slub_debug option '%c' unknown. skipped\n", *str);
+				pr_err("slab_debug option '%c' unknown. skipped\n", *str);
 		}
 	}
 check_slabs:
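The letter-by-letter parse that this hunk touches can be sketched in standalone C. This is a simplified illustration in the spirit of parse_slub_debug_flags(), not the kernel function: the letter meanings follow Documentation/mm/slub.rst, but the DBG_* bit values are invented for the example, and unknown letters are reported and skipped just as the pr_err above does:

```c
#include <ctype.h>
#include <stdio.h>

/* Illustrative flag bits; the real slab_flags_t values differ. */
#define DBG_CONSISTENCY 0x01 /* F: sanity checks */
#define DBG_RED_ZONE    0x02 /* Z: guard zones around objects */
#define DBG_POISON      0x04 /* P: poison objects when not in use */
#define DBG_STORE_USER  0x08 /* U: track last alloc/free */
#define DBG_TRACE       0x10 /* T: trace alloc/free */

static unsigned int parse_debug_letters(const char *str)
{
	unsigned int flags = 0;

	for (; *str; str++) {
		switch (tolower((unsigned char)*str)) {
		case 'f': flags |= DBG_CONSISTENCY; break;
		case 'z': flags |= DBG_RED_ZONE; break;
		case 'p': flags |= DBG_POISON; break;
		case 'u': flags |= DBG_STORE_USER; break;
		case 't': flags |= DBG_TRACE; break;
		default:
			/* Mirrors the renamed message in the hunk above. */
			fprintf(stderr,
				"slab_debug option '%c' unknown. skipped\n",
				*str);
		}
	}
	return flags;
}
```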
@@ -1736,7 +1736,7 @@ static int __init setup_slub_debug(char *str)
 	/*
 	 * For backwards compatibility, a single list of flags with list of
 	 * slabs means debugging is only changed for those slabs, so the global
-	 * slub_debug should be unchanged (0 or DEBUG_DEFAULT_FLAGS, depending
+	 * slab_debug should be unchanged (0 or DEBUG_DEFAULT_FLAGS, depending
 	 * on CONFIG_SLUB_DEBUG_ON). We can extended that to multiple lists as
 	 * long as there is no option specifying flags without a slab list.
 	 */
@@ -1760,7 +1760,8 @@ static int __init setup_slub_debug(char *str)
 	return 1;
 }
 
-__setup("slub_debug", setup_slub_debug);
+__setup("slab_debug", setup_slub_debug);
+__setup_param("slub_debug", slub_debug, setup_slub_debug, 0);
 
 /*
  * kmem_cache_flags - apply debugging options to the cache
@@ -1770,7 +1771,7 @@ __setup("slab_debug", setup_slub_debug);
 *
 * Debug option(s) are applied to @flags. In addition to the debug
 * option(s), if a slab name (or multiple) is specified i.e.
- * slub_debug=<Debug-Options>,<slab name1>,<slab name2> ...
+ * slab_debug=<Debug-Options>,<slab name1>,<slab name2> ...
 * then only the select slabs will receive the debug option(s).
 */
 slab_flags_t kmem_cache_flags(unsigned int object_size,
@@ -3263,7 +3264,7 @@ slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid)
 		oo_order(s->min));
 
 	if (oo_order(s->min) > get_order(s->object_size))
-		pr_warn(" %s debugging increased min order, use slub_debug=O to disable.\n",
+		pr_warn(" %s debugging increased min order, use slab_debug=O to disable.\n",
 			s->name);
 
 	for_each_kmem_cache_node(s, node, n) {
@@ -3792,11 +3793,11 @@ void slab_post_alloc_hook(struct kmem_cache *s, struct obj_cgroup *objcg,
 		zero_size = orig_size;
 
 	/*
-	 * When slub_debug is enabled, avoid memory initialization integrated
+	 * When slab_debug is enabled, avoid memory initialization integrated
 	 * into KASAN and instead zero out the memory via the memset below with
 	 * the proper size. Otherwise, KASAN might overwrite SLUB redzones and
 	 * cause false-positive reports. This does not lead to a performance
-	 * penalty on production builds, as slub_debug is not intended to be
+	 * penalty on production builds, as slab_debug is not intended to be
 	 * enabled there.
 	 */
 	if (__slub_debug_enabled())
@@ -4702,8 +4703,8 @@ static unsigned int slub_min_objects;
 * activity on the partial lists which requires taking the list_lock. This is
 * less a concern for large slabs though which are rarely used.
 *
- * slub_max_order specifies the order where we begin to stop considering the
- * number of objects in a slab as critical. If we reach slub_max_order then
+ * slab_max_order specifies the order where we begin to stop considering the
+ * number of objects in a slab as critical. If we reach slab_max_order then
 * we try to keep the page order as low as possible. So we accept more waste
 * of space in favor of a small page order.
 *
@@ -4770,14 +4771,14 @@ static inline int calculate_order(unsigned int size)
 	 * and backing off gradually.
 	 *
 	 * We start with accepting at most 1/16 waste and try to find the
-	 * smallest order from min_objects-derived/slub_min_order up to
-	 * slub_max_order that will satisfy the constraint. Note that increasing
+	 * smallest order from min_objects-derived/slab_min_order up to
+	 * slab_max_order that will satisfy the constraint. Note that increasing
 	 * the order can only result in same or less fractional waste, not more.
 	 *
 	 * If that fails, we increase the acceptable fraction of waste and try
 	 * again. The last iteration with fraction of 1/2 would effectively
 	 * accept any waste and give us the order determined by min_objects, as
-	 * long as at least single object fits within slub_max_order.
+	 * long as at least single object fits within slab_max_order.
 	 */
 	for (unsigned int fraction = 16; fraction > 1; fraction /= 2) {
 		order = calc_slab_order(size, min_order, slub_max_order,
@@ -4787,7 +4788,7 @@ static inline int calculate_order(unsigned int size)
 	}
 
 	/*
-	 * Doh this slab cannot be placed using slub_max_order.
+	 * Doh this slab cannot be placed using slab_max_order.
 	 */
 	order = get_order(size);
 	if (order <= MAX_PAGE_ORDER)
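The order-selection comment that this hunk renames can be distilled into a standalone sketch: accept at most 1/16 of the slab as tail waste, and if no order in [min_order, max_order] satisfies that, relax to 1/8, 1/4, then 1/2. This mirrors the comment's description, not the exact kernel code; PAGE_SIZE and the order bounds here are illustrative:

```c
#define PAGE_SIZE 4096u

/* Smallest order whose tail waste is at most slab_size / fract_leftover,
 * or -1 if none qualifies in the range. */
static int calc_slab_order(unsigned int size, int min_order, int max_order,
			   unsigned int fract_leftover)
{
	for (int order = min_order; order <= max_order; order++) {
		unsigned int slab_size = PAGE_SIZE << order;
		unsigned int rem = slab_size % size;

		if (rem <= slab_size / fract_leftover)
			return order;
	}
	return -1;
}

static int calculate_order(unsigned int size, int min_order, int max_order)
{
	/* Back off gradually: 1/16, 1/8, 1/4, 1/2 acceptable waste. */
	for (unsigned int fraction = 16; fraction > 1; fraction /= 2) {
		int order = calc_slab_order(size, min_order, max_order,
					    fraction);

		if (order >= 0)
			return order;
	}
	return -1;
}
```

Note that a larger order can only reduce the fractional waste for a given object size, which is why the inner loop can safely return the first qualifying order.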
@@ -5313,7 +5314,9 @@ static int __init setup_slub_min_order(char *str)
 	return 1;
 }
 
-__setup("slub_min_order=", setup_slub_min_order);
+__setup("slab_min_order=", setup_slub_min_order);
+__setup_param("slub_min_order=", slub_min_order, setup_slub_min_order, 0);
+
 
 static int __init setup_slub_max_order(char *str)
 {
@@ -5326,7 +5329,8 @@ static int __init setup_slub_max_order(char *str)
 	return 1;
 }
 
-__setup("slub_max_order=", setup_slub_max_order);
+__setup("slab_max_order=", setup_slub_max_order);
+__setup_param("slub_max_order=", slub_max_order, setup_slub_max_order, 0);
 
 static int __init setup_slub_min_objects(char *str)
 {
@@ -5335,7 +5339,8 @@ static int __init setup_slub_min_objects(char *str)
 	return 1;
 }
 
-__setup("slub_min_objects=", setup_slub_min_objects);
+__setup("slab_min_objects=", setup_slub_min_objects);
+__setup_param("slub_min_objects=", slub_min_objects, setup_slub_min_objects, 0);
 
 #ifdef CONFIG_HARDENED_USERCOPY
 /*
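The pattern in these hunks, registering the new name with __setup() and the legacy name with __setup_param(), works because both registrations point at the same handler. A userspace sketch of that idea, with a hypothetical table walk standing in for the kernel's .init.setup section machinery and an invented handle_param() dispatcher:

```c
#include <stdlib.h>
#include <string.h>

static unsigned int slab_max_order;

static int setup_slab_max_order(const char *val)
{
	slab_max_order = (unsigned int)strtoul(val, NULL, 10);
	return 1;
}

struct setup_entry {
	const char *prefix;            /* parameter name including '=' */
	int (*handler)(const char *);  /* shared by primary and legacy */
};

static const struct setup_entry setup_table[] = {
	{ "slab_max_order=", setup_slab_max_order }, /* primary name */
	{ "slub_max_order=", setup_slab_max_order }, /* legacy alias */
};

/* Dispatch one "name=value" token to its handler; 0 if unrecognized. */
static int handle_param(const char *param)
{
	for (size_t i = 0;
	     i < sizeof(setup_table) / sizeof(setup_table[0]); i++) {
		size_t len = strlen(setup_table[i].prefix);

		if (strncmp(param, setup_table[i].prefix, len) == 0)
			return setup_table[i].handler(param + len);
	}
	return 0;
}
```

Either spelling on the command line thus lands in the same setup function, which is exactly the backward compatibility the commit message promises.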
