
Commit 736b378

Merge tag 'slab-for-6.4' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab
Pull slab updates from Vlastimil Babka:

 "The main change is naturally the SLOB removal. Since its deprecation
  in 6.2 I've seen no complaints, so hopefully SLUB(_TINY) works well
  for everyone and we can proceed.

  Besides the code cleanup, the main immediate benefit is that the
  kfree() family of functions now works on kmem_cache_alloc() objects,
  which was incompatible with SLOB. This includes kfree_rcu(), which
  had no kmem_cache_free_rcu() counterpart; one shouldn't be necessary
  anymore.

  Besides that, there are several small code and comment improvements
  from Thomas, Thorsten and Vernon"

* tag 'slab-for-6.4' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab:
  mm/slab: document kfree() as allowed for kmem_cache_alloc() objects
  mm/slob: remove slob.c
  mm/slab: remove CONFIG_SLOB code from slab common code
  mm, pagemap: remove SLOB and SLQB from comments and documentation
  mm, page_flags: remove PG_slob_free
  mm/slob: remove CONFIG_SLOB
  mm/slub: fix help comment of SLUB_DEBUG
  mm: slub: make kobj_type structure constant
  slab: Adjust comment after refactoring of gfp.h
2 parents: 1170453 + ed4cdfb
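To make the headline change concrete, here is a minimal C sketch of freeing a kmem_cache_alloc() object with plain kfree(), which SLOB could not support. The cache name, struct foo and foo_demo() are invented for illustration; the allocator calls are the documented API:

	#include <linux/errno.h>
	#include <linux/slab.h>

	struct foo {				/* hypothetical example object */
		int val;
	};

	static int foo_demo(void)
	{
		struct kmem_cache *cache;
		struct foo *obj;

		cache = kmem_cache_create("foo_cache", sizeof(struct foo), 0, 0, NULL);
		if (!cache)
			return -ENOMEM;

		obj = kmem_cache_alloc(cache, GFP_KERNEL);
		if (obj)
			kfree(obj);	/* now documented as allowed; kmem_cache_free(cache, obj) still works */

		kmem_cache_destroy(cache);	/* only after all objects are freed */
		return obj ? 0 : -ENOMEM;
	}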

16 files changed: 32 additions & 917 deletions


Documentation/admin-guide/mm/pagemap.rst

Lines changed: 3 additions & 3 deletions

@@ -91,9 +91,9 @@ Short descriptions to the page flags
     The page is being locked for exclusive access, e.g. by undergoing read/write
     IO.
 7 - SLAB
-    The page is managed by the SLAB/SLOB/SLUB/SLQB kernel memory allocator.
-    When compound page is used, SLUB/SLQB will only set this flag on the head
-    page; SLOB will not flag it at all.
+    The page is managed by the SLAB/SLUB kernel memory allocator.
+    When compound page is used, either will only set this flag on the head
+    page.
 10 - BUDDY
     A free memory block managed by the buddy system allocator.
     The buddy system organizes free memory in blocks of various orders.
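Since these flags are consumed through /proc/kpageflags, a small userspace sketch may help. This is an assumed usage (bit 7 per the documentation above; reading kpageflags typically requires root), not part of the commit:

	#include <fcntl.h>
	#include <inttypes.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	#define KPF_SLAB 7	/* bit index documented in pagemap.rst */

	int main(int argc, char **argv)
	{
		uint64_t flags, pfn;
		int fd;

		if (argc != 2) {
			fprintf(stderr, "usage: %s <pfn>\n", argv[0]);
			return 1;
		}
		pfn = strtoull(argv[1], NULL, 0);

		/* one 64-bit flags word per page frame, at offset pfn * 8 */
		fd = open("/proc/kpageflags", O_RDONLY);
		if (fd < 0 || pread(fd, &flags, sizeof(flags), pfn * 8) != sizeof(flags)) {
			perror("kpageflags");
			return 1;
		}
		close(fd);

		/* only the head page of a slab compound page carries the bit */
		printf("pfn %" PRIu64 ": %sslab\n", pfn,
		       (flags >> KPF_SLAB) & 1 ? "" : "not ");
		return 0;
	}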

Documentation/core-api/memory-allocation.rst

Lines changed: 13 additions & 4 deletions

@@ -170,7 +170,16 @@ should be used if a part of the cache might be copied to the userspace.
 After the cache is created kmem_cache_alloc() and its convenience
 wrappers can allocate memory from that cache.
 
-When the allocated memory is no longer needed it must be freed. You can
-use kvfree() for the memory allocated with `kmalloc`, `vmalloc` and
-`kvmalloc`. The slab caches should be freed with kmem_cache_free(). And
-don't forget to destroy the cache with kmem_cache_destroy().
+When the allocated memory is no longer needed it must be freed.
+
+Objects allocated by `kmalloc` can be freed by `kfree` or `kvfree`. Objects
+allocated by `kmem_cache_alloc` can be freed with `kmem_cache_free`, `kfree`
+or `kvfree`, where the latter two might be more convenient thanks to not
+needing the kmem_cache pointer.
+
+The same rules apply to _bulk and _rcu flavors of freeing functions.
+
+Memory allocated by `vmalloc` can be freed with `vfree` or `kvfree`.
+Memory allocated by `kvmalloc` can be freed with `kvfree`.
+Caches created by `kmem_cache_create` should be freed with
+`kmem_cache_destroy` only after freeing all the allocated objects first.
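A hedged kernel-side sketch of the updated rules (free_rules_demo() is invented; the allocation and freeing functions are the documented API):

	#include <linux/errno.h>
	#include <linux/slab.h>		/* kmalloc(), kfree(), kvmalloc(), kvfree() */

	static int free_rules_demo(size_t n)
	{
		void *a = kmalloc(64, GFP_KERNEL);
		void *b = kvmalloc(n, GFP_KERNEL);	/* slab- or vmalloc-backed */

		if (!a || !b) {
			kfree(a);	/* both tolerate NULL */
			kvfree(b);
			return -ENOMEM;
		}

		kvfree(a);	/* kmalloc memory may be freed with kfree() or kvfree() */
		kvfree(b);	/* kvmalloc memory must use kvfree(), never plain kfree() */
		return 0;
	}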

fs/proc/page.c

Lines changed: 4 additions & 5 deletions

@@ -125,7 +125,7 @@ u64 stable_page_flags(struct page *page)
 	/*
 	 * pseudo flags for the well known (anonymous) memory mapped pages
 	 *
-	 * Note that page->_mapcount is overloaded in SLOB/SLUB/SLQB, so the
+	 * Note that page->_mapcount is overloaded in SLAB, so the
 	 * simple test in page_mapped() is not enough.
 	 */
 	if (!PageSlab(page) && page_mapped(page))
@@ -165,9 +165,8 @@ u64 stable_page_flags(struct page *page)
 
 
 	/*
-	 * Caveats on high order pages: page->_refcount will only be set
-	 * -1 on the head page; SLUB/SLQB do the same for PG_slab;
-	 * SLOB won't set PG_slab at all on compound pages.
+	 * Caveats on high order pages: PG_buddy and PG_slab will only be set
+	 * on the head page.
 	 */
 	if (PageBuddy(page))
 		u |= 1 << KPF_BUDDY;
@@ -185,7 +184,7 @@ u64 stable_page_flags(struct page *page)
 	u |= kpf_copy_bit(k, KPF_LOCKED, PG_locked);
 
 	u |= kpf_copy_bit(k, KPF_SLAB, PG_slab);
-	if (PageTail(page) && PageSlab(compound_head(page)))
+	if (PageTail(page) && PageSlab(page))
 		u |= 1 << KPF_SLAB;
 
 	u |= kpf_copy_bit(k, KPF_ERROR, PG_error);

include/linux/page-flags.h

Lines changed: 0 additions & 4 deletions

@@ -174,9 +174,6 @@ enum pageflags {
 	/* Remapped by swiotlb-xen. */
 	PG_xen_remapped = PG_owner_priv_1,
 
-	/* SLOB */
-	PG_slob_free = PG_private,
-
 #ifdef CONFIG_MEMORY_FAILURE
 	/*
 	 * Compound pages. Stored in first tail page's flags.
@@ -483,7 +480,6 @@ PAGEFLAG(Active, active, PF_HEAD) __CLEARPAGEFLAG(Active, active, PF_HEAD)
 PAGEFLAG(Workingset, workingset, PF_HEAD)
 	TESTCLEARFLAG(Workingset, workingset, PF_HEAD)
 __PAGEFLAG(Slab, slab, PF_NO_TAIL)
-__PAGEFLAG(SlobFree, slob_free, PF_NO_TAIL)
 PAGEFLAG(Checked, checked, PF_NO_COMPOUND)	/* Used by some filesystems */
 
 /* Xen */
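For orientation, a simplified sketch of what the surviving __PAGEFLAG(Slab, slab, PF_NO_TAIL) line generates. This approximates the macro expansion and is not code from the patch (the real PF_NO_TAIL policy also poison-checks the page); the _sketch suffix marks the functions as illustrative:

	#include <linux/mm.h>	/* compound_head() */

	/* Approximate expansion of __PAGEFLAG(Slab, slab, PF_NO_TAIL): */
	static __always_inline int PageSlab_sketch(struct page *page)
	{
		/* PF_NO_TAIL: reads resolve a tail page to its head page */
		return test_bit(PG_slab, &compound_head(page)->flags);
	}

	static __always_inline void __SetPageSlab_sketch(struct page *page)
	{
		/* modifications are only legal on small or head pages */
		__set_bit(PG_slab, &page->flags);
	}

This head-page resolution is why the fs/proc/page.c hunk above can drop its explicit compound_head() call.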

include/linux/rcupdate.h

Lines changed: 4 additions & 2 deletions

@@ -976,8 +976,10 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
 * either fall back to use of call_rcu() or rearrange the structure to
 * position the rcu_head structure into the first 4096 bytes.
 *
-* Note that the allowable offset might decrease in the future, for example,
-* to allow something like kmem_cache_free_rcu().
+* The object to be freed can be allocated either by kmalloc() or
+* kmem_cache_alloc().
+*
+* Note that the allowable offset might decrease in the future.
 *
 * The BUILD_BUG_ON check must not involve any function calls, hence the
 * checks are done in macros here.
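A minimal sketch of the documented constraint (struct gadget and gadget_retire() are hypothetical): the rcu_head must live within the first 4096 bytes of the object, and the object may come from kmalloc() or, after this merge, kmem_cache_alloc():

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct gadget {				/* hypothetical example */
		long payload[8];
		struct rcu_head rcu;		/* offset must stay below 4096 bytes */
	};

	static void gadget_retire(struct gadget *g)
	{
		/*
		 * The offset of 'rcu' is verified at compile time by the
		 * BUILD_BUG_ON mentioned above; the free happens after a
		 * grace period, regardless of which allocator produced g.
		 */
		kfree_rcu(g, rcu);
	}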

include/linux/slab.h

Lines changed: 1 addition & 40 deletions

@@ -298,19 +298,6 @@ static inline unsigned int arch_slab_minalign(void)
 #endif
 #endif
 
-#ifdef CONFIG_SLOB
-/*
- * SLOB passes all requests larger than one page to the page allocator.
- * No kmalloc array is necessary since objects of different sizes can
- * be allocated from the same page.
- */
-#define KMALLOC_SHIFT_HIGH	PAGE_SHIFT
-#define KMALLOC_SHIFT_MAX	(MAX_ORDER + PAGE_SHIFT - 1)
-#ifndef KMALLOC_SHIFT_LOW
-#define KMALLOC_SHIFT_LOW	3
-#endif
-#endif
-
 /* Maximum allocatable size */
 #define KMALLOC_MAX_SIZE	(1UL << KMALLOC_SHIFT_MAX)
 /* Maximum size for which we actually use a slab cache */
@@ -366,7 +353,6 @@ enum kmalloc_cache_type {
 	NR_KMALLOC_TYPES
 };
 
-#ifndef CONFIG_SLOB
 extern struct kmem_cache *
 kmalloc_caches[NR_KMALLOC_TYPES][KMALLOC_SHIFT_HIGH + 1];
 
@@ -458,7 +444,6 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
 }
 static_assert(PAGE_SHIFT <= 20);
 #define kmalloc_index(s) __kmalloc_index(s, true)
-#endif /* !CONFIG_SLOB */
 
 void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_size(1);
 
@@ -487,10 +472,6 @@ void kmem_cache_free(struct kmem_cache *s, void *objp);
 void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p);
 int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, void **p);
 
-/*
- * Caller must not use kfree_bulk() on memory not originally allocated
- * by kmalloc(), because the SLOB allocator cannot handle this.
- */
 static __always_inline void kfree_bulk(size_t size, void **p)
 {
 	kmem_cache_free_bulk(NULL, size, p);
@@ -526,7 +507,7 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node) __assume_page_alignment
 * to be at least to the size.
 *
 * The @flags argument may be one of the GFP flags defined at
-* include/linux/gfp.h and described at
+* include/linux/gfp_types.h and described at
 * :ref:`Documentation/core-api/mm-api.rst <mm-api-gfp-flags>`
 *
 * The recommended usage of the @flags is described at
@@ -567,7 +548,6 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node) __assume_page_alignment
 * Try really hard to succeed the allocation but fail
 * eventually.
 */
-#ifndef CONFIG_SLOB
 static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
 {
 	if (__builtin_constant_p(size) && size) {
@@ -583,17 +563,7 @@ static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
 	}
 	return __kmalloc(size, flags);
 }
-#else
-static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
-{
-	if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
-		return kmalloc_large(size, flags);
-
-	return __kmalloc(size, flags);
-}
-#endif
 
-#ifndef CONFIG_SLOB
 static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
 {
 	if (__builtin_constant_p(size) && size) {
@@ -609,15 +579,6 @@ static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
 	}
 	return __kmalloc_node(size, flags, node);
 }
-#else
-static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
-{
-	if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
-		return kmalloc_large_node(size, flags, node);
-
-	return __kmalloc_node(size, flags, node);
-}
-#endif
 
 /**
 * kmalloc_array - allocate memory for an array.
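With the kfree_bulk() caveat above removed, a hedged sketch of what is now permitted (bulk_demo() is invented; the bulk API declarations appear in the diff above, and the documentation change earlier in this commit states the _bulk flavors follow the same freeing rules):

	#include <linux/errno.h>
	#include <linux/kernel.h>	/* ARRAY_SIZE() */
	#include <linux/slab.h>

	static int bulk_demo(struct kmem_cache *cache)
	{
		void *objs[8];
		int n;

		/* returns the number of objects allocated, 0 on failure */
		n = kmem_cache_alloc_bulk(cache, GFP_KERNEL, ARRAY_SIZE(objs), objs);
		if (!n)
			return -ENOMEM;

		/* ... use objs[0..n-1] ... */

		/*
		 * kfree_bulk() is kmem_cache_free_bulk(NULL, ...); with SLOB
		 * gone it can also free objects that came from a kmem_cache.
		 */
		kfree_bulk(n, objs);
		return 0;
	}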

init/Kconfig

Lines changed: 1 addition & 1 deletion

@@ -945,7 +945,7 @@ config MEMCG
 
 config MEMCG_KMEM
 	bool
-	depends on MEMCG && !SLOB
+	depends on MEMCG
 	default y
 
 config BLK_CGROUP

kernel/configs/tiny.config

Lines changed: 0 additions & 1 deletion

@@ -7,6 +7,5 @@ CONFIG_KERNEL_XZ=y
 # CONFIG_KERNEL_LZO is not set
 # CONFIG_KERNEL_LZ4 is not set
 # CONFIG_SLAB is not set
-# CONFIG_SLOB_DEPRECATED is not set
 CONFIG_SLUB=y
 CONFIG_SLUB_TINY=y

mm/Kconfig

Lines changed: 0 additions & 22 deletions

@@ -238,30 +238,8 @@ config SLUB
 	  and has enhanced diagnostics. SLUB is the default choice for
 	  a slab allocator.
 
-config SLOB_DEPRECATED
-	depends on EXPERT
-	bool "SLOB (Simple Allocator - DEPRECATED)"
-	depends on !PREEMPT_RT
-	help
-	  Deprecated and scheduled for removal in a few cycles. SLUB
-	  recommended as replacement. CONFIG_SLUB_TINY can be considered
-	  on systems with 16MB or less RAM.
-
-	  If you need SLOB to stay, please contact linux-mm@kvack.org and
-	  people listed in the SLAB ALLOCATOR section of MAINTAINERS file,
-	  with your use case.
-
-	  SLOB replaces the stock allocator with a drastically simpler
-	  allocator. SLOB is generally more space efficient but
-	  does not perform as well on large systems.
-
 endchoice
 
-config SLOB
-	bool
-	default y
-	depends on SLOB_DEPRECATED
-
 config SLUB_TINY
 	bool "Configure SLUB for minimal memory footprint"
 	depends on SLUB && EXPERT

mm/Kconfig.debug

Lines changed: 3 additions & 3 deletions

@@ -60,9 +60,9 @@ config SLUB_DEBUG
 	select STACKDEPOT if STACKTRACE_SUPPORT
 	help
 	  SLUB has extensive debug support features. Disabling these can
-	  result in significant savings in code size. This also disables
-	  SLUB sysfs support. /sys/slab will not exist and there will be
-	  no support for cache validation etc.
+	  result in significant savings in code size. While /sys/kernel/slab
+	  will still exist (with SYSFS enabled), it will not provide e.g. cache
+	  validation.
 
 config SLUB_DEBUG_ON
 	bool "SLUB debugging on by default"
