
Commit ed4cdfb

Merge branch 'slab/for-6.4/slob-removal' into slab/for-next
A series by myself to remove CONFIG_SLOB: The SLOB allocator was deprecated in 6.2 and there have been no complaints so far, so let's proceed with the removal.

Besides the code cleanup, the main immediate benefit will be allowing the kfree() family of functions to work on kmem_cache_alloc() objects, which was incompatible with SLOB. This includes kfree_rcu(), which had no kmem_cache_free_rcu() counterpart yet and now shouldn't need one anymore. Otherwise it's all straightforward removal.

After this series, 'git grep slob' or 'git grep SLOB' will have 3 remaining relevant hits in non-mm code:

- tomoyo - patch submitted and carried there, doesn't need to wait for this series
- skbuff - patch to clean up now-unnecessary #ifdefs will be posted to netdev after this is merged, as requested to avoid conflicts
- ftrace ring_buffer - patch to remove an obsolete comment is carried there

The rest of the 'git grep SLOB' hits are false positives, or intentional (CREDITS, and the mm/Kconfig SLUB_TINY description to help those that happen to migrate later).
2 parents: 8f0293b + ae65a52

14 files changed: 27 additions & 912 deletions


Documentation/admin-guide/mm/pagemap.rst

Lines changed: 3 additions & 3 deletions
@@ -91,9 +91,9 @@ Short descriptions to the page flags
    The page is being locked for exclusive access, e.g. by undergoing read/write
    IO.
 7 - SLAB
-   The page is managed by the SLAB/SLOB/SLUB/SLQB kernel memory allocator.
-   When compound page is used, SLUB/SLQB will only set this flag on the head
-   page; SLOB will not flag it at all.
+   The page is managed by the SLAB/SLUB kernel memory allocator.
+   When compound page is used, either will only set this flag on the head
+   page.
 10 - BUDDY
    A free memory block managed by the buddy system allocator.
    The buddy system organizes free memory in blocks of various orders.

Documentation/core-api/memory-allocation.rst

Lines changed: 13 additions & 4 deletions
@@ -170,7 +170,16 @@ should be used if a part of the cache might be copied to the userspace.
 After the cache is created kmem_cache_alloc() and its convenience
 wrappers can allocate memory from that cache.
 
-When the allocated memory is no longer needed it must be freed. You can
-use kvfree() for the memory allocated with `kmalloc`, `vmalloc` and
-`kvmalloc`. The slab caches should be freed with kmem_cache_free(). And
-don't forget to destroy the cache with kmem_cache_destroy().
+When the allocated memory is no longer needed it must be freed.
+
+Objects allocated by `kmalloc` can be freed by `kfree` or `kvfree`. Objects
+allocated by `kmem_cache_alloc` can be freed with `kmem_cache_free`, `kfree`
+or `kvfree`, where the latter two might be more convenient thanks to not
+needing the kmem_cache pointer.
+
+The same rules apply to _bulk and _rcu flavors of freeing functions.
+
+Memory allocated by `vmalloc` can be freed with `vfree` or `kvfree`.
+Memory allocated by `kvmalloc` can be freed with `kvfree`.
+Caches created by `kmem_cache_create` should be freed with
+`kmem_cache_destroy` only after freeing all the allocated objects first.
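A short illustrative fragment may make the new rules concrete. This is a hypothetical kernel-style sketch, not part of the diff; `struct foo`, `my_cache` and `example()` are invented names:

```c
/* Hypothetical sketch of the documented freeing rules (not from the patch). */
struct foo {
        int x;
};

static struct kmem_cache *my_cache;     /* created with kmem_cache_create() */

static void example(void)
{
        struct foo *a = kmem_cache_alloc(my_cache, GFP_KERNEL);
        struct foo *b = kmem_cache_alloc(my_cache, GFP_KERNEL);
        void *buf = kmalloc(128, GFP_KERNEL);

        kmem_cache_free(my_cache, a);   /* the classic pairing */
        kfree(b);       /* valid for slab-cache objects once SLOB is gone */
        kfree(buf);     /* kmalloc memory, as before */
}
```

The second kfree() is exactly the pattern the commit message calls out as the main immediate benefit of the removal.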

fs/proc/page.c

Lines changed: 4 additions & 5 deletions
@@ -125,7 +125,7 @@ u64 stable_page_flags(struct page *page)
        /*
         * pseudo flags for the well known (anonymous) memory mapped pages
         *
-        * Note that page->_mapcount is overloaded in SLOB/SLUB/SLQB, so the
+        * Note that page->_mapcount is overloaded in SLAB, so the
         * simple test in page_mapped() is not enough.
         */
        if (!PageSlab(page) && page_mapped(page))
@@ -165,9 +165,8 @@ u64 stable_page_flags(struct page *page)
 
 
        /*
-        * Caveats on high order pages: page->_refcount will only be set
-        * -1 on the head page; SLUB/SLQB do the same for PG_slab;
-        * SLOB won't set PG_slab at all on compound pages.
+        * Caveats on high order pages: PG_buddy and PG_slab will only be set
+        * on the head page.
         */
        if (PageBuddy(page))
                u |= 1 << KPF_BUDDY;
@@ -185,7 +184,7 @@ u64 stable_page_flags(struct page *page)
        u |= kpf_copy_bit(k, KPF_LOCKED, PG_locked);
 
        u |= kpf_copy_bit(k, KPF_SLAB, PG_slab);
-       if (PageTail(page) && PageSlab(compound_head(page)))
+       if (PageTail(page) && PageSlab(page))
                u |= 1 << KPF_SLAB;
 
        u |= kpf_copy_bit(k, KPF_ERROR, PG_error);
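For context on the hunk above: stable_page_flags() builds the user-visible flag word with kpf_copy_bit(), which is essentially the following (a userspace restatement, assuming the helper's usual in-tree shape):

```c
#include <stdint.h>

/* Copy the bit at position `kbit` of the kernel flags word into
 * position `ubit` of the user-visible flags word, as done by the
 * kpf_copy_bit() helper in fs/proc/page.c. */
static inline uint64_t kpf_copy_bit(uint64_t kflags, int ubit, int kbit)
{
        return ((kflags >> kbit) & 1) << ubit;
}
```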

include/linux/page-flags.h

Lines changed: 0 additions & 4 deletions
@@ -174,9 +174,6 @@ enum pageflags {
        /* Remapped by swiotlb-xen. */
        PG_xen_remapped = PG_owner_priv_1,
 
-       /* SLOB */
-       PG_slob_free = PG_private,
-
 #ifdef CONFIG_MEMORY_FAILURE
        /*
         * Compound pages. Stored in first tail page's flags.
@@ -483,7 +480,6 @@ PAGEFLAG(Active, active, PF_HEAD) __CLEARPAGEFLAG(Active, active, PF_HEAD)
 PAGEFLAG(Workingset, workingset, PF_HEAD)
        TESTCLEARFLAG(Workingset, workingset, PF_HEAD)
 __PAGEFLAG(Slab, slab, PF_NO_TAIL)
-__PAGEFLAG(SlobFree, slob_free, PF_NO_TAIL)
 PAGEFLAG(Checked, checked, PF_NO_COMPOUND)     /* Used by some filesystems */
 
 /* Xen */

include/linux/rcupdate.h

Lines changed: 4 additions & 2 deletions
@@ -976,8 +976,10 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
  * either fall back to use of call_rcu() or rearrange the structure to
  * position the rcu_head structure into the first 4096 bytes.
  *
- * Note that the allowable offset might decrease in the future, for example,
- * to allow something like kmem_cache_free_rcu().
+ * The object to be freed can be allocated either by kmalloc() or
+ * kmem_cache_alloc().
+ *
+ * Note that the allowable offset might decrease in the future.
  *
  * The BUILD_BUG_ON check must not involve any function calls, hence the
  * checks are done in macros here.
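A minimal sketch of what the updated comment permits. This is hypothetical kernel-style code, not from the patch; `struct item`, `item_cache` and `remove_item()` are invented names:

```c
/* Since kfree() now handles kmem_cache_alloc() objects, kfree_rcu()
 * can free them too. The rcu_head must sit within the first 4096
 * bytes of the object. */
struct item {
        int data;
        struct rcu_head rcu;
};

static struct kmem_cache *item_cache;   /* created elsewhere */

static void remove_item(struct item *it)
{
        /* Frees after a grace period; works whether *it came from
         * kmalloc() or kmem_cache_alloc(item_cache, ...). */
        kfree_rcu(it, rcu);
}
```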

include/linux/slab.h

Lines changed: 0 additions & 39 deletions
@@ -298,19 +298,6 @@ static inline unsigned int arch_slab_minalign(void)
 #endif
 #endif
 
-#ifdef CONFIG_SLOB
-/*
- * SLOB passes all requests larger than one page to the page allocator.
- * No kmalloc array is necessary since objects of different sizes can
- * be allocated from the same page.
- */
-#define KMALLOC_SHIFT_HIGH     PAGE_SHIFT
-#define KMALLOC_SHIFT_MAX      (MAX_ORDER + PAGE_SHIFT - 1)
-#ifndef KMALLOC_SHIFT_LOW
-#define KMALLOC_SHIFT_LOW      3
-#endif
-#endif
-
 /* Maximum allocatable size */
 #define KMALLOC_MAX_SIZE       (1UL << KMALLOC_SHIFT_MAX)
 /* Maximum size for which we actually use a slab cache */
@@ -366,7 +353,6 @@ enum kmalloc_cache_type {
        NR_KMALLOC_TYPES
 };
 
-#ifndef CONFIG_SLOB
 extern struct kmem_cache *
 kmalloc_caches[NR_KMALLOC_TYPES][KMALLOC_SHIFT_HIGH + 1];
 
@@ -458,7 +444,6 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
 }
 static_assert(PAGE_SHIFT <= 20);
 #define kmalloc_index(s) __kmalloc_index(s, true)
-#endif /* !CONFIG_SLOB */
 
 void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_size(1);
 
@@ -487,10 +472,6 @@ void kmem_cache_free(struct kmem_cache *s, void *objp);
 void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p);
 int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, void **p);
 
-/*
- * Caller must not use kfree_bulk() on memory not originally allocated
- * by kmalloc(), because the SLOB allocator cannot handle this.
- */
 static __always_inline void kfree_bulk(size_t size, void **p)
 {
        kmem_cache_free_bulk(NULL, size, p);
@@ -567,7 +548,6 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node) __assume_page_align
  * Try really hard to succeed the allocation but fail
  * eventually.
  */
-#ifndef CONFIG_SLOB
 static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
 {
        if (__builtin_constant_p(size) && size) {
@@ -583,17 +563,7 @@ static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
        }
        return __kmalloc(size, flags);
 }
-#else
-static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
-{
-       if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
-               return kmalloc_large(size, flags);
-
-       return __kmalloc(size, flags);
-}
-#endif
 
-#ifndef CONFIG_SLOB
 static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
 {
        if (__builtin_constant_p(size) && size) {
@@ -609,15 +579,6 @@ static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t fla
        }
        return __kmalloc_node(size, flags, node);
 }
-#else
-static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
-{
-       if (__builtin_constant_p(size) && size > KMALLOC_MAX_CACHE_SIZE)
-               return kmalloc_large_node(size, flags, node);
-
-       return __kmalloc_node(size, flags, node);
-}
-#endif
 
 /**
  * kmalloc_array - allocate memory for an array.
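The now-unconditional kmalloc() fast path relies on the compiler resolving `__builtin_constant_p(size)`, so a compile-time-known size above the cache limit is routed straight to the large-allocation path. A userspace sketch of that dispatch pattern (all names here are invented; the kernel uses `__always_inline` functions, while a macro is used below so the trick also works without optimization):

```c
#include <stdlib.h>

/* Invented stand-in for KMALLOC_MAX_CACHE_SIZE. */
#define MAX_CACHE_SIZE 8192

static int large_calls, generic_calls;

static void *alloc_large(size_t n)   { large_calls++;   return malloc(n); }
static void *alloc_generic(size_t n) { generic_calls++; return malloc(n); }

/* Mirrors kmalloc()'s shape: a size the compiler can prove constant and
 * larger than the cache limit is routed to the large path at compile
 * time; everything else falls through to the generic allocator. */
#define my_alloc(n)                                             \
        ((__builtin_constant_p(n) && (n) > MAX_CACHE_SIZE)      \
                ? alloc_large(n)                                \
                : alloc_generic(n))
```

With a literal argument, `__builtin_constant_p` folds to 1 and the branch is selected at compile time, which is what lets the kernel version pick a kmalloc cache index or the page allocator without any runtime size check for constant sizes.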

init/Kconfig

Lines changed: 1 addition & 1 deletion
@@ -973,7 +973,7 @@ config MEMCG
 
 config MEMCG_KMEM
        bool
-       depends on MEMCG && !SLOB
+       depends on MEMCG
        default y
 
 config BLK_CGROUP

kernel/configs/tiny.config

Lines changed: 0 additions & 1 deletion
@@ -7,6 +7,5 @@ CONFIG_KERNEL_XZ=y
 # CONFIG_KERNEL_LZO is not set
 # CONFIG_KERNEL_LZ4 is not set
 # CONFIG_SLAB is not set
-# CONFIG_SLOB_DEPRECATED is not set
 CONFIG_SLUB=y
 CONFIG_SLUB_TINY=y

mm/Kconfig

Lines changed: 0 additions & 22 deletions
@@ -238,30 +238,8 @@ config SLUB
           and has enhanced diagnostics. SLUB is the default choice for
           a slab allocator.
 
-config SLOB_DEPRECATED
-       depends on EXPERT
-       bool "SLOB (Simple Allocator - DEPRECATED)"
-       depends on !PREEMPT_RT
-       help
-          Deprecated and scheduled for removal in a few cycles. SLUB
-          recommended as replacement. CONFIG_SLUB_TINY can be considered
-          on systems with 16MB or less RAM.
-
-          If you need SLOB to stay, please contact linux-mm@kvack.org and
-          people listed in the SLAB ALLOCATOR section of MAINTAINERS file,
-          with your use case.
-
-          SLOB replaces the stock allocator with a drastically simpler
-          allocator. SLOB is generally more space efficient but
-          does not perform as well on large systems.
-
 endchoice
 
-config SLOB
-       bool
-       default y
-       depends on SLOB_DEPRECATED
-
 config SLUB_TINY
        bool "Configure SLUB for minimal memory footprint"
        depends on SLUB && EXPERT

mm/Makefile

Lines changed: 0 additions & 2 deletions
@@ -22,7 +22,6 @@ KCSAN_INSTRUMENT_BARRIERS := y
 # flaky coverage that is not a function of syscall inputs. E.g. slab is out of
 # free pages, or a task is migrated between nodes.
 KCOV_INSTRUMENT_slab_common.o := n
-KCOV_INSTRUMENT_slob.o := n
 KCOV_INSTRUMENT_slab.o := n
 KCOV_INSTRUMENT_slub.o := n
 KCOV_INSTRUMENT_page_alloc.o := n
@@ -81,7 +80,6 @@ obj-$(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP) += hugetlb_vmemmap.o
 obj-$(CONFIG_NUMA) += mempolicy.o
 obj-$(CONFIG_SPARSEMEM) += sparse.o
 obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
-obj-$(CONFIG_SLOB) += slob.o
 obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
 obj-$(CONFIG_KSM) += ksm.o
 obj-$(CONFIG_PAGE_POISONING) += page_poison.o
