
Commit 82feeaa

Authored by Matthew Wilcox (Oracle), committed by akpm00
slub: use a folio in __kmalloc_large_node
Mirror the code in free_large_kmalloc() and alloc_pages_node() and use a
folio directly.  Avoid the use of folio_alloc() as that will set up an
rmappable folio which we do not want here.

Link: https://lkml.kernel.org/r/20231228085748.1083901-5-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: David Rientjes <rientjes@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
1 parent 2443fb5 commit 82feeaa

1 file changed: mm/slab_common.c (5 additions & 5 deletions)
@@ -1137,18 +1137,18 @@ gfp_t kmalloc_fix_flags(gfp_t flags)
 
 static void *__kmalloc_large_node(size_t size, gfp_t flags, int node)
 {
-	struct page *page;
+	struct folio *folio;
 	void *ptr = NULL;
 	unsigned int order = get_order(size);
 
 	if (unlikely(flags & GFP_SLAB_BUG_MASK))
 		flags = kmalloc_fix_flags(flags);
 
 	flags |= __GFP_COMP;
-	page = alloc_pages_node(node, flags, order);
-	if (page) {
-		ptr = page_address(page);
-		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+	folio = (struct folio *)alloc_pages_node(node, flags, order);
+	if (folio) {
+		ptr = folio_address(folio);
+		lruvec_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B,
 			      PAGE_SIZE << order);
 	}
