Commit 21d02f8

Mel Gorman authored and Linus Torvalds committed
mm/page_alloc: move free_the_page
Patch series "Allow high order pages to be stored on PCP", v2.

The per-cpu page allocator (PCP) only handles order-0 pages. With the series "Use local_lock for pcp protection and reduce stat overhead" and "Calculate pcp->high based on zone sizes and active CPUs", it's now feasible to store high-order pages on PCP lists. This small series allows PCP to store "cheap" orders, where "cheap" is determined by PAGE_ALLOC_COSTLY_ORDER and THP-sized allocations.

This patch (of 2):

In the next patch, free_compound_page is going to use the common helper free_the_page. This patch moves the definition to ease review. No functional change.

Link: https://lkml.kernel.org/r/20210603142220.10851-1-mgorman@techsingularity.net
Link: https://lkml.kernel.org/r/20210603142220.10851-2-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1 parent f7ec104 commit 21d02f8

1 file changed: mm/page_alloc.c (8 additions & 8 deletions)
@@ -687,6 +687,14 @@ static void bad_page(struct page *page, const char *reason)
 	add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
 }
 
+static inline void free_the_page(struct page *page, unsigned int order)
+{
+	if (order == 0)		/* Via pcp? */
+		free_unref_page(page);
+	else
+		__free_pages_ok(page, order, FPI_NONE);
+}
+
 /*
  * Higher-order pages are called "compound pages".  They are structured thusly:
  *
@@ -5349,14 +5357,6 @@ unsigned long get_zeroed_page(gfp_t gfp_mask)
 }
 EXPORT_SYMBOL(get_zeroed_page);
 
-static inline void free_the_page(struct page *page, unsigned int order)
-{
-	if (order == 0)		/* Via pcp? */
-		free_unref_page(page);
-	else
-		__free_pages_ok(page, order, FPI_NONE);
-}
-
 /**
  * __free_pages - Free pages allocated with alloc_pages().
  * @page: The page pointer returned from alloc_pages().
