
Commit aba029c

mjkravetz authored and gregkh committed
hugetlbfs: fix races and page leaks during migration
commit cb6acd0 upstream.

hugetlb pages should only be migrated if they are 'active'.  The
routines set/clear_page_huge_active() modify the active state of
hugetlb pages.

When a new hugetlb page is allocated at fault time, set_page_huge_active
is called before the page is locked.  Therefore, another thread could
race and migrate the page while it is being added to the page table by
the fault code.  This race is somewhat hard to trigger, but can be seen
by strategically adding udelay to simulate worst case scheduling
behavior.  Depending on 'how' the code races, various BUG()s could be
triggered.

To address this issue, simply delay the set_page_huge_active call until
after the page is successfully added to the page table.

Hugetlb pages can also be leaked at migration time if the pages are
associated with a file in an explicitly mounted hugetlbfs filesystem.
For example, consider a two node system with 4GB worth of huge pages
available.  A program mmaps a 2G file in a hugetlbfs filesystem.  It
then migrates the pages associated with the file from one node to
another.  When the program exits, huge page counts are as follows:

  node0
  1024    free_hugepages
  1024    nr_hugepages

  node1
  0       free_hugepages
  1024    nr_hugepages

  Filesystem                Size  Used Avail Use% Mounted on
  nodev                     4.0G  2.0G  2.0G  50% /var/opt/hugepool

That is as expected.  2G of huge pages are taken from the free_hugepages
counts, and 2G is the size of the file in the explicitly mounted
filesystem.  If the file is then removed, the counts become:

  node0
  1024    free_hugepages
  1024    nr_hugepages

  node1
  1024    free_hugepages
  1024    nr_hugepages

  Filesystem                Size  Used Avail Use% Mounted on
  nodev                     4.0G  2.0G  2.0G  50% /var/opt/hugepool

Note that the filesystem still shows 2G of pages used, while there
actually are no huge pages in use.  The only way to 'fix' the filesystem
accounting is to unmount the filesystem.

If a hugetlb page is associated with an explicitly mounted filesystem,
this information is contained in the page_private field.  At migration
time, this information is not preserved.  To fix, simply transfer
page_private from the old to the new page at migration time if
necessary.

There is a related race with removing a huge page from a file and
migration.  When a huge page is removed from the pagecache, the
page_mapping() field is cleared, yet page_private remains set until the
page is actually freed by free_huge_page().  A page could be migrated
while in this state.  However, since page_mapping() is not set, the
hugetlbfs specific routine to transfer page_private is not called and we
leak the page count in the filesystem.

To fix that, check for this condition before migrating a huge page.  If
the condition is detected, return -EBUSY for the page.

Link: http://lkml.kernel.org/r/74510272-7319-7372-9ea6-ec914734c179@oracle.com
Link: http://lkml.kernel.org/r/20190212221400.3512-1-mike.kravetz@oracle.com
Fixes: bcc5422 ("mm: hugetlb: introduce page_huge_active")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: <stable@vger.kernel.org>
[mike.kravetz@oracle.com: v2]
Link: http://lkml.kernel.org/r/7534d322-d782-8ac6-1c8d-a8dc380eb3ab@oracle.com
[mike.kravetz@oracle.com: update comment and changelog]
Link: http://lkml.kernel.org/r/420bcfd6-158b-38e4-98da-26d0cd85bd01@oracle.com
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
1 parent 5b98f09 commit aba029c

3 files changed: 35 additions & 2 deletions


fs/hugetlbfs/inode.c

Lines changed: 12 additions & 0 deletions
@@ -869,6 +869,18 @@ static int hugetlbfs_migrate_page(struct address_space *mapping,
 	rc = migrate_huge_page_move_mapping(mapping, newpage, page);
 	if (rc != MIGRATEPAGE_SUCCESS)
 		return rc;
+
+	/*
+	 * page_private is subpool pointer in hugetlb pages.  Transfer to
+	 * new page.  PagePrivate is not associated with page_private for
+	 * hugetlb pages and can not be set here as only page_huge_active
+	 * pages can be migrated.
+	 */
+	if (page_private(page)) {
+		set_page_private(newpage, page_private(page));
+		set_page_private(page, 0);
+	}
+
 	migrate_page_copy(newpage, page);
 
 	return MIGRATEPAGE_SUCCESS;

mm/hugetlb.c

Lines changed: 12 additions & 2 deletions
@@ -3472,7 +3472,6 @@ static int hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 	copy_user_huge_page(new_page, old_page, address, vma,
 			    pages_per_huge_page(h));
 	__SetPageUptodate(new_page);
-	set_page_huge_active(new_page);
 
 	mmun_start = address & huge_page_mask(h);
 	mmun_end = mmun_start + huge_page_size(h);
@@ -3494,6 +3493,7 @@ static int hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 				make_huge_pte(vma, new_page, 1));
 		page_remove_rmap(old_page);
 		hugepage_add_new_anon_rmap(new_page, vma, address);
+		set_page_huge_active(new_page);
 		/* Make the old page be freed below */
 		new_page = old_page;
 	}
@@ -3575,6 +3575,7 @@ static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	struct page *page;
 	pte_t new_pte;
 	spinlock_t *ptl;
+	bool new_page = false;
 
 	/*
 	 * Currently, we are forced to kill the process in the event the
@@ -3608,7 +3609,7 @@ static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	}
 	clear_huge_page(page, address, pages_per_huge_page(h));
 	__SetPageUptodate(page);
-	set_page_huge_active(page);
+	new_page = true;
 
 	if (vma->vm_flags & VM_MAYSHARE) {
 		int err = huge_add_to_page_cache(page, mapping, idx);
@@ -3680,6 +3681,15 @@ static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	}
 
 	spin_unlock(ptl);
+
+	/*
+	 * Only make newly allocated pages active.  Existing pages found
+	 * in the pagecache could be !page_huge_active() if they have been
+	 * isolated for migration.
+	 */
+	if (new_page)
+		set_page_huge_active(page);
+
 	unlock_page(page);
 out:
 	return ret;

mm/migrate.c

Lines changed: 11 additions & 0 deletions
@@ -1056,6 +1056,16 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 		lock_page(hpage);
 	}
 
+	/*
+	 * Check for pages which are in the process of being freed.  Without
+	 * page_mapping() set, hugetlbfs specific move page routine will not
+	 * be called and we could leak usage counts for subpools.
+	 */
+	if (page_private(hpage) && !page_mapping(hpage)) {
+		rc = -EBUSY;
+		goto out_unlock;
+	}
+
 	if (PageAnon(hpage))
 		anon_vma = page_get_anon_vma(hpage);
 
@@ -1086,6 +1096,7 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 		put_new_page = NULL;
 	}
 
+out_unlock:
 	unlock_page(hpage);
 out:
 	if (rc != -EAGAIN)
