
Commit ece369c

Hugh Dickins authored and torvalds committed
mm/munlock: add lru_add_drain() to fix memcg_stat_test
Mike reports that LTP memcg_stat_test usually leads to

    memcg_stat_test 3 TINFO: Test unevictable with MAP_LOCKED
    memcg_stat_test 3 TINFO: Running memcg_process --mmap-lock1 -s 135168
    memcg_stat_test 3 TINFO: Warming up pid: 3460
    memcg_stat_test 3 TINFO: Process is still here after warm up: 3460
    memcg_stat_test 3 TFAIL: unevictable is 122880, 135168 expected

but may also lead to

    memcg_stat_test 4 TINFO: Test unevictable with mlock
    memcg_stat_test 4 TINFO: Running memcg_process --mmap-lock2 -s 135168
    memcg_stat_test 4 TINFO: Warming up pid: 4271
    memcg_stat_test 4 TINFO: Process is still here after warm up: 4271
    memcg_stat_test 4 TFAIL: unevictable is 122880, 135168 expected

or both.  A wee bit flaky.

follow_page_pte() used to have an lru_add_drain() per each page mlocked, and the test came to rely on accurate stats. The pagevec to be drained is different now, but still covered by lru_add_drain(); and, never mind the test, I believe it's in everyone's interest that a bulk faulting interface like populate_vma_page_range() or faultin_vma_page_range() should drain its local pagevecs at the end, to save others sometimes needing the much more expensive lru_add_drain_all().

This does not absolutely guarantee exact stats - the mlocking task can be migrated between CPUs as it proceeds - but it's good enough and the tests pass.

Link: https://lkml.kernel.org/r/47f6d39c-a075-50cb-1cfb-26dd957a48af@google.com
Fixes: b67bf49 ("mm/munlock: delete FOLL_MLOCK and FOLL_POPULATE")
Signed-off-by: Hugh Dickins <hughd@google.com>
Reported-by: Mike Galbraith <efault@gmx.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1 parent cdd81b3 commit ece369c

1 file changed: mm/gup.c (8 additions & 2 deletions)
@@ -1404,6 +1404,7 @@ long populate_vma_page_range(struct vm_area_struct *vma,
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long nr_pages = (end - start) / PAGE_SIZE;
 	int gup_flags;
+	long ret;
 
 	VM_BUG_ON(!PAGE_ALIGNED(start));
 	VM_BUG_ON(!PAGE_ALIGNED(end));
@@ -1438,8 +1439,10 @@ long populate_vma_page_range(struct vm_area_struct *vma,
 	 * We made sure addr is within a VMA, so the following will
 	 * not result in a stack expansion that recurses back here.
 	 */
-	return __get_user_pages(mm, start, nr_pages, gup_flags,
+	ret = __get_user_pages(mm, start, nr_pages, gup_flags,
 				NULL, NULL, locked);
+	lru_add_drain();
+	return ret;
 }
 
 /*
@@ -1471,6 +1474,7 @@ long faultin_vma_page_range(struct vm_area_struct *vma, unsigned long start,
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long nr_pages = (end - start) / PAGE_SIZE;
 	int gup_flags;
+	long ret;
 
 	VM_BUG_ON(!PAGE_ALIGNED(start));
 	VM_BUG_ON(!PAGE_ALIGNED(end));
@@ -1498,8 +1502,10 @@ long faultin_vma_page_range(struct vm_area_struct *vma, unsigned long start,
 	if (check_vma_flags(vma, gup_flags))
 		return -EINVAL;
 
-	return __get_user_pages(mm, start, nr_pages, gup_flags,
+	ret = __get_user_pages(mm, start, nr_pages, gup_flags,
 				NULL, NULL, locked);
+	lru_add_drain();
+	return ret;
 }
 
 /*
