
Commit b7880cb

Authored by Matthew Wilcox (Oracle); committed by akpm00
migrate: correct lock ordering for hugetlb file folios
Syzbot has found a deadlock (analyzed by Lance Yang):

1) Task (5749): Holds folio_lock, then tries to acquire i_mmap_rwsem (read lock).
2) Task (5754): Holds i_mmap_rwsem (write lock), then tries to acquire folio_lock.

migrate_pages()
 -> migrate_hugetlbs()
  -> unmap_and_move_huge_page()		<- Takes folio_lock!
   -> remove_migration_ptes()
    -> __rmap_walk_file()
     -> i_mmap_lock_read()		<- Waits for i_mmap_rwsem (read lock)!

hugetlbfs_fallocate()
 -> hugetlbfs_punch_hole()		<- Takes i_mmap_rwsem (write lock)!
  -> hugetlbfs_zero_partial_page()
   -> filemap_lock_hugetlb_folio()
    -> filemap_lock_folio()
     -> __filemap_get_folio()		<- Waits for folio_lock!

The migration path is the one taking locks in the wrong order according to the documentation at the top of mm/rmap.c, so expand the scope of the existing i_mmap_lock to cover the calls to remove_migration_ptes() too. This is (mostly) how it used to be after commit c0d0381. That lock was removed by commit 336bf30 for both file & anon hugetlb pages when it should only have been removed for anon hugetlb pages.
Link: https://lkml.kernel.org/r/20260109041345.3863089-2-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Fixes: 336bf30 ("hugetlbfs: fix anon huge page migration race")
Reported-by: syzbot+2d9c96466c978346b55f@syzkaller.appspotmail.com
Link: https://lore.kernel.org/all/68e9715a.050a0220.1186a4.000d.GAE@google.com
Debugged-by: Lance Yang <lance.yang@linux.dev>
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Acked-by: Zi Yan <ziy@nvidia.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Byungchul Park <byungchul@sk.com>
Cc: Gregory Price <gourry@gourry.net>
Cc: Jann Horn <jannh@google.com>
Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Rakie Kim <rakie.kim@sk.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Ying Huang <ying.huang@linux.alibaba.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
1 parent 90f3c12 commit b7880cb

1 file changed: mm/migrate.c (6 additions, 6 deletions)
@@ -1458,6 +1458,7 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
 	int page_was_mapped = 0;
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
+	enum ttu_flags ttu = 0;
 
 	if (folio_ref_count(src) == 1) {
 		/* page was freed from under us. So we are done. */
@@ -1498,8 +1499,6 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
 			goto put_anon;
 
 	if (folio_mapped(src)) {
-		enum ttu_flags ttu = 0;
-
 		if (!folio_test_anon(src)) {
 			/*
 			 * In shared mappings, try_to_unmap could potentially
@@ -1516,16 +1515,17 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
 
 		try_to_migrate(src, ttu);
 		page_was_mapped = 1;
-
-		if (ttu & TTU_RMAP_LOCKED)
-			i_mmap_unlock_write(mapping);
 	}
 
 	if (!folio_mapped(src))
 		rc = move_to_new_folio(dst, src, mode);
 
 	if (page_was_mapped)
-		remove_migration_ptes(src, !rc ? dst : src, 0);
+		remove_migration_ptes(src, !rc ? dst : src,
+				ttu ? RMP_LOCKED : 0);
+
+	if (ttu & TTU_RMAP_LOCKED)
+		i_mmap_unlock_write(mapping);
 
 unlock_put_anon:
 	folio_unlock(dst);
