
Commit 2030ddd

ryncsn authored and akpm00 committed
mm, shmem: prevent infinite loop on truncate race
When truncating a large swap entry, shmem_free_swap() returns 0 when the entry's index doesn't match the given index due to lookup alignment. The failure fallback path checks whether the entry crosses the end border and aborts if so, so truncate won't erase an unexpected entry or range.

But one scenario was ignored. When `index` points to the middle of a large swap entry, and the large swap entry doesn't go across the end border, find_get_entries() will return that large swap entry as the first item in the batch with `indices[0]` equal to `index`. The entry's base index will be smaller than `indices[0]`, so shmem_free_swap() will fail and return 0 due to the "base < index" check. The code will then call shmem_confirm_swap(), get the order, check if it crosses the END boundary (which it doesn't), and retry with the same index. The next iteration will find the same entry again at the same index with the same indices, leading to an infinite loop.

Fix this by retrying with a round-down index, and abort if the index is smaller than the truncate range.

Link: https://lkml.kernel.org/r/aXo6ltB5iqAKJzY8@KASONG-MC4
Fixes: 809bc86 ("mm: shmem: support large folio swap out")
Fixes: 8a1968b ("mm/shmem, swap: fix race of truncate and swap entry split")
Signed-off-by: Kairui Song <kasong@tencent.com>
Reported-by: Chris Mason <clm@meta.com>
Closes: https://lore.kernel.org/linux-mm/20260128130336.727049-1-clm@meta.com/
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Chris Li <chrisl@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
1 parent f1675db

1 file changed

mm/shmem.c

Lines changed: 14 additions & 9 deletions
@@ -1211,17 +1211,22 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 			swaps_freed = shmem_free_swap(mapping, indices[i],
 						      end - 1, folio);
 			if (!swaps_freed) {
-				/*
-				 * If found a large swap entry cross the end border,
-				 * skip it as the truncate_inode_partial_folio above
-				 * should have at least zerod its content once.
-				 */
+				pgoff_t base = indices[i];
+
 				order = shmem_confirm_swap(mapping, indices[i],
 						radix_to_swp_entry(folio));
-				if (order > 0 && indices[i] + (1 << order) > end)
-					continue;
-				/* Swap was replaced by page: retry */
-				index = indices[i];
+				/*
+				 * If found a large swap entry cross the end or start
+				 * border, skip it as the truncate_inode_partial_folio
+				 * above should have at least zerod its content once.
+				 */
+				if (order > 0) {
+					base = round_down(base, 1 << order);
+					if (base < start || base + (1 << order) > end)
+						continue;
+				}
+				/* Swap was replaced by page or extended, retry */
+				index = base;
 				break;
 			}
 			nr_swaps_freed += swaps_freed;
