
Commit b23bee8

ryncsn authored and gregkh committed
mm/shmem, swap: fix race of truncate and swap entry split
commit 8a1968b upstream.

The helper for shmem swap freeing does not handle the order of swap
entries correctly. It uses xa_cmpxchg_irq to erase the swap entry, but it
retrieves the entry's order beforehand with xa_get_order without lock
protection, so it may see an outdated order value if the entry is split
or otherwise changed between the xa_get_order and the xa_cmpxchg_irq.

Besides, the order could also grow larger than expected and cause
truncation to erase data beyond the end border. For example, if the
target entry and the following entries are swapped in or freed, and then
a large folio is added in their place and swapped out reusing the same
entry, the xa_cmpxchg_irq will still succeed. It's very unlikely to
happen, though.

To fix that, open code the XArray cmpxchg and put the order retrieval and
value check in the same critical section. Also, ensure the order won't
exceed the end border, and skip the entry if it crosses the border.

Skipping large swap entries that cross the end border is safe here.
Shmem truncate iterates the range twice: in the first iteration,
find_lock_entries already filtered such entries, and shmem will swap in
the entries that cross the end border and partially truncate the folio
(split the folio or at least zero part of it). So in the second loop
here, if we see a swap entry that crosses the end border, it must at
least have had its content erased already.

I observed random swapoff hangs and kernel panics when stress testing
ZSWAP with shmem. After applying this patch, all problems are gone.

Link: https://lkml.kernel.org/r/20260120-shmem-swap-fix-v3-1-3d33ebfbc057@tencent.com
Fixes: 809bc86 ("mm: shmem: support large folio swap out")
Signed-off-by: Kairui Song <kasong@tencent.com>
Reviewed-by: Nhat Pham <nphamcs@gmail.com>
Acked-by: Chris Li <chrisl@kernel.org>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
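
For readers skimming the change, the racy pattern and the fixed pattern can be condensed as below. This is an illustrative sketch only: the function names racy_free_swap and fixed_free_swap are made up for the comparison, and shmem-specific details are trimmed; the actual before/after code is in the diff that follows.

/*
 * Old pattern (racy): the entry's order is read without holding xa_lock,
 * so the entry may be split or replaced between xa_get_order() and
 * xa_cmpxchg_irq(), leaving 'order' stale.
 */
static long racy_free_swap(struct address_space *mapping,
			   pgoff_t index, void *radswap)
{
	int order = xa_get_order(&mapping->i_pages, index);	/* unlocked read */
	void *old;

	old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
	if (old != radswap)
		return 0;
	/* 'order' may describe an entry that no longer exists */
	free_swap_and_cache_nr(radix_to_swp_entry(radswap), 1 << order);
	return 1 << order;
}

/*
 * New pattern: load the entry, read its order, bounds-check it against
 * the truncation end, and erase it all inside one xa_lock_irq critical
 * section; entries crossing 'end' are skipped.
 */
static long fixed_free_swap(struct address_space *mapping,
			    pgoff_t index, pgoff_t end, void *radswap)
{
	XA_STATE(xas, &mapping->i_pages, index);
	unsigned int nr_pages = 0;
	pgoff_t base;

	xas_lock_irq(&xas);
	if (xas_load(&xas) == radswap) {
		nr_pages = 1 << xas_get_order(&xas);
		base = round_down(xas.xa_index, nr_pages);
		if (base < index || base + nr_pages - 1 > end)
			nr_pages = 0;	/* entry crosses the border: skip it */
		else
			xas_store(&xas, NULL);
	}
	xas_unlock_irq(&xas);

	if (nr_pages)
		free_swap_and_cache_nr(radix_to_swp_entry(radswap), nr_pages);
	return nr_pages;
}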
1 parent 40aed8b commit b23bee8

1 file changed

mm/shmem.c: 34 additions & 11 deletions
@@ -944,17 +944,29 @@ static void shmem_delete_from_page_cache(struct folio *folio, void *radswap)
  * being freed).
  */
 static long shmem_free_swap(struct address_space *mapping,
-			    pgoff_t index, void *radswap)
+			    pgoff_t index, pgoff_t end, void *radswap)
 {
-	int order = xa_get_order(&mapping->i_pages, index);
-	void *old;
+	XA_STATE(xas, &mapping->i_pages, index);
+	unsigned int nr_pages = 0;
+	pgoff_t base;
+	void *entry;
 
-	old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
-	if (old != radswap)
-		return 0;
-	free_swap_and_cache_nr(radix_to_swp_entry(radswap), 1 << order);
+	xas_lock_irq(&xas);
+	entry = xas_load(&xas);
+	if (entry == radswap) {
+		nr_pages = 1 << xas_get_order(&xas);
+		base = round_down(xas.xa_index, nr_pages);
+		if (base < index || base + nr_pages - 1 > end)
+			nr_pages = 0;
+		else
+			xas_store(&xas, NULL);
+	}
+	xas_unlock_irq(&xas);
+
+	if (nr_pages)
+		free_swap_and_cache_nr(radix_to_swp_entry(radswap), nr_pages);
 
-	return 1 << order;
+	return nr_pages;
 }
 
 /*
@@ -1106,8 +1118,8 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 			if (xa_is_value(folio)) {
 				if (unfalloc)
 					continue;
-				nr_swaps_freed += shmem_free_swap(mapping,
-							indices[i], folio);
+				nr_swaps_freed += shmem_free_swap(mapping, indices[i],
+							end - 1, folio);
 				continue;
 			}
 
@@ -1173,12 +1185,23 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 			folio = fbatch.folios[i];
 
 			if (xa_is_value(folio)) {
+				int order;
 				long swaps_freed;
 
 				if (unfalloc)
 					continue;
-				swaps_freed = shmem_free_swap(mapping, indices[i], folio);
+				swaps_freed = shmem_free_swap(mapping, indices[i],
+							      end - 1, folio);
 				if (!swaps_freed) {
+					/*
+					 * If found a large swap entry cross the end border,
+					 * skip it as the truncate_inode_partial_folio above
+					 * should have at least zerod its content once.
+					 */
+					order = shmem_confirm_swap(mapping, indices[i],
+								   radix_to_swp_entry(folio));
+					if (order > 0 && indices[i] + (1 << order) > end)
+						continue;
 					/* Swap was replaced by page: retry */
 					index = indices[i];
 					break;