
Commit 55ac8bb

David Stevens authored and akpm00 committed

mm/shmem: fix race in shmem_undo_range w/THP
Split folios during the second loop of shmem_undo_range.  It's not
sufficient to only split folios when dealing with partial pages, since
it's possible for a THP to be faulted in after that point.  Calling
truncate_inode_folio in that situation can result in throwing away data
outside of the range being targeted.

[akpm@linux-foundation.org: tidy up comment layout]
Link: https://lkml.kernel.org/r/20230418084031.3439795-1-stevensd@google.com
Fixes: b9a8a41 ("truncate,shmem: Handle truncates that split large folios")
Signed-off-by: David Stevens <stevensd@chromium.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Suleiman Souhlal <suleiman@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
1 parent 43e8832 commit 55ac8bb

1 file changed

mm/shmem.c: 18 additions & 1 deletion
@@ -1080,7 +1080,24 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 			}
 			VM_BUG_ON_FOLIO(folio_test_writeback(folio),
 					folio);
-			truncate_inode_folio(mapping, folio);
+
+			if (!folio_test_large(folio)) {
+				truncate_inode_folio(mapping, folio);
+			} else if (truncate_inode_partial_folio(folio, lstart, lend)) {
+				/*
+				 * If we split a page, reset the loop so
+				 * that we pick up the new sub pages.
+				 * Otherwise the THP was entirely
+				 * dropped or the target range was
+				 * zeroed, so just continue the loop as
+				 * is.
+				 */
+				if (!folio_test_large(folio)) {
+					folio_unlock(folio);
+					index = start;
+					break;
+				}
+			}
 		}
 		folio_unlock(folio);
 	}
