
Commit fc745ff

ryncsn authored and akpm00 committed
mm/shmem: fix THP allocation and fallback loop
The order check and fallback loop updates the index value on every
iteration. This will cause the index to be wrongly aligned by a larger
value while the loop shrinks the order. This may result in inserting and
returning a folio of the wrong index and cause data corruption with some
userspace workloads [1].

[kasong@tencent.com: introduce a temporary variable to improve code]
Link: https://lkml.kernel.org/r/20251023065913.36925-1-ryncsn@gmail.com
Link: https://lore.kernel.org/linux-mm/CAMgjq7DqgAmj25nDUwwu1U2cSGSn8n4-Hqpgottedy0S6YYeUw@mail.gmail.com/ [1]
Link: https://lkml.kernel.org/r/20251022105719.18321-1-ryncsn@gmail.com
Link: https://lore.kernel.org/linux-mm/CAMgjq7DqgAmj25nDUwwu1U2cSGSn8n4-Hqpgottedy0S6YYeUw@mail.gmail.com/ [1]
Fixes: e7a2ab7 ("mm: shmem: add mTHP support for anonymous shmem")
Closes: https://lore.kernel.org/linux-mm/CAMgjq7DqgAmj25nDUwwu1U2cSGSn8n4-Hqpgottedy0S6YYeUw@mail.gmail.com/
Signed-off-by: Kairui Song <kasong@tencent.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Barry Song <baohua@kernel.org>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Nico Pache <npache@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
1 parent fa759cd commit fc745ff

1 file changed: mm/shmem.c (6 additions & 3 deletions)
@@ -1882,6 +1882,7 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	unsigned long suitable_orders = 0;
 	struct folio *folio = NULL;
+	pgoff_t aligned_index;
 	long pages;
 	int error, order;

@@ -1895,10 +1896,12 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
 		order = highest_order(suitable_orders);
 		while (suitable_orders) {
 			pages = 1UL << order;
-			index = round_down(index, pages);
-			folio = shmem_alloc_folio(gfp, order, info, index);
-			if (folio)
+			aligned_index = round_down(index, pages);
+			folio = shmem_alloc_folio(gfp, order, info, aligned_index);
+			if (folio) {
+				index = aligned_index;
 				goto allocated;
+			}

			if (pages == HPAGE_PMD_NR)
				count_vm_event(THP_FILE_FALLBACK);
