
Commit 84d60fd

davidhildenbrand authored and torvalds committed
mm: slightly clarify KSM logic in do_swap_page()
Let's make it clearer that KSM might only have to copy a page in case we
have a page in the swapcache, not if we allocated a fresh page and
bypassed the swapcache. While at it, add a comment why this is usually
necessary and merge the two swapcache conditions.

[akpm@linux-foundation.org: fix comment, per David]
Link: https://lkml.kernel.org/r/20220131162940.210846-4-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Don Dutile <ddutile@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jann Horn <jannh@google.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Liang Zhang <zhangliang5@huawei.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Nadav Amit <nadav.amit@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1 parent d4c4709 commit 84d60fd

1 file changed

mm/memory.c: 23 additions & 15 deletions
```diff
@@ -3607,21 +3607,29 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			goto out_release;
 		}
 
-	/*
-	 * Make sure try_to_free_swap or reuse_swap_page or swapoff did not
-	 * release the swapcache from under us. The page pin, and pte_same
-	 * test below, are not enough to exclude that. Even if it is still
-	 * swapcache, we need to check that the page's swap has not changed.
-	 */
-	if (unlikely((!PageSwapCache(page) ||
-			page_private(page) != entry.val)) && swapcache)
-		goto out_page;
-
-	page = ksm_might_need_to_copy(page, vma, vmf->address);
-	if (unlikely(!page)) {
-		ret = VM_FAULT_OOM;
-		page = swapcache;
-		goto out_page;
+	if (swapcache) {
+		/*
+		 * Make sure try_to_free_swap or swapoff did not release the
+		 * swapcache from under us. The page pin, and pte_same test
+		 * below, are not enough to exclude that. Even if it is still
+		 * swapcache, we need to check that the page's swap has not
+		 * changed.
+		 */
+		if (unlikely(!PageSwapCache(page) ||
+			     page_private(page) != entry.val))
+			goto out_page;
+
+		/*
+		 * KSM sometimes has to copy on read faults, for example, if
+		 * page->index of !PageKSM() pages would be nonlinear inside the
+		 * anon VMA -- PageKSM() is lost on actual swapout.
+		 */
+		page = ksm_might_need_to_copy(page, vma, vmf->address);
+		if (unlikely(!page)) {
+			ret = VM_FAULT_OOM;
+			page = swapcache;
+			goto out_page;
+		}
 	}
 
 	cgroup_throttle_swaprate(page, GFP_KERNEL);
```
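The structural effect of the patch can be modeled outside the kernel: before the change, the swapcache-consistency check was gated on `swapcache` but the KSM copy was not; after it, both steps sit inside one `if (swapcache)` block, so a fresh page that bypassed the swapcache skips both. The sketch below is a simplified userspace approximation under stated assumptions; `struct page`, `handle_swap_fault`, and the `ksm_copy_fails` flag are hypothetical stand-ins for kernel state, not the real API.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the kernel's struct page: just the two
 * fields the swapcache-consistency check looks at. */
struct page {
	bool in_swapcache;        /* models PageSwapCache(page) */
	unsigned long swap_entry; /* models page_private(page)  */
};

enum fault_result { HANDLED, OUT_PAGE, FAULT_OOM };

/* Models the patched layout of do_swap_page(): both the consistency
 * check and the KSM copy run only when the page came from the
 * swapcache. ksm_copy_fails simulates ksm_might_need_to_copy()
 * returning NULL (allocation failure -> VM_FAULT_OOM). */
static enum fault_result handle_swap_fault(const struct page *page,
					   bool swapcache,
					   unsigned long entry,
					   bool ksm_copy_fails)
{
	if (swapcache) {
		/* Swap may have changed under us: bail out. */
		if (!page->in_swapcache || page->swap_entry != entry)
			return OUT_PAGE;
		/* KSM may need a private copy on a read fault. */
		if (ksm_copy_fails)
			return FAULT_OOM;
	}
	/* A fresh page that bypassed the swapcache skips both steps. */
	return HANDLED;
}
```

Gating the KSM copy on `swapcache` is what the commit message means by "merge the two swapcache conditions": a freshly allocated page cannot be a shared KSM page, so copying is only ever needed for pages read back through the swapcache.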
