Commit c145e0b

davidhildenbrand authored and torvalds committed
mm: streamline COW logic in do_swap_page()
Currently we have a different COW logic when:

* triggering a read-fault to swapin first and then trigger a write-fault
  -> do_swap_page() + do_wp_page()
* triggering a write-fault to swapin
  -> do_swap_page() + do_wp_page() only if we fail reuse in do_swap_page()

The COW logic in do_swap_page() is different than our reuse logic in
do_wp_page(). The COW logic in do_wp_page() -- page_count() == 1 --
currently makes sure that we certainly don't have a remaining reference,
e.g., via GUP, on the target page we want to reuse: if there is any
unexpected reference, we have to copy to avoid information leaks.

As do_swap_page() behaves differently, in environments with swap enabled we
can currently have an unintended information leak from the parent to the
child, similar to the one known from CVE-2020-29374:

1. Parent writes to anonymous page
   -> Page is mapped writable and modified
2. Page is swapped out
   -> Page is unmapped and replaced by swap entry
3. fork()
   -> Swap entries are copied to child
4. Child pins page R/O
   -> Page is mapped R/O into child
5. Child unmaps page
   -> Child still holds GUP reference
6. Parent writes to page
   -> Page is reused in do_swap_page()
   -> Child can observe changes

Exchanging 2. and 3. should have the same effect.

Let's apply the same COW logic as in do_wp_page(), conditionally trying to
remove the page from the swapcache after freeing the swap entry, however,
before actually mapping our page. We can change the order now that we use
try_to_free_swap(), which doesn't care about the mapcount, instead of
reuse_swap_page().

To handle references from the LRU pagevecs, conditionally drain the local
LRU pagevecs when required, however, don't consider the page_count() when
deciding whether to drain to keep it simple for now.
Link: https://lkml.kernel.org/r/20220131162940.210846-5-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Don Dutile <ddutile@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jann Horn <jannh@google.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Liang Zhang <zhangliang5@huawei.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Nadav Amit <nadav.amit@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1 parent 84d60fd commit c145e0b

1 file changed

Lines changed: 43 additions & 12 deletions

mm/memory.c
@@ -3489,6 +3489,25 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	return 0;
 }
 
+static inline bool should_try_to_free_swap(struct page *page,
+					   struct vm_area_struct *vma,
+					   unsigned int fault_flags)
+{
+	if (!PageSwapCache(page))
+		return false;
+	if (mem_cgroup_swap_full(page) || (vma->vm_flags & VM_LOCKED) ||
+	    PageMlocked(page))
+		return true;
+	/*
+	 * If we want to map a page that's in the swapcache writable, we
+	 * have to detect via the refcount if we're really the exclusive
+	 * user. Try freeing the swapcache to get rid of the swapcache
+	 * reference only in case it's likely that we'll be the exclusive user.
+	 */
+	return (fault_flags & FAULT_FLAG_WRITE) && !PageKsm(page) &&
+		page_count(page) == 2;
+}
+
 /*
  * We enter with non-exclusive mmap_lock (to exclude vma changes,
  * but allow concurrent faults), and pte mapped but not yet locked.
@@ -3630,6 +3649,16 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			page = swapcache;
 			goto out_page;
 		}
+
+		/*
+		 * If we want to map a page that's in the swapcache writable, we
+		 * have to detect via the refcount if we're really the exclusive
+		 * owner. Try removing the extra reference from the local LRU
+		 * pagevecs if required.
+		 */
+		if ((vmf->flags & FAULT_FLAG_WRITE) && page == swapcache &&
+		    !PageKsm(page) && !PageLRU(page))
+			lru_add_drain();
 	}
 
 	cgroup_throttle_swaprate(page, GFP_KERNEL);
@@ -3648,19 +3677,25 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	}
 
 	/*
-	 * The page isn't present yet, go ahead with the fault.
-	 *
-	 * Be careful about the sequence of operations here.
-	 * To get its accounting right, reuse_swap_page() must be called
-	 * while the page is counted on swap but not yet in mapcount i.e.
-	 * before page_add_anon_rmap() and swap_free(); try_to_free_swap()
-	 * must be called after the swap_free(), or it will never succeed.
+	 * Remove the swap entry and conditionally try to free up the swapcache.
+	 * We're already holding a reference on the page but haven't mapped it
+	 * yet.
 	 */
+	swap_free(entry);
+	if (should_try_to_free_swap(page, vma, vmf->flags))
+		try_to_free_swap(page);
 
 	inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
 	dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
 	pte = mk_pte(page, vma->vm_page_prot);
-	if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page)) {
+
+	/*
+	 * Same logic as in do_wp_page(); however, optimize for fresh pages
+	 * that are certainly not shared because we just allocated them without
+	 * exposing them to the swapcache.
+	 */
+	if ((vmf->flags & FAULT_FLAG_WRITE) && !PageKsm(page) &&
+	    (page != swapcache || page_count(page) == 1)) {
 		pte = maybe_mkwrite(pte_mkdirty(pte), vma);
 		vmf->flags &= ~FAULT_FLAG_WRITE;
 		ret |= VM_FAULT_WRITE;
@@ -3686,10 +3721,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
 	arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
 
-	swap_free(entry);
-	if (mem_cgroup_swap_full(page) ||
-	    (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
-		try_to_free_swap(page);
 	unlock_page(page);
 	if (page != swapcache && swapcache) {
 		/*
