Commit d59ebc9

VMoola authored and gregkh committed
mm/hugetlb.c: fix UAF of vma in hugetlb fault pathway
commit 98b74bb upstream.

Syzbot reports a UAF in hugetlb_fault(). This happens because vmf_anon_prepare() could drop the per-VMA lock and allow the current VMA to be freed before hugetlb_vma_unlock_read() is called.

We can fix this by using a modified version of vmf_anon_prepare() that doesn't release the VMA lock on failure, and then releasing it ourselves after hugetlb_vma_unlock_read().

Link: https://lkml.kernel.org/r/20240914194243.245-2-vishal.moola@gmail.com
Fixes: 9acad7b ("hugetlb: use vmf_anon_prepare() instead of anon_vma_prepare()")
Reported-by: syzbot+2dab93857ee95f2eeb08@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/linux-mm/00000000000067c20b06219fbc26@google.com/
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
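The essence of the fix is an ownership rule: the helper that detects the need to retry must not drop the per-VMA lock itself, because the caller still dereferences the VMA afterward. Below is a minimal userspace sketch of that rule; all names prefixed with `mock_` and `fixed_fault_path` are illustrative stand-ins, not the kernel API, and the lock is modeled as a simple boolean rather than a real vma lock.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for the kernel's VM_FAULT_RETRY bit. */
#define VM_FAULT_RETRY 0x0400u

/* Mock per-VMA read-lock state (a real per-VMA lock is far more involved). */
static bool vma_read_locked;

static void vma_start_read(void) { vma_read_locked = true; }
static void vma_end_read(void)   { vma_read_locked = false; }

/*
 * Old helper behaviour: on failure it drops the per-VMA lock itself.
 * The caller then still touches the VMA (hugetlb_vma_unlock_read())
 * after the lock is gone -- the window for the use-after-free.
 */
static unsigned int mock_vmf_anon_prepare(bool fail)
{
	if (fail) {
		vma_end_read();  /* lock released behind the caller's back */
		return VM_FAULT_RETRY;
	}
	return 0;
}

/*
 * Patched helper behaviour (what __vmf_anon_prepare does): report the
 * failure but keep the per-VMA lock held, leaving release to the caller.
 */
static unsigned int mock___vmf_anon_prepare(bool fail)
{
	return fail ? VM_FAULT_RETRY : 0;
}

/*
 * Caller pattern from the patched hugetlb_fault()/hugetlb_no_page():
 * finish all work that needs the VMA, then drop the per-VMA lock only
 * when the helper asked for a retry.
 */
static unsigned int fixed_fault_path(bool fail)
{
	unsigned int ret;

	vma_start_read();
	ret = mock___vmf_anon_prepare(fail);
	assert(vma_read_locked);  /* VMA is still protected at this point */
	/* ... the hugetlb_vma_unlock_read(vma) step can run safely here ... */
	if (ret & VM_FAULT_RETRY)
		vma_end_read();   /* caller drops the lock, exactly once */
	return ret;
}
```

With the old helper, `vma_read_locked` is already false by the time the caller resumes, so any subsequent VMA access races with a free; with the patched ordering the lock outlives every caller-side access. (In the mock, the success path leaves the lock held, standing in for the later release done elsewhere in the real fault path.)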
1 parent a5d3b94 commit d59ebc9

1 file changed: mm/hugetlb.c

Lines changed: 18 additions & 2 deletions
@@ -6076,7 +6076,7 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
 	 * When the original hugepage is shared one, it does not have
 	 * anon_vma prepared.
 	 */
-	ret = vmf_anon_prepare(vmf);
+	ret = __vmf_anon_prepare(vmf);
 	if (unlikely(ret))
 		goto out_release_all;

@@ -6275,7 +6275,7 @@ static vm_fault_t hugetlb_no_page(struct address_space *mapping,
 	}

 	if (!(vma->vm_flags & VM_MAYSHARE)) {
-		ret = vmf_anon_prepare(vmf);
+		ret = __vmf_anon_prepare(vmf);
 		if (unlikely(ret))
 			goto out;
 	}
@@ -6406,6 +6406,14 @@ static vm_fault_t hugetlb_no_page(struct address_space *mapping,
 	folio_unlock(folio);
 out:
 	hugetlb_vma_unlock_read(vma);
+
+	/*
+	 * We must check to release the per-VMA lock. __vmf_anon_prepare() is
+	 * the only way ret can be set to VM_FAULT_RETRY.
+	 */
+	if (unlikely(ret & VM_FAULT_RETRY))
+		vma_end_read(vma);
+
 	mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 	return ret;

@@ -6627,6 +6635,14 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	}
 out_mutex:
 	hugetlb_vma_unlock_read(vma);
+
+	/*
+	 * We must check to release the per-VMA lock. __vmf_anon_prepare() in
+	 * hugetlb_wp() is the only way ret can be set to VM_FAULT_RETRY.
+	 */
+	if (unlikely(ret & VM_FAULT_RETRY))
+		vma_end_read(vma);
+
 	mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 	/*
 	 * Generally it's safe to hold refcount during waiting page lock. But
