
Commit 4838127

mjkravetz authored and akpm00 committed
hugetlb: fix huge_pmd_unshare address update
The routine huge_pmd_unshare() is passed a pointer to an address associated with an area which may be unshared. If unshare is successful this address is updated to 'optimize' callers iterating over huge page addresses. For the optimization to work correctly, address should be updated to the last huge page in the unmapped/unshared area. However, in the common case where the passed address is PUD_SIZE aligned, the address is incorrectly updated to the address of the preceding huge page. That wastes CPU cycles as the unmapped/unshared range is scanned twice.

Link: https://lkml.kernel.org/r/20220524205003.126184-1-mike.kravetz@oracle.com
Fixes: 39dde65 ("shared page table for hugetlb page")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Muchun Song <songmuchun@bytedance.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
1 parent 2505a98 commit 4838127

1 file changed

Lines changed: 8 additions & 1 deletion

File tree

mm/hugetlb.c

```diff
@@ -6562,7 +6562,14 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
 	pud_clear(pud);
 	put_page(virt_to_page(ptep));
 	mm_dec_nr_pmds(mm);
-	*addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;
+	/*
+	 * This update of passed address optimizes loops sequentially
+	 * processing addresses in increments of huge page size (PMD_SIZE
+	 * in this case).  By clearing the pud, a PUD_SIZE area is unmapped.
+	 * Update address to the 'last page' in the cleared area so that
+	 * calling loop can move to first page past this area.
+	 */
+	*addr |= PUD_SIZE - PMD_SIZE;
 	return 1;
 }
```
