
Commit 4d4b6d6

yhuang-intel authored and akpm00 committed

mm,unmap: avoid flushing TLB in batch if PTE is inaccessible
0Day/LKP reported a performance regression for commit 7e12beb ("migrate_pages: batch flushing TLB"). In that commit, the TLB flushing during page migration is batched, so in try_to_migrate_one(), ptep_clear_flush() is replaced with set_tlb_ubc_flush_pending(). Further investigation found that ptep_clear_flush() avoids the TLB flush when the PTE is inaccessible, because an inaccessible PTE cannot be cached in the TLB and therefore needs no flush. The batched TLB flushing can be optimized in the same way to recover the performance. So, in this patch, we check pte_accessible() before set_tlb_ubc_flush_pending() in try_to_unmap_one() and try_to_migrate_one().

Tests show that with the patch the benchmark score of the anon-cow-rand-mt test case of the vm-scalability test suite improves by up to 2.1% on an Intel server machine, and the number of TLB flushing IPIs is reduced by up to 44.3%.

Link: https://lore.kernel.org/oe-lkp/202303192325.ecbaf968-yujie.liu@intel.com
Link: https://lore.kernel.org/oe-lkp/ab92aaddf1b52ede15e2c608696c36765a2602c1.camel@intel.com/
Link: https://lkml.kernel.org/r/20230424065408.188498-1-ying.huang@intel.com
Fixes: 7e12beb ("migrate_pages: batch flushing TLB")
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Reported-by: kernel test robot <yujie.liu@intel.com>
Reviewed-by: Nadav Amit <namit@vmware.com>
Reviewed-by: Xin Hao <xhao@linux.alibaba.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Hugh Dickins <hughd@google.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Parent: 01106e1
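
Why is it safe to skip the batched flush? pte_accessible() returns true only for PTEs that the hardware may have cached in the TLB; if it returns false, no TLB entry can exist for the mapping, so there is nothing to flush. For illustration only (this sketch is not part of the patch), the x86 implementation of this era looks roughly like the following; pte_flags(), _PAGE_PRESENT, _PAGE_PROTNONE and mm_tlb_flush_pending() are real kernel names, but the body is paraphrased from memory:

/*
 * Sketch of x86's pte_accessible() (arch/x86/include/asm/pgtable.h),
 * quoted from memory for illustration, not part of this patch.
 */
static inline bool pte_accessible(struct mm_struct *mm, pte_t a)
{
	/* A present PTE may be cached in the TLB. */
	if (pte_flags(a) & _PAGE_PRESENT)
		return true;

	/*
	 * A PROT_NONE PTE (e.g. a NUMA hinting fault candidate) may
	 * still be cached while another thread has a TLB flush pending
	 * for this mm.
	 */
	if ((pte_flags(a) & _PAGE_PROTNONE) &&
	    mm_tlb_flush_pending(mm))
		return true;

	return false;
}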

1 file changed: mm/rmap.c (8 additions, 4 deletions)
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -642,10 +642,14 @@ void try_to_unmap_flush_dirty(void)
 #define TLB_FLUSH_BATCH_PENDING_LARGE			\
 	(TLB_FLUSH_BATCH_PENDING_MASK / 2)
 
-static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
+static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval)
 {
 	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
 	int batch;
+	bool writable = pte_dirty(pteval);
+
+	if (!pte_accessible(mm, pteval))
+		return;
 
 	arch_tlbbatch_add_mm(&tlb_ubc->arch, mm);
 	tlb_ubc->flush_required = true;
@@ -729,7 +733,7 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
 	}
 }
 #else
-static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
+static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval)
 {
 }
 
@@ -1580,7 +1584,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 		 */
 		pteval = ptep_get_and_clear(mm, address, pvmw.pte);
 
-		set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+		set_tlb_ubc_flush_pending(mm, pteval);
 	} else {
 		pteval = ptep_clear_flush(vma, address, pvmw.pte);
 	}
@@ -1961,7 +1965,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 		 */
 		pteval = ptep_get_and_clear(mm, address, pvmw.pte);
 
-		set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+		set_tlb_ubc_flush_pending(mm, pteval);
 	} else {
 		pteval = ptep_clear_flush(vma, address, pvmw.pte);
 	}
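
To see the net effect, here is how set_tlb_ubc_flush_pending() reads after the patch, reconstructed from the first hunk above; the tail of the function (the pending-batch accounting that uses batch and writable) is untouched by this patch and elided:

static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval)
{
	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
	int batch;
	bool writable = pte_dirty(pteval);

	/*
	 * An inaccessible PTE cannot be in the TLB, so the batched
	 * flush can be skipped entirely.
	 */
	if (!pte_accessible(mm, pteval))
		return;

	arch_tlbbatch_add_mm(&tlb_ubc->arch, mm);
	tlb_ubc->flush_required = true;

	/* ... batch accounting and writable handling, unchanged ... */
}

Note the signature change from (mm, bool writable) to (mm, pte_t pteval): passing the whole PTE value lets the helper both derive writable via pte_dirty() and perform the pte_accessible() check, which needs more than the dirty bit.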
