
Commit 09c5272

Nadav Amit authored and Ingo Molnar committed
x86/mm/tlb: Do not make is_lazy dirty for no reason
Blindly writing to is_lazy when the written value is identical to the old value dirties the cacheline for no reason. Avoid such writes to prevent needless cache coherency traffic.

Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lore.kernel.org/r/20210220231712.2475218-7-namit@vmware.com
1 parent 2f4305b commit 09c5272

1 file changed

Lines changed: 2 additions & 1 deletion

File tree

arch/x86/mm/tlb.c

@@ -469,7 +469,8 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 		__flush_tlb_all();
 	}
 #endif
-	this_cpu_write(cpu_tlbstate_shared.is_lazy, false);
+	if (was_lazy)
+		this_cpu_write(cpu_tlbstate_shared.is_lazy, false);
 
 	/*
 	 * The membarrier system call requires a full memory barrier and
