Commit 266ee58

Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Catalin Marinas:

 - Do not make a clean PTE dirty in pte_mkwrite()

   The Arm architecture, for backwards compatibility reasons (ARMv8.0
   before in-hardware dirty bit management - DBM), uses the PTE_RDONLY
   bit to mean !dirty while the PTE_WRITE bit means DBM enabled. The
   arm64 pte_mkwrite() simply clears the PTE_RDONLY bit, and this
   inadvertently makes the PTE pte_hw_dirty(). Most places making a PTE
   writable also invoke pte_mkdirty(), but do_swap_page() does not, and
   we end up with dirty, freshly swapped in, writeable pages.

 - Do not warn if the destination page is already MTE-tagged in
   copy_highpage()

   In the majority of cases, the destination page being copied into is
   freshly allocated without the PG_mte_tagged flag set. However, the
   folio migration may be restarted if __folio_migrate_mapping() failed,
   triggering the benign WARN_ON_ONCE().

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: mte: Do not warn if the page is already tagged in copy_highpage()
  arm64, mm: avoid always making PTE dirty in pte_mkwrite()
2 parents ab431bc + b98c94e commit 266ee58

2 files changed: 10 additions & 4 deletions

arch/arm64/include/asm/pgtable.h

Lines changed: 2 additions & 1 deletion
@@ -293,7 +293,8 @@ static inline pmd_t set_pmd_bit(pmd_t pmd, pgprot_t prot)
 static inline pte_t pte_mkwrite_novma(pte_t pte)
 {
 	pte = set_pte_bit(pte, __pgprot(PTE_WRITE));
-	pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));
+	if (pte_sw_dirty(pte))
+		pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));
 	return pte;
 }

arch/arm64/mm/copypage.c

Lines changed: 8 additions & 3 deletions
@@ -35,7 +35,7 @@ void copy_highpage(struct page *to, struct page *from)
 		    from != folio_page(src, 0))
 			return;
 
-		WARN_ON_ONCE(!folio_try_hugetlb_mte_tagging(dst));
+		folio_try_hugetlb_mte_tagging(dst);
 
 		/*
 		 * Populate tags for all subpages.
@@ -51,8 +51,13 @@ void copy_highpage(struct page *to, struct page *from)
 		}
 		folio_set_hugetlb_mte_tagged(dst);
 	} else if (page_mte_tagged(from)) {
-		/* It's a new page, shouldn't have been tagged yet */
-		WARN_ON_ONCE(!try_page_mte_tagging(to));
+		/*
+		 * Most of the time it's a new page that shouldn't have been
+		 * tagged yet. However, folio migration can end up reusing the
+		 * same page without untagging it. Ignore the warning if the
+		 * page is already tagged.
+		 */
+		try_page_mte_tagging(to);
 
 		mte_copy_page_tags(kto, kfrom);
 		set_page_mte_tagged(to);
