
Commit b570f37

Thomas Hellström authored and rodrigovivi committed
mm: Fix a hmm_range_fault() livelock / starvation problem
If hmm_range_fault() fails a folio_trylock() in do_swap_page() while trying to acquire the lock of a device-private folio for migration to RAM, the function will spin until it succeeds in grabbing the lock. However, if the process holding the lock depends on a work item being completed, and that work item is scheduled on the same CPU as the spinning hmm_range_fault(), the work item might be starved and we end up in a livelock / starvation situation that is never resolved.

This can happen, for example, if the process holding the device-private folio lock is stuck in migrate_device_unmap()->lru_add_drain_all(), since lru_add_drain_all() requires a short work item to be run on all online CPUs to complete.

Prerequisites for this to happen are:

a) Both zone device and system memory folios are considered in migrate_device_unmap(), so that there is a reason to call lru_add_drain_all() for a system memory folio while a folio lock is held on a zone device folio.

b) The zone device folio has an initial mapcount > 1, which causes at least one migration PTE entry insertion to be deferred to try_to_migrate(), which can happen after the call to lru_add_drain_all().

c) No preemption, or voluntary preemption only.

This all seems pretty unlikely to happen, but it is indeed hit by the "xe_exec_system_allocator" igt test.

Resolve this by waiting for the folio to be unlocked if the folio_trylock() fails in do_swap_page(). Rename migration_entry_wait_on_locked() to softleaf_entry_wait_on_locked() and update its documentation to indicate the new use case.

Future code improvements might consider moving the lru_add_drain_all() call in migrate_device_unmap() to be called *after* all pages have migration entries inserted. That would also eliminate b) above.

v2:
- Instead of a cond_resched() in hmm_range_fault(), eliminate the problem by waiting for the folio to be unlocked in do_swap_page() (Alistair Popple, Andrew Morton)
v3:
- Add a stub migration_entry_wait_on_locked() for the !CONFIG_MIGRATION case. (Kernel Test Robot)
v4:
- Rename migration_entry_wait_on_locked() to softleaf_entry_wait_on_locked() and update docs (Alistair Popple)
v5:
- Add a WARN_ON_ONCE() for the !CONFIG_MIGRATION version of softleaf_entry_wait_on_locked().
- Modify wording around function names in the commit message (Andrew Morton)

Suggested-by: Alistair Popple <apopple@nvidia.com>
Fixes: 1afaeb8 ("mm/migrate: Trylock device page in do_swap_page")
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Leon Romanovsky <leon@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: linux-mm@kvack.org
Cc: <dri-devel@lists.freedesktop.org>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: <stable@vger.kernel.org> # v6.15+
Reviewed-by: John Hubbard <jhubbard@nvidia.com> #v3
Reviewed-by: Alistair Popple <apopple@nvidia.com>
Link: https://patch.msgid.link/20260210115653.92413-1-thomas.hellstrom@linux.intel.com
(cherry picked from commit a69d1ab)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
1 parent 99f9b53 commit b570f37

5 files changed: 26 additions & 12 deletions

include/linux/migrate.h

Lines changed: 9 additions & 1 deletion

@@ -65,7 +65,7 @@ bool isolate_folio_to_list(struct folio *folio, struct list_head *list);
 
 int migrate_huge_page_move_mapping(struct address_space *mapping,
 		struct folio *dst, struct folio *src);
-void migration_entry_wait_on_locked(softleaf_t entry, spinlock_t *ptl)
+void softleaf_entry_wait_on_locked(softleaf_t entry, spinlock_t *ptl)
 	__releases(ptl);
 void folio_migrate_flags(struct folio *newfolio, struct folio *folio);
 int folio_migrate_mapping(struct address_space *mapping,
@@ -97,6 +97,14 @@ static inline int set_movable_ops(const struct movable_operations *ops, enum pag
 	return -ENOSYS;
 }
 
+static inline void softleaf_entry_wait_on_locked(softleaf_t entry, spinlock_t *ptl)
+	__releases(ptl)
+{
+	WARN_ON_ONCE(1);
+
+	spin_unlock(ptl);
+}
+
 #endif /* CONFIG_MIGRATION */
 
 #ifdef CONFIG_NUMA_BALANCING

mm/filemap.c

Lines changed: 10 additions & 5 deletions

@@ -1379,22 +1379,24 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
 
 #ifdef CONFIG_MIGRATION
 /**
- * migration_entry_wait_on_locked - Wait for a migration entry to be removed
- * @entry: migration swap entry.
+ * softleaf_entry_wait_on_locked - Wait for a migration entry or
+ * device_private entry to be removed.
+ * @entry: migration or device_private swap entry.
  * @ptl: already locked ptl. This function will drop the lock.
  *
- * Wait for a migration entry referencing the given page to be removed. This is
+ * Wait for a migration entry referencing the given page, or device_private
+ * entry referencing a device_private page to be unlocked. This is
  * equivalent to folio_put_wait_locked(folio, TASK_UNINTERRUPTIBLE) except
  * this can be called without taking a reference on the page. Instead this
- * should be called while holding the ptl for the migration entry referencing
+ * should be called while holding the ptl for @entry referencing
  * the page.
  *
  * Returns after unlocking the ptl.
  *
  * This follows the same logic as folio_wait_bit_common() so see the comments
  * there.
  */
-void migration_entry_wait_on_locked(softleaf_t entry, spinlock_t *ptl)
+void softleaf_entry_wait_on_locked(softleaf_t entry, spinlock_t *ptl)
 	__releases(ptl)
 {
 	struct wait_page_queue wait_page;
@@ -1428,6 +1430,9 @@ void migration_entry_wait_on_locked(softleaf_t entry, spinlock_t *ptl)
 	 * If a migration entry exists for the page the migration path must hold
 	 * a valid reference to the page, and it must take the ptl to remove the
 	 * migration entry. So the page is valid until the ptl is dropped.
+	 * Similarly any path attempting to drop the last reference to a
+	 * device-private page needs to grab the ptl to remove the device-private
+	 * entry.
 	 */
 	spin_unlock(ptl);
mm/memory.c

Lines changed: 2 additions & 1 deletion

@@ -4763,7 +4763,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			unlock_page(vmf->page);
 			put_page(vmf->page);
 		} else {
-			pte_unmap_unlock(vmf->pte, vmf->ptl);
+			pte_unmap(vmf->pte);
+			softleaf_entry_wait_on_locked(entry, vmf->ptl);
 		}
 	} else if (softleaf_is_hwpoison(entry)) {
 		ret = VM_FAULT_HWPOISON;

mm/migrate.c

Lines changed: 4 additions & 4 deletions

@@ -500,7 +500,7 @@ void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
 	if (!softleaf_is_migration(entry))
 		goto out;
 
-	migration_entry_wait_on_locked(entry, ptl);
+	softleaf_entry_wait_on_locked(entry, ptl);
 	return;
 out:
 	spin_unlock(ptl);
@@ -532,10 +532,10 @@ void migration_entry_wait_huge(struct vm_area_struct *vma, unsigned long addr, p
 		 * If migration entry existed, safe to release vma lock
 		 * here because the pgtable page won't be freed without the
 		 * pgtable lock released. See comment right above pgtable
-		 * lock release in migration_entry_wait_on_locked().
+		 * lock release in softleaf_entry_wait_on_locked().
 		 */
 		hugetlb_vma_unlock_read(vma);
-		migration_entry_wait_on_locked(entry, ptl);
+		softleaf_entry_wait_on_locked(entry, ptl);
 		return;
 	}
 
@@ -553,7 +553,7 @@ void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd)
 	ptl = pmd_lock(mm, pmd);
 	if (!pmd_is_migration_entry(*pmd))
 		goto unlock;
-	migration_entry_wait_on_locked(softleaf_from_pmd(*pmd), ptl);
+	softleaf_entry_wait_on_locked(softleaf_from_pmd(*pmd), ptl);
 	return;
 unlock:
 	spin_unlock(ptl);

mm/migrate_device.c

Lines changed: 1 addition & 1 deletion

@@ -176,7 +176,7 @@ static int migrate_vma_collect_huge_pmd(pmd_t *pmdp, unsigned long start,
 	}
 
 	if (softleaf_is_migration(entry)) {
-		migration_entry_wait_on_locked(entry, ptl);
+		softleaf_entry_wait_on_locked(entry, ptl);
 		spin_unlock(ptl);
 		return -EAGAIN;
 	}
