Commit fe3ccd2

Author: Thomas Hellström
drm/xe: Drop preempt-fences when destroying imported dma-bufs.
When imported dma-bufs are destroyed, TTM does not fully individualize the
dma-resv, but it *does* copy the fences that need to be waited for before
declaring the bo idle. So in the case where bo->resv != bo->_resv we can
still drop the preempt-fences, but we must do that on bo->_resv, which
contains the fence-pointer copy.

In the case where the copying fails, bo->_resv will typically not contain
any fence pointers at all, so there is nothing to drop. In that case, TTM
will have ensured that all fences that would have been copied are signaled,
including any remaining preempt-fences.

Fixes: dd08ebf ("drm/xe: Introduce a new DRM driver for Intel GPUs")
Fixes: fa0af72 ("drm/ttm: test private resv obj on release/destroy")
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: <stable@vger.kernel.org> # v6.16+
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Tested-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Link: https://patch.msgid.link/20251217093441.5073-1-thomas.hellstrom@linux.intel.com
(cherry picked from commit 425fe55)
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
1 parent 3767ca4 commit fe3ccd2

1 file changed: drivers/gpu/drm/xe/xe_bo.c (4 additions & 11 deletions)
@@ -1527,7 +1527,7 @@ static bool xe_ttm_bo_lock_in_destructor(struct ttm_buffer_object *ttm_bo)
 	 * always succeed here, as long as we hold the lru lock.
 	 */
 	spin_lock(&ttm_bo->bdev->lru_lock);
-	locked = dma_resv_trylock(ttm_bo->base.resv);
+	locked = dma_resv_trylock(&ttm_bo->base._resv);
 	spin_unlock(&ttm_bo->bdev->lru_lock);
 	xe_assert(xe, locked);

@@ -1547,13 +1547,6 @@ static void xe_ttm_bo_release_notify(struct ttm_buffer_object *ttm_bo)
 	bo = ttm_to_xe_bo(ttm_bo);
 	xe_assert(xe_bo_device(bo), !(bo->created && kref_read(&ttm_bo->base.refcount)));

-	/*
-	 * Corner case where TTM fails to allocate memory and this BOs resv
-	 * still points the VMs resv
-	 */
-	if (ttm_bo->base.resv != &ttm_bo->base._resv)
-		return;
-
 	if (!xe_ttm_bo_lock_in_destructor(ttm_bo))
 		return;

@@ -1563,22 +1556,22 @@ static void xe_ttm_bo_release_notify(struct ttm_buffer_object *ttm_bo)
 	 * TODO: Don't do this for external bos once we scrub them after
 	 * unbind.
 	 */
-	dma_resv_for_each_fence(&cursor, ttm_bo->base.resv,
+	dma_resv_for_each_fence(&cursor, &ttm_bo->base._resv,
 				DMA_RESV_USAGE_BOOKKEEP, fence) {
 		if (xe_fence_is_xe_preempt(fence) &&
 		    !dma_fence_is_signaled(fence)) {
 			if (!replacement)
 				replacement = dma_fence_get_stub();

-			dma_resv_replace_fences(ttm_bo->base.resv,
+			dma_resv_replace_fences(&ttm_bo->base._resv,
 						fence->context,
 						replacement,
 						DMA_RESV_USAGE_BOOKKEEP);
 		}
 	}
 	dma_fence_put(replacement);

-	dma_resv_unlock(ttm_bo->base.resv);
+	dma_resv_unlock(&ttm_bo->base._resv);
 }

 static void xe_ttm_bo_delete_mem_notify(struct ttm_buffer_object *ttm_bo)
