Commit f37e286

lrq-max authored and rleon committed
RDMA/core: Reduce cond_resched() frequency in __ib_umem_release
The current implementation calls cond_resched() for every SG entry in __ib_umem_release(), which adds needless overhead. This patch introduces RESCHED_LOOP_CNT_THRESHOLD (0x1000) to limit how often cond_resched() is called: the function now yields the CPU once every 4096 iterations, skipping the very first iteration, so that releasing many small umems (whose loops never reach the threshold) incurs no scheduling overhead at all.

Fixes: d056bc4 ("RDMA/core: Prevent soft lockup during large user memory region cleanup")
Signed-off-by: Li RongQing <lirongqing@baidu.com>
Link: https://patch.msgid.link/20251126025147.2627-1-lirongqing@baidu.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>
1 parent: 01dad9c · commit: f37e286

1 file changed: drivers/infiniband/core/umem.c (5 additions & 1 deletion)
@@ -45,6 +45,8 @@
 
 #include "uverbs.h"
 
+#define RESCHED_LOOP_CNT_THRESHOLD 0x1000
+
 static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int dirty)
 {
 	bool make_dirty = umem->writable && dirty;
@@ -58,7 +60,9 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
 	for_each_sgtable_sg(&umem->sgt_append.sgt, sg, i) {
 		unpin_user_page_range_dirty_lock(sg_page(sg),
 				DIV_ROUND_UP(sg->length, PAGE_SIZE), make_dirty);
-		cond_resched();
+
+		if (i && !(i % RESCHED_LOOP_CNT_THRESHOLD))
+			cond_resched();
 	}
 
 	sg_free_append_table(&umem->sgt_append);
