Commit 697a528
io_uring: fix IOPOLL with passthrough I/O
A previous commit improving IOPOLL made an incorrect assumption that
task_work isn't used with IOPOLL. This can cause crashes when doing
passthrough I/O on nvme, where queueing the completion task_work will
trample on the same memory that holds the completed list of requests.

Fix it up by shuffling the members around, so we're not sharing any
parts that end up getting used in this path.

Fixes: 3c7d76d ("io_uring: IOPOLL polling improvements")
Reported-by: Yi Zhang <yi.zhang@redhat.com>
Link: https://lore.kernel.org/linux-block/CAHj4cs_SLPj9v9w5MgfzHKy+983enPx3ZQY2kMuMJ1202DBefw@mail.gmail.com/
Tested-by: Yi Zhang <yi.zhang@redhat.com>
Cc: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
1 parent d6406c4 commit 697a528

2 files changed: 7 additions & 9 deletions

include/linux/io_uring_types.h

Lines changed: 4 additions & 7 deletions
@@ -713,13 +713,10 @@ struct io_kiocb {
 	atomic_t			refs;
 	bool				cancel_seq_set;
 
-	/*
-	 * IOPOLL doesn't use task_work, so use the ->iopoll_node list
-	 * entry to manage pending iopoll requests.
-	 */
 	union {
 		struct io_task_work	io_task_work;
-		struct list_head	iopoll_node;
+		/* For IOPOLL setup queues, with hybrid polling */
+		u64			iopoll_start;
 	};
 
 	union {
@@ -728,8 +725,8 @@ struct io_kiocb {
 		 * poll
 		 */
 		struct hlist_node	hash_node;
-		/* For IOPOLL setup queues, with hybrid polling */
-		u64			iopoll_start;
+		/* IOPOLL completion handling */
+		struct list_head	iopoll_node;
 		/* for private io_kiocb freeing */
 		struct rcu_head		rcu_head;
 	};

io_uring/rw.c

Lines changed: 3 additions & 2 deletions
Original file line numberDiff line numberDiff line change
@@ -1296,12 +1296,13 @@ static int io_uring_hybrid_poll(struct io_kiocb *req,
12961296
struct io_comp_batch *iob, unsigned int poll_flags)
12971297
{
12981298
struct io_ring_ctx *ctx = req->ctx;
1299-
u64 runtime, sleep_time;
1299+
u64 runtime, sleep_time, iopoll_start;
13001300
int ret;
13011301

1302+
iopoll_start = READ_ONCE(req->iopoll_start);
13021303
sleep_time = io_hybrid_iopoll_delay(ctx, req);
13031304
ret = io_uring_classic_poll(req, iob, poll_flags);
1304-
runtime = ktime_get_ns() - req->iopoll_start - sleep_time;
1305+
runtime = ktime_get_ns() - iopoll_start - sleep_time;
13051306

13061307
/*
13071308
* Use minimum sleep time if we're polling devices with different
