Commit 1db61b0

hailan94 authored and axboe committed
blk-mq-sched: unify elevators checking for async requests

bfq and mq-deadline treat sync writes as async requests and reserve tags for sync reads only, via async_depth; kyber, however, does not currently treat sync writes as async requests. Consider the case where there are many dirty pages and a user calls fsync() to flush them: sched_tags can be exhausted by sync writes, and sync reads can get stuck waiting for a tag. Hence make kyber follow what mq-deadline and bfq do, and unify the async-request check across all elevators.

Signed-off-by: Yu Kuai <yukuai@fnnas.com>
Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
1 parent: 9fc7900

4 files changed, 8 insertions(+), 3 deletions(-)

block/bfq-iosched.c (1 addition, 1 deletion)

@@ -697,7 +697,7 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
 	unsigned int limit, act_idx;

 	/* Sync reads have full depth available */
-	if (op_is_sync(opf) && !op_is_write(opf))
+	if (blk_mq_is_sync_read(opf))
 		limit = data->q->nr_requests;
 	else
 		limit = bfqd->async_depths[!!bfqd->wr_busy_queues][op_is_sync(opf)];

block/blk-mq-sched.h (5 additions, 0 deletions)

@@ -137,4 +137,9 @@ static inline void blk_mq_set_min_shallow_depth(struct request_queue *q,
 					  depth);
 }

+static inline bool blk_mq_is_sync_read(blk_opf_t opf)
+{
+	return op_is_sync(opf) && !op_is_write(opf);
+}
+
 #endif

block/kyber-iosched.c (1 addition, 1 deletion)

@@ -556,7 +556,7 @@ static void kyber_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
 	 * We use the scheduler tags as per-hardware queue queueing tokens.
 	 * Async requests can be limited at this stage.
 	 */
-	if (!op_is_sync(opf)) {
+	if (!blk_mq_is_sync_read(opf)) {
 		struct kyber_queue_data *kqd = data->q->elevator->elevator_data;

 		data->shallow_depth = kqd->async_depth;

block/mq-deadline.c (1 addition, 1 deletion)

@@ -495,7 +495,7 @@ static void dd_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
 	struct deadline_data *dd = data->q->elevator->elevator_data;

 	/* Do not throttle synchronous reads. */
-	if (op_is_sync(opf) && !op_is_write(opf))
+	if (blk_mq_is_sync_read(opf))
 		return;

 	/*
