
Commit 91ad250

Merge tag 'wq-for-6.16' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq
Pull workqueue updates from Tejun Heo:
 "Fix statistic update race condition and a couple documentation updates"

* tag 'wq-for-6.16' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq:
  workqueue: fix typo in comment
  workqueue: Fix race condition in wq->stats incrementation
  workqueue: Better document teardown for delayed_work
2 parents: f1975e4 + 23227e7

2 files changed: 15 additions & 2 deletions


include/linux/workqueue.h

Lines changed: 1 addition & 1 deletion
@@ -480,7 +480,7 @@ void workqueue_softirq_dead(unsigned int cpu);
  * executing at most one work item for the workqueue.
  *
  * For unbound workqueues, @max_active limits the number of in-flight work items
- * for the whole system. e.g. @max_active of 16 indicates that that there can be
+ * for the whole system. e.g. @max_active of 16 indicates that there can be
  * at most 16 work items executing for the workqueue in the whole system.
  *
  * As sharing the same active counter for an unbound workqueue across multiple
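The corrected comment describes the @max_active argument of alloc_workqueue(). A minimal sketch of the case the comment mentions (the workqueue name here is illustrative, not from the source):

```c
/* An unbound workqueue where at most 16 work items may be in flight
 * system-wide, matching the @max_active semantics described above. */
struct workqueue_struct *wq = alloc_workqueue("example_wq", WQ_UNBOUND, 16);
```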

kernel/workqueue.c

Lines changed: 14 additions & 1 deletion
@@ -3241,7 +3241,6 @@ __acquires(&pool->lock)
 	 * point will only record its address.
 	 */
 	trace_workqueue_execute_end(work, worker->current_func);
-	pwq->stats[PWQ_STAT_COMPLETED]++;
 	lock_map_release(&lockdep_map);
 	if (!bh_draining)
 		lock_map_release(pwq->wq->lockdep_map);
@@ -3272,6 +3272,8 @@ __acquires(&pool->lock)
 
 	raw_spin_lock_irq(&pool->lock);
 
+	pwq->stats[PWQ_STAT_COMPLETED]++;
+
 	/*
 	 * In addition to %WQ_CPU_INTENSIVE, @worker may also have been marked
 	 * CPU intensive by wq_worker_tick() if @work hogged CPU longer than
@@ -5837,6 +5839,17 @@ static bool pwq_busy(struct pool_workqueue *pwq)
  * @wq: target workqueue
  *
  * Safely destroy a workqueue. All work currently pending will be done first.
+ *
+ * This function does NOT guarantee that non-pending work that has been
+ * submitted with queue_delayed_work() and similar functions will be done
+ * before destroying the workqueue. The fundamental problem is that, currently,
+ * the workqueue has no way of accessing non-pending delayed_work. delayed_work
+ * is only linked on the timer-side. All delayed_work must, therefore, be
+ * canceled before calling this function.
+ *
+ * TODO: It would be better if the problem described above wouldn't exist and
+ * destroy_workqueue() would cleanly cancel all pending and non-pending
+ * delayed_work.
  */
 void destroy_workqueue(struct workqueue_struct *wq)
 {
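The new comment prescribes a teardown order for users: cancel any delayed_work explicitly before calling destroy_workqueue(), since a delayed_work whose timer has not fired yet is reachable only from the timer side. A hedged sketch of that order (the names my_wq, my_dwork, my_work_fn, and the init/exit pairing are hypothetical, not from the commit):

```c
/* Hypothetical driver-style sketch of the documented teardown order. */
static struct workqueue_struct *my_wq;
static struct delayed_work my_dwork;

static void my_work_fn(struct work_struct *work)
{
	/* ... do the deferred work ... */
}

static int my_init(void)
{
	my_wq = alloc_workqueue("my_wq", 0, 0);
	if (!my_wq)
		return -ENOMEM;
	INIT_DELAYED_WORK(&my_dwork, my_work_fn);
	queue_delayed_work(my_wq, &my_dwork, HZ);
	return 0;
}

static void my_exit(void)
{
	/*
	 * destroy_workqueue() only waits for work that is already pending.
	 * A delayed_work whose timer has not fired is not yet visible to
	 * the workqueue, so cancel it explicitly first.
	 */
	cancel_delayed_work_sync(&my_dwork);
	destroy_workqueue(my_wq);
}
```

Reversing the two calls in my_exit() would risk the timer firing after the workqueue is gone, which is exactly the pitfall the added documentation warns about.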
