Commit e1e92bf

Charan Teja Reddy authored and Linus Torvalds committed
mm: compaction: optimize proactive compaction deferrals
Vlastimil Babka figured out that when the fragmentation score does not go down across a proactive compaction run, i.e. when no progress is made, the next wakeup for proactive compaction is deferred for 1 << COMPACT_MAX_DEFER_SHIFT (= 64) wakeups, each with an interval of HPAGE_FRAG_CHECK_INTERVAL_MSEC (= 500 ms). On each of these wakeups kcompactd merely decrements the 'proactive_defer' counter and goes back to sleep, i.e. it is woken up just to decrement a counter.

The same deferral can be achieved by simply sleeping for HPAGE_FRAG_CHECK_INTERVAL_MSEC << COMPACT_MAX_DEFER_SHIFT, which avoids the unnecessary wakeups of the kcompactd thread and also removes the need for the 'proactive_defer' counter.

[akpm@linux-foundation.org: tweak comment]
Link: https://lore.kernel.org/linux-fsdevel/88abfdb6-2c13-b5a6-5b46-742d12d1c910@suse.cz/
Link: https://lkml.kernel.org/r/1626869599-25412-1-git-send-email-charante@codeaurora.org
Signed-off-by: Charan Teja Reddy <charante@codeaurora.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Nitin Gupta <nigupta@nvidia.com>
Cc: Vinayak Menon <vinmenon@codeaurora.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1 parent 1399af7 commit e1e92bf

1 file changed

Lines changed: 19 additions & 10 deletions

File tree

mm/compaction.c

@@ -2885,7 +2885,8 @@ static int kcompactd(void *p)
 {
 	pg_data_t *pgdat = (pg_data_t *)p;
 	struct task_struct *tsk = current;
-	unsigned int proactive_defer = 0;
+	long default_timeout = msecs_to_jiffies(HPAGE_FRAG_CHECK_INTERVAL_MSEC);
+	long timeout = default_timeout;
 
 	const struct cpumask *cpumask = cpumask_of_node(pgdat->node_id);
 
@@ -2902,32 +2903,40 @@ static int kcompactd(void *p)
 
 		trace_mm_compaction_kcompactd_sleep(pgdat->node_id);
 		if (wait_event_freezable_timeout(pgdat->kcompactd_wait,
-			kcompactd_work_requested(pgdat),
-			msecs_to_jiffies(HPAGE_FRAG_CHECK_INTERVAL_MSEC))) {
+			kcompactd_work_requested(pgdat), timeout)) {
 
 			psi_memstall_enter(&pflags);
 			kcompactd_do_work(pgdat);
 			psi_memstall_leave(&pflags);
+			/*
+			 * Reset the timeout value. The defer timeout from
+			 * proactive compaction is lost here but that is fine
+			 * as the condition of the zone changing substantionally
+			 * then carrying on with the previous defer interval is
+			 * not useful.
+			 */
+			timeout = default_timeout;
 			continue;
 		}
 
-		/* kcompactd wait timeout */
+		/*
+		 * Start the proactive work with default timeout. Based
+		 * on the fragmentation score, this timeout is updated.
+		 */
+		timeout = default_timeout;
 		if (should_proactive_compact_node(pgdat)) {
 			unsigned int prev_score, score;
 
-			if (proactive_defer) {
-				proactive_defer--;
-				continue;
-			}
 			prev_score = fragmentation_score_node(pgdat);
 			proactive_compact_node(pgdat);
 			score = fragmentation_score_node(pgdat);
 			/*
 			 * Defer proactive compaction if the fragmentation
 			 * score did not go down i.e. no progress made.
 			 */
-			proactive_defer = score < prev_score ?
-					0 : 1 << COMPACT_MAX_DEFER_SHIFT;
+			if (unlikely(score >= prev_score))
+				timeout =
+					default_timeout << COMPACT_MAX_DEFER_SHIFT;
 		}
 	}
