Commit 0b6a6b7

apparmor: document the buffer hold, add an overflow guard
The buffer hold is a measure of contention, but it is tracked per cpu while the lock is a global resource. On some systems (e.g. real-time) there is no guarantee that the code will be on the same cpu pre- and post-spinlock acquisition, nor that the buffer will be put back into the same percpu cache when we are done with it. Because of this the hold value can move asynchronously to the buffers in the cache, meaning it is possible to underflow and, in really pathological cases, to overflow.

Reviewed-by: Georgia Garcia <georgia.garcia@canonical.com>
Signed-off-by: John Johansen <john.johansen@canonical.com>
1 parent: 640cf2f

1 file changed

Lines changed: 26 additions & 2 deletions

File tree

security/apparmor/lsm.c

@@ -2125,6 +2125,23 @@ static int param_set_mode(const char *val, const struct kernel_param *kp)
 	return 0;
 }
 
+/* arbitrary cap on how long to hold buffer because contention was
+ * encountered before trying to put it back into the global pool
+ */
+#define MAX_HOLD_COUNT 64
+
+/* the hold count is a heuristic for lock contention, and can be
+ * incremented async to actual buffer alloc/free. Because buffers
+ * may be put back onto a percpu cache different than the ->hold was
+ * added to the counts can be out of sync. Guard against underflow
+ * and overflow
+ */
+static void cache_hold_inc(unsigned int *hold)
+{
+	if (*hold < MAX_HOLD_COUNT)
+		(*hold)++;
+}
+
 char *aa_get_buffer(bool in_atomic)
 {
 	union aa_buffer *aa_buf;
@@ -2143,11 +2160,18 @@ char *aa_get_buffer(bool in_atomic)
 		put_cpu_ptr(&aa_local_buffers);
 		return &aa_buf->buffer[0];
 	}
+	/* exit percpu as spinlocks may sleep on realtime kernels */
 	put_cpu_ptr(&aa_local_buffers);
 
 	if (!spin_trylock(&aa_buffers_lock)) {
+		/* had contention on lock so increase hold count. Doesn't
+		 * really matter if recorded before or after the spin lock
+		 * as there is no way to guarantee the buffer will be put
+		 * back on the same percpu cache. Instead rely on holds
+		 * roughly averaging out over time.
+		 */
 		cache = get_cpu_ptr(&aa_local_buffers);
-		cache->hold += 1;
+		cache_hold_inc(&cache->hold);
 		put_cpu_ptr(&aa_local_buffers);
 		spin_lock(&aa_buffers_lock);
 	} else {
@@ -2213,7 +2237,7 @@ void aa_put_buffer(char *buf)
 	}
 	/* contention on global list, fallback to percpu */
 	cache = get_cpu_ptr(&aa_local_buffers);
-	cache->hold += 1;
+	cache_hold_inc(&cache->hold);
 	}
 
 	/* cache in percpu list */
