
Commit 61cc453

Waiman Long authored and Peter Zijlstra committed
locking/lockdep: Avoid potential access of invalid memory in lock_class
It was found that reading /proc/lockdep after a lockdep splat may
potentially cause an access to freed memory if lockdep_unregister_key()
is called after the splat but before access to /proc/lockdep [1]. This
is due to the fact that the graph_lock() call in lockdep_unregister_key()
fails after the clearing of debug_locks by the splat process.

After lockdep_unregister_key() is called, the lock_name may be freed
but the corresponding lock_class structure still has a reference to it.
That invalid memory pointer will then be accessed when /proc/lockdep is
read by a user, and a use-after-free (UAF) error will be reported if
KASAN is enabled.

To fix this problem, lockdep_unregister_key() is now modified to always
search for a matching key irrespective of the debug_locks state and zap
the corresponding lock class if a matching one is found.

[1] https://lore.kernel.org/lkml/77f05c15-81b6-bddd-9650-80d5f23fe330@i-love.sakura.ne.jp/

Fixes: 8b39adb ("locking/lockdep: Make lockdep_unregister_key() honor 'debug_locks' again")
Reported-by: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Link: https://lkml.kernel.org/r/20220103023558.1377055-1-longman@redhat.com
1 parent e204193 commit 61cc453

1 file changed: kernel/locking/lockdep.c
15 additions, 9 deletions
@@ -6287,7 +6287,13 @@ void lockdep_reset_lock(struct lockdep_map *lock)
 		lockdep_reset_lock_reg(lock);
 }
 
-/* Unregister a dynamically allocated key. */
+/*
+ * Unregister a dynamically allocated key.
+ *
+ * Unlike lockdep_register_key(), a search is always done to find a matching
+ * key irrespective of debug_locks to avoid potential invalid access to freed
+ * memory in lock_class entry.
+ */
 void lockdep_unregister_key(struct lock_class_key *key)
 {
 	struct hlist_head *hash_head = keyhashentry(key);
@@ -6302,22 +6308,22 @@ void lockdep_unregister_key(struct lock_class_key *key)
 		return;
 
 	raw_local_irq_save(flags);
-	if (!graph_lock())
-		goto out_irq;
+	lockdep_lock();
 
-	pf = get_pending_free();
 	hlist_for_each_entry_rcu(k, hash_head, hash_entry) {
 		if (k == key) {
 			hlist_del_rcu(&k->hash_entry);
 			found = true;
 			break;
 		}
 	}
-	WARN_ON_ONCE(!found);
-	__lockdep_free_key_range(pf, key, 1);
-	call_rcu_zapped(pf);
-	graph_unlock();
-out_irq:
+	WARN_ON_ONCE(!found && debug_locks);
+	if (found) {
+		pf = get_pending_free();
+		__lockdep_free_key_range(pf, key, 1);
+		call_rcu_zapped(pf);
+	}
+	lockdep_unlock();
 	raw_local_irq_restore(flags);
 
 	/* Wait until is_dynamic_key() has finished accessing k->hash_entry. */
