
Commit 77de62a

0ne1r0s authored and Peter Zijlstra committed
perf/core: Fix refcount bug and potential UAF in perf_mmap
Syzkaller reported a "refcount_t: addition on 0; use-after-free" warning in perf_mmap(). The issue is caused by a race between a failing mmap() setup and a concurrent mmap() on a dependent event (e.g. one using output redirection).

In perf_mmap(), the ring buffer (rb) is allocated and assigned to event->rb with mmap_mutex held. The mutex is then released to perform map_range(). If map_range() fails, perf_mmap_close() is called to clean up. However, since the mutex was dropped, another thread attaching to this event (via inherited events or output redirection) can acquire the mutex, observe the still-valid event->rb pointer, and attempt to increment its reference count. If the cleanup path has already dropped the reference count to zero, this results in a use-after-free or a refcount saturation warning.

Fix this by extending the scope of mmap_mutex to cover the map_range() call. This makes the ring buffer initialization and mapping (or cleanup on failure) effectively atomic, preventing other threads from accessing a half-initialized or dying ring buffer.

Closes: https://lore.kernel.org/oe-kbuild-all/202602020208.m7KIjdzW-lkp@intel.com/
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Haocheng Yu <yuhaocheng035@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://patch.msgid.link/20260202162057.7237-1-yuhaocheng035@gmail.com
1 parent 6a8a486 commit 77de62a

1 file changed

Lines changed: 19 additions & 19 deletions

File tree

kernel/events/core.c

```diff
@@ -7465,28 +7465,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 		ret = perf_mmap_aux(vma, event, nr_pages);
 		if (ret)
 			return ret;
-	}
 
-	/*
-	 * Since pinned accounting is per vm we cannot allow fork() to copy our
-	 * vma.
-	 */
-	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
-	vma->vm_ops = &perf_mmap_vmops;
+		/*
+		 * Since pinned accounting is per vm we cannot allow fork() to copy our
+		 * vma.
+		 */
+		vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+		vma->vm_ops = &perf_mmap_vmops;
 
-	mapped = get_mapped(event, event_mapped);
-	if (mapped)
-		mapped(event, vma->vm_mm);
+		mapped = get_mapped(event, event_mapped);
+		if (mapped)
+			mapped(event, vma->vm_mm);
 
-	/*
-	 * Try to map it into the page table. On fail, invoke
-	 * perf_mmap_close() to undo the above, as the callsite expects
-	 * full cleanup in this case and therefore does not invoke
-	 * vmops::close().
-	 */
-	ret = map_range(event->rb, vma);
-	if (ret)
-		perf_mmap_close(vma);
+		/*
+		 * Try to map it into the page table. On fail, invoke
+		 * perf_mmap_close() to undo the above, as the callsite expects
+		 * full cleanup in this case and therefore does not invoke
+		 * vmops::close().
+		 */
+		ret = map_range(event->rb, vma);
+		if (ret)
+			perf_mmap_close(vma);
+	}
 
 	return ret;
 }
```
