Commit 3c6e577

Merge tag 'trace-v7.0' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace
Pull tracing updates from Steven Rostedt:

 "User visible changes:

  - Add an entry into the MAINTAINERS file for RUST versions of code

    There's now RUST code for tracing and static branches. To
    differentiate that code from the C code, add entries for the RUST
    versions (with "[RUST]" around them) so that the right maintainers
    get notified on changes.

  - New bitmask-list option added to tracefs

    When this is set, bitmasks in trace events are not displayed as hex
    numbers, but instead as lists: e.g. 0-5,7,9 instead of 0000015f.

  - New show_event_filters file in tracefs

    Instead of having to search all events/*/*/filter files for any
    active filters enabled in a trace instance, the show_event_filters
    file lists them, so only one file needs to be examined to see
    whether any filters are active.

  - New show_event_triggers file in tracefs

    Likewise, instead of having to search all events/*/*/trigger files
    for any active triggers enabled in a trace instance, the
    show_event_triggers file lists them in one place.

  - Have traceoff_on_warning disable the trace_printk buffer too

    Recently, trace_printk() recording could go to other trace
    instances instead of the top-level instance. But when
    traceoff_on_warning triggers, it does not stop the buffer that
    trace_printk() writes to, and that data can easily be lost by being
    overwritten. Have traceoff_on_warning also disable the instance
    that trace_printk() is being written to.

  - Update the hist_debug file to show what function a field uses

    When CONFIG_HIST_TRIGGERS_DEBUG is enabled, a hist_debug file
    exists for every event. It displays the internal data of any
    histogram enabled for that event, but it was lacking the function
    that is called to process each of its fields. This is very useful
    information when debugging histograms.

  - Raise the histogram stack size from 16 to 31

    Stack traces can be used as keys for event histograms. The stored
    stack was limited to just 16 entries, but the 256 bytes of storage
    space in the histogram can hold up to 31 entries (plus one for the
    count of entries). Instead of letting that space go to waste, raise
    the limit from 16 to 31, which makes the keys much more useful.

  - Fix permissions of the per-CPU buffer_size_kb file

    The per-CPU buffer_size_kb file was incorrectly made read-only in a
    previous cleanup. It should be writable.

  - Reset "last_boot_info" when the persistent buffer is cleared

    last_boot_info shows address information for a persistent ring
    buffer that contains data from a previous boot. It is cleared when
    recording starts again, but not when the buffer is reset. The data
    is useless after a reset, so clear it there too.

 Internal changes:

  - Allow tracepoint callbacks to run with preemption enabled,
    protected by SRCU instead. This required updates to the perf and
    BPF callbacks: perf disables preemption directly in its callback
    because later code expects preemption disabled, and BPF disables
    migration, as its code expects to run entirely on the same CPU.

  - Have irq_work wake up another CPU when the current CPU is isolated

    When a waiter is waiting on ring buffer data and a new event
    happens, an irq_work is triggered to wake that waiter. This is
    noisy on isolated CPUs (running NO_HZ_FULL), so trigger an IPI to a
    housekeeping CPU instead.

  - Use proper freeing of trigger_data instead of open coding it

  - Remove a redundant call of event_trigger_reset_filter()

    It was called immediately inside a function that was invoked right
    after it.

  - Workqueue cleanups

  - Report errors if tracing_update_buffers() fails

  - Make the enum-update workqueue generic for other parts of tracing

    On boot, a work queue is created to convert enum names into their
    numbers in the trace event format files. This work queue can also
    serve other parts of tracing that take some time and should not be
    called from initcall code. The blk_trace initialization takes a bit
    of time; move it to the new generic tracing work queue function.

  - Skip the kprobe boot-event creation call if no kprobes are defined
    on the cmdline

    Setting up kprobes defined on the kernel command line requires
    taking the event_mutex lock, which other tracing initialization
    code can hold for a long time. Since kprobes added on the command
    line must be set up immediately (they may trace early
    initialization code), they cannot be postponed to a work queue and
    must be set up from initcall code. If there are no kprobes on the
    cmdline, there is no reason to take the mutex and slow down boot
    waiting for the lock only to find there is nothing to do; simply
    exit early. If there are kprobes on the cmdline, then someone cares
    more about tracing than boot speed.

  - Clean up the trigger code a bit

  - Move code out of trace.c and into its own files

    trace.c is now over 11,000 lines and has become difficult to
    maintain. Start splitting it up so that related code lives in its
    own files: move the trace_printk() related code into
    trace_printk.c, the __always_inline stack functions into trace.h,
    and the pid filtering code into a new trace_pid.c file.

  - Better define the max-latency and snapshot code

    The latency tracers have a "max latency" buffer, a copy of the main
    buffer that gets swapped with it when a new maximum latency is
    detected. This preserves the trace around the highest latency: the
    max_latency buffer is never written to, only used to save the last
    maximum-latency trace. A while ago a snapshot feature was added to
    tracefs to let user space perform the same logic, and events could
    trigger a "snapshot" when one of their fields hit a new high. This
    was built on top of the max_latency buffer logic. Because snapshots
    came later, they were dependent on the latency tracers being
    enabled; in reality the latency tracers depend on the snapshot
    code, not the other way around. It was just that they came first.
    Restructure the code and the kconfigs so the latency tracers depend
    on the snapshot code instead. This actually simplifies the logic
    and allows more to be disabled when the latency tracers are not
    configured but the snapshot code is.

  - Fix "false sharing" in the hwlat tracer code

    The loop searching for hardware latency used a variable that user
    space can change for each sample. If the user changes this
    variable, it can cause bus contention, and reading the variable can
    then show up as a large latency in the trace, causing a false
    positive. Read this variable once at the start of the sample with
    READ_ONCE() into a local variable, and keep the code from sharing
    cache lines with readers.

  - Fix the function graph tracer's static-branch optimization

    When only one tracer is defined for function graph tracing, a
    static branch is used to call that tracer directly. When another
    tracer is added, the code switches to loop logic that calls all the
    registered callbacks. When dropping back to one tracer, the code
    incorrectly never re-enabled the static branch, so the optimization
    was lost.

  - And other small fixes and cleanups"

* tag 'trace-v7.0' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace: (46 commits)
  function_graph: Restore direct mode when callbacks drop to one
  tracing: Fix indentation of return statement in print_trace_fmt()
  tracing: Reset last_boot_info if ring buffer is reset
  tracing: Fix to set write permission to per-cpu buffer_size_kb
  tracing: Fix false sharing in hwlat get_sample()
  tracing: Move d_max_latency out of CONFIG_FSNOTIFY protection
  tracing: Better separate SNAPSHOT and MAX_TRACE options
  tracing: Add tracer_uses_snapshot() helper to remove #ifdefs
  tracing: Rename trace_array field max_buffer to snapshot_buffer
  tracing: Move pid filtering into trace_pid.c
  tracing: Move trace_printk functions out of trace.c and into trace_printk.c
  tracing: Use system_state in trace_printk_init_buffers()
  tracing: Have trace_printk functions use flags instead of using global_trace
  tracing: Make tracing_update_buffers() take NULL for global_trace
  tracing: Make printk_trace global for tracing system
  tracing: Move ftrace_trace_stack() out of trace.c and into trace.h
  tracing: Move __trace_buffer_{un}lock_*() functions to trace.h
  tracing: Make tracing_selftest_running global to the tracing subsystem
  tracing: Make tracing_disabled global for tracing system
  tracing: Clean up use of trace_create_maxlat_file()
  ...
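The hwlat false-sharing fix above follows a common pattern: snapshot a user-tunable shared variable into a local once per sample, so the hot loop never touches the shared cache line again. A minimal userspace sketch of that pattern, with a volatile-cast analogue of the kernel's READ_ONCE() (the variable name `sample_width_us` is illustrative, not the tracer's actual field):

```c
#include <assert.h>

/* Userspace analogue of the kernel's READ_ONCE(): force a single read. */
#define READ_ONCE_LONG(x) (*(const volatile long *)&(x))

/* Tunable that user space may rewrite at any time (stand-in for the
 * hwlat sample width exposed through tracefs; name is hypothetical). */
static long sample_width_us = 100;

/* Take one sample: read the tunable ONCE into a local at the start,
 * so the hot loop below touches only locals. Re-reading the shared
 * variable on every iteration is what caused the false sharing. */
static long take_sample(void)
{
	long width = READ_ONCE_LONG(sample_width_us);
	long elapsed = 0;

	while (elapsed < width)	/* hot loop: locals only */
		elapsed++;	/* stand-in for polling the clock */
	return elapsed;
}
```

A concurrent writer can now only perturb the single read at the top of the sample, not every iteration of the loop.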
2 parents f50822f + 53b2fae commit 3c6e577

31 files changed

Lines changed: 1388 additions & 1073 deletions

Documentation/trace/ftrace.rst

Lines changed: 25 additions & 0 deletions
@@ -684,6 +684,22 @@ of ftrace. Here is a list of some of the key files:
 
 	See events.rst for more information.
 
+  show_event_filters:
+
+	A list of events that have filters. This shows the
+	system/event pair along with the filter that is attached to
+	the event.
+
+	See events.rst for more information.
+
+  show_event_triggers:
+
+	A list of events that have triggers. This shows the
+	system/event pair along with the trigger that is attached to
+	the event.
+
+	See events.rst for more information.
+
   available_events:
 
 	A list of events that can be enabled in tracing.
@@ -1290,6 +1306,15 @@ Here are the available options:
 	This will be useful if you want to find out which hashed
 	value is corresponding to the real value in trace log.
 
+  bitmask-list
+	When enabled, bitmasks are displayed as a human-readable list of
+	ranges (e.g., 0,2-5,7) using the printk "%*pbl" format specifier.
+	When disabled (the default), bitmasks are displayed in the
+	traditional hexadecimal bitmap representation. The list format is
+	particularly useful for tracing CPU masks and other large bitmasks
+	where individual bit positions are more meaningful than their
+	hexadecimal encoding.
+
   record-cmd
 	When any event or tracer is enabled, a hook is enabled
 	in the sched_switch trace point to fill comm cache
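The `%*pbl` list form documented above can be approximated in plain C: walk the bitmap, coalesce runs of consecutive set bits into `a-b` ranges, and join them with commas. This is a userspace sketch of that rendering, not the kernel's actual `trace_seq_bitmask_list()` or bitmap-printing code:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Render the set bits of a bitmap as a list like "0-5,7,9", the way
 * the printk "%*pbl" specifier does. Userspace sketch only. */
static void bitmask_to_list(const unsigned long *maskp, int nbits,
			    char *buf, size_t len)
{
	const int bpl = 8 * sizeof(unsigned long);	/* bits per long */
	size_t pos = 0;

	buf[0] = '\0';
	for (int i = 0; i < nbits; i++) {
		if (!(maskp[i / bpl] >> (i % bpl) & 1))
			continue;
		int start = i;
		/* extend the run while the next bit is also set */
		while (i + 1 < nbits &&
		       (maskp[(i + 1) / bpl] >> ((i + 1) % bpl) & 1))
			i++;
		if (start == i)
			pos += snprintf(buf + pos, len - pos, "%s%d",
					pos ? "," : "", start);
		else
			pos += snprintf(buf + pos, len - pos, "%s%d-%d",
					pos ? "," : "", start, i);
	}
}
```

For example, the mask 0x2bf (bits 0-5, 7 and 9 set) renders as "0-5,7,9".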

MAINTAINERS

Lines changed: 15 additions & 0 deletions
@@ -25250,6 +25250,7 @@ STATIC BRANCH/CALL
 M:	Peter Zijlstra <peterz@infradead.org>
 M:	Josh Poimboeuf <jpoimboe@kernel.org>
 M:	Jason Baron <jbaron@akamai.com>
+M:	Alice Ryhl <aliceryhl@google.com>
 R:	Steven Rostedt <rostedt@goodmis.org>
 R:	Ard Biesheuvel <ardb@kernel.org>
 S:	Supported
@@ -25261,6 +25262,9 @@ F:	include/linux/jump_label*.h
 F:	include/linux/static_call*.h
 F:	kernel/jump_label.c
 F:	kernel/static_call*.c
+F:	rust/helpers/jump_label.c
+F:	rust/kernel/generated_arch_static_branch_asm.rs.S
+F:	rust/kernel/jump_label.rs
 
 STI AUDIO (ASoC) DRIVERS
 M:	Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>
@@ -26727,6 +26731,17 @@ F:	scripts/tracing/
 F:	scripts/tracepoint-update.c
 F:	tools/testing/selftests/ftrace/
 
+TRACING [RUST]
+M:	Alice Ryhl <aliceryhl@google.com>
+M:	Steven Rostedt <rostedt@goodmis.org>
+R:	Masami Hiramatsu <mhiramat@kernel.org>
+R:	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
+L:	linux-trace-kernel@vger.kernel.org
+L:	rust-for-linux@vger.kernel.org
+S:	Maintained
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
+F:	rust/kernel/tracepoint.rs
+
 TRACING MMIO ACCESSES (MMIOTRACE)
 M:	Steven Rostedt <rostedt@goodmis.org>
 M:	Masami Hiramatsu <mhiramat@kernel.org>

include/linux/trace_events.h

Lines changed: 4 additions & 4 deletions
@@ -38,7 +38,10 @@ const char *trace_print_symbols_seq_u64(struct trace_seq *p,
 					*symbol_array);
 #endif
 
-const char *trace_print_bitmask_seq(struct trace_seq *p, void *bitmask_ptr,
+struct trace_iterator;
+struct trace_event;
+
+const char *trace_print_bitmask_seq(struct trace_iterator *iter, void *bitmask_ptr,
 				    unsigned int bitmask_size);
 
 const char *trace_print_hex_seq(struct trace_seq *p,
@@ -54,9 +57,6 @@ trace_print_hex_dump_seq(struct trace_seq *p, const char *prefix_str,
 			 int prefix_type, int rowsize, int groupsize,
 			 const void *buf, size_t len, bool ascii);
 
-struct trace_iterator;
-struct trace_event;
-
 int trace_raw_output_prep(struct trace_iterator *iter,
 			  struct trace_event *event);
 extern __printf(2, 3)

include/linux/trace_seq.h

Lines changed: 11 additions & 1 deletion
@@ -114,7 +114,11 @@ extern void trace_seq_putmem_hex(struct trace_seq *s, const void *mem,
 extern int trace_seq_path(struct trace_seq *s, const struct path *path);
 
 extern void trace_seq_bitmask(struct trace_seq *s, const unsigned long *maskp,
-			      int nmaskbits);
+			      int nmaskbits);
+
+extern void trace_seq_bitmask_list(struct trace_seq *s,
+				   const unsigned long *maskp,
+				   int nmaskbits);
 
 extern int trace_seq_hex_dump(struct trace_seq *s, const char *prefix_str,
 			      int prefix_type, int rowsize, int groupsize,
@@ -137,6 +141,12 @@ trace_seq_bitmask(struct trace_seq *s, const unsigned long *maskp,
 {
 }
 
+static inline void
+trace_seq_bitmask_list(struct trace_seq *s, const unsigned long *maskp,
+		       int nmaskbits)
+{
+}
+
 static inline int trace_print_seq(struct seq_file *m, struct trace_seq *s)
 {
 	return 0;

include/linux/tracepoint.h

Lines changed: 5 additions & 4 deletions
@@ -108,14 +108,15 @@ void for_each_tracepoint_in_module(struct module *mod,
  * An alternative is to use the following for batch reclaim associated
  * with a given tracepoint:
  *
- * - tracepoint_is_faultable() == false: call_rcu()
+ * - tracepoint_is_faultable() == false: call_srcu()
  * - tracepoint_is_faultable() == true: call_rcu_tasks_trace()
  */
 #ifdef CONFIG_TRACEPOINTS
+extern struct srcu_struct tracepoint_srcu;
 static inline void tracepoint_synchronize_unregister(void)
 {
 	synchronize_rcu_tasks_trace();
-	synchronize_rcu();
+	synchronize_srcu(&tracepoint_srcu);
 }
 static inline bool tracepoint_is_faultable(struct tracepoint *tp)
 {
@@ -275,13 +276,13 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
 		return static_branch_unlikely(&__tracepoint_##name.key);\
 	}
 
-#define __DECLARE_TRACE(name, proto, args, cond, data_proto)		\
+#define __DECLARE_TRACE(name, proto, args, cond, data_proto)	\
 	__DECLARE_TRACE_COMMON(name, PARAMS(proto), PARAMS(args), PARAMS(data_proto)) \
 	static inline void __do_trace_##name(proto)			\
 	{								\
 		TRACEPOINT_CHECK(name)					\
 		if (cond) {						\
-			guard(preempt_notrace)();			\
+			guard(srcu_fast_notrace)(&tracepoint_srcu);	\
 			__DO_TRACE_CALL(name, TP_ARGS(args));		\
 		}							\
 	}								\

include/trace/perf.h

Lines changed: 2 additions & 2 deletions
@@ -71,6 +71,7 @@ perf_trace_##call(void *__data, proto)					\
 	u64 __count __attribute__((unused));				\
 	struct task_struct *__task __attribute__((unused));		\
 									\
+	guard(preempt_notrace)();					\
 	do_perf_trace_##call(__data, args);				\
 }
 
@@ -85,9 +86,8 @@ perf_trace_##call(void *__data, proto)					\
 	struct task_struct *__task __attribute__((unused));		\
 									\
 	might_fault();							\
-	preempt_disable_notrace();					\
+	guard(preempt_notrace)();					\
 	do_perf_trace_##call(__data, args);				\
-	preempt_enable_notrace();					\
 }
 
 /*

include/trace/stages/stage3_trace_output.h

Lines changed: 2 additions & 2 deletions
@@ -39,7 +39,7 @@
 		void *__bitmask = __get_dynamic_array(field);		\
 		unsigned int __bitmask_size;				\
 		__bitmask_size = __get_dynamic_array_len(field);	\
-		trace_print_bitmask_seq(p, __bitmask, __bitmask_size);	\
+		trace_print_bitmask_seq(iter, __bitmask, __bitmask_size); \
 	})
 
 #undef __get_cpumask
@@ -51,7 +51,7 @@
 		void *__bitmask = __get_rel_dynamic_array(field);	\
 		unsigned int __bitmask_size;				\
 		__bitmask_size = __get_rel_dynamic_array_len(field);	\
-		trace_print_bitmask_seq(p, __bitmask, __bitmask_size);	\
+		trace_print_bitmask_seq(iter, __bitmask, __bitmask_size); \
 	})
 
 #undef __get_rel_cpumask

include/trace/trace_events.h

Lines changed: 2 additions & 2 deletions
@@ -436,6 +436,7 @@ __DECLARE_EVENT_CLASS(call, PARAMS(proto), PARAMS(args), PARAMS(tstruct),	\
 static notrace void							\
 trace_event_raw_event_##call(void *__data, proto)			\
 {									\
+	guard(preempt_notrace)();					\
 	do_trace_event_raw_event_##call(__data, args);			\
 }
 
@@ -447,9 +448,8 @@ static notrace void							\
 trace_event_raw_event_##call(void *__data, proto)			\
 {									\
 	might_fault();							\
-	preempt_disable_notrace();					\
+	guard(preempt_notrace)();					\
 	do_trace_event_raw_event_##call(__data, args);			\
-	preempt_enable_notrace();					\
 }
 
 /*
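The guard(preempt_notrace)() conversions in perf.h and trace_events.h above rely on the kernel's scope-based cleanup helpers (include/linux/cleanup.h): the critical section ends automatically when the guard variable goes out of scope, so the open-coded preempt_enable_notrace() on every exit path disappears. The mechanism can be sketched in userspace with the compiler's cleanup attribute (GCC/Clang); the counters here are illustrative stand-ins, not the kernel's preempt machinery:

```c
#include <assert.h>

static int preempt_disabled;	/* stand-in for the preempt count */

static void fake_preempt_disable(void) { preempt_disabled++; }
static void fake_preempt_enable(int *token) { (void)token; preempt_disabled--; }

/* A scope guard: the "constructor" runs here, the cleanup function runs
 * when the variable leaves scope, mirroring how guard(preempt_notrace)()
 * expands via cleanup.h. */
#define guard_preempt()							\
	__attribute__((cleanup(fake_preempt_enable), unused))		\
	int guard_token = (fake_preempt_disable(), 0)

static int traced_handler(void)
{
	guard_preempt();		 /* disables "preemption"... */
	assert(preempt_disabled == 1);	 /* ...for the rest of this scope */
	return 42;			 /* re-enabled automatically on return */
}
```

Every return path, including early returns, runs the cleanup exactly once, which is what makes the pattern safer than paired disable/enable calls.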

kernel/rcu/srcutree.c

Lines changed: 2 additions & 1 deletion
@@ -789,7 +789,8 @@ void __srcu_check_read_flavor(struct srcu_struct *ssp, int read_flavor)
 	struct srcu_data *sdp;
 
 	/* NMI-unsafe use in NMI is a bad sign, as is multi-bit read_flavor values. */
-	WARN_ON_ONCE((read_flavor != SRCU_READ_FLAVOR_NMI) && in_nmi());
+	WARN_ON_ONCE(read_flavor != SRCU_READ_FLAVOR_NMI &&
+		     read_flavor != SRCU_READ_FLAVOR_FAST && in_nmi());
 	WARN_ON_ONCE(read_flavor & (read_flavor - 1));
 
 	sdp = raw_cpu_ptr(ssp->sda);

kernel/trace/Kconfig

Lines changed: 4 additions & 4 deletions
@@ -136,6 +136,7 @@ config BUILDTIME_MCOUNT_SORT
 
 config TRACER_MAX_TRACE
 	bool
+	select TRACER_SNAPSHOT
 
 config TRACE_CLOCK
 	bool
@@ -425,7 +426,6 @@ config IRQSOFF_TRACER
 	select GENERIC_TRACER
 	select TRACER_MAX_TRACE
 	select RING_BUFFER_ALLOW_SWAP
-	select TRACER_SNAPSHOT
 	select TRACER_SNAPSHOT_PER_CPU_SWAP
 	help
 	  This option measures the time spent in irqs-off critical
@@ -448,7 +448,6 @@ config PREEMPT_TRACER
 	select GENERIC_TRACER
 	select TRACER_MAX_TRACE
 	select RING_BUFFER_ALLOW_SWAP
-	select TRACER_SNAPSHOT
 	select TRACER_SNAPSHOT_PER_CPU_SWAP
 	select TRACE_PREEMPT_TOGGLE
 	help
@@ -470,7 +469,6 @@ config SCHED_TRACER
 	select GENERIC_TRACER
 	select CONTEXT_SWITCH_TRACER
 	select TRACER_MAX_TRACE
-	select TRACER_SNAPSHOT
 	help
 	  This tracer tracks the latency of the highest priority task
 	  to be scheduled in, starting from the point it has woken up.
@@ -620,14 +618,16 @@ config TRACE_SYSCALL_BUF_SIZE_DEFAULT
 
 config TRACER_SNAPSHOT
 	bool "Create a snapshot trace buffer"
-	select TRACER_MAX_TRACE
 	help
 	  Allow tracing users to take snapshot of the current buffer using the
 	  ftrace interface, e.g.:
 
 	      echo 1 > /sys/kernel/tracing/snapshot
 	      cat snapshot
 
+	  Note, the latency tracers select this option. To disable it,
+	  all the latency tracers need to be disabled.
+
 config TRACER_SNAPSHOT_PER_CPU_SWAP
 	bool "Allow snapshot to swap per CPU"
 	depends on TRACER_SNAPSHOT
