Pull requests: meta-pytorch/tritonbench
- Fix stride mismatch for addmm tensor allocation (#1060). Opened May 7, 2026 by jananisriram (Contributor). Labels: cla signed, fb-exported, meta-exported.
- C Fast Cache for Triton JIT C dispatcher (#1059). Opened May 7, 2026 by tissue3 (Contributor). Labels: cla signed, fb-exported, meta-exported.
- [ci] minimize warmup and rep with --test-only (#1058). Opened May 7, 2026 by xuzhao9 (Contributor). Labels: cla signed.
- Add sparsity, target_size, and max_attn_len arguments (#1051). Opened May 1, 2026 by xuzhao9 (Contributor). Labels: cla signed, fb-exported, meta-exported.
- Enable AutoWS matmul kernels on Hopper (#1025). Opened Apr 21, 2026 by njriasan (Contributor). Labels: cla signed, fb-exported, meta-exported.
- Add mxfp8_blackwell_attentions benchmark to TritonBench (#1012). Opened Apr 13, 2026 by njriasan (Contributor). Labels: cla signed, fb-exported, meta-exported.
- Security: Unsafe eval() when parsing input loader data (FP8 GEMM) (#1008). Opened Apr 11, 2026 by tomaioo.
- Add attn_vis metrics and widgets (#999). Opened Apr 8, 2026 by momochen (Contributor). Labels: cla signed, fb-exported, meta-exported.
- adding support for every-n test sampling (#992). Opened Apr 2, 2026 by jhou-jpg (Contributor). Labels: cla signed, fb-exported, meta-exported.
- [BE] Remove hstu submodule and install as packages (#989). Opened Apr 2, 2026 by xuzhao9 (Contributor). Labels: cla signed.
- [Flex Attention] Add a Triton version so we can test autoWS (#983). Opened Mar 30, 2026 by manman-ren (Contributor). Draft. Labels: cla signed.
- Add inductor_flex_attention_bwd operator (#940). Opened Mar 10, 2026 by OmarPavel (Contributor). Labels: cla signed, fb-exported, meta-exported.
- Add inductor_flex_attention_fwd operator (#939). Opened Mar 10, 2026 by OmarPavel (Contributor). Labels: cla signed, fb-exported, meta-exported.
- [wip] support multi-mode benchmarking (#909). Opened Mar 3, 2026 by xuzhao9 (Contributor). Labels: cla signed.
- [TLX][FA] update tritonbench version to support rescale opt (#884). Opened Feb 21, 2026 by manman-ren (Contributor). Labels: cla signed.
- Add Diode Inductor max-autotune benchmarks to gemm, addmm, bmm (#758). Opened Dec 23, 2025 by jananisriram (Contributor). Labels: cla signed, fb-exported, meta-exported.
- [DO NOT LAND] Test run for MLP bias accuracy issue (#609). Opened Oct 31, 2025 by xuzhao9 (Contributor). Labels: cla signed.
- Add backward compatibility for TensorDescriptor (#457). Opened Sep 19, 2025 by bdbowyer. Labels: cla signed.
- Add a Blackwell-specific scaled persistent + TMA template for GEMMs (#432). Opened Sep 17, 2025 by jananisriram (Contributor). Labels: cla signed, fb-exported, meta-exported.