Pull requests: uxlfoundation/oneDNN
[GPU] ngen: emul: fix s0Q*s1W multiply on XE3P. Backport
Labels: backport, third_party
#5048 opened Apr 17, 2026 by skazakov1 (Contributor)
[GPU] ngen: emul: fix s0Q*s1W multiply on XE3P
Labels: third_party
#5046 opened Apr 17, 2026 by skazakov1 (Contributor)
WIP: xe: conv: consolidate accumulator type setup
Labels: platform:gpu-intel (Codeowner: @oneapi-src/onednn-gpu-intel)
#5045 opened Apr 17, 2026 by echeresh (Contributor)
x64: conv: relax large size check for 3d f32/int8 shapes
Labels: platform:cpu-x64 (Intel64/AMD64 processors. Codeowner: @oneapi-src/onednn-cpu-x64)
#5042 opened Apr 16, 2026 by tczeszun (Contributor)
WIP: [GPU] Use f32 accumulator for f16 input in convolution
Labels: component:tests (Codeowner: @oneapi-src/onednn-arch), platform:gpu-intel
#5041 opened Apr 16, 2026 by echeresh (Contributor)
cpu: x64: matmul: enable treat_as_plain for weights format
Labels: bug (A confirmed library bug), platform:cpu-x64
#5038 opened Apr 16, 2026 by xuxinzen (Contributor)
cpu: aarch64: make jit_uni_pool SVE instantiation vector length agnostic
Labels: component:common, platform:cpu-aarch64 (Codeowner: @oneapi-src/onednn-cpu-aarch64)
#5036 opened Apr 16, 2026 by Sqvid (Contributor)
3 tasks done
Test PR. Upconvert fp8 weights to xf16 in Matmul in case of xf16 activations for 3.10
Labels: backport, component:common, component:tests, platform:cpu-x64
WIP: xe: jit: dsl: adjust type_t::is_scalar()
Labels: platform:gpu-intel
#5026 opened Apr 16, 2026 by echeresh (Contributor)
xe: gemm: use dispatch table for simple strategy parameter parsing
Labels: platform:gpu-intel
#5025 opened Apr 15, 2026 by Simonsays095 (Contributor)
cpu: aarch64: enable ACL's inner-product for BF16
Labels: platform:cpu-aarch64
#5024 opened Apr 15, 2026 by fadara01 (Contributor)
10 tasks
cpu: aarch64: make softmax SVE instantiation vector length agnostic
Labels: component:common, platform:cpu-aarch64
#5023 opened Apr 15, 2026 by Sqvid (Contributor)
2 tasks done
cpu: aarch64: shuffle: make SVE instantiation vector-length agnostic
Labels: component:common, platform:cpu-aarch64
#5021 opened Apr 15, 2026 by Sqvid (Contributor)
2 tasks done
[GPU] GEMM Acc fixup
Labels: platform:gpu-intel
#5017 opened Apr 14, 2026 by kealan-barbieri (Contributor)
2 of 4 tasks
cpu: x64: matmul: Enable int8 grouped quantization
Labels: platform:cpu-x64
#5014 opened Apr 14, 2026 by inteldimitrius (Contributor)
Draft
cpu: aarch64: enable JIT binary op and binary post-ops on ASIMD
Labels: platform:cpu-aarch64
#5008 opened Apr 13, 2026 by renato-arantes (Contributor)
3 tasks done
oneDNN v3.12 release notes
Labels: backport, documentation (A request to change/fix/improve the documentation. Codeowner: @oneapi-src/onednn-doc)
#5005 opened Apr 11, 2026 by vpirogov (Contributor)
aarch64: support for per_dim_0 scales and bf16 dst_dt in jit int8 matmul
Labels: component:common, component:tests, platform:cpu-aarch64
#4987 opened Apr 9, 2026 by michalowski-arm (Contributor)
2 tasks done
MFDNN-14690: Replace XE3P_35_10/11/UNKNOWN Core enum values with Xe3p
Labels: platform:gpu-intel, third_party
#4981 opened Apr 8, 2026 by dyoussif (Contributor)
cpu, benchdnn: add reorder to/from grouped with different dts and use grouped in matmul ref
Labels: component:tests
cpu: aarch64: implement forward lnorm in SVE
Labels: component:common, platform:cpu-aarch64
benchdnn: inputs: graph: add cases for gated mlp with gelu activation
Labels: component:graph-api (Codeowner: @oneapi-src/onednn-graph), component:tests, documentation
#4962 opened Apr 7, 2026 by TaoLv (Contributor)
ze api: add support for persistent cache
Labels: component:api (Codeowner: @oneapi-src/onednn-arch), component:common, component:tests, platform:gpu-intel, third_party
#4959 opened Apr 6, 2026 by dzarukin (Contributor)
benchdnn: parser infra touch up
Labels: component:graph-api, component:tests
#4957 opened Apr 6, 2026 by dzarukin (Contributor)