Releases: oneapi-src/oneDNN

v2.6

29 Mar 21:57

Performance Optimizations

  • Intel Architecture Processors
    • Improved performance for future Intel Xeon® Scalable processors (code name Sapphire Rapids). The functionality requires Linux kernel 5.16 or later.
    • Improved performance of matmul primitive for processors with Intel AVX-512 support.
  • Intel Graphics Products
    • Improved performance for future Xe Architecture graphics (code name Ponte Vecchio).
    • Improved performance for future Intel Arc graphics (code name Alchemist and DG2).
  • AArch64-based Processors
    • Improved binary primitive performance with Arm Compute Library (ACL).
    • Improved shuffle primitive performance for processors with SVE 512 support.

Functionality

  • Introduced bfloat16 destination support for int8 convolution, matmul and inner product primitives for processors with Intel AVX-512 support and for future Intel Xeon® Scalable processors (code name Sapphire Rapids).
  • Extended RNN primitive with support for AUGRU cell.
  • Added support for non-zero negative slope in ReLU post-op for batch normalization primitive.
  • Introduced support for mixed source and destination data types in softmax primitive.
  • Introduced persistent cache API. This functionality makes it possible to serialize and reuse JIT kernels; a usage sketch follows this list.
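
The sketch below illustrates the intended persistent cache flow: build a primitive once, serialize its generated kernel to a cache blob, and recreate the primitive from that blob later (for example, after reading it back from disk in another process). This is a minimal sketch assuming the C++ cache-blob interface (primitive::get_cache_blob() and the matching primitive constructor); error handling and the actual on-disk storage are omitted.

    #include <cstdint>
    #include <vector>
    #include "oneapi/dnnl/dnnl.hpp"

    int main() {
        dnnl::engine eng(dnnl::engine::kind::cpu, 0);

        // Describe a small f32 ReLU primitive (v2.x API with operation descriptors).
        dnnl::memory::desc md({8, 64}, dnnl::memory::data_type::f32,
                dnnl::memory::format_tag::ab);
        dnnl::eltwise_forward::desc d(dnnl::prop_kind::forward_inference,
                dnnl::algorithm::eltwise_relu, md, /*alpha=*/0.f, /*beta=*/0.f);
        dnnl::eltwise_forward::primitive_desc pd(d, eng);

        // First run: create the primitive and serialize its kernel.
        dnnl::eltwise_forward relu(pd);
        std::vector<uint8_t> blob = relu.get_cache_blob(); // assumed API

        // Later run (or another process): recreate the primitive from the blob
        // instead of re-generating the kernel. An application would normally
        // persist the blob to disk between these two steps.
        dnnl::eltwise_forward relu_from_blob(pd, blob); // assumed constructor
        return 0;
    }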

Usability

  • Added build-time options to manage the set of supported instruction set architectures on Intel Graphics Products. See ONEDNN_ENABLE_PRIMITIVE_GPU_ISA for more details. This feature further reduces the binary footprint.
  • Extended the build-time options ONEDNN_ENABLE_PRIMITIVE and ONEDNN_ENABLE_WORKLOAD to cover GPU implementations. This feature further reduces the binary footprint; a configuration example follows this list.
  • Reduced stack consumption in GEMM implementation.
  • Added command-line help to benchdnn.
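
For illustration, a GPU-focused configuration combining these options could look like the lines below. The option names come from this release; the specific values (workload kind, primitive list, and ISA tokens) are assumptions shown only to convey the shape of the interface, so consult the build options documentation for the accepted values.

    cmake .. \
        -DONEDNN_ENABLE_WORKLOAD=INFERENCE \
        -DONEDNN_ENABLE_PRIMITIVE="CONVOLUTION;MATMUL;REORDER" \
        -DONEDNN_ENABLE_PRIMITIVE_GPU_ISA="XEHPG;XEHPC"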

Deprecated Functionality

  • Support for SYCL 1.2.1 (aka SYCL 2017 standard) is deprecated and will be removed in future releases.

Breaking Changes

  • Removed performance optimizations for Intel Xeon Phi processors. oneDNN will continue to be functional on these processors using the Intel AVX2 code path.

Thanks to the Contributors

This release contains contributions from the project core team as well as Arthur Mitrano @aaraujom, Aslan @aslanxie, Attila T. Áfra @atafra, Damian Szwichtenberg @dszwicht, Diana Bite @diaena, Joel Dippold @jedippold, Jonathan Deakin @jondea, Jonathan Louis Kaplan @JLouisKaplan-Arm, Kentaro Kawakami @kawakami-k, Luke Ireland @LukeIreland1, Mesut Meterelliyoz @mmeterel, Nathan John Sircombe @nSircombe, Peter Caday @petercad, Tengfei Han @Tengfei09, and Thiago Macieira @thiagomacieira. We would also like to thank everyone who asked questions and reported issues.

v2.5.4

24 Mar 03:37

This is a patch release containing the following changes to v2.5.3:

  • Improved batch normalization performance for TBB and threadpool runtimes (421a2ce, 7b7b763)
  • Fixed implicit conversion from double to float in examples (866b9ac)
  • Fixed issue in int8 matmul primitive for specific shapes (035c2d4, 9a1bf19)
  • Fixed performance regression for matmul primitive with binary post op and broadcast (dcd61ef, 31dec32)
  • Fixed performance regression in binary primitive when using NHWC layout (228493c)

v2.6-rc

17 Mar 21:32
Pre-release

This is a release candidate for oneDNN v2.6. Please provide feedback and submit defect reports via GitHub issues.

Performance Optimizations

  • Intel Architecture Processors
    • Improved performance for future Intel Xeon® Scalable processors (code name Sapphire Rapids). The functionality requires Linux kernel 5.16 or later.
    • Improved performance of matmul primitive for processors with Intel AVX-512 support.
  • Intel Graphics Products
    • Improved performance for future Xe Architecture graphics (code name Ponte Vecchio).
    • Improved performance for future Intel Arc graphics (code name Alchemist and DG2).
  • AArch64-based Processors
    • Improved binary primitive performance with Arm Compute Library (ACL).
    • Improved shuffle primitive performance for processors with SVE 512 support.

Functionality

  • Extended RNN primitive with support for AUGRU cell.
  • Introduced support for mixed source and destination data types in softmax primitive.
  • Introduced persistent cache API. This functionality makes it possible to serialize and reuse JIT kernels.

Usability

  • Added build-time options to manage the set of supported instruction set architectures on Intel Graphics Products. See ONEDNN_ENABLE_PRIMITIVE_GPU_ISA for more details. This feature further reduces the binary footprint.
  • Extended the build-time options ONEDNN_ENABLE_PRIMITIVE and ONEDNN_ENABLE_WORKLOAD to cover GPU implementations. This feature further reduces the binary footprint.
  • Reduced stack consumption in GEMM implementation.
  • Added command-line help to benchdnn.

Deprecated Functionality

  • Support for SYCL 1.2.1 (aka SYCL 2017 standard) is deprecated and will be removed in future releases.

Breaking Changes

  • Removed performance optimizations for Intel Xeon Phi processors. oneDNN will continue to be functional on these processors using the Intel AVX2 code path.

Thanks to the Contributors

This release contains contributions from the project core team as well as Arthur Mitrano @aaraujom, Aslan @aslanxie, Attila T. Áfra @atafra, Damian Szwichtenberg @dszwicht, Diana Bite @diaena, Joel Dippold @jedippold, Jonathan Deakin @jondea, Jonathan Louis Kaplan @JLouisKaplan-Arm, Kentaro Kawakami @kawakami-k, Luke Ireland @LukeIreland1, Mesut Meterelliyoz @mmeterel, Nathan John Sircombe @nSircombe, Peter Caday @petercad, Tengfei Han @Tengfei09, and Thiago Macieira @thiagomacieira. We would also like to thank everyone who asked questions and reported issues.

graph-v0.4.2

11 Mar 17:25
Pre-release

This is a patch release containing the following changes to graph-v0.4.1:

  • Fixed compiled partition cache to take the CPU thread count into account (68f262a, 343246e)
  • Enabled binary add and multiply patterns (71a0cfe)
  • Fixed the MHA (multi-head attention) patterns in compiler backend and benchdnn graph (45bbcb3, caaf841)
  • Fixed the build issues for the compiler backend (62dd2ca, 738276a, 347f1a9, 2123326)

v2.5.3

05 Mar 02:20

This is a patch release containing the following changes to v2.5.2:

  • Fixed accuracy issue in GELU post-op (3ff2c3d)
  • Added ability to enable code only on non-x64 systems (ff7ae00)
  • Fixed issue in reorder primitive on non-x64 systems (5917860)
  • Fixed build issue on macOS 11 with older CMake versions (d9c8bbe)
  • Fixed assert in reorder primitive (79090bc)
  • Documentation fixes (d290758, ee7eacb, 543b8f8)
  • Fixed potential division by zero in example for binary primitive (2fffd96)
  • Fixed SIGFPE issue in reorder primitive (8c291fc)
  • Fixed potential size overflow in inner product primitive (c10f74a)
  • Added logic to reduce the number of threads (tasks spawned for threadpool) for small shapes (8f885e7, 4053989, 49ec406, 2977360)
  • Fixed SEGFAULT issue in matmul primitive (62c1170, a993d52)
  • Added bf16 support for sum post-op (3d2c37e)
  • Added fp:precise compiler flag for Intel Compiler identified as IntelLLVM (1558a4b)
  • Fixed issue in bf16 convolution primitive when fused with binary (b379fd9)
  • Fixed issue in backward depthwise convolution (d5e4122, f5cac23, eeaa19c)
  • Fixed SEGFAULT in int8 convolution with eltwise post_op (32a629f)
  • Fixed NaN issue in bf16 backward inner product (0c5e492)
  • Fixed performance regression for binary with broadcast (f79b030, 58ce3c1)

graph-v0.4.1

19 Jan 15:33
Pre-release

This is a patch release containing the following changes to graph-v0.4:

v2.5.2

14 Jan 00:54

This is a patch release containing the following changes to v2.5.1:

  • Fixed performance regression in binary primitive with broadcast (b972174, ff75122)
  • Fixed issue with SYCL device properties initialization (cabc5ca, 095f13e)
  • Fixed issue in matmul primitive with zero points (3157354)
  • Fixed segmentation fault in depthwise convolution primitive for shapes with huge spatial size for processors with Intel AVX-512 support (6834764, 1d2addc)
  • Fixed issue in forward convolution primitive for processors with Intel AVX2 support (d691137)
  • Fixed performance regression on GPUs with SYCL runtime (d8364e5)

graph-v0.4

29 Dec 20:27
Pre-release

This is a technical preview of the oneDNN Graph API based on oneDNN v2.5.

Functionality

  • Introduced bf16 inference support.
  • Introduced multi-head attention (MHA) fusion supported by oneDNN Graph compiler with optimized code generation (experimental).
  • Updated API to comply with oneDNN Graph API specification v0.9.

Known Issues and Limitations

  • Some subgraphs might not be recognized as a partition even if they match the general pattern description, due to internal implementation details.
  • The weight's opaque layout can be queried only from a compiled partition, which requires tensor shapes to be known at compilation time.
  • MHA fusion is not activated on machines without AVX-512 support, as the oneDNN Graph compiler generates AVX-512 and newer instructions.

Thanks to the Contributors

This release contains contributions from the project core teams as well as Jiong Gong, Chunyuan Wu, Sanchit Jain, Yiqiang Li, Yunfei Mao, Kiefer Kuah and others.

v2.5.1

21 Dec 00:09

This is a patch release containing the following changes to v2.5:

v2.5

09 Dec 00:12

Performance Optimizations

  • Intel Architecture Processors
    • Improved performance for future Intel Xeon Scalable processors (code name Sapphire Rapids). The functionality is now enabled by default and requires Linux kernel 5.16.
    • Improved performance of matmul primitive for processors with Intel AVX-512 support.
  • Intel Graphics Products
    • Introduced initial optimizations for future Xe Architecture graphics (code name Ponte Vecchio).
    • Improved pooling and layer normalization primitives performance.
  • AArch64-based Processors
    • Improved softmax and logsoftmax primitive performance with Arm Compute Library (ACL).

Functionality

  • Introduced support for compilers with SYCL 2020 standard support.
  • Introduced support for the ICX/ICPX and DPCPP compiler drivers distributed with the Intel oneAPI DPC++ Compiler on Windows; a configuration sketch follows this list.
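
A Windows configuration using these drivers might look roughly like the following, run from an Intel oneAPI environment. The generator choice and runtime values are illustrative assumptions rather than the only supported combination.

    cmake .. -G Ninja -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icx -DDNNL_CPU_RUNTIME=DPCPP -DDNNL_GPU_RUNTIME=DPCPP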

Usability

  • Added a compile-time option to manage the set of supported instruction set architectures on Intel64/AMD64 processors. See DNNL_ENABLE_PRIMITIVE_CPU_ISA for more details. This feature further reduces the binary footprint.
  • Added environment variables and build options with the ONEDNN prefix; a usage sketch follows this list.
  • Introduced support for QNX operating system.
  • Introduced support for RISC-V architecture.
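
As a rough sketch of how these controls are used (the ISA value, the verbose variable, and the application name below are assumptions for illustration only):

    # Build time: cap the instruction sets the CPU JIT generators may target (value is illustrative).
    cmake .. -DDNNL_ENABLE_PRIMITIVE_CPU_ISA=AVX512
    # Run time: ONEDNN-prefixed spelling of an existing control; my_app is a placeholder.
    ONEDNN_VERBOSE=1 ./my_app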

Breaking Changes

  • The Intel MKL-DNN compatibility API is removed. See the Transition from Intel MKL-DNN to oneDNN page for instructions on moving to the new API.
  • Updated minimal supported ACL version to 21.11 (was 21.08).

Deprecated Functionality

  • Support for Intel Xeon Phi processors is deprecated and will be removed in the next release.
  • Support for SYCL 1.2.1 (aka SYCL 2017 standard) is deprecated and will be removed in future releases.

Thanks to the Contributors

This release contains contributions from the project core team as well as Aaron Franke @aaronfranke, Arthur Mitrano @aaraujom, Crefeda Rodrigues @cfRod, Diana Bite @diaena, Joel Dippold @jedippold, Joe Konno @thac0, Jonathan Deakin @jondea, Luke Ireland @LukeIreland1, Mark Ryan @markdryan, Mesut Meterelliyoz @mmeterel, Michel Migdal @Michoumichmich, Nathan John Sircombe @nSircombe, Pablo Romero @pablorcum, Peter Caday @petercad, Sergey Razumovskiy @srazumov, and Tsao Zhong @CaoZhongZ. We would also like to thank everyone who asked questions and reported issues.