Deep Neural Network Library (DNNL)

Note

Starting with version 1.1, the library is renamed to DNNL. Please read the Intel MKL-DNN to DNNL Transition Guide.

Note

Version 1.0 introduces changes that are incompatible with version 0.20. Please read the Version 1.0 Transition Guide.

Deep Neural Network Library (DNNL) is an open-source performance library for deep learning applications. The library includes basic building blocks for neural networks optimized for Intel Architecture Processors and Intel Processor Graphics.
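The building blocks are exposed through engines, streams, memory objects, and primitives. The following is a minimal sketch only, assuming the DNNL 1.x C++ API from dnnl.hpp; it creates a CPU engine and applies a ReLU primitive to a tensor in place:

```cpp
#include <vector>
#include "dnnl.hpp"

int main() {
    using namespace dnnl;

    // Engine and stream on the default CPU device.
    engine eng(engine::kind::cpu, 0);
    stream s(eng);

    // A 1x32x13x13 fp32 tensor in NCHW layout; the memory object
    // allocates its own buffer when no handle is provided.
    memory::desc md({1, 32, 13, 13}, memory::data_type::f32,
            memory::format_tag::nchw);
    memory src(md, eng);

    // Fill the buffer with alternating positive/negative values.
    float *p = static_cast<float *>(src.get_data_handle());
    for (size_t i = 0; i < md.get_size() / sizeof(float); ++i)
        p[i] = (i % 2) ? 1.f : -1.f;

    // ReLU forward primitive (eltwise with the relu algorithm).
    eltwise_forward::desc relu_d(prop_kind::forward_inference,
            algorithm::eltwise_relu, md, /*alpha=*/0.f);
    eltwise_forward::primitive_desc relu_pd(relu_d, eng);
    eltwise_forward relu(relu_pd);

    // Execute in place and wait for completion.
    relu.execute(s, {{DNNL_ARG_SRC, src}, {DNNL_ARG_DST, src}});
    s.wait();
    return 0;
}
```

On a CPU engine the memory handle is a plain host pointer, which is why the buffer can be filled directly; the GPU engine uses OpenCL buffers instead.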

DNNL is intended for deep learning applications and framework developers interested in improving application performance on Intel CPUs and GPUs. Deep learning practitioners should use one of the applications enabled with DNNL:

Installation

Pre-built binaries for Linux*, Windows*, and macOS* are available for download in the releases section. Package names use the following convention:

OS      | Package name
Linux   | dnnl_lnx_<version>_cpu_<cpu runtime>[_gpu_<gpu runtime>].tgz
Windows | dnnl_win_<version>_cpu_<cpu runtime>[_gpu_<gpu runtime>].zip
macOS   | dnnl_mac_<version>_cpu_<cpu runtime>.tgz

Several packages are available for each operating system to ensure interoperability with CPU or GPU runtime libraries used by the application.

Configuration | Dependency
cpu_iomp      | Intel OpenMP runtime
cpu_gomp      | GNU* OpenMP runtime
cpu_vcomp     | Microsoft Visual C OpenMP runtime
cpu_tbb       | Threading Building Blocks

The packages do not include library dependencies; these need to be resolved in the application at build time. See the System Requirements section below and the Build Options section in the developer guide for more details on CPU and GPU runtimes.

If the configuration you need is not available, you can build the library from source.
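Whichever package or source build is used, a quick sanity check is to query the library version the application actually links against. This is a hedged sketch using the C API's dnnl_version() (also reachable from C++ code, since dnnl.hpp includes the C headers):

```cpp
#include <cstdio>
#include "dnnl.h"

int main() {
    // Reports the version of the DNNL library resolved at run time,
    // which is handy when several packages are installed side by side.
    const dnnl_version_t *v = dnnl_version();
    std::printf("DNNL %d.%d.%d (commit %s)\n",
            v->major, v->minor, v->patch, v->hash);
    return 0;
}
```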

Contributing

We welcome community contributions to DNNL. If you have an idea on how to improve the library:

For additional details, see contribution guidelines.

This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the Contributor Covenant code of conduct.

Support

Please submit your questions, feature requests, and bug reports on the GitHub issues page.

You may reach out to project maintainers privately at [email protected].

WARNING

The following functionality has preview status and may change without prior notification in future releases:

License

DNNL is licensed under Apache License Version 2.0. This software includes the following third-party components:

Documentation

  • Developer guide explains the programming model, supported functionality, and implementation details of the primitives, and includes annotated examples.
  • API reference provides a comprehensive reference of the library API.

System Requirements

DNNL supports systems based on Intel 64 architecture or compatible processors.

The library is optimized for the following CPUs:

  • Intel Atom processor with Intel SSE4.1 support
  • 4th, 5th, 6th, 7th, and 8th generation Intel Core(TM) processor
  • Intel Xeon(R) processor E3, E5, and E7 family (formerly Sandy Bridge, Ivy Bridge, Haswell, and Broadwell)
  • Intel Xeon Phi(TM) processor (formerly Knights Landing and Knights Mill)
  • Intel Xeon Scalable processor (formerly Skylake and Cascade Lake)
  • future Intel Xeon Scalable processor (code name Cooper Lake)

DNNL detects the instruction set architecture (ISA) at runtime and uses just-in-time (JIT) code generation to deploy code optimized for the latest supported ISA.

The library is optimized for the following GPUs:

  • Intel HD Graphics
  • Intel UHD Graphics
  • Intel Iris Plus Graphics

Requirements for Building from Source

DNNL supports systems meeting the following requirements:

  • Operating system with Intel 64 architecture support
  • C++ compiler with C++11 standard support
  • CMake 2.8.11 or later
  • Doxygen 1.8.5 or later to build documentation

Configurations of the CPU and GPU engines may introduce additional build-time dependencies.

CPU Engine

Intel Architecture Processors and compatible devices are supported by the DNNL CPU engine. The CPU engine is built by default and cannot be disabled at build time. The engine can be configured to use the OpenMP or TBB threading runtime. The following additional requirements apply:

Some implementations rely on OpenMP 4.0 SIMD extensions, and we recommend using the Intel C++ Compiler for the best performance results.

GPU Engine

Intel Processor Graphics is supported by the DNNL GPU engine. The GPU engine is disabled in the default build configuration. The following additional requirements apply when the GPU engine is enabled:

  • OpenCL* runtime library (OpenCL version 1.2 or later)
  • OpenCL driver (with kernel language support for OpenCL C 2.0 or later) with Intel subgroups extension support
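
Since the GPU engine is optional, applications typically probe for a usable GPU device at run time and fall back to the CPU. A hedged sketch using engine::get_count() from the C++ API:

```cpp
#include <iostream>
#include "dnnl.hpp"

int main() {
    using namespace dnnl;

    // Returns 0 when the library was built without the GPU engine
    // or when no suitable OpenCL device is present.
    bool have_gpu = engine::get_count(engine::kind::gpu) > 0;

    engine eng(have_gpu ? engine::kind::gpu : engine::kind::cpu, 0);
    stream s(eng);

    std::cout << "Using the "
              << (eng.get_kind() == engine::kind::gpu ? "GPU" : "CPU")
              << " engine\n";
    return 0;
}
```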

Runtime Dependencies

When DNNL is built from source, the library runtime dependencies and specific versions are defined by the build environment.

Linux

Common dependencies:

  • System C/C++ runtime (libc.so, libstdc++.so)
  • Dynamic Loading Library (libdl.so)
  • C Math Library (libm.so)
  • POSIX Threads Library (libpthread.so)

Runtime specific dependencies:

Runtime configuration | Compiler             | Dependency
DNNL_CPU_RUNTIME=OMP  | GCC                  | GNU OpenMP runtime (libgomp.so)
DNNL_CPU_RUNTIME=OMP  | Intel C/C++ Compiler | Intel OpenMP runtime (libiomp5.so)
DNNL_CPU_RUNTIME=OMP  | Clang                | Intel OpenMP runtime (libiomp5.so)
DNNL_CPU_RUNTIME=TBB  | any                  | Threading Building Blocks (libtbb.so)
DNNL_GPU_RUNTIME=OCL  | any                  | OpenCL runtime (libOpenCL.so)

Windows

Common dependencies:

  • Microsoft Visual C++ Redistributable (msvcrt.dll)

Runtime specific dependencies:

Runtime configuration | Compiler                      | Dependency
DNNL_CPU_RUNTIME=OMP  | Microsoft Visual C++ Compiler | No additional requirements
DNNL_CPU_RUNTIME=OMP  | Intel C/C++ Compiler          | Intel OpenMP runtime (iomp5.dll)
DNNL_CPU_RUNTIME=TBB  | any                           | Threading Building Blocks (tbb.dll)
DNNL_GPU_RUNTIME=OCL  | any                           | OpenCL runtime (OpenCL.dll)

macOS

Common dependencies:

  • System C/C++ runtime (libc++.dylib, libSystem.dylib)

Runtime specific dependencies:

Runtime configuration | Compiler             | Dependency
DNNL_CPU_RUNTIME=OMP  | Intel C/C++ Compiler | Intel OpenMP runtime (libiomp5.dylib)
DNNL_CPU_RUNTIME=TBB  | any                  | Threading Building Blocks (libtbb.dylib)

Validated Configurations

CPU engine was validated on RedHat* Enterprise Linux 7 with

  • GNU Compiler Collection 4.8, 5.4, 6.1, 7.2, and 8.1
  • Clang* 3.8.0
  • Intel C/C++ Compiler 17.0, 18.0, and 19.0

on Windows Server* 2012 R2 with

on macOS 10.13 (High Sierra) with

GPU engine was validated on Ubuntu* 18.04 with

on Windows Server 2019 with

Requirements for Pre-built Binaries

Linux

Common dependencies:

  • GCC 4.8 or later

Runtime specific dependencies:

Runtime configuration | Requirements
cpu_gomp              | No additional requirements
cpu_iomp              | Intel OpenMP runtime for Intel C/C++ Compiler 17.0 or later
cpu_tbb               | Threading Building Blocks 2017 or later

Windows

Common dependencies:

  • Microsoft Visual C++ Redistributable 2015 or later

Runtime specific dependencies:

Runtime configuration | Requirements
cpu_vcomp             | No additional requirements
cpu_iomp              | Intel OpenMP runtime for Intel C/C++ Compiler 17.0 or later
cpu_tbb               | Threading Building Blocks 2017 or later

macOS

Common dependencies:

  • macOS 10.13 (High Sierra) or later

Runtime specific dependencies:

Runtime configuration | Requirements
cpu_iomp              | Intel OpenMP runtime for Intel C/C++ Compiler 17.0 or later
cpu_tbb               | Threading Building Blocks 2017 or later

Legal Information
