
ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, and XGBoost. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable, alongside graph optimizations and transforms. Learn more →
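
For example, a minimal inference call through the Python API might look like the sketch below; the model file, input name, and input shape are illustrative placeholders, not files from this repository.

    import numpy as np
    import onnxruntime as ort

    # Create an inference session; ONNX Runtime selects from the listed
    # execution providers, falling back to CPU.
    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

    # Build a dummy input keyed by the model's declared input name and run the graph.
    input_name = session.get_inputs()[0].name
    dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
    outputs = session.run(None, {input_name: dummy_input})
    print(outputs[0].shape)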

ONNX Runtime training can accelerate model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition to existing PyTorch training scripts. Learn more →
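
As a rough sketch of that one-line change, assuming the onnxruntime-training package is installed (the tiny model, optimizer, and random data below are placeholders standing in for an existing training script):

    import torch
    from onnxruntime.training import ORTModule

    # Placeholder PyTorch model; any existing torch.nn.Module is wrapped the same way.
    model = torch.nn.Sequential(
        torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10)
    )

    # The one-line addition: forward and backward now run through ONNX Runtime.
    model = ORTModule(model)

    # The rest of the training loop stays as it was.
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()
    inputs, labels = torch.randn(32, 128), torch.randint(0, 10, (32,))

    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()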

Get Started & Resources

Built-in Pipeline Status

[Build status badges: built-in Inference and Training pipelines for Windows, Linux, Mac, Android, iOS, Web, and other platforms]

Third-party Pipeline Status

[Build status badges: third-party Linux Inference and Training pipelines]

Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.

Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

License

This project is licensed under the MIT License.
