Supercharge your machine learning with ONNX Runtime, a cross-platform inference and training accelerator.
ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, and more. ONNX Runtime is compatible with a wide range of hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. Learn more →
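As a minimal sketch of what inference looks like, the snippet below loads an exported model with the ONNX Runtime Python API and runs it on random data; the file name `model.onnx` and the input shape are placeholders you would replace with your own model's values.

```python
# A minimal inference sketch; "model.onnx" and the input shape are placeholders.
import numpy as np
import onnxruntime as ort

# Create an inference session on the CPU execution provider
# (e.g. CUDAExecutionProvider could be listed first if available).
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Inspect the model's declared input so we can feed correctly shaped data.
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape, input_meta.type)

# Run the model on a random batch; replace with real preprocessed data.
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_meta.name: dummy_input})
print(outputs[0].shape)
```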
ONNX Runtime training can accelerate the model training time on multi-node NVIDIA GPUs for transformer models. Experience faster training with a simple one-line addition to your existing PyTorch scripts. Learn more →
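The sketch below illustrates what that one-line addition can look like, assuming the onnxruntime-training package is installed; the tiny MLP and random batch are toy placeholders standing in for an existing PyTorch model and data pipeline.

```python
# Sketch of wrapping an existing PyTorch model with ORTModule
# (assumes the onnxruntime-training package is installed).
import torch
from onnxruntime.training.ortmodule import ORTModule

# Any existing torch.nn.Module; this tiny MLP stands in for a real model.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)

# The one-line addition: forward and backward now run through ONNX Runtime.
model = ORTModule(model)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

# The rest of the training loop is unchanged PyTorch.
inputs = torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))
optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
optimizer.step()
print(loss.item())
```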
Sample and tutorial repositories:
- ONNX Runtime Inferencing: microsoft/onnxruntime-inference-examples
- ONNX Runtime Training: microsoft/onnxruntime-training-examples