diff --git a/src/routes/components/training-and-inference.svelte b/src/routes/components/training-and-inference.svelte
index affa40fb2e34c..dac6f94ed9ff3 100644
--- a/src/routes/components/training-and-inference.svelte
+++ b/src/routes/components/training-and-inference.svelte
@@ -6,80 +6,73 @@ import Ortweb from '../../images/undraw/image_ortweb.svelte';
-ONNX Runtime Training
+ONNX Runtime Inferencing
-ONNX Runtime can be used to accelerate large model training and enable on-device training.
+ONNX Runtime is the same tech that powers AI in Microsoft products like Office, Azure, and Bing, as well as in thousands of other projects across the world.
-Learn more about ONNX Runtime Training →
+Learn more about ONNX Runtime Inferencing →
-Large Model Training
+ONNX Runtime Web
-ORT Training can be used to accelerate training for a large number of popular models, including Hugging Face models like Llama-2-7b and curated models from the Azure AI | Machine Learning Studio model catalog.
+ONNX Runtime Web allows JavaScript developers to run and deploy machine learning models in web browsers.
-On-Device Training
+ONNX Runtime Mobile
-On-device training extends ORT Mobile inferencing to enable training on edge devices. Developers can take an inference model and train it locally on-device to provide an improved user experience for end customers.
+ONNX Runtime Mobile allows you to run model inferencing on Android and iOS mobile devices.
-ONNX Runtime Inferencing
+ONNX Runtime Training
-ONNX Runtime Inference powers machine learning models in key Microsoft products and services across Office, Azure, Bing, as well as thousands of community projects.
+ONNX Runtime enables on-device training and reduces costs for large model training.
-Learn more about ONNX Runtime Inferencing →
+Learn more about ONNX Runtime Training →
-ONNX Runtime Web
+Large Model Training
-ONNX Runtime Web allows JavaScript developers to run and deploy machine learning models in browsers.
+Accelerated training with ONNX Runtime reduces costs and improves data scientist velocity for many popular models from Hugging Face and Azure Machine Learning.
-ONNX Runtime Mobile
+On-Device Training
-ONNX Runtime Mobile allows you to run model inferencing on mobile devices.
+On-device training with ONNX Runtime lets developers take an inference model and train it locally on-device to deliver a more personalized and privacy-respecting experience for customers.
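
The "run and deploy machine learning models in web browsers" copy added above refers to the onnxruntime-web package. As context for this change, a minimal sketch of browser inference follows; the model file name ("model.onnx"), input name ("input"), and tensor shape are placeholder assumptions for illustration, not values taken from this component.

// Minimal browser-inference sketch with onnxruntime-web (TypeScript).
// Assumptions: a hypothetical model file "model.onnx" served alongside the app,
// with a single float32 input named "input" of shape [1, 3, 224, 224].
import * as ort from 'onnxruntime-web';

async function runInBrowser(): Promise<void> {
  // Fetch the model and create an inference session in the browser.
  const session = await ort.InferenceSession.create('./model.onnx');

  // Build an input tensor; the data layout and shape must match the model.
  const data = new Float32Array(1 * 3 * 224 * 224);
  const feeds = { input: new ort.Tensor('float32', data, [1, 3, 224, 224]) };

  // Run inference; results are keyed by the model's output names.
  const results = await session.run(feeds);
  console.log(results[session.outputNames[0]].data);
}

runInBrowser();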