Added requested Mac x64/arm64 instructions. #19263

Merged (11 commits, Jan 30, 2024)
4 changes: 2 additions & 2 deletions docs/execution-providers/TensorRT-ExecutionProvider.md
@@ -566,7 +566,7 @@ Please note that there is a constraint of using this explicit shape range featur

This example shows how to run the Faster R-CNN model on TensorRT execution provider.

1. Download the Faster R-CNN onnx model from the ONNX model zoo [here](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/faster-rcnn).
1. Download the Faster R-CNN onnx model from the ONNX model zoo [here](https://github.com/onnx/models/tree/main/validated/vision/object_detection_segmentation/faster-rcnn).

2. Infer shapes in the model by running the [shape inference script](https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/symbolic_shape_infer.py)
@@ -583,7 +583,7 @@ This example shows how to run the Faster R-CNN model on TensorRT execution provi

4. To test model performance, run `onnxruntime_perf_test` on your shape-inferred Faster-RCNN model

> Download sample test data with model from [model zoo](https://github.com/onnx/models/tree/main/vision/object_detection_segmentation/faster-rcnn), and put test_data_set folder next to your inferred model
> Download sample test data with the model from the [model zoo](https://github.com/onnx/models/tree/main/validated/vision/object_detection_segmentation/faster-rcnn), and put the test_data_set folder next to your inferred model

47 changes: 24 additions & 23 deletions docs/reference/compatibility.md
@@ -66,29 +66,30 @@ ONNX Runtime supports all opsets from the latest released version of the [ONNX](
* *Operators not supported in the current ONNX spec may be available as a [Contrib Operator](https://github.com/microsoft/onnxruntime/blob/main/docs/ContribOperators.md)*
* [How to add a custom operator/kernel](operators/add-custom-op.md)

| ONNX Runtime version | [ONNX version](https://github.com/onnx/onnx/blob/master/docs/Versioning.md) | ONNX opset version | ONNX ML opset version | ONNX IR version | [Windows ML Availability](https://docs.microsoft.com/en-us/windows/ai/windows-ml/release-notes/)|
|------------------------------|--------------------|--------------------|----------------------|------------------|------------------|
| 1.16 | **1.14.1** | 19 | 3 | 9 | Windows AI 1.16+ |
| 1.15 | **1.14** | 19 | 3 | 8 | Windows AI 1.15+ |
| 1.14 | **1.13** | 18 | 3 | 8 | Windows AI 1.14+ |
| 1.13 | **1.12** | 17 | 3 | 8 | Windows AI 1.13+ |
| 1.12 | **1.12** | 17 | 3 | 8 | Windows AI 1.12+ |
| 1.11 | **1.11** | 16 | 2 | 8 | Windows AI 1.11+ |
| 1.10 | **1.10** | 15 | 2 | 8 | Windows AI 1.10+ |
| 1.9 | **1.10** | 15 | 2 | 8 | Windows AI 1.9+ |
| 1.8 | **1.9** | 14 | 2 | 7 | Windows AI 1.8+ |
| 1.7 | **1.8** | 13 | 2 | 7 | Windows AI 1.7+ |
| 1.6 | **1.8** | 13 | 2 | 7 | Windows AI 1.6+ |
| 1.5 | **1.7** | 12 | 2 | 7 | Windows AI 1.5+ |
| 1.4 | **1.7** | 12 | 2 | 7 | Windows AI 1.4+ |
| 1.3 | **1.7** | 12 | 2 | 7 | Windows AI 1.3+ |
| 1.2<br/>1.1 | **1.6** | 11 | 2 | 6 | Windows AI 1.3+ |
| 1.0 | **1.6** | 11 | 2 | 6 | Windows AI 1.3+ |
| 0.5 | **1.5** | 10 | 1 | 5 | Windows AI 1.3+ |
| 0.4 | **1.5** | 10 | 1 | 5 | Windows AI 1.3+ |
| 0.3 | **1.4** | 9 | 1 | 3 | Windows 10 2004+ |
| 0.2 | **1.3** | 8 | 1 | 3 | Windows 10 1903+ |
| 0.1 | **1.3** | 8 | 1 | 3 | Windows 10 1809+ |
| ONNX Runtime version | [ONNX version](https://github.com/onnx/onnx/blob/master/docs/Versioning.md) | ONNX opset version | ONNX ML opset version | ONNX IR version |
|------------------------------|--------------------|--------------------|----------------------|------------------|
| 1.17 | **1.15** | 20 | 4 | 9 |
| 1.16 | **1.14.1** | 19 | 3 | 9 |
| 1.15 | **1.14** | 19 | 3 | 8 |
| 1.14 | **1.13** | 18 | 3 | 8 |
| 1.13 | **1.12** | 17 | 3 | 8 |
| 1.12 | **1.12** | 17 | 3 | 8 |
| 1.11 | **1.11** | 16 | 2 | 8 |
| 1.10 | **1.10** | 15 | 2 | 8 |
| 1.9 | **1.10** | 15 | 2 | 8 |
| 1.8 | **1.9** | 14 | 2 | 7 |
| 1.7 | **1.8** | 13 | 2 | 7 |
| 1.6 | **1.8** | 13 | 2 | 7 |
| 1.5 | **1.7** | 12 | 2 | 7 |
| 1.4 | **1.7** | 12 | 2 | 7 |
| 1.3 | **1.7** | 12 | 2 | 7 |
| 1.2<br/>1.1 | **1.6** | 11 | 2 | 6 |
| 1.0 | **1.6** | 11 | 2 | 6 |
| 0.5 | **1.5** | 10 | 1 | 5 |
| 0.4 | **1.5** | 10 | 1 | 5 |
| 0.3 | **1.4** | 9 | 1 | 3 |
| 0.2 | **1.3** | 8 | 1 | 3 |
| 0.1 | **1.3** | 8 | 1 | 3 |

Unless otherwise noted, please use the latest released version of the tools to convert/export the ONNX model. Most tools are backwards compatible and support multiple ONNX versions. Join this with the table above to evaluate ONNX Runtime compatibility.
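For tooling that needs to check compatibility programmatically, the table above can be encoded as a simple lookup. A minimal Python sketch (the dictionary covers only the most recent rows of the table; the function name is illustrative):

```python
# Recent rows of the compatibility table above, keyed by ONNX Runtime version.
# Values are (ONNX version, ONNX opset version, ONNX IR version).
COMPATIBILITY = {
    "1.17": ("1.15", 20, 9),
    "1.16": ("1.14.1", 19, 9),
    "1.15": ("1.14", 19, 8),
    "1.14": ("1.13", 18, 8),
}

def max_supported_opset(ort_version: str) -> int:
    """Return the highest ONNX opset a given ONNX Runtime release supports."""
    return COMPATIBILITY[ort_version][1]

print(max_supported_opset("1.17"))  # -> 20
```

For example, a model exported at opset 20 needs ONNX Runtime 1.17 or later, per the table.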

@@ -21,7 +21,7 @@ The source code for this sample is available [here](https://github.com/microsoft
3. Use any sample image as input to the sample.

4. Download the latest Squeezenet model from the ONNX Model Zoo.
This example was adapted from [ONNX Model Zoo](https://github.com/onnx/models).Download the latest version of the [Squeezenet](https://github.com/onnx/models/tree/master/vision/classification/squeezenet) model from here.
This example was adapted from [ONNX Model Zoo](https://github.com/onnx/models). Download the latest version of the [Squeezenet](https://github.com/onnx/models/tree/main/validated/vision/classification/squeezenet) model from here.

## Install ONNX Runtime for OpenVINO Execution Provider

@@ -17,7 +17,7 @@ The source code for this sample is available [here](https://github.com/microsoft
1. [The Intel<sup>®</sup> Distribution of OpenVINO toolkit](https://docs.openvinotoolkit.org/latest/index.html)

2. Download the latest tinyYOLOv2 model from the ONNX Model Zoo.
This model was adapted from [ONNX Model Zoo](https://github.com/onnx/models).Download the latest version of the [tinyYOLOv2](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/tiny-yolov2) model from here.
This model was adapted from [ONNX Model Zoo](https://github.com/onnx/models). Download the latest version of the [tinyYOLOv2](https://github.com/onnx/models/tree/main/validated/vision/object_detection_segmentation/tiny-yolov2) model from here.

## Install ONNX Runtime for OpenVINO Execution Provider

4 changes: 2 additions & 2 deletions docs/tutorials/csharp/fasterrcnn_csharp.md
@@ -28,7 +28,7 @@ The source code for this sample is available [here](https://github.com/microsoft
To run this sample, you'll need the following things:

1. Install [.NET Core 3.1](https://dotnet.microsoft.com/download/dotnet-core/3.1) or higher for your OS (Mac, Windows or Linux).
2. Download the [Faster R-CNN](https://github.com/onnx/models/blob/master/vision/object_detection_segmentation/faster-rcnn/model/FasterRCNN-10.onnx) ONNX model to your local system.
2. Download the [Faster R-CNN](https://github.com/onnx/models/blob/main/validated/vision/object_detection_segmentation/faster-rcnn/model/FasterRCNN-10.onnx) ONNX model to your local system.
3. Download [this demo image](/images/demo.jpg) to test the model. You can also use any image you like.

## Get started
@@ -68,7 +68,7 @@ image.Save(imageStream, format);

### Preprocess image

Next, we will preprocess the image according to the [requirements of the model](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/faster-rcnn#preprocessing-steps):
Next, we will preprocess the image according to the [requirements of the model](https://github.com/onnx/models/tree/main/validated/vision/object_detection_segmentation/faster-rcnn#preprocessing-steps):

```cs
var paddedHeight = (int)(Math.Ceiling(image.Height / 32f) * 32f);
```
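The padding arithmetic from the C# snippet above can be sketched in Python as well (a hypothetical helper, not part of the sample; it mirrors only the dimension rounding):

```python
import math

def padded_size(height: int, width: int, multiple: int = 32) -> tuple[int, int]:
    # Round each dimension up to a multiple of 32, matching the C# computation
    # above, as the Faster R-CNN preprocessing steps require.
    return (math.ceil(height / multiple) * multiple,
            math.ceil(width / multiple) * multiple)

print(padded_size(500, 650))  # -> (512, 672)
```

Dimensions already divisible by 32 are left unchanged, e.g. `padded_size(480, 640)` returns `(480, 640)`.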
4 changes: 2 additions & 2 deletions docs/tutorials/csharp/resnet50_csharp.md
@@ -27,7 +27,7 @@ The source code for this sample is available [here](https://github.com/microsoft
To run this sample, you'll need the following things:

1. Install [.NET Core 3.1](https://dotnet.microsoft.com/download/dotnet-core/3.1) or higher for your OS (Mac, Windows or Linux).
2. Download the [ResNet50 v2](https://github.com/onnx/models/blob/master/vision/classification/resnet/model/resnet50-v2-7.onnx) ONNX model to your local system.
2. Download the [ResNet50 v2](https://github.com/onnx/models/blob/main/validated/vision/classification/resnet/model/resnet50-v2-7.onnx) ONNX model to your local system.
3. Download [this picture of a dog](/images/dog.jpeg) to test the model. You can also use any image you like.

## Getting Started
@@ -74,7 +74,7 @@ Note, we're doing a centered crop resize to preserve aspect ratio.

### Preprocess image

Next, we will preprocess the image according to the [requirements of the model](https://github.com/onnx/models/tree/master/vision/classification/resnet#preprocessing):
Next, we will preprocess the image according to the [requirements of the model](https://github.com/onnx/models/tree/main/validated/vision/classification/resnet#preprocessing):

```cs
// We use DenseTensor for multi-dimensional access to populate the image data
```
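For comparison, the same normalization step can be sketched in Python with NumPy. The mean/std constants are the ImageNet values given in the ResNet preprocessing notes; the function name and shapes are illustrative:

```python
import numpy as np

# ImageNet normalization constants (per the ResNet preprocessing docs).
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc: np.ndarray) -> np.ndarray:
    """Scale a uint8 HWC image to [0, 1], normalize, and return an NCHW tensor."""
    x = image_hwc.astype(np.float32) / 255.0
    x = (x - MEAN) / STD
    x = np.transpose(x, (2, 0, 1))   # HWC -> CHW
    return x[np.newaxis, ...]        # add batch dimension

dummy = np.zeros((224, 224, 3), dtype=np.uint8)
print(preprocess(dummy).shape)  # -> (1, 3, 224, 224)
```

The resulting `(1, 3, 224, 224)` array has the layout the model's input tensor expects.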
2 changes: 1 addition & 1 deletion docs/tutorials/csharp/yolov3_object_detection_csharp.md
@@ -28,7 +28,7 @@ The source code for this sample is available [here](https://github.com/microsoft
3. Use any sample Image as input to the sample.

4. Download the latest YOLOv3 model from the ONNX Model Zoo.
This example was adapted from [ONNX Model Zoo](https://github.com/onnx/models).Download the latest version of the [YOLOv3](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/yolov3) model from here.
This example was adapted from [ONNX Model Zoo](https://github.com/onnx/models). Download the latest version of the [YOLOv3](https://github.com/onnx/models/tree/main/validated/vision/object_detection_segmentation/yolov3) model from here.

## Install ONNX Runtime for OpenVINO Execution Provider

2 changes: 1 addition & 1 deletion docs/tutorials/iot-edge/rasp-pi-cv.md
@@ -95,7 +95,7 @@ In this tutorial we are using the Raspberry Pi [Camera Module](https://www.raspb
```
## Run inference on the Raspberry Pi with the `inference_mobilenet.py` script

Now that we have validated that the camera is connected and working on the Raspberry Pi, its time to inference the ONNX model provided in the source. The model is a [MobileNet](https://github.com/onnx/models/tree/main/vision/classification/mobilenet) model that performs image classification on 1000 classes.
Now that we have validated that the camera is connected and working on the Raspberry Pi, it's time to run inference with the ONNX model provided in the source. The model is a [MobileNet](https://github.com/onnx/models/tree/main/validated/vision/classification/mobilenet) model that performs image classification on 1000 classes.

- Run the inference script, `inference_mobilenet.py`.
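Once the session has produced the MobileNet logits (1000 class scores), picking the predicted class reduces to a softmax followed by an argmax. A small illustrative sketch, independent of the actual script:

```python
import numpy as np

def top1(logits: np.ndarray) -> tuple[int, float]:
    """Return (class_index, probability) of the highest-scoring class."""
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    probs = exp / exp.sum()
    idx = int(np.argmax(probs))
    return idx, float(probs[idx])

# Fabricated logits: pretend class 283 scored highest out of 1000 classes.
logits = np.zeros(1000, dtype=np.float32)
logits[283] = 5.0
idx, prob = top1(logits)
print(idx)  # -> 283
```

The index is then looked up in the ImageNet class-label list to produce a human-readable prediction.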
2 changes: 1 addition & 1 deletion docs/tutorials/mnist_cpp.md
@@ -5,7 +5,7 @@ nav_exclude: true
# Number recognition with MNIST in C++
{: .no_toc }

This sample uses the MNIST model from the Model Zoo: https://github.com/onnx/models/tree/master/vision/classification/mnist
This sample uses the MNIST model from the Model Zoo: https://github.com/onnx/models/tree/main/validated/vision/classification/mnist

![Screenshot](../../../images/mnist-screenshot.png)

@@ -33,7 +33,7 @@ This application performs inference on device, in the browser using the onnxrunt

## SqueezeNet machine learning model

We will be using [SqueezeNet](https://github.com/onnx/models/tree/master/vision/classification/squeezenet) from the [ONNX Model Zoo](https://github.com/onnx/models). SqueezeNet models perform image classification - they take images as input and classify the major object in the image into a set of pre-defined classes. They are trained on the ImageNet dataset which contains images from 1000 different classes. SqueezeNet models are highly efficient in terms of size and speed while providing good accuracies. This makes them ideal for platforms with strict constraints on size, like client side inference.
We will be using [SqueezeNet](https://github.com/onnx/models/tree/main/validated/vision/classification/squeezenet) from the [ONNX Model Zoo](https://github.com/onnx/models). SqueezeNet models perform image classification - they take images as input and classify the major object in the image into a set of pre-defined classes. They are trained on the ImageNet dataset, which contains images from 1000 different classes. SqueezeNet models are highly efficient in terms of size and speed while providing good accuracy. This makes them ideal for platforms with strict constraints on size, like client-side inference.

> If you need even more model memory and disk efficiency, you can convert the ONNX model to [ORT format](../../reference/ort-format-models) and use an ORT model in your application instead of the ONNX one. You can also [reduce the size of the ONNX Runtime](../../build/custom.md) binary itself to only include support for the specific models in your application.

30 changes: 23 additions & 7 deletions src/routes/getting-started/table.svelte
@@ -143,11 +143,11 @@
"Install Nuget package&nbsp;<a class='text-blue-500' href='https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime' target='_blank'>Microsoft.ML.OnnxRuntime</a>",

'mac,C-API,X64,DefaultCPU':
"Download .tgz file from&nbsp;<a class='text-blue-500' href='https://github.com/microsoft/onnxruntime/releases' target='_blank'>Github</a>",
"Add 'onnxruntime-c' using CocoaPods or download the .tgz file from&nbsp;<a class='text-blue-500' href='https://github.com/microsoft/onnxruntime/releases' target='_blank'>Github</a>.",

'mac,C++,X64,DefaultCPU':
"Download .tgz file from&nbsp;<a class='text-blue-500' href='https://github.com/microsoft/onnxruntime/releases' target='_blank'>Github</a>",

"Add 'onnxruntime-c' using CocoaPods or download the .tgz file from&nbsp;<a class='text-blue-500' href='https://github.com/microsoft/onnxruntime/releases' target='_blank'>Github</a>.",
'mac,C#,X64,DefaultCPU':
"Download .tgz file from&nbsp;<a class='text-blue-500' href='https://github.com/microsoft/onnxruntime/releases' target='_blank'>Github</a>",

@@ -158,6 +158,22 @@

'mac,Python,X64,DefaultCPU': 'pip install onnxruntime',

'mac,Python,X64,CoreML': 'pip install onnxruntime',

'mac,Python,ARM64,CoreML': 'pip install onnxruntime',

'mac,objectivec,X64,DefaultCPU': "Add 'onnxruntime-objc' using CocoaPods.",

'mac,objectivec,ARM64,DefaultCPU': "Add 'onnxruntime-objc' using CocoaPods.",

'mac,objectivec,X64,CoreML': "Add 'onnxruntime-objc' using CocoaPods.",

'mac,objectivec,ARM64,CoreML': "Add 'onnxruntime-objc' using CocoaPods.",

'mac,C-API,X64,CoreML': "Add 'onnxruntime-c' using CocoaPods or download the .tgz file from&nbsp;<a class='text-blue-500' href='https://github.com/microsoft/onnxruntime/releases' target='_blank'>Github</a>.",

'mac,C++,X64,CoreML': "Add 'onnxruntime-c' using CocoaPods or download the .tgz file from&nbsp;<a class='text-blue-500' href='https://github.com/microsoft/onnxruntime/releases' target='_blank'>Github</a>.",

'linux,Python,X64,DefaultCPU': 'pip install onnxruntime',

'linux,Python,ARM64,DefaultCPU': 'pip install onnxruntime',
@@ -566,13 +582,13 @@

//mac m1
'mac,C-API,ARM64,CoreML':
"Install Nuget package&nbsp;<a class='text-blue-500' href='https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime' target='_blank'>Microsoft.ML.OnnxRuntime</a>",
"Add 'onnxruntime-c' using CocoaPods or download the .tgz file from&nbsp;<a class='text-blue-500' href='https://github.com/microsoft/onnxruntime/releases' target='_blank'>Github</a>.",

'mac,C#,ARM64,CoreML':
"Install Nuget package&nbsp;<a class='text-blue-500' href='https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime' target='_blank'>Microsoft.ML.OnnxRuntime</a> <br/>Refer to <a class='text-blue-500' href='http://www.onnxruntime.ai/docs/execution-providers/CoreML-ExecutionProvider.html#requirements' target='_blank'>docs</a> for requirements.",

'mac,C++,ARM64,CoreML':
"Download .tgz file from&nbsp;<a class='text-blue-500' href='https://github.com/microsoft/onnxruntime/releases' target='_blank'>Github</a>",
"Add 'onnxruntime-c' using CocoaPods or download the .tgz file from&nbsp;<a class='text-blue-500' href='https://github.com/microsoft/onnxruntime/releases' target='_blank'>Github</a>.",

'mac,Java,ARM64,CoreML':
"Add a dependency on <a class='text-blue-500' href='https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime' target='_blank'>com.microsoft.onnxruntime:onnxruntime</a> using Maven/Gradle",
@@ -586,10 +602,10 @@
"Install Nuget package&nbsp;<a class='text-blue-500' href='https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime' target='_blank'>Microsoft.ML.OnnxRuntime</a>",

'mac,C-API,ARM64,DefaultCPU':
"Install Nuget package&nbsp;<a class='text-blue-500' href='https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime' target='_blank'>Microsoft.ML.OnnxRuntime</a>",
"Add 'onnxruntime-c' using CocoaPods or download the .tgz file from&nbsp;<a class='text-blue-500' href='https://github.com/microsoft/onnxruntime/releases' target='_blank'>Github</a>.",

'mac,C++,ARM64,DefaultCPU':
"Install Nuget package&nbsp;<a class='text-blue-500' href='https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime' target='_blank'>Microsoft.ML.OnnxRuntime</a>",
"Add 'onnxruntime-c' using CocoaPods or download the .tgz file from&nbsp;<a class='text-blue-500' href='https://github.com/microsoft/onnxruntime/releases' target='_blank'>Github</a>.",

//power
'linux,C-API,Power,DefaultCPU':