These examples demonstrate how to use ONNX Runtime (ORT) in mobile applications.
The following are general prerequisites. Individual examples may have additional requirements, so please refer to each example's instructions.
Clone this repo.

```bash
git clone https://github.com/microsoft/onnxruntime-inference-examples.git
```
- Xcode 12.5+
- CocoaPods
- A valid Apple Developer ID if you want to run the example on a device
The example app shows basic usage of the ORT APIs.
The example app performs image classification: it continuously classifies the objects seen by the device's camera in real time and displays the most probable inference results on the screen.
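Selecting the "most probable" results typically means applying softmax to the model's raw outputs and taking the top-k entries. The sketch below is a generic Python illustration of that step; the label list and array shapes are assumptions, not taken from the example app.

```python
import numpy as np


def top_k_predictions(logits: np.ndarray, labels: list, k: int = 3):
    # Softmax converts raw model outputs (logits) into probabilities.
    # Subtracting the max first keeps the exponentials numerically stable.
    exp = np.exp(logits - np.max(logits))
    probs = exp / exp.sum()

    # Indices of the k highest probabilities, most probable first.
    top = np.argsort(probs)[::-1][:k]
    return [(labels[i], float(probs[i])) for i in top]


# Hypothetical logits for a 3-class model.
preds = top_k_predictions(np.array([1.0, 3.0, 2.0]), ["cat", "dog", "bird"], k=2)
```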
The example app uses speech recognition to transcribe speech from audio recorded by the device.
The example app performs object detection: it continuously detects objects in the frames seen by your iOS device's back camera and displays each detected object's bounding box, class, and inference confidence on the screen.
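A common shape for this kind of detection post-processing is confidence filtering followed by non-maximum suppression (NMS). The sketch below is a generic Python illustration under assumed `(x1, y1, x2, y2)` box coordinates and threshold values, not the example app's actual code.

```python
import numpy as np


def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2); IoU = intersection area / union area.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def filter_detections(boxes, scores, score_threshold=0.5, iou_threshold=0.45):
    # Drop low-confidence detections, then greedily keep the highest-scoring
    # box and suppress any remaining box that overlaps it too much (NMS).
    order = [i for i in np.argsort(scores)[::-1] if scores[i] >= score_threshold]
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in kept):
            kept.append(i)
    return kept
```

The surviving indices are then drawn as bounding boxes with their class labels and confidence scores.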
The Xamarin.Forms example app demonstrates the use of several vision-related models from the ONNX Model Zoo collection.
The example app recovers a high-resolution (HR) image from its low-resolution counterpart, with Ort-Extensions support for pre/post-processing. Android and iOS are currently supported.
The example app demonstrates question answering models with pre/post-processing in a mobile scenario. Android and iOS are currently supported.
This example shows how to use ORT to do speech recognition using the Whisper model. One version (Cloud) calls the OpenAI Whisper endpoint using the Azure custom op. The other uses a local Whisper model.
This is an example React Native Expo project which demonstrates basic ORT usage, such as loading ONNX models and creating inference sessions.