This is an example application for ONNX Runtime on Android. The demo app performs image classification: it continuously classifies the objects seen by the device's camera in real time and displays the most probable inference results on the screen.
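Classification models output one score per class, and "most probable result" means the class with the highest softmax probability. A minimal Python sketch of that post-processing step (the labels and logit values below are made-up placeholders, not the app's actual output):

```python
import math

def softmax(scores):
    """Convert raw model scores (logits) into probabilities."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(probs, labels, k=3):
    """Return the k most probable (label, probability) pairs."""
    ranked = sorted(zip(labels, probs), key=lambda p: p[1], reverse=True)
    return ranked[:k]

# Placeholder logits for three hypothetical classes
logits = [2.0, 1.0, 0.1]
labels = ["cat", "dog", "car"]
probs = softmax(logits)
print(top_k(probs, labels, k=1))  # the single most probable class
```

The app performs the equivalent step on-device after each camera frame is run through the model.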
This example is loosely based on Google CodeLabs - Getting Started with CameraX
We use pre-trained MobileNet V2 models from the ONNX model zoo in this sample app.
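MobileNet V2 models trained on ImageNet conventionally take a 224x224 RGB input normalized with the standard ImageNet per-channel mean and standard deviation. The exact preprocessing expected by the model variant that `prepare_models.py` downloads may differ, so treat this as a sketch of the common convention:

```python
# Standard ImageNet normalization constants (per RGB channel, on a 0-1 scale)
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def normalize_pixel(r, g, b):
    """Normalize one RGB pixel (0-255) the way ImageNet-trained models expect."""
    rgb = (r / 255.0, g / 255.0, b / 255.0)
    return tuple((c - m) / s for c, m, s in zip(rgb, IMAGENET_MEAN, IMAGENET_STD))

# A mid-grey pixel lands close to zero after normalization
print(normalize_pixel(128, 128, 128))
```

In the Android app this normalization is applied to every pixel of each resized camera frame before the tensor is passed to ONNX Runtime.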
- Android Studio Electric Eel 2022.1.1+ (installed on Mac/Windows/Linux)
- Android device with a camera in developer mode with USB debugging enabled
Clone this GitHub repository to your computer to get the sample application.
Run mobile/examples/image_classification/android/prepare_models.py
to download and prepare the labels file and model files in the sample application resource directory.
```bash
cd mobile/examples/image_classification/android  # cd to this directory
python -m pip install -r ./prepare_models.requirements.txt
python ./prepare_models.py --output_dir ./app/src/main/res/raw
```
Then open the sample application in Android Studio. To do this, open Android Studio, select Open an existing project, and browse to and open the folder mobile/examples/image_classification/android/.
Select Build -> Make Project in the top toolbar in Android Studio and check that the project has built successfully.
Connect your Android device to the computer and select your device in the drop-down device menu.
Then select Run -> Run app, which will install and launch the app on your device.
Now you can try it out by opening the app ort_image_classifier on your device. The app may request your permission to use the camera.
Here's an example screenshot of the app.