Describe the issue
I'm trying to run a U-Net on a device with Android API Level 25. The device has a Mali T860 GPU which I'd like to use.
Apparently, NNAPI is not available on devices below API 27, yet it is listed by `OrtEnvironment.getProviders()`: CPU, NNAPI, XNNPACK. How is that possible? Will NNAPI use the GPU, or simply fall back to its CPU implementation?
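For context, here is a minimal sketch of how I list the providers (Kotlin, via `OrtEnvironment.getAvailableProviders()` from the Java API; not my exact project code):

```kotlin
import ai.onnxruntime.OrtEnvironment

fun main() {
    // Lists the execution providers compiled into this onnxruntime-android build.
    val providers = OrtEnvironment.getAvailableProviders()
    println(providers) // on this device: [CPU, NNAPI, XNNPACK]
}
```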
From the profiler, I can tell that everything runs on the CPU, no matter what configuration I try.
Alternatively, is there any way to run the inference on the GPU?
To reproduce
Urgency
No response
Platform
Android
OS Version
25
ONNX Runtime Installation
Released Package
Compiler Version (if 'Built from Source')
No response
Package Name (if 'Released Package')
onnxruntime-android
ONNX Runtime Version or Commit ID
1.17.0
ONNX Runtime API
Java/Kotlin
Architecture
ARM64
Execution Provider
NNAPI
Execution Provider Library Version
No response
The NNAPI EP dynamically creates an NNAPI model at runtime from the ONNX model, based on the device's capabilities. So it's included in the build and returned by `getProviders()`, but it won't be used for any execution of the model if the device doesn't support NNAPI.
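As a rough sketch (not your exact setup, and the model path is a placeholder), this is how the NNAPI EP is typically registered; on a device where NNAPI isn't usable, every node is assigned to the CPU EP instead, which matches what your profiler shows:

```kotlin
import ai.onnxruntime.OrtEnvironment
import ai.onnxruntime.OrtSession
import ai.onnxruntime.providers.NNAPIFlags
import java.util.EnumSet

val env = OrtEnvironment.getEnvironment()
val options = OrtSession.SessionOptions().apply {
    // Registers the NNAPI EP. CPU_DISABLED stops NNAPI from using its own
    // CPU reference implementation (nnapi-reference), so anything NNAPI
    // can't run falls back to ONNX Runtime's CPU EP rather than NNAPI's.
    addNnapi(EnumSet.of(NNAPIFlags.CPU_DISABLED))
}
// Placeholder path; substitute the actual model location.
val session = env.createSession("/path/to/unet.onnx", options)
```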
We don't have a GPU-only execution provider for mobile. There is an experimental OpenCL-based execution provider, but it only supports a limited number of operators, as there wasn't a use case that justified further work on it. FWIW, executing a model on the GPU can hurt device responsiveness because you're competing with UI updates, so an approach that can utilize an NPU is generally preferable. One possibility: Chrome now supports WebGPU on Android, so onnxruntime-web might be usable, but that requires Android 12.
Thanks for all the info, that explains why NNAPI shows up. Good point about the GPU contention; I might actually change gears and run on the CPU with only one thread, for example, to avoid using all the resources.
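For anyone finding this later, here's the kind of setup I mean (a sketch only; the path is a placeholder):

```kotlin
import ai.onnxruntime.OrtEnvironment
import ai.onnxruntime.OrtSession

val env = OrtEnvironment.getEnvironment()
val options = OrtSession.SessionOptions().apply {
    // Limit intra-op parallelism to a single thread so inference
    // leaves the remaining cores free for the UI and other work.
    setIntraOpNumThreads(1)
}
// Placeholder path; substitute the actual model location.
val session = env.createSession("/path/to/unet.onnx", options)
```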