updates re: android instructions
jywu-msft committed May 17, 2024
1 parent 66aef5a commit 38b61ff
Showing 2 changed files with 28 additions and 11 deletions.
15 changes: 14 additions & 1 deletion docs/build/android.md
@@ -141,7 +141,20 @@ If you want to use NNAPI Execution Provider on Android, see [NNAPI Execution Pro

The Android NNAPI Execution Provider can be built using the build commands in the [Android Build instructions](#android-build-instructions) with `--use_nnapi`.
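For example, a hypothetical full invocation on Linux (SDK and NDK paths are placeholders) might look like `./build.sh --android --android_sdk_path <android-sdk> --android_ndk_path <android-ndk> --android_abi arm64-v8a --use_nnapi`.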

## Test Android changes using emulator
## QNN Execution Provider

If your device has a supported Qualcomm Snapdragon SOC and you want to use the QNN Execution Provider on Android, see [QNN Execution Provider](../execution-providers/QNN-ExecutionProvider).

### Build Instructions

Download and install the Qualcomm AI Engine Direct SDK (Qualcomm Neural Network SDK) for [Linux/Android/Windows](https://qpm.qualcomm.com/main/tools/details/qualcomm_ai_engine_direct).
The QNN Execution Provider can then be built using the build commands in the [Android Build instructions](#android-build-instructions) with `--use_qnn --qnn_home [QNN_SDK path]`.
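For example, a hypothetical full invocation on Linux (all paths are placeholders) might look like `./build.sh --android --android_sdk_path <android-sdk> --android_ndk_path <android-ndk> --android_abi arm64-v8a --use_qnn --qnn_home <QNN_SDK path>`.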

### Testing and Validation

The ONNX Runtime QNN Execution Provider is a supported runtime in [Qualcomm AI Hub](https://aihub.qualcomm.com/).

## Test Android changes using emulator (not applicable for QNN Execution Provider)

See [Testing Android Changes using the Emulator](https://github.com/microsoft/onnxruntime/blob/main/docs/Android_testing.md).

24 changes: 14 additions & 10 deletions docs/execution-providers/QNN-ExecutionProvider.md
@@ -12,27 +12,27 @@ redirect_from: /docs/reference/execution-providers/QNN-ExecutionProvider
The QNN Execution Provider for ONNX Runtime enables hardware-accelerated execution on Qualcomm chipsets.
It uses the Qualcomm AI Engine Direct SDK (QNN SDK) to construct a QNN graph from an ONNX model, which can
then be executed by a supported accelerator backend library.

The ONNX Runtime QNN Execution Provider can be used on Android and Windows (ARM64) devices with Qualcomm Snapdragon SOCs.

## Contents
{: .no_toc }

* TOC placeholder
{:toc}

## Install Pre-requisites (Building from Source Only)
## Install Pre-requisites (Build from Source Only)

If you build the QNN Execution Provider from source, you should first
download the Qualcomm AI Engine Direct SDK (QNN SDK) from [https://qpm.qualcomm.com/main/tools/details/qualcomm_ai_engine_direct](https://qpm.qualcomm.com/main/tools/details/qualcomm_ai_engine_direct).

### QNN Version Requirements

ONNX Runtime QNN Execution Provider has been built and tested with QNN 2.22.x and Qualcomm SC8280, SM8350, Snapdragon X SOC's
The ONNX Runtime QNN Execution Provider has been built and tested with QNN 2.22.x and Qualcomm SC8280, SM8350, and Snapdragon X SOCs on Android and ARM64 Windows.

## Build
## Build (Android and Windows)
For build instructions, please see the [BUILD page](../build/eps.md#qnn).

## Pre-built Packages
## Pre-built Packages (Windows Only)
Note: Starting with version 1.18.0, you do not need to separately download and install the QNN SDK. The required QNN dependency libraries are included in the ONNX Runtime packages.
- [NuGet package](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.QNN)
- [Python package](https://pypi.org/project/onnxruntime-qnn/)
@@ -44,6 +44,10 @@ Note: Starting version 1.18.0 , you do not need to separately download and insta
- Install: `pip install onnxruntime-qnn`
- Install nightly package: `python -m pip install -i https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ ort-nightly-qnn`
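
After installing one of these packages, a quick sanity check (a sketch, not part of the original instructions) is to confirm that the QNN Execution Provider was registered:

```python
# Sketch: verify that an installed QNN-enabled package exposes the QNN EP.
import onnxruntime as ort

print(ort.__version__)
# The list should include "QNNExecutionProvider" for a QNN-enabled package.
print(ort.get_available_providers())
```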

## Qualcomm AI Hub
Qualcomm AI Hub can be used to optimize and run models on Qualcomm-hosted devices.
The ONNX Runtime QNN Execution Provider is a supported runtime in [Qualcomm AI Hub](https://aihub.qualcomm.com/).

## Configuration Options
The QNN Execution Provider supports a number of configuration options. These provider options are specified as key-value string pairs.
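
As a minimal sketch (assuming the Python package; the model path and option values below are illustrative, and `backend_path` and `profiling_level` are provider options covered on this page), the options are passed as string key-value pairs when creating a session:

```python
# Sketch: create an ONNX Runtime session targeting the QNN HTP backend.
import onnxruntime as ort

qnn_options = {
    # Path to the QNN backend library: "QnnHtp.dll" on Windows,
    # "libQnnHtp.so" on Android (illustrative values).
    "backend_path": "QnnHtp.dll",
    # Optional; profiling is discussed later on this page.
    "profiling_level": "basic",
}

session = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    providers=[("QNNExecutionProvider", qnn_options)],
)
```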

@@ -134,12 +138,12 @@ Alternatively to setting profiling_level at compile time, profiling can be enabl
|ai.onnx:Asin||
|ai.onnx:Atan||
|ai.onnx:AveragePool||
|ai.onnx:BatchNormalization||
|ai.onnx:BatchNormalization|fp16 supported since 1.18.0|
|ai.onnx:Cast||
|ai.onnx:Clip||
|ai.onnx:Clip|fp16 supported since 1.18.0|
|ai.onnx:Concat||
|ai.onnx:Conv||
|ai.onnx:ConvTranspose||
|ai.onnx:Conv|3D supported since 1.18.0|
|ai.onnx:ConvTranspose|3D supported since 1.18.0|
|ai.onnx:Cos||
|ai.onnx:DepthToSpace||
|ai.onnx:DequantizeLinear||
@@ -175,7 +179,7 @@ Alternatively to setting profiling_level at compile time, profiling can be enabl
|ai.onnx:Neg||
|ai.onnx:Not||
|ai.onnx:Or||
|ai.onnx:Prelu||
|ai.onnx:Prelu|fp16, int32 supported since 1.18.0|
|ai.onnx:Pad||
|ai.onnx:Pow||
|ai.onnx:QuantizeLinear||
