add link to inference example code
HectorSVC committed Dec 20, 2023
1 parent 4af1ff6 commit cb45c5d
Showing 1 changed file with 1 addition and 1 deletion.
docs/execution-providers/QNN-ExecutionProvider.md (1 addition & 1 deletion)
@@ -91,7 +91,7 @@ The QNN context binary generation can be done on the Qualcomm device which has HTP

The generated ONNX model, which contains the QNN context binary, can be deployed to a production/real device to run inference. The QNN Execution Provider treats this ONNX model as a normal model, so the inference code is the same as for inference with a QDQ model on the HTP backend.

- Code example
+ [Code example](https://github.com/microsoft/onnxruntime-inference-examples/blob/733ce6f3e8dd2ede8b67a8465684bca2f62a4a33/c_cxx/QNN_EP/mobilenetv2_classification/main.cpp#L90-L97)
```
#include "onnxruntime_session_options_config_keys.h"
```
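
For reference, a minimal sketch of such an inference setup with the ONNX Runtime C++ API might look like the following. This is not the linked example verbatim: the model path "model_ctx.onnx" is a hypothetical placeholder, while the "QNN" provider registration and the "backend_path" option follow the QNN Execution Provider documentation.

```
// Minimal sketch: run inference on an ONNX model that embeds a QNN context
// binary. The setup is identical to inference with a QDQ model on HTP.
#include <string>
#include <unordered_map>

#include "onnxruntime_cxx_api.h"
#include "onnxruntime_session_options_config_keys.h"

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "qnn-ctx-inference");
  Ort::SessionOptions session_options;

  // Register the QNN Execution Provider and point it at the HTP backend
  // library ("QnnHtp.dll" on Windows, "libQnnHtp.so" on Linux).
  std::unordered_map<std::string, std::string> qnn_options;
  qnn_options["backend_path"] = "QnnHtp.dll";
  session_options.AppendExecutionProvider("QNN", qnn_options);

  // The model embedding the QNN context binary loads like any other ONNX
  // model. "model_ctx.onnx" is a placeholder path, not from the example.
  Ort::Session session(env, ORT_TSTR("model_ctx.onnx"), session_options);

  // ... create Ort::Value inputs and call session.Run(...) as usual ...
  return 0;
}
```

Since the execution provider consumes the pre-generated context binary embedded in the model, no inference-time session options are needed beyond the usual QNN EP registration.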
