nodejs + tensorrt :: how to set execution provider options? #18380
Labels
api:Java
issues related to the Java API
api:Javascript
issues related to the Javascript API
ep:TensorRT
issues related to TensorRT execution provider
stale
issues that have not been addressed in a while; categorized by a bot
Describe the issue
Hi.
I am unable to find any examples on the web of how to set provider options for TensorRT via Node.js, even though there are examples for C++/Python/Java:
https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#samples
Setting provider options via environment variables also does not work.
test.js creates the session like this:
const model = await ort.InferenceSession.create(modelBuffer, {"executionProviders": ["tensorrt"], "logSeverityLevel": 0})
Everything works OK, but nothing appears in /tmp/trt_cache/.
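For illustration, here is a sketch of what passing provider options might look like if onnxruntime-node accepted an object (rather than just a string) per execution provider. The option names below (deviceId, trtEngineCacheEnable, trtEngineCachePath) are assumptions modeled on the C/C++ TensorRT provider options; whether the Node.js binding actually forwards them is exactly what this issue is asking.

```javascript
// Hypothetical sketch -- NOT confirmed to work with onnxruntime-node.
// The object form mirrors the TensorRT provider options from the C/C++ API;
// the key names here are assumptions, not documented Node.js options.
const sessionOptions = {
  executionProviders: [
    {
      name: 'tensorrt',
      deviceId: 0,                    // assumed option name
      trtEngineCacheEnable: true,     // assumed option name
      trtEngineCachePath: '/tmp/trt_cache', // assumed option name
    },
  ],
  logSeverityLevel: 0,
};

// Usage would then be:
// const model = await ort.InferenceSession.create(modelBuffer, sessionOptions);
console.log(sessionOptions.executionProviders[0].name);
```

If the binding supported this, it would match how the CUDA EP options are passed in some JS examples; for TensorRT I could not find any confirmation.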
It looks like the Node.js binding simply ignores any TensorRT-related environment variables.
The same happens with onnxruntime_perf_test (again, nothing in /tmp/trt_cache/).
Am I missing something, or is it really not possible to set TensorRT provider options via Node.js?
Why are the ORT_TENSORRT_* environment variables ignored?
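For reference, the engine-cache environment variables documented for the TensorRT EP (the ones the C++/Python workflows rely on, and which the Node.js binding appears to ignore) are set like this:

```shell
# TensorRT EP engine-cache variables as documented for the C API.
# Exporting these before launching node has no visible effect with
# the Node.js binding: /tmp/trt_cache/ stays empty.
export ORT_TENSORRT_ENGINE_CACHE_ENABLE=1
export ORT_TENSORRT_CACHE_PATH=/tmp/trt_cache
echo "$ORT_TENSORRT_ENGINE_CACHE_ENABLE $ORT_TENSORRT_CACHE_PATH"
```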
Thanks.
To reproduce
Urgency
not urgent
Platform
Linux
OS Version
Ubuntu 22.04
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
1.17.0
ONNX Runtime API
JavaScript
Architecture
X64
Execution Provider
TensorRT
Execution Provider Library Version
TensorRT 8.6.1.6-1+cuda12.0