onnxruntime-tvm #18955
Labels
documentation
improvements or additions to documentation; typically submitted using template
ep:CUDA
issues related to the CUDA execution provider
ep:ROCm
questions/issues related to ROCm execution provider
ep:tvm
issues related to TVM execution provider
Describe the documentation issue
When I used onnxruntime-tvm for inference with a precompiled model on a GPU (ROCm) backend, the following error occurred.
The code follows the ResNet-50 example, with the only modification being target='cuda'.
Looking forward to your help!
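For reference, a minimal sketch of how the TVM execution provider is typically selected from Python, assuming a session setup along the lines of the ResNet-50 example. The model path is a placeholder, the `target` string is the one the report says was changed, and the session construction is wrapped in a function since an onnxruntime build with the TVM EP may not be installed:

```python
# Hedged sketch: selecting the TVM execution provider in onnxruntime.
# "resnet50.onnx" is a placeholder path, not taken from the report.
providers = ["TvmExecutionProvider"]
provider_options = [{
    "target": "cuda",  # the report changed only this (a ROCm target would differ)
}]

def make_session(model_path):
    """Create an InferenceSession with the TVM EP (requires onnxruntime-tvm)."""
    import onnxruntime as ort  # imported lazily: the TVM EP build may be absent
    return ort.InferenceSession(
        model_path,
        providers=providers,
        provider_options=provider_options,
    )

# Example (only runs with an onnxruntime-tvm build):
# session = make_session("resnet50.onnx")
```

Including the exact `target` string and the full session-creation call in the report would help maintainers reproduce the error.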
Page / URL
No response