
onnxruntime-tvm #18955

Open
yfirecanfly opened this issue Dec 29, 2023 · 1 comment
Labels
documentation — improvements or additions to documentation; typically submitted using template
ep:CUDA — issues related to the CUDA execution provider
ep:ROCm — questions/issues related to ROCm execution provider
ep:tvm — issues related to TVM execution provider

Comments


yfirecanfly commented Dec 29, 2023

Describe the documentation issue

When I ran inference with the ONNX Runtime TVM execution provider on a GPU (ROCm backend) using a precompiled model, the following error occurred:

Check failed: (itr != physical_devices.end()) is false: Unable to find a physical device (from among the 1 given) to match the virtual device with device type 2
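For context (this note is not part of the original report): TVM identifies devices with DLPack's DLDeviceType enum, in which device type 2 is kDLCUDA and the ROCm device type is 10, so the error suggests the module was compiled for a CUDA device that the runtime could not find on this machine. A minimal lookup sketch, assuming the standard DLPack codes:

```python
# Relevant subset of DLPack's DLDeviceType enum (from dlpack.h),
# which TVM uses to describe virtual/physical devices.
DL_DEVICE_TYPES = {
    1: "kDLCPU",
    2: "kDLCUDA",
    10: "kDLROCM",
}

def describe_device_type(code: int) -> str:
    """Return the DLPack device-type name for a numeric code."""
    return DL_DEVICE_TYPES.get(code, f"unknown({code})")

# The error message's "device type 2" resolves to CUDA:
print(describe_device_type(2))   # kDLCUDA
print(describe_device_type(10))  # kDLROCM
```

If the target hardware is actually ROCm rather than CUDA, the mismatch would come from compiling with a CUDA target string, as in the snippet below.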

The code follows the ResNet-50 example, with the only modification being target="cuda":

compiled_vm_exec = compile_virtual_machine(onnx_model, target_str="cuda")
so_folder = serialize_virtual_machine(compiled_vm_exec)
provider_options = [dict(
    executor="vm",
    so_folder=so_folder,
)]

# so = onnxruntime.SessionOptions()
# so.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_DISABLE_ALL

inference_session1 = onnxruntime.InferenceSession(
    onnx_model.SerializeToString(),
    # sess_options=so,
    providers=["TvmExecutionProvider"],
    provider_options=provider_options,
)

Looking forward to your help!

Page / URL

No response

@yfirecanfly yfirecanfly added the documentation improvements or additions to documentation; typically submitted using template label Dec 29, 2023
@github-actions github-actions bot added ep:CUDA issues related to the CUDA execution provider ep:ROCm questions/issues related to ROCm execution provider ep:tvm issues related to TVM execution provider labels Dec 29, 2023
yfirecanfly (Author) commented:

The documentation only shows the LLVM target. Does it currently support CPU?
