Describe the issue
Installing the dependencies works fine, but running `python -m torch_ort.configure` fails while compiling the 'torch_gpu_allocator' PyTorch CPP extension. I tried reinstalling Anaconda, using a different Python environment, and using a different CUDA toolkit version, but nothing worked. The same steps succeed in Google Colab. I am unsure why it fails to get `cudart_version` from the onnxruntime build info. Thank you!
Urgency
No response
Target platform
WSL2 Ubuntu 22.04.2 LTS
Build script
pip install onnx ninja
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install onnxruntime-training -f https://download.onnxruntime.ai/onnxruntime_stable_cu116.html
pip install torch-ort
pip install --upgrade protobuf
Error / output
python -m torch_ort.configure
/home/maqsoftware/anaconda3/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_validation.py:114: UserWarning: WARNING: failed to get cudart_version from onnxruntime build info.
warnings.warn("WARNING: failed to get cudart_version from onnxruntime build info.")
Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp-a34b3233.so.1 library.
Try to import numpy first or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it.
There was an error compiling 'torch_gpu_allocator' PyTorch CPP extension
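The MKL error above suggests its own workaround ("Set MKL_SERVICE_FORCE_INTEL to force it" or import numpy first). A minimal sketch of that suggestion, assuming the variables simply need to be set before numpy/torch load libgomp; this is not a confirmed fix for the allocator compile failure:

```python
import os

# Hedged workaround based on the error message's own advice: set the MKL
# threading variables *before* numpy or torch are imported, so mkl-service
# and libgomp agree on a threading layer.
os.environ.setdefault("MKL_SERVICE_FORCE_INTEL", "1")  # force the Intel layer
# Alternative often used under conda: switch MKL to the GNU (libgomp) layer.
# os.environ.setdefault("MKL_THREADING_LAYER", "GNU")

print(os.environ["MKL_SERVICE_FORCE_INTEL"])
```

Exporting the same variable in the shell (`export MKL_SERVICE_FORCE_INTEL=1`) before re-running `python -m torch_ort.configure` would be the equivalent command-line form.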
Visual Studio Version
No response
GCC / Compiler Version
gcc version - 11.4.0 | nvcc version - cuda_12.2.r12.2 | python version 3.11.5