Ubuntu Container for Jetson ORIN #673
You are using the Docker image for x86, not for Jetson! The Jetson Docker base image is here: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-base
@johnnynunez If you had actually read my post, you would notice that I have a working version of the Docker image with the L4T Jetson base :) I am trying to understand if and why an arm64 Docker image would or would not work (including GPU access) on Jetson.
The l4t version comes with everything. I think the Ubuntu arm image is for SBSA (Grace, etc.).
Hi, thank you so much for this resource. It's been really helpful in guiding my work.
I have a Docker setup working with l4t-jetpack:r36.3.0. However, I am at a juncture where I have to start from a non-JetPack base image, so I am working with nvidia/cuda:12.2.0-devel-ubuntu22.04.
I am able to install onnxruntime (built from source) and PyTorch 2.3 with nvidia/cuda:12.2.0-devel-ubuntu22.04 as the base image. I have tiny scripts to check for GPU access in PyTorch and ORT, both of which show that the GPU is found. However, it fails when trying to load the ONNX model.
Is CUDA available: True
Current CUDA device: 0
Number of GPUs: 1
CUDA device name: Orin
CUDA memory allocated: 0.00 MB
CUDA memory cached: 0.00 MB
Total GPU memory: 62841.44 MB
cuDNN is installed.
8907
2024-10-10 00:28:10.301568556 [E:onnxruntime:, inference_session.cc:2117 operator()] Exception during initialization: /opt/onnxruntime/onnxruntime/core/providers/cuda/cuda_call.cc:129 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, SUCCTYPE, const char*, const char*, int) [with ERRTYPE = cublasStatus_t; bool THRW = true; SUCCTYPE = cublasStatus_t; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] /opt/onnxruntime/onnxruntime/core/providers/cuda/cuda_call.cc:121 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, SUCCTYPE, const char*, const char*, int) [with ERRTYPE = cublasStatus_t; bool THRW = true; SUCCTYPE = cublasStatus_t; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] CUBLAS failure 3: CUBLAS_STATUS_ALLOC_FAILED ; GPU=0 ; hostname=ubuntu ; file=/opt/onnxruntime/onnxruntime/core/providers/cuda/cuda_execution_provider.cc ; line=178 ; expr=cublasCreate(&cublas_handle_);
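For reference, the PyTorch side of such a GPU check can be sketched roughly like this. This is a hypothetical script, not the exact one used above; the import is guarded so it degrades gracefully where PyTorch is absent, and all names are illustrative:

```python
# Hypothetical sketch of a PyTorch GPU/cuDNN check script (illustrative,
# not the poster's actual script). The import is guarded so the function
# still returns a report on machines without PyTorch installed.

def gpu_report():
    """Collect basic CUDA/cuDNN info from PyTorch, if it is available."""
    try:
        import torch
    except ImportError:
        return {"torch": None}

    report = {
        "torch": torch.__version__,
        "cuda_available": torch.cuda.is_available(),
    }
    if report["cuda_available"]:
        dev = torch.cuda.current_device()
        report.update({
            "device": dev,
            "device_name": torch.cuda.get_device_name(dev),
            "mem_allocated_mb": torch.cuda.memory_allocated(dev) / 1024**2,
            # Reports the cuDNN build version, e.g. 8907 for 8.9.7.
            "cudnn_version": torch.backends.cudnn.version(),
        })
    return report


if __name__ == "__main__":
    for key, value in gpu_report().items():
        print(f"{key}: {value}")
```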
It feels like I'm almost there, but I am (obviously) missing a piece of the puzzle. Searching around, I see there are CUDA Toolkits specifically for Jetson (not sure if this is even the source of the issue); however, I didn't find one for CUDA 12.2 on Ubuntu 22.04.
Could someone help me understand/resolve what's going on with this CUBLAS error?
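One thing sometimes worth trying with CUBLAS_STATUS_ALLOC_FAILED at session init is capping the CUDA execution provider's memory arena so cuBLAS has headroom when cublasCreate runs. This is only a sketch, assuming onnxruntime's documented CUDA EP options; the limit value is an illustrative guess, not a recommendation, and it may not be the root cause here:

```python
# Sketch: build an onnxruntime provider list with a capped CUDA memory
# arena. "gpu_mem_limit" and "arena_extend_strategy" are documented CUDA
# execution provider options; the 4 GB default below is illustrative.

def cuda_providers(gpu_mem_limit_gb=4):
    """Return an ORT provider list with a capped CUDA memory arena."""
    cuda_options = {
        "device_id": 0,
        # Cap the arena (in bytes) instead of letting it claim most of
        # the GPU up front.
        "gpu_mem_limit": int(gpu_mem_limit_gb * 1024**3),
        # Grow the arena only as much as each allocation request needs.
        "arena_extend_strategy": "kSameAsRequested",
    }
    return [("CUDAExecutionProvider", cuda_options), "CPUExecutionProvider"]


# Usage (requires onnxruntime-gpu and a model file, so not run here):
#   import onnxruntime as ort
#   sess = ort.InferenceSession("model.onnx", providers=cuda_providers())
```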
dpkg -L libcudnn8 and dpkg -L libcudnn8-dev show the same paths on both containers.
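A small helper for the comparison just described: given the raw output of dpkg -L libcudnn8 captured from each container, report which paths differ. This is pure string processing and runs anywhere; the function name and sample strings are illustrative:

```python
# Diff two `dpkg -L` listings (e.g. from `dpkg -L libcudnn8` captured in
# each container) and report paths present in only one of them.

def diff_dpkg_listings(listing_a, listing_b):
    """Return (only_in_a, only_in_b) sorted path lists from two dpkg -L outputs."""
    paths_a = {line.strip() for line in listing_a.splitlines() if line.strip()}
    paths_b = {line.strip() for line in listing_b.splitlines() if line.strip()}
    return sorted(paths_a - paths_b), sorted(paths_b - paths_a)
```

Capture the listing in each container (for example with `dpkg -L libcudnn8 > paths.txt`), then feed the two file contents to this function; empty results on both sides confirm the packages laid down identical paths.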