Describe the bug
I wanted to use compile mode for my model in MACE, but it fails during testing. The errors occur when running pytest on test_compile.py, which raises various InternalTorchDynamoError and Unsupported exceptions. The regular (non-compiled) mode works without issues.
To Reproduce
Steps to reproduce the behavior:
Clone the MACE repository.
Set up the environment using Python 3.10.15 with PyTorch 2.2.2 and CUDA 12.6.
Run pytest on test_compile.py with the following configuration:
pytest test_compile.py
The errors include InternalTorchDynamoError: 'NoneType' object is not subscriptable and torch._dynamo.exc.Unsupported: Tensor.requires_grad.
Expected behavior
The model should compile successfully on the available GPU with CUDA support. Instead, compilation fails with PyTorch Dynamo errors.
Screenshots: N/A
Additional context
Below is the platform and environment setup I used when encountering the bug:
Platform Information
• Operating System: Linux (64-bit, x86_64 architecture)
• Python Version: 3.10.15 (packaged by conda-forge, GCC 13.3.0)
• PyTorch Version: 2.2.2
• CUDA Version: 12.6
• NVIDIA Driver Version: 561.09
• GPU Model: NVIDIA GeForce RTX 3080 (10GB VRAM)
• CUDA Toolkit: nvcc version 12.6, build cuda_12.6.r12.6/compiler.34714021_0
• NVIDIA-SMI Output: Driver Version 561.09
Additionally, torch.cuda.is_available() returns True, indicating CUDA is accessible. The errors persisted across multiple compile modes (default, reduce-overhead, max-autotune) and PyTorch versions.