GPU memory #5

Hi, when I run the code with the model SSGV321, it always reports CUDA OUT OF MEMORY until I set the batch_size to 1. My GPU is a TITAN XP with 12 GB of memory. I would like to know how much GPU memory you use.
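For anyone comparing numbers, peak usage can be read from PyTorch's own allocator counters; a minimal sketch (not from this repository):

```python
import torch

# Reset the allocator's peak counter, run one training step, then read
# the high-water mark for the current device.
torch.cuda.reset_peak_memory_stats()

# ... one forward/backward pass of the model would go here ...

peak_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"peak allocated: {peak_gb:.2f} GB")
```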
Comments
Hi, we train our model on eight Titan RTX (24 GB) GPUs. For SSGV321, the batch size is 2 per GPU and it uses about 16 GB on each one. For SSGV353, the batch size is 1 per GPU and it also uses about 16 GB. The reason the models need so much GPU memory is that they contain many tensor unfolding operations, whose PyTorch implementation is extremely memory-consuming during training. We are working on reducing the memory usage and improving the speed now. Thanks for your interest.
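A minimal sketch of the effect described above, assuming the unfolding is done with torch.nn.functional.unfold (the repository's actual ops may differ): unfold materializes every sliding window as its own column, so the unfolded tensor is roughly kernel-area times larger than the input, and autograd keeps it alive for the backward pass.

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 64, 56, 56, requires_grad=True)  # N, C, H, W

# Extract 3x3 patches at every spatial position; each window becomes a
# column, so the result holds 9 copies of (almost) every input element.
patches = F.unfold(x, kernel_size=3, padding=1)  # shape (2, 64*9, 56*56)

print(x.numel())        # 401408
print(patches.numel())  # 3612672 -- 9x the input, retained for backward
```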
Setting cudnn.benchmark = False solved the "CUDA runtime error" for me.
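For reference, the flag lives under torch.backends; a one-line sketch of the workaround above:

```python
import torch

# Disable cuDNN autotuning before the first forward pass; benchmark mode
# can select convolution algorithms whose workspace overflows GPU memory.
torch.backends.cudnn.benchmark = False
```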
Hello, will there be a lighter version that requires less GPU memory? Really looking forward to it.
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1
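This error usually means the model's parameters were not on device_ids[0] when nn.DataParallel was constructed. A minimal sketch of the usual fix, with a stand-in module rather than the repository's model:

```python
import torch
import torch.nn as nn

# Stand-in module; the real SSGV321/SSGV353 model would go here.
model = nn.Linear(16, 16)

# nn.DataParallel scatters replicas from device_ids[0], so the parameters
# must already live on that GPU when the wrapper is created.
model = model.to("cuda:0")
model = nn.DataParallel(model, device_ids=[0, 1])

x = torch.randn(4, 16, device="cuda:0")
y = model(x)  # replicated across GPUs 0 and 1
```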