One error #75
Comments
What's your batch size, and how many GPUs did you use?
I have the same problem with only one GPU.
@Nanboy-Ronan What's your batch size?
Thank you for the reply. My training batch_size is 64. Should I reduce it? But I've seen people say they trained with 8 GPUs for 1.5 days, which makes me hesitant to continue.
Thanks for your patient reply, and I'm sorry I only saw it just now. I may have the same problem as Nanboy-Ronan, since I also have only one GPU.
@Nanboy-Ronan One GPU is not enough to train it. But if you still want to train it, you should tune your batch size to a much smaller value.
May I know how to do this? This is my major project.
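If the training script exposes a batch-size flag or config, lowering it there is the simplest fix. Gradient accumulation can then keep the effective batch size at 64 while only ever holding a small micro-batch on the GPU. A minimal sketch under those assumptions (the dataset, model, and optimizer below are dummy stand-ins, not this repo's code):

```python
# Sketch: shrink the per-step batch so it fits in GPU memory, and
# accumulate gradients so the effective batch size is still 64.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

micro_batch = 8                   # small enough for an 8 GiB card; tune as needed
accum_steps = 64 // micro_batch   # 64 was the original batch size

# Dummy stand-ins for the project's dataset and model (CIFAR-shaped inputs).
dataset = TensorDataset(torch.randn(256, 3, 32, 32), torch.randint(0, 10, (256,)))
loader = DataLoader(dataset, batch_size=micro_batch, shuffle=True)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).cuda()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

optimizer.zero_grad()
for step, (images, labels) in enumerate(loader):
    images, labels = images.cuda(), labels.cuda()
    # Scale the loss so the accumulated gradient matches a batch of 64.
    loss = criterion(model(images), labels) / accum_steps
    loss.backward()
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

Note that batch normalization statistics are still computed per micro-batch, so results may differ slightly from true large-batch training.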
RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 7.94 GiB total capacity; 7.03 GiB already allocated; 144.81 MiB free; 7.12 GiB reserved in total by PyTorch)
I got this error after running `python exps/cifar_train` in the terminal; it appears right after:
"path:logs/cifar_train_2022_03_22_19_29_36
0%| | 0/1563 [00:00<?, ?it/s]"
I know this means CUDA is out of memory, but I am only running this one program, and the images have not even been loaded yet. Did the author or anyone else run into this, and how did you deal with it?
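The traceback says 7.03 GiB of the 7.94 GiB card is already allocated by this process when the extra 1 GiB allocation fails, which points at the model and batch size rather than another program. A quick check with stock PyTorch calls (no project code assumed) to see what is occupying the GPU before training, `nvidia-smi` gives the same picture per process:

```python
# Print total, allocated, and reserved memory for GPU 0.
import torch

props = torch.cuda.get_device_properties(0)
gib = 1024 ** 3
print(props.name)
print(f"total:     {props.total_memory / gib:.2f} GiB")
print(f"allocated: {torch.cuda.memory_allocated(0) / gib:.2f} GiB")
print(f"reserved:  {torch.cuda.memory_reserved(0) / gib:.2f} GiB")
```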