CUDA out of memory #24
Comments
Hi, you may want to use a smaller `--num_images`. Also, please confirm that flash attention (which we use by default) is actually being used, since it reduces memory usage.
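A quick way to sanity-check the first part of this advice is to confirm the `flash_attn` package is even importable; if it is not installed, the code may silently fall back to standard attention, which uses far more memory. A minimal sketch (assuming a standard Python environment; the package name `flash_attn` is the usual PyPI name for flash attention):

```python
import importlib.util

# If flash_attn is missing, attention falls back to the default
# implementation and GPU memory usage grows sharply with sequence length.
has_flash_attn = importlib.util.find_spec("flash_attn") is not None
print(f"flash_attn installed: {has_flash_attn}")
```

If this prints `False`, installing `flash-attn` (which requires a compatible CUDA toolkit and GPU) is worth trying before reducing `--num_images` further.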
I encountered the same issue, and reducing `--num_images` did not resolve the problem. Since the error message indicates an out-of-memory error during the model-weight-loading phase, could you please provide an estimate of how much GPU memory is required to run this project?
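The thread never states an official requirement, but a rough lower bound for the weight-loading phase can be estimated from the parameter count and dtype. A back-of-envelope sketch (the 1.5B parameter count below is a hypothetical example, not the actual size of this model; activations, the CUDA context, and any optimizer state add more on top):

```python
def weight_memory_gib(num_params: int, bytes_per_param: int = 2) -> float:
    """Approximate GiB needed just to hold the model weights in GPU memory.

    bytes_per_param: 2 for fp16/bf16, 4 for fp32.
    """
    return num_params * bytes_per_param / 1024**3

# Hypothetical 1.5B-parameter model loaded in fp16:
print(f"{weight_memory_gib(1_500_000_000):.1f} GiB for weights alone")  # → 2.8 GiB
```

By this arithmetic, loading the same hypothetical model in fp32 doubles the figure, which is one reason OOM errors can appear at load time on 8 GB cards before inference even starts.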
Hello, I have met the same problem. I tried reducing `--num_images` to 2 or 1, and have confirmed that flash_attn runs normally. I ran the demo on an RTX 4060 with 8 GB of memory, and I would like to know how much GPU memory is needed for training and deployment. @frank-xwang Thanks, and looking forward to your reply.
Hello! Thank you so much for your amazing work. I'm posting to ask about the CUDA out-of-memory error that I encounter when running the InstanceDiffusion inference demo. I'm using a single RTX 3050, and no other process is using the GPU while the program runs.