About GPU memory usage #46
I'm also experiencing a CUDA out-of-memory issue with the non-local block. I'm trying to use a non-local block at the top of my network, for the bbox-regression conv head in Faster R-CNN. Do you have any ideas to address this?
@Monkey-D-Luffy-star @vombategeht Hi~ The larger the size (height, width, depth) of the feature maps, the more memory the matrix multiplication occupies. When I encounter this problem, I will:
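For scale: the pairwise matrix in a non-local block has one entry per pair of positions, so for N = H x W x D positions it holds N x M entries (M being the number of key/value positions). A quick back-of-the-envelope calculation (plain Python; FP32, one sample, one matrix assumed) shows why sub-sampling the key/value branch, as the `sub_sample` option in this repo does via pooling, cuts memory:

```python
def attn_matrix_mb(h, w, d=1, subsample=1, bytes_per_el=4):
    """Memory (MB) of the N x M pairwise matrix in a non-local block.

    N = h*w*d query positions; M = key/value positions after pooling
    each spatial dim by `subsample`. FP32 (4 bytes/element) by default.
    """
    n = h * w * d
    m = (h // subsample) * (w // subsample) * d
    return n * m * bytes_per_el / 1024 ** 2

# 64x64 feature map: full 4096 x 4096 pairwise matrix
print(attn_matrix_mb(64, 64))                # -> 64.0 MB
# pooling keys/values by 2 in each spatial dim shrinks M by 4x
print(attn_matrix_mb(64, 64, subsample=2))   # -> 16.0 MB
```

Note this counts a single matrix; with autograd, softmax intermediates, and batching, the actual peak usage is several times larger.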
@AlexHex7 Thanks, that helps a lot.
If the non-local block is applied to a low-level feature map, CUDA runs out of memory. Is this due to the amount of memory required to compute the attention matrix?
Looking forward to your reply.
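To make the question concrete: at a low-level (early, high-resolution) stage, N = H x W is large, and the attention matrix is N x N, so its memory grows with the fourth power of spatial resolution. A sketch under illustrative assumptions (FP32, batch size 1; the stage sizes are hypothetical, not from the issue):

```python
def pairwise_mem_gb(h, w, bytes_per_el=4):
    # N x N attention matrix for N = h*w positions, in GB
    n = h * w
    return n * n * bytes_per_el / 1024 ** 3

# hypothetical stride-4 map of an 800x1200 input vs a stride-32 map
low_level = pairwise_mem_gb(200, 300)    # N = 60000 -> ~13.4 GB
high_level = pairwise_mem_gb(25, 38)     # N = 950   -> ~3.4 MB
print(f"{low_level:.1f} GB vs {high_level * 1024:.1f} MB")
```

So yes: a single attention matrix at a low-level stage can exceed a typical GPU's memory on its own, while the same block at a high-level stage costs only megabytes.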