Reporting a minor bug #71
Comments

Thanks so much! I'll fix it soon
Hi, thanks for making the code public!

I found a minor bug here. The variable `self.pos_embed` keeps the CPU version of the positional embedding. This is the root cause of why you need to call `.to()` during the forward pass. To fix it, you can instead call `x = x + self.pos_embed_1`, where `self.pos_embed_1` is the correct GPU copy auto-created by PyTorch. This bug causes additional CPU-GPU communication time during training, but I am not quite sure how much it costs in practice.