Hi, nice work with the repo!
Just wondering why the text encoder is wrapped in a `with torch.no_grad()` block?
Thanks,
David
Did you continue this work? In my setup, the first six layers of the text encoder were frozen and the linear head was wrapped in a `with torch.no_grad()` block. The loss may not get as low under this setting, but the image encoder learned a good representation. That was my experiment. When I tried freezing no layers and deleting the `with torch.no_grad()` block, the loss went down quickly, but the image encoder was not as robust.
Also wondering why the FC head following the text embeddings is wrapped. Is it perhaps to prevent 'double counting' the error, since we still consider the image half?
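For anyone following along, here is a minimal sketch of the setup described above. This is an assumption-laden illustration, not the repo's actual code: the module names `text_encoder`, `image_encoder`, and `text_fc`, and the `.layers` attribute on the text encoder, are all hypothetical.

```python
# Minimal sketch (hypothetical modules: text_encoder, image_encoder, text_fc).
# Assumes a transformer-style text encoder whose blocks are exposed as
# `text_encoder.layers`; adapt to the actual model structure.
import torch
import torch.nn.functional as F

def freeze_first_layers(text_encoder, n_frozen=6):
    # Freeze the first `n_frozen` transformer blocks of the text encoder.
    for layer in list(text_encoder.layers)[:n_frozen]:
        for p in layer.parameters():
            p.requires_grad = False

def forward_pair(image_encoder, text_encoder, text_fc, images, tokens):
    img_emb = image_encoder(images)      # gradients flow into the image tower
    with torch.no_grad():
        txt_feat = text_encoder(tokens)  # no gradients through the text tower
        txt_emb = text_fc(txt_feat)      # the FC head is also excluded from backprop
    return F.normalize(img_emb, dim=-1), F.normalize(txt_emb, dim=-1)
```

Under this arrangement only the image tower is updated by the contrastive loss, which matches the observation above: training is slower to reduce the loss, but the image encoder cannot "cheat" by dragging the text embeddings toward it.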