some details questions #20
Comments
I have another small question: is the best result in the paper obtained with subset=True?
I have the same problem. If I set subset=False, the pretraining loss doesn't even decrease.
Me too. I hope the author can clear this up for us.
Same problem here.
I have a question about the one-hot encoding at line 275 of my fine-tuning script. Should I write
onehot_label = F.one_hot(labels, config.num_classes_target)
instead of
onehot_label = F.one_hot(labels)
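For reference, here is a minimal sketch of why passing `num_classes` usually matters (the variable `num_classes_target` below is a stand-in for `config.num_classes_target`, assumed to hold the target dataset's total class count):

```python
import torch
import torch.nn.functional as F

# Hypothetical batch from a 5-class target dataset where the sampled
# batch happens to contain only labels 0-2.
num_classes_target = 5
labels = torch.tensor([0, 2, 1])

# Without num_classes, F.one_hot infers the width from the batch's
# maximum label, so this encoding is only 3 columns wide.
inferred = F.one_hot(labels)

# Passing num_classes pins the width to the full class count, keeping
# the encoding shape consistent across batches.
fixed = F.one_hot(labels, num_classes_target)

print(inferred.shape)  # torch.Size([3, 3])
print(fixed.shape)     # torch.Size([3, 5])
```

So if a batch does not contain the highest class index, the two calls produce different shapes, which can break a loss computed against logits of width `num_classes_target`.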