Are pseudo labels with high confidence retained? #17
We optimize the student model only based on the losses of the tokens which the student model predicts with high confidence. The corresponding line in the code is Line 88 in 32f2698.
Thanks for the reply. But in the above line, 'pred_labels' come from the teacher model, and you are getting the mask of the teacher model's confident predictions, right? What I understood was: you check which token predictions of the teacher are confident, and then calculate the student's loss on those tokens only. Please correct me if I'm wrong.
What you understand is correct. I was saying that "we optimize the student model only based on the losses of the tokens which the teacher model predicts with high confidence". Sorry for the typo.
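For readers following along, the mechanism confirmed above can be sketched as follows. This is a minimal, hypothetical PyTorch illustration, not the repository's actual code: the function name `confidence_masked_loss` and the 0.9 threshold are assumptions, and the real Line 88 may differ in details.

```python
import torch
import torch.nn.functional as F

def confidence_masked_loss(student_logits, teacher_logits, threshold=0.9):
    # Hypothetical sketch: teacher's soft predictions give per-token
    # pseudo labels plus a confidence score (max softmax probability).
    teacher_probs = F.softmax(teacher_logits, dim=-1)
    confidence, pseudo_labels = teacher_probs.max(dim=-1)

    # Keep only the tokens the TEACHER predicts with high confidence.
    mask = confidence > threshold

    # Per-token cross-entropy of the student against the pseudo labels.
    per_token_loss = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        pseudo_labels.view(-1),
        reduction="none",
    ).view_as(pseudo_labels)

    # Average the loss over the confident tokens only
    # (clamp avoids division by zero when no token is confident).
    denom = mask.sum().clamp(min=1)
    return (per_token_loss * mask.float()).sum() / denom
```

With this masking, tokens where the teacher is uncertain contribute nothing to the student's gradient, which matches the clarification above.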
Thanks @cliang1453. Just one more query: my task is also token-level classification. Would it make sense to just use mean teacher for training? Something like:
In the paper, it's mentioned that 'we select samples based on the prediction confidence of the student model to further improve the quality of soft labels.' But it's also mentioned that 'we discard all pseudo-labels from the (t-1)-th iteration, and only train the student model using pseudo-labels generated by the teacher model at the t-th iteration'.
Is the first statement talking about calculating the student model's loss only on the high-confidence pseudo labels, or is it something else? In the code I couldn't find any other justification for this line. Please suggest.