
Are pseudo labels with high confidence retained? #17

Open

tarunbhatiaind opened this issue Mar 13, 2022 · 4 comments

Comments

@tarunbhatiaind

In the paper, it's mentioned that 'we select samples based on the prediction confidence of the student model to further improve the quality of soft labels.' But it's also mentioned that 'we discard all pseudo-labels from the (t-1)-th iteration, and only train the student model using pseudo-labels generated by the teacher model at the t-th iteration'.

Is the first statement talking about calculating the loss of the student model only on the high-confidence pseudo labels, or is it something else? In the code I couldn't find any other justification for this line. Please suggest.

@cliang1453
Owner

We optimize the student model only based on the losses of the tokens which the student model predicts with high confidence. The corresponding line in the code is:

label_mask = (pred_labels.max(dim=-1)[0]>_threshold)
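
For readers following along, here is a minimal sketch of how such a mask can gate a token-level loss. This is not the repository's exact code; the shapes, the 0.9 threshold, and the surrounding variable names are illustrative assumptions.

import torch
import torch.nn.functional as F

# Illustrative shapes for a batch of token-level predictions.
batch, seq_len, num_labels = 8, 128, 9
student_logits = torch.randn(batch, seq_len, num_labels)  # student outputs
pred_labels = torch.softmax(torch.randn(batch, seq_len, num_labels), dim=-1)  # soft pseudo labels
_threshold = 0.9  # assumed confidence cutoff, not the repo's value

# Keep only tokens whose maximum predicted probability exceeds the threshold.
label_mask = (pred_labels.max(dim=-1)[0] > _threshold)  # (batch, seq_len), bool

# Token-level cross entropy against the hard pseudo labels,
# averaged over the confident tokens only.
hard_labels = pred_labels.argmax(dim=-1)
per_token_loss = F.cross_entropy(
    student_logits.view(-1, num_labels),
    hard_labels.view(-1),
    reduction="none",
).view(batch, seq_len)
loss = (per_token_loss * label_mask.float()).sum() / label_mask.float().sum().clamp(min=1)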

@tarunbhatiaind
Author

Thanks for the reply. But in the above line, 'pred_labels' comes from the teacher model, so you are getting the mask of the teacher model's confident predictions, right? What I understood is: you check which of the teacher's token predictions are confident and then calculate the student's loss for those tokens only. Please correct me if I'm wrong.

@cliang1453
Owner

Your understanding is correct. I meant to say that "we optimize the student model only based on the losses of the tokens which the teacher model predicts with high confidence". Sorry for the typo.

@tarunbhatiaind
Author

tarunbhatiaind commented Mar 25, 2022

Thanks @cliang1453. Just one more query. My task is also token-level classification. Would it make sense to just use a mean-teacher setup for training? Something like this (see the sketch after this comment):
1. Learn a model in stage 1 with less data.
2. Use that model to initialize both the teacher and the student for the second stage.
3. Give the teacher all the unlabeled data so it generates pseudo labels for the student to train on (calculating cross-entropy loss only on the teacher's confident predictions). Use a consistency cost between the soft labels of the student and the teacher, and then update the teacher with an exponential moving average of the student's weights. Continue this for later epochs.

It would be really helpful if I could get a comment on this.
Thanks in advance!
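
A minimal sketch of the loop described above, assuming PyTorch and hypothetical teacher(tokens) / student(tokens) token classifiers that return per-token logits of shape (batch, seq_len, num_labels); the threshold, EMA decay, and loss weighting are illustrative, not values from this repository:

import torch
import torch.nn.functional as F

def ema_update(teacher, student, decay=0.999):
    # Exponential moving average of the student's weights into the teacher.
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(decay).add_(s_param, alpha=1 - decay)

def mean_teacher_step(teacher, student, optimizer, tokens, threshold=0.9, lam=1.0):
    # Teacher generates soft pseudo labels on the unlabeled batch.
    with torch.no_grad():
        teacher_probs = torch.softmax(teacher(tokens), dim=-1)  # (B, T, C)
    mask = teacher_probs.max(dim=-1)[0] > threshold             # confident tokens only

    student_logits = student(tokens)                            # (B, T, C)
    # Cross entropy on the teacher's confident hard pseudo labels.
    ce = F.cross_entropy(
        student_logits.transpose(1, 2),      # (B, C, T) as F.cross_entropy expects
        teacher_probs.argmax(dim=-1),        # (B, T) hard pseudo labels
        reduction="none",
    )
    ce = (ce * mask.float()).sum() / mask.float().sum().clamp(min=1)

    # Consistency cost between student and teacher soft labels (MSE, as in Mean Teacher).
    consistency = F.mse_loss(torch.softmax(student_logits, dim=-1), teacher_probs)

    loss = ce + lam * consistency
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)  # teacher trails the student via EMA
    return loss.item()

One usage note: with the EMA decay close to 1 the teacher evolves slowly, so Mean Teacher setups commonly ramp up the consistency weight (lam here) over the first epochs rather than applying it at full strength from the start.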
