Paper Voting #1
Title: Knowledge distillation for semi-supervised domain adaptation
A paper applied to images, but it might offer some insight into how to adapt easily to distribution shifts as we take on more and more clients and have to match a wide variety of skills or searches/providers.
Title: Teaching Semi-Supervised Classifier via Generalized Distillation
Describes the concept of "generalized knowledge distillation", where the teacher network uses a "privileged" set of features to "teach" the student. I was thinking we might be able to use such an approach instead of having "one model to rule them all" (multiple tasks/heads), with the teacher network having access to privileged information such as skill co-occurrences or scraped skill definitions. A rough sketch of the idea follows below.
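
A minimal sketch of what that could look like, assuming a PyTorch setup: the teacher sees the standard features plus the privileged ones (e.g. skill co-occurrences), and the student, which only sees the standard features, is trained on a blend of hard labels and the teacher's temperature-softened predictions. All dimensions, feature names, and hyperparameters here are placeholders, not anything prescribed by the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed dimensions and hyperparameters (purely illustrative).
N_STD, N_PRIV, N_CLASSES = 64, 32, 10   # standard / privileged feature sizes, label count
TEMPERATURE, LAMBDA = 2.0, 0.5          # softening temperature, imitation weight

# Teacher gets standard + privileged features; student gets standard features only.
teacher = nn.Sequential(nn.Linear(N_STD + N_PRIV, 128), nn.ReLU(), nn.Linear(128, N_CLASSES))
student = nn.Sequential(nn.Linear(N_STD, 128), nn.ReLU(), nn.Linear(128, N_CLASSES))

def distillation_loss(student_logits, teacher_logits, hard_labels):
    # Soft targets: the teacher's predictions softened by the temperature.
    soft_targets = F.softmax(teacher_logits / TEMPERATURE, dim=-1)
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / TEMPERATURE, dim=-1),
        soft_targets,
        reduction="batchmean",
    ) * TEMPERATURE ** 2
    # Hard loss: ordinary cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, hard_labels)
    return LAMBDA * soft_loss + (1 - LAMBDA) * hard_loss

# Usage with dummy data standing in for labelled examples that carry privileged info.
x_std = torch.randn(8, N_STD)       # standard features, available at inference time
x_priv = torch.randn(8, N_PRIV)     # privileged features, e.g. co-occurrence statistics
y = torch.randint(0, N_CLASSES, (8,))

with torch.no_grad():
    teacher_logits = teacher(torch.cat([x_std, x_priv], dim=-1))
loss = distillation_loss(student(x_std), teacher_logits, y)
loss.backward()
```

The appeal is that the privileged features are only needed at training time; at inference the student runs on the standard features alone, so the extra signals never become a serving dependency.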
Changing the format, so I'll close this for now. I might resurrect my posts as their own issues.
In this issue you can either:
Example: hadyelsahar#1