This code has been working great for me. Currently, the discriminative regularization in this codebase uses only the loss from the batch norm layers of the ConvolutionalSequence, but the paper covers the more general case of taking the loss from other points in the classifier as well.

I'd like to add losses at the higher layers - the MLP and y-hat. I'm curious whether anyone else is interested in working on this with me: the discriminative regularization parts are some of the trickier parts of building the training computation graph, and y-hat currently doesn't have a named node in the classifier graph.
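For concreteness, here is a framework-agnostic NumPy sketch of what "losses at the higher layers" means - not code from this repo (which builds the equivalent terms on the Theano graph via Blocks), and all names here (`classifier_activations`, `discriminative_regularization`, `layer_ids`) are hypothetical. The regularizer is the sum of squared differences between the classifier's activations on the original input and on the reconstruction, taken at whichever layers we choose - including an MLP layer and the softmax y-hat:

```python
import numpy as np

def classifier_activations(x, weights):
    """Hypothetical tiny classifier: returns the activation at every
    layer (ReLU hidden layers standing in for the conv/MLP stack,
    plus a softmax "y-hat"), so each can contribute a term."""
    acts = []
    h = x
    for W in weights:
        h = np.maximum(h @ W, 0.0)  # ReLU hidden layer
        acts.append(h)
    # softmax "y-hat" over the last hidden layer's pre-activations
    z = acts[-1]
    e = np.exp(z - z.max(axis=1, keepdims=True))
    acts.append(e / e.sum(axis=1, keepdims=True))
    return acts

def discriminative_regularization(x, x_hat, weights, layer_ids):
    """Sum of squared differences between classifier activations on
    the original x and on the reconstruction x_hat, over the chosen
    layers. Including the last index adds the y-hat term."""
    a = classifier_activations(x, weights)
    b = classifier_activations(x_hat, weights)
    return sum(np.sum((a[i] - b[i]) ** 2) for i in layer_ids)
```

In the actual codebase the analogous step would be selecting the extra named variables (MLP outputs, and a to-be-added y-hat node) out of the classifier's computation graph and summing their squared differences into the VAE cost, rather than only the batch norm outputs.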
I think that would be a good idea. I won't have time to work on it, but if you have questions or would like guidance I can sporadically chime in (and of course review the PR).