Add discriminative regularization to mlp output #6

Open

wants to merge 2 commits into base: master
Conversation

dribnet (Contributor) commented Aug 12, 2016

Mostly resolves #5.

@vdumoulin - curious whether this looks roughly right. I've confirmed that training starts, but I haven't fully trained a model with this change yet. I can do so if it doesn't have any obvious flaws and that would be useful.

Note: this also includes a commit that changes model loading so that it can be run with more recent versions of Blocks.
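As a rough illustration of the idea (not the actual code in this PR), a discriminative regularization term penalizes the mismatch between classifier features of the input and of its reconstruction, added on top of the usual pixel-space reconstruction loss. The `features` map, the weight matrix `W`, and the coefficient `lam` below are hypothetical stand-ins:

```python
import numpy as np

def features(x, W):
    # Hypothetical stand-in for the classifier MLP's hidden activations.
    return np.tanh(x @ W)

def discriminative_regularized_loss(x, x_hat, W, lam=0.1):
    # Standard pixel-space reconstruction term.
    recon = np.mean((x - x_hat) ** 2)
    # Discriminative term: penalize mismatch between the classifier
    # features of the input and of its reconstruction.
    disc = np.mean((features(x, W) - features(x_hat, W)) ** 2)
    return recon + lam * disc

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
W = rng.normal(size=(8, 16))
loss = discriminative_regularized_loss(x, x + 0.05, W)
```

A perfect reconstruction makes both terms vanish, and `lam` trades off pixel fidelity against feature fidelity.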

dribnet (Contributor, Author) commented Aug 15, 2016

Update: this runs and trains, but the loss explodes after about six epochs, and I've confirmed that the new discriminative MLP term is the cause. So something here is not quite right.
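One common way to tame an exploding auxiliary term is to ramp its weight in gradually rather than applying it at full strength from epoch one. This is a speculative mitigation, not something proposed in the PR itself; `annealed_weight` and its parameters are hypothetical:

```python
def annealed_weight(epoch, target=0.1, warmup_epochs=10):
    # Linearly ramp the discriminative weight from 0 up to `target`
    # over the first `warmup_epochs` epochs, then hold it constant.
    return target * min(1.0, epoch / warmup_epochs)

# Weight schedule over the first few epochs.
schedule = [annealed_weight(e) for e in range(12)]
```

If the loss still diverges even with a small, slowly annealed weight, that would point at a bug in the new term rather than a tuning problem.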

Successfully merging this pull request may close these issues.

add discriminative loss for mlp and y-hat