What is the correct way continue training after xth epoch? #129

Open
euwern opened this issue Aug 31, 2016 · 1 comment
Comments

euwern commented Aug 31, 2016

I am currently using optim.adam to train my network. Let's say I train my network up to the xth epoch and save my model. What settings from the optim function should I also save in order to continue training?

I notice that if I just load my saved model, the computed loss does not follow the previous trend (it actually jumps back to the loss from the first epoch). There must be some state I need to reload in order to get back to a similar loss.

The way I compare results: I train continuously to epoch x + n and record the loss, having saved the model at epoch x. I then reload the saved model from epoch x, train for n more epochs, and compare the resulting losses.

Technically speaking, the two losses should be similar. I hope someone can shed some light on this issue.

@gulvarol

I am not sure, but could it be the 'state' variable that you might need to reload when continuing from a saved model?
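This matches how `optim.adam` works: it keeps its running moment estimates and step counter in the `state` table you pass in, so saving only the model loses them and the optimizer effectively restarts. A minimal sketch of resuming, assuming the usual Torch7 training setup with `model`, `params` (flattened parameters), and a `feval` closure (the filenames here are illustrative):

```lua
require 'torch'
require 'optim'

-- Adam reads hyperparameters from this table and also writes its
-- internal state into it (t, m, v, denom) as training progresses.
local optimState = {learningRate = 1e-3}

-- ...inside the training loop...
optim.adam(feval, params, optimState)

-- At epoch x: save the model AND the optimizer state together.
torch.save('model_epoch_x.t7', model)
torch.save('optimState_epoch_x.t7', optimState)

-- To resume later: reload both before continuing to train.
model = torch.load('model_epoch_x.t7')
optimState = torch.load('optimState_epoch_x.t7')
-- subsequent optim.adam(feval, params, optimState) calls now
-- continue from the saved moment estimates and timestep
```

If only the model is reloaded, Adam restarts with `t = 0` and zeroed moment estimates, which (together with any reset learning-rate schedule) can explain the loss jumping back toward its first-epoch value.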
