Training loss doesn't decrease #2
Comments
Me too, the loss doesn't decrease!
Use notebook 2.1.
I've made a few changes now. Check it out.
Thanks for the response. I was able to run notebook 2.1 on my own PC with a GTX 1080 Ti, but it always crashes due to running out of graphics memory on my university's compute cluster, which has a Tesla V100 32GB. Do you know the reason for this by any chance?
How can I know if you don't provide any details, not even the error message...
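In case it helps with the out-of-memory crash mentioned above, here is a minimal sketch, assuming the notebooks use TensorFlow/Keras (suggested by the `use_multiprocessing` flag discussed in this thread). By default TensorFlow reserves the whole GPU at startup, and on a shared cluster node another job may already hold part of the card; enabling memory growth switches to on-demand allocation, which is a common first thing to try.

```python
# Sketch only: enable on-demand GPU memory allocation in TensorFlow 2.x.
# Run this before building or training any model.
import tensorflow as tf

for gpu in tf.config.list_physical_devices("GPU"):
    # Allocate memory as needed instead of reserving the entire card up front.
    tf.config.experimental.set_memory_growth(gpu, True)
```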
Hi, I tried to run your notebooks. For notebook 2, my training loss is still around 0.19 after 200 epochs, so the resulting model is still pretty bad. The only thing I changed is "use_multiprocessing" to False, because it kept raising an error when it was True.
Do you know how to fix this issue?
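For reference, a minimal sketch of the kind of training call being described, assuming an older tf.keras (TensorFlow 2.x) `Model.fit` that still accepts the `workers` and `use_multiprocessing` arguments; the data source, model, and hyperparameters below are placeholders, not the notebook's actual values.

```python
import numpy as np
import tensorflow as tf

class RandomBatches(tf.keras.utils.Sequence):
    """Placeholder data source standing in for the notebook's generator."""
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        x = np.random.rand(32, 16).astype("float32")
        y = np.random.rand(32, 1).astype("float32")
        return x, y

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# workers=1 with use_multiprocessing=False keeps data loading in the main
# process, which sidesteps the error reported when multiprocessing is on.
model.fit(RandomBatches(), epochs=2, workers=1, use_multiprocessing=False)
```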