TensorFlow == 1.15
Keras == 2.2.4
Running on Google Colab. I don't have a GPU.
I'm trying to run the SSD300_training notebook, but I'm getting an invalid loss error. The error is:

```
Epoch 1/10
Epoch 00001: LearningRateScheduler setting learning rate to 0.001.
   6/1000 [..............................] - ETA: 9:26:20 - loss: nan
Batch 5: Invalid loss, terminating training
```
I'm running with the Adam optimizer. The parameters are:

```python
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
batch_size = 8
initial_epoch = 0
final_epoch = 10
steps_per_epoch = 1000
```
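For context, here is a minimal sketch of how these settings feed the training call, assuming the standard Keras 2.2.4 API; `model`, `ssd_loss`, and `train_generator` stand in for objects built earlier in the SSD300_training notebook and are not defined here. The `clipnorm` argument is not part of the setup above; it is a common guard against the exploding gradients that can produce a NaN loss:

```python
# Sketch only: `model`, `ssd_loss`, and `train_generator` are assumed to
# come from earlier cells of SSD300_training.ipynb, so the calls that use
# them are left commented out.
from keras.optimizers import Adam
from keras.callbacks import LearningRateScheduler

# clipnorm is an ADDED assumption, not in the original setup: it clips the
# gradient norm to 1.0, a common safeguard against NaN losses.
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08,
            decay=0.0, clipnorm=1.0)

# model.compile(optimizer=adam, loss=ssd_loss.compute_loss)
# model.fit_generator(train_generator,
#                     steps_per_epoch=1000,
#                     epochs=10,
#                     initial_epoch=0,
#                     callbacks=[LearningRateScheduler(lambda epoch: 0.001)])
```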
Could someone help me? I really need to run this code. I have also tried switching to the SGD optimizer, but it gives the same error...
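For reference, a sketch of that SGD swap using the Keras 2.2.4 signature; the momentum and decay values here are illustrative assumptions, not the exact ones used:

```python
# Hypothetical SGD swap-in for the Adam optimizer above.
# momentum=0.9 and decay=0.0 are illustrative values, not a recorded setup.
from keras.optimizers import SGD

sgd = SGD(lr=0.001, momentum=0.9, decay=0.0, nesterov=False)
# model.compile(optimizer=sgd, loss=ssd_loss.compute_loss)
```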
Did you solve the problem?
No... I give up :/
I got a fix for it. I increased the batch size and made a few other changes to the dependencies, and now I am able to train.
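Roughly, the kind of change described; the exact values weren't shared, so the numbers below are hypothetical examples, not the real settings:

```python
# Hypothetical illustration of the fix described above; the actual values
# were not shared in this thread.
batch_size = 16        # increased from 8; larger batches give smoother gradient estimates
steps_per_epoch = 500  # halved so an epoch still covers roughly as many samples
```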
@ManishKakarla could you please share what you changed apart from the batch size? Thanks!