The sentences generated by the trained model are incomplete #9
Comments
Hello, how long does it take to train this model? Do I really need a GPU to do that? I'm a beginner, thank you.
@Aoki1994 Hi, if you follow the parameters in the original code, training takes about 12 hours. For myself, I changed the code and set the hidden units in the LSTM to 1000, which takes about 24 hours. Absolutely, I strongly suggest you use a GPU. BTW, a GTX 1080 is enough.
@chenxinpeng Thank you very much! I'll give it a try.
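For context, the hidden-unit change mentioned above is just a hyperparameter of the LSTM cells. The snippet below is a minimal illustrative sketch of that kind of setup in TensorFlow 1.x; the names `dim_hidden`, `n_video_steps`, and `n_caption_steps` are placeholders and may not match the actual variables in this repository.

```python
import tensorflow as tf

# Hypothetical hyperparameters -- illustrative names, not necessarily
# the ones used in this repository.
dim_hidden = 1000      # LSTM hidden units; the comment above suggests 1000 instead of the default
n_video_steps = 80     # number of encoded video frames per clip
n_caption_steps = 20   # maximum caption length
batch_size = 50

# Two stacked LSTM cells, as in an S2VT-style encoder-decoder.
lstm1 = tf.nn.rnn_cell.BasicLSTMCell(dim_hidden, state_is_tuple=False)
lstm2 = tf.nn.rnn_cell.BasicLSTMCell(dim_hidden, state_is_tuple=False)
```

A larger hidden size roughly trades training time for model capacity, which matches the 12h vs. 24h figures quoted above.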
For anyone still interested in the original problem, I believe it is caused by the following. To fix this, modify lines 267 and 268 as follows:
Disclaimer: I have not trained it yet, but the caption and mask now look correct. :) Also, make sure your threshold for preProBuildWordVocab is not too low if you are missing words... Edit: trained for 200 epochs, can confirm that this fixes it!
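For readers unfamiliar with it, a vocabulary-building helper of this kind typically drops words that occur fewer times than a count threshold. The sketch below is an illustrative NeuralTalk-style version, not the exact code from this repository; the argument names `sentence_iterator` and `word_count_threshold` are assumptions.

```python
def preProBuildWordVocab(sentence_iterator, word_count_threshold=5):
    # Count how often each word appears across all training captions.
    word_counts = {}
    for sent in sentence_iterator:
        for w in sent.lower().split(' '):
            word_counts[w] = word_counts.get(w, 0) + 1

    # Keep only words seen at least `word_count_threshold` times;
    # raising the threshold removes more rare words from the vocabulary.
    vocab = [w for w in word_counts if word_counts[w] >= word_count_threshold]

    # Build index <-> word maps; index 0 is reserved for the start/pad token.
    ixtoword = {0: '.'}
    wordtoix = {'#START#': 0}
    for ix, w in enumerate(vocab, start=1):
        wordtoix[w] = ix
        ixtoword[ix] = w
    return wordtoix, ixtoword
```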
First, thanks for your hard work on the code; it's very generous of you to share it. :)
But when I use the trained model to generate sentences, I always get sentences like this:
The sentences are mostly incomplete, as if they were truncated.
Strangely, when I then used the coco-caption code to evaluate the sentences I generated, the METEOR value was 27.7%, which is very close to the paper.
So I want to know how to solve this problem. Can you give me some advice? I think the problem may be caused by the code.
Thank you for your assistance.
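For reference, the METEOR score mentioned above can be computed with the scorers shipped in the coco-caption repository. This is a minimal sketch assuming the generated and reference captions have already been collected into dictionaries keyed by video id; the variable names `gts` and `res` and the example captions are purely illustrative, and both the tokenizer and the METEOR scorer require Java to be installed.

```python
from pycocoevalcap.tokenizer.ptbtokenizer import PTBTokenizer
from pycocoevalcap.meteor.meteor import Meteor

# gts holds the reference captions, res the generated caption per video,
# both in the format {video_id: [{"caption": "..."}, ...]}.
gts = {"vid1": [{"caption": "a man is slicing an onion"}]}
res = {"vid1": [{"caption": "a man is"}]}  # a truncated output like the ones described above

tokenizer = PTBTokenizer()
gts = tokenizer.tokenize(gts)
res = tokenizer.tokenize(res)

score, per_video_scores = Meteor().compute_score(gts, res)
print("METEOR: %.4f" % score)
```

A corpus-level METEOR close to the paper despite visibly truncated sentences is plausible, since the metric rewards matched words even when the ending of a caption is missing.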