
Using trained model in online manner for arbitrary length output. #4

Open
filmo opened this issue Nov 11, 2017 · 1 comment

Comments


filmo commented Nov 11, 2017

Right now it seems like generate.py uses a lot of CUDA memory during inference.

For example, I trained a small 2-layer GRU network with a hidden size of 150 on the Shakespeare corpus.

When it comes time to generate text, I feed it a largish --predict_len and my GTX-1070 (8GB) ends up running out of memory. (For example, --predict_len 5000 dies due to lack of GPU memory.)

I would think it should be possible to run inference with this model by feeding it one character at a time (basically the last predicted character), so that it only has to do a single forward pass through the network per step. As it is now, it seems like each forward pass during inference allocates some bit of GPU memory that is never freed or reused. Thus, on a prediction run of 5000 characters, it's allocating something close to the size of the model (or perhaps 2x the model size) for each character of inference.

Question: is this a bug, or a necessary condition of this algorithm? (I don't recall having this same problem with the original Torch/Lua implementation.)

Is there a way to adjust this code to essentially do infinite inference (basically just keep generating text until told to stop)?
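
For reference, here is a minimal sketch of the kind of stepwise loop described above. It assumes a char-level GRU model with a `decoder(input, hidden)` / `init_hidden(batch_size)` interface and a `char_tensor` helper similar to the ones in this repo (those names are assumptions, not necessarily the exact API here), plus a recent PyTorch where `torch.no_grad()` is available. Wrapping the loop in `torch.no_grad()` keeps any autograd graph from being retained, and only the hidden state is carried between steps, so memory usage should stay roughly constant regardless of output length:

```python
import torch

def generate_stream(decoder, char_tensor, all_characters,
                    prime_str='A', temperature=0.8):
    """Yield one predicted character at a time, indefinitely."""
    hidden = decoder.init_hidden(1)           # assumed helper; batch size 1
    prime_input = char_tensor(prime_str)      # assumed helper: str -> LongTensor

    with torch.no_grad():                     # no autograd graph is built or kept
        # Warm up the hidden state on the priming string.
        for c in range(len(prime_str) - 1):
            _, hidden = decoder(prime_input[c].view(1, 1), hidden)
        inp = prime_input[-1].view(1, 1)

        while True:                           # "infinite inference"
            output, hidden = decoder(inp, hidden)

            # Sample the next character from the temperature-scaled softmax.
            probs = torch.softmax(output.view(-1) / temperature, dim=0)
            top_i = torch.multinomial(probs, 1).item()
            next_char = all_characters[top_i]
            yield next_char

            # Feed the prediction back in; only `hidden` persists across steps.
            inp = char_tensor(next_char).view(1, 1)
```

A caller could then just iterate over generate_stream(...) and break out of the loop whenever enough text has been produced.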

@ambyerhan

Have you fixed your problem? And is this model's structure the same as the original char-rnn-lm written by Yoon Kim?
