I've initialized the model like so:

```python
nlp = Pipeline('english', gpu=False, cache_dir='./cache')
```

Then I call it with:

```python
with torch.no_grad():
    for idx in range(10000):
        nlp.lemmatize('Hello World', is_sent=True)
```

When running this code, RAM usage slowly grows. I attached a graph of the memory filling up.

I'm using Python 3.7, trankit==1.1.0, torch==1.7.1.
Thank you!
The text was updated successfully, but these errors were encountered:
I confirm: when running on CPU, memory consumption keeps increasing. @navotoz , could you please tell me whether you have been able to solve this issue?
Hi @Dielianss @olegpolivin
Thanks for the comments. We managed to mitigate this issue by running inference in a Docker container and restarting it at a predefined interval.
This is not a real solution to the issue, but at least we can work with the model.
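The same "restart periodically" idea can be done inside Python, without Docker, by recycling the worker process itself: `multiprocessing.Pool(maxtasksperchild=N)` replaces a worker with a fresh process after N tasks, so any leaked memory is reclaimed when the old process exits. This is only a sketch of the pattern: `lemmatize_one` is a placeholder for the trankit call, and loading the real `Pipeline` inside each worker (e.g. via the pool's `initializer`) is left as an assumption.

```python
import multiprocessing as mp

def lemmatize_one(text):
    # Placeholder for: return nlp.lemmatize(text, is_sent=True)
    # (the real Pipeline would be created once per worker in an initializer)
    return text.lower()

def run(texts, tasks_per_worker=1000):
    # Each worker process is replaced after `tasks_per_worker` calls,
    # so memory leaked during inference is freed when the worker exits.
    with mp.Pool(processes=1, maxtasksperchild=tasks_per_worker) as pool:
        return pool.map(lemmatize_one, texts)

if __name__ == "__main__":
    results = run(["Hello World"] * 10, tasks_per_worker=5)
    print(results[0])
```

The trade-off is the cost of reloading the model each time a worker is recycled, so `tasks_per_worker` should be tuned against how fast the leak grows.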