Memory issue when continuously calling the lambda function #5
Hi, when I continuously call the lambda function, the memory increases call after call.

This is how I'm debugging the code:

```
console.log("imgToTensor: memory before: " + JSON.stringify(tf.memory()));
const tensor = await tf.tidy(() => tf.tensor3d(values, [height, width, 3]));
console.log("imgToTensor: memory after: " + JSON.stringify(tf.memory()));
```

The first time I call the function I get this:

```
imgToTensor: memory before: {"unreliable":true,"numTensors":263,"numDataBuffers":263,"numBytes":47349088}
imgToTensor: memory after: {"unreliable":true,"numTensors":264,"numDataBuffers":264,"numBytes":76663648}
```

The second time I call the function I get the following:

```
imgToTensor: memory before: {"unreliable":true,"numTensors":264,"numDataBuffers":264,"numBytes":76663648}
imgToTensor: memory after: {"unreliable":true,"numTensors":265,"numDataBuffers":265,"numBytes":105978208}
```

It looks like the statement

```
const tensor = await tf.tidy(() => tf.tensor3d(values, [height, width, 3]))
```

is the culprit: if you look at the "numTensors" property, it is increased after each function call.

After 5 lambda executions my lambda fails with:

```
Error: Runtime exited with error: signal: killed
```

Is there a way to clean up the resources from the previous lambda function call?

Thanks!

Comments

Solved it by adding after running the prediction.

Hi @jogando, this is an amazing finding, thanks for reporting 🙏 Would you mind creating a PR to update this repository?

Sure! This is the statement causing the issue:
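The growth in `numTensors` described in this thread is consistent with how `tf.tidy()` works in TensorFlow.js: `tidy` disposes the intermediate tensors created inside its callback, but it deliberately keeps the tensor the callback returns, so the caller owns that tensor and must call `dispose()` on it once it is no longer needed. Below is a simplified, dependency-free sketch of that bookkeeping. Note that `FakeTensor`, `tidy`, and `invoke` here are illustrative stand-ins written for this sketch, not the real `@tensorflow/tfjs` implementation:

```javascript
// Simplified model of tf.tidy() bookkeeping (illustrative only).
let numTensors = 0;     // analogue of tf.memory().numTensors
const trackingStack = []; // tensors created inside the active tidy scope

class FakeTensor {
  constructor() {
    this.disposed = false;
    numTensors += 1;
    if (trackingStack.length > 0) {
      trackingStack[trackingStack.length - 1].push(this);
    }
  }
  dispose() {
    if (!this.disposed) {
      this.disposed = true;
      numTensors -= 1;
    }
  }
}

// tidy() disposes every tensor created in fn EXCEPT the one fn returns:
// the caller owns the returned tensor and must dispose it explicitly.
function tidy(fn) {
  trackingStack.push([]);
  const result = fn();
  const created = trackingStack.pop();
  for (const t of created) {
    if (t !== result) t.dispose();
  }
  return result;
}

// Each "lambda invocation" leaks exactly one tensor if nothing disposes it.
function invoke() {
  return tidy(() => {
    const intermediate = new FakeTensor(); // disposed by tidy
    const output = new FakeTensor();       // returned, so it survives tidy
    return output;
  });
}

const t1 = invoke();
console.log(numTensors); // 1 -- the returned tensor is still alive
const t2 = invoke();
console.log(numTensors); // 2 -- grows per call, like numTensors in the report
t1.dispose();
t2.dispose();
console.log(numTensors); // 0 -- explicit dispose() stops the growth
```

Because AWS Lambda reuses the same Node.js process across warm invocations, any tensor that survives one invocation is still resident in the next, which is why memory climbs call after call until the runtime is killed. In real tfjs code the equivalent cleanup is calling `tensor.dispose()` (or `tf.dispose(tensor)`) on the tensor returned from `tf.tidy()` once the prediction is done.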