Memory issue when continuously calling the lambda function #5

Open
jogando opened this issue Dec 23, 2019 · 3 comments

jogando commented Dec 23, 2019

Hi, when I continuously call the lambda function, the memory increases call after call.

This is how I'm debugging the code:

console.log("imgToTensor: memory before: " + JSON.stringify(tf.memory()));
const tensor = await tf.tidy(() => tf.tensor3d(values, [height, width, 3]));
console.log("imgToTensor: memory after: " + JSON.stringify(tf.memory()));

The first time I call the function, I get this:
imgToTensor: memory before: {"unreliable":true,"numTensors":263,"numDataBuffers":263,"numBytes":47349088}
imgToTensor: memory after: {"unreliable":true,"numTensors":264,"numDataBuffers":264,"numBytes":76663648}

The second time I call the function, I get the following:
imgToTensor: memory before: {"unreliable":true,"numTensors":264,"numDataBuffers":264,"numBytes":76663648}
imgToTensor: memory after: {"unreliable":true,"numTensors":265,"numDataBuffers":265,"numBytes":105978208}

The leak seems to come from this statement:
const tensor = await tf.tidy(() => tf.tensor3d(values, [height, width, 3]))

If you take a look at the "numTensors" property, it increases after each function call.
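For reference, the byte deltas line up exactly: 76,663,648 − 47,349,088 = 105,978,208 − 76,663,648 = 29,314,560 bytes per call, which is consistent with one height × width × 3 float32 tensor (4 bytes per element) being left behind on every invocation.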

After 5 lambda executions my lambda fails with
Error: Runtime exited with error: signal: killed

Is there a way to clean up the resources from the previous lambda function call?

Thanks!

jogando commented Dec 23, 2019

Solved it by adding

tensor.dispose();

after running the prediction:

const { scores, boxes } = await predict(tfModel, tensor)
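For anyone hitting the same thing, here is a minimal sketch of the pattern (the handler shape, the run name, and the values/height/width inputs are assumptions for illustration; predict and tfModel are the names used above, not defined here):

```js
const tf = require('@tensorflow/tfjs-node');

// Sketch only: build the input tensor, run the prediction, then dispose the
// tensor explicitly so numTensors stays flat across warm Lambda invocations.
async function run(values, height, width, tfModel) {
  const tensor = tf.tensor3d(values, [height, width, 3]);
  try {
    // predict(tfModel, tensor) is the function referenced earlier in this thread.
    return await predict(tfModel, tensor);
  } finally {
    // Tensors returned out of tf.tidy (or created directly like this) are not
    // cleaned up automatically; without dispose() they accumulate on every call.
    tensor.dispose();
  }
}
```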

@lucleray
Owner

Hi @jogando,

This is an amazing finding, thanks for reporting 🙏 Would you mind creating a PR to update this repository?

jogando commented Dec 25, 2019

Sure!
I'm still investigating another issue.
Jimp is not properly releasing the memory, so on each lambda execution the total memory consumed increases by 10MB.

This is the statement causing the issue:
const image = await Jimp.read(imgBuffer)
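To make that growth visible, a small sketch (the wrapper name and logging are my own; only Jimp.read and process.memoryUsage are real APIs) that logs the Node process memory around the decode:

```js
const Jimp = require('jimp');

// Sketch: log heap/RSS before and after decoding, so the per-invocation growth
// retained by a warm Lambda container shows up in the function's logs.
async function readWithMemoryLog(imgBuffer) {
  const before = process.memoryUsage();
  const image = await Jimp.read(imgBuffer);
  const after = process.memoryUsage();
  console.log(
    `Jimp.read: heapUsed ${before.heapUsed} -> ${after.heapUsed}, rss ${before.rss} -> ${after.rss}`
  );
  return image;
}
```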
