
Memory of RAM #219

Open · hubery4 opened this issue Jun 25, 2021 · 4 comments

hubery4 commented Jun 25, 2021

Hi JiaRen,

I've run into a problem: when I run 'finetune.py' on my machine with 32 GB of RAM, the process is always killed after one epoch. The error is shown below.
[screenshot of the error]
I'm just using the KITTI 2015 training data for finetuning, starting from the pretrained SceneFlow model.

Is the problem that 32 GB of RAM is not enough? How much RAM did you use during training? Or is there something in 'finetune.py' I need to change to run with 32 GB or less?

Thank you

JiaRenChang (Owner) commented

Hi, @hubery4
You could reduce the batch size for both the training set and the validation set.
In validation, a pair of KITTI images takes roughly 4 GB of memory.
It seems that 32 GB is not enough for a batch size of 8 in validation.
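
For reference, a minimal sketch of where this change goes, assuming the two DataLoaders in PSMNet's finetune.py (the loader and class names follow that file, but treat the exact values as illustrative, not the repo's defaults):

```python
# Sketch: lower batch_size in both DataLoaders in finetune.py.
# DA.myImageFloder and the image-list variables are assumed to come
# from PSMNet's finetune.py; the batch sizes below are illustrative.
import torch.utils.data

TrainImgLoader = torch.utils.data.DataLoader(
    DA.myImageFloder(all_left_img, all_right_img, all_left_disp, True),
    batch_size=4,        # reduce this if training is killed for memory
    shuffle=True, num_workers=8, drop_last=False)

TestImgLoader = torch.utils.data.DataLoader(
    DA.myImageFloder(test_left_img, test_right_img, test_left_disp, False),
    batch_size=2,        # at ~4 GB per KITTI pair, 8 pairs exceed 32 GB
    shuffle=False, num_workers=4, drop_last=False)
```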

hubery4 (Author) commented Jun 28, 2021

@JiaRenChang
Thank you for the suggestion. I tried batch_size=1 and it still had this problem. I also tried Google Colab; each time, the process stops after one training epoch.
[screenshot of the error]

hubery4 (Author) commented Jun 28, 2021

@JiaRenChang
Thank you, I have figured out this problem. It was a dimension mismatch between true_disp and pred_disp.
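
For anyone hitting the same thing: the model output can carry an extra channel axis ([B, 1, H, W]) while the ground-truth disparity from the loader is [B, H, W], and computing the 3-px error with mismatched shapes can index or broadcast into a huge intermediate array. Below is a hedged sketch of the kind of fix implied here, an illustrative reimplementation rather than the repo's exact test() code:

```python
import torch

def three_px_error(pred_disp, true_disp):
    """3-px error between predicted and ground-truth disparity.

    pred_disp may come out of the model as [B, 1, H, W]; true_disp
    from the loader is [B, H, W]. Align the shapes before masking.
    """
    if pred_disp.dim() == 4:
        pred_disp = pred_disp.squeeze(1)   # [B, 1, H, W] -> [B, H, W]

    mask = true_disp > 0                   # valid ground-truth pixels only
    abs_err = torch.abs(true_disp[mask] - pred_disp[mask])
    # KITTI convention: a pixel is wrong if it is off by more than
    # 3 px and by more than 5% of the true disparity.
    wrong = (abs_err > 3) & (abs_err > 0.05 * true_disp[mask])
    return wrong.float().mean().item()
```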

tyleryzhu commented Sep 13, 2021

@hubery4 Could you explain what you mean by the dimension problem? I fixed this already in finetune.py by squeezing the extra dimension at axis=1, but I'm still getting this out-of-memory error with a 32 GB GPU. Thanks!
