performance #15

Closed · JesseZhang92 opened this issue Jan 23, 2019 · 8 comments

@JesseZhang92
First, thanks for the code. It helps me a lot.
However, with your code I do not reach the same performance as reported in Godard's paper. Did you obtain the same performance? Could you report what you obtain in the README?
Best,

@voeykovroman
Member

Hey @JesseZhang92. Unfortunately, we didn't evaluate our model with depth metrics; we only did a visual inspection of the disparity maps and watched the loss value. However, we'll probably add this in the future. As for your tests: did you run them on our pretrained model, or did you train from scratch using this code?

@JesseZhang92
Author

Thanks for the reply. I trained Resnet50_md from scratch using this code, with the same training set as Godard's. I then tested it on the KITTI Eigen split and found the best RMSE to be 5.3, worse than the numbers in Godard's paper. I've tried many batch size and learning rate settings, but it still does not produce better results.

@JesseZhang92
Author

Here the RMSE is calculated within 50 m. In Godard's paper the corresponding result is 4.471 in Table 2.
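For reference, this is roughly how a capped RMSE is usually computed on KITTI (a minimal sketch, not the exact code used here; `gt_depth` and `pred_depth` are assumed to be NumPy arrays of metric depth for one image, and `rmse_capped` is a hypothetical helper):

```python
import numpy as np

def rmse_capped(gt_depth, pred_depth, min_depth=1e-3, max_depth=50.0):
    """RMSE restricted to pixels whose ground-truth depth lies in (min_depth, max_depth)."""
    mask = (gt_depth > min_depth) & (gt_depth < max_depth)
    # Predictions are typically clipped to the same range before comparison.
    pred = np.clip(pred_depth[mask], min_depth, max_depth)
    gt = gt_depth[mask]
    return np.sqrt(np.mean((gt - pred) ** 2))
```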

@voeykovroman
Member

Sorry for the late response, I'm loaded with work at the moment. I see your problem, and to me it looks like an issue with the training procedure: checking more hyperparameters, choosing the right optimizer, continuing training of the model (already trained with Adam) with plain SGD (that sometimes helps), or annealing the learning rate of the already trained model.
Also, I wouldn't expect identical performance across different frameworks; such performance differences happen quite often.
Moreover, our model and the TF model are not perfectly identical: we introduced batch norm between the convolution layers, because without it we couldn't train the model at all.
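For illustration, a minimal PyTorch sketch of the two ideas above: batch norm between convolutions, and continuing an Adam-trained model with SGD plus learning-rate annealing. The layer sizes, hyperparameters, and the placeholder model are illustrative, not taken from this repo:

```python
import torch.nn as nn
import torch.optim as optim

# A conv block of the kind described above: batch norm inserted between
# the convolution and the activation (a sketch, not the repo's exact layers).
def conv_bn(in_ch, out_ch, kernel_size=3, stride=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding=kernel_size // 2),
        nn.BatchNorm2d(out_ch),
        nn.ELU(inplace=True),
    )

# Placeholder network standing in for Resnet50_md.
model = nn.Sequential(conv_bn(3, 32), conv_bn(32, 64))

# Continuing training with plain SGD after Adam only requires building a new
# optimizer over the same (already trained) parameters; values are illustrative.
optimizer = optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)

# Annealing: halve the learning rate every 10 epochs (also illustrative).
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
```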

@GabrielMon

> Thanks for the reply. I trained Resnet50_md from scratch using this code, with the same training set as Godard's. I then tested it on the KITTI Eigen split and found the best RMSE to be 5.3, worse than the numbers in Godard's paper. I've tried many batch size and learning rate settings, but it still does not produce better results.

Can you explain how you calculated RMSE in this code?

@JesseZhang92
Author

> Can you explain how you calculated RMSE in this code?

Hi, it has been a long time since I used this project. As I remember, you should carefully follow the evaluation protocol provided in the original project: https://github.com/mrharicot/monodepth/blob/master/utils/evaluation_utils.py and https://github.com/mrharicot/monodepth/blob/master/utils/evaluate_kitti.py.
Some 'masking' operations may influence the performance, as they change which valid points are measured. You may want to check those operations carefully.
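For reference, this is roughly what that masking looks like on the Eigen split (a sketch of the general protocol, not a verbatim copy of evaluation_utils.py; the crop fractions are the commonly used Garg crop, and `eigen_valid_mask` is a hypothetical helper):

```python
import numpy as np

def eigen_valid_mask(gt_depth, min_depth=1e-3, max_depth=50.0, use_garg_crop=True):
    """Boolean mask of the pixels that count towards the metrics: ground truth
    inside the depth range, optionally restricted to the Garg crop."""
    mask = (gt_depth > min_depth) & (gt_depth < max_depth)
    if use_garg_crop:
        h, w = gt_depth.shape
        # Crop fractions commonly used for the KITTI Eigen split (Garg et al.).
        crop = np.array([0.40810811 * h, 0.99189189 * h,
                         0.03594771 * w, 0.96405229 * w]).astype(np.int32)
        crop_mask = np.zeros_like(mask)
        crop_mask[crop[0]:crop[1], crop[2]:crop[3]] = True
        mask &= crop_mask
    return mask
```

If the depth cap, the crop, or the resizing of the predicted disparity to the ground-truth resolution differs from the original scripts, the resulting RMSE is not directly comparable.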

@JesseZhang92
Author

> Can you explain how you calculated RMSE in this code?

Check issue #22.

@GabrielMon

> Hi, it has been a long time since I used this project. As I remember, you should carefully follow the evaluation protocol provided in the original project: https://github.com/mrharicot/monodepth/blob/master/utils/evaluation_utils.py and https://github.com/mrharicot/monodepth/blob/master/utils/evaluate_kitti.py.
> Some 'masking' operations may influence the performance, as they change which valid points are measured. You may want to check those operations carefully.

Do you still have your code?
Would it be a problem for you to send me the part of the code you used? If necessary, I can contact you privately.
