

Strange Evaluation Result on KITTI #230

Open
aliko70 opened this issue Jun 11, 2019 · 5 comments

Comments


aliko70 commented Jun 11, 2019

Hi @mrharicot
I am getting strangely high error results when I test with the downloaded KITTI model. The generated depth maps, both from the provided model and from my own trained models, all look good, yet evaluating them on kitti_stereo_2015_test_files gives the same abnormal results...

Testing:
python monodepth_main.py --mode test --data_path path_to_kittiDataset/test_data/ \
  --filenames_file path_to_kitti_dataset/kitti_stereo_2015_test_files.txt --log_directory $logdir \
  --checkpoint_path path_to_models/model_kitti --output_directory $outputdir

Evaluating:
python utils/evaluate_kitti.py --split kitti --predicted_disp_path $outputdir/disparities.npy --gt_path utils/data_scene_flow/test_data
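For reference, a minimal sketch (NumPy assumed) of what the evaluation side does with disparities.npy before computing errors: each predicted map is resized to the ground-truth resolution and then rescaled by the image width, since monodepth's predictions are normalised to width. The nearest-neighbour resize below is a hypothetical stand-in for the interpolated resize the repo actually uses:

```python
import numpy as np

def resize_nearest(disp, out_h, out_w):
    """Nearest-neighbour resize of a 2-D disparity map to (out_h, out_w)."""
    in_h, in_w = disp.shape
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return disp[rows[:, None], cols]

def prepare_prediction(pred_disp, gt_h, gt_w):
    """Resize a width-normalised predicted disparity to gt size, in pixels."""
    resized = resize_nearest(pred_disp, gt_h, gt_w)
    return resized * gt_w  # convert normalised disparity to pixel units
```

If this rescaling step is skipped or the resolutions don't match, predicted and ground-truth disparities end up on completely different scales, which is one common cause of abnormal error numbers.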

Sample of the generated depth:

[image: predicted_depth]

  • after clamping the predictions:

    min_depth = 1e-3
    max_depth = 80
    pred_depth[pred_depth < min_depth] = min_depth
    pred_depth[pred_depth > max_depth] = max_depth

[image: minmaxdepth]

Corresponding ground-truth depth:

[image: gt_depth]

This ends up with very large values in 'thresh' when computing errors.

[image: eval_results]
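For context on where 'thresh' comes from: it is the standard accuracy-threshold metric used in depth evaluation, the elementwise ratio max(gt/pred, pred/gt). A rough NumPy sketch of these metrics (variable names mine, not the repo's exact code):

```python
import numpy as np

def compute_errors(gt, pred):
    """Standard depth metrics; 'thresh' explodes when pred << gt or pred >> gt."""
    thresh = np.maximum(gt / pred, pred / gt)
    a1 = (thresh < 1.25).mean()
    a2 = (thresh < 1.25 ** 2).mean()
    a3 = (thresh < 1.25 ** 3).mean()
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    return abs_rel, rmse, a1, a2, a3

# If corrupted ground truth makes gt tiny (or huge) where pred is sane,
# the ratios in 'thresh' blow up and a1/a2/a3 collapse towards zero,
# which matches the abnormal numbers above.
```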

I would really appreciate any hints, @dantkz @gosip @Hirico.

Thanks, Ali

Originally posted by @aliko70 in #199 (comment)

@NovaMind-Z

Hi, I used the pretrained KITTI model to test a picture from KITTI 2015 and got a disparity map like this:
[image: 000000_10_disp]
while the ground-truth disparity map is as below:
[image: 000000_10]
Could you please tell me why they differ so much in their pixel values?


ndsclark commented Jan 9, 2021

Hi @aliko70
Please check whether you have converted the images containing the ground truth from .png to .jpg. If so, change them back to .png.

@deffandchen

Hi @aliko70
Please check whether you have converted the images containing the ground truth from .png to .jpg. If so, change them back to .png.
The ground truth in the original dataset is .png. Do you mean it doesn't need to be changed?

@ndsclark

Please check whether you have converted the images containing the ground truth from .png to .jpg. If so, change them back to .png.
The ground truth in the original dataset is .png. Do you mean it doesn't need to be changed?

Yes, the training set needs to be converted, but the ground truth doesn't need to be changed.
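The reason the format matters: KITTI stores ground-truth disparity as 16-bit PNGs where the true disparity is the stored pixel value divided by 256, and a stored 0 marks an invalid pixel. JPEG is 8-bit and lossy, so re-encoding the ground truth destroys both the scale and the validity mask. A sketch of the decoding convention (the array stands in for the loaded PNG; function name is mine):

```python
import numpy as np

def decode_kitti_disparity(png_u16):
    """Decode a KITTI disparity map stored as uint16 PNG pixel values.

    True disparity = stored value / 256; a stored 0 means 'no ground truth'.
    """
    disp = png_u16.astype(np.float32) / 256.0
    valid = png_u16 > 0
    return disp, valid
```

Evaluating only on `valid` pixels is what keeps the invalid zeros from corrupting the error metrics.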

@deffandchen

Please check whether you have converted the images containing the ground truth from .png to .jpg. If so, change them back to .png.
The ground truth in the original dataset is .png. Do you mean it doesn't need to be changed?

Yes, the training set needs to be converted, but the ground truth doesn't need to be changed.

Thank you very much! I got the right results.
