GEDepth: Ground Embedding for Monocular Depth Estimation, ICCV 2023
Xiaodong Yang, Zhuang Ma, Zhiyu Ji, Zhe Ren
[Paper] [Poster]
Please refer to INSTALL for the details.
Please follow the instructions in DATA.
Please follow the instructions in RUN.
DepthFormer is used as the baseline in this repo to exemplify the improvement brought by the proposed GEDepth. Please refer to the paper for more results, in particular on the generalization enhancement.
- KITTI

| Model | Abs Rel | Sq Rel | RMSE (m) | Checkpoint |
|---|---|---|---|---|
| Baseline | 0.052 | 0.156 | 2.133 | [Link] |
| GEDepth-Vanilla | 0.049 | 0.144 | 2.061 | [Google Drive] [Baidu Cloud] |
| GEDepth-Adaptive | 0.048 | 0.142 | 2.044 | [Google Drive] [Baidu Cloud] |
- DDAD

| Model | Abs Rel | Sq Rel | RMSE (m) | Checkpoint |
|---|---|---|---|---|
| Baseline | 0.152 | 2.230 | 11.051 | [Link] |
| GEDepth-Vanilla | 0.147 | 2.155 | 10.784 | [Google Drive] [Baidu Cloud] |
| GEDepth-Adaptive | 0.145 | 2.146 | 10.672 | [Google Drive] [Baidu Cloud] |
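
For reference, the error metrics in the tables above follow the standard monocular depth evaluation protocol. Below is a minimal NumPy sketch of how Abs Rel, Sq Rel, and RMSE are commonly computed; it assumes predicted and ground-truth depth maps in meters with a depth cap (80 m is typical for KITTI), and it is not the exact evaluation code of this repo.

```python
import numpy as np

def depth_errors(pred, gt, min_depth=1e-3, max_depth=80.0):
    """Compute Abs Rel, Sq Rel, and RMSE between two depth maps.

    A minimal sketch (hypothetical helper, not this repo's evaluator):
    `pred` and `gt` are depth maps in meters; pixels whose ground truth
    falls outside [min_depth, max_depth] are ignored, and predictions
    are clipped to that range before computing the errors.
    """
    mask = (gt > min_depth) & (gt < max_depth)
    gt = gt[mask]
    pred = np.clip(pred[mask], min_depth, max_depth)

    abs_rel = np.mean(np.abs(pred - gt) / gt)   # mean |d - d*| / d*
    sq_rel = np.mean((pred - gt) ** 2 / gt)     # mean (d - d*)^2 / d*
    rmse = np.sqrt(np.mean((pred - gt) ** 2))   # root mean squared error
    return abs_rel, sq_rel, rmse
```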
Please cite the following paper if this repo helps your research:
@inproceedings{yang2023gedepth,
title={GEDepth: Ground Embedding for Monocular Depth Estimation},
author={Yang, Xiaodong and Ma, Zhuang and Ji, Zhiyu and Ren, Zhe},
booktitle={IEEE/CVF International Conference on Computer Vision (ICCV)},
year={2023}
}
Copyright (C) 2023 QCraft. All rights reserved. Licensed under the CC BY-NC-SA 4.0 (Attribution-NonCommercial-ShareAlike 4.0 International). The code is released for academic research use only. For commercial use, please contact [email protected].