Pre-training has become a standard paradigm in many computer vision tasks. However, most of these methods are designed for the RGB image domain. Due to the discrepancy between the two-dimensional image plane and the three-dimensional space, such pre-trained models fail to perceive spatial information and serve as sub-optimal solutions for 3D-related tasks. To bridge this gap, we aim to learn a spatial-aware visual representation that can describe the three-dimensional space and is more suitable and effective for these tasks. To leverage point clouds, which are far superior to images in providing spatial information, we propose a simple yet effective 2D Image and 3D Point cloud Unsupervised pre-training strategy, called SimIPU. Specifically, we develop a multi-modal contrastive learning framework that consists of an intra-modal spatial perception module, which learns a spatial-aware representation from point clouds, and an inter-modal feature interaction module, which transfers the capability of perceiving spatial information from the point-cloud encoder to the image encoder. Positive pairs for the contrastive losses are established by a matching algorithm and the projection matrix. The whole framework is trained in an unsupervised end-to-end fashion. To the best of our knowledge, this is the first study to explore contrastive learning pre-training strategies for outdoor multi-modal datasets containing paired camera images and LiDAR point clouds.
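As a rough illustration of the inter-modal feature interaction described above, the sketch below pairs each LiDAR point with the pixel it projects to (via the camera projection matrix) and applies an InfoNCE-style contrastive loss between point and image features. It is a minimal sketch under assumptions: all function and argument names are hypothetical, and details such as feature interpolation, the matching algorithm, and the intra-modal branch are omitted.

```python
import torch
import torch.nn.functional as F

def inter_modal_contrastive_loss(point_feats, img_feat_map, points_cam, proj_mat, temperature=0.07):
    """InfoNCE-style loss between point-cloud features and image features.

    point_feats:  (N, C) features from the point-cloud encoder
    img_feat_map: (C, H, W) feature map from the image encoder
    points_cam:   (N, 3) LiDAR points expressed in the camera frame
    proj_mat:     (3, 3) camera intrinsic matrix used to build positive pairs
    """
    # Project 3D points onto the image plane; each point's landing pixel
    # defines its positive image feature.
    uvz = points_cam @ proj_mat.T                   # (N, 3)
    uv = uvz[:, :2] / uvz[:, 2:3].clamp(min=1e-6)   # (N, 2) pixel coordinates

    C, H, W = img_feat_map.shape
    u = uv[:, 0].round().long().clamp(0, W - 1)
    v = uv[:, 1].round().long().clamp(0, H - 1)
    pixel_feats = img_feat_map[:, v, u].T           # (N, C) matched image features

    # Cosine-similarity logits between every point and every matched pixel feature.
    p = F.normalize(point_feats, dim=1)
    q = F.normalize(pixel_feats, dim=1)
    logits = p @ q.T / temperature                  # (N, N)

    # Diagonal entries correspond to projected (positive) pairs.
    targets = torch.arange(p.shape[0], device=p.device)
    return F.cross_entropy(logits, targets)
```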
SimIPU is a multi-modal pre-training strategy based on contrastive learning. To evaluate the effectiveness of our proposed method, we design a simple depth estimation framework: the architecture follows the generic monocular-depth design, consisting of a ResNet-50 encoder and a DenseDepth decoder, and the framework is trained with the SigLoss.
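For reference, below is a minimal sketch of a scale-invariant log loss of the kind SigLoss refers to. The function name, weighting constants, and masking convention are assumptions and may differ from the toolbox implementation.

```python
import torch

def sig_loss(pred, target, valid_mask, lam=0.85, scale=10.0):
    """Scale-invariant log loss commonly used for monocular depth estimation.

    pred, target: (B, 1, H, W) predicted and ground-truth depth maps
    valid_mask:   (B, 1, H, W) boolean mask of pixels with valid ground truth
    """
    # Log-space residuals over valid pixels only.
    g = torch.log(pred[valid_mask]) - torch.log(target[valid_mask])
    # Scale-invariant term: E[g^2] - lam * (E[g])^2
    d = (g ** 2).mean() - lam * g.mean() ** 2
    return scale * torch.sqrt(d)
```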
@article{li2021simipu,
title={SimIPU: Simple 2D Image and 3D Point Cloud Unsupervised Pre-Training for Spatial-Aware Visual Representations},
author={Li, Zhenyu and Chen, Zehui and Li, Ang and Fang, Liangji and Jiang, Qinhong and Liu, Xianming and Jiang, Junjun and Zhou, Bolei and Zhao, Hang},
journal={arXiv preprint arXiv:2112.04680},
year={2021}
}
Method | Backbone | Pre-train Num. | Pre-train Dataset | Train Epochs | Abs Rel (+flip) | Config | Download |
---|---|---|---|---|---|---|---|
Scratch | R-50 | 0 | None | 24 | 0.087 | config | log / model |
Supervised | R-50 | 1M | ImageNet | 24 | 0.060 | config | log / model |
SimIPU | R-50 | 7481 | KITTI | 24 | 0.075 | config | log / model |
SimIPU | R-50 | 31632 | Waymo (1/5) | 24 | 0.066 | config | log / model |
Method | Backbone | Pre-train Num. | Pre-train Dataset | Train Epochs | Abs Rel (+flip) | Config | Download |
---|---|---|---|---|---|---|---|
Scratch | R-50 | 0 | None | 24 | 0.205 | config | log / model |
Supervised | R-50 | 1M | ImageNet | 24 | 0.143 | config | log / model |
SimIPU | R-50 | 7481 | KITTI | 24 | 0.178 | config | log / model |
SimIPU | R-50 | 31632 | Waymo (1/5) | 24 | 0.166 | config | log / model |