
LiDAR Point Cloud Semantic Segmentation

An implementation of PointPainting for real-time point cloud semantic segmentation: each LiDAR point is painted (labeled) with a class taken from semantic segmentation maps produced by DeepLabV3+.
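The painting step itself is a projection followed by a pixel lookup: each LiDAR point is projected into the camera image with the calibration matrices, and the class of the pixel it lands on is appended to the point. Below is a minimal NumPy sketch of that step; the array shapes and the precomputed `proj` matrix (intrinsics times LiDAR-to-camera extrinsics) are assumptions for illustration, not the repository's exact interface.

```python
import numpy as np

def paint_points(points, seg_map, proj):
    """Append a semantic class ID to every LiDAR point.

    points:  (N, 4) array of [x, y, z, reflectance] in the LiDAR frame
    seg_map: (H, W) array of per-pixel class IDs from the segmentation network
    proj:    (3, 4) matrix projecting LiDAR-frame points onto the image plane
             (camera intrinsics @ LiDAR-to-camera extrinsics), assumed precomputed
    """
    h, w = seg_map.shape
    n = points.shape[0]

    # Homogeneous LiDAR coordinates -> image plane
    xyz1 = np.hstack([points[:, :3], np.ones((n, 1))])
    uvw = xyz1 @ proj.T
    z = uvw[:, 2]

    in_front = z > 1e-6                      # discard points behind the camera
    u = np.zeros(n, dtype=int)
    v = np.zeros(n, dtype=int)
    u[in_front] = np.round(uvw[in_front, 0] / z[in_front]).astype(int)
    v[in_front] = np.round(uvw[in_front, 1] / z[in_front]).astype(int)

    # Keep only points that land inside the image
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    labels = np.full(n, -1, dtype=points.dtype)   # -1 = not visible in this camera
    labels[valid] = seg_map[v[valid], u[valid]]
    return np.hstack([points, labels[:, None]])   # (N, 5): x, y, z, reflectance, class
```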

Dataset

Download the rectified stereo camera images and the Velodyne sensor data from the KITTI-360 dataset, and save them in two separate folders. You can download them with the following shell scripts.

  1. Rectified RGB camera images: bash download_2d_perspective.sh.
  2. Velodyne point clouds: bash download_3d_velodyne.sh (a minimal loading sketch follows this list).
  3. Camera intrinsics and extrinsics from here.
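As a reference for working with the downloaded scans: KITTI-style Velodyne files are flat binaries of float32 values, four per point. A short loader sketch, assuming that standard layout, is shown below.

```python
import numpy as np

def load_velodyne_bin(path):
    """Load one Velodyne scan stored as a flat float32 binary (KITTI-style layout)."""
    scan = np.fromfile(path, dtype=np.float32)
    return scan.reshape(-1, 4)  # columns: x, y, z, reflectance
```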

Checkpoint

For semantic segmentation of the RGB images, please download the pre-trained DeepLabV3+ checkpoint from here, which is trained on the Cityscapes dataset.
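For context, turning one rectified image into the per-pixel class map used for painting looks roughly like the sketch below. This is a generic PyTorch sketch, not the repository's Wrapper.py: how the DeepLabV3+ network is constructed and how the checkpoint's state dict is loaded depend on the checkpoint's source repo, and the ImageNet normalization statistics are an assumption.

```python
import torch
from PIL import Image
from torchvision import transforms

def segment_image(image_path, model, device="cuda"):
    """Run a segmentation network on one RGB image and return an (H, W) class-ID map.

    `model` is assumed to be a DeepLabV3+ network built to match the downloaded
    checkpoint (e.g. via its repo's model factory plus model.load_state_dict(...))
    and to return raw logits of shape (1, num_classes, H, W).
    """
    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet stats (assumption)
                             std=[0.229, 0.224, 0.225]),
    ])
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0).to(device)             # (1, 3, H, W)

    model.to(device).eval()
    with torch.no_grad():
        logits = model(x)
    return logits.argmax(dim=1).squeeze(0).cpu().numpy()    # per-pixel class IDs
```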

Run

python3 /Code/Wrapper.py --DataPath {dir/to/kitti/dataset} --SavePath {dir/to/your/saving/pcd/folder} --CkptPath {dir/to/DeepLabV3+/checkpoint} --ParseData {Parse the raw point cloud data, Default:0}

Demo

References
