This repository is used to generate a fused multi-frame semantic-KITTI dataset, primarily for the Semantic Scene Completion (SSC) task.
Below is a visualization of the fused voxels. As you can see, the fused voxels are much denser than the original voxels.
- Original Voxels (1 LiDAR scan)
- Fused Voxels (4 consecutive LiDAR scans)
- Label Voxels
Below is the background knowledge you need in order to understand my code.
- 3D Geometry
- World Coordinate → LiDAR Coordinate
- LiDAR Coordinate → World Coordinate (a small sketch of both transforms follows after this list)
- Programming skills
- Python
- Shell
- Linux
- Numpy
- Tmux
However, if you are not interested, you don't have to understand my code :P
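If the two coordinate transforms are unfamiliar, here is a minimal sketch of both directions using homogeneous coordinates. It assumes every scan already has a 4x4 pose matrix expressed in the LiDAR frame (in KITTI, the poses in `poses.txt` are camera poses, so they first need to be converted with the `Tr` matrix from `calib.txt`); the function names are only for illustration.

```python
import numpy as np

def lidar_to_world(points_lidar, pose):
    """Map (N, 3) LiDAR points into the world frame using a 4x4 pose matrix."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])  # homogeneous coords
    return (pose @ pts_h.T).T[:, :3]

def world_to_lidar(points_world, pose):
    """Inverse direction: bring world-frame points back into a scan's LiDAR frame."""
    pts_h = np.hstack([points_world, np.ones((len(points_world), 1))])
    return (np.linalg.inv(pose) @ pts_h.T).T[:, :3]
```

Fusing several scans then boils down to mapping each scan into the world frame and mapping the merged cloud back into the reference scan's LiDAR frame.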
- Create a conda environment
- Install Numpy and Tmux
First, let's download the original semantic-KITTI dataset. You need to download all the data by clicking on the link below.
-
Now that you have finished downloading the files, arrange the dataset so that its directory structure matches the image shown below.
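For reference, the standard semantic-KITTI layout looks roughly like this; treat the image as authoritative if the two differ (the sequence numbers and file names here are only illustrative):

```
dataset/
└── sequences/
    ├── 00/
    │   ├── velodyne/   # 000000.bin, 000001.bin, ...
    │   ├── labels/     # 000000.label, 000001.label, ...
    │   ├── voxels/     # 000000.bin, .label, .invalid, .occluded
    │   ├── calib.txt
    │   ├── poses.txt
    │   └── times.txt
    ├── 01/
    └── ...
```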
If you run into any 'missing module' errors while running the code, simply install the missing packages.
Follow the steps below.
- First, create a tmux session
- Go into that session
- Open the 'generate_multiframe.sh' file
- Change the 4 arguments! (see the sketch after this list for how such flags are typically defined)
- -d: the path of the 'dataset' directory
- -o: the output path of the multi-frame dataset
- -n: the number of consecutive frames you want to combine
- -i: set this to 5!
- Save the file and run the command below
- sh ./generate_multiframe.sh
- If you get an error, try removing 'CUDA_VISIBLE_DEVICES=1' from the 'generate_multiframe.sh' file.
- CUDA_VISIBLE_DEVICES=1 python generate_multiframe_v2.py -d /mnt/ssd2/jihun/dataset/sequences/00 -o /mnt/ssd2/jihun/dataset_MF/sequences/00 -n 4 -i 5
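The script presumably consumes these flags with something like `argparse`; the sketch below is not the actual code, and the long option names are my guesses (only the short flags appear in the invocation above).

```python
import argparse

parser = argparse.ArgumentParser(
    description="Fuse consecutive LiDAR scans into a multi-frame dataset.")
parser.add_argument("-d", "--dataset", required=True,
                    help="path to the input sequence directory")
parser.add_argument("-o", "--output", required=True,
                    help="output path for the multi-frame dataset")
parser.add_argument("-n", "--num_frames", type=int, default=4,
                    help="number of consecutive frames to combine")
parser.add_argument("-i", type=int, default=5,
                    help="the README says to set this to 5")
args = parser.parse_args()
```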
- If you want to understand how the code works, I suggest you first read the pseudo-algorithm I made.
- The image below shows the pseudo-algorithm I wrote before implementing the actual code (a rough Python sketch of the same idea follows after this list).
- I think it is very important to write a simple pseudo-algorithm before coding something very complicated.
- The pseudo-algorithm keeps you from getting lost in the complicated steps of coding.
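Conceptually, the fusion loop boils down to something like the following. This is only a sketch under my own assumptions about the data layout (lists of scans and 4x4 LiDAR-frame poses), not the actual implementation:

```python
import numpy as np

def fuse_scans(scans, poses, n_frames=4):
    """Fuse each scan with the (n_frames - 1) scans that follow it.

    scans: list of (N_i, 4) arrays, (x, y, z, intensity) in each scan's own LiDAR frame
    poses: list of 4x4 matrices mapping each LiDAR frame to the world frame
    """
    fused = []
    for t in range(len(scans)):
        ref_inv = np.linalg.inv(poses[t])              # world -> reference LiDAR frame
        merged = [scans[t]]
        for k in range(t + 1, min(t + n_frames, len(scans))):
            pts = scans[k][:, :3]
            pts_h = np.hstack([pts, np.ones((len(pts), 1))])
            # scan k -> world -> reference frame of scan t
            pts_ref = (ref_inv @ poses[k] @ pts_h.T).T[:, :3]
            merged.append(np.hstack([pts_ref, scans[k][:, 3:4]]))
        fused.append(np.concatenate(merged, axis=0))
    return fused
```

In the real pipeline each fused scan would then be written back out in the same .bin format as the input, but that bookkeeping is omitted here.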
This section is for my own reference. You do not need to read this section! Dataset -> F(Dataset) -> MF Dataset
- Prepare semantic-KITTI dataset ✅
- Use code to generate MF ✅
- Check the output result ✅
- Create label voxels ⬜
- mapping: pc_code -> voxel_code
- voxelize
- read label pc
- How should I handle the case where several labels fall inside a single voxel? (One common answer is a per-voxel majority vote; see the sketch below.)
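A minimal sketch of that majority-vote idea, assuming the points are already in the voxel grid's frame and `labels` holds per-point semantic class ids (e.g., the lower 16 bits of the raw semantic-KITTI labels); the voxel size and function name are placeholders:

```python
import numpy as np

def voxelize_labels(points, labels, voxel_size=0.2):
    """Give each voxel the most frequent semantic label among the points inside it."""
    # Quantize coordinates to integer voxel indices
    voxel_idx = np.floor(points[:, :3] / voxel_size).astype(np.int64)
    # Group points that share the same (ix, iy, iz) index
    keys, inverse = np.unique(voxel_idx, axis=0, return_inverse=True)
    inverse = inverse.ravel()  # guard against shape differences across NumPy versions
    voxel_labels = np.zeros(len(keys), dtype=labels.dtype)
    for v in range(len(keys)):
        lbls = labels[inverse == v]
        voxel_labels[v] = np.bincount(lbls).argmax()  # majority vote; ties broken arbitrarily
    return keys, voxel_labels
```

Other policies are possible (e.g., preferring any real class over 'unlabeled'); majority voting is just the simplest starting point.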