
STAIR: Semantic-Targeted Active Implicit Reconstruction

Liren Jin, Haofei Kuang, Yue Pan, Cyrill Stachniss, Marija Popović
University of Bonn

This repository contains the implementation of our paper "STAIR: Semantic-Targeted Active Implicit Reconstruction" accepted to IROS 2024.

@INPROCEEDINGS{jin2024stair,
      title={STAIR: Semantic-Targeted Active Implicit Reconstruction}, 
      booktitle={2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, 
      author={Jin, Liren and Kuang, Haofei and Pan, Yue and Stachniss, Cyrill and Popović, Marija},
      year={2024}}

Abstract

Many autonomous robotic applications require object-level understanding when deployed. Actively reconstructing objects of interest, i.e. objects with specific semantic meanings, is therefore relevant for a robot to perform downstream tasks in an initially unknown environment. In this work, we propose a novel framework for semantic-targeted active reconstruction using posed RGB-D measurements and 2D semantic labels as input. The key components of our framework are a semantic implicit neural representation and a compatible planning utility function based on semantic rendering and uncertainty estimation, enabling adaptive view planning to target objects of interest. Our planning approach achieves better reconstruction performance in terms of mesh and novel view rendering quality compared to implicit reconstruction baselines that do not consider semantics for view planning. Our framework further outperforms a state-of-the-art semantic-targeted active reconstruction pipeline based on explicit maps, justifying our choice of utilising implicit neural representations to tackle semantic-targeted active reconstruction problems.

An overview of our STAIR framework:

Setup

We use Ignition Gazebo and ShapeNet models for simulation purposes:

  1. Install the simulator
git clone git@github.com:liren-jin/shapenet_simulator.git
cd shapenet_simulator
docker build . -t shapenet-simulator:v0
  2. Install NVIDIA runtime support from here.
  3. Download the ShapeNet models used in our experiments here and the scene data here to shapenet_simulator/src/simulator.
cd src/simulator
unzip models.zip -d models
unzip scenes.zip -d scenes

(Optional) For more advanced usage, e.g., generating new scenes and test data, please follow the instructions.

  4. Set up STAIR repo:
git clone https://github.com/dmar-bonn/stair
cd stair
conda env create -f environment.yaml
conda activate stair
python setup.py build_ext --inplace
  5. Copy the scene data extracted in step 3 (scene1, scene2, ...) also to stair/test_data for evaluation purposes.
cp -r <path to shapenet-simulator>/src/simulator/scenes/* <path to stair>/test_data

Basic Usage

In one terminal, in the shapenet_simulator folder, start the simulator:

xhost +local:docker
make 
make enter
cd <scene_id>
ign gazebo -r scene.sdf

In another terminal, in the stair folder, start active reconstruction:

conda activate stair

(Optional) If you do not have ROS 1 installed on your local machine, we also provide ROS support in the conda environment. However, to communicate with the roscore running in the Docker container, you have to set the following environment variable:

export ROS_MASTER_URI=http://localhost:11311
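
Once this is set, you can check that the connection works from the stair environment with a standard ROS 1 command-line query (assuming the ROS tools are available in the conda environment and the simulator container is already running):

rostopic list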

Start the planning experiment:

python plan.py --config <planner type> implicit --target_class_id <class id> --gui

Available values for the command-line flags:

| Flag | Available values |
| --- | --- |
| planner type | uncertainty_target, uncertainty_all, coverage, max_distance, uniform |
| class id | 1: car, 2: chair, 3: table, 4: sofa, 5: airplane, 6: camera |

Note: you can also choose target semantic classes in the GUI.
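
For example, to target the chair class with the uncertainty-based targeted planner:

python plan.py --config uncertainty_target implicit --target_class_id 2 --gui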

A screenshot of the GUI:

Click "next step" to move to the next view point based on the selected planner type. You should see the camera move in the shapenet simulator and updated camera trajectory in GUI as well. After collecting several images, you can start nerf training by clicking "start".

Experiments and Evaluation

python plan.py --config <planner type> implicit --exp_path <experiment folder>/<scene id> --target_class_id <target id> --gui
python eval_nerf.py --test_path test_data/<scene id> --exp_path <experiment folder>/<scene id>/<planner type>
python eval_mesh.py --test_path test_data/<scene id> --exp_path <experiment folder>/<scene id>/<planner type>

where <experiment folder> is the folder for saving experimental data (default: "./experiment") and <scene id> is the scene name for the experiment, e.g., scene1.
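
For instance, a full evaluation run on scene1 with the uncertainty_target planner and the car class (target id 1) would look like:

python plan.py --config uncertainty_target implicit --exp_path ./experiment/scene1 --target_class_id 1 --gui
python eval_nerf.py --test_path test_data/scene1 --exp_path ./experiment/scene1/uncertainty_target
python eval_mesh.py --test_path test_data/scene1 --exp_path ./experiment/scene1/uncertainty_target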

An example of active reconstruction results can be obtained by running:

./run_example.sh <scene id> <run num> <list of target id> <experiment path>

for example:

./run_example.sh scene1 5 "1" "./test"

Acknowledgements

Parts of the code are based on DirectVoxGO and torch-ngp. Thanks to the authors for their awesome works!

Maintainer

Liren Jin, [email protected]

Project Funding

This work has been fully funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy, EXC-2070 – 390732324 (PhenoRob). All authors are with the Institute of Geodesy and Geoinformation, University of Bonn.