Merge pull request #300 from leggedrobotics/dev/minimal_infernece_script
Updated readme and added inference script
mmattamala authored Apr 2, 2024
2 parents a515298 + 457ac7c commit 19e7982
Showing 10 changed files with 713 additions and 388 deletions.
209 changes: 120 additions & 89 deletions README.md

<h1 align="center">
<br>
Wild Visual Navigation
<br>
</h1>
<p align="center">
<a href="#citation">Citation</a> •
<a href="#instalation">Installation</a> •
<a href="#installation">Installation</a> •
<a href="#overview">Overview</a> •
<a href="#experiments">Experiments</a> •
<a href="#development">Development</a>

![Formatting](https://github.com/leggedrobotics/wild_visual_navigation/actions/workflows/formatting.yml/badge.svg)
<a href="#development">Development</a> •
<a href="#citation">Citation</a>
</p>

<!-- [START BADGES] -->
<!-- Please keep comment here to allow auto update -->
<p align="center">
<a href="[https://github.com/wow-actions/add-badges/blob/master/LICENSE"><img src="https://img.shields.io/github/license/wow-actions/add-badges?style=flat-square" alt="MIT License" /></a>
<a href="https://github.com/leggedrobotics/wild_visual_navigation/actions/workflows/formatting.yml/badge.svg)"><img src="https://github.com/leggedrobotics/wild_visual_navigation/actions/workflows/formatting.yml/badge.svg" alt="formatting" /></a>
</p>
<!-- [END BADGES] -->

![Overview](./assets/drawings/header.jpg)

Also check out our other works.

<img align="right" width="40" height="40" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/dino.png" alt="Dino">

## Installation

### Minimal
Clone the WVN and our STEGO reimplementation.
```shell
mkdir ~/git && cd ~/git
git clone git@github.com:leggedrobotics/wild_visual_navigation.git
git clone git@github.com:leggedrobotics/self_supervised_segmentation.git
```

(Recommended) Create new virtual environment.
```shell
cd ~/git/wild_visual_navigation
mkdir -p ~/venv
python3 -m venv ~/venv/wvn
source ~/venv/wvn/bin/activate
```

Install the wild_visual_navigation package.
```shell
cd ~/git
pip3 install -e ./wild_visual_navigation
pip3 install -e ./self_supervised_segmentation
```

[Optional] Set your custom global paths by defining `ENV_WORKSTATION_NAME` and exporting the variable in your `~/.bashrc`:

```shell
export ENV_WORKSTATION_NAME=your_workstation_name
```
The paths can be specified by modifying `wild_visual_navigation/wild_visual_navigation/cfg/global_params.py`.
Add your desired global paths.
By default, all results are stored in `wild_visual_navigation/results`.
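For reference, an entry in that file could look like the following minimal sketch; the field names `dataset_root` and `results` are illustrative assumptions, not the actual schema of `global_params.py`:

```python
# Hypothetical sketch of per-workstation global paths, selected via the
# ENV_WORKSTATION_NAME environment variable; field names are illustrative.
import os
from dataclasses import dataclass


@dataclass
class GlobalParams:
    dataset_root: str  # where input data lives (assumed field)
    results: str       # where outputs are written (assumed field)


_CONFIGS = {
    "default": GlobalParams(dataset_root="data", results="results"),
    "your_workstation_name": GlobalParams(
        dataset_root="/media/drive/datasets", results="results"
    ),
}


def get_global_params() -> GlobalParams:
    # Fall back to the default entry when ENV_WORKSTATION_NAME is unset or unknown.
    name = os.environ.get("ENV_WORKSTATION_NAME", "default")
    return _CONFIGS.get(name, _CONFIGS["default"])
```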



<img align="right" width="40" height="40" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/dino.png" alt="Dino">

## Overview
![Overview](./assets/drawings/software_overview.jpg)
What we provide:
- Learning and Feature Extraction Nodes integrated in ROS1
- Gazebo Test Simulation Environment
- Example ROSbags
- Pre-trained models with minimalistic inference script (can be used as an easy baseline)

### Repository Structure
```
📦wild_visual_navigation
 ┣ 📂assets
 ┃ ┣ 📂demo_data                        # Example images
 ┃ ┃ ┣ 🖼 example_images.png
 ┃ ┃ ┗ ....
 ┃ ┗ 📂checkpoints                      # Pre-trained model checkpoints
 ┃   ┣ 📜 mountain_bike_trail_v2.pt
 ┃   ┗ ....
 ┣ 📂docker                             # Quick start docker container
 ┣ 📂results
 ┣ 📂test
 ┣ 📂wild_visual_navigation             # Core implementation of WVN
 ┣ 📂wild_visual_navigation_anymal      # ROS1 ANYmal helper package
 ┣ 📂wild_visual_navigation_jackal      # ROS1 Jackal simulation example
 ┣ 📂wild_visual_navigation_msgs        # ROS1 message definitions
 ┣ 📂wild_visual_navigation_ros         # ROS1 nodes for running WVN
 ┃ ┗ 📂scripts
 ┃   ┣ 📜 wvn_feature_extractor_node.py
 ┃   ┗ 📜 wvn_learning_node.py
 ┗ 📜 quick_start.py                    # Inference on demo_data using pre-trained checkpoints
```
### Features
- quick_start script for inference using pre-trained models (can be used as an easy baseline)
- ROS1 integration for online deployment
- Jackal Gazebo simulation demo integration
- Docker container for easy installation
- Integration into elevation_mapping_cupy


<img align="right" width="40" height="40" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/dino.png" alt="Dino">

## Experiments

### Inference of pre-trained model

Script to infer the traversability of all images within an input folder (`assets/demo_data/*.png`), given a pre-trained model checkpoint (`assets/checkpoints/model_name.pt`). The script stores the results in the provided output folder (`results/demo_data/*.png`).
```shell
python3 quick_start.py

# python3 quick_start.py --help for more CLI information
# usage: quick_start.py [-h] [--model_name MODEL_NAME] [--input_image_folder INPUT_IMAGE_FOLDER]
# [--output_folder_name OUTPUT_FOLDER_NAME] [--network_input_image_height NETWORK_INPUT_IMAGE_HEIGHT]
# [--network_input_image_width NETWORK_INPUT_IMAGE_WIDTH] [--segmentation_type {slic,grid,random,stego}]
# [--feature_type {dino,dinov2,stego}] [--dino_patch_size {8,16}] [--dino_backbone {vit_small}]
# [--slic_num_components SLIC_NUM_COMPONENTS] [--compute_confidence] [--no-compute_confidence]
# [--prediction_per_pixel] [--no-prediction_per_pixel]
```
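Under the hood, the script iterates over the input images, runs the model, and writes one traversability visualization per image. Below is a rough, heavily simplified sketch of that loop; the real pipeline (DINO/STEGO feature extraction, optional per-pixel prediction, confidence computation) is more involved, and the checkpoint interface shown here is an assumption:

```python
# Simplified sketch of a folder-level inference loop; NOT the actual
# quick_start.py implementation. Assumes `model` maps a normalized RGB
# tensor to a per-pixel traversability map in [0, 1].
from pathlib import Path

import torch
import torch.nn.functional as F
import torchvision


@torch.no_grad()
def infer_folder(model, input_folder="assets/demo_data",
                 output_folder="results/demo_data", height=448, width=448):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    out = Path(output_folder)
    out.mkdir(parents=True, exist_ok=True)
    for image_path in sorted(Path(input_folder).glob("*.png")):
        # Load the RGB image as a float CHW tensor in [0, 1], resize to network input.
        img = torchvision.io.read_image(str(image_path)).float()[:3] / 255.0
        img = F.interpolate(img[None], size=(height, width), mode="bilinear")
        trav = model(img.to(device))[0]  # assumed output: (1, H, W) traversability
        torchvision.utils.save_image(trav.cpu(), out / image_path.name)
```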

### Online adaptation [Simulation]
Instructions can be found within [wild_visual_navigation_jackal/README.md](wild_visual_navigation_jackal/README.md).

### Online adaptation [Rosbag]
#### Download Rosbags:
To quickly test online training and adaptation, we provide example rosbags ([GDrive](https://drive.google.com/drive/folders/1Rf2TRPT6auFxOpnV9-ZfVMjmsvdsrSD3?usp=sharing)) collected with our ANYmal D robot.

<p align="center">




</p>

#### Example Result:
<div align="center">

| MPI Outdoor | MPI Indoor | Bahnhofstrasse | Bike Trail |
|----------------|------------|-------------|---------------------|
| <img align="center" width="120" height="120" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/mpi_outdoor_trav.png" alt="MPI Outdoor"> | <img align="center" width="120" height="120" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/mpi_indoor_trav.png" alt="MPI Indoor"> | <img align="center" width="120" height="120" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/bahnhofstrasse_trav.png" alt="Bahnhofstrasse"> | <img align="center" width="120" height="120" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/mountain_bike_trail_trav.png" alt="Mountain Bike"> |
| <img align="center" width="120" height="120" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/mpi_outdoor_raw.png" alt="MPI Outdoor"> | <img align="center" width="120" height="120" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/mpi_indoor_raw.png" alt="MPI Indoor"> | <img align="center" width="120" height="120" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/bahnhofstrasse_raw.png" alt="Bahnhofstrasse"> | <img align="center" width="120" height="120" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/mountain_bike_trail_raw.png" alt="Mountain Bike"> |
| <img align="center" width="120" height="120" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/demp_data/mpi_outdoor_raw.png" alt="MPI Outdoor"> | <img align="center" width="120" height="120" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/demp_data/mpi_indoor_raw.png" alt="MPI Indoor"> | <img align="center" width="120" height="120" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/demp_data/bahnhofstrasse_raw.png" alt="Bahnhofstrasse"> | <img align="center" width="120" height="120" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/demp_data/mountain_bike_trail_raw.png" alt="Mountain Bike"> |

</div>

#### ROS-Setup:
Let's set up a new catkin workspace:
```shell
# Create new catkin workspace
source /opt/ros/noetic/setup.bash
mkdir -p ~/catkin_ws/src && cd ~/catkin_ws/src
catkin init
catkin config --extend /opt/ros/noetic
catkin config --cmake-args -DCMAKE_BUILD_TYPE=RelWithDebInfo

# Clone repos
git clone git@github.com:ANYbotics/anymal_d_simple_description.git
git clone git@github.com:ori-drs/procman_ros.git

# Symlink the WVN repository
ln -s ~/git/wild_visual_navigation ~/catkin_ws/src

# Dependencies
rosdep install -ryi --from-paths . --ignore-src
# Build
cd ~/catkin_ws
catkin build anymal_d_simple_description
catkin build procman_ros
catkin build wild_visual_navigation_ros

# Source
source /opt/ros/noetic/setup.bash
source ~/catkin_ws/devel/setup.bash
```

After successfully building the ROS workspace, you can run the entire pipeline by either using the launch file or by running the nodes individually.
Open multiple terminals and run the following commands:

- Run wild_visual_navigation
```shell
# ...
rosbag play --clock path_to_mission/*.bag
roslaunch wild_visual_navigation_ros view.launch
```


- Debugging (sometimes it is desirable to run the two nodes separately):
```shell
python wild_visual_navigation_ros/scripts/wvn_feature_extractor_node.py
```
```shell
python wild_visual_navigation_ros/scripts/wvn_learning_node.py
```


- The general configuration file can be found under: `wild_visual_navigation/cfg/experiment_params.py`
- This configuration is used in the `offline-model-training` and in the `online-ros` mode.
- When running the `online-ros` mode, additional configurations for the individual nodes are defined in `wild_visual_navigation/cfg/ros_params.py`.
- This configuration is filled from the ROS parameter server at runtime (see the sketch after this list).
- The default values for this configuration can be found under `wild_visual_navigation/wild_visual_navigation_ros/config/wild_visual_navigation`.
- We set an environment variable (`ENV_WORKSTATION_NAME`) to automatically load the correct global paths and trigger special behavior, e.g., when training on a cluster.
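As an illustration of that parameter-server mechanism, a node-side configuration could be populated roughly as follows; this is a hedged sketch with an invented parameter subset, not the actual code in `ros_params.py`:

```python
# Hedged sketch: populate a dataclass config from the ROS parameter server.
# The field names below are an invented subset, not the real WVN schema.
from dataclasses import dataclass, fields

import rospy


@dataclass
class FeatureExtractorParams:
    image_topic: str = "/camera/image"  # assumed default
    network_input_image_height: int = 448
    network_input_image_width: int = 448
    segmentation_type: str = "stego"


def load_params(prefix: str = "~") -> FeatureExtractorParams:
    params = FeatureExtractorParams()
    for f in fields(params):
        # Fall back to the dataclass default when the parameter is not set.
        current = getattr(params, f.name)
        setattr(params, f.name, rospy.get_param(f"{prefix}{f.name}", current))
    return params
```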


<img align="right" width="40" height="40" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/dino.png" alt="Dino">
## Development

```shell
rosrun procman_ros sheriff -l ~/git/wild_visual_navigation/wild_visual_navigatio
```
```shell
rosbag_play --tf --sem --flp --wvn mission/*.bag
```

<img align="right" width="40" height="40" src="https://github.com/leggedrobotics/wild_visual_navigation/blob/main/assets/images/dino.png" alt="Dino">

## Citation
```
@INPROCEEDINGS{frey23fast,
AUTHOR = {Jonas Frey AND Matias Mattamala AND Nived Chebrolu AND Cesar Cadena AND Maurice Fallon AND Marco Hutter},
TITLE = {{Fast Traversability Estimation for Wild Visual Navigation}},
BOOKTITLE = {Proceedings of Robotics: Science and Systems},
YEAR = {2023},
ADDRESS = {Daegu, Republic of Korea},
MONTH = {July},
DOI = {10.15607/RSS.2023.XIX.054}
}
```

If you are building on the STEGO integration or using the pre-trained models for comparison, please cite:
```
@INPROCEEDINGS{mattamala24wild,
AUTHOR = {Jonas Frey AND Matias Mattamala AND Piotr Libera AND Nived Chebrolu AND Cesar Cadena AND Georg Martius AND Marco Hutter AND Maurice Fallon},
TITLE = {{Wild Visual Navigation: Fast Traversability Learning via Pre-Trained Models and Online Self-Supervision}},
BOOKTITLE = {under review for Autonomous Robots},
YEAR = {2024}
}
```

If you are using the elevation_mapping integration:
```
@INPROCEEDINGS{erni23mem,
AUTHOR={Erni, Gian and Frey, Jonas and Miki, Takahiro and Mattamala, Matias and Hutter, Marco},
TITLE={{MEM: Multi-Modal Elevation Mapping for Robotics and Learning}},
BOOKTITLE={2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
YEAR={2023},
PAGES={11011-11018},
DOI={10.1109/IROS55552.2023.10342108}
}
```

