Exploration Planning

Receding Horizon Next Best View Planner

The receding horizon next best view planner is a real-time capable exploration path planner. It does not require any prior knowledge of the environment, except for the boundaries of the area to be explored. From the current pose, it expands a geometric tree of possible future poses in order to find a next pose with high exploration gain. This gain reflects the exploration of space (or surface area) that is not yet (sufficiently) known. As the vehicle proceeds along the path, the tree is recomputed, taking into account the new information from the sensor. In every iteration, the best branch from the previous iteration is retained.
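The following is a minimal Python sketch of one planning iteration, intended only to illustrate the receding horizon scheme described above; it is not the planner's actual implementation. The helpers sample_pose, collision_free, and exploration_gain are assumed to be provided by the map representation and sensor model, and all parameter values are placeholders.

```python
import math

class Node:
    def __init__(self, pose, parent=None, gain=0.0):
        self.pose = pose          # (x, y, z, yaw)
        self.parent = parent
        self.gain = gain          # accumulated exploration gain along the branch

def dist(a, b):
    # Euclidean distance over the position part of the pose
    return math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(3)))

def steer(from_pose, to_pose, max_dist):
    # Move from from_pose towards to_pose by at most max_dist
    d = dist(from_pose, to_pose)
    if d <= max_dist:
        return to_pose
    s = max_dist / d
    return tuple(from_pose[i] + s * (to_pose[i] - from_pose[i]) for i in range(3)) + (to_pose[3],)

def plan_iteration(current_pose, sample_pose, collision_free, exploration_gain,
                   n_nodes=50, extension_range=1.0, degressive_coeff=0.5):
    """One receding-horizon iteration: grow a geometric tree of collision-free
    poses, score each branch by the exploration gain of its nodes (discounted
    by travel distance), and return only the first edge of the best branch."""
    root = Node(current_pose)
    tree = [root]
    for _ in range(n_nodes):
        sample = sample_pose()                       # random pose inside the exploration bounds
        nearest = min(tree, key=lambda n: dist(n.pose, sample))
        new_pose = steer(nearest.pose, sample, extension_range)
        if not collision_free(nearest.pose, new_pose):
            continue
        # branch gain: parent's gain plus the new node's gain,
        # penalized by the distance travelled to reach it
        g = nearest.gain + exploration_gain(new_pose) * math.exp(
            -degressive_coeff * dist(nearest.pose, new_pose))
        tree.append(Node(new_pose, nearest, g))
    best = max(tree, key=lambda n: n.gain)
    # walk back to the child of the root: only this first segment is executed,
    # then the tree is rebuilt from the new pose (receding horizon), with the
    # best branch retained as a seed
    node = best
    while node.parent is not None and node.parent is not root:
        node = node.parent
    return node.pose, best
```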

Specifically, the planner maintains an occupancy map of its environment. This guarantees that paths are only planned within free space, while also giving a notion of exploration progress. While the standard gain is the visible, yet unmapped volume (or area), a gain coefficient can also be assigned to occupied or free space, and a custom gain model depending on the occupancy probability can be encoded. If the exploration is driven by the surface of a mesh model, the gain is computed from the amount of surface that can be inspected. All versions can also be run in a multi-agent setup.
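As an illustration of such a gain model, the sketch below accumulates a per-voxel coefficient along each sensor ray, so that unmapped, free, and occupied space can be weighted differently. It is hypothetical: the occupancy-map interface (ray, state, voxel_volume) and the fov_rays sensor model are assumed, not taken from the actual package. A function like this, bound to the map and sensor model, could serve as the exploration_gain callable in the planning loop sketched above.

```python
def exploration_gain(pose, occupancy_map, fov_rays, max_range,
                     gain_unmapped=1.0, gain_free=0.0, gain_occupied=0.0):
    """Hypothetical volumetric gain: cast rays through the sensor's field of
    view from the given pose and sum a coefficient per traversed voxel,
    depending on whether that voxel is unmapped, free, or occupied."""
    gain = 0.0
    for direction in fov_rays(pose):
        for voxel in occupancy_map.ray(pose, direction, max_range):
            state = occupancy_map.state(voxel)     # 'unmapped' | 'free' | 'occupied'
            if state == 'unmapped':
                gain += gain_unmapped * occupancy_map.voxel_volume
            elif state == 'free':
                gain += gain_free * occupancy_map.voxel_volume
            else:
                gain += gain_occupied * occupancy_map.voxel_volume
                break                              # occupied voxel blocks the rest of the ray
    return gain
```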

Launching the Planner in a Gazebo Simulation

To install the necessary packages, refer to the installation guide. To launch the demo scenario for volumetric exploration in the RotorS simulator, type:

roslaunch interface_nbvp_rotors flat_exploration.launch

and for the surface area exploration demo scenario, type:

roslaunch interface_nbvp_rotors area_exploration.launch

For the same two scenarios with three agents operating at a time, try the following (this requires considerable computation power, as the agents collaboratively map occupancy and inspected surface):

roslaunch interface_nbvp_rotors multiagent_flat_exploration.launch
roslaunch interface_nbvp_rotors multiagent_area_exploration.launch

More information about the demos can be found on the demo page.

Credits

This algorithm was developed by Andreas Bircher with the help and support of the members of the Autonomous Systems Lab. The work was supported by the European Commission-funded project AEROWORKS.