
Commit

update MAPF page, similar to ergodic search page (#45)
* fix a typo

* add link to google sheet table

* update MAPF page in a similar way as the ergodic search page

* Update _posts/2023-08-21-multi-agent-path-finding.md

* Update _posts/2023-08-21-multi-agent-path-finding.md

---------

Co-authored-by: Nico Zevallos <[email protected]>
wonderren and gnastacast authored Sep 30, 2023
1 parent 510f162 commit 604115d
Showing 1 changed file with 12 additions and 10 deletions.
22 changes: 12 additions & 10 deletions _posts/2023-08-21-multi-agent-path-finding.md
@@ -1,23 +1,22 @@
---
title: "Multi-Agent Path Finding"
title: "Multi-Agent Path Planning"
categories:
- research
description: "Multi-Agent Path Finding"
description: "Multi-Agent Path Planning"
published: true
image: img/posts/mapf_MATSPF.gif
tags:
- multi-agent_planning
---

### Table of Contents

-1. [Multi-Agent Path Finding]({{page.url | relative_url}}#multi-agent-path-finding)
-
-2. [Multi-Agent Multi-Objective Path Finding]({{page.url | relative_url}}#multi-agent-multi-objective-path-finding)
-
-3. [Multi-Agent Target Sequencing Path Finding]({{page.url | relative_url}}#multi-agent-target-sequencing-path-finding)
+| System | Multi-Agent | Multi-Objective | Traveling Salesman |
+| :----- | :-------------: | :---------: | :-----------------: |
+| [MA-PF]({{page.url | relative_url}}#multi-agent-path-finding) | ✓ | | |
+| [MA-MO-PF]({{page.url | relative_url}}#multi-agent-multi-objective-path-finding) | ✓ | ✓ | |
+| [MA-TS-PF]({{page.url | relative_url}}#multi-agent-target-sequencing-traveling-salesman-path-finding) | ✓ | | ✓ |

-4. <a href="https://docs.google.com/spreadsheets/d/1tfIeQ3ZysSg9gAOFEiNn6DdQYHhKfSv-XekHsSkj_m0/edit?usp=sharing">All Problems</a>.
+<a href="https://docs.google.com/spreadsheets/d/1tfIeQ3ZysSg9gAOFEiNn6DdQYHhKfSv-XekHsSkj_m0/edit?usp=sharing">All Problems</a>

### Multi Agent Path Finding

@@ -32,6 +31,7 @@ We seek to obtain the benefits of both coupled and decoupled approaches: we made
Subdimensional expansion is an approach that adapts existing planners, such as A* and RRTs, to solve Multi-Agent Path Finding (MA-PF) problems. It first generates an individual (sometimes optimal) plan for each agent, ignoring the other agents. For an N-agent system, this initial search yields N paths, which together form a one-dimensional subset of the NM-dimensional joint configuration space, where M is the number of degrees of freedom of each agent. Subdimensional expansion then directs the robots to follow these paths until the goal is reached or an agent-agent collision is detected. At the collision, the search space is locally increased in dimensionality along any path, found by the planning algorithm, that leads to the collision. This space grows only as needed to determine the (optimal) path to the goal, so the approach constructs a variable-dimensional search space of minimal size that still contains the optimal path. We implemented subdimensional expansion for the case where the configuration space of each robot can be represented as a graph, using A* as the underlying path planning algorithm, and we name the resulting algorithm M*.
M* can be proven to find an optimal path in finite time, or to terminate in finite time reporting that no path exists.
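To make the first stage concrete, below is a minimal Python sketch (not the actual M* implementation) of the individual-policy phase that subdimensional expansion starts from: each agent gets an individually optimal policy, computed here by a backwards breadth-first search on a small 4-connected grid with unit costs; the agents then follow their policies, and the set of agents involved in the first vertex collision is reported, i.e. the set M* would locally couple. The grid, starts, and goals are illustrative, and the collision-set backpropagation and locally coupled A* search of M* are omitted.

```python
# Minimal sketch of the individual-policy phase of subdimensional expansion,
# assuming a 4-connected grid with unit edge costs. The grid, starts, and
# goals are illustrative; the full M* coupling and collision-set
# backpropagation are omitted.
from collections import deque

def individual_policy(grid, goal):
    """Backwards BFS from the goal: maps each free cell to the next cell
    on an individually optimal path toward the goal (unit costs)."""
    rows, cols = len(grid), len(grid[0])
    policy = {goal: goal}
    frontier = deque([goal])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in policy:
                policy[(nr, nc)] = (r, c)   # step from (nr, nc) toward the goal
                frontier.append((nr, nc))
    return policy

def detect_first_collision(starts, policies, max_steps=50):
    """Advance every agent along its individual policy and return the set of
    agents involved in the first vertex collision (the set M* would couple)."""
    positions = list(starts)
    for _ in range(max_steps):
        positions = [policies[i][p] for i, p in enumerate(positions)]
        occupied, colliding = {}, set()
        for i, p in enumerate(positions):
            if p in occupied:
                colliding.update({i, occupied[p]})
            occupied[p] = i
        if colliding:
            return colliding
        if all(p == policies[i][p] for i, p in enumerate(positions)):
            return set()   # every agent reached its goal without conflict
    return set()

grid = [[0, 0, 0],
        [0, 1, 0],   # 1 = obstacle
        [0, 0, 0]]
starts = [(0, 0), (0, 2)]
goals = [(0, 2), (0, 0)]   # the agents swap corners, so their paths interact
policies = [individual_policy(grid, g) for g in goals]
print(detect_first_collision(starts, policies))   # the agents M* would couple
```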

+[Back to top]({{page.url | relative_url}}#table-of-contents)

### Multi Agent Multi Objective Path Finding

@@ -45,8 +45,9 @@ Starting from the subdimensional expansion and the conventional (standard) MA-PF

The foundation of Multi-Agent Multi-Objective Path Finding (MA-MO-PF) is Single-Agent Multi-Objective Path Finding (SA-MO-PF), which is still an active research area with many open questions. A fundamental challenge in SA-MO-PF is the large number of Pareto-optimal solutions, i.e., start-goal paths. To find these Pareto-optimal start-goal paths, one has to maintain a large number of Pareto-optimal paths from the starting location to every intermediate location while planning towards the goal. We address this challenge by incrementally building a data structure during the planning process to efficiently manage these Pareto-optimal paths. We call the resulting algorithm Enhanced Multi-Objective A* (E-MO-A*). E-MO-A* speeds up existing multi-objective search by up to an order of magnitude and is particularly advantageous for hard instances with many Pareto-optimal solutions. We have also developed multi-objective planners that handle dynamic environments, such as planning among moving obstacles and planning in graphs where edge costs can change.
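To illustrate why maintaining Pareto-optimal partial paths is the bottleneck, the following sketch shows the naive dominance filtering that a multi-objective search performs at every node it reaches; the incremental data structure that lets E-MO-A* avoid re-scanning the whole frontier is not reproduced here, and the two-objective cost vectors are invented for the example.

```python
# Minimal sketch of the non-dominance filtering at the core of multi-objective
# search: each graph node keeps the Pareto-optimal cost vectors of the partial
# paths that reach it. This is the naive version; the incremental data
# structure that E-MO-A* uses to speed up this check is not reproduced here.

def dominates(a, b):
    """True if cost vector a is no worse than b in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def try_insert(frontier, new_cost):
    """Add new_cost to a node's Pareto frontier unless it is dominated;
    prune any stored vectors that new_cost dominates."""
    if any(dominates(old, new_cost) or old == new_cost for old in frontier):
        return frontier, False          # dominated: prune this partial path
    kept = [old for old in frontier if not dominates(new_cost, old)]
    kept.append(new_cost)
    return kept, True

# Example: partial paths reaching the same node with (travel time, risk) costs.
frontier = []
for cost in [(10, 3), (8, 5), (9, 4), (12, 2), (9, 6)]:
    frontier, kept = try_insert(frontier, cost)
    print(cost, "kept" if kept else "pruned", "->", frontier)
```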

+[Back to top]({{page.url | relative_url}}#table-of-contents)

-### Multi Agent Target Sequencing Path Finding
+### Multi Agent Target Sequencing (Traveling Salesman) Path Finding


<figure>
@@ -56,3 +57,4 @@ The foundation of Multi-Agent Multi-Objective Path Finding (MA-MO-PF) is Single-

Another important variant of MA-PF we considered is to let a team of agents collectively visit a large number of goal locations (also called waypoints) before reaching their destinations. We call this problem Multi-Agent Traveling-Salesman Path Finding (MA-TS-PF); it arises in applications ranging from surveillance to logistics. MA-TS-PF involves not only planning collision-free paths but also sequencing multiple goal locations, i.e., assigning goals to agents as well as specifying the order in which the goals are visited. Solving MA-TS-PF to optimality is challenging, as it requires simultaneously addressing the curses of dimensionality arising from both MA-PF and the traveling salesman problem. We developed a new approach that handles agent-agent conflicts via subdimensional expansion while simultaneously allocating and sequencing targets via state-of-the-art multiple traveling salesman problem (mTSP) solvers. The subdimensional expansion dynamically modifies the dimensionality of the search space based on agent-agent conflicts and defers planning in the joint space until necessary. Concurrently, the complexity of target allocation and sequencing is addressed by embedding the mTSP solvers as (1) heuristics that underestimate the cost-to-go from any state, and (2) individual optimal policies that construct the low-dimensional search space for subdimensional expansion. We verified the performance of the proposed approach in simulations with up to 20 agents and 50 targets.
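As a toy stand-in for the sequencing side, the sketch below computes an admissible cost-to-go underestimate for one agent and its remaining targets using a minimum-spanning-tree bound with Euclidean distances; the actual approach embeds full mTSP solvers, which this simple bound does not attempt to reproduce.

```python
# Minimal sketch of an admissible cost-to-go underestimate for the target-
# sequencing part of the problem: the weight of a minimum spanning tree over
# an agent's current position and its remaining targets never exceeds the cost
# of any path that still has to visit them all (assuming the pairwise distances
# are lower bounds on travel cost, as Euclidean distances are). The actual
# approach embeds mTSP solvers; this MST bound is only an illustrative stand-in.
import math

def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mst_lower_bound(position, targets, dist=euclidean):
    """Prim's algorithm over {position} U targets; returns the total MST weight."""
    nodes = [position] + list(targets)
    in_tree = [True] + [False] * len(targets)
    best = [dist(position, t) for t in targets]   # cheapest edge into the tree
    total = 0.0
    for _ in targets:
        # pick the cheapest node not yet in the tree
        i = min((i for i, used in enumerate(in_tree[1:], 1) if not used),
                key=lambda i: best[i - 1])
        in_tree[i] = True
        total += best[i - 1]
        for j, t in enumerate(targets):
            if not in_tree[j + 1]:
                best[j] = min(best[j], dist(nodes[i], t))
    return total

# Example: one agent at the origin with three remaining targets.
print(mst_lower_bound((0, 0), [(2, 0), (2, 2), (0, 2)]))   # prints 6.0 here
```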

+[Back to top]({{page.url | relative_url}}#table-of-contents)
