## System requirements

Tested systems and ROS2 distro:
|Systems|ROS2 distro|Build status|
|--|--|--|
NOTE 1: check for any error messages, and do not disregard them. If `pip install` does not complete cleanly, various features will not work. For example, `open3d` does not yet support `python3.12`, so you will need to set up a Python 3.11 `venv` first.
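For example, a minimal sketch of setting up a Python 3.11 `venv` (assuming `python3.11` is already installed; `requirements.txt` is the dependency file installed by the `pip install` step above):

```shell
# Create and activate a Python 3.11 virtual environment
python3.11 -m venv .venv
source .venv/bin/activate
python --version                  # should report Python 3.11.x
pip install -r requirements.txt   # re-run the dependency install inside the venv
```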


Install `rust` language support following these [instructions](https://www.rust-lang.org/tools/install). Then install the 1.79.0 toolchain, which provides the matching version of `cargo`, the `rust` package manager:
```shell
rustup install 1.79.0
rustup default 1.79.0
```


`cargo` should now be available in the terminal:
```shell
cargo --version
```

Build `go2_ros_sdk`. You need to have `ros2` and `rosdep` installed. If you do not, follow these [instructions](https://docs.ros.org/en/humble/Installation.html). Then:
```shell
source /opt/ros/$ROS_DISTRO/setup.bash
rosdep install --from-paths src --ignore-src -r -y
colcon build
```
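If `rosdep` has never been initialized on this machine, the `rosdep install` step may fail; initializing it is a one-time operation (standard `rosdep` usage, not specific to this repo):

```shell
sudo rosdep init   # one-time setup; prints an error you can ignore if already initialized
rosdep update      # refresh the local dependency index
```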

## Usage

Don't forget to set up your Go2 robot in Wi-Fi mode and obtain its IP address. You can use the mobile app to get it: go to Device -> Data -> Automatic Machine Inspection and look for STA Network: wlan0.

```shell
source install/setup.bash
export CONN_TYPE="webrtc"
ros2 launch go2_robot_sdk robot.launch.py
```

The `robot.launch.py` code starts many services/nodes simultaneously, including

* `robot_state_publisher`
* `ros2_go2_video` (front color camera)
* `pointcloud_to_laserscan_node`
* `go2_robot_sdk/go2_driver_node`
* `go2_robot_sdk/lidar_to_pointcloud`
* `rviz2`
* `joy` (ROS2 driver for generic joysticks and game controllers)
* `teleop_twist_joy` (facility for tele-operating Twist-based ROS2 robots with a standard joystick; converts `joy` messages to velocity commands)
* `twist_mux` (twist multiplexer with source prioritization)
* `foxglove_launch` (launches the Foxglove bridge)
* `slam_toolbox/online_async_launch.py`
* `nav2_bringup/navigation_launch.py`

When you run `robot.launch.py`, `rviz` will fire up, lidar data will begin to accumulate, and the front color camera feed will be displayed (typically after 4 seconds). Your dog will then be waiting for commands from your joystick (e.g. an Xbox controller), so you can steer it through your house and collect lidar mapping data.
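To confirm that data is flowing, you can inspect the active topics from another terminal (a quick sanity check; the camera topic is the one referenced later in this README, other topic names depend on your setup):

```shell
source install/setup.bash
ros2 topic list                         # list everything the stack publishes
ros2 topic hz /go2_camera/color/image   # rough frame rate of the front color camera
```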

## Real time image detection and tracking

This capability is directly based on [J. Francis's work](https://github.com/jfrancis71/ros2_coco_detector). Launch the `go2_ros_sdk` as described above. After a few seconds, the color image data will be available at `go2_camera/color/image`. In another terminal enter:

```bash
source install/setup.bash
ros2 run coco_detector coco_detector_node
```

There will be a short delay the first time the node is run while PyTorch TorchVision downloads the neural network. You should see a download progress bar. The network is then cached for subsequent runs.
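If you need to inspect or relocate the cached weights, TorchVision uses the standard `torch` cache directory (the paths below assume the default `TORCH_HOME`):

```shell
ls ~/.cache/torch/hub/checkpoints/   # downloaded model weights live here by default
export TORCH_HOME=/path/to/cache     # optional: relocate the cache before running the node
```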

On another terminal, to view the image stream annotated with the detections:
```shell
ros2 run image_tools showimage --ros-args -r /image:=/annotated_image
```
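To watch the raw detection messages (standard `ros2` CLI; `/detected_objects` is the topic named in the parameter notes below):

```shell
source install/setup.bash
ros2 topic echo /detected_objects   # prints Detection2DArray messages as they arrive
```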

Example Use:

```shell
ros2 run coco_detector coco_detector_node --ros-args -p publish_annotated_image:=False -p device:=cuda -p detection_threshold:=0.7
```

This runs the coco detector without publishing the annotated image (`publish_annotated_image` is `True` by default), using the CUDA device (`device` is `cpu` by default), and with `detection_threshold` set to 0.7 (0.9 by default). The `detection_threshold` should be between 0.0 and 1.0; the higher the value, the more detections are rejected. If you see too many false detections, try increasing this number. With the annotated image disabled, only `Detection2DArray` messages are published, on the topic `/detected_objects`.

## 3D map generation

To save the map, `export` the following:

```shell
export MAP_SAVE=True
export MAP_NAME="3d_map"
```

Every 10 seconds, a map will be saved to the root folder of the repo.
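Putting it together, a minimal sketch of a mapping session (the variables must be exported before launching; the launch command is the one from the Usage section):

```shell
source install/setup.bash
export MAP_SAVE=True
export MAP_NAME="3d_map"
ros2 launch go2_robot_sdk robot.launch.py   # drive the robot around; "3d_map" is saved every 10 seconds
```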

## Multi robot support
If you want to connect several robots for collaboration:

```shell
export ROBOT_IP="robot_ip_1, robot_ip_2, robot_ip_N"
```
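For example, to connect to two robots over webrtc (the IP addresses below are placeholders; `CONN_TYPE` and the launch command come from the sections above):

```shell
export ROBOT_IP="192.168.1.101, 192.168.1.102"
export CONN_TYPE="webrtc"
ros2 launch go2_robot_sdk robot.launch.py
```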

## Switching between a webrtc connection (Wi-Fi) and CycloneDDS (Ethernet)

```shell
export CONN_TYPE="webrtc"
```
or
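```shell
export CONN_TYPE="cyclonedds"   # assumed counterpart to the webrtc value above, per the section heading
```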