Usage
This wiki page describes the usage of the planner and the provided examples in detail. Furthermore, we describe a general pathway to integrating the planner and the environment recognition into your system and an experiment to show some capabilities of our system.
This project includes several packages which are used for different purposes. In this section, we try to give a short overview of the packages and their purpose:
- mhp_robot: This package provides functionalities regarding the abstraction of a robot. Furthermore, the package provides functions for environment recognition, obstacles, and hardware-related functions (robot_mocap).
- mhp_planner: This package provides the general functionalities of the planner. It includes the controller, the structured optimization problem, the hypergraph strategy, plants, and solvers for the optimization problem. Note that this package is abstracted from control_box_rst but with various adaptations and simplifications for the robot context.
- mhp_robot_ur10_example: This meta package provides several functionalities directly related to the UR10 robot. The following sub-packages provide different functionalities:
- universal_robot, ur_description, ur_gazebo, ur_modern_driver: These packages are used for the simulation and the real control of the UR10 robot and are part of the ROS-Industrial project. Please refer to the dependencies section for further information.
- ur_launch: This package provides launch files for the simulation and the real robot. Please refer to the simulation and real robot section for further information.
- ur_utilities: This package provides several functions related to the mhp_robot package. In particular, functionalities regarding the environment and its connection to the UR10 as the defined robot are implemented here.
- ur10_planner: This package provides functionalities of the UR10 connected to the planner package mhp_planner. In particular, the plant (UR10), the task the robot should execute, and the cost and potential functions are defined here with special respect to the UR10 robot.
The planner provides different options to customize the controller, the used plant, and the environment recognition. Since the number of options is significant, we try to give an overview here.
Note that we have two types of options:
- Options you pass while starting a launch file (further named arguments)
- Options you define in additional YAML files (further named options)
- workspace_monitor_mode (default: automatic; options: manual, bag, mocap, tf): This argument sets the mode of the workspace monitor. The workspace monitor serves as a general node for handling the environment and providing functionality concerning obstacles.
- automatic: Uses predefined trajectory definitions for obstacles
- manual: Doesn't move the obstacles. For dynamic obstacles, you get the option to move the obstacles manually in rviz.
- tf: Uses the /tf topic to get the obstacle poses.
- bag: Replays a ROSbag with a recorded motion of an obstacle (e.g. a human motion) and publishes the poses of the obstacle on /tf for the planner.
- mocap: Uses the NatNet SDK to receive motion tracking data from our OptiTrack Motion Capture System. The planner subscribes to /tf to get the obstacle poses tracked by the Mocap system.
- obstacle arguments: The arguments static_obstacles, dynamic_obstacles, planes, utility_objects, humans, obstacle_trajectories, and human_trajectories define the paths from which the obstacle or trajectory definitions (the latter for workspace_monitor_mode:=automatic) are loaded. By default, no obstacles are loaded.
- prediction_mode (default: None; options: Polynom, Kalman, Golay, SPL): This argument defines the mode of the human motion forecast. The following options are available:
- None: No prediction is used.
- Polynom: A polynomial estimation of the current human skeleton for extrapolation.
- Kalman: A Kalman filter estimation of the current human skeleton for extrapolation.
- Golay: A Savitzky-Golay filter estimation of the current human skeleton for extrapolation.
- SPL: A neural network prediction of the human motion. For further information please refer to the dependencies page.
- foot_print_prediction_mode (default: None; options: Polynom, Kalman, Golay): This argument defines the mode of the human motion forecast for the hip (foot print), since the prediction modes above only work relative to the hip. The following options are available:
- None: No prediction is used.
- Polynom: A polynomial estimation of the current human foot print for extrapolation.
- Kalman: A Kalman filter estimation of the current human foot print for extrapolation.
- Golay: A Savitzky-Golay filter estimation of the current human foot print for extrapolation.
- uncertainty_mode (default: None; options: GMM, Constant): This argument defines how the uncertainty of the human motion forecast is estimated. The following options are available:
- None: No uncertainty estimation
- GMM: An uncertainty estimation of the human motion extrapolations (not working with SPL) based on a Gaussian Mixture Model.
- Constant: An uncertainty estimation of the human motion extrapolations (not working with SPL) based on the assumption of constantly growing uncertainty.
Note that we do not discuss all arguments here but only the most important ones. For a complete list of arguments, please refer to the launch files in the mhp_robot_ur10_example/ur_launch folder. The naming and comments provide a brief overview of the arguments.
Controller settings (default_controller.yaml)
You can change various parameters regarding the cost function, boundaries, and the optimization problem in this file. The following list provides an overview of important options (a structural sketch follows the list):
- u_lb, u_ub, x_lb, x_ub: The lower and upper bounds of the control input and the state (here, the joint velocities and angles).
- dt, n in ocp/finite_differences_grid: The time step and the number of points of the finite differences grid. This defines the trajectory length T = (n-1)*dt.
- Q, R, Rd in ocp/stage_cost/ur_stage_cost/cost_function: The weights of the cost function. Q is the weight of the state (with respect to the goal), R is the weight of the control input, and Rd is the weight of the control input deviation. Note that you must activate the corresponding cost function in the ocp/stage_cost/cost_function section.
- collision objects in ocp/stage_cost/ur_stage_cost/collision_potential: You can decide which obstacles are considered in the collision potential calculation by changing the value to true or false.
- weights, margins in ocp/stage_cost/ur_stage_cost/collision_potential: The weights and margins of the collision potential calculation.
- obstacle_prediction in ocp/stage_cost/ur_stage_cost/collision_potential: Decide whether the obstacle prediction is used.
- uncertainty_mode in ocp/stage_cost/ur_stage_cost/collision_potential: Decide which uncertainty representation is used: split the skeleton in case the uncertainty is too large, or grow the radius of the body parts.
- Similar parameters are available for the final_stage_cost.
- ud_lb, ud_ub in ocp/stage_inequalities/ur_inequality_constraint/acceleration_constraint: The lower and upper bounds of the acceleration constraint.
- ocp/stage_inequalities/ur_inequality_constraint/collision_constraint: Similar options as for the collision potential calculation. Note that the inequality constraint thresholds for collisions should be smaller than those for the potentials so that the potentials take effect before the constraints become active.
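To illustrate how these options are nested, here is a minimal, hypothetical sketch of a default_controller.yaml. The values are placeholders and the exact keys in the shipped file may differ, so treat it only as a reading aid for the paths listed above:

```yaml
# Hypothetical excerpt of default_controller.yaml (placeholder values, not the shipped defaults).
u_lb: [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0]        # lower bounds of the control input (joint velocities)
u_ub: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]              # upper bounds of the control input
x_lb: [-3.14, -3.14, -3.14, -3.14, -3.14, -3.14]  # lower bounds of the state (joint angles)
x_ub: [3.14, 3.14, 3.14, 3.14, 3.14, 3.14]        # upper bounds of the state
ocp:
  finite_differences_grid:
    dt: 0.1   # time step
    n: 11     # number of grid points, i.e. trajectory length T = (n-1)*dt = 1.0 s
  stage_cost:
    ur_stage_cost:
      cost_function:
        Q: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]    # state weights (with respect to the goal)
        R: [0.1, 0.1, 0.1, 0.1, 0.1, 0.1]    # control input weights
        Rd: [0.1, 0.1, 0.1, 0.1, 0.1, 0.1]   # control input deviation weights
      collision_potential:
        weights: 1.0               # weight of the collision potential (structure illustrative)
        margins: 0.2               # margin of the collision potential (structure illustrative)
        obstacle_prediction: true  # use the obstacle prediction
        uncertainty_mode: 0        # uncertainty representation (value type illustrative)
  stage_inequalities:
    ur_inequality_constraint:
      acceleration_constraint:
        ud_lb: [-2.0, -2.0, -2.0, -2.0, -2.0, -2.0]  # lower bounds of the acceleration
        ud_ub: [2.0, 2.0, 2.0, 2.0, 2.0, 2.0]        # upper bounds of the acceleration
      collision_constraint: {}     # similar options as for the collision potential
```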
Plant settings (default_plant.yaml)
This file defines the plant which is used by the planner. We currently support only the UR10 robot. If you want to adapt the planner for your own robot, please refer to the own systems section. The following list provides an overview of important options for the plant (a structural sketch follows the list):
- real_robot: Decide whether the plant is used for the real robot or for the simulation.
- reset_on_start: For the simulation, it is possible to reset the robot if you restart the task
- start_state: The start state of the robot. Note that the start state is only used for the simulation. For the real robot, the start state is defined by the current state of the robot.
- control_mode_type in control_mode: Decide whether you want to use the OPEN_LOOP or CLOSED_LOOP control mode. The open_loop mode directly sends the controls of the planner, while the closed_loop mode adds a P controller for the commands. The feed_forward option allows integrating the planner results directly.
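As a reading aid, here is a minimal, hypothetical sketch of a default_plant.yaml with placeholder values; the exact keys in the shipped file may differ:

```yaml
# Hypothetical excerpt of default_plant.yaml (placeholder values).
real_robot: false     # false: Gazebo simulation, true: real UR10
reset_on_start: true  # simulation only: reset the robot when the task is restarted
start_state: [-1.0, -0.3, 0.0, 0.0, 0.0, 0.0]  # simulation only; the real robot starts from its current state
control_mode:
  control_mode_type: CLOSED_LOOP  # or OPEN_LOOP
  feed_forward: true              # integrate the planner results directly
```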
Task settings (default_task.yaml)
This file defines the task which is executed by the planner. The following list provides an overview of important options for the task (a structural sketch follows the list):
- initial_goal_state: The goal state used by the planner. Note that you can overwrite the goal by sending messages to the topics. For further information, refer to the simulation and real robot section.
- computation_delay: The delay of the planner computation. This delay is used to compensate for the time needed to predict future states. If you choose a delay < 0, the computation delay is estimated from previous delays with one of the filters (or simply the last planning time).
- general_offset_delay: Adds an extra delay to account for communication delays or similar effects.
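As a reading aid, here is a minimal, hypothetical sketch of a default_task.yaml with placeholder values; the exact keys in the shipped file may differ:

```yaml
# Hypothetical excerpt of default_task.yaml (placeholder values).
initial_goal_state: [1.0, -0.3, 0.0, 0.0, 0.0, 0.0]  # can be overwritten via the /state_targets topic
computation_delay: -1.0     # < 0: estimate the delay from previous planning times
general_offset_delay: 0.0   # extra delay for communication latencies and similar effects
```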
Environment settings
To change the environment treatment, you have to change the config files in the mhp_robot/config folder. They are structured in subfolders according to the settings they belong to. The following list gives an overview of the most important options:
- human_motion_prediction: This folder contains files for changing the extrapolation and prediction parameters. The comments inside the files describe the parameters.
- obstacles: This folder contains various files for the obstacle treatment. The YAML files in this folder describe the obstacles for MHP, and the rosbag and trajectories subfolders contain files and a rosbag (as a test motion) for workspace_monitor_mode:=bag and automatic, respectively. If you want to use a different rosbag or have installed MHP4HRI on your own system, you have to change the bag path in the rosbag_play.yaml file (see the sketch after this list).
- mocap: In case you use a mocap system, this folder provides an example configuration file for our OptiTrack system.
- spl_model: This folder contains a config file used for training the RNN-SPL model, as well as the trained model.
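If you need to point the bag replay to your own recording, the relevant entry is in rosbag_play.yaml. The snippet below is purely illustrative: the actual key name in that file may differ, and bag_path is only a hypothetical placeholder for it:

```yaml
# Illustrative only: the real key name in rosbag_play.yaml may differ; bag_path is a hypothetical placeholder.
bag_path: "/root/catkin_ws/src/mhp_review/mhp_robot/config/obstacles/rosbag/<your_recording>.bag"
```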
If you define a new YAML file, you can load it by changing the rosparam command in the launch file.
Note that this list is not complete. For a full list of options, please refer to the mhp_planner/config folder and its three files. The naming and comments briefly describe the options inside the YAML files.
This section explains the usage of the planner, including a simulation with Gazebo and a visualization in rviz.
You have to start the simulation with the following command (in case you want to change the default arguments, you can do this directly here):
roslaunch ur_launch ur10_simulation.launch
Wait until the line
[ INFO] [xxx.xxx]: Planner started
appears in the terminal. You should also see the UR10 robot in the rviz window, including obstacles, in case you added them in the command line.
To start the task or the environment, we use ROS service calls. The current task is defined in the default_task.yaml file, and the robot will move toward the goal state defined in this file.
To start the robot and the environment, you have to call the following service:
rosservice call /start_all true
and to stop the robot and the environment, you have to call the following service:
rosservice call /start_all false
If you only want to start the robot, use the /start_task service. If you only want to start the environment, use the /robot_workspace_monitor/start_simulation service (if the workspace_monitor_mode is automatic) or the /bag_parser/robot_tf_bag_parser/start_simulation service to replay a rosbag.
For a few examples, please refer to the examples section.
After starting the simulation, you can send a new state target. The following command sends a new goal state to the planner:
rostopic pub /state_targets trajectory_msgs/JointTrajectory "header:
  seq: 0
  stamp:
    secs: 0
    nsecs: 0
  frame_id: ''
joint_names:
- 'shoulder_pan_joint'
- 'shoulder_lift_joint'
- 'elbow_joint'
- 'wrist_1_joint'
- 'wrist_2_joint'
- 'wrist_3_joint'
points:
- positions: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
  velocities: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
  accelerations: []
  time_from_start: {secs: 0, nsecs: 0}"
In the positions field, the desired joint angles are defined.
Using the planner for a real robot is similar to the simulation. The main difference is that you have to start the launch file
roslaunch ur_launch ur10_real.launch
instead of the simulation launch file. The real launch file starts the drivers for the UR10 robot. Please note that you must check the launch file and provide the correct information for your robot (IP address, payload, topic names, ...).
The other steps for starting the planner and the task execution remain the same. If you have live motion capture data, using the /start_task service is enough.
To demonstrate the functionality in real robot applications, the following videos show the planner in action (live footage and the visualization, including the planned robot trajectory). The second row presents the same robot motion, but now with a human actively preventing the robot from reaching its goal.
Visualization | Live |
---|---|
So that all members of the HRI community can run the demos, we only provide demos for the simulation. We provide two examples, both with a human as an obstacle:
- Automatic Trajectory
- Recorded Rosbag motion
This demo provides an easy starting point for the usage of our planner. The demo includes a simple motion of the robot (Start state: [-1.0, -0.3, 0.0, 0.0, 0.0, 0.0]; Goal state: [1.0, -0.3, 0.0, 0.0, 0.0, 0.0]), as well as a simulated trajectory of a human motion.
You can start the demo with the following command (Note that the human is included as one default obstacle):
roslaunch ur_launch ur10_simulation.launch workspace_monitor_mode:=automatic human_trajectories:="<PathToMHPRepository>/mhp_robot/config/obstacles/trajectories/human_trajectory_arm.yaml" prediction_mode:=None
Please include your path to the repository in the command. Note that the error regarding the unspecified P gain can be ignored. For further information, please refer to the FAQ section.
In case you want to consider extrapolations of human motions, change the command as follows:
roslaunch ur_launch ur10_simulation.launch workspace_monitor_mode:=automatic human_trajectories:="<PathToMHPRepository>/mhp_robot/config/obstacles/trajectories/human_trajectory_arm.yaml" prediction_mode:=Polynom
For example, when using the Docker container, the full path is: "/root/catkin_ws/src/mhp_review/mhp_robot/config/obstacles/trajectories/human_trajectory_arm.yaml" (the part of the path inside the mhp_robot package is always the same).
Note that these commands only start the simulation itself. To start the task execution and the planner, please refer to the Start the task execution section and use the service call for starting the planner and the environment (rosservice call /start_all true).
This demo uses the same robot motion, but with a recorded motion from a motion capture system. The recorded motion is saved as a rosbag file and included in this repository. You can start the demo with the following command (note that the human is included as one default obstacle):
roslaunch ur_launch ur10_simulation.launch workspace_monitor_mode:=bag prediction_mode:=None uncertainty_mode:=None
If you want to use a different rosbag or have installed MHP4HRI on your own system, you have to change the bag path in the rosbag_play.yaml file.
In case you want to consider extrapolations of human motion (and uncertainties of the extrapolation if you set the uncertainty_mode to GMM or Constant), change the command as follows:
roslaunch ur_launch ur10_simulation.launch workspace_monitor_mode:=bag prediction_mode:=Polynom uncertainty_mode:=None
Note that these commands only start the simulation itself. To start the task execution and the planner, please refer to the Start the task execution section and use the service call for starting the planner and the environment (rosservice call /start_all true).
The following two images show a snapshot of the results when motion extrapolations are included. It is visible that the planner takes a longer path to avoid the human if only the current position is available (left image). If motion extrapolations are available, the robot takes a path closer to the human and reaches the goal faster, since the planner receives the information that the human moves the arms up (right image). Note that the results can differ depending on the hardware and other tasks running on the system, since they influence the computation time of the planner and the number of iterations. For more information about our hardware, please refer to the tested systems section on the installation page.
No motion prediction | Polynomial extrapolation |
---|---|
The package structure of the project is modular so that you can customize the planner and the environment recognition for your system. As a general pathway for using the planner with your own system, we recommend the following steps:
- Check out the mhp_robot_ur10_example package
- In case you already have a Gazebo simulation for your robot, you can substitute the ur_gazebo package with your own package or add your own simulation
- Add urdf description files for your robot
- In case you want to use a real robot, please add the corresponding drivers
The previous steps don't rely on the planner or the environment recognition itself since they only add the basic features of a robot into this framework. The following steps are required to use the robot with the planner and the environment:
- Add utility functions (the ur_utilities package serves as an example). Note that most of these functions inherit from the general mhp_robot package; you should use the same structures in your own system. In particular, the following files have to be adapted to your system:
- ur_collision: This file defines distance functions for the robot itself and in combination with obstacles. You have to adapt the distance functions to your robot (number of links, ...).
- ur_description & ur10_description: These files define the robot model. You must adapt the robot model to your robot (segments, frames, URDF definition).
- ur_kinematic and ur_inverse_kinematic: These files provide the forward and inverse kinematics for the robot. If you have an analytic inverse kinematic, we recommend providing it for the calculateIKSolution function. Otherwise, you can check the numerical solution inside the RobotInverseKinematic class.
- ur_manipulator: This file includes functions for sending control messages to your robot (simulation or real robot)
- ur_utility: This file includes information about the robot description and model. You must adapt the information to your robot (Controller names...).
- collision_marker_publisher: This file includes functions for publishing the collision markers of the robot in the rviz visualization. You have to adapt the topic names for your robot.
- Add planner functions:
- ur_mhp_planner_node: The general node that starts the planner itself. In case you use a different namespace, you have to adapt the call of loadFromParameterServer.
- ur_robot_corbo: This file includes functions for loading parameters from the YAML file and for controlling the robot
- ur_task_corbo: This file includes functions for the task execution.
- ocp folder: You must adapt the cost and constraint functions in this folder. Note that the calculation is mainly done in the other packages, but the function calls are tied to one robot type due to the use of the ur_collision and kinematic elements.
This section provides a brief overview of the capabilities of our human motion forecasting with uncertainties in combination with a state-of-the-art occupancy region estimation (https://github.com/Sven-Schepp/SaRA).
The comparison uses a part of the recorded human motion and applies it in the simulation to provide the same baseline for both approaches. MHP4HRI uses the potential functions with the default parameters for the stage costs, as well as a polynomial estimation of the human motion. Furthermore, we apply the GMM uncertainty estimation with skeleton splitting to the human motion extrapolation. The SaRA approach uses the recorded human motion and our planner, but without human motion extrapolation or the potential functions.
The following videos show the results of both approaches (MHP4HRI with and without visualized uncertainties). Note that the visualization of the red uncertainty body parts sometimes lags due to their number and variation:
MHP4HRI | SaRA |
---|---|
As simple metrics for the comparison, we use the time the robot needs to reach the goal and the minimum distances between the human body parts and the robot. Note that the human motion runs for 5 seconds before the robot starts to move, since the uncertainty estimation requires previous extrapolations to work properly.
The robot needs 11.7 s to reach the goal if we apply SaRA. If we apply MHP4HRI, this time is reduced by 25% to 8.8 s.
One reason for this is that the planner finds an alternative trajectory instead of stopping the robot in case of high proximity.
The following table shows the minimum distances (in metres) over the whole motion for SaRA and MHP4HRI:
Approach | Head | Torso | Pelvis | Right Upper Arm | Left Upper Arm | Right Lower Arm | Left Lower Arm |
---|---|---|---|---|---|---|---|
SaRA | 0.4012 | 0.1006 | 0.1891 | 0.1553 | 0.0466 | 0.0262 | 0.1842 |
MHP4HRI | 0.6274 | 0.1843 | 0.3919 | 0.3545 | 0.1673 | 0.2807 | 0.1975 |
The table shows that MHP4HRI increases the minimum distance for all body parts. One reason is the proactive collision avoidance, which is not included in SaRA and moves the robot around the human and its uncertainty body parts at a larger distance. Another reason is that we use a recorded motion to provide a common baseline: the human therefore keeps moving, and gets even closer to the robot, even when SaRA stops the robot. In real life, a human would mostly move away from the robot rather than towards it. Nevertheless, MHP4HRI already moves the robot away from the human and allows a greater workspace.
Both metrics illustrate a simple use case of MHP4HRI and its ability to improve robot motion in human-robot interaction scenarios when the environment module is used.