ADHERENT scripts execution
This wiki describes how to run the Python scripts related to the simulation and experimental results obtained in the ADHERENT paper. It is assumed here that a proper setup has been configured by following the ADHERENT setup configuration wiki.
In this wiki, Section 1 needs to be performed first. From Section 2 on, each section addresses a different component of the ADHERENT pipeline and is independent of the other sections, so you are not required to follow the wiki in order. Just complete Section 1, then jump directly to the section(s) you are interested in, before concluding with Section 8.
- Connect the joystick to your laptop via Bluetooth or USB.
- Restart the container:

  ```
  docker restart adherent
  ```
- ⚠️ For each terminal from which you need to access the container, run:

  ```
  xhost +
  docker exec -it adherent bash
  ```

  and, within the container, reach the ADHERENT scripts folder by:

  ```
  cd adherent/scripts
  ```
- Retarget an example motion using Whole-Body Geometric Retargeting (WBGR) by:

  ```
  python retargeting.py
  ```

  Wait for the motion to be retargeted and press Enter to visualize the retargeted motion.
- Retarget an example motion using Kinematically-Feasible Whole-Body Geometric Retargeting (KFWBGR) by:

  ```
  python retargeting.py --KFWBGR
  ```
- Obtain mirrored retargeted MoCap data while using WBGR by:

  ```
  python retargeting.py --mirroring
  ```

  and while using KFWBGR by:

  ```
  python retargeting.py --KFWBGR --mirroring
  ```
- Run the `retargeting.py` script with `--save` and the retargeted MoCap data will be saved to the `retargeted_motion.txt` file in the same folder as the script. You can then visualize the latest retargeted motion (without performing the retargeting computations again) by:

  ```
  python play_retargeted_mocap.py --latest
  ```
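  If you want to post-process the saved motion outside the provided scripts, a minimal sketch along these lines can load it. The layout assumed in the comment is an assumption, not documented behavior, so verify it against the actual file produced by `retargeting.py`:

  ```python
  import numpy as np

  # Assumption: retargeted_motion.txt stores one whitespace-separated
  # frame per row (verify against the actual file layout).
  data = np.loadtxt("retargeted_motion.txt")
  print(f"{data.shape[0]} frames, {data.shape[1]} values per frame")
  ```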
- Visualize all the retargeted MoCap data included in our training dataset by varying the `--dataset`, `--portion` and `--mirrored` arguments. For instance, visualize the mirrored MoCap from portion n. 2 of dataset D2 by:

  ```
  python play_retargeted_mocap.py --dataset D2 --portion 2 --mirrored
  ```
- Compare WBGR and KFWBGR by visualizing, one after the other, the retargeted motions obtained using WBGR and KFWBGR on the same MoCap data (as in the supplementary video submitted with the paper) by:

  ```
  python play_retargeted_mocap_WBGR_vs_KFWBGR.py
  ```
- Extract the network I/O features from one of the retargeted motions included in our training dataset by varying the `--dataset`, `--portion` and `--mirrored` arguments of the dedicated script. The features are first computed in the global frame and then transformed into the local frame to be used by the network. Use `--plot_global` or `--plot_local` if you want to visualize the global or local features, respectively. For instance, extract the features from the mirrored portion n. 6 of dataset D3 and visualize them globally by:

  ```
  python features_extraction.py --dataset D3 --portion 6 --mirrored --plot_global
  ```
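  As a rough illustration of the global-to-local step mentioned above (a generic sketch, not ADHERENT's actual implementation), a planar feature expressed in the world frame can be re-expressed in a frame attached to the robot base:

  ```python
  import numpy as np

  def world_to_local(feature_xy, base_xy, base_yaw):
      """Express a 2D world-frame point in a frame attached to the robot
      base located at base_xy with heading base_yaw. Illustrative only."""
      c, s = np.cos(base_yaw), np.sin(base_yaw)
      rotation_inv = np.array([[c, s], [-s, c]])  # transpose of R(base_yaw)
      return rotation_inv @ (np.asarray(feature_xy) - np.asarray(base_xy))

  # A point 1 m ahead of a base rotated by 90 deg lies on the local x axis.
  print(world_to_local([0.0, 1.0], [0.0, 0.0], np.pi / 2))  # ~[1. 0.]
  ```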
- Launch a new training by:

  ```
  python training.py
  ```
- To monitor the training online via TensorBoard, check the `Savepath` printed on the terminal by the above script. Access the docker container from another terminal, reach the `scripts` folder (see Section 1 above) and then run:

  ```
  tensorboard --logdir <Savepath>
  ```

  Open in the browser the link returned in the terminal by the above command and wait for the first training data to be saved (this may require some time).
(Video: `interactive_trajectory_generation.mp4`)
- Access the docker container from three different terminals (see Section 1 above) and run:

  Terminal 1:

  ```
  yarpserver --write
  ```

  Terminal 2:

  ```
  python joystick.py --deactivate_bezier_plot
  ```

  Terminal 3:

  ```
  python trajectory_generation.py
  ```
- Once you see the simulated robot and the motion and facing directions coming from the joystick, press Enter to start the trajectory generation and interactively provide joystick inputs to the trajectory generator. A rough sketch of the kind of Bezier curve that `joystick.py` can plot from such inputs follows below.
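  For intuition only: the `--deactivate_bezier_plot` flag above suggests `joystick.py` draws a Bezier curve from the joystick input, but the exact control points it uses are not documented here. A generic quadratic Bezier from three (hypothetical) control points looks like:

  ```python
  import numpy as np

  def quadratic_bezier(p0, p1, p2, n=50):
      """Sample n points on the quadratic Bezier curve with control
      points p0, p1, p2."""
      t = np.linspace(0.0, 1.0, n)[:, None]
      return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

  # Hypothetical control points: origin, a midpoint biased by the joystick
  # deflection, and the commanded target position.
  curve = quadratic_bezier(np.array([0.0, 0.0]),
                           np.array([0.5, 0.2]),
                           np.array([1.0, 0.6]))
  print(curve.shape)  # (50, 2)
  ```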
- The generated trajectory resulting from the given joystick inputs will be visualized in the simulator. You can optionally use `--plot_trajectory_blending`, `--plot_footsteps` and `--plot_blending_coefficients` to visualize further details on the generated trajectory, but the additional plots will slow down the process.
- After 1000 inference steps (unless differently specified with `--save_every_N_iterations`), the files:

  - `blending_coefficients.txt`
  - `footsteps.txt`
  - `joystick_input.txt`
  - `postural.txt`

  will be stored in the `/adherent/datasets/inference` folder.
- You can visualize the generated trajectory (along with the corresponding joystick inputs, footsteps and blending coefficients) via:

  ```
  python play_generated_trajectory.py
  ```
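  If you prefer to inspect the saved files directly rather than through `play_generated_trajectory.py`, a minimal sketch along these lines can plot the blending coefficients, assuming (this is an assumption to verify) that the file stores one whitespace-separated row per inference step:

  ```python
  import numpy as np
  import matplotlib.pyplot as plt

  # Assumption: one row per inference step, one column per blending
  # coefficient (verify against the actual file layout).
  coeffs = np.loadtxt("../datasets/inference/blending_coefficients.txt")
  plt.plot(coeffs)
  plt.xlabel("inference step")
  plt.ylabel("blending coefficient")
  plt.show()
  ```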
(Video: `trajectory_control_in_simulation.mp4`)
- Access the docker container from four different terminals (see Section 1 above) and run:

  Terminal 1:

  ```
  yarpserver --write
  ```

  Terminal 2:

  ```
  gazebo /iit/sources/robotology-superbuild/src/icub-gazebo/icub_base_est/icub_world.sdf -slibgazebo_yarp_clock.so
  ```

  If the Gazebo simulation doesn't start automatically, press Play (an error will appear on the terminal until the simulation starts).

  ⚠️ Reduce the simulation RTF to 0.5 (or less) by setting the `Physics -> real_time_update_rate` parameter to 500 (or less; see the note after this list for how this value maps to the RTF), since joint velocity measurements are significantly noisy at high RTFs. If you skip this step, the trajectory control could fail due to the noisy measurements.

  Terminal 3:

  ```
  YARP_CLOCK=/clock yarprobotinterface --config launch-wholebodydynamics.xml
  ```

  Terminal 4:

  ```
  python trajectory_control.py
  ```
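  For reference: Gazebo's target real-time factor is the product `real_time_update_rate × max_step_size`, so assuming the default `max_step_size` of 0.001 s, an update rate of 500 targets an RTF of 500 × 0.001 = 0.5.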
- Inspect the original and scaled footsteps (according to the `--footstep_scaling` argument) while waiting for the trajectory optimization computations.
- When prompted on the terminal, press Enter to start the trajectory control.
- Once the trajectory control is over, data are saved in a dedicated folder within the `/adherent/datasets/trajectory_control_simulation` folder. Inspect the data of the latest controlled trajectory by:

  ```
  python plot_trajectory_control_data.py --<data_to_plot>
  ```

  where `<data_to_plot>` is `plot_CoM_ZMP_DCM`, `plot_feet_cartesian_tracking`, `plot_feet_wrenches`, `plot_joints_position_tracking` or `plot_joints_velocity_tracking`, depending on the data you want to inspect. If you want to inspect data related to other controlled trajectories, set `--data_path` accordingly, as in the example below.
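  For instance, to inspect the CoM/ZMP/DCM data of an earlier run (the folder name below is a placeholder for one of the dedicated folders mentioned above), you could run:

  ```
  python plot_trajectory_control_data.py --plot_CoM_ZMP_DCM --data_path ../datasets/trajectory_control_simulation/<your_run_folder>/
  ```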
- Reproduce the simulated results summarized in Fig. 5 in the paper (success/failures for different combinations of footstep and velocity scaling) by performing Section 6 above (Trajectory control in simulation) with varying parameters for the `python trajectory_control.py` script:

  - `--trajectory_path`:
    - use `../datasets/inference/experiments/1_forward/` for forward walking (Fig. 5, left)
    - use `../datasets/inference/experiments/6_backward/` for backward walking (Fig. 5, center)
    - use `../datasets/inference/experiments/4_left/` for side walking (Fig. 5, right)
  - `--time_scaling`: use integers `1`, `2`, `3` or `4` (this is the inverse of the velocity scaling in Fig. 5, which takes values in {1, 0.5, 0.33, 0.25})
  - `--footstep_scaling`: use `0.2`, `0.4`, `0.6`, `0.8` or `1.0`

  For instance, reproduce the {footstep_scaling=0.6, velocity_scaling=1.0} side walking by:

  ```
  python trajectory_control.py --trajectory_path ../datasets/inference/experiments/4_left/ --time_scaling 1 --footstep_scaling 0.6
  ```

  A sketch for sweeping the whole grid follows below.
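  To cover the full Fig. 5 grid rather than launching each combination by hand, a small driver along these lines (a sketch, not part of the repository) can sweep both parameters; note that each run still expects the interactive Enter press described in Section 6:

  ```python
  import subprocess

  # Sweep the Fig. 5 grid for side walking; change --trajectory_path for
  # the forward and backward cases.
  for time_scaling in [1, 2, 3, 4]:
      for footstep_scaling in [0.2, 0.4, 0.6, 0.8, 1.0]:
          subprocess.run([
              "python", "trajectory_control.py",
              "--trajectory_path", "../datasets/inference/experiments/4_left/",
              "--time_scaling", str(time_scaling),
              "--footstep_scaling", str(footstep_scaling),
          ], check=True)
  ```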
- Reproduce Fig. 6 in the paper (ADHERENT-generated vs. human-retargeted joint trajectories) by:

  ```
  python plot_Fig_6.py
  ```
- Reproduce Fig. 7 in the paper (ADHERENT Postural vs. Fixed Postural) by:

  ```
  python plot_Fig_7.py
  ```
- Reproduce Fig. 8 in the paper (blending coefficients activations) by:

  ```
  python plot_Fig_8.py
  ```
Type `exit` to exit the current terminal within the docker container and then run:

```
docker stop adherent
```

At this stage, you will have a stopped container, ready to be reactivated (Section 1 above) to follow this wiki again. If instead you want to clean the entire setup, please follow these steps.