This is a research project conducted by Jiahe Pan at the University of Melbourne, Australia, under the supervision of Jonathan Eden, Denny Oetomo and Wafa Johal. We extend the findings of our previous paper, which investigated the effects of the robot's level of autonomy on a human teleoperator's cognitive load and trust. In this study, building on the previous work's shared-control teleoperated trajectory-tracking setup, we introduce noise into the robot's autonomous controller during the tracking task. We use the Franka Emika robot arm and the Novint Falcon haptic device for the primary trajectory-tracking task, and Tobii eye trackers for one of the cognitive load measures. Experiments were conducted with 24 participants.
- Project site: [TODO]
- Demo video: [TODO]
A laptop with Ubuntu 22.04 and ROS2 (Humble) installed is required. The `ros2_ws` workspace contains the following two ROS packages:
- `cpp_pubsub`
- `tutorial_interfaces`
The `cpp_pubsub` package contains the code files for the primary task, including receiving information from and sending control commands to the robot, data-logging scripts, and rendering the task scene in RViz. Specifically, it contains the following sub-folders:
| Folder | Description |
|---|---|
| `/cpp_pubsub` | Contains package files, including useful functions to generate the trajectories, parameters to run experiments, and the definition of the `DataLogger` Python class. |
| `/data_logging/csv_logs` | Contains the raw data (`.csv` format) collected from all participants, including a header file for each participant with the calculated task performances for each trial condition. |
| `/launch` | Contains ROS launch files to run the nodes defined in the `/src` folder, including launching the controller with both the Gazebo simulator and the real robot, and starting the RViz rendering of the task. |
| `/robot_noise` | Contains the trajectory noise `.csv` files (located in the `/noise_csv_files` folder), and the scripts to generate the noise files and visualize the noise overlaid onto each of the reference trajectories (see the sketch after this table). |
| `/scripts` | Contains the definition of the `TrajRecorder` Python class, used for receiving and saving control commands and robot poses into temporary data structures before logging the data to `.csv` files using a `DataLogger` instance. |
| `/src` | Contains C++ source code for the ROS nodes used, including class definitions of the `GazeboController` and `RealController` for controlling the robot in simulation and the real world respectively, the `PositionTalker` for reading the position of the Falcon joystick, and the `MarkerPublisher` for publishing visualization markers into the RViz rendering. |
| `/urdf` | Contains an auto-generated URDF file of the Franka Emika robot arm. |
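The exact noise-generation procedure lives in the scripts inside `/robot_noise`; as a rough illustration of the idea only, the sketch below generates smooth positional noise and overlays it on a reference trajectory. The file name, noise amplitude, and smoothing window here are illustrative assumptions, not the values used in the actual scripts.

```python
# Hedged sketch: generate smooth random noise and overlay it on a
# reference trajectory. Amplitude, smoothing length, and file names
# are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=0)

t = np.linspace(0.0, 10.0, 1000)               # 10 s trial sampled at 100 Hz
reference = 0.1 * np.sin(2 * np.pi * 0.2 * t)  # example reference trajectory (m)

# White noise smoothed with a moving average, so the perturbation is
# physically plausible for a robot end-effector.
white = rng.normal(scale=0.01, size=t.size)
kernel = np.ones(50) / 50
noise = np.convolve(white, kernel, mode="same")

np.savetxt("example_noise.csv", noise, delimiter=",")  # hypothetical output file

plt.plot(t, reference, label="reference")
plt.plot(t, reference + noise, label="reference + noise")
plt.xlabel("time (s)")
plt.ylabel("position (m)")
plt.legend()
plt.show()
```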
The `tutorial_interfaces` package contains custom ROS message and service definitions. Specifically, there are two custom `msg` interfaces (in the `/msg` directory) defined for communication and data logging:
| Msg | Description |
|---|---|
| `Falconpos.msg` | A simple definition of a 3D coordinate in Euclidean space. Attributes: `x`, `y`, `z` |
| `PosInfo.msg` | A definition of the state vector of the system at a given timestamp. Attributes: `ref_position[]`, `human_position[]`, `robot_position[]`, `tcp_position[]`, `time_from_start` |
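As a quick illustration of how such an interface can be used from Python, the hedged sketch below publishes a `PosInfo` message from a minimal `rclpy` node. The node name, topic name, placeholder values, and the assumption that the fields are float arrays plus a float timestamp are all made up for the example.

```python
# Hedged sketch: publishing a PosInfo message from a minimal rclpy node.
# Topic name, node name, field types, and values are illustrative assumptions.
import rclpy
from rclpy.node import Node
from tutorial_interfaces.msg import PosInfo


class PosInfoPublisher(Node):
    def __init__(self):
        super().__init__("pos_info_publisher")
        self.pub = self.create_publisher(PosInfo, "pos_info", 10)
        self.timer = self.create_timer(0.01, self.publish_state)  # 100 Hz
        self.start = self.get_clock().now()

    def publish_state(self):
        msg = PosInfo()
        msg.ref_position = [0.4, 0.0, 0.3]     # placeholder reference position
        msg.human_position = [0.41, 0.0, 0.3]  # placeholder human command
        msg.robot_position = [0.4, 0.01, 0.3]  # placeholder autonomous command
        msg.tcp_position = [0.4, 0.0, 0.31]    # placeholder measured TCP position
        msg.time_from_start = (self.get_clock().now() - self.start).nanoseconds * 1e-9
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(PosInfoPublisher())


if __name__ == "__main__":
    main()
```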
The eye-tracking implementation uses a laptop with Windows 11 installed.
In order to use the Tobii eye-tracker, first install the required SDK from PyPI:

```
pip install tobii-research
```
Then, it can be imported into Python as:

```python
import tobii_research as tr
```
To connect to and receive information from the eye-tracker, first define a gaze-data callback, then subscribe to the gaze-data stream (the callback must be defined before it is passed to `subscribe_to`):

```python
# Basic definition of the callback function
def gaze_data_callback(gaze_data):
    left_eye_gaze = gaze_data['left_gaze_point_on_display_area']
    right_eye_gaze = gaze_data['right_gaze_point_on_display_area']
    left_pupil_diameter = gaze_data['left_pupil_diameter']
    right_pupil_diameter = gaze_data['right_pupil_diameter']

found_eyetrackers = tr.find_all_eyetrackers()
my_eyetracker = found_eyetrackers[0]
my_eyetracker.subscribe_to(tr.EYETRACKER_GAZE_DATA,
                           gaze_data_callback,
                           as_dictionary=True)
```
To disconnect:

```python
my_eyetracker.unsubscribe_from(tr.EYETRACKER_GAZE_DATA, gaze_data_callback)
```
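Putting these pieces together, a minimal end-to-end sketch might record pupil diameters for a fixed duration and dump them to a CSV file. The 10-second duration and the output file name below are illustrative assumptions, not values from the actual experiment code.

```python
# Hedged sketch: record pupil diameters for a fixed duration and save them.
# The 10 s duration and output file name are illustrative assumptions.
import csv
import time
import tobii_research as tr

samples = []

def gaze_data_callback(gaze_data):
    # Store one row per gaze sample: (timestamp, left pupil, right pupil).
    samples.append((time.time(),
                    gaze_data['left_pupil_diameter'],
                    gaze_data['right_pupil_diameter']))

my_eyetracker = tr.find_all_eyetrackers()[0]
my_eyetracker.subscribe_to(tr.EYETRACKER_GAZE_DATA,
                           gaze_data_callback,
                           as_dictionary=True)
time.sleep(10.0)  # record for 10 seconds
my_eyetracker.unsubscribe_from(tr.EYETRACKER_GAZE_DATA, gaze_data_callback)

with open("pupil_samples.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "left_pupil_mm", "right_pupil_mm"])
    writer.writerows(samples)
```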
For more implementation details, please refer to `rhythm_method.py` located in the `/windows/secondary task/` directory.
The secondary-task implementation likewise uses a laptop with Windows 11 installed.
The dual-task method is adopted in this experiment to capture cognitive load objectively, in addition to using pupil diameter. Specifically, "The Rhythm Method" is employed as the secondary task, which involves rhythmic tapping at a given tempo.
The rhythm files are in `.wav` format and can be generated with any audio software (e.g., GarageBand on iOS devices).
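Alternatively, a click track can be synthesized programmatically. The sketch below uses only the Python standard library; the tempo, number of beats, and tone settings are arbitrary choices for illustration.

```python
# Hedged sketch: synthesize a simple click track as a .wav file using only
# the standard library. Tempo, duration, and tone settings are arbitrary.
import math
import struct
import wave

SAMPLE_RATE = 44100
BPM = 100                      # tempo of the rhythm
BEATS = 16                     # total number of clicks
CLICK_LEN = 0.05               # each click lasts 50 ms
BEAT_PERIOD = 60.0 / BPM

frames = []
total_samples = int(SAMPLE_RATE * BEAT_PERIOD * BEATS)
for n in range(total_samples):
    t = n / SAMPLE_RATE
    # A click is a short 1 kHz tone at the start of each beat period.
    in_click = (t % BEAT_PERIOD) < CLICK_LEN
    value = int(20000 * math.sin(2 * math.pi * 1000 * t)) if in_click else 0
    frames.append(struct.pack("<h", value))

with wave.open("rhythm.wav", "w") as wav:
    wav.setnchannels(1)        # mono
    wav.setsampwidth(2)        # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(b"".join(frames))
```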
The implementation of the method in this project uses the following Python libraries:

```python
from keyboard import read_key    # to record key taps on the keyboard
from playsound import playsound  # to play the rhythm for participants' reference
```

which can be installed via PyPI:

```
pip install keyboard playsound
```
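As a rough idea of how the tapping task can be driven with these two libraries, here is a hedged sketch that plays the reference rhythm and then records the timestamp of each tap. The tap key, tap count, and rhythm file name are illustrative assumptions; `keyboard.read_event` is used instead of `read_key` so that only key-down events are counted, not releases.

```python
# Hedged sketch: play a reference rhythm, then record tap timestamps.
# The tap key, tap count, and rhythm file name are illustrative assumptions.
import time
import keyboard
from playsound import playsound

playsound("rhythm.wav")  # play the reference rhythm first (blocking call)

tap_times = []
while len(tap_times) < 20:  # collect 20 taps
    event = keyboard.read_event()
    if event.event_type == keyboard.KEY_DOWN and event.name == "space":
        tap_times.append(time.time())

# Inter-tap intervals are the differences between consecutive tap times.
intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
print(intervals)
```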
For more implementation details, please refer to `rhythm_method.py` located in the `/windows/secondary task/` directory. Note that on Linux the `keyboard` library requires root privileges, so the script needs to be run with:

```
sudo python3 rhythm_method.py
```
The data logged throughout each experimental session are written to `.csv` files. These include both the main measures of interest and the demographic information collected at the start of each session. Results from both the previous study and the current study are included in this repository. The pre-processing scripts and the resulting processed `.csv` files are located in sub-folders within the `/study1_results` and `/study2_results` directories. The measures are summarized below:
| Measure | Description |
|---|---|
| Trajectory Tracking Error | Root-mean-square error (RMSE, cm) between the reference and recorded trajectories of each trial |
| Rhythm Tapping Error | Normalized percentage error (%) of the inter-tap interval lengths, relative to the participant's baseline |
| Pupil Diameter | Pupil diameter (mm) for the left and right eyes individually, and averaged across both |
| Perceived Autonomy | Participants' perceived level of robot autonomy, rated on a 10-point Likert scale |
| Perceived Trust | Participants' self-reported trust, rated on a 10-point Likert scale |
| NASA-TLX | Self-reported cognitive load across all 6 subscales of the NASA-TLX questionnaire |
| MDMT | Self-reported trust across all 8 dimensions of the MDMT questionnaire |
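To make the two objective error measures concrete, here is a hedged sketch of how they can be computed from logged arrays. The exact formulas used in the pre-processing scripts may differ, e.g. in how the baseline interval is defined.

```python
# Hedged sketch: computing the two objective error measures from logged
# arrays. The baseline definition here is an illustrative assumption.
import numpy as np

def tracking_rmse_cm(reference, recorded):
    """RMSE (cm) between reference and recorded trajectories, both (N, 3) in metres."""
    errors = np.linalg.norm(reference - recorded, axis=1)
    return 100.0 * np.sqrt(np.mean(errors ** 2))

def rhythm_tapping_error(tap_times, baseline_interval):
    """Mean absolute inter-tap-interval error (%) relative to a baseline interval (s)."""
    intervals = np.diff(tap_times)
    return 100.0 * np.mean(np.abs(intervals - baseline_interval) / baseline_interval)
```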
Within each of the study folders, there are two `csv` files:

- `all_data.csv`: contains data across all autonomy levels
- `sliced_data.csv`: contains data for only the high and low autonomy levels
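For example, assuming the files have an `autonomy` column holding the condition labels (a guess at the actual schema, which may differ), the sliced file could be reproduced from the full one with pandas:

```python
# Hedged sketch: deriving sliced_data.csv from all_data.csv with pandas.
# The column name "autonomy" and its labels are assumptions about the schema.
import pandas as pd

df = pd.read_csv("all_data.csv")
sliced = df[df["autonomy"].isin(["high", "low"])]
sliced.to_csv("sliced_data.csv", index=False)
```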
We also combined the results from both studies to explore the effect of noisy robot behavior on our measures of interest. These are included in the `/studies_combined` folder, similarly stored in separate `csv` files for either all autonomy levels or only the high and low levels.
For the exact data pre-processing steps, refer to `order.txt` located in the `/study2_results` folder.
The data analysis was performed in RStudio, leveraging existing libraries in the R programming language. All R scripts used for analyzing the results of this study are located in the `/R_Analysis` directory, which has the following two sub-folders:
- `/interactions`: Analysis of correlations and interaction effects between the measures and autonomy conditions.
- `/main_measures`: Individual analysis of autonomy's effect on each of the measures using ANOVAs. The `/all_levels` and `/two_levels` sub-folders contain the R scripts for analyzing across all autonomy levels and across only the high and low levels, respectively.
Plots are also generated in R, and the plotting code is embedded within the above R scripts.
As previously mentioned, the combined data from both the previous study and this study underwent further analysis. The corresponding R scripts are located in the `/combined_R_Analysis` folder.
The manuscript and supplementary video can be found on TODO. If you find our work useful, please consider citing it using:
TODO