News:
09.05.2024: We released the code to reproduce the results in the paper.
This repository provides code for the following paper:
Modeling Drivers’ Situational Awareness from Eye Gaze for Driving Assistance, CoRL 2024, by Abhijat Biswas, Pranay Gupta, Shreeya Khurana, David Held, and Henny Admoni
If you want to use our data, follow the steps in this section. If you want to generate your own data, skip to DReyeVR Setup.
The raw data (images, gaze, button presses, etc.) can be downloaded from Box here. The data is provided as part zip files, which must be concatenated after downloading:
cat raw_data_test_part_files/part*.zip > raw_data_test.zip
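After concatenation, the archive can be extracted with any standard unzip tool. As a minimal sketch, one way to verify and extract it in Python (the output directory name is an assumption):

import zipfile

# Verify and extract the concatenated archive.
# The output directory name "raw_data_test" is an assumption; use any path you prefer.
with zipfile.ZipFile("raw_data_test.zip") as zf:
    bad_member = zf.testzip()  # name of the first corrupt member, or None
    if bad_member is not None:
        raise RuntimeError(f"Corrupt member in archive: {bad_member}")
    zf.extractall("raw_data_test")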
You can see examples of how the various data modalities are used via the SituationalAwarenessDataset class in the dataloader file data/dataset_full.py.
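As a minimal sketch of how the dataset class might be used once the repository is set up (the constructor argument and the availability of PyTorch are assumptions; check the class definition in data/dataset_full.py for the actual signature):

from torch.utils.data import DataLoader
from data.dataset_full import SituationalAwarenessDataset

# Hypothetical constructor argument; consult the class definition for the real signature.
dataset = SituationalAwarenessDataset("path/to/raw_data_test")
loader = DataLoader(dataset, batch_size=4, shuffle=True)

for batch in loader:
    # Each item bundles the synchronized modalities (images, gaze, button presses, ...).
    print(type(batch))
    break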
To generate your own data, you must first set up the DReyeVR simulator. To learn more about DReyeVR, visit the DReyeVR repo; installation and setup instructions can be found here.
In order to parse the data, you will need to set up DReyeVR in a conda environment. Instructions for this can be found here.
Scripts related to data parsing can be found in the DReyeVR-parser repo.
In particular, post_process_participant.sh (found here) will extract the awareness data for the recording files, perform a replay to get the sensor data, calculate any offset (read more here), and produce frames with gaze and button-press overlays.
First, install the conda environment:
git clone https://github.com/HARPLab/DriverSA.git
cd DriverSA
conda env create -f environment.yml
conda activate sit_aw_env
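As an optional sanity check (assuming the data directory is importable as a package from the repo root), you can confirm the environment sees the dataset class:

# Run from the DriverSA repo root with sit_aw_env activated.
from data.dataset_full import SituationalAwarenessDataset
print(SituationalAwarenessDataset.__name__)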
Model weights and baseline evaluation code coming soon.
@inproceedings{biswasmodeling,
  title     = {Modeling Drivers’ Situational Awareness from Eye Gaze for Driving Assistance},
  author    = {Biswas, Abhijat and Gupta, Pranay and Khurana, Shreeya and Held, David and Admoni, Henny},
  booktitle = {8th Annual Conference on Robot Learning},
  year      = {2024}
}