EyeTrack2Scene is a Python-based tool that maps eye-tracking data onto the movie frames subjects were viewing. By integrating video segmentation from a panoptic model, it enables precise analysis of where viewers focus during film playback. This repository incorporates a collaborative panoptic model developed by [vant7e](https://github.com/vant7e), which significantly improves video segmentation accuracy. Currently, the tool supports eye-tracking data in the ASCII format exported from [EyeLink 1000](https://www.sr-research.com/software/) systems.
The eye-tracking data provides spatial coordinates from fixation events (denoted as EFIX), indicating where the participant is focusing. These gaze points are mapped onto segmentation masks generated by the panoptic model for the corresponding time frame. This mapping enables the identification and segmentation of the specific region or object in the movie frame that the participant is likely focusing on.
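As a rough illustration of the first step, fixation events can be pulled out of an EyeLink ASCII export by matching `EFIX` lines. The exact column layout can vary with export settings, so the pattern below is a hedged sketch, not the tool's actual parser; the sample line and field order (eye, start time, end time, duration, average x, average y, pupil size) reflect a typical export:

```python
import re

# Sketch: parse EFIX fixation events from an EyeLink ASCII (.asc) export.
# A typical EFIX line looks like (layout may differ per export settings):
#   EFIX R   2340050  2340250  200    512.3   384.7   1032
EFIX_RE = re.compile(
    r"EFIX\s+([LR])\s+(\d+)\s+(\d+)\s+(\d+)\s+([\d.]+)\s+([\d.]+)\s+([\d.]+)"
)

def parse_efix_events(lines):
    """Yield (eye, start_ms, end_ms, avg_x, avg_y) for each EFIX event."""
    for line in lines:
        m = EFIX_RE.search(line)
        if m:
            eye, start, end, _dur, x, y, _pupil = m.groups()
            yield eye, int(start), int(end), float(x), float(y)

sample = [
    "SFIX R 2340050",
    "EFIX R   2340050  2340250  200    512.3   384.7   1032",
]
events = list(parse_efix_events(sample))
print(events)  # [('R', 2340050, 2340250, 512.3, 384.7)]
```

The fixation timestamps can then be aligned with movie frame timestamps to select the segmentation mask for the matching frame.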
- Eye-Tracking Integration: Maps raw eye-tracking data to specific regions in movie frames.
- Panoptic Video Segmentation: Leverages state-of-the-art panoptic segmentation to identify meaningful regions in video scenes.
- Collaborative Contribution: Incorporates the panoptic model developed by [vant7e](https://github.com/vant7e).
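The mapping from a fixation to a scene region reduces to a per-frame mask lookup: the panoptic model yields an H×W array of segment IDs, and the gaze coordinate indexes into it. The function name and mask representation below are illustrative assumptions, not the repository's actual API:

```python
import numpy as np

# Sketch: look up which panoptic segment lies under a gaze point.
# `mask` is an HxW array of integer segment IDs for one movie frame.
def gaze_to_segment(mask, x, y):
    """Return the segment ID under gaze point (x, y), or None if off-frame."""
    h, w = mask.shape
    col, row = int(round(x)), int(round(y))
    if not (0 <= col < w and 0 <= row < h):
        return None  # fixation fell outside the frame
    return int(mask[row, col])

# Toy 4x4 mask: left half is segment 1, right half segment 2.
mask = np.array([[1, 1, 2, 2]] * 4)
print(gaze_to_segment(mask, 3.2, 1.0))   # 2
print(gaze_to_segment(mask, 10.0, 1.0))  # None (off-frame)
```

In practice the gaze coordinates must first be transformed from tracker screen space into the video's pixel coordinates (accounting for display resolution and any letterboxing) before the lookup.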
- Clone this repository:

  ```bash
  git clone https://github.com/yourusername/EyeTrack2Scene.git
  cd EyeTrack2Scene
  ```
Feedback and contributions are welcome! Feel free to open an issue or submit a pull request.
This project is licensed under the MIT License.
Special thanks to [vant7e](https://github.com/vant7e) for their work on the panoptic model that forms the backbone of the video segmentation in this tool.