If this code helps with your work, please cite:
```bibtex
@incollection{yang2020group,
  title={Group behavior recognition using attention- and graph-based neural networks},
  author={Yang, Fangkai and Yin, Wenjie and Inamura, Tetsunari and Bj{\"o}rkman, M{\aa}rten and Peters, Christopher},
  booktitle={ECAI 2020},
  pages={1626--1633},
  year={2020},
  publisher={IOS Press}
}
```
CongreG8: A mocap dataset for proxemics and social robotics (not public yet). Recorded in a motion capture lab with an active capture volume of approximately 5m × 5m × 3m, equipped with a NaturalPoint OptiTrack system with 16 Prime 41 cameras.
Scenario: a game called Who's the Spy, played by three group players and one observer. The observer approaches and joins the group.
Details (a data-layout sketch follows the list):
- 380 trials of a human approaching a group
- Full-body motion capture data for all players (37 markers each)
- Trial duration: 2–6 seconds
- Frame rate: 120 fps
- Behaviors: Accommodate and Ignore
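For concreteness, here is a minimal sketch of how one trial could be packed into the tensor layout that ST-GCN-style models conventionally consume. The dataset is not public yet, so the raw array shape and the function name here are illustrative assumptions, not the repository's actual loader API.

```python
import numpy as np

# Hypothetical illustration of the trial layout described above. The
# raw (T, M, V, C) shape and the (C, T, V, M) target layout are
# assumptions following the ST-GCN convention.

FPS = 120          # capture frame rate
NUM_MARKERS = 37   # full-body marker set per player (V)
NUM_PLAYERS = 4    # 3 group players + 1 observer (M)

def trial_to_tensor(trial: np.ndarray) -> np.ndarray:
    """Reorder a raw trial of shape (T, M, V, C) -- frames, players,
    markers, xyz -- into the (C, T, V, M) layout that ST-GCN-style
    models typically expect."""
    assert trial.shape[1:] == (NUM_PLAYERS, NUM_MARKERS, 3)
    return trial.transpose(3, 0, 2, 1)

# Example: one 4-second trial (within the 2-6 s range above)
dummy = np.zeros((4 * FPS, NUM_PLAYERS, NUM_MARKERS, 3))
print(trial_to_tensor(dummy).shape)  # (3, 480, 37, 4)
```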
The model builds on Spatial-Temporal Graph Convolutional Networks (ST-GCN), extended to multiple temporal scales. The resulting Approach Group GCN (AG-GCN) analyzes group behavior at two levels: a Multi-Spatial-Temporal GCN (MST-GCN) on the individual level and a Group GCN on the group level. A minimal sketch of this two-level idea follows.
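The sketch below illustrates the two-level structure in PyTorch: per-person spatial graph convolution plus parallel temporal convolutions at several scales, then person-level pooling and a graph layer over the group. All layer names, kernel sizes, and channel widths are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MSTBlock(nn.Module):
    """Individual level: spatial graph conv over the skeleton adjacency,
    then parallel temporal convs at multiple scales (the 'multi' in MST)."""
    def __init__(self, in_c, out_c, A, scales=(3, 9, 15)):
        super().__init__()
        assert out_c % len(scales) == 0
        self.register_buffer("A", A)                  # (V, V) skeleton graph
        self.spatial = nn.Conv2d(in_c, out_c, 1)
        self.temporal = nn.ModuleList(
            nn.Conv2d(out_c, out_c // len(scales), (k, 1), padding=(k // 2, 0))
            for k in scales
        )
        self.relu = nn.ReLU()

    def forward(self, x):                             # x: (N*M, C, T, V)
        x = torch.einsum("nctv,vw->nctw", x, self.A)  # propagate over joints
        x = self.relu(self.spatial(x))
        return self.relu(torch.cat([t(x) for t in self.temporal], dim=1))

class AGGCNSketch(nn.Module):
    """Group level: pool each person's spatio-temporal features, mix them
    over a person-to-person graph, then classify accommodate vs. ignore."""
    def __init__(self, A, num_persons=4, in_c=3, hid=48, classes=2):
        super().__init__()
        self.individual = MSTBlock(in_c, hid, A)
        G = torch.ones(num_persons, num_persons) / num_persons
        self.register_buffer("G", G)                  # person graph (placeholder)
        self.group = nn.Linear(hid, hid)
        self.head = nn.Linear(hid, classes)

    def forward(self, x):                             # x: (N, C, T, V, M)
        N, C, T, V, M = x.shape
        x = x.permute(0, 4, 1, 2, 3).reshape(N * M, C, T, V)
        f = self.individual(x).mean(dim=(2, 3))      # pool over T, V
        f = f.view(N, M, -1)
        f = torch.relu(self.group(self.G @ f))       # mix persons
        return self.head(f.mean(dim=1))              # (N, classes)

# Usage: A would be a normalized 37x37 skeleton adjacency; identity is a
# placeholder here.
model = AGGCNSketch(torch.eye(37))
logits = model(torch.randn(2, 3, 480, 37, 4))        # (2, 2)
```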
A use case in a virtual environment with live classification of the accommodate and ignore behaviors; a sliding-window inference sketch follows.
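One plausible way to run such live classification is a sliding window over the incoming frame stream. In this sketch, `model` is an AGGCNSketch-style network from the previous block and `next_frame` is a stand-in for the mocap/VR stream; both names are assumptions, not the repository's actual interface.

```python
import collections
import torch

def classify_stream(model, next_frame, fps=120, seconds=2):
    """Yield a label for each full window of the most recent frames."""
    labels = ["accommodate", "ignore"]
    window = fps * seconds                 # shortest trial length above
    buf = collections.deque(maxlen=window)
    model.eval()
    while True:
        frame = next_frame()               # one frame: (C=3, V=37, M=4)
        if frame is None:                  # stream ended
            return
        buf.append(frame)
        if len(buf) == window:
            # stack frames to (1, C, T, V, M) and classify the window
            x = torch.stack(list(buf), dim=1).unsqueeze(0)
            with torch.no_grad():
                yield labels[model(x).argmax(dim=1).item()]
```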
This code was built upon the ST-GCN codebase and various recognition codebases. Many thanks to those authors for making their code available!
This research has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 824160 (EnTimeMent).