TrackEval now contains the official evaluation code for the task of Open World Tracking.
This is the official code from the following paper:
Opening up Open-World Tracking
Yang Liu*, Idil Esen Zulfikar*, Jonathon Luiten*, Achal Dave*, Deva Ramanan, Bastian Leibe, Aljoša Ošep, Laura Leal-Taixé
*Equal contribution
CVPR 2022
The code can be run using the following script (see the script itself for the arguments and how to run it): TAO-OW run script
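For orientation, here is a minimal sketch of invoking the evaluation programmatically, following the Evaluator/dataset/metrics pattern used elsewhere in TrackEval; the exact class and config names are assumptions, so refer to the run script for the authoritative setup:

```python
# Minimal sketch of a programmatic evaluation run (names are assumptions;
# see the TAO-OW run script for the authoritative arguments).
import trackeval

eval_config = trackeval.Evaluator.get_default_eval_config()
dataset_config = trackeval.datasets.TAO_OW.get_default_dataset_config()
dataset_config['TRACKERS_TO_EVAL'] = ['my_tracker']  # hypothetical tracker folder name

evaluator = trackeval.Evaluator(eval_config)
dataset_list = [trackeval.datasets.TAO_OW(dataset_config)]
metrics_list = [trackeval.metrics.HOTA()]  # OWTA is derived from HOTA sub-metrics
evaluator.evaluate(dataset_list, metrics_list)
```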
To understand how the data is read and used, see the TAO-OW dataset class: TAO-OW dataset class
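To see which paths and options the dataset class expects (ground-truth folder, tracker results folder, split, and so on), it can help to print its default config; the key names mentioned in the comment below are assumptions based on the conventions of the other dataset classes in this repository:

```python
# Print the dataset's default config to see where data is read from.
import trackeval

config = trackeval.datasets.TAO_OW.get_default_dataset_config()
for key, value in config.items():
    print(f'{key}: {value}')  # e.g. GT_FOLDER, TRACKERS_FOLDER (assumed key names)
```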
The implementation of the 'Open World Tracking Accuracy' (OWTA) metric proposed in the paper can be found here: OWTA metric
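The paper defines OWTA at each localization threshold α as the geometric mean of detection recall (DetRe) and association accuracy (AssA), averaged over thresholds for the final score. The following is a minimal sketch of that combination step with hypothetical inputs, not the actual implementation linked above:

```python
# Sketch of the OWTA combination step: the geometric mean of detection
# recall and association accuracy per localization threshold, averaged.
import numpy as np

def owta(det_re, ass_a):
    """det_re, ass_a: per-threshold arrays of HOTA sub-metrics (assumed inputs)."""
    return np.sqrt(det_re * ass_a).mean()

# Hypothetical per-threshold values for thresholds 0.05, 0.10, ..., 0.95:
det_re = np.linspace(0.9, 0.5, 19)
ass_a = np.linspace(0.8, 0.4, 19)
print(f'OWTA: {owta(det_re, ass_a):.3f}')
```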
If you use the code or the benchmark, please cite:
Opening up Open-World Tracking
@inproceedings{liu2022opening,
  title={Opening up Open-World Tracking},
  author={Liu, Yang and Zulfikar, Idil Esen and Luiten, Jonathon and Dave, Achal and Ramanan, Deva and Leibe, Bastian and O{\v{s}}ep, Aljo{\v{s}}a and Leal-Taix{\'e}, Laura},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2022}
}
TrackEval
@misc{luiten2020trackeval,
  author = {Jonathon Luiten and Arne Hoffhues},
  title = {TrackEval},
  howpublished = {\url{https://github.com/JonathonLuiten/TrackEval}},
  year = {2020}
}