This repository contains code for the first task of the ROAD-R Challenge. The code is built on top of 3D-RetinaNet for ROAD.
The first task requires developing models for scenarios in which only a small amount of annotated data is available at training time. More precisely, only 3 of the 15 videos from the training partition train_1 of the ROAD-R dataset are used to train the models for this task. The video ids are: 2014-07-14-14-49-50_stereo_centre_01, 2015-02-03-19-43-11_stereo_centre_04, and 2015-02-24-12-32-19_stereo_centre_04.
For the dataset preparation and packages required to train the models, please see the Requirements section from 3D-RetinaNet for ROAD.
To download the pretrained weights, please see the end of the Performance section from 3D-RetinaNet for ROAD.
A sample video illustrating the full ROAD data annotation is also provided.
Go through this repository to download all of the necessary dataset files.
- In `ROAD-R-2023-Challenge/data/datasets.py`, change every occurrence of the datatype `np.int` to `int` to get rid of the `AttributeError` raised by newer NumPy releases (see the first sketch after this list).
- To avoid the `UserWarning`, change lines 368-371 of `ROAD-R-2023-Challenge/modules/box_utils.py` into the following (context in the second sketch after this list):

  ```python
  xx1 = x1.index_select(0, idx)
  yy1 = y1.index_select(0, idx)
  xx2 = x2.index_select(0, idx)
  yy2 = y2.index_select(0, idx)
  ```
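
As a minimal sketch of the first change, assuming `data/datasets.py` builds integer arrays with the removed NumPy alias `np.int` (dropped in NumPy 1.24); the variable names below are hypothetical:

```python
import numpy as np

# Hypothetical label list standing in for the arrays built in data/datasets.py.
frame_labels = [0, 3, 7]

# Before (raises AttributeError on NumPy >= 1.24, where the np.int alias was removed):
# labels = np.asarray(frame_labels, dtype=np.int)

# After: use the builtin int, which NumPy maps to its default integer dtype.
labels = np.asarray(frame_labels, dtype=int)
print(labels.dtype)  # e.g. int64
```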
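
For the second change, a self-contained sketch of the non in-place gather: the warning presumably comes from the `torch.index_select(..., out=...)` form used upstream, and calling `Tensor.index_select` instead returns fresh tensors of the right size. All tensor names and shapes below are hypothetical stand-ins for the ones in the NMS routine of `modules/box_utils.py`:

```python
import torch

boxes = torch.rand(100, 4)                  # hypothetical decoded boxes (x1, y1, x2, y2)
scores = torch.rand(100)                    # hypothetical confidence scores
x1, y1, x2, y2 = boxes.unbind(dim=1)

_, order = scores.sort(0, descending=True)  # indices sorted by score, highest first
idx = order[1:]                             # remaining candidates after the top-scoring box

# Each index_select call returns a new tensor, so nothing has to be written
# into a preallocated out= buffer and no resize UserWarning is emitted.
xx1 = x1.index_select(0, idx)
yy1 = y1.index_select(0, idx)
xx2 = x2.index_select(0, idx)
yy2 = y2.index_select(0, idx)
print(xx1.shape)  # torch.Size([99])
```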