As part of a university project, our task was to implement an agent in the CARLA simulator that autonomously collects image and label data to generate a dataset. This dataset can later be used to train a Deep Convolutional Neural Network that detects lane markings on a road.
Our results can be found on the project's GitHub page.
The overall project is split up into two parts:
- The first part covers how to create and generate a dataset. This is what this repository is used for.
- The second part covers the training and testing of a Deep Convolutional Neural Network that detects lane markings on a road.
This repository consists of two parts:
- Collect data in CARLA by executing `fast_lane_detection.py`
- Generate a dataset from the collected data with `dataset_generator.py`
For installation instructions, please read carla_setup.md.
Before doing anything, make sure that your CARLA Server is running. After that, do the following steps:
- Execute `fast_lane_detection.py`. This collects all the data in CARLA, saves the images as .npy (NumPy) files, and generates temporary label files, which are filtered later.
- Execute `dataset_generator.py`. All the raw images and labels are converted to .jpg images and .json labels. This script creates a `dataset` directory and places the processed files inside. It also balances the data.
- Execute `image_to_video.py`. After generating the dataset, the images can be converted to a video. This can be helpful if you want to check your images and labels for errors.
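For illustration, the per-frame .json labels produced in the second step might be written like this. This is only a minimal sketch: the function name, the `lanes` field, and the polyline format are assumptions, not the exact schema used by `dataset_generator.py`.

```python
import json
import os


def write_label(out_dir, frame_id, lanes):
    """Write one label file as JSON (illustrative sketch).

    `lanes` is assumed to be a list of lane polylines, each a list of
    (x, y) pixel coordinates. The real dataset_generator.py may use a
    different schema and naming scheme.
    """
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, f"{frame_id:06d}.json")
    with open(path, "w") as f:
        json.dump({"lanes": lanes}, f)
    return path
```

Pairing labels with images by a zero-padded frame id keeps files sorted in capture order, which is convenient when checking them later with `image_to_video.py`.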
For further details, please refer to the full documentation in the /docs directory or the hosted version.
Thanks to sagnibak for figuring out how to efficiently save image data in CARLA as .npy files. Without his work, solving this problem would have been much more time-consuming. For more information, refer to his self-driving car project on GitHub: https://github.com/sagnibak/self-driving-car
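The core idea behind that approach can be sketched as follows: instead of encoding and writing every camera frame as an individual image file, frames are buffered in memory and dumped in one batch to a single .npy file. The class below is an illustrative sketch under that assumption, not the actual implementation used here or in sagnibak's project.

```python
import numpy as np


class FrameBuffer:
    """Accumulate camera frames in memory and flush them to one .npy file.

    Saving one batched .npy file avoids the per-image encoding overhead
    of writing thousands of separate .png/.jpg files during collection.
    (Illustrative sketch; names and structure are assumptions.)
    """

    def __init__(self, path):
        self.path = path      # target .npy file
        self.frames = []      # raw HxWxC uint8 arrays

    def add(self, frame):
        self.frames.append(frame)

    def flush(self):
        # Stack to a single (N, H, W, C) array and write it in one go.
        np.save(self.path, np.stack(self.frames))
        self.frames = []
```

The saved batch can later be read back with a single `np.load` call and converted to .jpg images offline, which is essentially what the dataset-generation step does.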