
# Ego-Lane Analysis System

Rodrigo F. Berriel, Edilson de Aguiar, Alberto F. de Souza, and Thiago Oliveira-Santos

Published in *Image and Vision Computing*: [10.1016/J.IMAVIS.2017.07.005](https://doi.org/10.1016/J.IMAVIS.2017.07.005)

*Graphical abstract (figure not shown).*

## Abstract

Decreasing costs of vision sensors and advances in embedded hardware boosted lane-related research – detection, estimation, tracking, etc. – in the past two decades. The interest in this topic has increased even more with the demand for advanced driver assistance systems (ADAS) and self-driving cars. Although extensively studied independently, there is still a need for studies that propose a combined solution for the multiple problems related to the ego-lane, such as lane departure warning (LDW), lane change detection, lane marking type (LMT) classification, road markings detection and classification, and detection of the presence of adjacent lanes (i.e., the immediate left and right lanes). In this paper, we propose a real-time Ego-Lane Analysis System (ELAS) capable of estimating ego-lane position, classifying LMTs and road markings, performing LDW and detecting lane change events. The proposed vision-based system works on a temporal sequence of images. Lane marking features are extracted in perspective and Inverse Perspective Mapping (IPM) images that are combined to increase robustness. The final estimated lane is modeled as a spline using a combination of methods (Hough lines with Kalman filter and spline with particle filter). Based on the estimated lane, all other events are detected. To validate ELAS and cover the lack of lane datasets in the literature, a new dataset with more than 20 different scenes (in more than 15,000 frames) and considering a variety of scenarios (urban road, highways, traffic, shadows, etc.) was created. The dataset was manually annotated and made publicly available to enable evaluation of several events that are of interest for the research community (i.e., lane estimation, change, and centering; road markings; intersections; LMTs; crosswalks and adjacent lanes). Moreover, the system was also validated quantitatively and qualitatively on other public datasets. ELAS achieved high detection rates in all real-world events and proved to be ready for real-time applications.
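For orientation only, the sketch below shows in Python/OpenCV the generic building blocks the abstract mentions: an IPM warp, Hough-line lane-marking candidates, and a Kalman filter smoothing lane parameters over time. It is not the ELAS implementation (ELAS additionally combines perspective features and a particle-filter spline), and the trapezoid points, filter state, and covariances are placeholder assumptions:

```python
# Hedged sketch, not the ELAS implementation: it only illustrates the generic
# building blocks named in the abstract (IPM warp, Hough-line candidates, and
# a Kalman filter over lane parameters) with OpenCV. The source trapezoid,
# filter state, and noise covariances are placeholder assumptions.
import cv2
import numpy as np

def ipm(frame, src_pts, dst_size=(200, 400)):
    """Warp a perspective road image into a bird's-eye (IPM) view.

    src_pts: 4 road-plane points, ordered bottom-left, bottom-right,
    top-right, top-left; they must be calibrated for the actual camera.
    """
    w, h = dst_size
    dst_pts = np.float32([[0, h], [w, h], [w, 0], [0, 0]])
    H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(frame, H, dst_size)

def hough_lane_candidates(gray):
    """Edge map plus probabilistic Hough transform for marking segments."""
    edges = cv2.Canny(gray, 50, 150)
    return cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                           minLineLength=20, maxLineGap=10)

def make_lane_kalman():
    """Constant-velocity Kalman filter over (offset, heading) of the lane."""
    kf = cv2.KalmanFilter(4, 2)  # state: offset, heading and their rates
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
    kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
    return kf

if __name__ == '__main__':
    # Smoke test on a synthetic frame with one painted marking.
    frame = np.zeros((480, 640, 3), np.uint8)
    cv2.line(frame, (300, 479), (340, 300), (255, 255, 255), 5)
    src = [(100, 479), (540, 479), (400, 300), (240, 300)]  # hypothetical
    bev = ipm(frame, src)
    lines = hough_lane_candidates(cv2.cvtColor(bev, cv2.COLOR_BGR2GRAY))
    print('segments found:', 0 if lines is None else len(lines))
```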

## ELAS Database

To request access to the datasets, read the instructions here.

## Videos

Demonstration video of ELAS:

*Video 1: demonstration of ELAS.*

ELAS was loosely integrated into IARA (our autonomous vehicle). The video below shows ELAS running on IARA (without tuning any parameters):

*Video 2: ELAS running on IARA.*

## Source-code

I am working on the source code to make it easier to use. In the meantime, you can use this code. In addition, you can preview the annotations and see how the bird's-eye view was computed using the Python script provided in the scripts directory:

```
python scripts/preview_dataset.py --dataset /full/path/to/a/dataset/directory/ --fps 30
```
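The repository script defines the actual behavior; the following is only a hedged guess at the general shape of such a preview loop. The frame glob pattern, the homography source points, and the window handling below are assumptions, not the real `scripts/preview_dataset.py`:

```python
# Hedged sketch of what a dataset preview loop could look like; the real
# behavior is defined by scripts/preview_dataset.py. Paths, the homography
# points, and the frame naming below are assumptions.
import argparse
import glob
import os

import cv2
import numpy as np

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--dataset', required=True,
                        help='full path to a dataset directory')
    parser.add_argument('--fps', type=int, default=30)
    args = parser.parse_args()

    delay_ms = max(1, int(1000 / args.fps))
    # Hypothetical road trapezoid; must match the dataset's camera setup.
    src = np.float32([[200, 720], [1080, 720], [740, 450], [540, 450]])
    dst = np.float32([[0, 400], [200, 400], [200, 0], [0, 0]])
    H = cv2.getPerspectiveTransform(src, dst)

    for path in sorted(glob.glob(os.path.join(args.dataset, '*.png'))):
        frame = cv2.imread(path)
        if frame is None:
            continue
        bev = cv2.warpPerspective(frame, H, (200, 400))  # bird's-eye view
        cv2.imshow('frame', frame)
        cv2.imshow('bird-eye view', bev)
        if cv2.waitKey(delay_ms) & 0xFF == ord('q'):
            break
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()
```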

## BibTeX

```bibtex
@article{berriel2017imavis,
    Author  = {Rodrigo F. Berriel and Edilson de Aguiar and Alberto F. de Souza and Thiago Oliveira-Santos},
    Title   = {{Ego-Lane Analysis System (ELAS): Dataset and Algorithms}},
    Journal = {Image and Vision Computing},
    Volume  = {68},
    Pages   = {64--75},
    Year    = {2017},
    DOI     = {10.1016/J.IMAVIS.2017.07.005},
    ISSN    = {0262-8856},
}
```