Advanced Lane Finding

In this project, the goal is to identify the lane boundaries in a video stream.

Advanced Lane Finding Project

The goals / steps of this project are the following:

  • Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
  • Apply a distortion correction to raw images.
  • Use color transforms, gradients, etc., to create a thresholded binary image.
  • Apply a perspective transform to rectify binary image ("birds-eye view").
  • Detect lane pixels and fit to find the lane boundary.
  • Determine the curvature of the lane and vehicle position with respect to center.
  • Warp the detected lane boundaries back onto the original image.
  • Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.

Camera Calibration

The OpenCV functions findChessboardCorners and calibrateCamera are used for camera calibration. A number of chessboard images, taken from different angles with the same camera, serve as input. Arrays of object points (the idealized locations of the internal chessboard corners) and image points (the pixel locations of those corners as detected by findChessboardCorners) are fed to calibrateCamera, which returns the camera calibration matrix and distortion coefficients. These can then be used by the OpenCV undistort function to undo the effects of distortion on any image produced by the same camera. Generally, these coefficients will not change for a given camera (and lens). The image below depicts the process.

alt text

The images below show the result of applying the undistort function, with the computed calibration and distortion coefficients, to one of the chessboard images.

alt text alt text
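This calibration step can be sketched as follows. The camera_cal/ directory and the 9x6 internal-corner grid are illustrative assumptions, not confirmed details of this repository:

import glob
import numpy as np
import cv2

# One set of idealized 3D object points for a 9x6 internal-corner board:
# (0,0,0), (1,0,0), ..., (8,5,0); z = 0 since the board is flat.
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

objpoints = []  # 3D points in the chessboard's own coordinate frame
imgpoints = []  # 2D corner locations detected in each calibration image

for fname in glob.glob('camera_cal/*.jpg'):  # assumed image directory
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (9, 6), None)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

# Calibrate once; the coefficients are reused for every frame afterwards
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
undistorted = cv2.undistort(img, mtx, dist, None, mtx)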

To obtain a thresholded binary image, several filters are applied to the image: Sobel gradients along the x and y axes, a gradient-direction filter, and a threshold on the S channel of the HLS color space. The images below show the result.

alt text alt text

To detect lane lines in a more robust way, the L channel of the HLS color space is used to isolate white lines and the B channel of the LAB color space is used to isolate yellow lines. As can be seen below, this approach gives better results than the previous one; I did not use any gradient thresholds in my final pipeline. Each filter's threshold values are adjusted so that it tolerates changing lighting conditions.

alt text
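A minimal sketch of this two-channel approach; the threshold values and the conditional normalization of the B channel are illustrative assumptions, not the tuned values of the actual pipeline:

import numpy as np
import cv2

def color_threshold(img):
    # L channel of HLS responds strongly to white lines,
    # B channel of LAB responds strongly to yellow lines.
    hls_l = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)[:, :, 1]
    lab_b = cv2.cvtColor(img, cv2.COLOR_RGB2LAB)[:, :, 2]

    # Stretch the B channel only when yellow is actually present,
    # so frames without yellow lines are not amplified into noise.
    if np.max(lab_b) > 175:
        lab_b = np.uint8(lab_b * (255.0 / np.max(lab_b)))

    binary = np.zeros_like(hls_l)
    binary[(hls_l > 220) | (lab_b > 190)] = 1  # assumed thresholds
    return binary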

The images below show the results after applying binary thresholding and the perspective transform.

alt text alt text alt text

To warp the image to a bird's-eye view, source and destination points must be defined. These points are hardcoded:

src = np.float32([(575,464),
                  (707,464), 
                  (258,682), 
                  (1049,682)])
dst = np.float32([(450,0),
                  (w-450,0),
                  (450,h),
                  (w-450,h)])
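With these points defined, the warp matrix and its inverse can be computed and applied with OpenCV; a minimal sketch, where img is assumed to be the undistorted input frame and h, w its height and width:

import cv2

h, w = img.shape[:2]
M = cv2.getPerspectiveTransform(src, dst)     # forward warp to bird's-eye view
Minv = cv2.getPerspectiveTransform(dst, src)  # inverse warp, used later
warped = cv2.warpPerspective(img, M, (w, h), flags=cv2.INTER_LINEAR)

Computing Minv once alongside M avoids having to invert the matrix when the lane area is warped back at the end of the pipeline.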

alt text

The functions find_lines and polyfit_using_prev_fit identify the lane lines and fit a second-order polynomial to both the right and left lane lines. In the first step, a histogram of the binary warped image is computed. From the peaks of the histogram, the base positions of the left and right lane lines are found. The images below show the process.

alt text

The image below depicts the histogram generated by find_lines; the base points chosen for the left and right lanes are the peaks nearest to the center of the image.

alt text
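A minimal sketch of this histogram step, assuming binary_warped is the thresholded bird's-eye image; restricting the search to the quarters around the midpoint is one way to pick the peaks nearest the center:

import numpy as np

# Column-wise pixel counts over the lower half of the binary warped image;
# peaks mark the likely base x positions of the lane lines.
histogram = np.sum(binary_warped[binary_warped.shape[0] // 2:, :], axis=0)

midpoint = histogram.shape[0] // 2
quarter = midpoint // 2
leftx_base = np.argmax(histogram[quarter:midpoint]) + quarter
rightx_base = np.argmax(histogram[midpoint:midpoint + quarter]) + midpoint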

The polyfit_using_prev_fit function performs essentially the same task, but it makes the lane-line search easier by using the previous fit: it only searches for lane pixels within a certain margin of that fit. The image below demonstrates this; the green shaded area is the search range from the previous fit, and the yellow lines and red and blue pixels are from the current image:

alt text
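A minimal sketch of this margin search for the left line (the right line is handled identically), assuming left_fit holds the previous frame's coefficients; the margin value is an illustrative assumption:

import numpy as np

margin = 80  # search half-width in pixels (assumed value)
nonzeroy, nonzerox = binary_warped.nonzero()

# Keep only the nonzero pixels within +/- margin of the x position
# predicted by the previous fit at each pixel's y coordinate.
left_x_pred = left_fit[0] * nonzeroy**2 + left_fit[1] * nonzeroy + left_fit[2]
left_inds = (nonzerox > left_x_pred - margin) & (nonzerox < left_x_pred + margin)

# Refit a second-order polynomial x = f(y) to the selected pixels
new_left_fit = np.polyfit(nonzeroy[left_inds], nonzerox[left_inds], 2)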

The radius of curvature follows the standard formula for the curvature of a second-order polynomial and is calculated using the line below:

curve_radius = ((1 + (2*fit[0]*y_0*y_meters_per_pixel + fit[1])**2)**1.5) / np.absolute(2*fit[0])

In this formula, fit[0] is the first coefficient of the second-order polynomial fit and fit[1] is the second coefficient. y_0 is the y position within the image at which the curvature is evaluated; the bottom-most y (the position of the car in the image) was chosen. y_meters_per_pixel is the factor used for converting from pixels to meters. This conversion is also used to generate a new fit whose coefficients are in terms of meters.
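Putting it together, a minimal sketch of the curvature computation, assuming ploty holds the y sample positions and fitx the fitted x values of one lane line; the pixel-to-meter factors are the conventional US-highway assumptions (about 30 m per 720 pixels vertically and 3.7 m per 700 pixels horizontally):

import numpy as np

y_meters_per_pixel = 30.0 / 720  # assumed vertical scale
x_meters_per_pixel = 3.7 / 700   # assumed horizontal scale

# Refit with coordinates converted to meters so that the resulting
# coefficients, and hence the radius, are expressed in meters.
fit = np.polyfit(ploty * y_meters_per_pixel, fitx * x_meters_per_pixel, 2)

y_0 = np.max(ploty)  # bottom of the image: the car's position
curve_radius = ((1 + (2*fit[0]*y_0*y_meters_per_pixel + fit[1])**2)**1.5) / np.absolute(2*fit[0])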

The position of the vehicle with respect to the center of the lane is calculated with the following lines of code:

lane_center_position = (r_fit_x_int + l_fit_x_int) / 2
center_dist = (car_position - lane_center_position) * x_meters_per_pix

r_fit_x_int and l_fit_x_int are the x-intercepts of the right and left fits, respectively.
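A minimal sketch of how these values can be obtained, assuming h and w are the image height and width and l_fit and r_fit are the current left and right polynomial fits:

x_meters_per_pix = 3.7 / 700  # assumed horizontal scale

# Evaluate each fit at the bottom of the image (y = h) for the x-intercepts
l_fit_x_int = l_fit[0] * h**2 + l_fit[1] * h + l_fit[2]
r_fit_x_int = r_fit[0] * h**2 + r_fit[1] * h + r_fit[2]

car_position = w / 2  # the camera is assumed to be mounted at the car's center
lane_center_position = (r_fit_x_int + l_fit_x_int) / 2
center_dist = (car_position - lane_center_position) * x_meters_per_pix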

After locating the lane-line pixels, the detected lane area is warped back onto the original image using Minv, the inverse of the perspective transform matrix.

alt text
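A minimal sketch of this final step, assuming undistorted is the undistorted frame, left_fitx and right_fitx are the fitted x values along ploty, and Minv was computed together with the forward warp matrix:

import numpy as np
import cv2

# Draw the detected lane area on a blank bird's-eye canvas
lane_img = np.zeros_like(undistorted)
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))  # closed polygon: left side down, right side up
cv2.fillPoly(lane_img, np.int32(pts), (0, 255, 0))

# Warp the lane area back to the original perspective and overlay it
h, w = undistorted.shape[:2]
unwarped = cv2.warpPerspective(lane_img, Minv, (w, h))
result = cv2.addWeighted(undistorted, 1.0, unwarped, 0.3, 0)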

The resulting video stream can be found here.


Discussion

The problems most often encountered are due to lighting conditions, shadows, discoloration, and the like. Although good results are obtained using Sobel filters, the lane lines are found more robustly using the B channel of the LAB color space, which isolates the yellow line well, and the L channel of the HLS color space. Even though the threshold parameters can be tuned to find the lane lines in a given video stream, the filters may still fail in snowy weather or when a white car enters the frame. Therefore, an automated system that adapts the filters to every possible circumstance would be needed.

Acknowledgment

I would like to thank jeremy-shannon for his work; it helped a lot throughout the whole process.
