Project Code and its Usage

The code is available on Github: https://github.com/marthtz/CarND-P5-Vehicle-Detection

The project was cloned from the official CarND P5 repository. I added the following files:

  • In main folder (Python scripts, classifier data and this writeup)
    • vehicleSVCClassifier.py (script to train SVC classifier)
    • tools.py (script with common functions)
    • detectVehicles.py (script with main and processing pipeline for vehicle detection)
    • detectLaneAndVehicles.py (script that combines lane and vehicle detection, P4 + P5)
    • svc_YUV_32_indv.pkl (pickle saved classifier)
    • cam_cal_pickle.p (pickle saved camera calibration from P4)
    • Writeup.pdf
  • In output_images folder (processed test images and videos)
    • project_video_detect.mp4 (main video)
    • project_video_detect_debug.mp4 (main video + debug info)
    • project_video_lane_vehicle.mp4 (lane and vehicle detection combined)
    • test1_detect.jpg to test6_detect.jpg
    • test1_detect_debug.jpg to test6_detect_debug.jpg
  • In output_images/writeup_images folder (images mentioned in this writeup)
    • car.png
    • notcar.png
    • carhog.png
    • carhogfeatures.png
    • roihog.png
    • windows96.jpg
    • windows128.jpg

Data sets:

To run the classifier, the labeled vehicle and non-vehicle data sets (linked in the project description below) have to be downloaded and extracted into a common 'data' folder in the same directory as the Python scripts.

Usage of the Python scripts:

vehicleSVCClassifier.py

This script opens the supplied data set and trains a linear SVC classifier to detect vehicles. The supplied data must be extracted to a “data” folder.

Execution: “python vehicleSVCClassifier.py”

The script saves the classifier in a pickle file in the same folder as the script.
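
For orientation, the training step boils down to extracting HOG (and optionally color) features, normalizing them, fitting a linear SVC, and pickling the result. The following is only a minimal sketch of that flow with scikit-learn and scikit-image; the function name, the feature parameters, and the pickle layout are illustrative assumptions, not the exact code in vehicleSVCClassifier.py.

```python
# Minimal training sketch (illustrative only; names, parameters and the pickle
# layout are assumptions, not the exact code in vehicleSVCClassifier.py).
import glob
import pickle
import numpy as np
from skimage.feature import hog
from skimage.io import imread
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC


def extract_hog_features(path, orient=9, pix_per_cell=8, cell_per_block=2):
    """Return a flattened HOG feature vector for one 64x64 training image."""
    img = imread(path)
    channels = []
    for ch in range(img.shape[2]):  # HOG per color channel
        channels.append(hog(img[:, :, ch],
                            orientations=orient,
                            pixels_per_cell=(pix_per_cell, pix_per_cell),
                            cells_per_block=(cell_per_block, cell_per_block),
                            feature_vector=True))
    return np.concatenate(channels)


# Assumed layout after extracting the data sets into the 'data' folder
cars = glob.glob('data/vehicles/**/*.png', recursive=True)
notcars = glob.glob('data/non-vehicles/**/*.png', recursive=True)

X = np.vstack([extract_hog_features(f) for f in cars + notcars])
y = np.hstack((np.ones(len(cars)), np.zeros(len(notcars))))

scaler = StandardScaler().fit(X)  # normalize features
X_train, X_test, y_train, y_test = train_test_split(
    scaler.transform(X), y, test_size=0.2, random_state=42)  # randomized split

svc = LinearSVC()
svc.fit(X_train, y_train)
print('Test accuracy:', svc.score(X_test, y_test))

# Persist classifier and scaler for the detection scripts
with open('svc_classifier.pkl', 'wb') as f:
    pickle.dump({'svc': svc, 'scaler': scaler}, f)
```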

detectVehicles.py

This script runs vehicle detection on the provided images or videos and can optionally generate debug information. It is executed by calling the script with the input images/videos as parameters. If the first argument is 'debug', additional debug output is generated. A saved classifier has to be specified as an argument before the input files (first argument if debug is not enabled, second argument if it is); a sketch of this argument layout follows the examples below.

Examples:

  • python detectVehicles.py debug svc_YUV_32.pkl test_images/test1.jpg test_images/test2.jpg
  • python detectVehicles.py debug svc_YUV_32_indv.pkl project_video.mp4
  • python detectVehicles.py svc_YUV_32_indv.pkl project_video.mp4
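
The following is a minimal sketch of how the argument layout described above (optional 'debug' flag, then the classifier pickle, then one or more input files) could be parsed; it is illustrative only and not the actual parsing code in detectVehicles.py.

```python
# Illustrative sketch of the argument layout described above
# (not the actual parsing code from detectVehicles.py).
import sys


def parse_args(argv):
    """Return (debug, classifier_path, input_files) from the command line."""
    args = argv[1:]
    debug = len(args) > 0 and args[0] == 'debug'
    if debug:
        args = args[1:]  # drop the 'debug' flag
    if len(args) < 2:
        sys.exit('Usage: python detectVehicles.py [debug] <classifier.pkl> <input files...>')
    return debug, args[0], args[1:]


if __name__ == '__main__':
    debug, classifier_path, input_files = parse_args(sys.argv)
    print(debug, classifier_path, input_files)
```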

detectLaneAndVehicles.py

This script runs full lane line detection and vehicle detection on the provided videos (images and debug output are not tested). A saved classifier has to be specified as an argument before the input files. The script is essentially a combination of P4 and P5; a rough sketch of how the two per-frame pipelines can be chained follows the example below.

Examples:

  • python detectLaneAndVehicles.py svc_YUV_32_indv.pkl project_video.mp4
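
As a rough illustration of the combination, the sketch below chains two placeholder functions over a video with moviepy; detect_lanes and detect_vehicles are hypothetical stand-ins for the P4 and P5 pipelines, not the actual functions in detectLaneAndVehicles.py.

```python
# Rough sketch of a combined per-frame pipeline (illustrative only;
# detect_lanes and detect_vehicles are hypothetical stand-ins for the
# P4 and P5 processing functions, not the code in detectLaneAndVehicles.py).
from moviepy.editor import VideoFileClip


def detect_lanes(frame):
    # Placeholder for the P4 lane-finding pipeline; returns an annotated frame.
    return frame


def detect_vehicles(frame):
    # Placeholder for the P5 vehicle-detection pipeline; returns an annotated frame.
    return frame


def process_frame(frame):
    """Annotate lane lines first, then draw vehicle boxes on the same frame."""
    return detect_vehicles(detect_lanes(frame))


clip = VideoFileClip('project_video.mp4')
clip.fl_image(process_frame).write_videofile('project_video_lane_vehicle.mp4', audio=False)
```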

Vehicle Detection

Udacity - Self-Driving Car NanoDegree

In this project, your goal is to write a software pipeline to detect vehicles in a video (start with the test_video.mp4 and later implement on full project_video.mp4), but the main output or product we want you to create is a detailed writeup of the project. Check out the writeup template for this project and use it as a starting point for creating your own writeup.

Creating a great writeup:

A great writeup should include the rubric points as well as your description of how you addressed each point. You should include a detailed description of the code used in each step (with line-number references and code snippets where necessary), and links to other supporting documents or external references. You should include images in your writeup to demonstrate how your code works with examples.

All that said, please be concise! We're not looking for you to write a book here, just a brief description of how you passed each rubric point, and references to the relevant code :).

You can submit your writeup in markdown or use another method and submit a pdf instead.

The Project

The goals / steps of this project are the following:

  • Perform a Histogram of Oriented Gradients (HOG) feature extraction on a labeled training set of images and train a Linear SVM classifier
  • Optionally, you can also apply a color transform and append binned color features, as well as histograms of color, to your HOG feature vector.
  • Note: for those first two steps don't forget to normalize your features and randomize a selection for training and testing.
  • Implement a sliding-window technique and use your trained classifier to search for vehicles in images.
  • Run your pipeline on a video stream (start with the test_video.mp4 and later implement on full project_video.mp4) and create a heat map of recurring detections frame by frame to reject outliers and follow detected vehicles (a minimal heat-map sketch follows this list).
  • Estimate a bounding box for vehicles detected.
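
The heat-map step in the list above can be illustrated with a short sketch: accumulate heat inside every positive sliding-window detection, threshold it to reject outliers, and label the remaining blobs to get one bounding box per vehicle. The helper names and the threshold value below are assumptions for illustration, not the project's actual code.

```python
# Minimal heat-map sketch (illustrative only; helper names and the threshold
# value are assumptions, not the project's actual code).
import numpy as np
from scipy.ndimage import label


def add_heat(heatmap, bbox_list):
    """Add +1 inside every positive sliding-window detection."""
    for (x1, y1), (x2, y2) in bbox_list:
        heatmap[y1:y2, x1:x2] += 1
    return heatmap


def heat_to_boxes(heatmap, threshold=2):
    """Zero out weak detections, then label the remaining blobs as vehicles."""
    heatmap[heatmap <= threshold] = 0
    labels, n_cars = label(heatmap)
    boxes = []
    for car in range(1, n_cars + 1):
        ys, xs = np.nonzero(labels == car)
        boxes.append(((xs.min(), ys.min()), (xs.max(), ys.max())))
    return boxes


# Example: two overlapping detections on one car and a single spurious hit
heat = np.zeros((720, 1280), dtype=np.float32)
detections = [((800, 400), (928, 528)),   # window hit on a car
              ((820, 410), (948, 538)),   # overlapping hit on the same car
              ((100, 400), (164, 464))]   # isolated false positive
heat = add_heat(heat, detections)
print(heat_to_boxes(heat, threshold=1))   # keeps the overlap region, drops the outlier
```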

Here are links to the labeled data for vehicle and non-vehicle examples to train your classifier. These example images come from a combination of the GTI vehicle image database, the KITTI vision benchmark suite, and examples extracted from the project video itself. You are welcome and encouraged to take advantage of the recently released Udacity labeled dataset to augment your training data.

Some example images for testing your pipeline on single frames are located in the test_images folder. To help the reviewer examine your work, please save examples of the output from each stage of your pipeline in the folder called output_images, and include them in your writeup for the project by describing what each image shows. The video called project_video.mp4 is the video your pipeline should work well on.

As an optional challenge, once you have a working pipeline for vehicle detection, add in your lane-finding algorithm from the last project to do simultaneous lane-finding and vehicle detection!

If you're feeling ambitious (also totally optional though), don't stop there! We encourage you to go out and take video of your own, and show us how you would implement this project on a new video!