
# Behavioral Cloning

## Writeup

The goals / steps of this project are the following:

- Use the simulator to collect data of good driving behavior
- Build a convolutional neural network in Keras that predicts steering angles from images
- Train and validate the model with a training and validation set
- Test that the model successfully drives around track one without leaving the road
- Summarize the results with a written report

---

## Submission includes all required files

My project includes the following files:

- `model.py` containing the script to create and train the model
- `drive.py` for driving the car in autonomous mode
- `model.h5` containing a trained convolutional neural network
- `writeup.md` summarizing the results

## Installation

The project uses Python 3.5.2. Clone the GitHub repository, and use the Udacity CarND-Term1-Starter-Kit to get the rest of the dependencies.

## Usage

Once you have recorded a set of images and steering angles from track 1 of the simulator, you can start training the model by executing this line:

```sh
python model.py
```

Once `model.h5` has been generated by the model trainer, you can start the simulator in autonomous mode and see how well it performs by executing the following line:

```sh
python drive.py model.h5 runs
```

If you want to save a video of your car driving, execute the following line:

```sh
python video.py runs
```

## Network Architecture

My final CNN is based on NVIDIA's paper *End to End Learning for Self-Driving Cars*. The network consists of 9 layers: a normalization layer, 5 convolutional layers, and 3 fully connected layers. The input image is split into YUV planes and passed to the network. The network architecture is shown below.

*(figure: NVIDIA CNN architecture)*

Based on the paper, I trained a convolutional neural network (CNN) to map raw pixels from the simulator directly to steering commands. The system automatically learns internal representations of the necessary processing steps, such as detecting useful road features, with only the human steering angle as the training signal. The network is never explicitly trained to detect, for example, the outline of the road.

The summary of my model is shown below.

*(figure: Keras model summary)*
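For reference, here is a minimal sketch of this architecture, assuming the Keras 2 API; the layer depths and filter sizes follow the NVIDIA paper, so the exact code in `model.py` may differ:

```python
# Minimal sketch of the NVIDIA-style network (Keras 2 API assumed).
from keras.models import Sequential
from keras.layers import Lambda, Conv2D, Flatten, Dense

model = Sequential()
# Normalization layer: scale pixel values to [-0.5, 0.5].
model.add(Lambda(lambda x: x / 255.0 - 0.5, input_shape=(66, 200, 3)))
# 5 convolutional layers, depths 24-64 as in the NVIDIA paper.
model.add(Conv2D(24, (5, 5), strides=(2, 2), activation='relu'))
model.add(Conv2D(36, (5, 5), strides=(2, 2), activation='relu'))
model.add(Conv2D(48, (5, 5), strides=(2, 2), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
# 3 fully connected layers, then a single steering-angle output.
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))
```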

## Dataset

First, I used the simulator in training mode to record my driving on track 1. The dataset consists of:

- the dataset provided by Udacity
- two laps of center lane driving
- one lap of center lane driving in the counter-clockwise direction
- one lap of recovery driving from the sides
- data collected three extra times in the corners and on the bridge
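Each recording session produces a `driving_log.csv`; the sketch below shows one way to load it, assuming the simulator's standard column layout (center, left, and right image paths followed by steering, throttle, brake, and speed):

```python
import csv

# Load driving_log.csv: each row holds the center/left/right image
# paths followed by steering, throttle, brake and speed.
samples = []
with open('driving_log.csv') as f:
    for row in csv.reader(f):
        center_path, left_path, right_path = row[0], row[1], row[2]
        steering = float(row[3])
        samples.append((center_path, left_path, right_path, steering))
```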

I applied the following techniques to augment the image dataset (a code sketch follows the example images below):

1. Use the center, left, and right camera images, adding a correction of 0.15 to the steering angle for the left image and -0.27 for the right one. The idea is to keep the car centered and away from the borders.
2. Flip the images horizontally and invert the steering angle.
Center image: steering angle 0.100034

Left image: steering angle 0.250034

Right image: steering angle -0.049966

Flipped left image: steering angle -0.250034
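A minimal sketch of these two augmentation steps, assuming the images are already loaded as NumPy arrays (the `augment` helper and correction constants are illustrative names, using the values quoted in the list above):

```python
import numpy as np

LEFT_CORRECTION = 0.15    # steer further right when using the left camera
RIGHT_CORRECTION = -0.27  # steer further left when using the right camera

def augment(center_img, left_img, right_img, steering):
    """Expand one log entry into six (image, angle) training pairs."""
    images = [center_img, left_img, right_img]
    angles = [steering,
              steering + LEFT_CORRECTION,
              steering + RIGHT_CORRECTION]
    # Flip each image horizontally and invert its steering angle.
    flipped = [np.fliplr(img) for img in images]
    flipped_angles = [-a for a in angles]
    return images + flipped, angles + flipped_angles
```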

In the final architecture, I also included additional pre-processing steps (see the sketch after the example images below):

1. Convert the image from RGB to YUV;
2. Crop 50 pixels from the top and 20 from the bottom;
3. Resize the image from 320×160 to 200×66.
YUV image

Cropped image

Resized image
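A minimal sketch of these three steps, assuming OpenCV and the simulator's 320×160 RGB frames (the `preprocess` helper name is illustrative):

```python
import cv2

def preprocess(image):
    """Apply the three pre-processing steps to a 160x320x3 RGB frame."""
    # 1. Convert from RGB to YUV, as in the NVIDIA pipeline.
    yuv = cv2.cvtColor(image, cv2.COLOR_RGB2YUV)
    # 2. Crop 50 pixels from the top (sky) and 20 from the bottom (hood).
    cropped = yuv[50:-20, :, :]
    # 3. Resize to the network's 200x66 input (cv2 takes (width, height)).
    return cv2.resize(cropped, (200, 66))
```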

## Training/Validation Split

I used `train_test_split` to split the recorded data into training and validation sets with a ratio of 0.8.
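A minimal sketch, assuming `samples` is the list loaded from `driving_log.csv` earlier (the `random_state` value is illustrative):

```python
from sklearn.model_selection import train_test_split

# 80/20 split of the recorded samples into training and validation sets.
train_samples, validation_samples = train_test_split(
    samples, train_size=0.8, random_state=42)
```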

## Training

The model used the Adam optimizer to minimize mean squared error (MSE) as the loss function. The batch size was set to 32, and the model was trained for 15 epochs, chosen so that the model did not overfit.
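A minimal sketch of the training call under these settings, assuming `X_train`/`y_train` and `X_valid`/`y_valid` hold the preprocessed images and steering angles (`model.py` may instead stream batches with a generator):

```python
# Adam optimizer minimizing MSE, 15 epochs with batch size 32.
model.compile(optimizer='adam', loss='mse')
model.fit(X_train, y_train,
          batch_size=32,
          epochs=15,
          validation_data=(X_valid, y_valid))
```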

The most challenging part was teaching the car not to drive off the road in the section after the bridge, where the lane lines are not marked clearly. Following a recommendation I found on the web, I did the following to solve the problem: I drove to the position where the car tends to drive off onto the dirt road and stopped at that position with a similar orientation. Then I turned the wheel toward the center of the road and recorded data while the car stood still for a few seconds (10–30 s). This way the model learned what to do when it encounters the dirt road.


## Video

Here's a link to my video result

## Conclusion

For my final model, I used only the Udacity data, without any data I collected myself. There are 58,567 training examples and 14,642 validation examples, which add up to exactly 73,209; counting each image together with its horizontal flip, that is 73,209 × 2 = 146,418 images. I trained for 15 epochs with batch size 32. The validation loss decreased to 0.0095 for this particular model, lower than that of some other models trained on both the Udacity data and my own recordings, which were also able to drive the car around the simulator track.