
Paper-Implementation-Overview-Gradient-Descent-Optimization-Algorithms

Made with Python · MIT License

arXiv paper:

An Overview of Gradient Descent Optimization Algorithms, by Sebastian Ruder

Python 2.7

Links to the original paper, published on arXiv.org > cs > arXiv:1609.04747: [1], [2]

Link to a blog post explaining the paper: [3]

Implemented the following gradient descent optimization algorithms from scratch:

  1. Vanilla Batch/Stochastic Gradient Descent [4] : batch_gradient_descent.py

  2. Momentum [5] : momentum.py

  3. NAG : Nesterov Accelerated Gradient [6] : nesterov_accelarated_gradient.py

  4. AdaGrad : Adaptive Gradient Algorithm [7] : adagrad.py

  5. AdaDelta : Adaptive Learning Rate Method [8] : adadelta.py

  6. RMSProp [9] : rms_prop.py

  7. Adam : Adaptive Moment Estimation [10] [11] : adam.py

  8. AdaMax : Infinity Norm based Adaptive Moment Estimation [12] : adamax.py

  9. Nadam : Nesterov-accelerated Adaptive Moment Estimation [13] : nadam.py

  10. AMSGrad [14] : amsgrad.py
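As a minimal sketch of the first two methods above (not the repository's actual `batch_gradient_descent.py` or `momentum.py`; function names, signatures, and the stopping rule are assumptions for illustration), vanilla gradient descent and momentum on a scalar objective might look like:

```python
def gradient_descent(grad, x0, lr=0.1, tol=1e-4, max_iters=10000):
    """Vanilla gradient descent: step against the gradient until steps are tiny."""
    x = x0
    for _ in range(max_iters):
        step = lr * grad(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def momentum(grad, x0, lr=0.1, gamma=0.9, tol=1e-4, max_iters=10000):
    """Momentum: the velocity v accumulates a decaying sum of past gradients."""
    x, v = x0, 0.0
    for _ in range(max_iters):
        v = gamma * v + lr * grad(x)
        x -= v
        if abs(v) < tol:
            break
    return x

# Minimize f(x) = x^2, whose gradient is 2x; both iterates approach x = 0.
print(gradient_descent(lambda x: 2 * x, 1.0))
print(momentum(lambda x: 2 * x, 1.0))
```

Momentum typically overshoots and oscillates around the minimum before settling, which is visible if you log the iterates.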

Time and Error Analysis:

Minimized the dummy cost function f(x) = x^2 using the default hyperparameters: initial approximation = 1, error tolerance = 0.0001, learning rate = 0.1, gamma = 0.9, beta_1 = 0.9, beta_2 = 0.999.
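With those hyperparameters, an Adam update on the same objective can be sketched as follows (a hedged illustration, not the repository's `adam.py`; the function name, signature, and stopping rule are assumptions):

```python
import math

def adam(grad, x0, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8,
         tol=1e-4, max_iters=10000):
    """Adam: bias-corrected first and second moment estimates of the gradient."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, max_iters + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g        # first moment (mean of gradients)
        v = beta2 * v + (1 - beta2) * g * g    # second moment (uncentered variance)
        m_hat = m / (1 - beta1 ** t)           # bias correction for m
        v_hat = v / (1 - beta2 ** t)           # bias correction for v
        step = lr * m_hat / (math.sqrt(v_hat) + eps)
        x -= step
        if abs(step) < tol:
            break
    return x

# Minimize f(x) = x^2 from x0 = 1; the iterate approaches the minimum at x = 0.
print(adam(lambda x: 2 * x, 1.0))
```

Because the early steps are roughly lr in magnitude regardless of the gradient scale, Adam behaves like a normalized sign-descent at the start, then its steps shrink as the moment estimates decay near the minimum.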

[Plot: time and error comparison of the implemented optimizers]