Deep Learning now dominates fields ranging from agriculture and medical science to automobiles, education, defense, and security. For a neural network to produce good results, its training algorithm has to be efficient. Optimization techniques are the centerpiece of deep learning: when you expect better and faster results from a neural network, the choice of optimization algorithm can make the difference between waiting hours or days for good accuracy. There are a few main levers for improving optimization in a neural network:

  1. Better Optimization Algorithm
  2. Better Activation Function
  3. Better Initialization Method
  4. Better Regularization

In this article, we will focus only on the first point: better optimization algorithms for Deep Neural Networks (DNNs). For the rest of this article, we will refer to these optimization algorithms as learning algorithms. There are several well-known learning algorithms; let's have a look at them, with a minimal update-rule sketch after each group.

Momentum-Based Learning Algorithms

  1. Vanilla Gradient Descent (GD)
  2. Momentum Based Gradient Descent
  3. Nesterov Accelerated Gradient Descent (NAG)
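
To make the differences concrete, here is a minimal NumPy sketch of the three update rules for a single parameter vector `w`. The names `grad_fn`, `eta` (learning rate), and `gamma` (momentum coefficient) are illustrative assumptions, not something defined in this article.

```python
import numpy as np

def vanilla_gd_step(w, grad_fn, eta=0.1):
    # Plain gradient descent: step against the current gradient.
    return w - eta * grad_fn(w)

def momentum_gd_step(w, v, grad_fn, eta=0.1, gamma=0.9):
    # Momentum: accumulate an exponentially decaying history of past gradients.
    v = gamma * v + eta * grad_fn(w)
    return w - v, v

def nag_step(w, v, grad_fn, eta=0.1, gamma=0.9):
    # Nesterov: evaluate the gradient at the "look-ahead" point w - gamma * v,
    # which lets the update correct itself before overshooting.
    v = gamma * v + eta * grad_fn(w - gamma * v)
    return w - v, v
```

For example, minimizing f(w) = ||w||^2 with `grad_fn = lambda w: 2 * w` shows momentum and NAG converging in fewer steps than vanilla GD on the same learning rate.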

Batch-Size-Based Learning Algorithms

  1. Stochastic Update
  2. Mini-Batch Update
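
A rough sketch of the difference, again with hypothetical names (`X`, `y`, `grad_fn`): a stochastic update adjusts the parameters after every single example, while a mini-batch update averages the gradient over a small batch of examples before each adjustment.

```python
import numpy as np

def stochastic_epoch(w, X, y, grad_fn, eta=0.01):
    # One parameter update per training example: noisy but frequent steps.
    for xi, yi in zip(X, y):
        w = w - eta * grad_fn(w, xi[None, :], np.array([yi]))
    return w

def minibatch_epoch(w, X, y, grad_fn, eta=0.01, batch_size=32):
    # One parameter update per mini-batch: smoother gradient estimates,
    # fewer updates per epoch, and better hardware utilization.
    for start in range(0, len(X), batch_size):
        xb, yb = X[start:start + batch_size], y[start:start + batch_size]
        w = w - eta * grad_fn(w, xb, yb)
    return w
```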

Adaptive Learning-Rate-Based Learning Algorithms

  1. AdaGrad
  2. RMSProp
  3. Adam (a combination of RMSProp and momentum-based GD)
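
Here is a minimal sketch of the three adaptive update rules, assuming a gradient function `grad_fn` and a 1-indexed step counter `t` for Adam's bias correction; the default hyperparameters shown are common choices, not values prescribed by this article.

```python
import numpy as np

def adagrad_step(w, g, grad_fn, eta=0.01, eps=1e-8):
    # AdaGrad: accumulate squared gradients; the effective step size
    # for frequently updated parameters only shrinks over time.
    grad = grad_fn(w)
    g = g + grad ** 2
    return w - eta * grad / (np.sqrt(g) + eps), g

def rmsprop_step(w, g, grad_fn, eta=0.001, beta=0.9, eps=1e-8):
    # RMSProp: replace AdaGrad's running sum with an exponentially
    # decaying average, so the step size can recover.
    grad = grad_fn(w)
    g = beta * g + (1 - beta) * grad ** 2
    return w - eta * grad / (np.sqrt(g) + eps), g

def adam_step(w, m, v, t, grad_fn, eta=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adam: momentum-style first moment plus RMSProp-style second moment,
    # each corrected for their bias toward zero at early steps.
    grad = grad_fn(w)
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return w - eta * m_hat / (np.sqrt(v_hat) + eps), m, v
```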

