Optimizers

Optimizers are at the heart of machine learning, and of deep learning in particular: they make a model work by reducing, or minimizing, its loss. Optimizers are the methods or algorithms used to adjust the attributes of a neural network, such as its weights and learning rate, in order to reduce the loss. In other words, optimization algorithms are responsible for reducing the loss function and producing the most accurate results possible by updating the weights.
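To make this concrete, here is a minimal sketch (in plain Python with NumPy, not any particular library's API) of the core update an optimizer performs: move each weight a small step against its gradient. The function and variable names are illustrative assumptions.

```python
import numpy as np

def sgd_step(weights, gradients, learning_rate=0.01):
    """Plain gradient-descent update: w <- w - lr * dL/dw."""
    return weights - learning_rate * gradients

# Toy example: minimize L(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w = np.array([0.0])
for _ in range(100):
    grad = 2 * (w - 3)      # gradient of the loss at the current w
    w = sgd_step(w, grad)   # the optimizer updates the weight
print(w)  # converges toward 3, the minimizer of the loss
```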

Back-propagation in Deep Learning / Neural Networks

Back-propagation is driven by the optimizer and is the essence of neural-network training. It fine-tunes the weights of a neural network based on the error rate obtained in the previous epoch (i.e., iteration). Adjusting the weights this way reduces the error rate and makes the model more reliable by improving its generalization. It is the standard method of training artificial neural networks, and it computes the gradient of the loss function with respect to every weight in the network.
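As a concrete illustration, here is a minimal sketch of one training step, assuming PyTorch as the framework; the tiny model and dummy data are purely illustrative. `loss.backward()` performs back-propagation, filling in the gradient of the loss with respect to every weight, and `optimizer.step()` applies the update.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)                # tiny illustrative network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(8, 4)                  # dummy inputs
y = torch.randn(8, 1)                  # dummy targets

optimizer.zero_grad()                  # clear old gradients
loss = loss_fn(model(x), y)            # forward pass computes the loss
loss.backward()                        # back-propagation fills in .grad
optimizer.step()                       # optimizer updates the weights
```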

Gradient Descent

Before digging into gradient descent, one must know why it came about and why it matters. So I recommend having a look at **brute-force algorithms** first, a programming style that uses no shortcuts to improve performance, as in an exhaustive solution to the traveling salesman problem (TSP). Solving deep-learning/ANN tasks with large data sets this way could take weeks, months, or years; gradient descent came into existence to escape this curse of dimensionality.

**Gradient descent**, then, is an optimization technique used to improve deep-learning and neural-network-based models by minimizing the cost function.

The working of gradient descent can be easily understood with an example. Suppose we have a big data set with millions of records. Gradient descent takes the complete data set, performs forward and back-propagation over it, and updates the weights once per epoch (one round of iteration). Repeating this epoch after epoch steadily reduces the loss function.
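That description can be sketched in a few lines of NumPy: full-batch gradient descent on a toy linear-regression problem, with one forward pass, one gradient computation, and one weight update per epoch. The data set, learning rate, and variable names are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))            # the "complete data set"
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=1000)

w = np.zeros(3)
lr = 0.1
for epoch in range(100):                  # one weight update per epoch
    preds = X @ w                         # forward pass over all records
    grad = 2 * X.T @ (preds - y) / len(y) # gradient of the MSE loss w.r.t. w
    w -= lr * grad                        # single update for this epoch
print(w)                                  # approaches true_w as the loss falls
```

Because each update sees every record, the loss decreases smoothly, but each epoch is expensive on very large data sets, which is exactly the cost described above.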

#artificial-intelligence #data-science #technology #deep-learning #machine-learning
