No matter what kind of Machine Learning model you’re working on, you need to optimize it, and in this blog, we’ll learn exactly how optimization works.

Optimization in Machine Learning is one of the most important steps and possibly also the hardest to learn. **The optimizer is a function that optimizes Machine Learning models using training data. Optimizers use a Loss Function to calculate the loss of the model and then, based on that, try to optimize it.** So without an optimizer, a Machine Learning model can’t do anything amazing.

In this blog, my aim is to explain how optimization works, the logic behind it, and the math behind it. I won’t provide any code. Continue only if you’re looking for a mathematical and logical explanation.

This is the first part of this series of blogs on Optimization in Machine Learning. In this blog, I’ll explain optimization in an ultra-simple way with a stupid example. This is specifically helpful for absolute beginners who do not have any idea how optimization works.

As I mentioned earlier, the optimizer uses a Loss Function to calculate the loss of the model, and then, based on that, the optimizer updates the model to achieve a better score. So let’s understand the Loss Function first.
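To make this concrete, here is one common choice of Loss Function (just an illustrative example, not the only option): Mean Squared Error, which measures how far the model’s predictions $\hat{y}_i$ are from the true values $y_i$ over $n$ training examples:

$$
\text{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2
$$

A lower value means the predictions are closer to the truth, so the optimizer’s job is to update the model in a way that shrinks this number.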

#deep-learning #optimization-algorithms #optimization #machine-learning
