Machine Learning Optimization (1) - Gradient Descent with Python

In this article, we explore gradient descent - the grandfather of all optimization techniques - and its variations. We implement them from scratch with Python.

The **code** that accompanies this article can be found **here**.

So far in our journey through the Machine Learning universe, we covered several big topics. We investigated some **regression** algorithms, **classification** algorithms and algorithms that can be used for both types of problems (**SVM**, **Decision Trees** and **Random Forest**). Apart from that, we dipped our toes into unsupervised learning, saw how we can use this type of learning for **clustering** and learned about several clustering techniques.

We also talked about how to quantify machine learning model **performance** and how to improve it with **regularization**. In all these articles, we used Python for "from scratch" implementations, along with libraries like **TensorFlow**, **PyTorch** and **Scikit-Learn**. The word optimization popped up more than once in these articles, so in this article and the next, we focus on optimization techniques, which are an important part of the machine learning process.

In general, every machine learning algorithm is composed of three integral parts:

- A **loss** function.
- An optimization criterion based on the loss function, like a **cost** function.
- An **optimization** technique – this process leverages training data to find a solution for the optimization criterion (cost function). A minimal sketch of how these pieces fit together follows this list.
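To make these three parts concrete, here is a minimal sketch, assuming squared error as the loss and a simple linear model as the optimization target (both are illustrative choices, not necessarily the ones used later in this series):

```python
import numpy as np

def loss(y_true, y_pred):
    """Loss function: squared error for a single example."""
    return (y_true - y_pred) ** 2

def cost(w, b, X, y):
    """Optimization criterion: mean of the per-example losses
    over the whole training set (mean squared error)."""
    predictions = X.dot(w) + b
    return np.mean(loss(y, predictions))
```

The optimization technique (the third part) is then whatever procedure we use to search for the `w` and `b` that make `cost` as small as possible.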

As you were able to see in previous articles, some algorithms were created intuitively and didn't have an optimization criterion in mind. In fact, mathematical **explanations** of why and how these algorithms work were provided later. Some of these algorithms are **Decision Trees** and **kNN**. Other algorithms, which were developed later, were designed with an optimization criterion in mind from the start. **SVM** is one example.

During training, we change the parameters of our machine learning model to try and **minimize** the loss function. However, the question arises of how to change those parameters, by how much, and when. To answer all these questions we use **optimizers**. They put all the different parts of the machine learning algorithm together. So far we have mentioned **Gradient Descent** as an optimization technique, but we haven't explored it in more detail. In this article, we focus on that and cover the **grandfather** of all optimization techniques and its variations. Note that these techniques are **not** machine learning algorithms themselves. They are solvers of **minimization** problems in which the function to minimize has a gradient at most points of its domain.
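As a preview of what follows, here is a minimal sketch of the core gradient descent update rule. The function name, the learning rate value, and the toy objective f(x) = x² are illustrative assumptions, not the final implementation from this article:

```python
import numpy as np

def gradient_descent(gradient, start, learning_rate=0.1, n_iterations=100):
    """Minimize a function by repeatedly stepping in the direction
    opposite to its gradient."""
    position = np.asarray(start, dtype=float)
    for _ in range(n_iterations):
        # Core update rule: new position = old position - learning rate * gradient
        position = position - learning_rate * gradient(position)
    return position

# Toy example: minimize f(x) = x^2, whose gradient is 2x; the minimum is at x = 0.
minimum = gradient_descent(gradient=lambda x: 2 * x, start=np.array([3.0]))
print(minimum)  # a value very close to [0.]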

☞ **Machine Learning Optimization (2) - Advanced Optimizers from scratch with Python**

