We are currently living in a “data era,” where **a vast amount of data is collected and stored every day**. In the face of this growing quantity of data, **machine learning methods have become inescapable**. So much so that you probably use them dozens of times a day without even noticing!

Let’s start with an example of an “everyday” machine learning contribution for millions of users: the algorithm behind Facebook’s News Feed. Facebook uses machine learning to exploit users’ data and feedback to personalize their feeds. If you “like” a post or stop scrolling to read something, the algorithm learns from this and starts to populate your feed with further similar content. This learning is done continuously, and so the material suggested in your News Feed evolves with your preferences, making your user experience more enjoyable.

This is only one example! There are many others. Apple can recognize your friend’s face in the photo you just took. Amazon Echo understands you and can answer your questions. Your vacuum can even navigate its way around your house while Netflix is recommending videos that match your profile! Machine learning has become a massive part of our daily lives, and it’s not going anywhere soon.

But what is machine learning exactly? What’s behind these magical-looking algorithms? And how do they use data to work so well?

Formally, **machine learning is the science of getting computers to perform a task without being explicitly programmed**. In other words, the big difference between classical and machine learning algorithms lies in the way we define them.

**Classical algorithms** are given exact and complete rules to complete a task. **Machine learning algorithms** are given general guidelines that define the model, along with data. This data should contain the missing information necessary for the model to complete the task. So, a machine learning algorithm can accomplish its task when the model has been adjusted with respect to the data. We say that we **“fit the model on the data”** or that **“the model has to be trained on the data.”**

Let’s illustrate this with a simple example. Let’s say we want to predict the price of a house based on the size of the house, the size of its garden, and the number of rooms it has.

We could try to build a classical algorithm that solves this problem. This algorithm would have to take the three house features and return the predicted price based on an explicit rule. In this example, the exact house-pricing formula would have to be known and coded explicitly. But in practice, this formula is often not known.

On the other hand, we could build a machine learning algorithm. First, such an algorithm would define a model, which can be an incomplete formula built from our limited knowledge. **Then, the model would be adjusted by training it on given examples of housing prices**. In doing so, we combine a model with some data.
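As a rough sketch of this idea, here is a toy version of the house-pricing model in plain Python. All the numbers are invented for illustration, and gradient descent is just one simple way to "fit the model on the data" — the point is that the pricing rule is learned from examples rather than written down by hand:

```python
# Made-up training examples: (size, garden_size, rooms) -> price.
# All values are in arbitrary units, invented for this sketch.
examples = [
    ((3.0, 1.0, 2.0), 200.0),
    ((4.0, 2.0, 3.0), 280.0),
    ((2.0, 0.0, 1.0), 120.0),
    ((5.0, 3.0, 4.0), 360.0),
]

def predict(weights, bias, features):
    """The model: a weighted sum of the features plus a bias.
    The weights are the 'missing information' to be learned."""
    return sum(w * x for w, x in zip(weights, features)) + bias

def fit(examples, lr=0.01, steps=20000):
    """Adjust the model's parameters to the data with gradient descent
    on the mean squared error."""
    weights, bias = [0.0, 0.0, 0.0], 0.0
    n = len(examples)
    for _ in range(steps):
        grad_w, grad_b = [0.0, 0.0, 0.0], 0.0
        for features, price in examples:
            err = predict(weights, bias, features) - price
            for i, x in enumerate(features):
                grad_w[i] += 2 * err * x / n
            grad_b += 2 * err / n
        weights = [w - lr * g for w, g in zip(weights, grad_w)]
        bias -= lr * grad_b
    return weights, bias

weights, bias = fit(examples)
# The trained model can now price a house it has never seen.
new_price = predict(weights, bias, (3.5, 1.5, 2.0))
```

Nobody wrote a pricing formula here: the `fit` function extracted one from the four examples, which is exactly the "training" step described above.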

**In general, machine learning is incredibly useful for difficult tasks when we have incomplete information or information that’s too complex to be coded by hand.** In these cases, we can give the information we have available to our model and let it “learn” the missing information that it needs by itself. The algorithm will then use statistical techniques to extract the missing knowledge directly from the data.

The two main categories of machine learning techniques are **supervised learning** and **unsupervised learning**.

**In supervised learning, we want to get a model to predict the label of data based on their features**. In order to learn the mapping between features and labels, the model has to be fitted on given examples of features with their related labels. We say that “the model is trained on a labeled dataset.”

Predicted labels can be numbers or categories. For example, we could be building a model that predicts the price of a house, implying we would want to predict a label that’s a number. In this case, we would talk about a **regression model**. Otherwise, we might also want to define a model that predicts a category, like “cat” or “not cat”, based on given features. In this situation, we would talk about a **classification model**.

**In unsupervised learning, we want to define a model that reveals structures in some data that are described only by their features but with no labels.** For example, unsupervised learning algorithms can help answer questions like “are there groups among my data?” or “is there any way to simplify the description of my data?”.

The model can look for different kinds of underlying structures in the data. If it tries to find groups among the data, we would talk about a **clustering model**. An example of a clustering model would be a model that segments customers of a company based on their profiles. Otherwise, if we have a model that transforms data and represents them with a smaller number of features, we would talk about a **dimension reduction model**. An example of this would be a model that summarizes the multiple technical characteristics of some cars into a few main indicators.
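As an illustration of the clustering idea, here is a toy k-means sketch in plain Python that segments made-up customer spending figures into two groups. The numbers, the single feature, and the choice of two clusters are all assumptions made for this example:

```python
# Invented yearly spend per customer (one feature per customer).
spend = [120.0, 150.0, 130.0, 900.0, 950.0, 880.0]

def kmeans_1d(data, centers, rounds=10):
    """Tiny one-dimensional k-means: alternate between assigning each
    point to its nearest centre and moving each centre to the mean
    of the points assigned to it."""
    for _ in range(rounds):
        groups = [[] for _ in centers]
        for x in data:
            nearest = min(range(len(centers)),
                          key=lambda i: abs(x - centers[i]))
            groups[nearest].append(x)
        # Keep a centre unchanged if it ends up with no points.
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers, groups

centers, groups = kmeans_1d(spend, centers=[0.0, 1000.0])
```

No labels were given: the algorithm discovered the "low spenders" and "high spenders" segments from the features alone, which is the defining trait of unsupervised learning.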

In summary, supervised learning models associate a label with each data point described by its features whereas unsupervised learning models find structures among all the data points.

In a sense, supervised learning is similar to learning the names of fruits from a picture book: you associate the characteristics of the fruit — the features — with the names written on the page — the label. Classical examples of supervised learning algorithms are linear regression, logistic regression, support vector machines, neural networks, and so on.

Unsupervised learning, on the other hand, is like taking the same fruit picture book, analyzing all of the fruits to detect patterns, and then deciding to group fruits by color and size. Classical examples of unsupervised learning algorithms are k-means clustering, hierarchical clustering, principal component analysis, autoencoders, and so on.

Let’s conclude this post by mentioning that machine learning is not that new, and that many of the algorithms driving today’s applications have been around for years. Nevertheless, some major advances have taken place over time: we have built larger datasets than ever before, we have increased our computation power, and we have designed new cutting-edge models. These advances have already made it possible to match, and in some cases even exceed, human abilities across many tasks, and there is no doubt that we are only scratching the surface of what’s possible!

We really hope you enjoyed this post. Do not hesitate to leave feedback and tell us in the comments section if there are any topics you would be interested in for upcoming videos.

#MachineLearning #AI #DataScience #Python
