Hi, today we are going to study the evaluation metrics for regression problems. Evaluation metrics are very important, as they tell us how accurate our model is.

Before we proceed to the evaluation techniques, it is important to gain some intuition.

[Figure: data points scattered above and below a fitted regression line]

In the above image, we can see that we have fitted a regression line, but the fit is not perfect: some points lie above the line & some lie below it.

So, how accurate is our model?

Evaluation metrics aim to answer this question. Now, without wasting time, let’s jump straight to the evaluation techniques.

There are six evaluation techniques (a combined code sketch follows the list):

1. M.A.E (Mean Absolute Error)

2. M.S.E (Mean Squared Error)

3. R.M.S.E (Root Mean Squared Error)

4. R.M.S.L.E (Root Mean Squared Log Error)

5. R-Squared

6. Adjusted R-Squared
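For quick reference, below is a minimal sketch of how all six metrics can be computed, assuming NumPy & Scikit-Learn are installed; the arrays ‘y_true’ and ‘y_pred’ and the feature count ‘p’ are made-up placeholder values, and Adjusted R-Squared is computed by hand since Scikit-Learn has no built-in function for it.

import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, mean_squared_log_error, r2_score

y_true = np.array([3.0, 5.0, 2.0, 7.0])  # actual target values (made-up)
y_pred = np.array([2.5, 5.0, 4.0, 8.0])  # predicted target values (made-up)
n, p = len(y_true), 1                    # number of samples & number of features

mae = mean_absolute_error(y_true, y_pred)
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)                                      # root of MSE
rmsle = np.sqrt(mean_squared_log_error(y_true, y_pred))  # root of the mean squared log error
r2 = r2_score(y_true, y_pred)
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)            # Adjusted R-Squared

print(mae, mse, rmse, rmsle, r2, adj_r2)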

Now, let’s discuss these techniques one by one.

M.A.E (Mean Absolute Error)

It is the simplest & most widely used evaluation technique. It is simply the mean of the absolute differences between the actual & predicted values.

Below is the mathematical formula of the Mean Absolute Error:

MAE = (1/n) * Σ |y_i − ŷ_i|

where n is the number of samples, y_i is the actual value & ŷ_i is the predicted value of the i-th sample.
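For example, if the actual values are 3, 5 & 2 and the model predicts 2.5, 5 & 4 (made-up numbers, just for illustration), the absolute errors are 0.5, 0 & 2, so MAE = (0.5 + 0 + 2) / 3 ≈ 0.83.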

Scikit-Learn is a great library, as it has almost all the inbuilt functions that we need on our Data Science journey.

Below is the code to compute the Mean Absolute Error:

from sklearn.metrics import mean_absolute_error

mean_absolute_error(y_true, y_pred)

Here, ‘y_true’ holds the true target values & ‘y_pred’ holds the predicted target values.
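Putting it together, here is a minimal runnable sketch that reuses the made-up numbers from the worked example above:

from sklearn.metrics import mean_absolute_error

y_true = [3, 5, 2]    # actual target values (made-up)
y_pred = [2.5, 5, 4]  # predicted target values (made-up)

print(mean_absolute_error(y_true, y_pred))  # prints 0.8333...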

