In this blog, I will focus on performance measures used to evaluate a classification model. Specifically, I will explain the evaluation metrics precision and recall through real-life examples and discuss the trade-offs involved.

Let's understand them with an example:

Suppose you are a manager at a real estate company and you want to use a classifier to tell you whether you should pick up a property to sell or not.

Now, I want to use a classifier because when I pick up a property to sell, I will assign an agent, market the property, and carry out various other activities to get it sold. In other words, I will incur costs for all of these activities. If the property does not get sold, then all that cost will be sunk.

So, from my classifier, I want that whenever it predicts a house will be sold, it should be right most of the time. That will minimize the risk of losing my money and other resources. In other words, out of all the positive predictions (houses predicted to be sold), I want most of them to be correct, i.e. TP / (TP + FP). And that is precision. We can define precision as the percentage of our positive predictions that are actually relevant.
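To make this concrete, here is a minimal sketch in Python that computes precision both directly from the definition TP / (TP + FP) and with scikit-learn's precision_score. The labels below are hypothetical, just for illustration (1 = house sold, 0 = house not sold).

```python
from sklearn.metrics import precision_score

# Hypothetical labels for 10 properties: 1 = house sold, 0 = house not sold
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # what actually happened
y_pred = [1, 1, 1, 0, 0, 1, 1, 0, 1, 0]   # what the classifier predicted

# Precision from the definition: TP / (TP + FP)
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
print("Precision (manual): ", tp / (tp + fp))

# Same value via scikit-learn
print("Precision (sklearn):", precision_score(y_true, y_pred))
```

Here the classifier made 6 positive predictions, of which 4 were correct, so precision is 4 / 6 ≈ 0.67. In the real estate example, that means roughly a third of the properties I invest in would go unsold.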

#recall #evaluation #machine-learning #precision #precision-recall-curve
