Metrics Matter

Evaluation Metrics for Classification Models. I describe each evaluation metric and provide a binary classification example to facilitate comprehension.

Understand Precision vs Recall through example

In this blog, I will focus on the performance measures used to evaluate classification models. Specifically, I will demonstrate the meaning of the model evaluation metrics precision and recall through real-life examples, and explain the trade-offs involved.
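Before diving in, here is a minimal sketch (not code from the article) of how precision and recall are computed from binary labels; the function name and sample data are illustrative:

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    # Precision: of everything predicted positive, how much was right?
    precision = tp / (tp + fp) if tp + fp else 0.0
    # Recall: of everything actually positive, how much did we find?
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 1, 1, 0, 0, 1]
p, r = precision_recall(y_true, y_pred)  # tp=3, fp=1, fn=1 -> 0.75, 0.75
```

The trade-off mentioned above shows up immediately: predicting positive more often tends to raise recall at the cost of precision, and vice versa.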

Evaluation Metrics for Classification Models Series — Part 1:

The Evaluation Metrics for Classification Models series consists of multiple linked articles geared toward teaching you best practices in evaluating classification model performance.

IoU a better detection evaluation metric

Choosing the right object detection model means looking at more than just mAP, and picking the best architecture and pretrained weights for your task can be hard. If you’ve ever worked on an object detection problem, then you’ve undoubtedly come across plots and tables similar to those below while comparing different models.

What is Mean Average Precision (mAP) in Object Detection?

In this article, we take apart the mean average precision metric with explanations and graphics. We have also posted this breakdown of the mean average precision metric on our blog.
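As a hedged sketch of the idea (not the article's code): average precision (AP) for one class can be computed as the area under the precision-recall curve built from confidence-ranked detections, here with a simple rectangle rule. Benchmarks such as Pascal VOC and COCO use interpolated variants, so treat this as illustrative; the function name and data are assumptions:

```python
def average_precision(detections, num_gt):
    """detections: list of (confidence, is_true_positive); num_gt: ground-truth count."""
    ranked = sorted(detections, key=lambda d: -d[0])  # highest confidence first
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for _, is_tp in ranked:
        if is_tp:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / num_gt
        # Accumulate area under the precision-recall curve (rectangle rule)
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap

# Two ground-truth objects; three detections, the second a false positive
ap = average_precision([(0.9, True), (0.8, False), (0.7, True)], num_gt=2)
```

Mean average precision (mAP) is then the mean of these per-class AP values, which is why it is not simply "the average of precisions".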

Evaluation Basics Part I: No More Confusion for Confusion Matrix

Evaluation is an essential part of machine learning. The evaluation result tells us how well a particular machine learning algorithm performs.
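A confusion matrix simply tabulates actual versus predicted classes. A minimal sketch for the binary case (the function name and example data are illustrative, not from the article):

```python
def confusion_matrix(y_true, y_pred):
    """Return a 2x2 matrix: rows = actual class, columns = predicted class (0, then 1)."""
    m = [[0, 0], [0, 0]]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

cm = confusion_matrix([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
# cm[0][0] = true negatives, cm[0][1] = false positives,
# cm[1][0] = false negatives, cm[1][1] = true positives
```

Every metric in this series — precision, recall, and their relatives — can be read straight off these four cells.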

I performed Error Analysis on Open Images and now I have trust issues

I reassessed Open Images with a SOTA object detection model, only to discover that over a third of all false positives were annotation errors!

How to carry out k-fold cross-validation on an imbalanced classification problem

An imbalanced classification problem has its OWN rules. Know them, or you'll violate them. In this article, we state the appropriate criteria for applying k-fold cross-validation to imbalanced data.
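The key idea for imbalanced data is stratification: each fold should preserve the class ratio of the full dataset, so no fold ends up with zero minority samples. A minimal sketch under that assumption (in practice you would likely reach for a library implementation such as scikit-learn's `StratifiedKFold`; this function name is hypothetical):

```python
from collections import defaultdict

def stratified_kfold_indices(labels, k):
    """Split sample indices into k folds that preserve the class ratio."""
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    # Deal each class's indices round-robin across the folds
    for idxs in by_class.values():
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    return folds

# 2 positives among 10 samples: each of the 2 folds gets exactly 1 positive
labels = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
folds = stratified_kfold_indices(labels, k=2)
```

With a plain (unstratified) split on data this skewed, a fold could easily contain no positives at all, making metrics like recall undefined on that fold.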

Why Is This Chart Bad?

Don’t you ever look at graphics that go viral from time to time and try to analyze them? Well, I do. And most of them are crap. But why are they crap? What makes a bad visualization bad?

mAP (mean Average Precision) might confuse you!

One can be forgiven for taking mAP (mean average precision) to literally mean the average of precisions. Nevertheless, that couldn’t be further from the truth!

Intersection over Union — Object Detection Evaluation Technique

This article describes the concept of IoU in object detection problems and walks you through its application.
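For axis-aligned boxes, IoU (intersection over union) reduces to a few lines. A minimal sketch, assuming boxes given as `(x1, y1, x2, y2)` corner coordinates (the function name and format are illustrative, not from the article):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

score = iou((0, 0, 2, 2), (1, 1, 3, 3))  # overlap 1, union 7 -> 1/7
```

Detection benchmarks typically count a prediction as a true positive only when its IoU with a ground-truth box exceeds a threshold such as 0.5, which is how IoU feeds into the mAP computation discussed above.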