Motivation

GBM models are battle-tested and powerful, but they have been tainted by a lack of explainability. Data scientists typically look at variable importance plots, but these are not enough to explain how a model works. To maximize adoption by the model user, you can use SHAP values to answer common explainability questions and build trust in your models.

In this post, we will train a GBM model on a simple dataset and show how to explain how the model works. The goal is not to explain the math, but to explain to a non-technical user how the input variables relate to the output variable and how predictions are made.

The dataset we are using is the Advertising dataset provided with ISLR, and you can get the code used here on the d6t GitHub.
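To make the setup concrete, here is a minimal sketch of the workflow the post walks through: fit a GBM on the Advertising data and compute SHAP values for it. The dataset URL, the column names, and the choice of lightgbm as the GBM library are assumptions for illustration; the post's actual code is on the d6t GitHub.

```python
import pandas as pd
import lightgbm as lgb
import shap

# ISLR Advertising data: TV/radio/newspaper ad spend vs. product sales.
# The URL and column names are assumptions; any local copy of
# Advertising.csv works the same way.
df = pd.read_csv('https://www.statlearning.com/s/Advertising.csv', index_col=0)
X, y = df[['TV', 'radio', 'newspaper']], df['sales']

# Train a gradient boosted model on the full data (illustration only,
# no train/test split).
model = lgb.LGBMRegressor(n_estimators=100)
model.fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: which inputs drive predictions, and in which direction.
shap.summary_plot(shap_values, X)
```

With this in place, the rest of the post is about reading the SHAP output in plain language rather than about the model itself.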

