Employees’ Attrition — How Catboost and Shap can help you understand it!

Discover how scikit-learn and CatBoost models can help you deal with an imbalanced dataset, and why SHAP is a great tool for explaining AI predictions.
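As a rough sketch of the approach described here (up-weighting the rare attrition class rather than resampling, then explaining the model with SHAP), assuming synthetic placeholder data and arbitrary class weights:

```python
import shap
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an attrition dataset: roughly 10% positive ("leaver") class.
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=42
)

# Up-weight the rare class instead of resampling the data.
model = CatBoostClassifier(iterations=300, class_weights=[1.0, 9.0], verbose=0)
model.fit(X_train, y_train, eval_set=(X_test, y_test))

# TreeExplainer returns one additive contribution per feature and prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print(shap_values.shape)  # (n_test_samples, n_features)
```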

Real-time Model Interpretability API using SHAP, Streamlit and Docker

A self-service API to explain model scores in real time.
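A minimal sketch of such a service using Streamlit alone, with a placeholder model and dataset standing in for real artifacts:

```python
# explain_app.py -- run with: streamlit run explain_app.py
import pandas as pd
import shap
import streamlit as st
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

@st.cache_resource
def load_model_and_data():
    # Placeholder model and data; in practice you would load your own artifacts.
    data = load_breast_cancer(as_frame=True)
    X, y = data.data, data.target
    model = GradientBoostingClassifier(random_state=0).fit(X, y)
    return model, X

model, X = load_model_and_data()
explainer = shap.TreeExplainer(model)

st.title("Real-time model score explainer")
row_id = int(st.number_input("Row to explain", min_value=0, max_value=len(X) - 1, value=0))

sample = X.iloc[[row_id]]
st.write("Predicted probability:", float(model.predict_proba(sample)[0, 1]))

# Per-feature SHAP contributions to this single score, largest first.
shap_values = explainer.shap_values(sample)[0]
contributions = pd.Series(shap_values, index=X.columns).sort_values(key=abs, ascending=False)
st.bar_chart(contributions.head(10))
```

Docker then only has to containerize it: a small Dockerfile that installs the requirements and runs `streamlit run explain_app.py`.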

Explainable and Reproducible Machine Learning Model Development

With ML models serving real people, misclassified cases (a natural consequence of using ML) affect people’s lives and sometimes treat them very unfairly. That makes the ability to explain your models’ predictions a requirement rather than just a nice-to-have. Machine learning model development is hard, especially in the real world.

Ultimate Guide to Model Explainability: Anchors

There is now a laundry list of Machine Learning and Deep Learning algorithms to solve each AI problem.
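The teaser does not show the technique itself; as a hedged sketch, anchor explanations are available in the open-source alibi library (argument names may vary between versions):

```python
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Anchors only need black-box access to the model's predict function.
data = load_iris()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = AnchorTabular(model.predict, feature_names=data.feature_names)
explainer.fit(X, disc_perc=(25, 50, 75))  # discretise numeric features into bins

# An anchor is an IF-THEN rule that locally pins down the prediction.
explanation = explainer.explain(X[0], threshold=0.95)
print("Anchor:   ", " AND ".join(explanation.anchor))
print("Precision:", explanation.precision)
print("Coverage: ", explanation.coverage)
```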

The 4 types of additive Feature Importances

You have probably heard of Feature Importance methods: there are many of them around and they can be very useful for variable selection and model explanation.

Is this the Best Feature Selection Algorithm “BorutaShap”?

A new Python package that combines the “Boruta” algorithm with Shapley values. “BorutaShap” is claimed to provide a more accurate subset of features than standard importance measures such as gain.
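The package ships a BorutaShap class that wraps the whole workflow; the sketch below instead spells out the core idea by hand on synthetic data (shadow features scored by mean |SHAP| importance), as a simplified illustration rather than the package's actual implementation:

```python
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=8, n_informative=3, random_state=0)
X = pd.DataFrame(X, columns=[f"f{i}" for i in range(X.shape[1])])

# Boruta's trick: add a shuffled "shadow" copy of every feature.
shadows = pd.DataFrame({f"shadow_{col}": rng.permutation(X[col].to_numpy()) for col in X.columns})
X_all = pd.concat([X, shadows], axis=1)

model = GradientBoostingClassifier(random_state=0).fit(X_all, y)

# Score every column by mean |SHAP|, the "Shap" half of BorutaShap.
shap_values = shap.TreeExplainer(model).shap_values(X_all)
importance = pd.Series(np.abs(shap_values).mean(axis=0), index=X_all.columns)

# Keep real features whose importance beats the strongest shadow feature.
threshold = importance.filter(like="shadow_").max()
real = importance[X.columns]
selected = real[real > threshold]
print(selected.sort_values(ascending=False))
```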

Explaining “Blackbox” Machine Learning Models

GBM models are battle-tested, powerful models, but they have been tainted by a lack of explainability. Typically, data scientists look at variable importance plots, but those are not enough to explain how a model works. To maximize adoption by the model user, use SHAP values to answer common explainability questions and build trust in your models, as in the sketch below.
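A minimal sketch of that workflow, using a stand-in GBM and a public dataset: a bar summary for which features matter overall, and a beeswarm for how they push individual scores up or down:

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Stand-in GBM and dataset; the same pattern applies to XGBoost, LightGBM or CatBoost.
data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# "Which features matter overall?" (mean |SHAP| value per feature)
shap.summary_plot(shap_values, X, plot_type="bar")

# "How does each feature push individual scores up or down?" (beeswarm plot)
shap.summary_plot(shap_values, X)
```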