Opening the “black box” of machine learning models is critical not only for understanding the models we create, but also for communicating to others what those models reveal. I have seen several projects fail because they could not be explained well to others, so understanding the models we build is necessary if machine learning projects are to be implemented successfully.
Recently I was working on an ML project that required multi-output regression (predicting more than one output/label/target) and had a hard time finding solid examples or resources on implementing explainability for it. Working through the challenges of explaining a multi-output regression model involved a lot of trial and error. Ultimately, I was able to break down my multi-output regression model, and I picked up a few lessons learned along the way that are worth sharing.
The full code walkthrough can be found on GitHub at SHAP Values for Multi-Output Regression Models and can be run in the browser through Google Colab.

#machine-learning #explainable-ai #tensorflow #shap #multiple-regression

Explainable AI for Multiple Regression