Interpreting a machine learning model is a crucial, yet often ignored, part of the development cycle. Nowadays we have a number of tools that serve this purpose, but when should each be used? In this talk, I present several questions that should be asked during ad-hoc model interpretation, and I show how to answer them using ad-hoc interpretation tools: SHAP, LIME, partial dependence plots (PDPs), additive models, and anchors.
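To give a flavor of one of the tools above, here is a minimal sketch of how a partial dependence curve can be computed by hand: sweep one feature over a grid, hold everything else fixed, and average the model's predictions. The dataset, model, and function name below are illustrative, not from the talk.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Illustrative data and model (any fitted estimator with .predict works).
X, y = make_regression(n_samples=200, n_features=4, n_informative=4, random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

def partial_dependence(model, X, feature, grid_size=20):
    """Average prediction as `feature` is swept over a grid of values."""
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_size)
    averages = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value  # force every row to this grid value
        averages.append(model.predict(X_mod).mean())
    return grid, np.array(averages)

grid, pdp = partial_dependence(model, X, feature=0)
```

Plotting `pdp` against `grid` gives the PDP for feature 0; in practice you would use a library implementation such as `sklearn.inspection.partial_dependence`, which also handles categorical features and interactions.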

Denis Vorotyntsev is a Senior Data Scientist at Oura. He builds models to improve well-being and health tracking. In his free time, he writes about machine learning and data science on his blog.

Explaining ML models in 2020