• Artificial Neural Networks offer significant performance benefits compared to other methodologies, but often at the expense of interpretability
  • Problems and controversies arising from the use and reliance on black box algorithms have given rise to increasing calls for more transparent prediction technologies
  • Hybrid architectures attempt to resolve the tension between performance and explainability
  • Current approaches to enhancing the interpretability of AI models focus on either building inherently explainable prediction engines or conducting post-hoc analysis
  • The research and development seeking to provide more transparency in this regard is referred to as Explainable AI (XAI)
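As a concrete illustration of the post-hoc flavor of XAI, the sketch below uses permutation importance, one common model-agnostic technique: it measures how much a trained model's accuracy drops when each input feature is shuffled. The dataset and model here are illustrative choices, not anything specific to the systems discussed in this article.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Train an opaque ("black box") model on a toy dataset.
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Post-hoc analysis: shuffle each feature several times and record
# the mean drop in accuracy, without inspecting the model's internals.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(load_iris().feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

The appeal of this kind of technique is that it treats the predictor purely as a function, so it applies equally to neural networks, ensembles, or any other black box.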

Modern machine learning architectures are growing increasingly sophisticated in pursuit of superior performance, often leveraging black box-style architectures which offer computational advantages at the expense of model interpretability.

A number of companies have already been caught on the wrong side of this “performance–explainability trade-off”.

#explainable ai #support vector machine #machine learning #ai

The Case for Explainable AI (XAI)