Trevor Russel

Explainable AI for Multiple Regression

Opening the “black box” of machine learning models is critical not only for understanding the models we create, but also for communicating to others the information those models bring to light. I have seen several projects fail because they could not be explained well to others; understanding the models we build is necessary to increase the odds that a machine learning project is successfully implemented.
Recently I was working on an ML project that required multi-output regression (predicting more than one output/label/target) and had a hard time finding solid examples or resources on implementing explainability. Working through the challenges of explaining multi-output regression models involved a lot of trial and error. Ultimately, I was able to break down my multi-output regression model, and I gained a few “lessons learned” along the way that are worth sharing.
The full code walkthrough can be found on GitHub at SHAP Values for Multi-Output Regression Models and can be run in the browser through Google Colab.
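To make the idea concrete, here is a minimal sketch of the approach, assuming SHAP's model-agnostic KernelExplainer and a tiny two-output Keras model; the toy data, model architecture, and sizes are hypothetical stand-ins for the real project (see the linked notebook for the full walkthrough).

```python
import numpy as np
import shap
import tensorflow as tf

# Toy data: 5 features, 2 regression targets (hypothetical stand-ins).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.column_stack([X[:, 0] + 2 * X[:, 1], X[:, 2] - X[:, 3]])

model = tf.keras.Sequential([
    tf.keras.Input(shape=(5,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2),  # two units -> multi-output regression
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, verbose=0)

# KernelExplainer treats the model as a black box; a small background
# sample keeps the Shapley value estimation tractable.
background = X[:50]
explainer = shap.KernelExplainer(lambda x: model.predict(x, verbose=0),
                                 background)

# For a multi-output model you get one attribution matrix per output
# (a list of (n_samples, n_features) arrays in classic SHAP releases;
# newer releases may stack them into a single 3-D array).
shap_values = explainer.shap_values(X[:10])
print(np.shape(shap_values))
```

The key point is that each output gets its own set of SHAP values, so each target can be explained, and plotted, independently.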

#machine-learning #explainable-ai #tensorflow #shap #multiple-regression


Otho Hagenes

Making Sales More Efficient: Lead Qualification Using AI

Ask almost any organization today and you will hear that they are relying on artificial intelligence solutions, using AI to digitally transform and bring their business into the new age. AI is no longer a new concept; with the technological advancements being made in the field, it has become a much-needed business facet.

AI has become easier to use and implement than ever before, and businesses across industries are applying AI solutions to their processes. Organizations have begun to base their digital transformation strategies around AI and the way they conduct business. One business process that AI has helped transform is lead qualification.

#ai-solutions-development #artificial-intelligence #future-of-artificial-intelligence #ai #ai-applications #ai-trends #future-of-ai #ai-revolution

Explaining the Explainable AI: A 2-Stage Approach

As artificial intelligence (AI) models, especially those using deep learning, have gained prominence over the last eight or so years [8], they are now significantly impacting society, from loan decisions to self-driving cars. Inherently though, a majority of these models are opaque, and hence following their recommendations blindly in human-critical applications can raise issues such as fairness, safety, and reliability, among many others. This has led to the emergence of a subfield of AI called explainable AI (XAI) [7]. XAI is primarily concerned with understanding or interpreting the decisions made by these opaque or black-box models so that one can place appropriate trust in them and, in some cases, even obtain better performance through human-machine collaboration [5].

While there are multiple views on what XAI is [12] and how explainability can be formalized [4, 6], it is still unclear what XAI truly is and why it is so hard to formalize mathematically. The reason for this lack of clarity is that not only must the model and/or data be considered, but also the final consumer of the explanation. Given this intermingled view, most XAI methods [11, 9, 3] try to meet all of these requirements at the same time. For example, many methods try to identify a sparse set of features that replicate the decision of the model, where the sparsity is a proxy for the consumer's mental model. An important question is whether we can disentangle the steps that XAI methods are trying to accomplish. This may help us better understand the truly challenging parts, as well as the simpler parts, of XAI, and it may also motivate different types of methods.
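As a concrete, and entirely hypothetical, illustration of the sparse-surrogate idea, the sketch below perturbs one instance, labels the perturbations with a stand-in black-box model, and fits an L1-regularized linear surrogate so that only a few features receive nonzero weight; the models and parameters are illustrative assumptions, not taken from the article.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = 3 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=500)

# Stand-in opaque model; any black box with a predict() would do.
black_box = GradientBoostingRegressor().fit(X, y)

x0 = X[0]  # the instance to explain
# Generate data from the model rather than the true distribution:
# perturb around x0 and label the perturbations with the black box.
Z = x0 + rng.normal(scale=0.5, size=(1000, 10))
fz = black_box.predict(Z)

# L1 regularization enforces sparsity, a proxy for the consumer's
# mental model: only a handful of features appear in the explanation.
surrogate = Lasso(alpha=0.05).fit(Z, fz)
important = np.flatnonzero(surrogate.coef_)
print(important, surrogate.coef_[important])
```

With this setup, the nonzero coefficients play the role of the sparse feature set that local explanation methods report.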

Two Stages of XAI

We conjecture that the XAI process can be broadly disentangled into two parts, as depicted in Figure 1. The first part is uncovering what is truly happening in the model we want to understand, while the second part is about conveying that information to the user in a consumable way. The first part is relatively easy to formalize: it mainly deals with analyzing how well a simple proxy model might generalize, either locally or globally, with respect to (w.r.t.) data that is generated using the black-box model. Rather than having generalization guarantees w.r.t. the underlying distribution, we now want them w.r.t. the (conditional) output distribution of the model. Once we have some way of figuring out what is truly important, the second step is to communicate that information. This second part is much less clear, as we do not have an objective way of characterizing an individual's mind. This part, we believe, is what makes explainability as a whole so challenging to formalize, and it is why conducting user studies to evaluate new XAI methods has been a mainstay of XAI research over the last year or so.
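To make the first stage concrete, here is a minimal sketch of measuring a proxy's fidelity w.r.t. the black-box model's outputs rather than the true labels; the model choices here are illustrative assumptions, not prescribed by the article.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 8))
y = X[:, 0] ** 2 + X[:, 1] - X[:, 2] + rng.normal(scale=0.2, size=2000)

black_box = RandomForestRegressor(n_estimators=100).fit(X, y)

# Relabel the inputs with the model itself: the proxy's target is the
# model's output distribution, not the underlying data distribution.
f = black_box.predict(X)
X_tr, X_te, f_tr, f_te = train_test_split(X, f, random_state=0)

# A shallow tree is the "simple proxy"; its held-out R^2 against the
# black box's predictions is a global fidelity score.
proxy = DecisionTreeRegressor(max_depth=3).fit(X_tr, f_tr)
print("fidelity R^2:", r2_score(f_te, proxy.predict(X_te)))
```

Fidelity here concerns only the first stage; whether a depth-3 tree is actually consumable by a given user is the second-stage question.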

#overviews #ai #explainability #explainable ai #xai

Murray Beatty

This Week in AI | Rubik's Code

Every week we bring to you the best AI research papers, articles and videos that we have found interesting, cool or simply weird that week.

#ai #this week in ai #ai application #ai news #artificial intelligence #artificial neural networks #deep learning #machine learning

This Week in AI - Issue #22 | Rubik's Code

Every week we bring to you the best AI research papers, articles and videos that we have found interesting, cool or simply weird that week. Have fun!

Research Papers

Articles

#ai #this week in ai #ai application #ai news #artificial intelligence #artificial neural networks #deep learning #machine learning