How to Visually Explain Any CNN-Based Model?

Understand and implement Guided Grad-CAM to produce class-discriminative visual explanations for any CNN-based model.
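
As a taste of what the article covers, here is a minimal sketch of plain Grad-CAM in PyTorch; Guided Grad-CAM additionally multiplies this heatmap by guided-backpropagation gradients. The resnet18 model and the random input are stand-in assumptions, not the article's setup.

```python
# Minimal Grad-CAM sketch in PyTorch. Assumptions: torchvision's resnet18
# and a random tensor standing in for a preprocessed image.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
saved = {}

# Hooks capture the activations and gradients of the last conv block.
model.layer4[-1].register_forward_hook(
    lambda m, i, o: saved.update(act=o))
model.layer4[-1].register_full_backward_hook(
    lambda m, gi, go: saved.update(grad=go[0]))

x = torch.randn(1, 3, 224, 224)
model(x)[0].max().backward()          # backprop the top-class logit

weights = saved["grad"].mean(dim=(2, 3), keepdim=True)   # GAP over gradients
cam = F.relu((weights * saved["act"]).sum(dim=1))        # weighted sum + ReLU
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear")
```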

Five critical questions to explain Explainable AI

Getting started on your Responsible AI journey

How much of your Neural Network’s Prediction can be Attributed to each Input Feature?

Peeking inside Deep Neural Networks with Integrated Gradients, implemented in PyTorch.
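
A minimal sketch of the technique, assuming a `model` that maps a batch of feature vectors to scalar outputs; the article's own implementation may differ.

```python
# Minimal Integrated Gradients sketch in PyTorch. `model`, `x`, and
# `baseline` are assumed placeholders (x and baseline are 1-D tensors).
import torch

def integrated_gradients(model, x, baseline, steps=50):
    # Straight-line path from the baseline to the input.
    alphas = torch.linspace(0, 1, steps).unsqueeze(1)
    path = baseline + alphas * (x - baseline)       # (steps, n_features)
    path.requires_grad_(True)
    model(path).sum().backward()
    avg_grads = path.grad.mean(dim=0)               # Riemann-sum approximation
    return (x - baseline) * avg_grads               # attribution per feature
```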

What Makes Great Wine… Great?

In this blog post, I will: explain qualitatively which chemical properties make wine desirable, using the UCI Wine Quality Data Set; show how a partial dependence plot reveals those properties; build a machine learning model on the data; and plot and interpret the partial dependence plot in Python.
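
A minimal sketch of that last step with scikit-learn, assuming the UCI red-wine CSV is available locally; the article's model and feature choices may differ.

```python
# Minimal partial-dependence sketch with scikit-learn. Assumes the UCI
# winequality-red.csv file (semicolon-separated) sits in the working dir.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

df = pd.read_csv("winequality-red.csv", sep=";")
X, y = df.drop(columns="quality"), df["quality"]

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Marginal effect of two chemical properties on predicted quality.
PartialDependenceDisplay.from_estimator(model, X, ["alcohol", "volatile acidity"])
plt.show()
```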

Explainable Monitoring: Stop flying blind and monitor your AI

Data science teams find explainable monitoring essential for managing their AI.

The essence behind an award-winning photo — an AI approach

By visualizing the layers of CNN architectures, we gain insight into how machines process images.
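
A minimal sketch of the underlying mechanic, with torchvision's vgg16 and the node name "features.4" as assumptions for illustration:

```python
# Minimal sketch: pulling intermediate feature maps out of a pretrained CNN.
import torch
from torchvision.models import vgg16
from torchvision.models.feature_extraction import create_feature_extractor

model = vgg16(weights="IMAGENET1K_V1").eval()
extractor = create_feature_extractor(model, return_nodes={"features.4": "block1"})

x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed photo
with torch.no_grad():
    maps = extractor(x)["block1"]      # (1, 64, 112, 112) feature maps
print(maps.shape)                      # each channel can be rendered as an image
```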

How can I explain my ML models to the business?

3 frameworks to make your AI more explainable

Real-World Hacks for Explainable AI

Explainable AI may help with regulation, but its real value lies in the ROI it adds to your AI projects. Regulatory compliance should be one of the last reasons to make artificial intelligence more explainable; this post discusses some practical hacks.

Interpreting Black-Box ML Models using LIME

Understand LIME visually by modelling breast cancer data. It is almost trite at this point to espouse the potential of machine learning in the medical field.
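
A minimal sketch of the approach, using scikit-learn's bundled breast cancer dataset and a random forest as assumed stand-ins for the article's data and model:

```python
# Minimal LIME sketch on tabular data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(exp.as_list())   # top features pushing this one prediction
```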

How can we build explainable AI?

How should you think about explainability in your machine learning models? A step-by-step guide to understanding model behaviour, explaining predictions, and building trustworthy models

Why Don’t AI Coders Study AI Ethics?

We hear about the societal effects of AI, brought about by the willful ignorance of ‘techies’ or ‘tech bros’. So I asked myself: what keeps AI coders so distant from the ethics field?

Explaining Deep Learning Forecasts

In a previous post, we covered how important it is to deal with uncertainty in financial deep learning forecasts. In this post, we attempt a first introduction to how we handle explainability.

Explaining Your Machine Learning Models with SHAP and LIME!

Helping you demystify what some might perceive as a “black box” in your machine learning models.
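
A minimal SHAP sketch for a tree-based model, using scikit-learn's diabetes dataset as an assumed example (the article applies SHAP and LIME to its own models):

```python
# Minimal SHAP sketch. The regressor and dataset are stand-ins.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # one attribution per feature per row

# Beeswarm summary: which features push predictions up or down, and how far.
shap.summary_plot(shap_values, X)
```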

Ultimate Guide to Model Explainability: Anchors

There is now a laundry list of machine learning and deep learning algorithms for every AI problem.
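
For a flavour of the technique itself, here is a minimal sketch using the alibi library (an assumed choice; the article may use a different implementation):

```python
# Minimal anchors sketch with alibi. Dataset and classifier are stand-ins.
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = AnchorTabular(model.predict, feature_names=data.feature_names)
explainer.fit(data.data)

# An anchor is an if-then rule that locks in the model's prediction.
explanation = explainer.explain(data.data[0])
print(explanation.anchor, explanation.precision, explanation.coverage)
```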

Rise of Modern NLP and the Need for Interpretability!

Modern NLP is at the forefront of computational linguistics, which is concerned with the computational modelling of natural language.

Top 3 Enterprise AI/ML Principles: Going beyond model accuracy

For the last four to five years, we have been working hard to implement various AI/ML use cases at our enterprises.

Building and Deploying Explainable AI Dashboards using Dash and SHAP

In recent years, we have seen an explosion in the use of machine learning (ML) algorithms for automating and supporting human decisions.
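
A minimal sketch of the idea, with the dataset, model, and layout all assumed for illustration (the article's dashboard is richer):

```python
# Minimal Dash app serving a SHAP-based feature-importance chart.
import shap
import plotly.express as px
from dash import Dash, dcc, html
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)

# Mean |SHAP| per feature as a global importance bar chart.
importance = abs(shap_values).mean(axis=0)
fig = px.bar(x=list(X.columns), y=importance,
             labels={"x": "feature", "y": "mean |SHAP|"})

app = Dash(__name__)
app.layout = html.Div([html.H1("Model Explainability"), dcc.Graph(figure=fig)])

if __name__ == "__main__":
    app.run(debug=True)
```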

Explaining Machine Learning Predictions and Building Trust with LIME

A technique to explain how black-box machine learning classifiers make predictions

Explainable AI: The Next Level

Explainable AI (XAI) refers to methods and techniques in ML/AI that make the results of a solution understandable to humans.

On Social Characteristics of Artificial Intelligence

You have probably heard phrases like the following many times recently: