TL;DR

The R package **fairmodels** facilitates bias detection through model visualizations. It implements a few mitigation strategies that can reduce bias, and it enables easy-to-use checks of fairness metrics and comparisons between different Machine Learning (ML) models.
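To give a feel for the workflow, here is a minimal sketch of such a fairness check. It assumes the `german` credit data shipped with fairmodels, a plain logistic regression, and `Sex` as the protected attribute with `"male"` as the privileged level; these choices are illustrative, not prescriptive.

```r
# Minimal sketch: train a model, wrap it with DALEX, run a fairness check.
# Data set, column names, and privileged level are illustrative assumptions.
library(DALEX)
library(fairmodels)

data("german")                             # credit data shipped with fairmodels
y_numeric <- as.numeric(german$Risk) - 1   # encode target as 0/1

lm_model <- glm(Risk ~ ., data = german, family = binomial(link = "logit"))

explainer <- DALEX::explain(lm_model,
                            data  = german[, colnames(german) != "Risk"],
                            y     = y_numeric,
                            label = "logistic")

fobject <- fairness_check(explainer,
                          protected  = german$Sex,  # protected attribute
                          privileged = "male")      # privileged subgroup

print(fobject)  # text summary of fairness metrics
plot(fobject)   # visual fairness check; several explainers can be passed to compare models
```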

Long version

Bias mitigation is an important topic in the Machine Learning (ML) fairness field. For Python users, there are algorithms already implemented, well explained, and documented (see AIF360). **fairmodels** provides implementations of a few popular, effective bias mitigation techniques, ready to make your model fairer.

I have a biased model, now what?

Having a biased model is not the end of the world. There are many ways to deal with it, and **fairmodels** implements several algorithms to help you tackle the problem. First, I will describe the difference between pre-processing and post-processing algorithms.

  • Pre-processing algorithms work on the data before the model is trained. They try to mitigate the bias between the privileged subgroup and the unprivileged ones by transforming the training data.
  • Post-processing algorithms change the output of a model explained with DALEX so that its predictions do not favor the privileged subgroup as much (see the sketch after this list).
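To make the post-processing idea concrete, the sketch below applies `roc_pivot()` (a reject option based classification pivot available in fairmodels) to the explainer from the earlier snippet; the `cutoff` and `theta` values are illustrative assumptions, not recommendations.

```r
# Hedged sketch of the post-processing route, reusing `explainer` and `german`
# from the earlier snippet; parameter values are illustrative.
library(fairmodels)

explainer_fixed <- roc_pivot(explainer,
                             protected  = german$Sex,
                             privileged = "male",
                             cutoff     = 0.5,    # decision threshold
                             theta      = 0.05)   # width of the region where labels may be flipped

# The returned explainer has modified predictions; check fairness again.
fobject_fixed <- fairness_check(explainer_fixed,
                                protected  = german$Sex,
                                privileged = "male",
                                label      = "logistic_roc_pivot")
plot(fobject_fixed)
```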

How do these algorithms work?

In this section, I will briefly describe how these bias mitigation techniques work. Code for more detailed examples and some of the visualizations used here can be found in this vignette.

Pre-processing

Disparate impact remover (Feldman et al., 2015)

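The disparate impact remover transforms the distributions of chosen numeric features so that they look similar across the protected subgroups, making it harder for a model to infer the protected attribute from them. The vignette covers the method in detail; below is only a hedged usage sketch of the corresponding fairmodels function. The feature names follow the `german` data used earlier, and the `lambda` value is an arbitrary choice (`lambda = 1` means full repair, `0` leaves the data unchanged).

```r
# Hedged usage sketch; feature names follow the german credit data and
# lambda = 0.8 is an arbitrary choice (1 = full repair, 0 = no change).
library(fairmodels)
data("german")

german_fixed <- disparate_impact_remover(
  data                  = german,
  protected             = german$Sex,
  features_to_transform = c("Age", "Credit.amount"),  # numeric features to repair
  lambda                = 0.8
)

# A model retrained on `german_fixed` should depend less on
# subgroup-specific feature distributions.
```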
