A selection of the latest arXiv preprints concerning active (and occasionally semi- or weakly-supervised) deep learning


The past couple of weeks have seen plenty of action across the active and semi-supervised machine learning communities. What follows are some personal favourites rather than an exhaustive list, separated into two parts: **Active Learning** and **Semi-Supervised Learning**.

Active Learning


Explainable AI is a big thing nowadays. In ALEX: Active Learning based Enhancement of a Model’s Explainability, the authors use a novel kind of query strategy: prioritizing instances that are “difficult to explain”. (They use the SHAP framework to determine the latter.) Their goal is a classifier that is optimized for both predictive performance and explainability, and at least on MNIST, the authors manage to succeed on both counts.
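To make the idea a bit more concrete, here is a minimal sketch of what an “explanation difficulty” acquisition function could look like, assuming a scikit-learn-style classifier with `predict_proba` and the `shap` library’s model-agnostic `KernelExplainer`. The difficulty score used here (entropy of the absolute SHAP attributions) is my own stand-in heuristic for illustration, not the exact criterion from the ALEX paper.

```python
import numpy as np
import shap


def explanation_difficulty_query(model, X_labelled, X_pool, n_queries=10):
    """Return indices of the pool instances whose predictions are hardest to explain."""
    # KernelExplainer is model-agnostic; the labelled set serves as background data.
    explainer = shap.KernelExplainer(model.predict_proba, X_labelled)
    sv = explainer.shap_values(X_pool)
    # Older shap versions return a list of per-class arrays; newer ones a 3-D array
    # of shape (n_samples, n_features, n_classes). Normalize to the latter.
    if isinstance(sv, list):
        sv = np.stack(sv, axis=-1)
    # Keep the attributions for each instance's predicted class.
    preds = model.predict_proba(X_pool).argmax(axis=1)
    attributions = sv[np.arange(len(X_pool)), :, preds]
    # Heuristic difficulty score: attribution mass spread thinly over many
    # features (high entropy of |SHAP| values) is treated as "difficult to explain".
    abs_attr = np.abs(attributions) + 1e-12
    p = abs_attr / abs_attr.sum(axis=1, keepdims=True)
    difficulty = -(p * np.log(p)).sum(axis=1)
    # Query the n_queries hardest-to-explain instances.
    return np.argsort(difficulty)[-n_queries:]


# Usage (any classifier with predict_proba works):
# model = SomeClassifier().fit(X_labelled, y_labelled)
# query_idx = explanation_difficulty_query(model, X_labelled, X_pool)
```

In an actual active learning loop you would label the queried instances, add them to the training set, refit, and repeat; the paper additionally tracks explainability alongside accuracy as a selection objective.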

Now, this is not a criticism by any means, but unlike most of the machine learning preprints I see, this paper was written in full-on academic style. It reminded me of my days as a physicist, when everything that could be turned into a mathematical statement was. To give you a small example of what I mean:

“The error in this parametric approximation, θ̂ (of the true functional dependence, θ), is expected to be smaller with increasing number of pairs in the training set, M, i.e., θ̂ → θ as M → ∞.”

Or in other words: the more training samples, the better the model generalizes. Anyway, I just found it funny that when I first started reading ML papers instead of physics ones, what I perceived as their lack of rigour used to frustrate me, whereas now I am more likely to take note of the opposite. How people change!

#active-learning-news #ai #data-science #deep-learning #machine-learning
