Introduction

In this article, I will introduce and outline federated learning, a machine learning paradigm proposed by Google in 2016. But first, let me set the scene.

So, imagine you are working on yet another machine learning project, say, an autocomplete emoji predictor. Your data is engineered and ready, and you have chosen the models you will experiment with. Basically, you are good to go: all that remains is to write the training script and run it.

This is all well and good, but what if you have access to an array of high-quality CPUs/GPUs? You would certainly like to capitalize on that. So you modify your script to distribute the training workload across all of the available devices, aggregating the learned parameters on your central machine. This is known as distributed learning.
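The aggregation step can be sketched with a toy example (pure NumPy on a made-up linear model, no real distributed framework involved): each "worker" computes gradients on its own shard of the data, and the central machine averages those gradients before updating the shared parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model y = X @ w, trained with mean-squared error.
true_w = np.array([2.0, -1.0])
X = rng.normal(size=(400, 2))
y = X @ true_w

def worker_gradient(w, X_shard, y_shard):
    """Gradient of the MSE loss on one worker's shard of the data."""
    err = X_shard @ w - y_shard
    return 2 * X_shard.T @ err / len(y_shard)

w = np.zeros(2)
shards = np.array_split(np.arange(len(y)), 4)  # pretend we have 4 devices
for step in range(200):
    grads = [worker_gradient(w, X[idx], y[idx]) for idx in shards]
    # Central machine: average the workers' gradients, then update.
    w -= 0.1 * np.mean(grads, axis=0)

print(np.round(w, 3))  # converges toward [2. -1.]
```

In a real framework this averaging happens via collective communication (an all-reduce) rather than a Python loop, but the arithmetic is the same.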

But what if you don’t have enough data? What if collecting a lot of training samples raises serious privacy concerns (emojis are used a lot in private chats, after all)? Or, worse yet, what if you’d rather reserve your existing computing power for a higher-priority project? Well, in 2016, Google came up with an ingenious solution, and it has gained a lot of momentum since then: let the edge devices do the training for you!
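That idea can be sketched as a toy federated-averaging loop (a drastic simplification of the scheme Google proposed; every name and number below is illustrative): each device trains on data that never leaves it, and the server only averages the returned parameters, weighted by how much data each device holds.

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([1.5, -0.5])

# Each "device" holds its own private dataset of a different size.
clients = []
for n in (50, 120, 80):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

def local_train(w, X, y, epochs=5, lr=0.1):
    """Run a few gradient steps on one device's private data."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

global_w = np.zeros(2)
for rnd in range(30):  # communication rounds
    # Devices train locally; only parameters travel, never raw data.
    updates = [local_train(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients])
    # Server: weighted average of the client models, by dataset size.
    global_w = np.average(updates, axis=0, weights=sizes)

print(np.round(global_w, 3))  # converges toward [1.5 -0.5]
```

The crucial difference from the distributed setup above is *where the data lives*: the server never sees a single training sample, only model updates.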


Federated Learning: Motivation and Challenges