Through the first 2 parts of our federated learning demo project, we created a client-server application in Python using socket programming. In part 1, the client-server application was able to handle only a single connection at a time. In other words, the server was only able to receive a single message from the client per connection.

Federated Learning Demo in Python (Part 1): Client-Server Application

Implementing a client-server application using socket programming

heartbeat.fritz.ai

In part 2, we extended the application to allow multiple messages to be sent within the same connection, and the server was updated to handle multiple connections simultaneously:

Federated Learning Demo in Python (Part 2): Multiple Connections using Threading

Enabling our server to receive multiple incoming connections

heartbeat.fritz.ai

The code for this project is available at the Federated Learning GitHub project under the TutorialProject directory.

In this tutorial, we’ll create a machine learning model (a neural network) at the server using a library named PyGAD. The model won’t actually be trained at the server, though. Instead, it will be sent to the connected clients, where it will be trained using each client’s local data. PyGAD uses a genetic algorithm for training these models.

Each client sends its version of the model back to the server. The server aggregates the parameters of all the clients’ models to create a single model. The aggregated model is then tested to make sure it has an acceptable accuracy/error.

If the model needs further training using the clients’ local data, it is sent back to the clients, where its parameters are tuned and then returned to the server. The process continues until the model performs sufficiently well on the test data.
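To make the aggregation step concrete, here is a minimal sketch (not the tutorial’s actual code, which lives in the GitHub project) of one simple strategy: averaging the parameters received from the clients element-wise.

import numpy

# Hypothetical parameter vectors returned by two clients (illustrative values only).
client_1_params = numpy.array([0.2, -1.0, 0.5])
client_2_params = numpy.array([0.4, -0.8, 0.7])

# Element-wise average of the clients' parameters forms the server's new model parameters.
aggregated_params = numpy.mean([client_1_params, client_2_params], axis=0)

print(aggregated_params)  # [ 0.3 -0.9  0.6]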

The code for this tutorial is available in the Federated Learning GitHub project under the TutorialProject/Part3 directory. Future tutorials will build upon this application.

Here are the sections covered in this tutorial:

  • Installing PyGAD
  • Creating a population of neural networks using PyGAD
  • Sending the population to connected clients
  • Receiving the population at the client
  • Training the network using the genetic algorithm
  • Sending the client’s trained network back to the server
  • Final client code
  • Aggregating models’ parameters at the server
  • Final server code
  • Running the server and the clients

Let’s get started.

Installing PyGAD

PyGAD is an open-source Python library for building genetic algorithms (GA). It supports many parameters that customize GA for various problems.

PyGAD is available via PyPI, and thus it can be installed using the pip installer. For Windows, use the following CMD command:

pip install pygad

For Linux and macOS, replace pip with pip3 (PyGAD uses Python 3):

pip3 install pygad

Once installed, make sure it works correctly by importing it. The latest version at the time of writing this tutorial is 2.3.0.

import pygad

print(pygad.__version__)

If everything is working properly, then we can proceed to the next section and create a population of neural networks using PyGAD.

Creating a Population of Neural Networks using PyGAD

PyGAD uses a GA for training machine learning models. Currently, the models it supports are neural networks and convolutional neural networks.

The GA starts with a random initial population of solutions for the problem being optimized. Over a number of generations, the solutions are evolved by applying crossover and mutation. PyGAD creates an initial population of networks using its pygad.gann module. In this section, we’ll discuss how to use this module to create the initial population of networks.
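For readers new to PyGAD, here is a tiny standalone sketch of evolving a random population with pygad.GA, independent of neural networks. The toy fitness function and all parameter values are illustrative; the fitness-function signature follows PyGAD 2.x (later 3.x releases add the GA instance as an extra first argument).

import pygad

# Toy fitness function: reward solutions whose genes sum to a large value.
def fitness_func(solution, solution_idx):
    return sum(solution)

# Evolve a random population of 8 solutions, each with 3 genes, for 20 generations.
ga_instance = pygad.GA(num_generations=20,
                       num_parents_mating=2,
                       fitness_func=fitness_func,
                       sol_per_pop=8,
                       num_genes=3)

ga_instance.run()
print(ga_instance.best_solution())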

The pygad.gann module has a class named GANN; check this module’s documentation in the official docs. An instance of this class represents a population of neural networks that all share the same architecture. Later, this population will be evolved using the genetic algorithm.

The constructor of the pygad.gann.GANN class accepts the following parameters:

  • num_solutions: Population size (i.e. number of solutions in the population).
  • num_neurons_input: Number of input layer neurons.
  • num_neurons_output: Number of output layer neurons.
  • num_neurons_hidden_layers=[]: A list holding the number of neurons in each hidden layer. If the list has N elements, then the network will have N hidden layers, with the number of neurons in each layer given by the corresponding list value.
  • output_activation="softmax": A string representing the activation function in the output layer.
  • hidden_activations="relu": A string or list of strings representing the activation function in the hidden layer(s). If a list is specified, then its length must be equal to the length of the num_neurons_hidden_layers parameter.

For more information about these parameters, check out the documentation.
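As a concrete illustration, here is a minimal sketch of building a population with this constructor. All of the values below (the population size of 6, the single hidden layer of 3 neurons, and so on) are arbitrary choices for demonstration, not the values used later for the XOR problem.

import pygad.gann

# Create a population of 6 neural networks, each with 4 inputs, one hidden
# layer of 3 ReLU neurons, and 2 softmax output neurons (illustrative values).
GANN_instance = pygad.gann.GANN(num_solutions=6,
                                num_neurons_input=4,
                                num_neurons_hidden_layers=[3],
                                num_neurons_output=2,
                                hidden_activations=["relu"],
                                output_activation="softmax")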

The problem that will be modeled in this tutorial is the XOR problem. The samples for this problem are listed below. For each sample, there are 2 inputs and 1 output. Thus, the num_neurons_input parameter will be set to 2, and the num_neurons_output will be set to 1.
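For reference, the XOR truth table has 4 samples: (0, 0) → 0, (0, 1) → 1, (1, 0) → 1, and (1, 1) → 0. A minimal sketch of preparing this data as NumPy arrays might look as follows (the variable names data_inputs and data_outputs are only illustrative):

import numpy

# The 4 XOR samples: each sample has 2 inputs and 1 output.
data_inputs = numpy.array([[0, 0],
                           [0, 1],
                           [1, 0],
                           [1, 1]])

data_outputs = numpy.array([0, 1, 1, 0])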
