How To Build a Deep Learning Model to Predict Employee Retention Using Keras and TensorFlow

Introduction

Keras is a neural network API that is written in Python. It runs on top of TensorFlow, CNTK, or Theano. It is a high-level abstraction of these deep learning frameworks and therefore makes experimentation faster and easier. Keras is modular, which means implementation is seamless as developers can quickly extend models by adding modules.

TensorFlow is an open-source software library for machine learning. It works efficiently with computation involving arrays; so it's a great choice for the model you'll build in this tutorial. Furthermore, TensorFlow allows for the execution of code on either CPU or GPU, which is a useful feature especially when you're working with a massive dataset.

In this tutorial, you'll build a deep learning model that will predict the probability of an employee leaving a company. Retaining the best employees is an important factor for most organizations. To build your model, you'll use this dataset available at Kaggle, which has features that measure employee satisfaction in a company. To create this model, you'll use the Keras sequential layer to build the different layers for the model.

Prerequisites

Before you begin this tutorial you'll need the following:

Step 1 — Data Pre-processing

Data pre-processing is necessary to prepare your data in a form that a deep learning model can accept. If there are categorical variables in your data, you have to convert them to numbers because the algorithm only accepts numerical values. A categorical variable represents qualitative data that is grouped by name rather than measured numerically. In this step, you'll load in your dataset using pandas, which is a data manipulation Python library.

Before you begin data pre-processing, you'll activate your environment and ensure you have all the necessary packages installed on your machine. It's advantageous to use conda to install keras and tensorflow, since it will handle the installation of any necessary dependencies for these packages and ensure that the installed versions are compatible with each other. Because of this, the Anaconda Python distribution is a good choice for data science related projects.

Move into the environment you created in the prerequisite tutorial:

$ conda activate my_env

Run the following command to install keras and tensorflow:

(my_env) $ conda install tensorflow keras

Now, open Jupyter Notebook to get started. You can open Jupyter Notebook by typing the following command in your terminal:

(my_env) $ jupyter notebook

Note: If you're working from a remote server, you'll need to use SSH tunneling to access your notebook. Please revisit step 2 of the prerequisite tutorial for detailed instructions on setting up SSH tunneling. You can use the following command from your local machine to initiate your SSH tunnel:

$ ssh -L 8888:localhost:8888 your_username@your_server_ip

After accessing Jupyter Notebook, click on the anaconda3 file, and then click New at the top of the screen, and select Python 3 to load a new notebook.

Now, you'll import the required modules for the project and then load the dataset in a notebook cell. You'll load in the pandas module for manipulating your data and numpy for converting the data into numpy arrays. You'll also convert all the columns that are in string format to numerical values for your computer to process.

Insert the following code into a notebook cell and then click Run:

import pandas as pd
import numpy as np
df = pd.read_csv("https://raw.githubusercontent.com/mwitiderrick/kerasDO/master/HR_comma_sep.csv")

You've imported numpy and pandas. You then used pandas to load in the dataset for the model.

You can get a glimpse at the dataset you're working with by using head(). This is a useful function from pandas that allows you to view the first five records of your dataframe. Add the following code to a notebook cell and then run it:

df.head()

You'll now proceed to convert the categorical columns to numbers. You do this by converting them to dummy variables. Dummy variables are usually ones and zeros that indicate the presence or absence of a categorical feature. In this kind of situation, you also avoid the dummy variable trap by dropping the first dummy.

Note: The dummy variable trap is a situation whereby two or more variables are highly correlated. This leads to your model performing poorly. You therefore drop one dummy variable so that you always remain with N-1 dummy variables. It doesn't matter which dummy variable you drop, as long as you remain with N-1 of them. An example of this is an on/off switch. When you create the dummy variables you will get two columns: an on column and an off column. You can drop one of the columns, because if the switch isn't on, then it is off.
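To see the trap and the drop_first fix in miniature, here is a small illustration with a hypothetical on/off column (not part of the tutorial's dataset):

import pandas as pd

switch = pd.DataFrame({"state": ["on", "off", "on"]})

# Without drop_first you get two perfectly correlated columns (state_off, state_on)
print(pd.get_dummies(switch, columns=["state"]))

# With drop_first=True a single column (state_on) carries the same information
print(pd.get_dummies(switch, columns=["state"], drop_first=True))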

Insert this code in the next notebook cell and execute it:

feats = ['department','salary']
df_final = pd.get_dummies(df,columns=feats,drop_first=True)

feats = ['department','salary'] defines the two columns for which you want to create dummy variables. pd.get_dummies(df,columns=feats,drop_first=True) will generate the numerical variables that your employee retention model requires. It does this by converting the feats that you define from categorical to numerical variables.

You've loaded in the dataset and converted the salary and department columns into a format the keras deep learning model can accept. In the next step, you will split the dataset into a training and testing set.

Step 2 — Separating Your Training and Testing Datasets

You'll use scikit-learn to split your dataset into a training and a testing set. This is necessary so you can use part of the employee data to train the model and a part of it to test its performance. Splitting a dataset in this way is a common practice when building deep learning models.

It is important to implement this split in the dataset so the model you build doesn't have access to the testing data during the training process. This ensures that the model learns only from the training data, and you can then test its performance with the testing data. If you exposed your model to testing data during the training process then it would memorize the expected outcomes. Consequently, it would fail to give accurate predictions on data that it hasn't seen.

You'll start by importing the train_test_split module from the scikit-learn package. This is the module that will provide the splitting functionality. Insert this code in the next notebook cell and run:

from sklearn.model_selection import train_test_split

With the train_test_split module imported, you'll use the left column in your dataset to predict if an employee will leave the company. Therefore, it is essential that your deep learning model doesn't come into contact with this column. Insert the following into a cell to drop the left column:

X = df_final.drop(['left'],axis=1).values
y = df_final['left'].values

Your deep learning model expects to get the data as arrays. Therefore you use numpy to convert the data to numpy arrays with the .values attribute.

You're now ready to convert the dataset into a testing and training set. You'll use 70% of the data for training and 30% for testing. The training ratio is more than the testing ratio because you'll need to use most of the data for the training process. If desired, you can also experiment with a ratio of 80% for the training set and 20% for the testing set.

Now add this code to the next cell and run to split your training and testing data to the specified ratio:

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

You have now converted the data into the type that Keras expects it to be in (numpy arrays), and your data is split into a training and testing set. You'll pass this data to the keras model later in the tutorial. Beforehand you need to transform the data, which you'll complete in the next step.

Step 3 — Transforming the Data

When building deep learning models it is usually good practice to scale your dataset so that the computations are more efficient. In this step, you'll scale the data using the scikit-learn StandardScaler, which transforms each feature to have a mean of 0 and a standard deviation of 1 (unit variance). This puts all of the features within the same range, which is important because you're comparing features that have different units of measurement; scaling of this kind is typically required in machine learning.

To scale the training set and the test set, add this code to the notebook cell and run it:

from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

Here, you start by importing the StandardScaler and calling an instance of it. You then use its fit_transform method to scale the training set, and the transform method to scale the testing set using the parameters learned from the training data.

You have scaled all your dataset features to be within the same range. You can start building the artificial neural network in the next step.

Step 4 — Building the Artificial Neural Network

Now you will use keras to build the deep learning model. To do this, you'll import keras, which will use tensorflow as the backend by default. From keras, you'll then import the Sequential module to initialize the artificial neural network. An artificial neural network is a computational model that is built using inspiration from the workings of the human brain. You'll import the Dense module as well, which will add layers to your deep learning model.

When building a deep learning model you usually specify three layer types:

  • The input layer is the layer to which you'll pass the features of your dataset. There is no computation that occurs in this layer. It serves to pass features to the hidden layers.
  • The hidden layers are usually the layers between the input layer and the output layer—and there can be more than one. These layers perform the computations and pass the information to the output layer.
  • The output layer represents the layer of your neural network that will give you the results after training your model. It is responsible for producing the output variables.

To import the Keras, Sequential, and Dense modules, run the following code in your notebook cell:

import keras
from keras.models import Sequential
from keras.layers import Dense

You'll use Sequential to initialize a linear stack of layers. Since this is a classification problem, you'll create a classifier variable. A classification problem is a task where you have labeled data and would like to make some predictions based on the labeled data. Add this code to your notebook to create a classifier variable:

classifier = Sequential()

You've used Sequential to initialize the classifier.

You can now start adding layers to your network. Run this code in your next cell:

classifier.add(Dense(9, kernel_initializer = "uniform",activation = "relu", input_dim=18))

You add layers using the .add() function on your classifier and specify some parameters:

  • The first parameter is the number of nodes that your network should have. The connection between different nodes is what forms the neural network. One common strategy for choosing the number of nodes is to take the average of the number of nodes in the input layer and the output layer; see the quick check after this list.
  • The second parameter is the kernel_initializer. When you fit your deep learning model the weights will be initialized to numbers close to zero, but not zero. To achieve this you use the uniform distribution initializer. kernel_initializer is the function that initializes the weights.
  • The third parameter is the activation function. Your deep learning model will learn through this function. There are usually linear and non-linear activation functions. You use the relu activation function because it generalizes well on your data. Linear functions are not good for problems like these because they form a straight line.
  • The last parameter is input_dim, which represents the number of features in your dataset.
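As a quick check of that heuristic for this model: the dataset has 18 input features and the network has 1 output node, so the average is about 9, which is the value passed to Dense above.

input_nodes = 18                                  # features after creating the dummy variables
output_nodes = 1                                  # a single probability: will the employee leave?
hidden_nodes = (input_nodes + output_nodes) // 2  # integer average of input and output nodes
print(hidden_nodes)                               # 9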

Now you'll add the output layer that will give you the predictions:

classifier.add(Dense(1, kernel_initializer = "uniform",activation = "sigmoid"))

The output layer takes the following parameters:

  • The number of output nodes. You expect to get one output: if an employee leaves the company. Therefore you specify one output node.
  • The kernel_initializer is again uniform. For the activation function you use sigmoid, which gives you the probability that an employee will leave. In the event that you were dealing with more than two categories, you would use the softmax activation function, which is a generalization of the sigmoid activation function.

Next, you'll apply gradient descent to the neural network. This is an optimization strategy that works to reduce errors during the training process. Gradient descent is how the randomly assigned weights in a neural network are adjusted in order to reduce the cost function, which is a measure of how far a neural network's output is from the output expected of it.

The aim of gradient descent is to reach the point where the error is at its smallest, that is, where the cost function is at its minimum (a point referred to as a local minimum). In gradient descent, you differentiate to find the slope of the cost function at a specific point and use the sign of that slope to decide which direction to move the weights, so that you keep descending toward the minimum. There are several optimization strategies based on this idea, and in this tutorial you'll use a popular one known as adam.
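To make the idea concrete, here is a minimal, self-contained sketch (not part of the Keras model) of gradient descent minimizing a simple quadratic cost function; the learning rate and starting point are arbitrary choices for illustration:

import numpy as np

def cost(w):
    return (w - 3) ** 2          # toy cost function, minimized at w = 3

def gradient(w):
    return 2 * (w - 3)           # derivative (slope) of the cost

w = np.float64(0.0)              # randomly assigned starting weight
learning_rate = 0.1
for _ in range(50):
    w -= learning_rate * gradient(w)   # step against the slope
print(round(float(w), 4))        # close to 3, the minimum of the cost

Keras performs this kind of update across all of the network's weights each time it processes a batch of training data. To apply it to your classifier, you compile the model with the adam optimizer.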

Add this code to your notebook cell and run it:

classifier.compile(optimizer= "adam",loss = "binary_crossentropy",metrics = ["accuracy"])

Applying gradient descent is done via the compile function that takes the following parameters:

  • optimizer is the gradient descent variant to use; here you use adam.
  • loss is a function that you'll use in the gradient descent. Since this is a binary classification problem you use the binary_crossentropy loss function.
  • The last parameter is the metric that you'll use to evaluate your model. In this case, you'd like to evaluate it based on its accuracy when making predictions.

You're ready to fit your classifier to your dataset. Keras makes this possible via the .fit() method. To do this, insert the following code into your notebook and run it in order to fit the model to your dataset:

classifier.fit(X_train, y_train, batch_size = 10, epochs = 1)

The .fit() method takes a couple of parameters:

  • The first parameter is the training set with the features.
  • The second parameter is the column that you're making the predictions on.
  • The batch_size represents the number of samples that go through the neural network before the weights are updated.
  • epochs represents the number of times that the full dataset is passed through the neural network. More epochs take longer to run and can improve results, although training for too many epochs may eventually lead to overfitting.

You've created your deep learning model, compiled it, and fitted it to your dataset. You're ready to make some predictions using the deep learning model. In the next step, you'll start making predictions with the dataset that the model hasn't yet seen.

Step 5 — Running Predictions on the Test Set

To start making predictions, you'll use the testing dataset in the model that you've created. Keras enables you to make predictions by using the .predict() function.

Insert the following code in the next notebook cell to begin making predictions:

y_pred = classifier.predict(X_test)

Since you've already trained the classifier with the training set, this code will use the learning from the training process to make predictions on the test set. This will give you the probabilities of an employee leaving. You'll work with a probability of 50% and above to indicate a high chance of the employee leaving the company.

Enter the following line of code in your notebook cell in order to set this threshold:

y_pred = (y_pred > 0.5)

You've created predictions using the predict method and set the threshold for determining if an employee is likely to leave. To evaluate how well the model performed on the predictions, you will next use a confusion matrix.

Step 6 — Checking the Confusion Matrix

In this step, you will use a confusion matrix to check the number of correct and incorrect predictions. A confusion matrix, also known as an error matrix, is a square matrix that reports the number of true positives(tp), false positives(fp), true negatives(tn), and false negatives(fn) of a classifier.

  • A true positive is an outcome where the model correctly predicts the positive class. (The true positive rate is also known as sensitivity or recall.)
  • A true negative is an outcome where the model correctly predicts the negative class.
  • A false positive is an outcome where the model incorrectly predicts the positive class.
  • A false negative is an outcome where the model incorrectly predicts the negative class.

To achieve this you'll use a confusion matrix that scikit-learn provides.

Insert this code in the next notebook cell to import the scikit-learn confusion matrix:

from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
cm

Output
array([[3305,  106],
       [ 714,  375]])

The confusion matrix output means that your deep learning model made 3305 + 375 correct predictions and 106 + 714 wrong predictions. You can calculate the accuracy with: (3305 + 375) / 4500. The total number of observations in your dataset is 4500. This gives you an accuracy of approximately 81.8%. This is a good accuracy rate, since the model gets more than 81% of its predictions right.
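If you'd like to compute the accuracy directly from the confusion matrix instead of by hand, a short sketch (assuming cm holds the array shown above):

correct = cm[0, 0] + cm[1, 1]   # correctly predicted stays plus correctly predicted leaves
total = cm.sum()                # all observations in the test set
accuracy = correct / total
accuracy                        # approximately 0.818 for the matrix above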

You've evaluated your model using the confusion matrix. Next, you'll work on making a single prediction using the model that you have developed.

Step 7 — Making a Single Prediction

In this step you'll make a single prediction given the details of one employee with your model. You will achieve this by predicting the probability of a single employee leaving the company. You'll pass this employee's features to the predict method. As you did earlier, you'll scale the features as well and convert them to a numpy array.

To pass the employee's features, run the following code in a cell:

new_pred = classifier.predict(sc.transform(np.array([[0.26,0.7 ,3., 238., 6., 0.,0.,0.,0., 0.,0.,0.,0.,0.,1.,0., 0.,1.]])))

These features represent the features of a single employee. As shown in the dataset in step 1, these features represent: satisfaction level, last evaluation, number of projects, and so on. As you did in step 3, you have to transform the features in a manner that the deep learning model can accept.

Add a threshold of 50% with the following code:

new_pred = (new_pred > 0.5)
new_pred

This threshold indicates that where the probability is above 50% an employee will leave the company.

You can see in your output that the employee won't leave the company:

Output
array([[False]])

You might decide to set a lower or higher threshold for your model. Because new_pred now holds a boolean rather than the original probability, re-run the prediction first and then apply the new threshold; for example, you can set the threshold to 60%:

new_prob = classifier.predict(sc.transform(np.array([[0.26,0.7 ,3., 238., 6., 0.,0.,0.,0., 0.,0.,0.,0.,0.,1.,0., 0.,1.]])))
new_pred = (new_prob > 0.6)
new_pred

This new threshold still shows that the employee won't leave the company:

Output
array([[False]])

In this step, you have seen how to make a single prediction given the features of a single employee. In the next step, you will work on improving the accuracy of your model.

Step 8 — Improving the Model Accuracy

If you train your model many times you'll keep getting different results. The accuracies from the different training runs have a high variance. To address this, you'll use K-fold cross-validation. Usually, K is set to 10. In this technique, the model is trained on 9 of the folds and tested on the remaining fold, and this is repeated until every fold has been used as the test fold. Each iteration gives its own accuracy, and the reported accuracy of the model is the average of all of these accuracies.

keras enables you to implement K-fold cross-validation via the KerasClassifier wrapper, which makes your Keras model usable with scikit-learn's cross-validation utilities. You'll start by importing the cross_val_score cross-validation function and the KerasClassifier. To do this, insert and run the following code in your notebook cell:

from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score

To create the function that you will pass to the KerasClassifier, add this code to the next cell:

def make_classifier():
    classifier = Sequential()
    classifier.add(Dense(9, kernel_initializer = "uniform", activation = "relu", input_dim=18))
    classifier.add(Dense(1, kernel_initializer = "uniform", activation = "sigmoid"))
    classifier.compile(optimizer= "adam",loss = "binary_crossentropy",metrics = ["accuracy"])
    return classifier

Here, you create a function that you'll pass to the KerasClassifier—the function is one of the arguments that the classifier expects. The function is a wrapper of the neural network design that you used earlier. The passed parameters are also similar to the ones used earlier in the tutorial. In the function, you first initialize the classifier using Sequential(), you then use Dense to add the input and output layer. Finally, you compile the classifier and return it.

To pass the function you've built to the KerasClassifier, add this line of code to your notebook:

classifier = KerasClassifier(build_fn = make_classifier, batch_size=10, nb_epoch=1)

The KerasClassifier takes three arguments:

  • build_fn: the function with the neural network design
  • batch_size: the number of samples to be passed via the network in each iteration
  • nb_epoch: the number of epochs the network will run

Next, you apply the cross-validation using Scikit-learn's cross_val_score. Add the following code to your notebook cell and run it:

accuracies = cross_val_score(estimator = classifier,X = X_train,y = y_train,cv = 10,n_jobs = -1)

This function will give you ten accuracies since you have specified the number of folds as 10. Therefore, you assign it to the accuracies variable and later use it to compute the mean accuracy. It takes the following arguments:

  • estimator: the classifier that you've just defined
  • X: the training set features
  • y: the value to be predicted in the training set
  • cv: the number of folds
  • n_jobs: the number of CPUs to use (specifying it as -1 will make use of all the available CPUs)

Now you have applied the cross-validation, you can compute the mean and variance of the accuracies. To achieve this, insert the following code into your notebook:

mean = accuracies.mean()
mean

In your output you'll see that the mean is 83%:

Output
0.8343617910685696

To compute the variance of the accuracies, add this code to the next notebook cell:

variance = accuracies.var()
variance

Output
0.0010935021002275425

The variance of the accuracies is approximately 0.0011. Since the variance is very low, your model is performing consistently across the folds.

You've used K-fold cross-validation to get a more reliable estimate of your model's accuracy. In the next step, you'll work on the overfitting problem.

Step 9 — Adding Dropout Regularization to Fight Over-Fitting

Predictive models are prone to a problem known as overfitting. This is a scenario whereby the model memorizes the results in the training set and isn't able to generalize to data that it hasn't seen. Typically you observe overfitting when there is a very high variance in accuracies. To help fight overfitting in your model, you will add a Dropout layer to your model.

In neural networks, dropout regularization is a technique that fights overfitting by adding a Dropout layer to your neural network. It has a rate parameter that indicates the fraction of neurons to deactivate at each iteration. The process of deactivating neurons is random. In this case, you specify a rate of 0.1, which means that 10% of the neurons will deactivate during the training process. The rest of the network design remains the same.

To add your Dropout layer, add the following code to the next cell:

from keras.layers import Dropout

classifier = Sequential()
classifier.add(Dense(9, kernel_initializer = "uniform", activation = "relu", input_dim=18))
classifier.add(Dropout(rate = 0.1))
classifier.add(Dense(1, kernel_initializer = "uniform", activation = "sigmoid"))
classifier.compile(optimizer= "adam",loss = "binary_crossentropy",metrics = ["accuracy"])

You have added a Dropout layer between the input and output layers. Since you set a dropout rate of 0.1, 10% of the neurons will deactivate during training so that the classifier doesn't overfit on the training set. After adding the Dropout and output layers you then compiled the classifier as you have done previously.

You worked to fight over-fitting in this step with a Dropout layer. Next, you'll work on further improving the model by tuning the parameters you used while creating the model.

Step 10 — Hyperparameter Tuning

Grid search is a technique that you can use to experiment with different model parameters in order to find the ones that give you the best accuracy. The technique does this by trying different combinations of parameters and returning those that give the best results. You'll use grid search to search for the best parameters for your deep learning model, which will help improve model accuracy. scikit-learn provides the GridSearchCV function for this. You will now proceed to modify the make_classifier function to try out different parameters.

Add this code to your notebook to modify the make_classifier function so you can test out different optimizer functions:

from sklearn.model_selection import GridSearchCV
def make_classifier(optimizer):
    classifier = Sequential()
    classifier.add(Dense(9, kernel_initializer = "uniform", activation = "relu", input_dim=18))
    classifier.add(Dense(1, kernel_initializer = "uniform", activation = "sigmoid"))
    classifier.compile(optimizer= optimizer,loss = "binary_crossentropy",metrics = ["accuracy"])
    return classifier

You have started by importing GridSearchCV. You have then made changes to the make_classifier function so that you can try different optimizers. You've initialized the classifier, added the input and output layer, and then compiled the classifier. Finally, you have returned the classifier so you can use it.

Like in step 4, insert this line of code to define the classifier:

classifier = KerasClassifier(build_fn = make_classifier)

You've defined the classifier using the KerasClassifier, which expects a function through the build_fn parameter. You have called the KerasClassifier and passed the make_classifier function that you created earlier.

You will now proceed to set a couple of parameters that you wish to experiment with. Enter this code into a cell and run:

params = {
    'batch_size':[20,35],
    'epochs':[2,3],
    'optimizer':['adam','rmsprop']
}

Here you have added different batch sizes, number of epochs, and different types of optimizer functions.

For a small dataset like yours, a batch size of between 20 and 35 is good. For large datasets it's important to experiment with larger batch sizes. Using low numbers for the number of epochs ensures that you get results within a short period. However, you can experiment with bigger numbers that will take a while to complete depending on the processing speed of your server. The adam and rmsprop optimizers from keras are a good choice for this type of neural network.

Now you're going to use the different parameters you have defined to search for the best parameters using the GridSearchCV function. Enter this into the next cell and run it:

grid_search = GridSearchCV(estimator=classifier,
                           param_grid=params,
                           scoring="accuracy",
                           cv=2)

The grid search function expects the following parameters:

  • estimator: the classifier that you're using.
  • param_grid: the set of parameters that you're going to test.
  • scoring: the metric you're using.
  • cv: the number of folds you'll test on.

Next, you fit this grid_search to your training dataset:

grid_search = grid_search.fit(X_train,y_train)

Your output will be similar to the following; wait a moment for it to complete:

Output
Epoch 1/2
5249/5249 [==============================] - 1s 228us/step - loss: 0.5958 - acc: 0.7645
Epoch 2/2
5249/5249 [==============================] - 0s 82us/step - loss: 0.3962 - acc: 0.8510
Epoch 1/2
5250/5250 [==============================] - 1s 222us/step - loss: 0.5935 - acc: 0.7596
Epoch 2/2
5250/5250 [==============================] - 0s 85us/step - loss: 0.4080 - acc: 0.8029
Epoch 1/2
5249/5249 [==============================] - 1s 214us/step - loss: 0.5929 - acc: 0.7676
Epoch 2/2
5249/5249 [==============================] - 0s 82us/step - loss: 0.4261 - acc: 0.7864

Add the following code to a notebook cell to obtain the best parameters from this search using the best_params_ attribute:

best_param = grid_search.best_params_
best_accuracy = grid_search.best_score_

You can now check the best parameters for your model with the following code:

best_param

Your output shows that the best batch size is 20, the best number of epochs is 2, and the adam optimizer is the best for your model:

Output
{'batch_size': 20, 'epochs': 2, 'optimizer': 'adam'}

You can check the best accuracy for your model. The best_accuracy number represents the highest accuracy you obtain from the best parameters after running the grid search:

best_accuracy

Your output will be similar to the following:

Output
0.8533193637489285

You've used GridSearch to figure out the best parameters for your classifier. You have seen that the best batch_size is 20, the best optimizer is the adam optimizer and the best number of epochs is 2. You have also obtained the best accuracy for your classifier as being 85%. You've built an employee retention model that is able to predict if an employee stays or leaves with an accuracy of up to 85%.

Conclusion

In this tutorial, you've used Keras to build an artificial neural network that predicts the probability that an employee will leave a company. You combined your previous knowledge in machine learning using scikit-learn to achieve this. To further improve your model, you can try different activation functions or optimizer functions from keras. You could also experiment with a different number of folds, or, even build a model with a different dataset.

For other tutorials in the machine learning field or on TensorFlow, you can try building a neural network to recognize handwritten digits, or check out DigitalOcean's other machine learning tutorials.


Machine Learning Tutorial with Python, Jupyter, KSQL and TensorFlow

Machine Learning With Python, Jupyter, KSQL, and TensorFlow. This post focuses on how the Kafka ecosystem can help solve the impedance mismatch between data scientists, data engineers and production engineers.

Building a scalable, reliable, and performant machine learning (ML) infrastructure is not easy. It takes much more effort than just building an analytic model with Python and your favorite machine learning framework.

Uber, which already runs its scalable and framework-independent machine learning platform Michelangelo for many use cases in production, wrote a good summary:

When Michelangelo started, the most urgent and highest impact use cases were some very high scale problems, which led us to build around Apache Spark (for large-scale data processing and model training) and Java (for low latency, high throughput online serving). This structure worked well for production training and deployment of many models but left a lot to be desired in terms of overhead, flexibility, and ease of use, especially during early prototyping and experimentation [where Notebooks and Python shine].
Uber expanded Michelangelo “to serve any kind of Python model from any source to support other Machine Learning and Deep Learning frameworks like PyTorch and TensorFlow [instead of just using Spark for everything].”

So why did Uber (and many other tech companies) build its own platform and framework-independent machine learning infrastructure?

The posts How to Build and Deploy Scalable Machine Learning in Production with Apache Kafka and Using Apache Kafka to Drive Cutting-Edge Machine Learning describe the benefits of leveraging the Apache Kafka® ecosystem as a central, scalable, and mission-critical nervous system. It allows real-time data ingestion, processing, model deployment, and monitoring in a reliable and scalable way.

This post focuses on how the Kafka ecosystem can help solve the impedance mismatch between data scientists, data engineers, and production engineers. By leveraging it to build your own scalable machine learning infrastructure and also make your data scientists happy, you can solve the same problems for which Uber built its own ML platform, Michelangelo.




Impedance Mismatch Between Data Scientists, Data Engineers and Production Engineers

Based on what I’ve seen in the field, an impedance mismatch between data scientists, data engineers, and production engineers is the main reason why companies struggle to bring analytic models into production to add business value.

The following diagram illustrates the different required steps and corresponding roles as part of the impedance mismatch in a machine learning lifecycle:

Impedance mismatch between model development and model deployment

Data scientists love Python, period. Therefore, the majority of machine learning/deep learning frameworks focus on Python APIs. Both the most stable and the most cutting-edge APIs, as well as the majority of examples and tutorials, use Python. In addition to Python support, there is typically support for other programming languages, including JavaScript for web integration and Java for platform integration, though oftentimes with fewer features and less maturity. No matter what other platforms are supported, chances are very high that your data scientists will build and train their analytic models with Python.

There is an impedance mismatch between model development using Python and its tool stack, and a scalable, reliable data platform with the low latency, high throughput, zero data loss, and 24/7 availability requirements needed for data ingestion, preprocessing, model deployment, and monitoring at scale. Python, in practice, is not the best-known technology for these requirements. However, it is a great client for a data platform like Apache Kafka.

The problem is that writing the machine learning source code to train an analytic model with Python and the machine learning framework of your choice is just a very small part of a real-world machine learning infrastructure. You need to think about the whole model lifecycle. The following image represents this hidden technical debt in machine learning systems (showing how small the “ML code” part is):

Thus, you need to train and deploy the model built to a scalable production environment in order to reliably make use of it. This can either be built natively around the Kafka ecosystem, or you could use Kafka just for ingestion into another storage and processing cluster such as HDFS or AWS S3 with Spark. There are many tradeoffs between Kafka, Spark, and several other scalable infrastructures, but that discussion is out of scope for this post. For now, we’ll focus on Kafka.

Different solutions in the industry solve certain parts of the impedance mismatch between data scientists, data engineers, and production engineers. Let’s take a look at some of these options:

  • Official standards like Open Neural Network Exchange (ONNX), Portable Format for Analytics (PFA) or Predictive Model Markup Language (PMML): A data scientist builds a model with Python. The Java developer imports it in Java for production deployment. This approach supports different frameworks, products, and cloud services. You do not have to rely on the same framework or product for training and model deployment. Consider ONNX, a relatively new standard for deep learning — it already supports TensorFlow, PyTorch, and MXNet. These standards have pros and cons. Some people like and use them; many don’t.
  • Developer-focused frameworks like Deeplearning4j: These frameworks are built for software engineers to build the whole machine learning lifecycle on the Java platform, not just model deployment and monitoring, but also preprocessing and training. You can still import other models if you want (e.g., Deeplearning4j lets you import Keras models). This option is great if you: a) have data scientists who can write Java or b) have software engineers who understand machine learning concepts enough to build analytic models.
  • AutoML for building analytic models with limited machine learning experience: This way, domain experts can build and deploy analytic models with a button click. The AutoML engine provides an interface for others to use the model for predictions.
  • Embedding model binaries into applications: The output of model training is an analytic model. For instance, you can write Python code to train and generate a TensorFlow model. Depending on the framework, the output can be text files, Java source code, or binary files. For example, TensorFlow generates a model artifact with Protobuf, JSON, and other files. No matter what format the output of your machine learning framework is, it can be embedded into applications to use for predictions via the framework's API (e.g., you can load a TensorFlow model from a Java application through TensorFlow's Java API); see the sketch after this list.
  • Managed model server in the public cloud like Google Cloud Machine Learning Engine: The cloud provider takes over the burden of availability and reliability. The data scientist “just” deploys its trained model, and production engineers can access it. The key tradeoff is that this requires RPC communication to perform model inference.
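As a rough illustration of the embedding approach in Python (a sketch, not the post's own code; the artifact name and feature values are made up):

import numpy as np
from tensorflow import keras

# Train-time side (normally a separate script or notebook): save the model artifact
model = keras.Sequential([keras.layers.Dense(1, activation="sigmoid", input_shape=(4,))])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.save("fraud_model.h5")                       # hypothetical artifact name

# Application side: embed the artifact and use it for predictions
embedded = keras.models.load_model("fraud_model.h5")
features = np.array([[0.1, 0.3, 0.7, 0.2]])        # one already-preprocessed record (made up)
print(embedded.predict(features))

A Java application would do the equivalent through TensorFlow's Java API, as mentioned above.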

While all these solutions help data scientists, data engineers, and production engineers to work better together, there are underlying challenges within the hidden debts:

  • Data collection (i.e., integration) and preprocessing need to run at scale

  • Configuration needs to be shared and automated for continuous builds and integration tests

  • The serving and monitoring infrastructure need to fit into your overall enterprise architecture and tool stack

So how can the Kafka ecosystem help here?

Apache Kafka as a Key Component for Solving the Impedance Mismatch

In many cases, it is best to provide experts with the tools they like and know well. The challenge is to combine the different toolsets and still build an integrated system, as well as a continuous, scalable, machine learning workflow. Therefore, Kafka is not competitive but complementary to the discussed alternatives when it comes to solving the impedance mismatch between the data scientist and developer.

The data engineer builds a scalable integration pipeline using Kafka as infrastructure and Python for integration and preprocessing statements. The data scientist can build their model with Python or any other preferred tool. The production engineer gets the analytic models (either manually or through any automated, continuous integration setup) from the data scientist and embeds them into their Kafka application to deploy it in production. Or, the team works together and builds everything with Java and a framework like Deeplearning4j.

Any option can pair well with Apache Kafka. Pick the pieces you need, whether it’s Kafka core for data transportation, Kafka Connect for data integration, or Kafka Streams/KSQL for data preprocessing. Many components can be used for both model training and model inference. Write once and use in both scenarios as shown in the following diagram:

Leveraging the Apache Kafka ecosystem for a machine learning infrastructure

Monitoring the complete environment in real time and at scale is also a common task for Kafka. A huge benefit is that you only build a highly reliable and scalable pipeline once but use it for both parts of a machine learning infrastructure. And you can use it in any environment: in the cloud, in on-prem datacenters, or at the edges where IoT devices are.

Say you wanted to build one integration pipeline from MQTT to Kafka with KSQL for data preprocessing and use Kafka Connect for data ingestion into HDFS, AWS S3, or Google Cloud Storage, where you do the model training. The same integration pipeline, or at least parts of it, can be reused for model inference. New MQTT input data can directly be used in real time to make predictions.

We just explained various alternatives to solving the impedance mismatch between data scientists and software engineers in Kafka environments. Now, let’s discuss one specific option in the next section, which is probably the most convenient for data scientists: leveraging Kafka from a Jupyter Notebook with KSQL statements and combining it with TensorFlow and Keras to train a neural network.

Data Scientists Combining Python and Jupyter With Scalable Streaming Architectures

Data scientists use tools like Jupyter Notebooks to analyze, transform, enrich, filter, and process data. The preprocessed data is then used to train analytic models with machine learning/deep learning frameworks like TensorFlow.

However, some data scientists do not even know “bread-and-butter” concepts of software engineers, such as version control systems like GitHub or continuous integration tools like Jenkins.

This raises the question of how to combine the Python experience of data scientists with the benefits of Apache Kafka as a battle-tested, highly scalable data processing and streaming platform.

Apache Kafka and KSQL for Data Scientists and Data Engineers

Kafka offers integration options that can be used with Python, like Confluent’s Python Client for Apache Kafka or Confluent REST Proxy for HTTP integration. But this is not really a convenient way for data scientists who are used to quickly and interactively analyzing and preprocessing data before model training and evaluation. Rapid prototyping is typically used here.
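For reference, a minimal sketch of consuming events with Confluent's Python client; the broker address, topic name, and group id are placeholders:

from confluent_kafka import Consumer

consumer = Consumer({
    'bootstrap.servers': 'localhost:9092',   # placeholder broker address
    'group.id': 'notebook-consumer',
    'auto.offset.reset': 'earliest'
})
consumer.subscribe(['creditcardfraud_source'])   # placeholder topic name

msg = consumer.poll(timeout=5.0)                 # fetch a single message (or None)
if msg is not None and msg.error() is None:
    print(msg.value())
consumer.close()

This works, but it is lower-level than the interactive, SQL-like workflow described next.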

KSQL enables data scientists to take a look at Kafka event streams and implement continuous stream processing from their well-known and loved Python environments like Jupyter by writing simple SQL-like statements for interactive analysis and data preprocessing.

The following Python example executes an interactive query from a Kafka stream leveraging the open source framework ksql-python, which adds a Python layer on top of KSQL’s REST interface. Here are a few lines of the Python code using KSQL from a Jupyter Notebook:
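The original post shows this as a screenshot; here is a minimal sketch of what such a query looks like with ksql-python, assuming a KSQL server at localhost:8088 and a stream named creditcardfraud_source (both of which appear later in this post):

from ksql import KSQLAPI

client = KSQLAPI('http://localhost:8088')

# Interactive query; LIMIT keeps it from running as an endless continuous query
results = client.query('SELECT * FROM creditcardfraud_source LIMIT 5')
for row in results:   # results is a generator of rows streamed from KSQL
    print(row)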

The result of such a KSQL query is a Python generator object, which you can easily process with other Python libraries. This feels much more Python native and is analogous to NumPy, pandas, scikit-learn and other widespread Python libraries.

Similarly to rapid prototyping with these libraries, you can do interactive queries and data preprocessing with ksql-python. Check out the KSQL quick start and KSQL recipes to understand how to write a KSQL query to easily filter, transform, enrich, or aggregate data. While KSQL is running continuous queries, you can also use it for interactive analysis and use the LIMIT keyword like in ANSI SQL if you just want to get a specific number of rows.

So what’s the big deal? You understand that KSQL can feel Python-native with the ksql-python library, but why use KSQL instead of or in addition to your well-known and favorite Python libraries for analyzing and processing data?

The key difference is that these KSQL queries can also be deployed in production afterwards. KSQL offers you all the features from Kafka under the hood like high scalability, reliability, and failover handling. The same KSQL statement that you use in your Jupyter Notebook for interactive analysis and preprocessing can scale to millions of messages per second. Fault tolerant. With zero data loss and exactly once semantics. This is very important and valuable for bringing together the Python-loving data scientist with the highly scalable and reliable production infrastructure.

Just to be clear: KSQL + Python is not the all-rounder for every data engineering task, and it does not replace the existing Python toolset. But it is a great option in the toolbox of data scientists and data engineers, and it adds new possibilities like getting real-time updates of incoming information as the source data changes or updating a deployed model with a new and improved version.

Jupyter Notebook for Fraud Detection With Python KSQL and TensorFlow/Keras

Let’s now take a look at a detailed example using the combination of KSQL and Python. It involves advanced code examples using ksql-python and other widespread components from Python’s machine learning ecosystem, like NumPy, pandas, TensorFlow, and Keras.

The use case is fraud detection for credit card payments. We use a test dataset from Kaggle as a foundation to train an unsupervised autoencoder to detect anomalies and potential fraud in payments. The focus of this example is not just model training, but the whole machine learning infrastructure, including data ingestion, data preprocessing, model training, model deployment, and monitoring. All of this needs to be scalable, reliable, and performant.
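The training code itself lives in the linked notebook; as an illustration of the general shape of such a model, here is a minimal Keras autoencoder sketch (the layer sizes, feature count, and the X_train variable are assumptions, not the notebook's exact values):

from tensorflow import keras

# A small autoencoder: it learns to reconstruct normal payments, so records it
# reconstructs poorly (high reconstruction error) are flagged as potential fraud.
n_features = 30                       # assumed number of numeric input columns
autoencoder = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(n_features,)),
    keras.layers.Dense(8, activation="relu"),      # compressed representation
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(n_features, activation="linear"),
])
autoencoder.compile(optimizer="adam", loss="mse")

# X_train: scaled, non-fraud payment records (assumption); input equals target
# autoencoder.fit(X_train, X_train, epochs=10, batch_size=256, validation_split=0.2)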

For the full running example and more details, see the documentation.

Let’s take a look at a few snippets of the Jupyter Notebook.

Connection to KSQL server and creation of a KSQL stream using Python:

from ksql import KSQLAPI
client = KSQLAPI('http://localhost:8088')

client.create_stream(table_name='creditcardfraud_source',
                     columns_type=['Id bigint', 'Timestamp varchar', 'User varchar', 'Time int', 'V1 double', 'V2 double', 'V3 double', 'V4 double', 'V5 double', 'V6 double', 'V7 double', 'V8 double', 'V9 double', 'V10 double', 'V11 double', 'V12 double', 'V13 double', 'V14 double', 'V15 double', 'V16 double', 'V17 double', 'V18 double', 'V19 double', 'V20 double', 'V21 double', 'V22 double', 'V23 double', 'V24 double', 'V25 double', 'V26 double', 'V27 double', 'V28 double', 'Amount double', 'Class string'],
                     topic='creditcardfraud_source',
                     value_format='DELIMITED')

Preprocessing incoming payment information using Python:

  • Filter columns that are not needed

  • Filter messages where column "class" is empty

  • Change the data format to Avro for convenient further processing

client.create_stream_as(table_name='creditcardfraud_preprocessed_avro',
                     select_columns=['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V6', 'V7', 'V8', 'V9', 'V10', 'V11', 'V12', 'V13', 'V14', 'V15', 'V16', 'V17', 'V18', 'V19', 'V20', 'V21', 'V22', 'V23', 'V24', 'V25', 'V26', 'V27', 'V28', 'Amount', 'Class'],
                     src_table='creditcardfraud_source',
                     conditions='Class IS NOT NULL',
                     kafka_topic='creditcardfraud_preprocessed_avro',
                     value_format='AVRO')

Some more examples for possible data wrangling and preprocessing with KSQL:

  • Drop columns, filter messages where value “class” is empty and change data format to Avro:
CREATE STREAM creditcardfraud_preprocessed_avro WITH (VALUE_FORMAT='AVRO', KAFKA_TOPIC='creditcardfraud_preprocessed_avro') AS SELECT Time,  V1 , V2 , V3 , V4 , V5 , V6 , V7 , V8 , V9 , V10 , V11 , V12 , V13 , V14 , V15 , V16 , V17 , V18 , V19 , V20 , V21 , V22 , V23 , V24 , V25 , V26 , V27 , V28 , Amount , Class FROM creditcardfraud_source WHERE Class IS NOT NULL;

  • Anonymization (mask the two leftmost characters, e.g., “Hans” becomes “**ns”):
SELECT Id, MASK_LEFT(User, 2) FROM creditcardfraud_source;

  • Augmentation (add -1 if “class” is null):
SELECT Id, IFNULL(Class, -1) FROM creditcardfraud_source;

  • Merge/join data frames:
CREATE STREAM creditcardfraud_per_user WITH (VALUE_FORMAT='AVRO', KAFKA_TOPIC='creditcardfraud_preprocessed_avro') AS SELECT Time,  V1 , V2 , V3 , V4 , V5 , V6 , V7 , V8 , V9 , V10 , V11 , V12 , V13 , V14 , V15 , V16 , V17 , V18 , V19 , V20 , V21 , V22 , V23 , V24 , V25 , V26 , V27 , V28 , Amount , Class FROM creditcardfraud_enahnced c INNER JOIN USERS u on c.userid = u.userid WHERE V1 > 5 AND V2 IS NOT NULL AND u.CITY LIKE 'Premium%';

The Jupyter Notebook contains the full example. We use Python + KSQL for integration, data preprocessing, and interactive analysis and combine them with various other libraries from a common Python machine learning tool stack for prototyping and model training:

  • Arrays/matrices processing with NumPy and pandas

  • ML-specific processing (split train/test, etc.) with scikit-learn

  • Interactive analysis through data visualisations with Matplotlib

  • ML training + evaluation with TensorFlow and Keras

Model inference and visualisation are done in the Jupyter notebook, too. After you have built an accurate model, you can deploy it anywhere to make predictions and leverage the same integration pipeline for model training. Some examples of model deployment in Kafka environments are:

  • Analytic models (TensorFlow, Keras, H2O and Deeplearning4j) embedded in Kafka Streams microservices

  • Anomaly detection of IoT sensor data with a model embedded into a KSQL UDF

  • RPC communication between Kafka Streams application and model server (TensorFlow Serving)

Python, KSQL, and Jupyter for Prototyping, Demos, and Production Deployments

As you can see, both in theory (Google’s paper Hidden Technical Debt in Machine Learning Systems) and in practice (Uber’s machine learning platform Michelangelo), it is not a simple task to build a scalable, reliable, and performant machine learning infrastructure.

The impedance mismatch between data scientists, data engineers, and production engineers must be resolved in order for machine learning projects to deliver real business value. This requires using the right tool for the job and understanding how to combine them. You can use Python and Jupyter for prototyping and demos (often Kafka and KSQL might be overhead here and not needed if you just want to do fast, simple prototyping on a historical dataset) or combine Python and Jupyter with your whole development lifecycle up to production deployments at scale.

Integration of Kafka event streams and KSQL statements into Jupyter Notebooks allows you to:

  • Use the preferred existing environment of the data scientist (including Python and Jupyter) and combine it with Kafka and KSQL to integrate and continuously process real-time streaming data by using a simple Python wrapper API to execute KSQL queries

  • Easily connect to real-time streaming data instead of just historical batches of data (maybe from the last day, week or month, e.g., coming in via CSV files)

  • Merge different concepts like streaming event-based sensor data coming from Kafka with Python programming concepts like generators or dictionary objects, which you can use with your Python data processing tools or ML frameworks like NumPy, pandas, or scikit-learn

  • Reuse the same logic for integration, preprocessing, and monitoring and move it from your Jupyter Notebook and prototyping or demos to large-scale test and production systems

Python for prototyping and Apache Kafka for a scalable streaming platform are not rival technology stacks. They work together very well, especially if you use “helper tools” like Jupyter Notebooks and KSQL.

Please try it out and let us know your thoughts. How do you leverage the Apache Kafka ecosystem in your machine learning projects?

Machine Learning, Data Science and Deep Learning with Python

Complete hands-on Machine Learning tutorial with Data Science, Tensorflow, Artificial Intelligence, and Neural Networks. Introducing Tensorflow, Using Tensorflow, Introducing Keras, Using Keras, Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Learning Deep Learning, Machine Learning with Neural Networks, Deep Learning Tutorial with Python

Explore the full course on Udemy (special discount included in the link): http://learnstartup.net/p/BkS5nEmZg

In less than 3 hours, you can understand the theory behind modern artificial intelligence, and apply it with several hands-on examples. This is machine learning on steroids! Find out why everyone’s so excited about it and how it really works – and what modern AI can and cannot really do.

In this course, we will cover:
• Deep Learning Pre-requisites (gradient descent, autodiff, softmax)
• The History of Artificial Neural Networks
• Deep Learning in the Tensorflow Playground
• Deep Learning Details
• Introducing Tensorflow
• Using Tensorflow
• Introducing Keras
• Using Keras to Predict Political Parties
• Convolutional Neural Networks (CNNs)
• Using CNNs for Handwriting Recognition
• Recurrent Neural Networks (RNNs)
• Using a RNN for Sentiment Analysis
• The Ethics of Deep Learning
• Learning More about Deep Learning

At the end, you will have a final challenge to create your own deep learning / machine learning system to predict whether real mammogram results are benign or malignant, using your own artificial neural network you have learned to code from scratch with Python.

Separate the reality of modern AI from the hype – by learning about deep learning, well, deeply. You will need some familiarity with Python and linear algebra to follow along, but if you have that experience, you will find that neural networks are not as complicated as they sound. And how they actually work is quite elegant!

This is hands-on tutorial with real code you can download, study, and run yourself.

TensorFlow Vs PyTorch: Comparison of the Machine Learning Libraries

Libraries play an important role when developers decide to work on Machine Learning or Deep Learning research. According to this article, a survey based on a sample of 1,616 ML developers and data scientists found that for every one developer using PyTorch, there are 3.4 developers using TensorFlow. In this article, we list down 10 comparisons between these two Machine Learning libraries.

1 - Origin

PyTorch, developed by Facebook, is based on Torch, while TensorFlow, an open-source Machine Learning library developed by Google Brain, is based on the idea of data flow graphs for building models.

2 - Features

TensorFlow has some attractive features, such as TensorBoard, which is a great option for visualising a Machine Learning model, and TensorFlow Serving, a dedicated gRPC server used for deploying models in production. On the other hand, PyTorch has several distinguishing features too, such as dynamic computation graphs, native support for Python, and support for CUDA, which reduces running time and increases performance.

3 - Community

TensorFlow is adopted by many researchers in various fields, including academia and business organisations. It has a much bigger community than PyTorch, which means that it is easier to find resources or solutions for TensorFlow. There is a vast amount of tutorials, code, and support for TensorFlow, while PyTorch, being the newer of the two, still lacks some of these benefits.

4 - Visualisation

Visualisation plays an important role when presenting any project in an organisation. TensorFlow has TensorBoard for visualising Machine Learning models, which helps during training and makes it easier to spot errors quickly. It is a real-time representation of a model's graphs that not only shows the graphical structure but also plots accuracy curves in real time. PyTorch lacks a comparable built-in tool.

5 - Defining Computational Graphs

In TensorFlow, defining a computational graph is a lengthy process, because you have to build the graph and then run the computations within sessions. You also have to use other constructs such as placeholders, variable scoping, and so on. On the other hand, PyTorch wins this point as it has dynamic computation graphs, which help in building graphs dynamically. Here, the graph is built at every point of execution and you can manipulate it at run-time.
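As a small illustration of what "dynamic" means here (a sketch, not taken from the article), the structure of the graph below depends on a value that is only known at run-time:

import torch

def dynamic_forward(x, weight, steps):
    out = x
    for _ in range(steps):              # the number of operations is decided at run-time
        out = torch.relu(out @ weight)  # the graph grows as the loop executes
    return out

x = torch.randn(4, 8)
w = torch.randn(8, 8, requires_grad=True)
loss = dynamic_forward(x, w, steps=3).sum()
loss.backward()                         # autograd differentiates whatever graph was built
print(w.grad.shape)                     # torch.Size([8, 8])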

6 - Debugging

Because PyTorch builds its computation graph dynamically, debugging is painless. You can easily use Python debugging tools like pdb or ipdb; for instance, you can put "pdb.set_trace()" at any line of code and then step through further computations, pinpoint the cause of errors, and so on. For TensorFlow, you have to use the TensorFlow debugger tool, tfdbg, which lets you view the internal structure and state of running TensorFlow graphs during training and inference.

7 - Deployment

For now, deployment in TensorFlow is much better supported than in PyTorch. It has the advantage of TensorFlow Serving, a flexible, high-performance serving system designed for deploying Machine Learning models in production environments. In PyTorch, you can instead serve models with Flask, a Python microframework.

8 - Documentation

The documentation of both frameworks is broadly available, as there are examples and tutorials in abundance for both libraries. You could say it is a tie between the two frameworks.


9 - Serialisation

Serialisation in TensorFlow can be counted as one of the advantages for users of this framework. You can save your entire graph as a protocol buffer and later load it in other supported languages; PyTorch lacks an equivalent feature.
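As a rough sketch of what this looks like in TensorFlow (using the TF 2.x Keras API; the model and path are made up for illustration):

import tensorflow as tf

# Build and save a trivial model as a SavedModel directory (protocol-buffer based)
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.save("exported_model")                       # writes saved_model.pb plus variables/

# The directory can later be reloaded here, or served from other languages and runtimes
restored = tf.keras.models.load_model("exported_model")
restored.summary()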

10 - Device Management

By default, TensorFlow maps nearly all of the GPU memory of all GPUs visible to the process, which can be wasteful, but it also means TensorFlow automatically assumes you want to run your code on a GPU thanks to its well-set defaults, resulting in reasonable device management out of the box. On the other hand, PyTorch keeps track of the currently selected GPU, and all the CUDA tensors you create are allocated on that device.
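A brief sketch of both behaviours (TF 2.x and current PyTorch APIs; treat this as illustrative rather than the article's own code):

import tensorflow as tf
import torch

# TensorFlow: opt out of the grab-all-GPU-memory default, per visible GPU
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

# PyTorch: memory is allocated per tensor on the currently selected device
if torch.cuda.is_available():
    device = torch.device("cuda")
    x = torch.randn(3, 3, device=device)   # only this tensor's memory is claimed
    print(torch.cuda.current_device(), x.device)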