In this story, I have developed a working tutorial on deploying a model trained with the pycaret library to Microsoft's Azure cloud platform. In my previous article, Build with PyCaret: Deploy on Google Cloud Platform, we learned how to deploy a model on Google Cloud. We will reuse the same example in this tutorial and deploy the model on Microsoft Azure.


We will learn to deploy a model trained with pycaret to Microsoft Azure. pycaret currently supports deploying a trained model to AWS, but not to GCP or Azure. I followed code practices similar to those the library uses to deploy and load models with AWS in order to deploy the model to Microsoft Azure.
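To make the idea concrete, below is a minimal sketch using the azure-storage-blob package (v12 API): save the model locally as a pickle file, push it to Blob Storage, and pull it back later, mirroring what pycaret's deploy_model()/load_model() do with AWS S3. The container name, blob name, file paths, and the AZURE_STORAGE_CONNECTION_STRING environment variable are my assumptions for illustration, not part of pycaret's API.

# A minimal sketch (my assumptions, not pycaret's API): upload/download a
# pickled model to Azure Blob Storage, mirroring what pycaret's
# deploy_model()/load_model() do with AWS S3.
# Requires: pip install azure-storage-blob, plus a storage connection
# string exported as AZURE_STORAGE_CONNECTION_STRING.
import os
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(
    os.environ['AZURE_STORAGE_CONNECTION_STRING'])

# Hypothetical container and blob names; run
# service.create_container('pycaret-models') once if the container is new.
blob = service.get_blob_client(container='pycaret-models', blob='model.pkl')

# Upload the locally saved model file.
with open('model.pkl', 'rb') as f:
    blob.upload_blob(f, overwrite=True)

# Later, download it back before loading it with pycaret.
with open('model_downloaded.pkl', 'wb') as f:
    f.write(blob.download_blob().readall())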

PyCaret is an open source, low-code machine learning library in Python that allows you to go from preparing your data to deploying your model within seconds in your choice of notebook environment. (source)

PyCaret is an autoML framework for "citizen data scientists," a term used in its official documentation and homepage. It is a relatively new library, released for public use a few months ago and still under active development. After going through some of the source code, I realized the current public release lacks support for deploying trained/finalized models to the Google and Azure cloud platforms, although it does support deployment to Amazon Web Services.

Microsoft Azure is another very popular cloud-services platform, targeting a somewhat different market than Google and AWS. Building on an already huge customer base, Azure has captured a sizeable market share. In my opinion, Microsoft Azure is probably the best starting point for citizen data scientists. Let us learn how to deploy a model on Microsoft Azure.

For this tutorial, we will use the Regression Tutorial (REG101) — Level Beginner for model training.

Installing pycaret

!pip install pycaret

Mounting Google Drive

We need to mount Google Drive to read the data in the Colab environment. Below is the simplest way to mount it; you will be asked to enter the token generated by your access procedure. Here is the link to the article about mounting gdrive.

We will save models locally on Google drive for this tutorial.

from google.colab import drive
drive.mount('/content/drive')

Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).

Let us create a directory to save models locally.

# Create directory on google drive to save models locally. You can use temp paths.
import os
model_dir = '/content/drive/My Drive/azure_deploy_model/'
os.makedirs(model_dir, exist_ok=True)

Getting the Data

You can download the data from the original source found here and load it using pandas (Learn How), or you can use PyCaret's data repository to load the data using the get_data() function (this requires an internet connection).
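If you prefer working from a local copy, here is a minimal sketch using pandas; the file path is an assumption, pointing at wherever you saved the download.

# Hypothetical local path to the downloaded CSV.
import pandas as pd
dataset = pd.read_csv('diamond.csv')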

from pycaret.datasets import get_data
dataset = get_data('diamond')


#check the shape of data
dataset.shape

(6000, 8)

We withhold 10% of the samples as unseen data; these will be used later for predictions with the finalized model.

data = dataset.sample(frac=0.9, random_state=786).reset_index(drop=True)
data_unseen = dataset.drop(data.index).reset_index(drop=True)

print('Data for Modeling: ' + str(data.shape))
print('Unseen Data For Predictions: ' + str(data_unseen.shape))

Data for Modeling: (5400, 8)
Unseen Data For Predictions: (600, 8)

Setting up Environment in PyCaret

Let us set up the modelling pipeline using pycaret's setup() function.

from pycaret.regression import *

exp_reg101 = setup(data = data, target = 'Price', session_id=123)

Setup Successfully Completed!


Create a Light GBM Model

For this tutorial, we model the data using Light GBM, one of the many models implemented in pycaret. You can choose any model you like, but model selection is not the focus of this tutorial.

lightgbm = create_model('lightgbm')


Tune Light Gradient Boosting Machine

Let us tune the model's hyperparameters; this is called tuning the model in pycaret's terminology.

# Recent pycaret versions expect the trained model object;
# early releases also accepted the string 'lightgbm'.
tuned_lightgbm = tune_model(lightgbm)


Residual Plot

The plot below shows the residual errors of the model.

plot_model(tuned_lightgbm)


Prediction Error Plot

Let us plot the prediction errors against the true values of the target.

plot_model(tuned_lightgbm, plot = 'error')


Feature Importance Plot

The feature importance plot is very informative: it shows the contribution of each feature to the model.

plot_model(tuned_lightgbm, plot='feature')


Another way to analyze the performance of models is to use the evaluate_model() function which displays a user interface for all of the available plots for a given model. It internally uses the plot_model() function.

evaluate_model(tuned_lightgbm)

interactive(children=(ToggleButtons(description='Plot Type:', icons=('',), options=(('Hyperparameters', 'param…

Predict on test / hold-out Sample

Calling predict_model() without a data argument generates predictions on the hold-out sample created during setup() and prints the evaluation metrics.

predict_model(tuned_lightgbm);
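Before the model can be uploaded to Azure, it has to exist on disk. Below is a minimal sketch using pycaret's finalize_model() and save_model(), writing into the model_dir we created earlier; the file name is my choice.

# Retrain on the entire dataset, including the hold-out sample.
final_lightgbm = finalize_model(tuned_lightgbm)

# Persist the full pipeline; 'Final_LightGBM_Model' is a hypothetical
# name, and save_model() appends the '.pkl' extension itself.
save_model(final_lightgbm, model_dir + 'Final_LightGBM_Model')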

