Codeless ML with TensorFlow and AI Platform

Building an end-to-end machine learning pipeline without writing any ML code.

Originally published by Gad Benram at blog.doit-intl.com

Advances in AI frameworks enable developers to create and deploy deep learning models with as little effort as clicking a few buttons on the screen. Using a UI or an API based on TensorFlow Estimators, models can be built and served without writing a single line of machine learning code.

Seventy years ago, only a handful of experts knew how to create computer programs, because programming required deep theoretical and technical specialization. Over the years, we have built ever-higher levels of abstraction and encapsulation around programming, allowing less specialized personnel to create software with very basic tools (see Wix, for example). The same process is happening today with machine learning, only much faster. In this blog post we will write a simple script that generates a full machine learning pipeline.

Truly codeless?

This post contains two types of code. The first is a SQL query to generate the dataset; this part could be replaced by tools like Google Cloud Dataprep. The other type involves API calls using a Python client library, all of which are also available through the AI Platform UI. When I say codeless, I mean that at no point will you need to import TensorFlow or other ML libraries.

In this demo, I will use the Chicago Taxi Trips open dataset in Google BigQuery to predict the travel time of a taxi based on the pickup location, the desired drop-off, and the ride start time. The model will be trained and deployed using Google Cloud services that wrap TensorFlow.

The entire code sample can be found in this GitHub repository.

Extract Features using BigQuery

Based on an EDA shown in this notebook, I created a SQL query to generate a training dataset:

WITH dataset AS (
    SELECT
          EXTRACT(HOUR FROM  trip_start_timestamp) trip_start_hour
        , EXTRACT(DAYOFWEEK FROM  trip_start_timestamp) trip_start_weekday
        , EXTRACT(WEEK FROM  trip_start_timestamp) trip_start_week
        , EXTRACT(DAYOFYEAR FROM  trip_start_timestamp) trip_start_yearday
        , EXTRACT(MONTH FROM  trip_start_timestamp) trip_start_month
        , (trip_miles * 1.60934 ) / ((trip_seconds + .01) / (60 * 60)) trip_speed_kmph
        , trip_miles
        , pickup_latitude
        , pickup_longitude
        , dropoff_latitude
        , dropoff_longitude
        , pickup_community_area
        , dropoff_community_area
        , ST_DISTANCE(
          (ST_GEOGPOINT(pickup_longitude,pickup_latitude)),
          (ST_GEOGPOINT(dropoff_longitude,dropoff_latitude))) air_distance
        , CAST (trip_seconds AS FLOAT64) trip_seconds
    FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips` 
        WHERE RAND() < (3000000/112860054) --sample maximum ~3M records 
                AND  trip_start_timestamp < '2016-01-01'
                AND pickup_location IS NOT NULL
                AND dropoff_location IS NOT NULL)
    SELECT 
         trip_seconds
        , air_distance
        , pickup_latitude
        , pickup_longitude
        , dropoff_latitude
        , dropoff_longitude
        , pickup_community_area
        , dropoff_community_area
        , trip_start_hour
        , trip_start_weekday
        , trip_start_week
        , trip_start_yearday
        , trip_start_month
    FROM dataset
    WHERE trip_speed_kmph BETWEEN 5 AND 90

Feature extraction script

In the repo, you can see how I execute the query using a Python client and export the results to GCS.

Important! For AI Platform to build a model from this data, the first column must be the target variable, and the CSV export must not contain a header row.
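For reference, here is a minimal sketch of that export step, assuming the SQL above is stored in a string named QUERY and reusing the project_id and job_dir (a GCS path) variables that appear later in the post; the dataset and table names are hypothetical, and the exact implementation lives in the repo:

from google.cloud import bigquery

bq_client = bigquery.Client(project=project_id)
destination = bigquery.TableReference.from_string(
    '{}.taxi_dataset.training_data'.format(project_id))  # hypothetical dataset/table

# Materialize the query results into a staging table
query_config = bigquery.QueryJobConfig(destination=destination,
                                       write_disposition='WRITE_TRUNCATE')
bq_client.query(QUERY, job_config=query_config).result()

# Export as headerless CSV; the query already puts the target
# variable (trip_seconds) in the first column
extract_config = bigquery.ExtractJobConfig(print_header=False)
bq_client.extract_table(destination,
                        '{}/processed_data/train.csv'.format(job_dir),
                        job_config=extract_config).result()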

Submit a hyper-parameter tuning job and deploy

After I have my dataset containing a few hundred thousand rides, I define a simple neural network architecture based on the TensorFlow Estimator API, along with a parameter space to search. This specific spec will create a neural network with three hidden layers that solves a regression task (predicting the expected trip time). It will launch 50 trials to search for optimal settings of the learning rate, the regularization factors, and the maximum number of steps.

training_inputs = {
        "scaleTier": "CUSTOM",
        "masterType": "standard_gpu",
        "args": [
            "--preprocess",
            "--validation_split=0.2",
            "--model_type=regression",
            "--hidden_units=120,60,60",
            "--batch_size=128",
            "--eval_frequency_secs=128",
            "--optimizer_type=ftrl",
            "--use_wide",
            "--embed_categories",
            "--dnn_learning_rate=0.001",
            "--dnn_optimizer_type=ftrl"
        ],
        "hyperparameters": {
            "goal": "MINIMIZE",
            "params": [
                {
                    "parameterName": "max_steps",
                    "minValue": 100,
                    "maxValue": 60000,
                    "type": "INTEGER",
                    "scaleType": "UNIT_LINEAR_SCALE"
                },
                {
                    "parameterName": "learning_rate",
                    "minValue": 0.0001,
                    "maxValue": 0.5,
                    "type": "DOUBLE",
                    "scaleType": "UNIT_LINEAR_SCALE"
                },
                {
                    "parameterName": "l1_regularization_strength",
                    "maxValue": 1,
                    "type": "DOUBLE",
                    "scaleType": "UNIT_LINEAR_SCALE"
                },
                {
                    "parameterName": "l2_regularization_strength",
                    "maxValue": 1,
                    "type": "DOUBLE",
                    "scaleType": "UNIT_LINEAR_SCALE"
                },
                {
                    "parameterName": "l2_shrinkage_regularization_strength",
                    "maxValue": 1,
                    "type": "DOUBLE",
                    "scaleType": "UNIT_LINEAR_SCALE"
                }
            ],
            "maxTrials": 50,
            "maxParallelTrials": 10,
            "hyperparameterMetricTag": "loss",
            "enableTrialEarlyStopping": True
        },
        "region": "us-central1",
        "jobDir": "{JOB_DIR}",
        "masterConfig": {
            "imageUri": "gcr.io/cloud-ml-algos/wide_deep_learner_gpu:latest"
        }
    }

Given the spec above, I can use a Python client to launch a training job:

from datetime import datetime

def train_hyper_params(cloudml_client, training_inputs):
    """Submits a hyper-parameter tuning job to AI Platform."""
    job_name = 'chicago_travel_time_training_{}'.format(
        datetime.utcnow().strftime('%Y%m%d%H%M%S'))
    project_name = 'projects/{}'.format(project_id)
    job_spec = {'jobId': job_name, 'trainingInput': training_inputs}
    response = cloudml_client.projects().jobs().create(
        body=job_spec, parent=project_name).execute()
    print(response)

I use the API client to monitor the job run and, when the job is done, deploy and test the model.
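The monitoring part is a simple polling loop. A minimal sketch, assuming the same discovery client and the projects.jobs.get method (the states checked below are the terminal states AI Platform reports):

import time

def wait_for_job(cloudml_client, job_name):
    """Polls the AI Platform job until it reaches a terminal state."""
    name = 'projects/{}/jobs/{}'.format(project_id, job_name)
    while True:
        job = cloudml_client.projects().jobs().get(name=name).execute()
        if job['state'] in ('SUCCEEDED', 'FAILED', 'CANCELLED'):
            return job
        time.sleep(60)  # poll once a minute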

def create_model(cloudml_client):
    """
    Creates a Model entity in AI Platform
    :param cloudml_client: discovery client
    """
    models = cloudml_client.projects().models()
    create_spec = {'name': model_name}

    models.create(body=create_spec,
                  parent=project_name).execute()

def deploy_version(cloudml_client, job_results):
    """
    Deploys the best trial's model to AI Platform
    :param cloudml_client: discovery client
    :param job_results: response of the finished AI Platform job
    """
    models = cloudml_client.projects().models()

    training_outputs = job_results['trainingOutput']
    version_spec = {
        "name": model_version,
        "isDefault": False,
        "runtimeVersion": training_outputs['builtInAlgorithmOutput']['runtimeVersion'],

        # Assuming the trials are sorted by performance (best is first)
        "deploymentUri": training_outputs['trials'][0]['builtInAlgorithmOutput']['modelPath'],
        "framework": training_outputs['builtInAlgorithmOutput']['framework'],
        "pythonVersion": training_outputs['builtInAlgorithmOutput']['pythonVersion'],
        "autoScaling": {
            'minNodes': 0
        }
    }

    versions = models.versions()
    response = versions.create(
        body=version_spec,
        parent='{}/models/{}'.format(project_name, model_name)).execute()
    return response

With this, I have deployed a full machine learning pipeline using only API calls.
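Tied together, the whole pipeline is just a handful of calls. An illustrative sequence, assuming train_hyper_params is adjusted to return its job_name and wait_for_job is the hypothetical poller sketched above:

# Illustrative orchestration of the functions defined in this post
cloudml_client = discovery.build('ml', 'v1')

job_name = train_hyper_params(cloudml_client, training_inputs)  # assumed to return job_name
job_results = wait_for_job(cloudml_client, job_name)

create_model(cloudml_client)
deploy_version(cloudml_client, job_results)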

Get predictions

To get predictions, I load some of the test set records into memory and send them to the deployed version for inference:

import pandas as pd
from googleapiclient import discovery

def validate_model():
    """
    Function to validate the model results
    """
    df_val = pd.read_csv('{}/processed_data/test.csv'.format(job_dir))

    # Submit only 10 samples to the server, ignore the first column (=target column)
    instances = [", ".join(x) for x in df_val.iloc[:10, 1:].astype(str).values.tolist()]
    service = discovery.build('ml', 'v1')
    version_name = 'projects/{}/models/{}'.format(project_id, model_name)

    if model_version is not None:
        version_name += '/versions/{}'.format(model_version)

    response = service.projects().predict(
        name=version_name,
        body={'instances': instances}
    ).execute()

    if 'error' in response:
        raise RuntimeError(response['error'])

    return response['predictions']

Getting predictions

