Using Google’s Natural Language API with Python

How to make your own sentiment analyzer using Python and Google’s Natural Language API

Imagine you are a product owner who wants to know what people are saying about your product on social media. Maybe your company launched a new product and you want to know how people reacted to it. You might want to use a sentiment analyzer like MonkeyLearn or Talkwalker. But wouldn’t it be cool if we could make our own sentiment analyzer? Let’s make it then!

In this tutorial, we are going to make a Telegram Bot that will do the sentiment analysis of tweets related to the keyword that we define.

If this is your first time building a Telegram Bot, you might want to read this post first.

Getting started

1. Install the libraries

We are going to use tweepy to gather the tweet data. We will use nltk to help us clean the tweets. Google Natural Language API will do the sentiment analysis. python-telegram-bot will send the result through Telegram chat.

pip3 install tweepy nltk google-cloud-language python-telegram-bot


2. Get Twitter API Keys

To be able to gather the tweets from Twitter, we need to create a developer account to get the Twitter API Keys first.

Go to Twitter Developer website, and create an account if you don’t have one.

Open Apps page, click “Create an app”, fill out the form and click “Create”.

Click on “Keys and tokens” tab, copy the API Key and API Secret Key in the “Consumer API keys” section.

Click the “Create” button under “Access token & access token secret” section. Copy the Access Token and Access Token Secret that have been generated.

Great! Now you should have four keys — API Key, API Secret Key, Access Token, and Access Token Secret. Save those keys for later use.

3. Enable Google Natural Language API

We need to enable the Google Natural Language API first if we want to use the service.

Go to Google Developers Console and create a new project (or select the one you have).

In the project dashboard, click “ENABLE APIS AND SERVICES”, and search for Cloud Natural Language API.

Click “ENABLE” to enable the API.

4. Create service account key

If we want to use Google Cloud services like Google Natural Language, we need a service account key. This is like our credential to use Google’s services.

Go to Google Developers Console, click “Credentials” tab, choose “Create credentials” and click “Service account key”.

Choose “App Engine default service account” and JSON as key type, then click “Create”.

A .json file will be downloaded automatically; rename it creds.json.

Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of our creds.json file in the terminal.

export GOOGLE_APPLICATION_CREDENTIALS='[PATH_TO_CREDS.JSON]'
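
Alternatively, if you prefer to set the credential path from inside the Python script rather than exporting it in the shell, here is a minimal sketch (the path is a placeholder):

import os

# Point the Google client library at the downloaded service account key.
# This must run before the LanguageServiceClient is created.
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/path/to/creds.json'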


If everything is good, then it’s time to write our program.

Write the program

This program will gather all the tweets containing the defined keyword in the last 24 hours with a maximum of 50 tweets. Then it will analyze the tweets’ sentiments one by one. We will send the result (average sentiment score) through Telegram chat.

This is a simple workflow of our program.

connect to the Twitter API -> search tweets based on the keyword -> clean all of the tweets -> get each tweet’s sentiment score -> send the result

Let’s write a function for each step of the flow.

1. Connect to the Twitter API

The first thing that we need to do is gather the tweets’ data, so we have to connect to the Twitter API first.

Import the tweepy library.

import tweepy

Define the keys that we generated earlier.

ACC_TOKEN = 'YOUR_ACCESS_TOKEN'
ACC_SECRET = 'YOUR_ACCESS_TOKEN_SECRET'
CONS_KEY = 'YOUR_CONSUMER_API_KEY'
CONS_SECRET = 'YOUR_CONSUMER_API_SECRET_KEY'

Make a function called authentication to connect to the API; it takes the four keys as parameters.

def authentication(cons_key, cons_secret, acc_token, acc_secret):
    auth = tweepy.OAuthHandler(cons_key, cons_secret)
    auth.set_access_token(acc_token, acc_secret)
    api = tweepy.API(auth)
    return api

2. Search the tweets

We can search the tweets using two criteria: based on time or based on quantity. If it’s based on time, we define the time interval; if it’s based on quantity, we define the total number of tweets that we want to gather. Since we want to gather the tweets from the last 24 hours with a maximum of 50 tweets, we will use both criteria.

Since we want to gather the tweets from the last 24 hours, let’s take yesterday’s date as our time parameter.

from datetime import datetime, timedelta
today_datetime = datetime.today().now()
yesterday_datetime = today_datetime - timedelta(days=1)
today_date = today_datetime.strftime('%Y-%m-%d')
yesterday_date = yesterday_datetime.strftime('%Y-%m-%d')

Connect to the Twitter API using a function we defined before.

api = authentication(CONS_KEY,CONS_SECRET,ACC_TOKEN,ACC_SECRET)


Define our search parameters. q is where we define our keyword, since is the start date for our search, result_type='recent' means we are going to take the newest tweets, lang='en' takes English tweets only, and items(total_tweets) is where we define the maximum number of tweets that we are going to take.

search_result = tweepy.Cursor(api.search, 
                              q=keyword, 
                              since=yesterday_date,
                              result_type='recent', 
                              lang='en').items(total_tweets)

Wrap that code in a function called search_tweets with keyword and total_tweets as the parameters.

def search_tweets(keyword, total_tweets):
    today_datetime = datetime.today().now()
    yesterday_datetime = today_datetime - timedelta(days=1)
    today_date = today_datetime.strftime('%Y-%m-%d')
    yesterday_date = yesterday_datetime.strftime('%Y-%m-%d')
    api = authentication(CONS_KEY,CONS_SECRET,ACC_TOKEN,ACC_SECRET)
    search_result = tweepy.Cursor(api.search, 
                                  q=keyword, 
                                  since=yesterday_date, 
                                  result_type='recent', 
                                  lang='en').items(total_tweets)
    return search_result

3. Clean the tweets

Before we analyze the tweets’ sentiment, we need to clean the tweets a little bit so the Google Natural Language API can identify them better.

We will use the nltk and regex libraries to help us in this process.

import re
from nltk.tokenize import WordPunctTokenizer

We remove the username in every tweet; in practice, we strip everything that begins with @, and we use a regex to do it.

user_removed = re.sub(r'@[A-Za-z0-9]+','',tweet.decode('utf-8'))


We also remove links in every tweet.

link_removed = re.sub('https?://[A-Za-z0-9./]+','',user_removed)


Numbers are also deleted from all of the tweets.

number_removed = re.sub('[^a-zA-Z]',' ',link_removed)


Finally, convert all of the characters to lowercase, then remove every unnecessary space.

lower_case_tweet = number_removed.lower()
tok = WordPunctTokenizer()
words = tok.tokenize(lower_case_tweet)
clean_tweet = (' '.join(words)).strip()

Wrap that code into a function called clean_tweets with tweet as our parameter.

def clean_tweets(tweet):
    user_removed = re.sub(r'@[A-Za-z0-9]+','',tweet.decode('utf-8'))
    link_removed = re.sub('https?://[A-Za-z0-9./]+','',user_removed)
    number_removed = re.sub('[^a-zA-Z]', ' ', link_removed)
    lower_case_tweet= number_removed.lower()
    tok = WordPunctTokenizer()
    words = tok.tokenize(lower_case_tweet)
    clean_tweet = (' '.join(words)).strip()
    return clean_tweet
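
For example, feeding a made-up tweet through the function (purely to illustrate what it returns):

raw_tweet = '@user Check https://t.co/abc 123!'.encode('utf-8')
print(clean_tweets(raw_tweet))  # prints: check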

4. Get tweet’s sentiment

To be able to get a tweet’s sentiment, we will use Google Natural Language API.

The API provides Sentiment Analysis, Entities Analysis, and Syntax Analysis. We will only use the Sentiment Analysis for this tutorial.

In Google’s Sentiment Analysis, there are a score and a magnitude. The score of the sentiment ranges from -1.0 (very negative) to 1.0 (very positive). The magnitude is the strength of the sentiment and ranges from 0 to infinity.

For the sake of simplicity, this tutorial only considers the score. If you are thinking of doing deeper NLP analysis, you should consider the magnitude too.

Import the Google Natural Language library.

from google.cloud import language
from google.cloud.language import enums
from google.cloud.language import types

Make a function called get_sentiment_score which takes tweet as the parameter, and returns the sentiment score.

def get_sentiment_score(tweet):
    client = language.LanguageServiceClient()
    document = types\
               .Document(content=tweet,
                         type=enums.Document.Type.PLAIN_TEXT)
    sentiment_score = client\
                      .analyze_sentiment(document=document)\
                      .document_sentiment\
                      .score
    return sentiment_score
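
If you later want the magnitude as well, the same response object exposes it. Here is a minimal variant of the function above (an optional sketch reusing the imports we already have, not something the bot needs):

def get_sentiment(tweet):
    client = language.LanguageServiceClient()
    document = types.Document(content=tweet,
                              type=enums.Document.Type.PLAIN_TEXT)
    # The response carries both score and magnitude.
    sentiment = client.analyze_sentiment(document=document).document_sentiment
    return sentiment.score, sentiment.magnitude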

5. Analyze the tweets

Let’s make a function that loops through the list of tweets we get from the search_tweets function and gets each tweet’s sentiment score using the get_sentiment_score function. Then we’ll calculate the average. The average score will determine whether the given keyword has a positive, neutral, or negative sentiment.

Initialize score to 0, then use the search_tweets function to get the tweets related to the keyword that we define.

score = 0
tweets = search_tweets(keyword, total_tweets)

Loop through the list of tweets and clean each one using the clean_tweets function that we created before.

for tweet in tweets:
    cleaned_tweet = clean_tweets(tweet.text.encode('utf-8'))


Get the sentiment score using the get_sentiment_score function, and add sentiment_score to the running score.

for tweet in tweets:
    cleaned_tweet = clean_tweets(tweet.text.encode('utf-8'))
    sentiment_score = get_sentiment_score(cleaned_tweet)
    score += sentiment_score

Let’s print out each tweet and its sentiment so we can see the progress detail in the terminal.

for tweet in tweets:
    cleaned_tweet = clean_tweets(tweet.text.encode('utf-8'))
    sentiment_score = get_sentiment_score(cleaned_tweet)
    score += sentiment_score
    print('Tweet: {}'.format(cleaned_tweet))
    print('Score: {}\n'.format(sentiment_score))

Calculate the average score and assign it to the final_score variable. Wrap all of the code into the analyze_tweets function, with keyword and total_tweets as the parameters.

def analyze_tweets(keyword, total_tweets):
    score = 0
    tweets = search_tweets(keyword, total_tweets)
    for tweet in tweets:
        cleaned_tweet = clean_tweets(tweet.text.encode('utf-8'))
        sentiment_score = get_sentiment_score(cleaned_tweet)
        score += sentiment_score
        print('Tweet: {}'.format(cleaned_tweet))
        print('Score: {}\n'.format(sentiment_score))
    final_score = round((score / float(total_tweets)),2)
    return final_score

6. Send the tweet’s sentiment score

Let’s make the last function in the workflow. This function will take the user’s keyword and calculate the average sentiment score. Then we’ll send the result through the Telegram Bot.

Get the keyword from the user.

keyword = update.message.text

Use the analyze_tweets function to get the final score, passing keyword as the parameter and setting total_tweets to 50 since we want to gather 50 tweets.

final_score = analyze_tweets(keyword, 50)


We define whether a given score is considered negative, neutral, or positive using Google’s suggested score ranges, as implemented in the code below.

if final_score <= -0.25:
    status = 'NEGATIVE | ❌'
elif final_score <= 0.25:
    status = 'NEUTRAL | 🔶'
else:
    status = 'POSITIVE | ✅'

Lastly, send the final_score and the status through the Telegram Bot.

bot.send_message(chat_id=update.message.chat_id,
                 text='Average score for '
                       + str(keyword) 
                       + ' is ' 
                       + str(final_score) 
                       + ' | ' + status)

Wrap the code into a function called send_the_result.

def send_the_result(bot, update):
    keyword = update.message.text
    final_score = analyze_tweets(keyword, 50)
    if final_score <= -0.25:
        status = 'NEGATIVE | ❌'
    elif final_score <= 0.25:
        status = 'NEUTRAL | 🔶'
    else:
        status = 'POSITIVE | ✅'
    bot.send_message(chat_id=update.message.chat_id,
                     text='Average score for '
                           + str(keyword) 
                           + ' is ' 
                           + str(final_score) 
                           + ' | ' + status)

7. Main program

Lastly, create another function called main to run our program. Don’t forget to change YOUR_TOKEN to your bot’s token.

from telegram.ext import Updater, MessageHandler, Filters
def main():
    updater = Updater('YOUR_TOKEN')
    dp = updater.dispatcher
    dp.add_handler(MessageHandler(Filters.text, send_the_result))
    updater.start_polling()
    updater.idle()
if __name__ == '__main__':
    main()

In the end, your code should look like this:

import tweepy
import re
from telegram.ext import Updater, MessageHandler, Filters
from google.cloud import language
from google.cloud.language import enums
from google.cloud.language import types
from datetime import datetime, timedelta
from nltk.tokenize import WordPunctTokenizer
ACC_TOKEN = 'YOUR_ACCESS_TOKEN'
ACC_SECRET = 'YOUR_ACCESS_TOKEN_SECRET'
CONS_KEY = 'YOUR_CONSUMER_API_KEY'
CONS_SECRET = 'YOUR_CONSUMER_API_SECRET_KEY'
def authentication(cons_key, cons_secret, acc_token, acc_secret):
    auth = tweepy.OAuthHandler(cons_key, cons_secret)
    auth.set_access_token(acc_token, acc_secret)
    api = tweepy.API(auth)
    return api
def search_tweets(keyword, total_tweets):
    today_datetime = datetime.today().now()
    yesterday_datetime = today_datetime - timedelta(days=1)
    today_date = today_datetime.strftime('%Y-%m-%d')
    yesterday_date = yesterday_datetime.strftime('%Y-%m-%d')
    api = authentication(CONS_KEY,CONS_SECRET,ACC_TOKEN,ACC_SECRET)
    search_result = tweepy.Cursor(api.search, 
                                  q=keyword, 
                                  since=yesterday_date, 
                                  result_type='recent', 
                                  lang='en').items(total_tweets)
    return search_result
def clean_tweets(tweet):
    user_removed = re.sub(r'@[A-Za-z0-9]+','',tweet.decode('utf-8'))
    link_removed = re.sub('https?://[A-Za-z0-9./]+','',user_removed)
    number_removed = re.sub('[^a-zA-Z]', ' ', link_removed)
    lower_case_tweet= number_removed.lower()
    tok = WordPunctTokenizer()
    words = tok.tokenize(lower_case_tweet)
    clean_tweet = (' '.join(words)).strip()
    return clean_tweet
def get_sentiment_score(tweet):
    client = language.LanguageServiceClient()
    document = types\
               .Document(content=tweet,
                         type=enums.Document.Type.PLAIN_TEXT)
    sentiment_score = client\
                      .analyze_sentiment(document=document)\
                      .document_sentiment\
                      .score
    return sentiment_score
def analyze_tweets(keyword, total_tweets):
    score = 0
    tweets = search_tweets(keyword,total_tweets)
    for tweet in tweets:
        cleaned_tweet = clean_tweets(tweet.text.encode('utf-8'))
        sentiment_score = get_sentiment_score(cleaned_tweet)
        score += sentiment_score
        print('Tweet: {}'.format(cleaned_tweet))
        print('Score: {}\n'.format(sentiment_score))
    final_score = round((score / float(total_tweets)),2)
    return final_score
def send_the_result(bot, update):
    keyword = update.message.text
    final_score = analyze_tweets(keyword, 50)
    if final_score <= -0.25:
        status = 'NEGATIVE | ❌'
    elif final_score <= 0.25:
        status = 'NEUTRAL | 🔶'
    else:
        status = 'POSITIVE | ✅'
    bot.send_message(chat_id=update.message.chat_id,
                     text='Average score for '
                           + str(keyword) 
                           + ' is ' 
                           + str(final_score) 
                           + ' | ' + status)
def main():
    updater = Updater('YOUR_TOKEN')
    dp = updater.dispatcher
    dp.add_handler(MessageHandler(Filters.text, send_the_result))
    updater.start_polling()
    updater.idle()
if __name__ == '__main__':
    main()

Save the file and name it main.py, then run the program.

python3 main.py

Go to your Telegram bot by accessing this URL: https://telegram.me/YOUR_BOT_USERNAME. Type any product, person name, or whatever you want and send it to your bot. If everything runs, there should be a detailed sentiment score for each tweet in the terminal. The bot will reply with the average sentiment score.

The pictures below show an example where I type valentino rossi and send it to the bot.

If you managed to follow the steps until the end of this tutorial, that’s awesome! You have your sentiment analyzer now, how cool is that!?

You can also check out my GitHub to get the code. Please do not hesitate to connect and leave a message on my LinkedIn profile if you want to ask about anything.

Please leave a comment if you think there are any errors in my code or writing.

Thank you and good luck! :)

TensorFlow 2.0 Full Tutorial - Python Neural Networks for Beginners

Learn how to use Tensorflow 2.0 in this full course for beginners. This Python neural network tutorial series will teach how to use Tensorflow 2.0 and demonstrate how to create neural networks with Python and TensorFlow 2.0.

⭐️ Course Contents ⭐️

⌨️ (0:00:00) What is a Neural Network?

⌨️ (0:26:34) Loading & Looking at Data

⌨️ (0:39:38) Creating a Model

⌨️ (0:56:48) Using the Model to Make Predictions

⌨️ (1:07:11) Text Classification P1

⌨️ (1:28:37) What is an Embedding Layer? Text Classification P2

⌨️ (1:42:30) Training the Model - Text Classification P3

⌨️ (1:52:35) Saving & Loading Models - Text Classification P4

⌨️ (2:07:09) How to Install TensorFlow GPU on Linux

Machine Learning Tutorial with Python, Jupyter, KSQL and TensorFlow

Machine Learning With Python, Jupyter, KSQL, and TensorFlow. This post focuses on how the Kafka ecosystem can help solve the impedance mismatch between data scientists, data engineers and production engineers.

Building a scalable, reliable, and performant machine learning (ML) infrastructure is not easy. It takes much more effort than just building an analytic model with Python and your favorite machine learning framework.

Uber, which already runs their scalable and framework-independent machine learning platform Michelangelo for many use cases in production, wrote a good summary:

When Michelangelo started, the most urgent and highest impact use cases were some very high scale problems, which led us to build around Apache Spark (for large-scale data processing and model training) and Java (for low latency, high throughput online serving). This structure worked well for production training and deployment of many models but left a lot to be desired in terms of overhead, flexibility, and ease of use, especially during early prototyping and experimentation [where Notebooks and Python shine].

Uber expanded Michelangelo “to serve any kind of Python model from any source to support other Machine Learning and Deep Learning frameworks like PyTorch and TensorFlow [instead of just using Spark for everything].”

So why did Uber (and many other tech companies) build its own platform and framework-independent machine learning infrastructure?

The posts How to Build and Deploy Scalable Machine Learning in Production with Apache Kafka and Using Apache Kafka to Drive Cutting-Edge Machine Learning describe the benefits of leveraging the Apache Kafka® ecosystem as a central, scalable, and mission-critical nervous system. It allows real-time data ingestion, processing, model deployment, and monitoring in a reliable and scalable way.

This post focuses on how the Kafka ecosystem can help solve the impedance mismatch between data scientists, data engineers, and production engineers. By leveraging it to build your own scalable machine learning infrastructure and also make your data scientists happy, you can solve the same problems for which Uber built its own ML platform, Michelangelo.


Impedance Mismatch Between Data Scientists, Data Engineers and Production Engineers

Based on what I’ve seen in the field, an impedance mismatch between data scientists, data engineers, and production engineers is the main reason why companies struggle to bring analytic models into production to add business value.

The following diagram illustrates the different required steps and corresponding roles as part of the impedance mismatch in a machine learning lifecycle:

Impedance mismatch between model development and model deployment

Data scientists love Python, period. Therefore, the majority of machine learning/deep learning frameworks focus on Python APIs. Both the most stable and most cutting-edge APIs, as well as the majority of examples and tutorials, use Python APIs. In addition to Python support, there is typically support for other programming languages, including JavaScript for web integration and Java for platform integration, though oftentimes with fewer features and less maturity. No matter what other platforms are supported, chances are very high that your data scientists will build and train their analytic models with Python.

There is an impedance mismatch between model development using Python and its tool stack, and a scalable, reliable data platform with the low latency, high throughput, zero data loss, and 24/7 availability required for data ingestion, preprocessing, model deployment, and monitoring at scale. Python, in practice, is not the technology best known for meeting these requirements. However, it is a great client for a data platform like Apache Kafka.

The problem is that writing the machine learning source code to train an analytic model with Python and the machine learning framework of your choice is just a very small part of a real-world machine learning infrastructure. You need to think about the whole model lifecycle. The following image represents this hidden technical debt in machine learning systems (showing how small the “ML code” part is):

Thus, you need to train the model and deploy it to a scalable production environment in order to make reliable use of it. This environment can either be built natively around the Kafka ecosystem, or you could use Kafka just for ingestion into another storage and processing cluster such as HDFS or AWS S3 with Spark. There are many tradeoffs between Kafka, Spark, and several other scalable infrastructures, but that discussion is out of scope for this post. For now, we’ll focus on Kafka.

Different solutions in the industry solve certain parts of the impedance mismatch between data scientists, data engineers, and production engineers. Let’s take a look at some of these options:

  • Official standards like Open Neural Network Exchange (ONNX), Portable Format for Analytics (PFA) or Predictive Model Markup Language (PMML): A data scientist builds a model with Python. The Java developer imports it in Java for production deployment. This approach supports different frameworks, products, and cloud services. You do not have to rely on the same framework or product for training and model deployment. Consider ONNX, a relatively new standard for deep learning — it already supports TensorFlow, PyTorch, and MXNet. These standards have pros and cons. Some people like and use them; many don’t.
  • Developer-focused frameworks like Deeplearning4j: These frameworks are built for software engineers to build the whole machine learning lifecycle on the Java platform, not just model deployment and monitoring, but also preprocessing and training. You can still import other models if you want (e.g., Deeplearning4j lets you import Keras models). This option is great if you: a) have data scientists who can write Java or b) have software engineers who understand machine learning concepts enough to build analytic models.
  • AutoML for building analytic models with limited machine learning experience: This way, domain experts can build and deploy analytic models with a button click. The AutoML engine provides an interface for others to use the model for predictions.
  • Embedding model binaries into applications: The output of model training is an analytic model. For instance, you can write Python code to train and generate a TensorFlow model. Depending on the framework, the output can be text files, Java source code, or binary files. For example, TensorFlow generates a model artifact with Protobuf, JSON, and other files. No matter what format the output of your machine learning framework is, it can be embedded into applications to use for predictions via the framework’s API (e.g., you can load a TensorFlow model from a Java application through TensorFlow’s Java API). A rough Python sketch of this option follows the list.
  • Managed model server in the public cloud like Google Cloud Machine Learning Engine: The cloud provider takes over the burden of availability and reliability. The data scientist “just” deploys its trained model, and production engineers can access it. The key tradeoff is that this requires RPC communication to perform model inference.
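
As a rough illustration of the "embedding model binaries" option above, here is a minimal Python sketch; the model file name and input shape are placeholders, not taken from any real project:

from tensorflow import keras
import numpy as np

# Load a previously trained and serialized Keras model artifact.
model = keras.models.load_model('fraud_model.h5')

# Score one made-up feature vector; the shape must match the training data.
sample = np.random.rand(1, 30).astype('float32')
print(model.predict(sample))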

While all these solutions help data scientists, data engineers, and production engineers to work better together, there are underlying challenges within the hidden debts:

  • Data collection (i.e., integration) and preprocessing need to run at scale

  • Configuration needs to be shared and automated for continuous builds and integration tests

  • The serving and monitoring infrastructure needs to fit into your overall enterprise architecture and tool stack

So how can the Kafka ecosystem help here?

Apache Kafka as a Key Component for Solving the Impedance Mismatch

In many cases, it is best to provide experts with the tools they like and know well. The challenge is to combine the different toolsets and still build an integrated system, as well as a continuous, scalable, machine learning workflow. Therefore, Kafka is not competitive but complementary to the discussed alternatives when it comes to solving the impedance mismatch between the data scientist and developer.

The data engineer builds a scalable integration pipeline using Kafka as infrastructure and Python for integration and preprocessing statements. The data scientist can build their model with Python or any other preferred tool. The production engineer gets the analytic models (either manually or through any automated, continuous integration setup) from the data scientist and embeds them into their Kafka application to deploy it in production. Or, the team works together and builds everything with Java and a framework like Deeplearning4j.

Any option can pair well with Apache Kafka. Pick the pieces you need, whether it’s Kafka core for data transportation, Kafka Connect for data integration, or Kafka Streams/KSQL for data preprocessing. Many components can be used for both model training and model inference. Write once and use in both scenarios as shown in the following diagram:

Leveraging the Apache Kafka ecosystem for a machine learning infrastructure

Monitoring the complete environment in real time and at scale is also a common task for Kafka. A huge benefit is that you only build a highly reliable and scalable pipeline once but use it for both parts of a machine learning infrastructure. And you can use it in any environment: in the cloud, in on-prem datacenters, or at the edges where IoT devices are.

Say you wanted to build one integration pipeline from MQTT to Kafka with KSQL for data preprocessing and use Kafka Connect for data ingestion into HDFS, AWS S3, or Google Cloud Storage, where you do the model training. The same integration pipeline, or at least parts of it, can be reused for model inference. New MQTT input data can directly be used in real time to make predictions.

We just explained various alternatives to solving the impedance mismatch between data scientists and software engineers in Kafka environments. Now, let’s discuss one specific option in the next section, which is probably the most convenient for data scientists: leveraging Kafka from a Jupyter Notebook with KSQL statements and combining it with TensorFlow and Keras to train a neural network.

Data Scientists Combining Python and Jupyter With Scalable Streaming Architectures

Data scientists use tools like Jupyter Notebooks to analyze, transform, enrich, filter, and process data. The preprocessed data is then used to train analytic models with machine learning/deep learning frameworks like TensorFlow.

However, some data scientists do not even know “bread-and-butter” concepts of software engineers, such as version control systems like GitHub or continuous integration tools like Jenkins.

This raises the question of how to combine the Python experience of data scientists with the benefits of Apache Kafka as a battle-tested, highly scalable data processing and streaming platform.

Apache Kafka and KSQL for Data Scientists and Data Engineers

Kafka offers integration options that can be used with Python, like Confluent’s Python Client for Apache Kafka or Confluent REST Proxy for HTTP integration. But this is not really a convenient way for data scientists who are used to quickly and interactively analyzing and preprocessing data before model training and evaluation, where rapid prototyping is the norm.
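
For reference, consuming a Kafka topic with Confluent’s Python client looks roughly like the sketch below; the broker address, group id, and topic name are assumptions:

from confluent_kafka import Consumer

consumer = Consumer({
    'bootstrap.servers': 'localhost:9092',   # assumed local broker
    'group.id': 'notebook-consumer',
    'auto.offset.reset': 'earliest',
})
consumer.subscribe(['payments'])              # hypothetical topic name

try:
    while True:
        msg = consumer.poll(timeout=1.0)      # wait up to one second for a message
        if msg is None:
            continue
        if msg.error():
            print(msg.error())
            continue
        print(msg.value().decode('utf-8'))
finally:
    consumer.close()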

KSQL enables data scientists to take a look at Kafka event streams and implement continuous stream processing from their well-known and loved Python environments like Jupyter by writing simple SQL-like statements for interactive analysis and data preprocessing.

The following Python example executes an interactive query from a Kafka stream leveraging the open source framework ksql-python, which adds a Python layer on top of KSQL’s REST interface. Here are a few lines of the Python code using KSQL from a Jupyter Notebook:
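
The exact notebook code is in the linked documentation; the lines below are a minimal sketch of such a query with ksql-python (the server address is an assumption, and the stream name matches the fraud-detection example used later in this post):

from ksql import KSQLAPI

client = KSQLAPI('http://localhost:8088')   # KSQL server REST endpoint

# Interactive query; LIMIT keeps it from running indefinitely.
for row in client.query('SELECT * FROM creditcardfraud_source LIMIT 5'):
    print(row)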

The result of such a KSQL query is a Python generator object, which you can easily process with other Python libraries. This feels much more Python native and is analogous to NumPy, pandas, scikit-learn and other widespread Python libraries.

Similarly to rapid prototyping with these libraries, you can do interactive queries and data preprocessing with ksql-python. Check out the KSQL quick start and KSQL recipes to understand how to write a KSQL query to easily filter, transform, enrich, or aggregate data. While KSQL is running continuous queries, you can also use it for interactive analysis and use the LIMIT keyword like in ANSI SQL if you just want to get a specific number of rows.

So what’s the big deal? You understand that KSQL can feel Python-native with the ksql-python library, but why use KSQL instead of or in addition to your well-known and favorite Python libraries for analyzing and processing data?

The key difference is that these KSQL queries can also be deployed in production afterwards. KSQL offers you all the features from Kafka under the hood like high scalability, reliability, and failover handling. The same KSQL statement that you use in your Jupyter Notebook for interactive analysis and preprocessing can scale to millions of messages per second. Fault tolerant. With zero data loss and exactly once semantics. This is very important and valuable for bringing together the Python-loving data scientist with the highly scalable and reliable production infrastructure.

Just to be clear: KSQL + Python is not the all-rounder for every data engineering task, and it does not replace the existing Python toolset. But it is a great option in the toolbox of data scientists and data engineers, and it adds new possibilities like getting real-time updates of incoming information as the source data changes or updating a deployed model with a new and improved version.

Jupyter Notebook for Fraud Detection With Python, KSQL, and TensorFlow/Keras

Let’s now take a look at a detailed example using the combination of KSQL and Python. It involves advanced code examples using ksql-python and other widespread components from Python’s machine learning ecosystem, like NumPy, pandas, TensorFlow, and Keras.

The use case is fraud detection for credit card payments. We use a test dataset from Kaggle as a foundation to train an unsupervised autoencoder to detect anomalies and potential fraud in payments. The focus of this example is not just model training, but the whole machine learning infrastructure, including data ingestion, data preprocessing, model training, model deployment, and monitoring. All of this needs to be scalable, reliable, and performant.

For the full running example and more details, see the documentation.

Let’s take a look at a few snippets of the Jupyter Notebook.

Connection to KSQL server and creation of a KSQL stream using Python:

from ksql import KSQLAPI
client = KSQLAPI('http://localhost:8088')

client.create_stream(table_name='creditcardfraud_source',
                     columns_type=['Id bigint', 'Timestamp varchar', 'User varchar', 'Time int', 'V1 double', 'V2 double', 'V3 double', 'V4 double', 'V5 double', 'V6 double', 'V7 double', 'V8 double', 'V9 double', 'V10 double', 'V11 double', 'V12 double', 'V13 double', 'V14 double', 'V15 double', 'V16 double', 'V17 double', 'V18 double', 'V19 double', 'V20 double', 'V21 double', 'V22 double', 'V23 double', 'V24 double', 'V25 double', 'V26 double', 'V27 double', 'V28 double', 'Amount double', 'Class string'],
                     topic='creditcardfraud_source',
                     value_format='DELIMITED')

Preprocessing incoming payment information using Python:

  • Filter columns that are not needed

  • Filter messages where column "class" is empty

  • Change data format to Avro for convenient further processing

client.create_stream_as(table_name='creditcardfraud_preprocessed_avro',
                     select_columns=['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V6', 'V7', 'V8', 'V9', 'V10', 'V11', 'V12', 'V13', 'V14', 'V15', 'V16', 'V17', 'V18', 'V19', 'V20', 'V21', 'V22', 'V23', 'V24', 'V25', 'V26', 'V27', 'V28', 'Amount', 'Class'],
                     src_table='creditcardfraud_source',
                     conditions='Class IS NOT NULL',
                     kafka_topic='creditcardfraud_preprocessed_avro',
                     value_format='AVRO')

Some more examples for possible data wrangling and preprocessing with KSQL:

  • Drop columns, filter messages where value “class” is empty and change data format to Avro:
CREATE STREAM creditcardfraud_preprocessed_avro WITH (VALUE_FORMAT='AVRO', KAFKA_TOPIC='creditcardfraud_preprocessed_avro') AS SELECT Time,  V1 , V2 , V3 , V4 , V5 , V6 , V7 , V8 , V9 , V10 , V11 , V12 , V13 , V14 , V15 , V16 , V17 , V18 , V19 , V20 , V21 , V22 , V23 , V24 , V25 , V26 , V27 , V28 , Amount , Class FROM creditcardfraud_source WHERE Class IS NOT NULL;

  • Anonymization (mask the two leftmost characters, e.g., “Hans” becomes “**ns”):
SELECT Id, MASK_LEFT(User, 2) FROM creditcardfraud_source;

  • Augmentation (add -1 if “class” is null):
SELECT Id, IFNULL(Class, -1) FROM creditcardfraud_source;

  • Merge/join data frames:
CREATE STREAM creditcardfraud_per_user WITH (VALUE_FORMAT='AVRO', KAFKA_TOPIC='creditcardfraud_preprocessed_avro') AS SELECT Time,  V1 , V2 , V3 , V4 , V5 , V6 , V7 , V8 , V9 , V10 , V11 , V12 , V13 , V14 , V15 , V16 , V17 , V18 , V19 , V20 , V21 , V22 , V23 , V24 , V25 , V26 , V27 , V28 , Amount , Class FROM creditcardfraud_enhanced c INNER JOIN USERS u on c.userid = u.userid WHERE V1 > 5 AND V2 IS NOT NULL AND u.CITY LIKE 'Premium%';

The Jupyter Notebook contains the full example. We use Python + KSQL for integration, data preprocessing, and interactive analysis and combine them with various other libraries from a common Python machine learning tool stack for prototyping and model training:

  • Arrays/matrices processing with NumPy and pandas

  • ML-specific processing (split train/test, etc.) with scikit-learn

  • Interactive analysis through data visualisations with Matplotlib

  • ML training + evaluation with TensorFlow and Keras (a rough training sketch follows this list)
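
As a rough sketch of that last step (training only; the layer sizes and the random stand-in data are illustrative, not the notebook’s actual values):

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Stand-in for the preprocessed payment features pulled from Kafka/KSQL.
x_train = np.random.rand(1000, 30).astype('float32')

# A small autoencoder: compress the features, then reconstruct them.
inputs = keras.Input(shape=(30,))
encoded = layers.Dense(14, activation='relu')(inputs)
decoded = layers.Dense(30, activation='linear')(encoded)
autoencoder = keras.Model(inputs, decoded)

autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(x_train, x_train, epochs=10, batch_size=256, validation_split=0.1)

# Reconstruction error per sample; unusually large values flag potential fraud.
errors = np.mean(np.square(x_train - autoencoder.predict(x_train)), axis=1)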

Model inference and visualisation are done in the Jupyter notebook, too. After you have built an accurate model, you can deploy it anywhere to make predictions and leverage the same integration pipeline for model training. Some examples of model deployment in Kafka environments are:

  • Analytic models (TensorFlow, Keras, H2O and Deeplearning4j) embedded in Kafka Streams microservices

  • Anomaly detection of IoT sensor data with a model embedded into a KSQL UDF

  • RPC communication between Kafka Streams application and model server (TensorFlow Serving)

Python, KSQL, and Jupyter for Prototyping, Demos, and Production Deployments

As you can see, both in theory (Google’s paper Hidden Technical Debt in Machine Learning Systems) and in practice (Uber’s machine learning platform Michelangelo), it is not a simple task to build a scalable, reliable, and performant machine learning infrastructure.

The impedance mismatch between data scientists, data engineers, and production engineers must be resolved in order for machine learning projects to deliver real business value. This requires using the right tool for the job and understanding how to combine them. You can use Python and Jupyter for prototyping and demos (often Kafka and KSQL might be overhead here and not needed if you just want to do fast, simple prototyping on a historical dataset) or combine Python and Jupyter with your whole development lifecycle up to production deployments at scale.

Integration of Kafka event streams and KSQL statements into Jupyter Notebooks allows you to:

  • Use the preferred existing environment of the data scientist (including Python and Jupyter) and combine it with Kafka and KSQL to integrate and continuously process real-time streaming data by using a simple Python wrapper API to execute KSQL queries

  • Easily connect to real-time streaming data instead of just historical batches of data (maybe from the last day, week or month, e.g., coming in via CSV files)

  • Merge different concepts like streaming event-based sensor data coming from Kafka with Python programming concepts like generators or dictionary objects, which you can use for your Python data processing tools or ML frameworks like NumPy, pandas, or scikit-learn

  • Reuse the same logic for integration, preprocessing, and monitoring and move it from your Jupyter Notebook and prototyping or demos to large-scale test and production systems

Python for prototyping and Apache Kafka for a scalable streaming platform are not rival technology stacks. They work together very well, especially if you use “helper tools” like Jupyter Notebooks and KSQL.

Please try it out and let us know your thoughts. How do you leverage the Apache Kafka ecosystem in your machine learning projects?

TensorFlow vs NumPy vs Pure Python: Performance Comparison

How much faster does the application run when implemented with NumPy instead of pure Python? What about TensorFlow? The purpose of this article is to begin to explore the improvements you can achieve by using these libraries.

Python has a design philosophy that stresses allowing programmers to express concepts readably and in fewer lines of code. This philosophy makes the language suitable for a diverse set of use cases: simple scripts for web, large web applications (like YouTube), scripting language for other platforms (like Blender and Autodesk’s Maya), and scientific applications in several areas, such as astronomy, meteorology, physics, and data science.

It is technically possible to implement scalar and matrix calculations using Python lists. However, this can be unwieldy, and performance is poor when compared to languages suited for numerical computation, such as MATLAB or Fortran, or even some general purpose languages, such as C or C++.

To circumvent this deficiency, several libraries have emerged that maintain Python’s ease of use while lending the ability to perform numerical calculations in an efficient manner. Two such libraries worth mentioning are NumPy (one of the pioneer libraries to bring efficient numerical computation to Python) and TensorFlow (a more recently rolled-out library focused more on deep learning algorithms).

  • NumPy provides support for large multidimensional arrays and matrices along with a collection of mathematical functions to operate on these elements. The project relies on well-known packages implemented in other languages (like Fortran) to perform efficient computations, bringing the user both the expressiveness of Python and a performance similar to MATLAB or Fortran.
  • TensorFlow is an open-source library for numerical computation originally developed by researchers and engineers working at the Google Brain team. The main focus of the library is to provide an easy-to-use API to implement practical machine learning algorithms and deploy them to run on CPUs, GPUs, or a cluster.

But how do these schemes compare? How much faster does the application run when implemented with NumPy instead of pure Python? What about TensorFlow? The purpose of this article is to begin to explore the improvements you can achieve by using these libraries.

To compare the performance of the three approaches, you’ll build a basic regression with native Python, NumPy, and TensorFlow.

Engineering the Test Data

To test the performance of the libraries, you’ll consider a simple two-parameter linear regression problem. The model has two parameters: an intercept term, w_0 and a single coefficient, w_1.

Given N pairs of inputs x and desired outputs d, the idea is to model the relationship between the outputs and the inputs using a linear model y = w_0 + w_1 * x where the output of the model y is approximately equal to the desired output d for every pair (x, d).

Technical Detail: The intercept term, w_0, is technically just a coefficient like w_1, but it can be interpreted as a coefficient that multiplies elements of a vector of 1s.

To generate the training set of the problem, use the following program:

import numpy as np

np.random.seed(444)

N = 10000
sigma = 0.1
noise = sigma * np.random.randn(N)
x = np.linspace(0, 2, N)
d = 3 + 2 * x + noise
d.shape = (N, 1)

We need to prepend a column vector of 1s to x.

X = np.column_stack((np.ones(N, dtype=x.dtype), x))
print(X.shape)
(10000, 2)

This program creates a set of 10,000 inputs x linearly distributed over the interval from 0 to 2. It then creates a set of desired outputs d = 3 + 2 * x + noise, where noise is taken from a Gaussian (normal) distribution with zero mean and standard deviation sigma = 0.1.

By creating x and d in this way, you’re effectively stipulating that the optimal solution for w_0 and w_1 is 3 and 2, respectively.

Xplus = np.linalg.pinv(X)
w_opt = Xplus @ d
print(w_opt)
[[2.99536719]
[2.00288672]]

There are several methods to estimate the parameters w_0 and w_1 to fit a linear model to the training set. One of the most-used is ordinary least squares, which is a well-known solution for estimating w_0 and w_1 so as to minimize the sum of the squared errors e = y - d over every training sample.

One way to easily compute the ordinary least squares solution is by using the Moore-Penrose pseudo-inverse of a matrix. This approach stems from the fact that you have X and d and are trying to solve for wm, in the equation d = X @ wm. (The @ symbol denotes matrix multiplication, which is supported by both NumPy and native Python as of PEP 465 and Python 3.5+.)

Using this approach, we can estimate wm using w_opt = Xplus @ d, where Xplus is given by the pseudo-inverse of X, which can be calculated using numpy.linalg.pinv, resulting in w_0 = 2.9954 and w_1 = 2.0029, which are very close to the expected values of w_0 = 3 and w_1 = 2.

Note: Using w_opt = np.linalg.inv(X.T @ X) @ X.T @ d would yield the same solution.

Although it is possible to use this deterministic approach to estimate the coefficients of the linear model, it is not possible for some other models, such as neural networks. In these cases, iterative algorithms are used to estimate a solution for the parameters of the model.

One of the most-used algorithms is gradient descent, which at a high level consists of updating the parameter coefficients until we converge on a minimized loss (or cost). That is, we have some cost function (often, the mean squared error—MSE), and we compute its gradient with respect to the network’s coefficients (in this case, the parameters w_0 and w_1), considering a step size mu. By performing this update many times (in many epochs), the coefficients converge to a solution that minimizes the cost function.
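
Written out for this two-parameter model (a sketch of the math, using the same symbols as the code below: e_i is the per-sample error, mu the step size, and N the number of samples), the cost and the update that each implementation performs are:

$$\mathrm{MSE}(w_0, w_1) = \frac{1}{N}\sum_{i=1}^{N} e_i^2, \qquad e_i = d_i - (w_0 + w_1 x_i)$$

$$w_0 \leftarrow w_0 + \mu\,\frac{2}{N}\sum_{i=1}^{N} e_i, \qquad w_1 \leftarrow w_1 + \mu\,\frac{2}{N}\sum_{i=1}^{N} e_i x_i$$

Adding these sums (rather than subtracting them) already moves against the gradient, because the partial derivatives of the MSE with respect to w_0 and w_1 are exactly the negatives of the terms added above.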

In the following sections, you’ll build and use gradient descent algorithms in pure Python, NumPy, and TensorFlow. To compare the performance of the three approaches, we’ll look at runtime comparisons on an Intel Core i7 4790K 4.0 GHz CPU.

Gradient Descent in Pure Python

Let’s start with a pure-Python approach as a baseline for comparison with the other approaches. The Python function below estimates the parameters w_0 and w_1 using gradient descent:

import itertools as it

def py_descent(x, d, mu, N_epochs):
    N = len(x)
    f = 2 / N

    # "Empty" predictions, errors, weights, gradients.
    y = [0] * N
    w = [0, 0]
    grad = [0, 0]

    for _ in it.repeat(None, N_epochs):
        # Can't use a generator because we need to
        # access its elements twice.
        err = tuple(i - j for i, j in zip(d, y))
        grad[0] = f * sum(err)
        grad[1] = f * sum(i * j for i, j in zip(err, x))
        w = [i + mu * j for i, j in zip(w, grad)]
        y = (w[0] + w[1] * i for i in x)
    return w

Above, everything is done with Python list comprehensions, slicing syntax, and the built-in sum() and zip() functions. Before running through each epoch, “empty” containers of zeros are initialized for y, w, and grad.

Technical Detail: py_descent above does use itertools.repeat() rather than for _ in range(N_epochs). The former is faster than the latter because repeat() does not need to manufacture a distinct integer for each loop. It just needs to update the reference count to None. The timeit module contains an example.

Now, use this to find a solution:

import time

x_list = x.tolist()
d_list = d.squeeze().tolist() # Need 1d lists

mu is a step size, or scaling factor.

mu = 0.001
N_epochs = 10000

t0 = time.time()
py_w = py_descent(x_list, d_list, mu, N_epochs)
t1 = time.time()

print(py_w)
[2.959859852416156, 2.0329649630002757]

print('Solve time: {:.2f} seconds'.format(round(t1 - t0, 2)))
Solve time: 18.65 seconds

With a step size of mu = 0.001 and 10,000 epochs, we can get a fairly precise estimate of w_0 and w_1. Inside the for-loop, the gradients with respect to the parameters are calculated and used in turn to update the weights, moving in the opposite direction in order to minimize the MSE cost function.

At each epoch, after the update, the output of the model is calculated. The vector operations are performed using list comprehensions. We could have also updated y in-place, but that would not have been beneficial to performance.

The elapsed time of the algorithm is measured using the time library. It takes 18.65 seconds to estimate w_0 = 2.9598 and w_1 = 2.0329. While the timeit library can provide a more exact estimate of runtime by running multiple loops and disabling garbage collection, just viewing a single run with time suffices in this case, as you’ll see shortly.

Using NumPy

NumPy adds support for large multidimensional arrays and matrices along with a collection of mathematical functions to operate on them. The operations are optimized to run with blazing speed by relying on the projects BLAS and LAPACK for underlying implementation.

Using NumPy, consider the following program to estimate the parameters of the regression:

def np_descent(x, d, mu, N_epochs):
    d = d.squeeze()
    N = len(x)
    f = 2 / N

    y = np.zeros(N)
    err = np.zeros(N)
    w = np.zeros(2)
    grad = np.empty(2)

    for _ in it.repeat(None, N_epochs):
        np.subtract(d, y, out=err)
        grad[:] = f * np.sum(err), f * (err @ x)
        w = w + mu * grad
        y = w[0] + w[1] * x
    return w

np_w = np_descent(x, d, mu, N_epochs)
print(np_w)
[2.95985985 2.03296496]

The code block above takes advantage of vectorized operations with NumPy arrays (ndarrays). The only explicit for-loop is the outer loop over which the training routine itself is repeated. List comprehensions are absent here because NumPy’s ndarray type overloads the arithmetic operators to perform array calculations in an optimized way.

You may notice there are a few alternate ways to go about solving this problem. For instance, you could simply use f * err @ X, where X is the 2d array that includes a column vector of ones, rather than our 1d x.

However, this is actually not all that efficient, because it requires a dot product of an entire column of ones with another vector (err), and we know that result will simply be np.sum(err). Similarly, w[0] + w[1] * x requires less computation than w * X in this specific case.

Let’s look at the timing comparison. As you’ll see below, the timeit module is needed here to get a more precise picture of runtime, as we’re now talking about fractions of a second rather than multiple seconds of runtime:

import timeit

setup = ("from main import x, d, mu, N_epochs, np_descent;"
         "import numpy as np")
repeat = 5
number = 5  # Number of loops within each repeat

np_times = timeit.repeat('np_descent(x, d, mu, N_epochs)', setup=setup,
                         repeat=repeat, number=number)

timeit.repeat() returns a list. Each element is the total time taken to execute n loops of the statement. To get a single estimate of runtime, you can take the average time for a single call from the lower bound of the list of repeats:

print(min(np_times) / number)
0.31947448799983247

Using TensorFlow

TensorFlow is an open-source library for numerical computation originally developed by researchers and engineers working at the Google Brain team.

Using its Python API, TensorFlow’s routines are implemented as a graph of computations to perform. Nodes in the graph represent mathematical operations, and the graph edges represent the multidimensional data arrays (also called tensors) communicated between them.

At runtime, TensorFlow takes the graph of computations and runs it efficiently using optimized C++ code. By analyzing the graph of computations, TensorFlow is able to identify the operations that can be run in parallel. This architecture allows the use of a single API to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device.

Using TensorFlow, consider the following program to estimate the parameters of the regression:

import tensorflow as tf

def tf_descent(X_tf, d_tf, mu, N_epochs):
    N = X_tf.get_shape().as_list()[0]
    f = 2 / N

    w = tf.Variable(tf.zeros((2, 1)), name="w_tf")
    y = tf.matmul(X_tf, w, name="y_tf")
    e = y - d_tf
    grad = f * tf.matmul(tf.transpose(X_tf), e)

    training_op = tf.assign(w, w - mu * grad)
    init = tf.global_variables_initializer()

    with tf.Session() as sess:
        init.run()
        for epoch in range(N_epochs):
            sess.run(training_op)
        opt = w.eval()
    return opt

X_tf = tf.constant(X, dtype=tf.float32, name="X_tf")
d_tf = tf.constant(d, dtype=tf.float32, name="d_tf")

tf_w = tf_descent(X_tf, d_tf, mu, N_epochs)
print(tf_w)
[[2.9598553]
[2.032969 ]]

When you use TensorFlow, the data must be loaded into a special data type called a Tensor. Tensors mirror NumPy arrays in more ways than they are dissimilar.

type(X_tf)
<class 'tensorflow.python.framework.ops.Tensor'>

After the tensors are created from the training data, the graph of computations is defined:

  • First, a variable tensor w is used to store the regression parameters, which will be updated at each iteration.
  • Using w and X_tf, the output y is calculated using a matrix product, implemented with tf.matmul().
  • The error is calculated and stored in the e tensor.
  • The gradients are computed, using the matrix approach, by multiplying the transpose of X_tf by the e tensor.
  • Finally, the update of the parameters of the regression is implemented with the tf.assign() function. It creates a node that implements batch gradient descent, updating the next step tensor w to w - mu * grad.

It is worth noticing that the code until the training_op creation does not perform any computation. It just creates the graph of the computations to be performed. In fact, even the variables are not initialized yet. To perform the computations, it is necessary to create a session and use it to initialize the variables and run the algorithm to evaluate the parameters of the regression.

There are some different ways to initialize the variables and create the session to perform the computations. In this program, the line init = tf.global_variables_initializer() creates a node in the graph that will initialize the variables when it is run. The session is created in the with block, and init.run() is used to actually initialize the variables. Inside the with block, training_op is run for the desired number of epochs, evaluating the parameters of the regression, which have their final values stored in opt.

Here is the same code-timing structure that was used with the NumPy implementation:

setup = ("from main import X_tf, d_tf, mu, N_epochs, tf_descent;"
         "import tensorflow as tf")

tf_times = timeit.repeat("tf_descent(X_tf, d_tf, mu, N_epochs)", setup=setup,
                         repeat=repeat, number=number)

print(min(tf_times) / number)
1.1982891103994917

It took 1.20 seconds to estimate w_0 = 2.9598553 and w_1 = 2.032969. It is worth noticing that the computation was performed on a CPU and the performance may be improved when run on a GPU.

Lastly, you could have also defined an MSE cost function and passed this to TensorFlow’s gradients() function, which performs automatic differentiation, finding the gradient vector of MSE with regard to the weights:

mse = tf.reduce_mean(tf.square(e), name="mse")
grad = tf.gradients(mse, w)[0]

However, the timing difference in this case is negligible.

Conclusion

The purpose of this article was to perform a preliminary comparison of the performance of a pure Python, a NumPy and a TensorFlow implementation of a simple iterative algorithm to estimate the coefficients of a linear regression problem.

The results for the elapsed time to run the algorithm are summarized in the table below:

Implementation      Elapsed time (CPU)
Pure Python         18.65 s
NumPy               0.32 s
TensorFlow          1.20 s

While the NumPy and TensorFlow solutions are competitive (on CPU), the pure Python implementation is a distant third. While Python is a robust general-purpose programming language, its libraries targeted towards numerical computation will win out any day when it comes to large batch operations on arrays.

While the NumPy example proved quicker by a hair than TensorFlow in this case, it’s important to note that TensorFlow really shines for more complex cases. With our relatively elementary regression problem, using TensorFlow arguably amounts to “using a sledgehammer to crack a nut,” as the saying goes.

With TensorFlow, it is possible to build and train complex neural networks across hundreds or thousands of multi-GPU servers. In a future post, we will cover the setup to run this example in GPUs using TensorFlow and compare the results.