TensorFlow is dead, long live TensorFlow!

TensorFlow is an open source machine learning library for research and production.

If you’re an AI enthusiast and you didn’t see the big news this month, you might have just snoozed through an off-the-charts earthquake. Everything is about to change!

Last year I wrote 9 Things You Need To Know About TensorFlow… but there’s one thing you need to know above all others: TensorFlow 2.0 is here!

The revolution is here! Welcome to TensorFlow 2.0.
It’s a radical makeover. The consequences of what just happened are going to have major ripple effects on every industry, just you wait. If you’re a TF beginner in mid-2019, you’re extra lucky because you picked the best possible time to enter AI (though you might want to start from scratch if your old tutorials have the word “session” in them).

In a nutshell: TensorFlow has just gone full Keras. Those of you who know those words just fell out of your chairs. Boom!

A prickly experience

I doubt that many people have accused TensorFlow 1.x of being easy to love. It’s the industrial lathe of AI… and about as user-friendly. At best, you might feel grateful for being able to accomplish your AI mission at mind-boggling scale.

You’d also attract some raised eyebrows if you claimed that TensorFlow 1.x was easy to get the hang of. Its steep learning curve made it mostly inaccessible to the casual user, but mastering it meant you could talk about it the way you’d brag about that toe you lost while climbing Everest. Was it fun? No, c’mon, really: was it fun?

You‘re not the only one — it’s what TensorFlow 1.x tutorials used to feel like for everybody.

TensorFlow’s core strength is performance. It was built for taking models from research to production at massive scale and it delivers, but TF 1.x made you sweat for it. Persevere and you’d be able to join the ranks of ML practitioners who use it for incredible things, like finding new planets and pioneering medicine.

What a pity that such a powerful tool was in the hands of so few… until now.

Don’t worry about what tensors are. We just called them (generalized) matrices where I grew up. The name TensorFlow is a nod to the fact that TF’s very good at performing distributed computations involving multidimensional arrays (er, matrices), which you’ll find handy for [AI](http://bit.ly/quaesita_emperor) at scale. [Image source](http://karlstratos.com/drawings/drawings.html).

Cute and cuddly Keras

Now that we’ve covered cactuses, let’s talk about something you’d actually want to hug. Overheard at my place of work: “I think I have an actual crush on Keras.”

Keras is a specification for building models layer-by-layer that works with multiple machine learning frameworks (so it’s not a TF thing), but you might know it as a high level API accessed from within TensorFlow as tf.keras.
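
If you haven’t seen Keras code before, here’s a minimal sketch of what building and compiling a model with tf.keras looks like (the layer sizes are hypothetical, just for illustration):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),    # hypothetical hidden layer
    tf.keras.layers.Dense(10, activation='softmax')  # hypothetical output layer
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])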

Incidentally, I’m writing this section on Keras’ 4th birthday (Mar 27, 2019) for an extra dose of warm fuzzies.

Keras was built from the ground up to be Pythonic and always put people first — it was designed to be inviting, flexible, and simple to learn.

Why don’t we have both?

Why must we choose between Keras’s cuddliness and traditional TensorFlow’s mighty performance? Why don’t we have both?

Great idea! Let’s have both! That’s TensorFlow 2.0 in a nutshell.

This is TensorFlow 2.0. You can mash those orange buttons yourself [here](http://bit.ly/tfoview).

The usability revolution

Going forward, Keras will be the high level API for TensorFlow, and it has been extended so that you can use all the advanced features of TensorFlow directly from tf.keras.

In the new version, everything you’ve hated most about TensorFlow 1.x gets the guillotine. Having to perform a dark ritual just to add two numbers together? Dead. TensorFlow Sessions? Dead. A million ways to do the exact same thing? Dead. Rewriting code if you switch hardware or scale? Dead. Reams of boilerplate to write? Dead. Horrible unactionable error messages? Dead. Steep learning curve? Dead.

You’re expecting the obvious catch, aren’t you? Worse performance? Guess again! We’re not giving up performance.

TensorFlow is now cuddly and this is a game-changer, because it means that one of the most potent tools of our time just dropped the bulk of its barriers to entry. Tech enthusiasts from all walks of life are finally empowered to join in because the new version opens access beyond researchers and other highly-motivated folks with an impressive pain threshold.

Everyone is welcome. Want to play? Then come play!

Eager to please

In TensorFlow 2.0, eager execution is now the default. You can take advantage of graphs even in eager context, which makes your debugging and prototyping easy, while the TensorFlow runtime takes care of performance and scaling under the hood.

Wrangling graphs in TensorFlow 1.x (declarative programming) was disorienting for many, but it’s all just a bad dream now with eager execution (imperative programming). If you skipped learning it before, so much the better. TF 2.0 is a fresh start for everyone.
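
To see the difference, here’s a minimal sketch of TF 2.0 behavior: operations run immediately, and you can opt back into graph performance with the tf.function decorator:

import tensorflow as tf

# Eager execution: this runs immediately, no graph or Session required
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
print(tf.matmul(a, b))

# Opt back into graph execution for performance with a decorator
@tf.function
def square(x):
    return x * x

print(square(tf.constant(3.0)))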

As easy as one… one… one…

Many APIs got consolidated across TensorFlow under Keras, so now it’s easier to know what you should use when. For example, now you only need to work with one set of optimizers and one set of metrics. How many sets of layers? You guessed it! One! Keras-style, naturally.

In fact, the whole ecosystem of tools got a spring cleaning, from data processing pipelines to easy model exporting to TensorBoard integration with Keras, which is now a… one-liner!
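
Here’s a minimal sketch of that TensorBoard one-liner, with stand-in data and a stand-in model included purely to make the snippet runnable:

import numpy as np
import tensorflow as tf

# Hypothetical stand-in data and model, just to show the callback wiring
x_train = np.random.rand(100, 32)
y_train = np.random.randint(10, size=100)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# The TensorBoard one-liner: a single callback handed to fit()
tensorboard = tf.keras.callbacks.TensorBoard(log_dir='./logs')
model.fit(x_train, y_train, epochs=5, callbacks=[tensorboard])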

There are also great tools that let you switch and optimize distribution strategies for amazing scaling efficiency without losing any of the convenience of Keras.

Those distribution strategies are pretty, aren’t they?
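
As a sketch of what that looks like (assuming a machine with multiple GPUs), wrapping model construction in a distribution strategy’s scope is all it takes:

import tensorflow as tf

# Synchronous training replicated across all local GPUs
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# model.fit(...) then distributes across devices automatically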

The catch!

If the catch isn’t performance, what is it? There has to be a catch, right?

Actually, the catch was your suffering up to now. TensorFlow demanded quite a lot of patience from its users while a friendly version was brewing. This wasn’t a matter of sadism. Making tools for deep learning is new territory, and we’re all charting it as we go along. Wrong turns were inevitable, but we learned a lot along the way.

The TensorFlow community put in a lot of elbow grease to make the initial magic happen, and then more effort again to polish the best gems while scraping out less fortunate designs. The plan was never to force you to use a rough draft forever, but perhaps you habituated so well to the discomfort that you didn’t realize it was temporary. Thank you for your patience!

The reward is everything you appreciate about TensorFlow 1.x made friendly under a consistent API with tons of duplicate functionality removed so it’s cleaner to use. Even the errors are cleaned up to be concise, simple to understand, and actionable. Mighty performance stays!

What’s the big deal?

Haters (who’re gonna hate) might say that much of v2.0 could be cobbled together in v1.x if you searched hard enough, so what’s all the fuss about? Well, not everyone wants to spend their days digging around in clutter for buried treasure. The makeover and clean-up are worth a standing ovation. But that’s not the biggest big deal.

The point not to miss is this: TensorFlow just announced an uncompromising focus on usability.

AI lets you automate tasks you can’t come up with instructions for. It lets you automate the ineffable. Democratization means that AI at scale will no longer be the province of a tiny tech elite.

Imagine a future where “I know how to make things with Python” and “I know how to make things with AI” are equally commonplace statements… Exactly! I’m almost tempted to use that buzzword “disruptive” here.

The great migration

We know it’s hard work to upgrade to a new version, especially when the changes are so dramatic. If you’re about to embark on migrating your codebase to 2.0, you’re not alone — we’ll be doing the same here at Google with one of the largest codebases in the world. As we go along, we’ll be sharing migration guides to help you out.

If you rely on specific functionality, you won’t be left in the lurch — except for contrib, all TF 1.x functions will live on in the compat.v1 compatibility module. We’re also giving you a script which automatically updates your code so it runs on TensorFlow 2.0. Learn more in the video below.

This video is a great resource if you’re eager to dig deeper into TF 2.0 and geek out on code snippets.
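
If you’d rather see it in text first, here’s a minimal sketch of both escape hatches (the file names are hypothetical):

# Run unmodified 1.x-style code on TensorFlow 2.0 via the compatibility module
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Or convert a script automatically with the bundled upgrade tool:
#   tf_upgrade_v2 --infile old_model.py --outfile upgraded_model.py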

Your clean slate

TF 2.0 is a beginner’s paradise, so it will be a downer for those of you who’ve been looking forward to watching newbies suffer the way you once suffered. If you were hoping to use TensorFlow for hazing new recruits, you might need to search for some other way to inflict existential horror.

Sitting out might have been the smartest move, because now’s the best time to arrive on the scene. As of March 2019, TensorFlow 2.0 is available in alpha (that’s a preview, you hipster you), so learning it now gets you ready in time for the full release that the community is gearing up for over the next quarter.

Following the dramatic changes, you won’t be as much of a beginner as you imagined. The playing field got leveled, the game got easier, and there’s a seat saved just for you. Welcome! I’m glad you’re finally here and I hope you’re as excited about this new world of possibilities as I am.

Dive in!

Check out the shiny redesigned tensorflow.org for tutorials, examples, documentation, and tools to get you started… or dive straight in with:

pip install tensorflow==2.0.0-alpha0

You’ll find detailed instructions here.

Machine Learning Tutorial with Python, Jupyter, KSQL and TensorFlow

This post focuses on how the Kafka ecosystem can help solve the impedance mismatch between data scientists, data engineers, and production engineers.

Building a scalable, reliable, and performant machine learning (ML) infrastructure is not easy. It takes much more effort than just building an analytic model with Python and your favorite machine learning framework.

Uber, which already runs their scalable and framework-independent machine learning platform Michelangelo for many use cases in production, wrote a good summary:

When Michelangelo started, the most urgent and highest impact use cases were some very high scale problems, which led us to build around Apache Spark (for large-scale data processing and model training) and Java (for low latency, high throughput online serving). This structure worked well for production training and deployment of many models but left a lot to be desired in terms of overhead, flexibility, and ease of use, especially during early prototyping and experimentation [where Notebooks and Python shine].

Uber expanded Michelangelo “to serve any kind of Python model from any source to support other Machine Learning and Deep Learning frameworks like PyTorch and TensorFlow [instead of just using Spark for everything].”

So why did Uber (and many other tech companies) build their own platform and framework-independent machine learning infrastructure?

The posts How to Build and Deploy Scalable Machine Learning in Production with Apache Kafka and Using Apache Kafka to Drive Cutting-Edge Machine Learning describe the benefits of leveraging the Apache Kafka® ecosystem as a central, scalable, and mission-critical nervous system. It allows real-time data ingestion, processing, model deployment, and monitoring in a reliable and scalable way.

This post focuses on how the Kafka ecosystem can help solve the impedance mismatch between data scientists, data engineers, and production engineers. By leveraging it to build your own scalable machine learning infrastructure and also make your data scientists happy, you can solve the same problems for which Uber built its own ML platform, Michelangelo.




Impedance Mismatch Between Data Scientists, Data Engineers, and Production Engineers

Based on what I’ve seen in the field, an impedance mismatch between data scientists, data engineers, and production engineers is the main reason why companies struggle to bring analytic models into production to add business value.

The following diagram illustrates the different required steps and corresponding roles as part of the impedance mismatch in a machine learning lifecycle:

Impedance mismatch between model development and model deployment

Data scientists love Python, period. Therefore, the majority of machine learning/deep learning frameworks focus on Python APIs. Both the most stable and the most cutting-edge APIs, as well as the majority of examples and tutorials, use Python. In addition to Python support, there is typically support for other programming languages, including JavaScript for web integration and Java for platform integration, though oftentimes with fewer features and less maturity. No matter what other platforms are supported, chances are very high that your data scientists will build and train their analytic models with Python.

There is an impedance mismatch between model development using Python and its tool stack, and a scalable, reliable data platform with the low latency, high throughput, zero data loss, and 24/7 availability requirements needed for data ingestion, preprocessing, model deployment, and monitoring at scale. In practice, Python is not the best-known technology for meeting these requirements. However, it is a great client for a data platform like Apache Kafka.

The problem is that writing the machine learning source code to train an analytic model with Python and the machine learning framework of your choice is just a very small part of a real-world machine learning infrastructure. You need to think about the whole model lifecycle. The following image represents this hidden technical debt in machine learning systems (showing how small the “ML code” part is):

Thus, you need to deploy the trained model to a scalable production environment in order to reliably make use of it. This can either be built natively around the Kafka ecosystem, or you could use Kafka just for ingestion into another storage and processing cluster such as HDFS or AWS S3 with Spark. There are many tradeoffs between Kafka, Spark, and several other scalable infrastructures, but that discussion is out of scope for this post. For now, we’ll focus on Kafka.

Different solutions in the industry solve certain parts of the impedance mismatch between data scientists, data engineers, and production engineers. Let’s take a look at some of these options:

  • Official standards like Open Neural Network Exchange (ONNX), Portable Format for Analytics (PFA) or Predictive Model Markup Language (PMML): A data scientist builds a model with Python. The Java developer imports it in Java for production deployment. This approach supports different frameworks, products, and cloud services. You do not have to rely on the same framework or product for training and model deployment. Consider ONNX, a relatively new standard for deep learning — it already supports TensorFlow, PyTorch, and MXNet. These standards have pros and cons. Some people like and use them; many don’t.
  • Developer-focused frameworks like Deeplearning4j: These frameworks are built for software engineers to build the whole machine learning lifecycle on the Java platform, not just model deployment and monitoring, but also preprocessing and training. You can still import other models if you want (e.g., Deeplearning4j lets you import Keras models). This option is great if you: a) have data scientists who can write Java or b) have software engineers who understand machine learning concepts enough to build analytic models.
  • AutoML for building analytic models with limited machine learning experience: This way, domain experts can build and deploy analytic models with a button click. The AutoML engine provides an interface for others to use the model for predictions.
  • Embedding model binaries into applications: The output of model training is an analytic model. For instance, you can write Python code to train and generate a TensorFlow model. Depending on the framework, the output can be text files, Java source code, or binary files. For example, TensorFlow generates a model artifact with Protobuf, JSON, and other files. No matter what format the output of your machine learning framework is, it can be embedded into applications to use for predictions via the framework’s API (e.g., you can load a TensorFlow model from a Java application through TensorFlow’s Java API). See the sketch after this list.
  • Managed model server in the public cloud like Google Cloud Machine Learning Engine: The cloud provider takes over the burden of availability and reliability. The data scientist “just” deploys their trained model, and production engineers can access it. The key tradeoff is that this requires RPC communication to perform model inference.
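
Picking up the embedding option from above, here is a minimal, hypothetical Python sketch of loading a trained Keras/TensorFlow artifact inside an application and using it for predictions (the file name and feature array are made up for illustration):

import numpy as np
from tensorflow import keras

# Hypothetical artifact produced earlier by a training job
model = keras.models.load_model('fraud_model.h5')

# Hypothetical batch of incoming payment feature vectors
batch_of_features = np.random.rand(3, 30)
print(model.predict(batch_of_features))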

While all these solutions help data scientists, data engineers, and production engineers to work better together, there are underlying challenges within the hidden technical debt:

  • Data collection (i.e., integration) and preprocessing need to run at scale

  • Configuration needs to be shared and automated for continuous builds and integration tests

  • The serving and monitoring infrastructure needs to fit into your overall enterprise architecture and tool stack

So how can the Kafka ecosystem help here?

Apache Kafka as a Key Component for Solving the Impedance Mismatch

In many cases, it is best to provide experts with the tools they like and know well. The challenge is to combine the different toolsets and still build an integrated system, as well as a continuous, scalable, machine learning workflow. Therefore, Kafka is not competitive but complementary to the discussed alternatives when it comes to solving the impedance mismatch between the data scientist and developer.

The data engineer builds a scalable integration pipeline using Kafka as infrastructure and Python for integration and preprocessing statements. The data scientist can build their model with Python or any other preferred tool. The production engineer gets the analytic models (either manually or through any automated, continuous integration setup) from the data scientist and embeds them into their Kafka application to deploy it in production. Or, the team works together and builds everything with Java and a framework like Deeplearning4j.

Any option can pair well with Apache Kafka. Pick the pieces you need, whether it’s Kafka core for data transportation, Kafka Connect for data integration, or Kafka Streams/KSQL for data preprocessing. Many components can be used for both model training and model inference. Write once and use in both scenarios as shown in the following diagram:

Leveraging the Apache Kafka ecosystem for a machine learning infrastructure

Monitoring the complete environment in real time and at scale is also a common task for Kafka. A huge benefit is that you only build a highly reliable and scalable pipeline once but use it for both parts of a machine learning infrastructure. And you can use it in any environment: in the cloud, in on-prem datacenters, or at the edges where IoT devices are.

Say you wanted to build one integration pipeline from MQTT to Kafka with KSQL for data preprocessing and use Kafka Connect for data ingestion into HDFS, AWS S3, or Google Cloud Storage, where you do the model training. The same integration pipeline, or at least parts of it, can be reused for model inference. New MQTT input data can directly be used in real time to make predictions.
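
As a toy illustration of the ingestion side (this is a sketch, not a recommended production setup; the broker addresses and topic names are hypothetical), a few lines of Python can bridge MQTT messages into a Kafka topic using the paho-mqtt and confluent-kafka clients:

import paho.mqtt.client as mqtt
from confluent_kafka import Producer

producer = Producer({'bootstrap.servers': 'localhost:9092'})

def on_message(client, userdata, msg):
    # Forward every MQTT message into a Kafka topic for downstream KSQL processing
    producer.produce('iot_sensor_events', value=msg.payload)
    producer.poll(0)

mqtt_client = mqtt.Client()
mqtt_client.on_message = on_message
mqtt_client.connect('localhost', 1883)
mqtt_client.subscribe('sensors/#')
mqtt_client.loop_forever()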

We just explained various alternatives to solving the impedance mismatch between data scientists and software engineers in Kafka environments. Now, let’s discuss one specific option in the next section, which is probably the most convenient for data scientists: leveraging Kafka from a Jupyter Notebook with KSQL statements and combining it with TensorFlow and Keras to train a neural network.

Data Scientists Combining Python and Jupyter With Scalable Streaming Architectures

Data scientists use tools like Jupyter Notebooks to analyze, transform, enrich, filter, and process data. The preprocessed data is then used to train analytic models with machine learning/deep learning frameworks like TensorFlow.

However, some data scientists do not even know “bread-and-butter” concepts of software engineers, such as version control systems like GitHub or continuous integration tools like Jenkins.

This raises the question of how to combine the Python experience of data scientists with the benefits of Apache Kafka as a battle-tested, highly scalable data processing and streaming platform.

Apache Kafka and KSQL for Data Scientists and Data Engineers

Kafka offers integration options that can be used with Python, like Confluent’s Python Client for Apache Kafka or Confluent REST Proxy for HTTP integration. But these are not really convenient options for data scientists, who are used to rapid prototyping: quickly and interactively analyzing and preprocessing data before model training and evaluation.

KSQL enables data scientists to take a look at Kafka event streams and implement continuous stream processing from their well-known and loved Python environments like Jupyter by writing simple SQL-like statements for interactive analysis and data preprocessing.

The following Python example executes an interactive query from a Kafka stream leveraging the open source framework ksql-python, which adds a Python layer on top of KSQL’s REST interface. Here are a few lines of the Python code using KSQL from a Jupyter Notebook:
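
A minimal sketch, assuming a KSQL server listening locally on port 8088 and the stream that is defined later in this post:

from ksql import KSQLAPI

client = KSQLAPI('http://localhost:8088')

query = client.query('SELECT * FROM creditcardfraud_source LIMIT 10')
for row in query:
    print(row)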

The result of such a KSQL query is a Python generator object, which you can easily process with other Python libraries. This feels much more Python-native and is analogous to working with NumPy, pandas, scikit-learn, and other widespread Python libraries.

Similarly to rapid prototyping with these libraries, you can do interactive queries and data preprocessing with ksql-python. Check out the KSQL quick start and KSQL recipes to understand how to write a KSQL query to easily filter, transform, enrich, or aggregate data. While KSQL is running continuous queries, you can also use it for interactive analysis and use the LIMIT keyword like in ANSI SQL if you just want to get a specific number of rows.

So what’s the big deal? You understand that KSQL can feel Python-native with the ksql-python library, but why use KSQL instead of or in addition to your well-known and favorite Python libraries for analyzing and processing data?

The key difference is that these KSQL queries can also be deployed in production afterwards. KSQL offers you all the features from Kafka under the hood, like high scalability, reliability, and failover handling. The same KSQL statement that you use in your Jupyter Notebook for interactive analysis and preprocessing can scale to millions of messages per second, fault tolerant, with zero data loss and exactly-once semantics. This is very important and valuable for bringing together the Python-loving data scientist with the highly scalable and reliable production infrastructure.

Just to be clear: KSQL + Python is not the all-rounder for every data engineering task, and it does not replace the existing Python toolset. But it is a great option in the toolbox of data scientists and data engineers, and it adds new possibilities like getting real-time updates of incoming information as the source data changes or updating a deployed model with a new and improved version.

Jupyter Notebook for Fraud Detection With Python, KSQL, and TensorFlow/Keras

Let’s now take a look at a detailed example using the combination of KSQL and Python. It involves advanced code examples using ksql-python and other widespread components from Python’s machine learning ecosystem, like NumPy, pandas, TensorFlow, and Keras.

The use case is fraud detection for credit card payments. We use a test dataset from Kaggle as a foundation to train an unsupervised autoencoder to detect anomalies and potential fraud in payments. The focus of this example is not just model training, but the whole machine learning infrastructure, including data ingestion, data preprocessing, model training, model deployment, and monitoring. All of this needs to be scalable, reliable, and performant.

For the full running example and more details, see the documentation.

Let’s take a look at a few snippets of the Jupyter Notebook.

Connection to KSQL server and creation of a KSQL stream using Python:

from ksql import KSQLAPI
client = KSQLAPI('http://localhost:8088')

client.create_stream(table_name='creditcardfraud_source',
                     columns_type=['Id bigint', 'Timestamp varchar', 'User varchar', 'Time int', 'V1 double', 'V2 double', 'V3 double', 'V4 double', 'V5 double', 'V6 double', 'V7 double', 'V8 double', 'V9 double', 'V10 double', 'V11 double', 'V12 double', 'V13 double', 'V14 double', 'V15 double', 'V16 double', 'V17 double', 'V18 double', 'V19 double', 'V20 double', 'V21 double', 'V22 double', 'V23 double', 'V24 double', 'V25 double', 'V26 double', 'V27 double', 'V28 double', 'Amount double', 'Class string'],
                     topic='creditcardfraud_source',
                     value_format='DELIMITED')

Preprocessing incoming payment information using Python:

  • Filter columns that are not needed

  • Filter messages where column "class" is empty

  • Change the data format to Avro for convenient further processing

client.create_stream_as(table_name='creditcardfraud_preprocessed_avro',
                     select_columns=['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V6', 'V7', 'V8', 'V9', 'V10', 'V11', 'V12', 'V13', 'V14', 'V15', 'V16', 'V17', 'V18', 'V19', 'V20', 'V21', 'V22', 'V23', 'V24', 'V25', 'V26', 'V27', 'V28', 'Amount', 'Class'],
                     src_table='creditcardfraud_source',
                     conditions='Class IS NOT NULL',
                     kafka_topic='creditcardfraud_preprocessed_avro',
                     value_format='AVRO')

Some more examples for possible data wrangling and preprocessing with KSQL:

  • Drop columns, filter messages where value “class” is empty and change data format to Avro:
CREATE STREAM creditcardfraud_preprocessed_avro WITH (VALUE_FORMAT='AVRO', KAFKA_TOPIC='creditcardfraud_preprocessed_avro') AS SELECT Time,  V1 , V2 , V3 , V4 , V5 , V6 , V7 , V8 , V9 , V10 , V11 , V12 , V13 , V14 , V15 , V16 , V17 , V18 , V19 , V20 , V21 , V22 , V23 , V24 , V25 , V26 , V27 , V28 , Amount , Class FROM creditcardfraud_source WHERE Class IS NOT NULL;

  • Anonymization (mask the two leftmost characters, e.g., “Hans” becomes “**ns”):
SELECT Id, MASK_LEFT(User, 2) FROM creditcardfraud_source;

  • Augmentation (add -1 if “class” is null):
SELECT Id, IFNULL(Class, -1) FROM creditcardfraud_source;

  • Merge/join data frames:
CREATE STREAM creditcardfraud_per_user WITH (VALUE_FORMAT='AVRO', KAFKA_TOPIC='creditcardfraud_preprocessed_avro') AS SELECT Time,  V1 , V2 , V3 , V4 , V5 , V6 , V7 , V8 , V9 , V10 , V11 , V12 , V13 , V14 , V15 , V16 , V17 , V18 , V19 , V20 , V21 , V22 , V23 , V24 , V25 , V26 , V27 , V28 , Amount , Class FROM creditcardfraud_enhanced c INNER JOIN USERS u on c.userid = u.userid WHERE V1 > 5 AND V2 IS NOT NULL AND u.CITY LIKE 'Premium%';

The Jupyter Notebook contains the full example. We use Python + KSQL for integration, data preprocessing, and interactive analysis and combine them with various other libraries from a common Python machine learning tool stack for prototyping and model training:

  • Arrays/matrices processing with NumPy and pandas

  • ML-specific processing (split train/test, etc.) with scikit-learn

  • Interactive analysis through data visualisations with Matplotlib

  • ML training + evaluation with TensorFlow and Keras

Model inference and visualisation are done in the Jupyter notebook, too. After you have built an accurate model, you can deploy it anywhere to make predictions and leverage the same integration pipeline for model training. Some examples of model deployment in Kafka environments are:

  • Analytic models (TensorFlow, Keras, H2O and Deeplearning4j) embedded in Kafka Streams microservices

  • Anomaly detection of IoT sensor data with a model embedded into a KSQL UDF

  • RPC communication between Kafka Streams application and model server (TensorFlow Serving)

Python, KSQL, and Jupyter for Prototyping, Demos, and Production Deployments

As you can see, both in theory (Google’s paper Hidden Technical Debt in Machine Learning Systems) and in practice (Uber’s machine learning platform Michelangelo), it is not a simple task to build a scalable, reliable, and performant machine learning infrastructure.

The impedance mismatch between data scientists, data engineers, and production engineers must be resolved in order for machine learning projects to deliver real business value. This requires using the right tools for the job and understanding how to combine them. You can use Python and Jupyter alone for prototyping and demos (Kafka and KSQL might be overkill there, and are not needed if you just want to do fast, simple prototyping on a historical dataset), or combine Python and Jupyter with your whole development lifecycle up to production deployments at scale.

Integration of Kafka event streams and KSQL statements into Jupyter Notebooks allows you to:

  • Use the preferred existing environment of the data scientist (including Python and Jupyter) and combine it with Kafka and KSQL to integrate and continuously process real-time streaming data by using a simple Python wrapper API to execute KSQL queries

  • Easily connect to real-time streaming data instead of just historical batches of data (maybe from the last day, week or month, e.g., coming in via CSV files)

  • Merge different concepts like streaming event-based sensor data coming from Kafka with Python programming concepts like generators or dictionary objects, which you can use with your Python data processing tools or ML frameworks like NumPy, pandas, or scikit-learn

  • Reuse the same logic for integration, preprocessing, and monitoring and move it from your Jupyter Notebook and prototyping or demos to large-scale test and production systems

Python for prototyping and Apache Kafka for a scalable streaming platform are not rival technology stacks. They work together very well, especially if you use “helper tools” like Jupyter Notebooks and KSQL.

Please try it out and let us know your thoughts. How do you leverage the Apache Kafka ecosystem in your machine learning projects?

TensorFlow Vs PyTorch: Comparison of the Machine Learning Libraries

Libraries play an important role when developers decide to work in Machine Learning or Deep Learning research. According to this article, a survey based on a sample of 1,616 ML developers and data scientists found that for every one developer using PyTorch, there are 3.4 developers using TensorFlow. In this article, we list down 10 comparisons between these two Machine Learning libraries.

1 - Origin

PyTorch was developed by Facebook and is based on Torch, while TensorFlow, an open source Machine Learning library developed by Google Brain, is based on the idea of data flow graphs for building models.

2 - Features

TensorFlow has some attractive features, such as TensorBoard, which is a great option for visualising a Machine Learning model, and TensorFlow Serving, a dedicated gRPC server used for deploying models in production. On the other hand, PyTorch has several distinguishing features too, such as dynamic computation graphs, native support for Python, and support for CUDA, which ensures less time running the code and an increase in performance.

3 - Community

TensorFlow is adopted by many researchers in various fields, from academia to business organisations. It has a much bigger community than PyTorch, which means it is easier to find resources and solutions for TensorFlow. There is a vast amount of tutorials, code, and support for TensorFlow, while PyTorch, being the newcomer compared to TensorFlow, lacks these benefits.

4 - Visualisation

Visualisation plays a leading role when presenting any project in an organisation. TensorFlow has TensorBoard for visualising Machine Learning models, which helps during training to spot errors quickly. It is a real-time representation of a model’s graphs, showing not only the graphic representation but also accuracy graphs in real time. PyTorch lacks this eye-catching feature.

5 - Defining Computational Graphs

In TensorFlow, defining a computational graph is a lengthy process, as you have to build and run the computations within sessions. You also have to use additional constructs such as placeholders, variable scoping, etc. On the other hand, PyTorch wins this point with its dynamic computation graphs: the graph is built at every point of execution, and you can manipulate it at run-time.
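
A minimal sketch of the difference, assuming TensorFlow 1.x for the first half:

# TensorFlow 1.x: declare a static graph first, then run it inside a session
import tensorflow as tf

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
c = a + b
with tf.Session() as sess:
    print(sess.run(c, feed_dict={a: 2.0, b: 3.0}))

# PyTorch: the graph is built on the fly as each line executes
import torch

x = torch.tensor(2.0)
y = torch.tensor(3.0)
print(x + y)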

6 - Debugging

Because PyTorch builds its computation graphs dynamically, debugging is a painless process. You can easily use Python debugging tools like pdb or ipdb; for instance, you can put pdb.set_trace() at any line of code, proceed with the execution of further computations, and pinpoint the cause of errors. For TensorFlow, by contrast, you have to use the dedicated TensorFlow debugger tool, tfdbg, which lets you view the internal structure and states of running TensorFlow graphs during training and inference.
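
A minimal sketch of dropping into the debugger mid-computation (the tensor shapes are arbitrary):

import pdb
import torch

x = torch.randn(4, 3)
w = torch.randn(3, 2, requires_grad=True)
pdb.set_trace()  # pause here and inspect x and w interactively before continuing
y = x @ w
print(y)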

7 - Deployment

For now, deployment in TensorFlow is much better supported than in PyTorch. It has the advantage of TensorFlow Serving, a flexible, high-performance serving system for deploying Machine Learning models, designed for production environments. In PyTorch, you can instead use the Python microframework Flask to serve models.

8 - Documentation

The documentation of both frameworks is broadly available, and there are examples and tutorials in abundance for both libraries. You could say it is a tie between the two frameworks.

Click here for TensorFlow documentation and click here for PyTorch documentation.

9 - Serialisation

Serialisation in TensorFlow can be counted as one of the framework’s advantages: you can save your entire graph as a protocol buffer and later load it in other supported languages. PyTorch lacks this feature.

10 - Device Management

By default, TensorFlow maps nearly all of the GPU memory of all GPUs visible to the process, which is a drawback, but it also presumes that you want to run your code on the GPU, so its well-set defaults result in fairly hands-off device management. PyTorch, on the other hand, requires you to keep track of the currently selected GPU, on which all the CUDA tensors you create will be allocated.


TensorFlow Extended (TFX): Machine Learning Pipelines

Speaker: Martin Andrews

Event: Google I/O Recap 2019 Singapore AI - From Model to Device by BigDataX


Further reading about TensorFlow and Machine Learning

Machine Learning In Node.js With TensorFlow.js

Machine Learning A-Z™: Hands-On Python & R In Data Science

TensorFlow is dead, long live TensorFlow!

A Complete Machine Learning Project Walk-Through in Python

Top 18 Machine Learning Platforms For Developers