TensorFlow 2.0 Full Tutorial - Python Neural Networks for Beginners
Learn how to use TensorFlow 2.0 in this full course for beginners. This Python neural network tutorial series teaches how to use TensorFlow 2.0 and demonstrates how to create neural networks with Python and TensorFlow 2.0.
⭐️ Course Contents ⭐️
⌨️ (0:00:00) What is a Neural Network?
⌨️ (0:26:34) Loading & Looking at Data
⌨️ (0:39:38) Creating a Model
⌨️ (0:56:48) Using the Model to Make Predictions
⌨️ (1:07:11) Text Classification P1
⌨️ (1:28:37) What is an Embedding Layer? Text Classification P2
⌨️ (1:42:30) Training the Model - Text Classification P3
⌨️ (1:52:35) Saving & Loading Models - Text Classification P4
⌨️ (2:07:09) How to Install TensorFlow GPU on Linux
Machine Learning With Python, Jupyter, KSQL, and TensorFlow. This post focuses on how the Kafka ecosystem can help solve the impedance mismatch between data scientists, data engineers and production engineers.
Building a scalable, reliable, and performant machine learning (ML) infrastructure is not easy. It takes much more effort than just building an analytic model with Python and your favorite machine learning framework.
Uber, which already runs their scalable and framework-independent machine learning platform Michelangelo for many use cases in production, wrote a good summary:
When Michelangelo started, the most urgent and highest impact use cases were some very high scale problems, which led us to build around Apache Spark (for large-scale data processing and model training) and Java (for low latency, high throughput online serving). This structure worked well for production training and deployment of many models but left a lot to be desired in terms of overhead, flexibility, and ease of use, especially during early prototyping and experimentation [where Notebooks and Python shine].
Uber expanded Michelangelo “to serve any kind of Python model from any source to support other Machine Learning and Deep Learning frameworks like PyTorch and TensorFlow [instead of just using Spark for everything].”
So why did Uber (and many other tech companies) build their own platforms and framework-independent machine learning infrastructure?
The posts How to Build and Deploy Scalable Machine Learning in Production with Apache Kafka and Using Apache Kafka to Drive Cutting-Edge Machine Learning describe the benefits of leveraging the Apache Kafka ® ecosystem as a central, scalable, and mission-critical nervous system. It allows real-time data ingestion, processing, model deployment, and monitoring in a reliable and scalable way.
This post focuses on how the Kafka ecosystem can help solve the impedance mismatch between data scientists, data engineers, and production engineers. By leveraging it to build your own scalable machine learning infrastructure and also make your data scientists happy, you can solve the same problems for which Uber built its own ML platform, Michelangelo.
Based on what I’ve seen in the field, an impedance mismatch between data scientists, data engineers, and production engineers is the main reason why companies struggle to bring analytic models into production to add business value.
The following diagram illustrates the different required steps and corresponding roles as part of the impedance mismatch in a machine learning lifecycle:
Impedance mismatch between model development and model deployment
Data scientists love Python, period. Therefore, the majority of machine learning/deep learning frameworks focus on Python APIs. Both the most stable and most cutting-edge APIs, as well as the majority of examples and tutorials, use Python. In addition to Python support, there is typically support for other programming languages, including JavaScript for web integration and Java for platform integration, though oftentimes with fewer features and less maturity. No matter what other platforms are supported, chances are very high that your data scientists will build and train their analytic models with Python.
There is an impedance mismatch between model development using Python and its tool stack, and a scalable, reliable data platform with the low-latency, high-throughput, zero-data-loss, and 24/7 availability requirements needed for data ingestion, preprocessing, model deployment, and monitoring at scale. In practice, Python is not the best-known technology for these requirements. However, it is a great client for a data platform like Apache Kafka.
The problem is that writing the machine learning source code to train an analytic model with Python and the machine learning framework of your choice is just a very small part of a real-world machine learning infrastructure. You need to think about the whole model lifecycle. The following image represents this hidden technical debt in machine learning systems (showing how small the “ML code” part is):
Thus, you need to train and deploy the model built to a scalable production environment in order to reliably make use of it. This can either be built natively around the Kafka ecosystem, or you could use Kafka just for ingestion into another storage and processing cluster such as HDFS or AWS S3 with Spark. There are many tradeoffs between Kafka, Spark, and several other scalable infrastructures, but that discussion is out of scope for this post. For now, we’ll focus on Kafka.
Different solutions in the industry solve certain parts of the impedance mismatch between data scientists, data engineers, and production engineers. Let’s take a look at some of these options:
One common option is to export the trained model from your framework of choice, such as a TensorFlow model. Depending on the framework, the output can be text files, Java source code, or binary files. For example, TensorFlow generates a model artifact with Protobuf, JSON, and other files. No matter what format the output of your machine learning framework is, it can be embedded into applications to use for predictions via the framework’s API (e.g., you can load a TensorFlow model from a Java application through TensorFlow’s Java API).
While all these solutions help data scientists, data engineers, and production engineers work better together, there are underlying challenges within the hidden debts:
Data collection (i.e., integration) and preprocessing need to run at scale
Configuration needs to be shared and automated for continuous builds and integration tests
The serving and monitoring infrastructure needs to fit into your overall enterprise architecture and tool stack
So how can the Kafka ecosystem help here?
In many cases, it is best to provide experts with the tools they like and know well. The challenge is to combine the different toolsets and still build an integrated system, as well as a continuous, scalable machine learning workflow. Therefore, Kafka is not competitive but complementary to the discussed alternatives when it comes to solving the impedance mismatch between the data scientist and developer.
The data engineer builds a scalable integration pipeline using Kafka as infrastructure and Python for integration and preprocessing statements. The data scientist can build their model with Python or any other preferred tool. The production engineer gets the analytic models (either manually or through any automated, continuous integration setup) from the data scientist and embeds them into their Kafka application to deploy it in production. Or, the team works together and builds everything with Java and a framework like Deeplearning4j.
Any option can pair well with Apache Kafka. Pick the pieces you need, whether it’s Kafka core for data transportation, Kafka Connect for data integration, or Kafka Streams/KSQL for data preprocessing. Many components can be used for both model training and model inference. Write once and use in both scenarios as shown in the following diagram:
Leveraging the Apache Kafka ecosystem for a machine learning infrastructure
Monitoring the complete environment in real time and at scale is also a common task for Kafka. A huge benefit is that you only build a highly reliable and scalable pipeline once but use it for both parts of a machine learning infrastructure. And you can use it in any environment: in the cloud, in on-prem datacenters, or at the edges where IoT devices are.
Say you wanted to build one integration pipeline from MQTT to Kafka with KSQL for data preprocessing and use Kafka Connect for data ingestion into HDFS, AWS S3, or Google Cloud Storage, where you do the model training. The same integration pipeline, or at least parts of it, can be reused for model inference. New MQTT input data can directly be used in real time to make predictions.
We just explained various alternatives to solving the impedance mismatch between data scientists and software engineers in Kafka environments. Now, let’s discuss one specific option in the next section, which is probably the most convenient for data scientists: leveraging Kafka from a Jupyter Notebook with KSQL statements and combining it with TensorFlow and Keras to train a neural network.
Data scientists use tools like Jupyter Notebooks to analyze, transform, enrich, filter, and process data. The preprocessed data is then used to train analytic models with machine learning/deep learning frameworks like TensorFlow.
However, some data scientists do not even know “bread-and-butter” concepts of software engineers, such as version control systems like GitHub or continuous integration tools like Jenkins.
This raises the question of how to combine the Python experience of data scientists with the benefits of Apache Kafka as a battle-tested, highly scalable data processing and streaming platform.
Apache Kafka and KSQL for Data Scientists and Data Engineers
Kafka offers integration options that can be used with Python, like Confluent’s Python Client for Apache Kafka or Confluent REST Proxy for HTTP integration. But this is not really a convenient way for data scientists who are used to quickly and interactively analyzing and preprocessing data before model training and evaluation, where rapid prototyping is the typical workflow.
KSQL enables data scientists to take a look at Kafka event streams and implement continuous stream processing from their well-known and loved Python environments like Jupyter by writing simple SQL-like statements for interactive analysis and data preprocessing.
The following Python example executes an interactive query from a Kafka stream leveraging the open source framework ksql-python, which adds a Python layer on top of KSQL’s REST interface. A few snippets of the notebook code using KSQL are shown later in this post.
The result of such a KSQL query is a Python generator object, which you can easily process with other Python libraries. This feels much more Python native and is analogous to NumPy, pandas, scikit-learn and other widespread Python libraries.
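To get a feel for that generator-style processing, here is a minimal sketch in which a handmade Python generator with invented column names stands in for a real ksql-python result set:

```python
# A plain Python generator standing in for the result of a ksql-python
# query; each item mimics one row of a KSQL SELECT as a dict.
# (The column names and values are invented for illustration.)
def fake_ksql_result():
    rows = [
        {"Id": 1, "Amount": 9.99, "Class": "0"},
        {"Id": 2, "Amount": 520.00, "Class": "1"},
        {"Id": 3, "Amount": 42.50, "Class": "0"},
    ]
    for row in rows:
        yield row

# Consume the generator lazily, exactly as you would a real result set.
large_payments = [r["Id"] for r in fake_ksql_result() if r["Amount"] > 50]
print(large_payments)  # [2]
```

Because the result is an ordinary generator, it plugs directly into list comprehensions, pandas constructors, or NumPy array builders.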
Similarly to rapid prototyping with these libraries, you can do interactive queries and data preprocessing with ksql-python. Check out the KSQL quick start and KSQL recipes to understand how to write a KSQL query to easily filter, transform, enrich, or aggregate data. While KSQL is running continuous queries, you can also use it for interactive analysis and use the LIMIT keyword, as in ANSI SQL, if you just want to get a specific number of rows.
So what’s the big deal? You understand that KSQL can feel Python-native with the ksql-python library, but why use KSQL instead of or in addition to your well-known and favorite Python libraries for analyzing and processing data?
The key difference is that these KSQL queries can also be deployed in production afterwards. KSQL offers you all the features from Kafka under the hood, like high scalability, reliability, and failover handling. The same KSQL statement that you use in your Jupyter Notebook for interactive analysis and preprocessing can scale to millions of messages per second, fault tolerant, with zero data loss and exactly-once semantics. This is very important and valuable for bringing together the Python-loving data scientist with the highly scalable and reliable production infrastructure.
Just to be clear: KSQL + Python is not the all-rounder for every data engineering task, and it does not replace the existing Python toolset. But it is a great option in the toolbox of data scientists and data engineers, and it adds new possibilities like getting real-time updates of incoming information as the source data changes or updating a deployed model with a new and improved version.
Jupyter Notebook for Fraud Detection With Python, KSQL, and TensorFlow/Keras
Let’s now take a look at a detailed example using the combination of KSQL and Python. It involves advanced code examples using ksql-python and other widespread components from Python’s machine learning ecosystem, like NumPy, pandas, TensorFlow, and Keras.
The use case is fraud detection for credit card payments. We use a test dataset from Kaggle as a foundation to train an unsupervised autoencoder to detect anomalies and potential fraud in payments. The focus of this example is not just model training, but the whole machine learning infrastructure, including data ingestion, data preprocessing, model training, model deployment, and monitoring. All of this needs to be scalable, reliable, and performant.
For the full running example and more details, see the documentation.
Let’s take a look at a few snippets of the Jupyter Notebook.
Connection to KSQL server and creation of a KSQL stream using Python:
from ksql import KSQLAPI
client = KSQLAPI('http://localhost:8088')
client.create_stream(table_name='creditcardfraud_source',
columns_type=['Id bigint', 'Timestamp varchar', 'User varchar', 'Time int', 'V1 double', 'V2 double', 'V3 double', 'V4 double', 'V5 double', 'V6 double', 'V7 double', 'V8 double', 'V9 double', 'V10 double', 'V11 double', 'V12 double', 'V13 double', 'V14 double', 'V15 double', 'V16 double', 'V17 double', 'V18 double', 'V19 double', 'V20 double', 'V21 double', 'V22 double', 'V23 double', 'V24 double', 'V25 double', 'V26 double', 'V27 double', 'V28 double', 'Amount double', 'Class string'],
topic='creditcardfraud_source',
value_format='DELIMITED')
Preprocessing incoming payment information using Python:
Filter columns that are not needed
Filter messages where column "class" is empty
Change the data format to Avro for convenient further processing
client.create_stream_as(table_name='creditcardfraud_preprocessed_avro',
select_columns=['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V6', 'V7', 'V8', 'V9', 'V10', 'V11', 'V12', 'V13', 'V14', 'V15', 'V16', 'V17', 'V18', 'V19', 'V20', 'V21', 'V22', 'V23', 'V24', 'V25', 'V26', 'V27', 'V28', 'Amount', 'Class'],
src_table='creditcardfraud_source',
conditions='Class IS NOT NULL',
kafka_topic='creditcardfraud_preprocessed_avro',
value_format='AVRO')
Some more examples for possible data wrangling and preprocessing with KSQL:
CREATE STREAM creditcardfraud_preprocessed_avro WITH (VALUE_FORMAT='AVRO', KAFKA_TOPIC='creditcardfraud_preprocessed_avro') AS SELECT Time, V1 , V2 , V3 , V4 , V5 , V6 , V7 , V8 , V9 , V10 , V11 , V12 , V13 , V14 , V15 , V16 , V17 , V18 , V19 , V20 , V21 , V22 , V23 , V24 , V25 , V26 , V27 , V28 , Amount , Class FROM creditcardfraud_source WHERE Class IS NOT NULL;
SELECT Id, MASK_LEFT(User, 2) FROM creditcardfraud_source;
SELECT Id, IFNULL(Class, -1) FROM creditcardfraud_source;
CREATE STREAM creditcardfraud_per_user WITH (VALUE_FORMAT='AVRO', KAFKA_TOPIC='creditcardfraud_preprocessed_avro') AS SELECT Time, V1 , V2 , V3 , V4 , V5 , V6 , V7 , V8 , V9 , V10 , V11 , V12 , V13 , V14 , V15 , V16 , V17 , V18 , V19 , V20 , V21 , V22 , V23 , V24 , V25 , V26 , V27 , V28 , Amount , Class FROM creditcardfraud_enhanced c INNER JOIN USERS u on c.userid = u.userid WHERE V1 > 5 AND V2 IS NOT NULL AND u.CITY LIKE 'Premium%';
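For intuition, here is how the first three of those operations (null filtering, masking, and null replacement) look on plain Python dictionaries. The records and the simplified masking rule are invented for illustration; KSQL's actual MASK_LEFT has richer, type-aware masking semantics:

```python
# Toy records standing in for rows of the creditcardfraud_source stream.
records = [
    {"Id": 1, "User": "alice", "Class": "0"},
    {"Id": 2, "User": "bob",   "Class": None},
    {"Id": 3, "User": "carol", "Class": "1"},
]

# WHERE Class IS NOT NULL
not_null = [r for r in records if r["Class"] is not None]

# MASK_LEFT(User, 2), simplified: mask the two leftmost characters
masked = [{"Id": r["Id"], "User": "xx" + r["User"][2:]} for r in records]

# IFNULL(Class, -1)
filled = [{"Id": r["Id"],
           "Class": r["Class"] if r["Class"] is not None else -1}
          for r in records]

print([r["Id"] for r in not_null])  # [1, 3]
print(masked[0]["User"])            # xxice
print(filled[1]["Class"])           # -1
```

The difference is that the KSQL versions run continuously against live Kafka topics, while the dict versions only see one in-memory batch.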
The Jupyter Notebook contains the full example. We use Python + KSQL for integration, data preprocessing, and interactive analysis and combine them with various other libraries from a common Python machine learning tool stack for prototyping and model training:
Arrays/matrices processing with NumPy and pandas
ML-specific processing (split train/test, etc.) with scikit-learn
Interactive analysis through data visualisations with Matplotlib
ML training + evaluation with TensorFlow and Keras
Model inference and visualisation are done in the Jupyter notebook, too. After you have built an accurate model, you can deploy it anywhere to make predictions and leverage the same integration pipeline for model training. Some examples of model deployment in Kafka environments are:
Analytic models (TensorFlow, Keras, H2O and Deeplearning4j) embedded in Kafka Streams microservices
Anomaly detection of IoT sensor data with a model embedded into a KSQL UDF
RPC communication between Kafka Streams application and model server (TensorFlow Serving)
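The first of these options, a model embedded directly in the stream processor, can be sketched in a few lines of plain Python. Everything here is a stand-in: an in-memory list plays the Kafka topic, and a trivial threshold function plays the loaded TensorFlow/Keras model:

```python
# Stub "model": flags a payment as fraud above a fixed amount. In a real
# deployment this would be a loaded TensorFlow/Keras autoencoder scoring
# the reconstruction error of each event.
def predict(event):
    return "fraud" if event["Amount"] > 1000 else "ok"

# In-memory stand-in for a Kafka topic of preprocessed payment events.
topic = [
    {"Id": 1, "Amount": 12.30},
    {"Id": 2, "Amount": 4999.00},
    {"Id": 3, "Amount": 87.10},
]

# The "stream processor": consume each event, score it with the embedded
# model, and emit the result. In a Kafka Streams microservice or KSQL UDF,
# this loop is the processing topology.
scores = [(event["Id"], predict(event)) for event in topic]
print(scores)  # [(1, 'ok'), (2, 'fraud'), (3, 'ok')]
```

The appeal of this pattern is that inference happens inside the scalable stream processor itself, with no extra RPC hop to a model server.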
As you can see, both in theory (Google’s paper Hidden Technical Debt in Machine Learning Systems) and in practice (Uber’s machine learning platform Michelangelo), it is not a simple task to build a scalable, reliable, and performant machine learning infrastructure.
The impedance mismatch between data scientists, data engineers, and production engineers must be resolved in order for machine learning projects to deliver real business value. This requires using the right tool for the job and understanding how to combine them. You can use Python and Jupyter for prototyping and demos (often Kafka and KSQL might be overhead here and not needed if you just want to do fast, simple prototyping on a historical dataset) or combine Python and Jupyter with your whole development lifecycle up to production deployments at scale.
Integration of Kafka event streams and KSQL statements into Jupyter Notebooks allows you to:
Use the preferred existing environment of the data scientist (including Python and Jupyter) and combine it with Kafka and KSQL to integrate and continuously process real-time streaming data by using a simple Python wrapper API to execute KSQL queries
Easily connect to real-time streaming data instead of just historical batches of data (maybe from the last day, week or month, e.g., coming in via CSV files)
Merge different concepts like streaming event-based sensor data coming from Kafka with Python programming constructs like generators or dictionaries, which you can use with your Python data processing tools or ML frameworks like NumPy, pandas, or scikit-learn
Reuse the same logic for integration, preprocessing, and monitoring and move it from your Jupyter Notebook and prototyping or demos to large-scale test and production systems
Python for prototyping and Apache Kafka for a scalable streaming platform are not rival technology stacks. They work together very well, especially if you use “helper tools” like Jupyter Notebooks and KSQL.
Please try it out and let us know your thoughts. How do you leverage the Apache Kafka ecosystem in your machine learning projects?
Python has a design philosophy that stresses allowing programmers to express concepts readably and in fewer lines of code. This philosophy makes the language suitable for a diverse set of use cases: simple scripts for web, large web applications (like YouTube), scripting language for other platforms (like Blender and Autodesk’s Maya), and scientific applications in several areas, such as astronomy, meteorology, physics, and data science.
It is technically possible to implement scalar and matrix calculations using Python lists. However, this can be unwieldy, and performance is poor when compared to languages suited for numerical computation, such as MATLAB or Fortran, or even some general purpose languages, such as C or C++.
To circumvent this deficiency, several libraries have emerged that maintain Python’s ease of use while lending the ability to perform numerical calculations in an efficient manner. Two such libraries worth mentioning are NumPy (one of the pioneer libraries to bring efficient numerical computation to Python) and TensorFlow (a more recently rolled-out library focused more on deep learning algorithms).
But how do these schemes compare? How much faster does the application run when implemented with NumPy instead of pure Python? What about TensorFlow? The purpose of this article is to begin to explore the improvements you can achieve by using these libraries.
To compare the performance of the three approaches, you’ll build a basic regression with native Python, NumPy, and TensorFlow.
Engineering the Test Data
To test the performance of the libraries, you’ll consider a simple two-parameter linear regression problem. The model has two parameters: an intercept term, w_0, and a single coefficient, w_1.
Given N pairs of inputs x and desired outputs d, the idea is to model the relationship between the outputs and the inputs using a linear model y = w_0 + w_1 * x, where the output of the model y is approximately equal to the desired output d for every pair (x, d).
Technical Detail: The intercept term, w_0, is technically just a coefficient like w_1, but it can be interpreted as a coefficient that multiplies elements of a vector of 1s.
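A quick NumPy check of that interpretation: prepending a column of 1s to x makes the matrix product X @ w identical to w_0 + w_1 * x:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])
w = np.array([3.0, 2.0])  # w_0 = 3, w_1 = 2

# Prepend a column of 1s so that w_0 multiplies a vector of ones.
X = np.column_stack((np.ones_like(x), x))

print(X @ w)            # [3. 5. 7.]
print(w[0] + w[1] * x)  # [3. 5. 7.]
```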
To generate the training set of the problem, use the following program:
import numpy as np

np.random.seed(444)

N = 10000
sigma = 0.1
noise = sigma * np.random.randn(N)
x = np.linspace(0, 2, N)
d = 3 + 2 * x + noise
d.shape = (N, 1)

# We need to prepend a column vector of 1s to x.
X = np.column_stack((np.ones(N, dtype=x.dtype), x))
print(X.shape)
(10000, 2)
This program creates a set of 10,000 inputs x linearly distributed over the interval from 0 to 2. It then creates a set of desired outputs d = 3 + 2 * x + noise, where noise is taken from a Gaussian (normal) distribution with zero mean and standard deviation sigma = 0.1.
By creating x and d in this way, you’re effectively stipulating that the optimal solution for w_0 and w_1 is 3 and 2, respectively.
Xplus = np.linalg.pinv(X)
w_opt = Xplus @ d
print(w_opt)
[[2.99536719]
[2.00288672]]
There are several methods to estimate the parameters w_0 and w_1 to fit a linear model to the training set. One of the most used is ordinary least squares, which is a well-known solution for the estimation of w_0 and w_1 in order to minimize the square of the error e, given by the summation of y - d for every training sample.
One way to easily compute the ordinary least squares solution is by using the Moore-Penrose pseudo-inverse of a matrix. This approach stems from the fact that you have X and d and are trying to solve for w_m in the equation d = X @ w_m. (The @ symbol denotes matrix multiplication, which is supported by both NumPy and native Python as of PEP 465 and Python 3.5+.)
Using this approach, we can estimate w_m using w_opt = Xplus @ d, where Xplus is given by the pseudo-inverse of X, which can be calculated using numpy.linalg.pinv. This results in w_0 = 2.9954 and w_1 = 2.0029, which is very close to the expected values of w_0 = 3 and w_1 = 2.
Note: Using w_opt = np.linalg.inv(X.T @ X) @ X.T @ d would yield the same solution.
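You can verify that equivalence directly. This sketch regenerates a small synthetic dataset (50 points rather than the 10,000 above) so that it stands alone:

```python
import numpy as np

# Small synthetic linear dataset: d = 3 + 2x + noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 2, 50)
d = (3 + 2 * x + 0.1 * rng.standard_normal(50)).reshape(-1, 1)
X = np.column_stack((np.ones_like(x), x))

# Pseudo-inverse solution vs. the normal-equations solution.
w_pinv = np.linalg.pinv(X) @ d
w_normal = np.linalg.inv(X.T @ X) @ X.T @ d

print(np.allclose(w_pinv, w_normal))  # True
```

For well-conditioned problems the two coincide; the pseudo-inverse is the more robust choice when X.T @ X is close to singular.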
Although it is possible to use this deterministic approach to estimate the coefficients of the linear model, it is not possible for some other models, such as neural networks. In these cases, iterative algorithms are used to estimate a solution for the parameters of the model.
One of the most-used algorithms is gradient descent, which at a high level consists of updating the parameter coefficients until we converge on a minimized loss (or cost). That is, we have some cost function (often the mean squared error, MSE), and we compute its gradient with respect to the network’s coefficients (in this case, the parameters w_0 and w_1), considering a step size mu. By performing this update many times (over many epochs), the coefficients converge to a solution that minimizes the cost function.
In the following sections, you’ll build and use gradient descent algorithms in pure Python, NumPy, and TensorFlow. To compare the performance of the three approaches, we’ll look at runtime comparisons on an Intel Core i7 4790K 4.0 GHz CPU.
Gradient Descent in Pure Python
Let’s start with a pure-Python approach as a baseline for comparison with the other approaches. The Python function below estimates the parameters w_0 and w_1 using gradient descent:
import itertools as it

def py_descent(x, d, mu, N_epochs):
    N = len(x)
    f = 2 / N

    # "Empty" predictions, errors, weights, gradients.
    y = [0] * N
    w = [0, 0]
    grad = [0, 0]

    for _ in it.repeat(None, N_epochs):
        # Can't use a generator because we need to
        # access its elements twice.
        err = tuple(i - j for i, j in zip(d, y))
        grad[0] = f * sum(err)
        grad[1] = f * sum(i * j for i, j in zip(err, x))
        w = [i + mu * j for i, j in zip(w, grad)]
        y = (w[0] + w[1] * i for i in x)

    return w
Above, everything is done with Python list comprehensions, slicing syntax, and the built-in sum() and zip() functions. Before running through each epoch, “empty” containers of zeros are initialized for y, w, and grad.
Technical Detail: py_descent above uses itertools.repeat() rather than for _ in range(N_epochs). The former is faster than the latter because repeat() does not need to manufacture a distinct integer for each loop; it just needs to update the reference count to None. The timeit module contains an example.
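A minimal way to observe the difference yourself (absolute timings vary by machine, so treat the printed numbers as indicative only):

```python
import itertools as it
import timeit

# Time an empty-body loop driven by itertools.repeat() vs. range().
t_repeat = timeit.timeit(lambda: [None for _ in it.repeat(None, 10_000)],
                         number=200)
t_range = timeit.timeit(lambda: [None for _ in range(10_000)],
                        number=200)

print(t_repeat, t_range)  # repeat() is typically the slightly faster of the two
```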
Now, use this to find a solution:
import timex_list = x.tolist()
d_list = d.squeeze().tolist() # Need 1d listsmu
is a step size, or scaling factor.mu = 0.001
N_epochs = 10000t0 = time.time()
py_w = py_descent(x_list, d_list, mu, N_epochs)
t1 = time.time()print(py_w)
[2.959859852416156, 2.0329649630002757]print('Solve time: {:.2f} seconds'.format(round(t1 - t0, 2)))
Solve time: 18.65 seconds
With a step size of mu = 0.001 and 10,000 epochs, we can get a fairly precise estimate of w_0 and w_1. Inside the for-loop, the gradients with respect to the parameters are calculated and used in turn to update the weights, moving in the opposite direction in order to minimize the MSE cost function.
At each epoch, after the update, the output of the model is calculated. The vector operations are performed using list comprehensions. We could have also updated y in-place, but that would not have been beneficial to performance.
The elapsed time of the algorithm is measured using the time library. It takes 18.65 seconds to estimate w_0 = 2.9598 and w_1 = 2.0329. While the timeit library can provide a more exact estimate of runtime by running multiple loops and disabling garbage collection, just viewing a single run with time suffices in this case, as you’ll see shortly.
Using NumPy
NumPy adds support for large multidimensional arrays and matrices, along with a collection of mathematical functions to operate on them. The operations are optimized to run with blazing speed by relying on the BLAS and LAPACK projects for the underlying implementation.
Using NumPy, consider the following program to estimate the parameters of the regression:
def np_descent(x, d, mu, N_epochs):
    d = d.squeeze()
    N = len(x)
    f = 2 / N

    y = np.zeros(N)
    err = np.zeros(N)
    w = np.zeros(2)
    grad = np.empty(2)

    for _ in it.repeat(None, N_epochs):
        np.subtract(d, y, out=err)
        grad[:] = f * np.sum(err), f * (err @ x)
        w = w + mu * grad
        y = w[0] + w[1] * x

    return w
np_w = np_descent(x, d, mu, N_epochs)
print(np_w)
[2.95985985 2.03296496]
The code block above takes advantage of vectorized operations with NumPy arrays (ndarrays). The only explicit for-loop is the outer loop over which the training routine itself is repeated. List comprehensions are absent here because NumPy’s ndarray type overloads the arithmetic operators to perform array calculations in an optimized way.
You may notice that there are a few alternate ways to go about solving this problem. For instance, you could simply use f * err @ X, where X is the 2d array that includes a column vector of ones, rather than our 1d x.
However, this is actually not all that efficient, because it requires a dot product of an entire column of ones with another vector (err), and we know that result will simply be np.sum(err). Similarly, w[0] + w[1] * x involves less wasted computation than w * X in this specific case.
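The claim about the column of ones is easy to verify: the dot product of err with a vector of 1s is just its sum, so computing it explicitly is wasted work:

```python
import numpy as np

rng = np.random.default_rng(1)
err = rng.standard_normal(10_000)
ones = np.ones_like(err)

# A dot product with a vector of ones reduces to a plain sum.
print(np.allclose(ones @ err, np.sum(err)))  # True
```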
Let’s look at the timing comparison. As you’ll see below, the timeit module is needed here to get a more precise picture of runtime, as we’re now talking about fractions of a second rather than multiple seconds of runtime:
import timeitsetup = ("from main import x, d, mu, N_epochs, np_descent;"
"import numpy as np")
repeat = 5
number = 5 # Number of loops within each repeatnp_times = timeit.repeat('np_descent(x, d, mu, N_epochs)', setup=setup,
repeat=repeat, number=number)
timeit.repeat()
returns a list. Each element is the total time taken to execute n loops of the statement. To get a single estimate of runtime, you can take the average time for a single call from the lower bound of the list of repeats:
print(min(np_times) / number)
0.31947448799983247

Using TensorFlow
TensorFlow is an open-source library for numerical computation originally developed by researchers and engineers working at the Google Brain team.
Using its Python API, TensorFlow’s routines are implemented as a graph of computations to perform. Nodes in the graph represent mathematical operations, and the graph edges represent the multidimensional data arrays (also called tensors) communicated between them.
At runtime, TensorFlow takes the graph of computations and runs it efficiently using optimized C++ code. By analyzing the graph of computations, TensorFlow is able to identify the operations that can be run in parallel. This architecture allows the use of a single API to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device.
Using TensorFlow, consider the following program to estimate the parameters of the regression:
import tensorflow as tf

def tf_descent(X_tf, d_tf, mu, N_epochs):
    N = X_tf.get_shape().as_list()[0]
    f = 2 / N

    w = tf.Variable(tf.zeros((2, 1)), name="w_tf")
    y = tf.matmul(X_tf, w, name="y_tf")
    e = y - d_tf
    grad = f * tf.matmul(tf.transpose(X_tf), e)

    training_op = tf.assign(w, w - mu * grad)
    init = tf.global_variables_initializer()

    with tf.Session() as sess:
        init.run()
        for epoch in range(N_epochs):
            sess.run(training_op)
        opt = w.eval()

    return opt
X_tf = tf.constant(X, dtype=tf.float32, name="X_tf")
d_tf = tf.constant(d, dtype=tf.float32, name="d_tf")

tf_w = tf_descent(X_tf, d_tf, mu, N_epochs)
print(tf_w)
[[2.9598553]
[2.032969 ]]
When you use TensorFlow, the data must be loaded into a special data type called a Tensor. Tensors mirror NumPy arrays in more ways than they are dissimilar.
type(X_tf)
<class 'tensorflow.python.framework.ops.Tensor'>
After the tensors are created from the training data, the graph of computations is defined:
w is used to store the regression parameters, which will be updated at each iteration.
Using w and X_tf, the output y is calculated using a matrix product, implemented with tf.matmul().
The error is computed and stored in the e tensor.
The gradient is computed by multiplying the transpose of X_tf by the e tensor.
Batch gradient descent is implemented with the tf.assign() function. It creates a node that updates the next-step tensor w to w - mu * grad.
It is worth noticing that the code up until the training_op creation does not perform any computation. It just creates the graph of the computations to be performed. In fact, even the variables are not initialized yet. To perform the computations, it is necessary to create a session and use it to initialize the variables and run the algorithm to evaluate the parameters of the regression.
There are some different ways to initialize the variables and create the session to perform the computations. In this program, the line init = tf.global_variables_initializer() creates a node in the graph that will initialize the variables when it is run. The session is created in the with block, and init.run() is used to actually initialize the variables. Inside the with block, training_op is run for the desired number of epochs, evaluating the parameters of the regression, which have their final value stored in opt.
Here is the same code-timing structure that was used with the NumPy implementation:
setup = ("from main import X_tf, d_tf, mu, N_epochs, tf_descent;"
         "import tensorflow as tf")

tf_times = timeit.repeat("tf_descent(X_tf, d_tf, mu, N_epochs)", setup=setup,
                         repeat=repeat, number=number)

print(min(tf_times) / number)
1.1982891103994917
It took 1.20 seconds to estimate w_0 = 2.9598553 and w_1 = 2.032969. It is worth noticing that the computation was performed on a CPU, and the performance may be improved when run on a GPU.
Lastly, you could have also defined an MSE cost function and passed it to TensorFlow’s gradients() function, which performs automatic differentiation, finding the gradient vector of MSE with respect to the weights:
mse = tf.reduce_mean(tf.square(e), name="mse")
grad = tf.gradients(mse, w)[0]
However, the timing difference in this case is negligible.
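Whether the gradient comes from the hand-derived formula or from automatic differentiation, it should agree with a finite-difference estimate. Here is a NumPy sanity check of the analytic MSE gradient (2/N) * X.T @ (y - d) used throughout this article; the dataset is regenerated locally so the snippet stands alone:

```python
import numpy as np

# Small synthetic linear dataset, as in the article.
rng = np.random.default_rng(2)
N = 100
x = np.linspace(0, 2, N)
X = np.column_stack((np.ones_like(x), x))
d = (3 + 2 * x + 0.1 * rng.standard_normal(N)).reshape(-1, 1)
w = np.array([[0.5], [0.5]])

def mse(w):
    e = X @ w - d
    return float(np.mean(e ** 2))

# Analytic gradient of the MSE with respect to w.
grad = (2 / N) * X.T @ (X @ w - d)

# Central finite-difference estimate, one coordinate at a time.
eps = 1e-6
fd = np.zeros_like(w)
for i in range(2):
    wp, wm = w.copy(), w.copy()
    wp[i, 0] += eps
    wm[i, 0] -= eps
    fd[i, 0] = (mse(wp) - mse(wm)) / (2 * eps)

print(np.allclose(grad, fd, atol=1e-4))  # True
```

This is the same kind of check tf.gradients() effectively spares you from writing by hand.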
Conclusion
The purpose of this article was to perform a preliminary comparison of the performance of pure Python, NumPy, and TensorFlow implementations of a simple iterative algorithm to estimate the coefficients of a linear regression problem.
The results for the elapsed time to run the algorithm are summarized in the table below:

Implementation      Elapsed time
Pure Python         18.65 s
NumPy               0.32 s
TensorFlow (CPU)    1.20 s
While the NumPy and TensorFlow solutions are competitive (on CPU), the pure Python implementation is a distant third. While Python is a robust general-purpose programming language, its libraries targeted towards numerical computation will win out any day when it comes to large batch operations on arrays.
While the NumPy example proved quicker by a hair than TensorFlow in this case, it’s important to note that TensorFlow really shines for more complex cases. With our relatively elementary regression problem, using TensorFlow arguably amounts to “using a sledgehammer to crack a nut,” as the saying goes.
With TensorFlow, it is possible to build and train complex neural networks across hundreds or thousands of multi-GPU servers. In a future post, we will cover the setup to run this example in GPUs using TensorFlow and compare the results.