Introducing TensorFlow Datasets

Public datasets fuel the machine learning research rocket (h/t Andrew Ng), but it’s still too difficult to simply get those datasets into your machine learning pipeline. Every researcher goes through the pain of writing one-off scripts to download and prepare every dataset they work with, which all have different source formats and complexities. Not anymore.

Today, we’re pleased to introduce TensorFlow Datasets (GitHub) which exposes public research datasets as [tf.data.Datasets](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) and as NumPy arrays. It does all the grungy work of fetching the source data and preparing it into a common format on disk, and it uses the [tf.data API](https://www.tensorflow.org/guide/datasets) to build high-performance input pipelines, which are TensorFlow 2.0-ready and can be used with tf.keras models. We’re launching with 29 popular research datasets such as MNIST, Street View House Numbers, the 1 Billion Word Language Model Benchmark, and the Large Movie Reviews Dataset, and will add more in the months to come; we hope that you join in and add a dataset yourself.

tl;dr

# Install: pip install tensorflow-datasets
import tensorflow as tf
import tensorflow_datasets as tfds

mnist_data = tfds.load("mnist")
mnist_train, mnist_test = mnist_data["train"], mnist_data["test"]
assert isinstance(mnist_train, tf.data.Dataset)

Try tfds out in a Colab notebook.

[tfds.load](https://www.tensorflow.org/datasets/api_docs/python/tfds/load) and [DatasetBuilder](https://www.tensorflow.org/datasets/api_docs/python/tfds/core/DatasetBuilder)

Every dataset is exposed as a DatasetBuilder, which knows:

  • Where to download the data from and how to extract it and write it to a standard format ([DatasetBuilder.download_and_prepare](https://www.tensorflow.org/datasets/api_docs/python/tfds/core/DatasetBuilder#download_and_prepare)).
  • How to load it from disk ([DatasetBuilder.as_dataset](https://www.tensorflow.org/datasets/api_docs/python/tfds/core/DatasetBuilder#as_dataset)).
  • And all the information about the dataset, like the names, types, and shapes of all the features, the number of records in each split, the source URLs, citation for the dataset or associated paper, etc. ([DatasetBuilder.info](https://www.tensorflow.org/datasets/api_docs/python/tfds/core/DatasetBuilder#info)).

You can directly instantiate any of the DatasetBuilders or fetch them by string with [tfds.builder](https://www.tensorflow.org/datasets/api_docs/python/tfds/builder):

import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds

# Fetch the dataset directly
mnist = tfds.image.MNIST()
# or by string name
mnist = tfds.builder('mnist')

# Describe the dataset with DatasetInfo
assert mnist.info.features['image'].shape == (28, 28, 1)
assert mnist.info.features['label'].num_classes == 10
assert mnist.info.splits['train'].num_examples == 60000

# Download the data, prepare it, and write it to disk
mnist.download_and_prepare()

# Load data from disk as tf.data.Datasets
datasets = mnist.as_dataset()
train_dataset, test_dataset = datasets['train'], datasets['test']
assert isinstance(train_dataset, tf.data.Dataset)

# And convert the Dataset to NumPy arrays if you'd like
for example in tfds.as_numpy(train_dataset):
  image, label = example['image'], example['label']
assert isinstance(image, np.ndarray)

as_dataset() accepts a batch_size argument which will give you batches of examples instead of one example at a time. For small datasets that fit in memory, you can pass batch_size=-1 to get the entire dataset at once as a tf.Tensor. All tf.data.Datasets can easily be converted to iterables of NumPy arrays using [tfds.as_numpy()](https://www.tensorflow.org/datasets/api_docs/python/tfds/as_numpy).
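For example, here is a minimal sketch (continuing with the mnist builder from the snippet above, after download_and_prepare() has run) that loads the entire train split as NumPy arrays in one step:

# Load the full train split as a single batch of tensors,
# then convert them to NumPy arrays with tfds.as_numpy().
train_data = mnist.as_dataset(split="train", batch_size=-1)
numpy_data = tfds.as_numpy(train_data)
images, labels = numpy_data["image"], numpy_data["label"]
assert images.shape == (60000, 28, 28, 1)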

As a convenience, you can do all the above with [tfds.load](https://www.tensorflow.org/datasets/api_docs/python/tfds/load), which fetches the DatasetBuilder by name, calls download_and_prepare(), and calls as_dataset().

import tensorflow as tf
import tensorflow_datasets as tfds

datasets = tfds.load("mnist")
train_dataset, test_dataset = datasets["train"], datasets["test"]
assert isinstance(train_dataset, tf.data.Dataset)

You can also easily get the [DatasetInfo](https://www.tensorflow.org/datasets/api_docs/python/tfds/core/DatasetInfo) object from tfds.load by passing with_info=True. See the API documentation for all the options.
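For example, a minimal sketch:

# Load the datasets along with their DatasetInfo metadata.
datasets, info = tfds.load("mnist", with_info=True)
assert info.features["label"].num_classes == 10
print(info.splits["train"].num_examples)  # 60000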

Dataset Versioning

Every dataset is versioned (builder.info.version) so that you can rest assured that the data doesn’t change underneath you and that results are reproducible. For now, we guarantee that if the data changes, the version will be incremented.

Note that while we do guarantee the data values and splits are identical given the same version, we do not currently guarantee the ordering of records for the same version.
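For example, a quick sketch of checking the version you are working with:

mnist_builder = tfds.builder("mnist")
print(mnist_builder.info.version)  # e.g. 1.0.0 (the exact value depends on the release)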

Dataset Configuration

Datasets with different variants are configured with named BuilderConfigs. For example, the Large Movie Review Dataset ([tfds.text.IMDBReviews](https://www.tensorflow.org/datasets/datasets#imdb_reviews)) could have different encodings for the input text (for example, plain text, or a character encoding, or a subword encoding). The built-in configurations are listed with the dataset documentation and can be addressed by string, or you can pass in your own configuration.

# See the built-in configs
configs = tfds.text.IMDBReviews.builder_configs
assert "bytes" in configs

# Address a built-in config with tfds.builder
imdb = tfds.builder("imdb_reviews/bytes")
# or when constructing the builder directly
imdb = tfds.text.IMDBReviews(config="bytes")
# or use your own custom configuration
my_encoder = tfds.features.text.ByteTextEncoder(additional_tokens=['hello'])
my_config = tfds.text.IMDBReviewsConfig(
    name="my_config",
    version="1.0.0",
    text_encoder_config=tfds.features.text.TextEncoderConfig(encoder=my_encoder),
)
imdb = tfds.text.IMDBReviews(config=my_config)

See the section on dataset configuration in our documentation on adding a dataset.

Text Datasets and Vocabularies

Text datasets can often be painful to work with because of different encodings and vocabulary files. tensorflow-datasets makes it much easier. It’s shipping with many text tasks and includes three kinds of TextEncoders, all of which support Unicode:

  • ByteTextEncoder for byte/character-level encodings.
  • TokenTextEncoder for word-level encodings based on a vocabulary file.
  • SubwordTextEncoder for subword-level encodings (with the ability to build one targeted at a particular vocabulary size), with byte-level fallback so that it is fully reversible.

The encoders, along with their vocabulary sizes, can be accessed through DatasetInfo:

imdb = tfds.builder("imdb_reviews/subwords8k")

# Get the TextEncoder from DatasetInfo
encoder = imdb.info.features["text"].encoder
assert isinstance(encoder, tfds.features.text.SubwordTextEncoder)

# Encode, decode
ids = encoder.encode("Hello world")
assert encoder.decode(ids) == "Hello world"

# Get the vocabulary size
vocab_size = encoder.vocab_size

Both TensorFlow and TensorFlow Datasets will be working to improve text support even further in the future.

Getting started

Our documentation site is the best place to start using tensorflow-datasets. Here are some additional pointers for getting started:

  • The API documentation for tfds.load, DatasetBuilder, and DatasetInfo.
  • The Colab notebook linked above, which walks through tfds end to end.
  • The tensorflow/datasets GitHub repository, including the guide on adding a dataset of your own.

We expect to be adding datasets in the coming months, and we hope that the community will join in. Open a GitHub Issue to request a dataset, vote on which datasets should be added next, discuss implementation, or ask for help. And Pull Requests very welcome! Add a popular dataset to contribute to the community, or if you have your own data, contribute it to TFDS to make your data famous!

Now that data is easy, happy modeling!

Acknowledgements

We’d like to thank Stefan Webb of Oxford for allowing us to use the tensorflow-datasets PyPI name. Thanks Stefan!

We’d also like to thank Lukasz Kaiser and the Tensor2Tensor project for inspiring and guiding tensorflow/datasets. Thanks Lukasz! T2T will be migrating to tensorflow/datasets soon.

Originally published by TensorFlow at https://medium.com/tensorflow

TensorFlow vs NumPy vs Pure Python: Performance Comparison

Python has a design philosophy that stresses allowing programmers to express concepts readably and in fewer lines of code. This philosophy makes the language suitable for a diverse set of use cases: simple scripts for web, large web applications (like YouTube), scripting language for other platforms (like Blender and Autodesk’s Maya), and scientific applications in several areas, such as astronomy, meteorology, physics, and data science.

It is technically possible to implement scalar and matrix calculations using Python lists. However, this can be unwieldy, and performance is poor when compared to languages suited for numerical computation, such as MATLAB or Fortran, or even some general purpose languages, such as C or C++.

To circumvent this deficiency, several libraries have emerged that maintain Python’s ease of use while lending the ability to perform numerical calculations in an efficient manner. Two such libraries worth mentioning are NumPy (one of the pioneer libraries to bring efficient numerical computation to Python) and TensorFlow (a more recently rolled-out library focused more on deep learning algorithms).

  • NumPy provides support for large multidimensional arrays and matrices along with a collection of mathematical functions to operate on these elements. The project relies on well-known packages implemented in other languages (like Fortran) to perform efficient computations, bringing the user both the expressiveness of Python and a performance similar to MATLAB or Fortran.
  • TensorFlow is an open-source library for numerical computation originally developed by researchers and engineers working at the Google Brain team. The main focus of the library is to provide an easy-to-use API to implement practical machine learning algorithms and deploy them to run on CPUs, GPUs, or a cluster.

But how do these schemes compare? How much faster does the application run when implemented with NumPy instead of pure Python? What about TensorFlow? The purpose of this article is to begin to explore the improvements you can achieve by using these libraries.

To compare the performance of the three approaches, you’ll build a basic regression with native Python, NumPy, and TensorFlow.

Engineering the Test Data

To test the performance of the libraries, you’ll consider a simple two-parameter linear regression problem. The model has two parameters: an intercept term, w_0, and a single coefficient, w_1.

Given N pairs of inputs x and desired outputs d, the idea is to model the relationship between the outputs and the inputs using a linear model y = w_0 + w_1 * x where the output of the model y is approximately equal to the desired output d for every pair (x, d).

Technical Detail: The intercept term, w_0, is technically just a coefficient like w_1, but it can be interpreted as a coefficient that multiplies elements of a vector of 1s.

To generate the training set of the problem, use the following program:

import numpy as np

np.random.seed(444)

N = 10000
sigma = 0.1
noise = sigma * np.random.randn(N)
x = np.linspace(0, 2, N)
d = 3 + 2 * x + noise
d.shape = (N, 1)

We need to prepend a column vector of 1s to x.

X = np.column_stack((np.ones(N, dtype=x.dtype), x))
print(X.shape)
(10000, 2)

This program creates a set of 10,000 inputs x linearly distributed over the interval from 0 to 2. It then creates a set of desired outputs d = 3 + 2 * x + noise, where noise is taken from a Gaussian (normal) distribution with zero mean and standard deviation sigma = 0.1.

By creating x and d in this way, you’re effectively stipulating that the optimal solution for w_0 and w_1 is 3 and 2, respectively.

Xplus = np.linalg.pinv(X)
w_opt = Xplus @ d
print(w_opt)
[[2.99536719]
[2.00288672]]

There are several methods to estimate the parameters w_0 and w_1 that fit a linear model to the training set. One of the most used is ordinary least squares, a well-known solution that estimates w_0 and w_1 by minimizing the sum of the squared errors e = y - d over all training samples.

One way to easily compute the ordinary least squares solution is by using the Moore-Penrose pseudo-inverse of a matrix. This approach stems from the fact that you have X and d and are trying to solve for w_m in the equation d = X @ w_m. (The @ symbol denotes matrix multiplication, which is supported by both NumPy and native Python as of PEP 465 and Python 3.5+.)

Using this approach, we can estimate w_m as w_opt = Xplus @ d, where Xplus is the pseudo-inverse of X, computed with numpy.linalg.pinv. This yields w_0 = 2.9954 and w_1 = 2.0029, which is very close to the expected values of w_0 = 3 and w_1 = 2.

Note: Using w_opt = np.linalg.inv(X.T @ X) @ X.T @ d would yield the same solution.

Although it is possible to use this deterministic approach to estimate the coefficients of the linear model, it is not possible for some other models, such as neural networks. In these cases, iterative algorithms are used to estimate a solution for the parameters of the model.

One of the most-used algorithms is gradient descent, which at a high level consists of updating the parameter coefficients until we converge on a minimized loss (or cost). That is, we have some cost function (often, the mean squared error—MSE), and we compute its gradient with respect to the network’s coefficients (in this case, the parameters w_0 and w_1), considering a step size mu. By performing this update many times (in many epochs), the coefficients converge to a solution that minimizes the cost function.
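Concretely, with err = d - y, each epoch of every implementation below performs the same update (a sketch of the shared math; grad here already points in the descent direction, so it is added to the weights):

grad_0 = (2 / N) * sum(err)
grad_1 = (2 / N) * sum(err * x)
w_0 = w_0 + mu * grad_0
w_1 = w_1 + mu * grad_1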

In the following sections, you’ll build and use gradient descent algorithms in pure Python, NumPy, and TensorFlow. To compare the performance of the three approaches, we’ll look at runtime comparisons on an Intel Core i7 4790K 4.0 GHz CPU.

Gradient Descent in Pure Python

Let’s start with a pure-Python approach as a baseline for comparison with the other approaches. The Python function below estimates the parameters w_0 and w_1 using gradient descent:

import itertools as it

def py_descent(x, d, mu, N_epochs):
    N = len(x)
    f = 2 / N

    # "Empty" predictions, errors, weights, gradients.
    y = [0] * N
    w = [0, 0]
    grad = [0, 0]

    for _ in it.repeat(None, N_epochs):
        # Can't use a generator because we need to
        # access its elements twice.
        err = tuple(i - j for i, j in zip(d, y))
        grad[0] = f * sum(err)
        grad[1] = f * sum(i * j for i, j in zip(err, x))
        w = [i + mu * j for i, j in zip(w, grad)]
        y = (w[0] + w[1] * i for i in x)
    return w

Above, everything is done with Python list comprehensions, slicing syntax, and the built-in sum() and zip() functions. Before running through each epoch, “empty” containers of zeros are initialized for y, w, and grad.

Technical Detail: py_descent above does use itertools.repeat() rather than for _ in range(N_epochs). The former is faster than the latter because repeat() does not need to manufacture a distinct integer for each loop. It just needs to update the reference count to None. The timeit module contains an example.
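If you want to verify this yourself, here is a rough sketch with timeit (the exact numbers will vary by machine, and the difference is small):

import timeit

print(timeit.timeit("for _ in it.repeat(None, 10000): pass",
                    setup="import itertools as it", number=1000))
print(timeit.timeit("for _ in range(10000): pass", number=1000))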

Now, use this to find a solution:

import time

x_list = x.tolist()
d_list = d.squeeze().tolist() # Need 1d lists

# mu is a step size, or scaling factor.

mu = 0.001
N_epochs = 10000

t0 = time.time()
py_w = py_descent(x_list, d_list, mu, N_epochs)
t1 = time.time()

print(py_w)
[2.959859852416156, 2.0329649630002757]

print('Solve time: {:.2f} seconds'.format(round(t1 - t0, 2)))
Solve time: 18.65 seconds

With a step size of mu = 0.001 and 10,000 epochs, we can get a fairly precise estimate of w_0 and w_1. Inside the for-loop, the gradients with respect to the parameters are calculated and used in turn to update the weights, moving in the opposite direction in order to minimize the MSE cost function.

At each epoch, after the update, the output of the model is calculated. The vector operations are performed using list comprehensions. We could have also updated y in-place, but that would not have been beneficial to performance.

The elapsed time of the algorithm is measured using the time library. It takes 18.65 seconds to estimate w_0 = 2.9598 and w_1 = 2.0329. While the timeit library can provide a more exact estimate of runtime by running multiple loops and disabling garbage collection, just viewing a single run with time suffices in this case, as you’ll see shortly.

Using NumPy

NumPy adds support for large multidimensional arrays and matrices, along with a collection of mathematical functions to operate on them. Its operations are optimized to run with blazing speed by relying on the BLAS and LAPACK projects for the underlying implementation.

Using NumPy, consider the following program to estimate the parameters of the regression:

def np_descent(x, d, mu, N_epochs):
    d = d.squeeze()
    N = len(x)
    f = 2 / N

    y = np.zeros(N)
    err = np.zeros(N)
    w = np.zeros(2)
    grad = np.empty(2)

    for _ in it.repeat(None, N_epochs):
        np.subtract(d, y, out=err)
        grad[:] = f * np.sum(err), f * (err @ x)
        w = w + mu * grad
        y = w[0] + w[1] * x
    return w

np_w = np_descent(x, d, mu, N_epochs)
print(np_w)
[2.95985985 2.03296496]

The code block above takes advantage of vectorized operations with NumPy arrays (ndarrays). The only explicit for-loop is the outer loop over which the training routine itself is repeated. List comprehensions are absent here because NumPy’s ndarray type overloads the arithmetic operators to perform array calculations in an optimized way.

You may notice there are a few alternate ways to go about solving this problem. For instance, you could simply use f * err @ X, where X is the 2d array that includes a column vector of ones, rather than our 1d x.

However, this is actually not all that efficient, because it requires a dot product of an entire column of ones with another vector (err), and we know that result will simply be np.sum(err). Similarly, w[0] + w[1] * x wastes less computation than w * X would in this specific case.
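As a quick sanity check (a small sketch using the X, x, d, and N defined earlier), you can verify that the two formulations agree at the first epoch, where y is all zeros:

f = 2 / N
err = d.squeeze()                                   # d - y with y = 0
full = f * (err @ X)                                # gradient via the 2d design matrix
split = np.array([f * np.sum(err), f * (err @ x)])  # the form used in np_descent
assert np.allclose(full, split)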

Let’s look at the timing comparison. As you’ll see below, the timeit module is needed here to get a more precise picture of runtime, as we’re now talking about fractions of a second rather than multiple seconds of runtime:

import timeit

setup = ("from main import x, d, mu, N_epochs, np_descent;"
"import numpy as np")
repeat = 5
number = 5 # Number of loops within each repeat

np_times = timeit.repeat('np_descent(x, d, mu, N_epochs)', setup=setup,
repeat=repeat, number=number)

timeit.repeat() returns a list. Each element is the total time taken to execute n loops of the statement. To get a single estimate of runtime, you can take the average time for a single call from the lower bound of the list of repeats:

print(min(np_times) / number)
0.31947448799983247

Using TensorFlow

TensorFlow is an open-source library for numerical computation originally developed by researchers and engineers working at the Google Brain team.

Using its Python API, TensorFlow’s routines are implemented as a graph of computations to perform. Nodes in the graph represent mathematical operations, and the graph edges represent the multidimensional data arrays (also called tensors) communicated between them.

At runtime, TensorFlow takes the graph of computations and runs it efficiently using optimized C++ code. By analyzing the graph of computations, TensorFlow is able to identify the operations that can be run in parallel. This architecture allows the use of a single API to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device.

Using TensorFlow, consider the following program to estimate the parameters of the regression:

import tensorflow as tf

def tf_descent(X_tf, d_tf, mu, N_epochs):
    N = X_tf.get_shape().as_list()[0]
    f = 2 / N

    w = tf.Variable(tf.zeros((2, 1)), name="w_tf")
    y = tf.matmul(X_tf, w, name="y_tf")
    e = y - d_tf
    grad = f * tf.matmul(tf.transpose(X_tf), e)

    training_op = tf.assign(w, w - mu * grad)
    init = tf.global_variables_initializer()

    with tf.Session() as sess:
        init.run()
        for epoch in range(N_epochs):
            sess.run(training_op)
        opt = w.eval()
    return opt

X_tf = tf.constant(X, dtype=tf.float32, name="X_tf")
d_tf = tf.constant(d, dtype=tf.float32, name="d_tf")

tf_w = tf_descent(X_tf, d_tf, mu, N_epochs)
print(tf_w)
[[2.9598553]
[2.032969 ]]

When you use TensorFlow, the data must be loaded into a special data type called a Tensor. Tensors mirror NumPy arrays in more ways than they are dissimilar.

type(X_tf)
<class 'tensorflow.python.framework.ops.Tensor'>

After the tensors are created from the training data, the graph of computations is defined:

  • First, a variable tensor w is used to store the regression parameters, which will be updated at each iteration.
  • Using w and X_tf, the output y is calculated using a matrix product, implemented with tf.matmul().
  • The error is calculated and stored in the e tensor.
  • The gradients are computed, using the matrix approach, by multiplying the transpose of X_tf by the error tensor e.
  • Finally, the update of the parameters of the regression is implemented with the tf.assign() function. It creates a node that implements batch gradient descent, updating the next step tensor w to w - mu * grad.

It is worth noticing that the code until the training_op creation does not perform any computation. It just creates the graph of the computations to be performed. In fact, even the variables are not initialized yet. To perform the computations, it is necessary to create a session and use it to initialize the variables and run the algorithm to evaluate the parameters of the regression.

There are some different ways to initialize the variables and create the session to perform the computations. In this program, the line init = tf.global_variables_initializer() creates a node in the graph that will initialize the variables when it is run. The session is created in the with block, and init.run() is used to actually initialize the variables. Inside the with block, training_op is run for the desired number of epochs, evaluating the parameters of the regression, which have their final values stored in opt.

Here is the same code-timing structure that was used with the NumPy implementation:

setup = ("from main import X_tf, d_tf, mu, N_epochs, tf_descent;"
"import tensorflow as tf")

tf_times = timeit.repeat("tf_descent(X_tf, d_tf, mu, N_epochs)", setup=setup,
repeat=repeat, number=number)

print(min(tf_times) / number)
1.1982891103994917

It took 1.20 seconds to estimate w_0 = 2.9598553 and w_1 = 2.032969. It is worth noticing that the computation was performed on a CPU and the performance may be improved when run on a GPU.

Lastly, you could have also defined an MSE cost function and passed this to TensorFlow’s gradients() function, which performs automatic differentiation, finding the gradient vector of MSE with regard to the weights:

mse = tf.reduce_mean(tf.square(e), name="mse")
grad = tf.gradients(mse, w)[0]

However, the timing difference in this case is negligible.

Conclusion

The purpose of this article was to perform a preliminary comparison of the performance of a pure Python, a NumPy and a TensorFlow implementation of a simple iterative algorithm to estimate the coefficients of a linear regression problem.

The results for the elapsed time to run the algorithm are summarized in the table below (using the timings reported above, on the CPU mentioned earlier):

| Implementation | Elapsed time |
| --- | --- |
| Pure Python (list comprehensions) | 18.65 s |
| NumPy | 0.32 s |
| TensorFlow (CPU) | 1.20 s |

While the NumPy and TensorFlow solutions are competitive (on CPU), the pure Python implementation is a distant third. While Python is a robust general-purpose programming language, its libraries targeted towards numerical computation will win out any day when it comes to large batch operations on arrays.

While the NumPy example proved quicker by a hair than TensorFlow in this case, it’s important to note that TensorFlow really shines for more complex cases. With our relatively elementary regression problem, using TensorFlow arguably amounts to “using a sledgehammer to crack a nut,” as the saying goes.

With TensorFlow, it is possible to build and train complex neural networks across hundreds or thousands of multi-GPU servers. In a future post, we will cover the setup to run this example in GPUs using TensorFlow and compare the results.


Python Connect MySQL Database

Databases are critical for storing and processing data, even when you are working with a powerful programming language like Python. Ever wondered where all of this data is stored, or fetched from?

In this article, I’ll talk about the same and take you through the following aspects in detail.

  • What is a database?
  • What is MySQLdb?
  • How does Python connect to a database?
  • Creating a Database
  • Database Operations - CRUD

Let’s get started :)

What is a database?

A database is basically a collection of data, structured in such a way that it can easily be retrieved, managed, and accessed in various ways. One of the simplest forms of databases is a text database. Relational databases are the most popular kind of database system; well-known examples include MySQL, PostgreSQL, Oracle Database, and Microsoft SQL Server.

Among all these databases, MySQL is one of the easiest databases to work with. Let me walk you through about this in detail.

What is MySQLdb?

MySQL is an open-source, freely available relational database management system that uses Structured Query Language. (MySQLdb, like the mysql-connector package used in this article, is a Python interface for talking to a MySQL server.) Now one of the most important questions here is: "What is SQL?"

SQL (Structured Query Language) is a standard language for relational databases that allows users to perform various operations on the data: manipulating, creating, dropping, and so on. In a nutshell, SQL allows you to do anything with the data.

Let’s move ahead and dive deep into Python database connection wherein you will learn how to connect with the database.

How does Python connect to a database?

It is very simple to connect Python with a database. The flow is as follows: a connection request is sent from your Python program through MySQL Connector/Python, the database accepts the connection, and a cursor is then used to execute statements and return result data.

Before connecting to the MySQL database, make sure you have the MySQL installer on your computer. It provides a comprehensive set of tools which helps in installing MySQL with the following components:

  • MySQL server
  • All available connectors
  • MySQL Workbench
  • MySQL Notifier
  • Tools for Excel and Microsoft Visual Studio
  • MySQL Sample Databases
  • MySQL Documentation

The installer and step-by-step installation instructions are available from the official MySQL website.

Before proceeding, you should also make sure the MySQL connector package for Python is installed on your computer. Refer to the below commands for installing it from the command prompt and verifying it in PyCharm:

Using Pip:

Command:

pip install mysql-connector

Using PyCharm:

Install the same package through the project interpreter settings (or the built-in terminal), and verify the installation by running a script containing:

import mysql.connector

Output:

C:\Users\Harshit_Kant\PycharmProjects\test1\venv\Scripts\python.exe C:/Users/Harshit_Kant/PycharmProjects/test1/venv/python-db-conn.py

Process finished with exit code 0

Moving on in this article with Python Database Connection let us see the parameters required to connect to the database:

  • Username - the username you use to work with the MySQL server; the default username is root.
  • Password - the password you set when you installed the MySQL database. I am using the password 'password123' here.
  • Host Name - the server name or IP address on which MySQL is running. If it is 'localhost', the IP address is 127.0.0.1.

Let me show you, from a coding perspective, how to connect Python with a MySQL database.

Example:

import mysql.connector
 
mydb=mysql.connector.connect(host="localhost",user="root",passwd="password123")  # using 'host', 'user', 'passwd'
 
print(mydb)

Output:

C:\Users\Harshit_Kant\PycharmProjects\test1\venv\Scripts\python.exe C:/Users/Harshit_Kant/PycharmProjects/test1/venv/python-db-conn.py

<mysql.connector.connection_cext.CMySQLConnection object at 0x000001606D7BD6A0>

Process finished with exit code 0

Explanation: Here 'mydb' is just an instance. From the output, you can clearly see that it has connected to the database.

Next up in Python Database Connection, you will learn how to create a database.

Creating a Database:

Once the database connection is established, you are ready to create your own database which will be acting as a bridge between your python and MySQL server.

Let’s see the implementation part of it.

Example:

import mysql.connector
 
mydb=mysql.connector.connect(host="localhost",user="root",passwd="password123")
mycursor=mydb.cursor()
mycursor.execute("create database harshdb")

Output:

C:/Users/Harshit_Kant/PycharmProjects/test1/venv/python-db-conn.py

Process finished with exit code 0

Explanation:

  • In the above program, I have made use of a cursor, which is basically an object used to communicate with the MySQL server and through which I am able to create my own database.
  • You can see from the output that my database with the name "harshdb" is created. The name is up to you, as you can give any name to your database.

If you want to see the databases in your MySQL server, you can implement the following piece of code in pycharm:

Example :

import mysql.connector
 
mydb=mysql.connector.connect(host="localhost",user="root",passwd="password123")
mycursor=mydb.cursor()
mycursor.execute("show databases")
 
for db in mycursor:
    print(db)

Output:

C:\Users\Harshit_Kant\PycharmProjects\test1\venv\Scripts\python.exe C:/Users/Harshit_Kant/PycharmProjects/test1/venv/python-db-conn.py

('harshdb',)

('information_schema',)

('mysql',)

('performance_schema',)

('sakila',)

('sys',)

('world',)

Process finished with exit code 0

Explanation:

  • The above code lists all of the databases that currently exist on the MySQL server.

Now that you have created your database, let’s dive deep into one of the most important aspects of Python Database Connection by doing few operations in it. Let us understand this in detail.

Database Operations [CRUD]:

There are numerous operations a programmer can perform using databases and SQL in order to have sound knowledge of database programming and MySQL.

I have demonstrated the CRUD operations below

  • Create - an SQL statement used to create a record in the table, or to create the table itself.
  • Read - used for fetching useful information from the database.
  • Update - used for updating the records in the table, or updating the table.
  • Delete - as the name suggests, used for deleting records from the table (the table itself is removed with DROP).

Let us look at each aspect in detail from the coding perspective.

Create Operation:

import mysql.connector
 
mydb=mysql.connector.connect(host="localhost",user="root",passwd="password123",database="harshdb")
 
mycursor=mydb.cursor()
 
mycursor.execute("create table employee(name varchar(250),sal int(20))")

Output:

C:\Users\Harshit_Kant\PycharmProjects\test1\venv\Scripts\python.exe C:/Users/Harshit_Kant/PycharmProjects/test1/venv/python-db-conn.py

Process finished with exit code 0

Explanation:

  • In the above-given program, I have created a table 'employee'.
  • The table employee has two fields, 'name' and 'sal'.
  • Here, the user id "root" and password "password123" are used for accessing harshdb.

In order to see the table which I have created, refer to the following code in python

import mysql.connector
 
mydb=mysql.connector.connect(host="localhost",user="root",passwd="password123",database="harshdb")
mycursor=mydb.cursor()
mycursor.execute("show tables")
 
for tb in mycursor:
    print(tb)

Output:

C:\Users\Harshit_Kant\PycharmProjects\test1\venv\Scripts\python.exe C:/Users/Harshit_Kant/PycharmProjects/test1/venv/python-db-conn.py

('employee',)

Process finished with exit code 0

(Screenshot: the table 'employee' which I have created.)

Now that you have seen how a table is created, let us look at how a user can fetch values from it.

Read Operation:

This particular operation happens in various stages. In order to do that first stage is to populate the table.

Code:

import mysql.connector
 
mydb=mysql.connector.connect(host="localhost",user="root",passwd="password123",database="harshdb")
mycursor=mydb.cursor()
 
sqlformula = "Insert into employee(name,sal) values(%s,%s)"//'values has placeholders
 
employees = [("harshit",200000),("rahul", 30000),("avinash", 40000),("amit", 50000),]//Created an array of emplpoyees
 
 
mycursor.executemany(sqlformula, employees)//Passing the data
 
mydb.commit()//SQL statement used for saving the changes

Output:

C:\Users\Harshit_Kant\PycharmProjects\test1\venv\Scripts\python.exe C:/Users/Harshit_Kant/PycharmProjects/test1/venv/python-db-conn.py

Process finished with exit code 0

In the above code, I have populated the table using an array of employees and an SQL insert statement written from Python.

(Screenshot: the populated 'employee' table. Note that the record 'harshit' appears twice.)

Stage 2: In this stage, we will make use of the "select" SQL statement, where the actual read operation takes place.

  • fetchall() - fetches all the rows from the last executed statement.
  • fetchone() - fetches a single row from the last executed statement.

Code:

import mysql.connector
 
mydb=mysql.connector.connect(host="localhost",user="root",passwd="password123",database="harshdb")
mycursor=mydb.cursor()
 
mycursor.execute("select * from employee")
 
myresult = mycursor.fetchall()
 
for row in myresult:
    print(row)

Output:

('harshit', 200000)

('harshit', 200000)

('rahul', 30000)

('avinash', 40000)

('amit', 50000)

Process finished with exit code 0

Explanation: In the above code we have made use of the function 'fetchall()'. It fetches all the rows from the last executed statement.


Code:

import mysql.connector
 
mydb=mysql.connector.connect(host="localhost",user="root",passwd="password123",database="harshdb")
mycursor=mydb.cursor()
 
mycursor.execute("select name from employee")//selecting the field i want data to be fetched from
 
myresult = mycursor.fetchone()
 
for row in myresult:
    print(row)

Output:

C:\Users\Harshit_Kant\PycharmProjects\test1\venv\Scripts\python.exe C:/Users/Harshit_Kant/PycharmProjects/test1/venv/python-db-conn.py

harshit

Process finished with exit code 0

Explanation: In the above code, I have made use of the function "fetchone()", which fetches a single row from the last executed statement.
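In other words, fetchone() returns a single row as a tuple (or None when no rows are left), so you would typically unpack it rather than loop over it. A minimal sketch:

mycursor.execute("select name, sal from employee")
row = mycursor.fetchone()   # one (name, sal) tuple, or None
if row is not None:
    name, sal = row
    print(name, sal)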

That was all about ‘Read operation’, let’s dive deep into Update operation.

Update Operation:

This SQL statement is used for updating the records in the table. Let’s implement the code and see how the changes are taking place.

Code:

import mysql.connector
 
mydb=mysql.connector.connect(host="localhost",user="root",passwd="password123",database="harshdb")
mycursor=mydb.cursor()
 
sql = "Update employee SET sal = 70000 WHERE name = 'harshit'"
 
mycursor.execute(sql)
 
mydb.commit()

Output:

C:\Users\Harshit_Kant\PycharmProjects\test1\venv\Scripts\python.exe C:/Users/Harshit_Kant/PycharmProjects/test1/venv/python-db-conn.py

Process finished with exit code 0

Explanation: We have updated the 'sal' value of the record 'harshit' in the above-given code. As the database screenshot confirms, the 'sal' value of 'harshit' is now 70000.

This was all about Update operation, moving on with “Python Connect MySQL Database” article we will see the last operation which is ‘delete’.

Delete Operation:

As the name itself justifies, Delete operation is used for the deletion of records from the table. Let’s understand it from a coding perspective.

Code:

import mysql.connector
 
mydb=mysql.connector.connect(host="localhost",user="root",passwd="password123",database="harshdb")
mycursor=mydb.cursor()
 
sql = "DELETE FROM employee  WHERE name = 'harshit'"
 
mycursor.execute(sql)
 
mydb.commit()

Output:

C:\Users\Harshit_Kant\PycharmProjects\test1\venv\Scripts\python.exe C:/Users/Harshit_Kant/PycharmProjects/test1/venv/python-db-conn.py

Process finished with exit code 0

Explanation: In the above code I have deleted the record 'harshit', as it was repeated twice.

As the database screenshot confirms, the record 'harshit' has been deleted. You can also perform other manipulations with the delete operation itself, such as deleting just the salary. I had defined only two fields, so the operations I could perform on this record are limited, but you can create more fields under the same table 'employee', or in any other table you create.
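One practical note: when the value in the WHERE clause comes from user input, it is safer to let the connector substitute it via a placeholder instead of building the SQL string by hand. A minimal sketch:

sql = "DELETE FROM employee WHERE name = %s"
mycursor.execute(sql, ("harshit",))   # the connector escapes the value for you
mydb.commit()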

This brings us to the end of our article on "Python Connect MySQL Database". I hope you are clear on all the concepts related to databases, MySQL, and database operations in Python. Make sure you practice as much as possible and share your experience.

How to write a simple toy database in Python

MySQL, PostgreSQL, Oracle, Redis, and many more, you just name it — databases are a really important piece of technology in the progress of human civilization. Today we can see how valuable data are, and so keeping them safe and stable is where the database comes in!

So we can see how important databases are as well. For quite some time I had been thinking of creating my own toy database just to understand, play around, and experiment with it. As Richard Feynman said:

“What I cannot create, I do not understand.”

So without any further talking, let’s jump into the fun part: coding.

Let’s Start Coding…

For this Toy Database, we’ll use Python (my favorite ❤️). I named this database FooBarDB (I couldn’t find any other name 😉), but you can call it whatever you want!

So first let’s import some necessary Python libraries which are already available in Python Standard Library:

import json
import os

Yes, we only need these two libraries! We need json as our database will be based on JSON, and os for some path related stuff.

Now let’s define the main class FoobarDB with some pretty basic functions, which I’ll explain below.

class FoobarDB(object):
    def __init__(self , location):
        self.location = os.path.expanduser(location)
        self.load(self.location)

    def load(self , location):
        if os.path.exists(location):
            self._load()
        else:
            self.db = {}
        return True

    def _load(self):
        self.db = json.load(open(self.location , "r"))

    def dumpdb(self):
        try:
            json.dump(self.db , open(self.location, "w+"))
            return True
        except:
            return False

Here we defined our main class with an __init__ function. Whenever we create a FoobarDB database, we only need to pass the location of the database. In the __init__ function we take the location parameter and replace ~ or ~user with the user’s home directory so the path works as intended. Finally, we put it in the self.location variable to access it later from the other methods of the class. In the end, we call the load function, passing self.location as an argument.

. . . .
    def load(self , location):
        if os.path.exists(location):
            self._load()
        else:
            self.db = {}
        return True
. . . .

In the next load function we take the location of the database as a parameter. Then we check if the database file exists or not. If it exists, we load it with the _load() function (explained below). Otherwise, we create an empty in-memory JSON object. Finally, we return True on success.

. . . . 

    def _load(self):
        self.db = json.load(open(self.location , "r"))
. . . .

In the _load function, we just simply open the database file from the location stored in self.location. Then we transform it into a JSON object and load it into self.db variable.

    def dumpdb(self):
        try:
            json.dump(self.db , open(self.location, "w+"))
            return True
        except:
            return False

And finally, the dumpdb function: its name says what it does. It takes the in-memory database (actually a JSON object) from the self.db variable and saves it in the database file! It returns True if saved successfully, otherwise returns False.

Make It a Little More Usable… 😉

Wait a minute! 😐 A database is useless if it can’t store and retrieve data, isn’t it? Let’s go and add them also…😎

    def set(self , key , value):
        try:
            self.db[str(key)] = value
            self.dumpdb()
            return True
        except Exception as e:
            print("[X] Error Saving Values to Database : " + str(e))
            return False

    def get(self , key):
        try:
            return self.db[key]
        except KeyError:
            print("No Value Can Be Found for " + str(key))  
            return False

    def delete(self , key):
        if not key in self.db:
            return False
        del self.db[key]
        self.dumpdb()
        return True

The set function is to add data to the database. As our database is a simple key-value based database, we’ll only take a key and value as an argument.

First, we’ll try to add the key and value to the database and then save the database. If everything goes right it will return True. Otherwise, it will print an error message and return False. (We don’t want it to crash and erase our data every time an error occurs 😎).

    def get(self, key):
        try:
            return self.db[key]
        except KeyError:
            return False

get is a simple function, we take key as an argument and try to return the value linked to the key from the database. Otherwise False is returned with a message.

    def delete(self , key):
        if not key in self.db:
            return False
        del self.db[key]
        self.dumpdb()
        return True

The delete function is used to delete a key as well as its value from the database. First, we make sure the key is present in the database; if not, we return False. Otherwise, we delete the key with the built-in del, which automatically deletes its value. Next, we save the database and return True.

Now you might think, what if I’ve created a large database and want to reset it? In theory, we can use delete — but it’s not practical, and it’s also very time-consuming! ⏳ So we can create a function to do this task…

    def resetdb(self):
        self.db={}
        self.dumpdb()
        return True

Here’s the function to reset the database, resetdb! It’s so simple: first, we re-assign our in-memory database to an empty JSON object, and then we just save it! And that’s it! Our database is now again clean shaven.

Finally… 🎉

That’s it, friends! We have created our own Toy Database! 🎉🎉 Actually, FoobarDB is just a simple demo of a database. It’s like a cheap DIY toy: you can improve it any way you want. You can also add many other functions according to your needs.
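Here is a small usage sketch of what we built (the file name is just an example):

db = FoobarDB("~/foobar.json")   # creates or loads the database file

db.set("name", "FooBarDB")
print(db.get("name"))            # -> FooBarDB

db.delete("name")
db.resetdb()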

Full Source is Here 👉 bauripalash/foobardb

I hope, you enjoyed it! Let me know your suggestions, ideas or mistakes I’ve made in the comments below! 👇

Thank you! See you soon!