The 10 Important Updates from TensorFlow 2.0

In this article, we'll see 10 important updates from TensorFlow 2.0. TensorFlow 2.0 will be simple and easy to use for all users on all platforms.

TensorFlow 2.0 alpha has now been released. The framework has had a significant impact on the deep learning community. Practitioners, researchers, and developers have loved the framework and have adopted it like never before. It is easily one of the main reasons behind the jump-start of all the super cool deep learning enabled applications that we get to see today. With that being said, TensorFlow 1.x has its cons too (like many other frameworks). As Martin Wicke (Software Engineer from the TensorFlow team) said during TF Dev Summit '19 -

We've learned a lot since 1.0.

With all the lessons learned from its wide user base and GitHub issues, the TensorFlow team released the TensorFlow 2.0 alpha, which comes with a significant number of important changes for the betterment of performance, user experience, and so on. It enables rapid prototyping and incorporates many modern deep learning practices. In this article, you will get to study some of these changes through concise implementations.

Note that the updates discussed here are the most significant ones according to the author. You will need some previous TensorFlow and Keras experience in order to follow along with this article.

Installation and a demo dataset

Updating to TensorFlow 2.0 is as simple as running the following line of code from a Jupyter Notebook:

!pip install tensorflow==2.0.0-alpha0

The GPU variant can be installed in the same way (it requires CUDA to be set up beforehand):

!pip install tensorflow-gpu==2.0.0-alpha0

You can find more about the installation process here.

Some of the updates that you will be studying include code implementations. In those cases, you will need a dataset. For this article, you will be using the Adult dataset from the UCI Archive.

import pandas as pd

columns = ["Age", "WorkClass", "fnlwgt", "Education", "EducationNum",
        "MaritalStatus", "Occupation", "Relationship", "Race", "Gender",
        "CapitalGain", "CapitalLoss", "HoursPerWeek", "NativeCountry", "Income"]

data = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data',
                    header=None,
                    names=columns)

data.head()

Let's do some basic data preprocessing and then set up the data splits in an 80:20 ratio:

from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
import numpy as np

# Label Encode
le = LabelEncoder()
data = data.apply(le.fit_transform)

# Segregate data features & convert into NumPy arrays
X = data.iloc[:, 0:-1].values
y = data['Income'].values

# Split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7)

By now, you should have a working environment with TensorFlow 2.0 installed and a dataset loaded into your workspace. You can now proceed towards the updates.

1. Eager execution by default

In TensorFlow 2.0, you no longer need to create a session and run the computational graph within that. Eager execution is enabled by default in the 2.0 release so that you can build your models and run them instantly. You can choose to disable the eager execution like so:

tf.compat.v1.disable_eager_execution() (provided tensorflow is imported with the tf alias)

Here's a little code-based comparison that shows this difference -
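
As a minimal sketch of that difference (the 1.x-style snippet relies on the tf.compat.v1 compatibility module and is commented out so that the eager part can run on its own):

import tensorflow as tf

# TensorFlow 1.x style: build a graph first, then evaluate it inside a Session
# tf.compat.v1.disable_eager_execution()
# a = tf.compat.v1.placeholder(tf.float32)
# b = tf.compat.v1.placeholder(tf.float32)
# c = a + b
# with tf.compat.v1.Session() as sess:
#     print(sess.run(c, feed_dict={a: 2.0, b: 3.0}))  # 5.0

# TensorFlow 2.0 style: eager execution runs the operation immediately
x = tf.constant(2.0)
y = tf.constant(3.0)
print(x + y)  # tf.Tensor(5.0, shape=(), dtype=float32)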

2. tf.function and AutoGraph

While eager execution enables imperative programming, when it comes to distributed training, full-scale optimization, and production environments, TensorFlow 1.x-style graph execution has its advantages over eager execution. In TensorFlow 2.0, you retain graph-based execution, but in a more flexible way. This is achieved with [tf.function](https://www.tensorflow.org/alpha/tutorials/eager/tf_function) and [AutoGraph](https://www.tensorflow.org/alpha/guide/autograph).

tf.function allows you to define TensorFlow graphs with Python-style syntax via its AutoGraph feature. AutoGraph supports a good range of Python constructs, including if-statements, for-loops, while-loops, iterators, and so on. However, there are limitations. You can find the complete list of currently supported features here. Below is an example that shows you how easy it is to define a TensorFlow graph with just a decorator.

import tensorflow as tf

# Define the forward pass
@tf.function
def single_layer(x, y):
    return tf.nn.relu(tf.matmul(x, y))

# Generate random data drawn from a uniform distribution
x = tf.random.uniform((2, 3))
y = tf.random.uniform((3, 5))

single_layer(x, y)
<tf.Tensor: id=73, shape=(2, 5), dtype=float32, numpy=
array([[0.5779363 , 0.11255255, 0.26296678, 0.12809312, 0.23484911],
       [0.5932371 , 0.1793559 , 0.2845083 , 0.23249313, 0.21367362]],
      dtype=float32)>

Notice that you did not have to create any sessions or placeholders to run the function single_layer(). This is one of the nifty features of tf.function. Under the hood, it does all the necessary optimizations so that your code runs faster.
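
As a small additional sketch of AutoGraph's control-flow handling, the following function uses an ordinary Python for-loop and if-statement, and tf.function converts them into graph operations (the function and values here are made up for illustration):

# Sum only the even numbers in a tensor using plain Python control flow
@tf.function
def sum_even(items):
    s = 0
    for c in items:
        if c % 2 > 0:
            continue
        s += c
    return s

print(sum_even(tf.constant([10, 12, 15, 20])))  # tf.Tensor(42, shape=(), dtype=int32)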

3. tf.variable_scope no longer needed

In TensorFlow 1.x, to be able to use and reuse the variables of tf.layers, you had to use tf.variable_scope blocks. But this is no longer needed in TensorFlow 2.0. With keras as the central high-level API of TensorFlow 2.0, all the layers you create can easily be put into a tf.keras.Sequential definition. This makes the code much easier to read, and you get to keep track of the variables and losses as well.

Here's an example:

# Define the model
model = tf.keras.Sequential([
    tf.keras.layers.Dropout(rate=0.2, input_shape=X_train.shape[1:]),
    tf.keras.layers.Dense(units=64, activation='relu'),
    tf.keras.layers.Dropout(rate=0.2),
    tf.keras.layers.Dense(units=64, activation='relu'),
    tf.keras.layers.Dropout(rate=0.2),
    tf.keras.layers.Dense(units=1, activation='sigmoid')
])

# Get the output probabilities
out_probs = model(X_train.astype(np.float32), training=True)
print(out_probs)
tf.Tensor(
[[1.        ]
 [0.12573627]
 [1.        ]
 ...
 [1.        ]
 [1.        ]
 [1.        ]], shape=(26048, 1), dtype=float32)

In the above example, you passed the training data through the model just to get the raw output probabilities. Notice that it is just a forward pass. You can, of course, go ahead and train your model -

model.compile(loss='binary_crossentropy', optimizer='adam')

model.fit(X_train, y_train,
              validation_data=(X_test, y_test),
              epochs=5, batch_size=64)
Train on 26048 samples, validate on 6513 samples
Epoch 1/5
26048/26048 [==============================] - 2s 62us/sample - loss: 79.5270 - val_loss: 0.7142
Epoch 2/5
26048/26048 [==============================] - 1s 48us/sample - loss: 2.0096 - val_loss: 0.5894
Epoch 3/5
26048/26048 [==============================] - 1s 47us/sample - loss: 0.8750 - val_loss: 0.5761
Epoch 4/5
26048/26048 [==============================] - 1s 49us/sample - loss: 0.6650 - val_loss: 0.5629
Epoch 5/5
26048/26048 [==============================] - 1s 47us/sample - loss: 0.6885 - val_loss: 0.5539

You can get a list of the model's trainable parameters in a layer by layer manner like so -

# Model's trainable parameters in a layer by layer fashion
model.trainable_variables
[<tf.Variable 'dense_12/kernel:0' shape=(14, 64) dtype=float32, numpy=
 array([[-1.48688853e-02,  2.74527162e-01,  2.58149177e-01, ...,
          6.91905916e-02, -2.60141104e-01, -2.56884694e-02],
        ...,
        [ 3.43604088e-02, -4.77244258e-02, -2.74687082e-01, ...,
         -8.34510028e-02, -1.03761226e-01,  8.88096690e-02]],
       dtype=float32)>,
 <tf.Variable 'dense_12/bias:0' shape=(64,) dtype=float32, numpy=
 array([0., 0., 0., ..., 0., 0., 0.], dtype=float32)>,
 <tf.Variable 'dense_13/kernel:0' shape=(64, 64) dtype=float32, numpy=
 array([[ 0.20200957,  0.03036232,  0.11040972, ..., -0.21020778,
          0.17196609, -0.03736575],
        ...,
        [-0.09544714,  0.08534966, -0.06500863, ...,  0.04508607,
         -0.17440501,  0.1134396 ]], dtype=float32)>,
 <tf.Variable 'dense_13/bias:0' shape=(64,) dtype=float32, numpy=
 array([0., 0., 0., ..., 0., 0., 0.], dtype=float32)>,
 <tf.Variable 'dense_14/kernel:0' shape=(64, 1) dtype=float32, numpy=
 array([[ 0.17874134],
        [ 0.06660989],
        ...,
        [ 0.01873708]], dtype=float32)>,
 ]
4. Custom layers made very easy

In machine learning research or even in industrial applications, there is often a need for writing custom layers to cater to specific use cases. TensorFlow 2.0 makes it super easy to write a custom layer and use it along with the existing layers. You can also customize the forward pass of your model in any way you want.

In order to create a custom layer, the easiest option is to extend the Layer class from tf.keras.layers and then define the layer accordingly. You will create a custom layer and then define its forward computations. Running help(tf.keras.layers.Layer) tells you what you need to specify in order to get this done.

Following that guidance, you will -

  • Define the constructor with the number of outputs
  • In the build() method you will add the weights for your layer
  • Finally in the call() method you will define the forward pass by chaining matrix multiplication and relu() together
class MyDenseLayer(tf.keras.layers.Layer):
    # Define the constructor
    def __init__(self, num_outputs):
        super(MyDenseLayer, self).__init__()
        self.num_outputs = num_outputs
    # Define the build function to add the weights
    def build(self, input_shape):
        self.kernel = self.add_variable("kernel",
                                    shape=[input_shape[-1],
                                           self.num_outputs])
    # Define the forward pass
    def call(self, input):
        matmul = tf.matmul(input, self.kernel)
        return tf.nn.relu(matmul)

# Initialize the layer with 10 output units
layer = MyDenseLayer(10)
# Call the layer on a random input (this builds the layer's weights)
layer(tf.random.uniform((10,3)))
# Display the trainable parameters of the layer
print(layer.trainable_variables)
[<tf.Variable 'my_dense_layer_7/kernel:0' shape=(3, 10) dtype=float32, numpy=
array([[ 0.43613756,  0.21344548,  0.37803996,  0.65583944,  0.11884308,
         0.13909656,  0.30802298,  0.5313586 ,  0.04967308,  0.32889426],
       [ 0.1680265 , -0.59944266, -0.4014195 ,  0.14887196,  0.07071263,
         0.37862527, -0.5822403 , -0.5963166 ,  0.3106798 ,  0.05353856],
       [-0.44345278, -0.23122305, -0.62959856, -0.43062705,  0.13194847,
        -0.60124606, -0.62745696,  0.12254918, -0.09806103, -0.45324165]],
      dtype=float32)>]

You can also compose multiple layers by extending the Model class from tf.keras, as shown in the next section. You can find more about composing models here.
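
As a quick sketch, the MyDenseLayer defined above can also be dropped straight into a tf.keras.Sequential model like any built-in layer:

# Mix the custom layer with built-in Keras layers
model_with_custom_layer = tf.keras.Sequential([
    MyDenseLayer(10),
    tf.keras.layers.Dense(units=1, activation='sigmoid')
])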

5. Flexibility in model training

TensorFlow can use automatic differentiation to compute the gradients of the loss function with respect to the model parameters. tf.GradientTape creates a tape within a context manager; TensorFlow uses it to record every computation that happens inside that context so that the gradients can be calculated afterwards. To understand this, let's define a model in a lower-level way by extending the tf.keras.Model class.

from tensorflow.keras import Model

class CustomModel(Model):
    def __init__(self):
        super(CustomModel, self).__init__()
        self.do1 = tf.keras.layers.Dropout(rate=0.2, input_shape=(14,))
        self.fc1 = tf.keras.layers.Dense(units=64, activation='relu')
        self.do2 = tf.keras.layers.Dropout(rate=0.2)
        self.fc2 = tf.keras.layers.Dense(units=64, activation='relu')
        self.do3 = tf.keras.layers.Dropout(rate=0.2)
        self.out = tf.keras.layers.Dense(units=1, activation='sigmoid')

    def call(self, x):
        x = self.do1(x)
        x = self.fc1(x)
        x = self.do2(x)
        x = self.fc2(x)
        x = self.do3(x)
        return self.out(x)

model = CustomModel()

Notice that the topology of this model is exactly the same as the one you defined earlier. To be able to train this model using automatic differentiation, you need to define the loss function and the optimizer differently -

loss_func = tf.keras.losses.BinaryCrossentropy()
optimizer = tf.keras.optimizers.Adam()

You will now define the metrics that will be used to measure the performance of the network during its training. By performance, the model's loss and accuracy are meant here.

# Average the loss across the batch size within an epoch
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_acc = tf.keras.metrics.BinaryAccuracy(name='train_acc')

valid_loss = tf.keras.metrics.Mean(name='test_loss')
valid_acc = tf.keras.metrics.BinaryAccuracy(name='valid_acc')

tf.data provides utility methods to define input data pipelines. This is particularly useful when you are dealing with a large volume of data.

You will now define the data generator, which will generate batches of data during the model's training.

X_train, X_test = X_train.astype(np.float32), X_test.astype(np.float32)
y_train, y_test = y_train.astype(np.int64), y_test.astype(np.int64)
y_train, y_test = y_train.reshape(-1, 1), y_test.reshape(-1, 1)

# Batches of 64
train_ds = tf.data.Dataset.from_tensor_slices((X_train, y_train)).batch(64)
test_ds = tf.data.Dataset.from_tensor_slices((X_test, y_test)).batch(64)

You are now ready to train the model using tf.GradientTape. First, you will define a method that trains the model on the batches you just defined using tf.data.Dataset. You will also wrap the model training steps with the tf.function decorator to take advantage of the speedup it offers in the computation.

Model training and validation

# Train the model
@tf.function
def model_train(features, labels):
    # Define the GradientTape context
    with tf.GradientTape() as tape:
        # Get the probabilities
        predictions = model(features)
        # Calculate the loss
        loss = loss_func(labels, predictions)
    # Get the gradients
    gradients = tape.gradient(loss, model.trainable_variables)
    # Update the weights
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    train_loss(loss)
    train_acc(labels, predictions)
# Validating the model
@tf.function
def model_validate(features, labels):
    predictions = model(features)
    t_loss = loss_func(labels, predictions)

    valid_loss(t_loss)
    valid_acc(labels, predictions)

Use the above two methods to train and validate the model for 5 epochs.

for epoch in range(5):
    for features, labels in train_ds:
        model_train(features, labels)

    for test_features, test_labels in test_ds:
        model_validate(test_features, test_labels)

    template = 'Epoch {}, train_loss: {}, train_acc: {}, valid_loss: {}, valid_acc: {}'
    print(template.format(epoch+1,
                         train_loss.result(),
                         train_acc.result()*100,
                         valid_loss.result(),
                         valid_acc.result()*100))
Epoch 1, train_loss: 9.8155517578125, train_acc: 66.32754516601562, valid_loss: 2.8762073516845703, valid_acc: 78.96514892578125
Epoch 2, train_loss: 10.235926628112793, train_acc: 67.04353332519531, valid_loss: 3.508544921875, valid_acc: 79.0572738647461
Epoch 3, train_loss: 8.876679420471191, train_acc: 67.97962951660156, valid_loss: 4.440890789031982, valid_acc: 78.7348403930664
Epoch 4, train_loss: 8.136384963989258, train_acc: 68.46015167236328, valid_loss: 3.812603235244751, valid_acc: 73.58360290527344
Epoch 5, train_loss: 7.779866695404053, train_acc: 68.70469665527344, valid_loss: 3.80180025100708, valid_acc: 74.73975372314453

This example is inspired by a similar one from the TensorFlow 2.0 authors.

6. TensorFlow datasets

A separate module named tensorflow_datasets lets you work with datasets in an elegant way. You already saw tf.data pipelines in action in the earlier example. In this section, you will see how you can load the MNIST dataset in just the way you want.

You can install the tensorflow_datasets library with pip. Once it is installed, you are ready to go. It provides several utility functions to help you flexibly prepare your dataset construction pipeline. You can learn more about these functions here and here. You will now see how you can build a data input pipeline to load in the MNIST dataset.

import tensorflow_datasets as tfds

# You can fetch the DatasetBuilder class by string
mnist_builder = tfds.builder("mnist")

# Download the dataset
mnist_builder.download_and_prepare()

# Construct a tf.data.Dataset: train and test
ds_train, ds_test = mnist_builder.as_dataset(split=[tfds.Split.TRAIN, tfds.Split.TEST])

You can ignore the warning. Notice how elegantly tensorflow_datasets handled the pipeline.

# Prepare batches of 128 from the training set
ds_train = ds_train.batch(128)

# Load in the dataset in the simplest way possible
for features in ds_train:
    image, label = features["image"], features["label"]

You can now display the first image from the collection of images you loaded in. Note that tensorflow_datasets works in eager mode and in a graph based setting as well.

import matplotlib.pyplot as plt
%matplotlib inline

# You can convert a TensorFlow tensor to a NumPy array
# just by calling .numpy() on it
plt.imshow(image[0].numpy().reshape(28, 28), cmap=plt.cm.binary)
plt.show()

7. Automatic mixed precision policy

The mixed precision policy was proposed by NVIDIA last year; you can find the original paper here. The brief idea behind the mixed precision policy is to use a mixture of half precision (FP16) and full precision (FP32) and take advantage of the best of both worlds. It has shown amazing results in the training of very deep neural networks (both in terms of time and score).

If you are on a CUDA-enabled GPU environment (a Volta-generation card or a Tesla T4, for example) and you installed the GPU variant of TensorFlow 2.0, you can instruct TensorFlow to train in mixed precision like so -

import os

os.environ['TF_ENABLE_AUTO_MIXED_PRECISION'] = '1'

This will automatically cast the operations of a TensorFlow graph accordingly. You will be able to see a good amount of boost in your model's performance. You can also optimize TensorFlow core operations with mixed precision policy. Check this article to know more about this.
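
For context, here is a rough sketch of where the flag fits into an ordinary training script; the model below is only a placeholder, and the actual speed-up depends on your GPU supporting FP16 math:

import os

# Set the flag before building the model so the graph rewrite can take effect
os.environ['TF_ENABLE_AUTO_MIXED_PRECISION'] = '1'

import tensorflow as tf

# Placeholder model matching the 14-feature dataset used in this article
model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=64, activation='relu', input_shape=(14,)),
    tf.keras.layers.Dense(units=1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer='adam')
# model.fit(X_train, y_train, batch_size=64, epochs=5)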

8. Distributed training

TensorFlow 2.0 makes it super easy to distribute the training process across multiple GPUs. This is particularly useful for production purposes, when you have to meet very heavy loads. It is as easy as putting your model training block inside a with block.

First, you specify a distribution strategy like so:

mirrored_strategy = tf.distribute.MirroredStrategy()

A mirrored strategy creates one replica per GPU and the model variables are equally mirrored across GPUs. You can now use the defined strategy like the following:

with mirrored_strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
    model.compile(loss='mse', optimizer='sgd')
    model.fit(X_train, y_train,
             validation_data=(X_test, y_test),
             batch_size=128,
             epochs=10)

Note that the above piece of code will only be useful if you have multiple GPUs configured on a single system. There are a number of distribution strategies you can configure. You can find more about it here.

9. TensorBoard within Jupyter Notebook

This is probably the most exciting part of this update. You can visualize the model training directly within your Jupyter Notebook via TensorBoard. The new TensorBoard is loaded with a lot of exciting features like memory profiling, viewing image data including confusion matrix, conceptual model graph and so on. You can find more about this here.

In this section, you will configure your environment so that TensorBoard is displayed within the Jupyter Notebook. You will first have to load the tensorboard.notebook extension -

%load_ext tensorboard.notebook

You will now define the TensorBoard callback using the tf.keras.callbacks module.

from datetime import datetime
import os

# Make a directory to keep the training logs
os.mkdir("logs")

# Set the callback
logdir = "logs"
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=logdir)

Rebuild the model using the Sequential API of tf.keras -

# Define the model
model = tf.keras.Sequential([
    tf.keras.layers.Dropout(rate=0.2, input_shape=X_train.shape[1:]),
    tf.keras.layers.Dense(units=64, activation='relu'),
    tf.keras.layers.Dropout(rate=0.2),
    tf.keras.layers.Dense(units=64, activation='relu'),
    tf.keras.layers.Dropout(rate=0.2),
    tf.keras.layers.Dense(units=1, activation='sigmoid')
])

# Compile the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

The train and test sets were modified earlier for different uses, so it is a good idea to create the splits once again -

# Split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7)

You are all ready to train the model -

# The TensorBoard extension
%tensorboard --logdir logs/
# Pass the TensorBoard callback you defined
model.fit(X_train, y_train,
         validation_data=(X_test, y_test),
         batch_size=64,
         epochs=10,
         callbacks=[tensorboard_callback],
         verbose=False)

The TensorBoard dashboard should be loaded in your Jupyter Notebook, and you should be able to trace the training and validation metrics.

10. Swift for TensorFlow

Despite all its incredible success, one saddening thing about Python is that it is slow. To help researchers, practitioners, and even beginners, the TensorFlow team has developed a version of TensorFlow for Swift. Although it is not as production-ready as the Python variant, it certainly has the potential. Swift allows for more low-level interaction and advanced compilation. This is where you will be able to find everything related to TensorFlow's Swift variant.

How to get started with Python for Deep Learning and Data Science

A step-by-step guide to setting up Python for Deep Learning and Data Science for a complete beginner

You can code your own Data Science or Deep Learning project in just a couple of lines of code these days. This is not an exaggeration; many programmers out there have done the hard work of writing tons of code for us to use, so that all we need to do is plug-and-play rather than write code from scratch.

You may have seen some of this code on Data Science / Deep Learning blog posts. Perhaps you might have thought: “Well, if it’s really that easy, then why don’t I try it out myself?”

If you’re a beginner to Python and you want to embark on this journey, then this post will guide you through your first steps. A common complaint I hear from complete beginners is that it’s pretty difficult to set up Python. How do we get everything started in the first place so that we can plug-and-play Data Science or Deep Learning code?

This post will guide you through in a step-by-step manner how to set up Python for your Data Science and Deep Learning projects. We will:

  • Set up Anaconda and Jupyter Notebook
  • Create Anaconda environments and install packages (code that others have written to make our lives tremendously easy) like tensorflow, keras, pandas, scikit-learn and matplotlib.

Once you’ve set up the above, you can build your first neural network to predict house prices in this tutorial here:

Build your first Neural Network to predict house prices with Keras

Setting up Anaconda and Jupyter Notebook

The main programming language we are going to use is called Python, which is the most common programming language used by Deep Learning practitioners.

The first step is to download Anaconda, which you can think of as a platform for you to use Python “out of the box”.

Visit this page: https://www.anaconda.com/distribution/ and scroll down to see this:

This tutorial is written specifically for Windows users, but the instructions for users of other Operating Systems are not all that different. Be sure to click on “Windows” as your Operating System (or whatever OS that you are on) to make sure that you are downloading the correct version.

This tutorial will be using Python 3, so click the green Download button under “Python 3.7 version”. A pop up should appear for you to click “Save” into whatever directory you wish.

Once it has finished downloading, just go through the setup step by step as follows:

Click Next

Click “I Agree”

Click Next

Choose a destination folder and click Next

Click Install with the default options, and wait for a few moments as Anaconda installs

Click Skip as we will not be using Microsoft VSCode in our tutorials

Click Finish, and the installation is done!

Once the installation is done, go to your Start Menu and you should see some newly installed software:

You should see this on your start menu

Click on Anaconda Navigator, which is a one-stop hub to navigate the apps we need. You should see a front page like this:

Anaconda Navigator Home Screen

Click on ‘Launch’ under Jupyter Notebook, which is the second panel on my screen above. Jupyter Notebook allows us to run Python code interactively on the web browser, and it’s where we will be writing most of our code.

A browser window should open up with your directory listing. I’m going to create a folder on my Desktop called “Intuitive Deep Learning Tutorial”. If you navigate to the folder, your browser should look something like this:

Navigating to a folder called Intuitive Deep Learning Tutorial on my Desktop

On the top right, click on New and select “Python 3”:

Click on New and select Python 3

A new browser window should pop up like this.

Browser window pop-up

Congratulations — you’ve created your first Jupyter notebook! Now it’s time to write some code. Jupyter notebooks allow us to write snippets of code and then run those snippets without running the full program. This helps us perhaps look at any intermediate output from our program.

To begin, let’s write code that will display some words when we run it. This function is called print. Copy and paste the code below into the grey box on your Jupyter notebook:

print("Hello World!")

Your notebook should look like this:

Entering in code into our Jupyter Notebook

Now, press Alt-Enter on your keyboard to run that snippet of code:

Press Alt-Enter to run that snippet of code

You can see that Jupyter notebook has displayed the words “Hello World!” on the display panel below the code snippet! The number 1 has also filled in the square brackets, meaning that this is the first code snippet that we’ve run thus far. This will help us to track the order in which we have run our code snippets.

Instead of Alt-Enter, note that you can also click Run when the code snippet is highlighted:

Click Run on the panel

If you wish to create new grey blocks to write more snippets of code, you can do so under Insert.

Jupyter Notebook also allows you to write normal text instead of code. Click on the drop-down menu that currently says “Code” and select “Markdown”:

Now, our grey box that is tagged as Markdown will not have square brackets beside it. If you write some text in this grey box now and press Alt-Enter, the text will be rendered as plain text like this:

If we write text in our grey box tagged as markdown, pressing Alt-Enter will render it as plain text.

There are some other features that you can explore. But now we’ve got Jupyter notebook set up for us to start writing some code!

Setting up Anaconda environment and installing packages

Now we’ve got our coding platform set up. But are we going to write Deep Learning code from scratch? That seems like an extremely difficult thing to do!

The good news is that many others have written code and made it available to us! With the contribution of others’ code, we can play around with Deep Learning models at a very high level without having to worry about implementing all of it from scratch. This makes it extremely easy for us to get started with coding Deep Learning models.

For this tutorial, we will be downloading five packages that Deep Learning practitioners commonly use:

  • tensorflow
  • keras
  • pandas
  • scikit-learn
  • matplotlib

The first thing we will do is to create a Python environment. An environment is like an isolated working copy of Python, so that whatever you do in your environment (such as installing new packages) will not affect other environments. It’s good practice to create an environment for your projects.

Click on Environments on the left panel and you should see a screen like this:

Anaconda environments

Click on the button “Create” at the bottom of the list. A pop-up like this should appear:

A pop-up like this should appear.

Name your environment and select Python 3.7 and then click Create. This might take a few moments.

Once that is done, your screen should look something like this:

Notice that we have created an environment ‘intuitive-deep-learning’. We can see what packages we have installed in this environment and their respective versions.

Now let’s install some packages we need into our environment!

The first two packages we will install are called Tensorflow and Keras, which help us plug-and-play code for Deep Learning.

On Anaconda Navigator, click on the drop down menu where it currently says “Installed” and select “Not Installed”:

A whole list of packages that you have not installed will appear like this:

Search for “tensorflow”, and click the checkbox for both “keras” and “tensorflow”. Then, click “Apply” on the bottom right of your screen:

A pop up should appear like this:

Click Apply and wait for a few moments. Once that’s done, we will have Keras and Tensorflow installed in our environment!

Using the same method, let’s install the packages ‘pandas’, ‘scikit-learn’ and ‘matplotlib’. These are common packages that data scientists use to process the data as well as to visualize nice graphs in Jupyter notebook.

This is what you should see on your Anaconda Navigator for each of the packages.

Pandas:

Installing pandas into your environment

Scikit-learn:

Installing scikit-learn into your environment

Matplotlib:

Installing matplotlib into your environment

Once it’s done, go back to “Home” on the left panel of Anaconda Navigator. You should see a screen like this, where it says “Applications on intuitive-deep-learning” at the top:

Now, we have to install Jupyter notebook in this environment. So click the green button “Install” under the Jupyter notebook logo. It will take a few moments (again). Once it’s done installing, the Jupyter notebook panel should look like this:

Click on Launch, and the Jupyter notebook app should open.

Create a notebook, type in these five snippets of code, and press Alt-Enter. This code tells the notebook that we will be using the five packages that you installed with Anaconda Navigator earlier in the tutorial.

import tensorflow as tf

import keras

import pandas

import sklearn

import matplotlib

If there are no errors, then congratulations — you’ve got everything installed correctly:

A sign that everything works!
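
As an optional check, you can also print each package's version number in a new code snippet:

import tensorflow as tf
import keras
import pandas
import sklearn
import matplotlib

# Print the name and version of each installed package
for package in (tf, keras, pandas, sklearn, matplotlib):
    print(package.__name__, package.__version__)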

If you have had any trouble with any of the steps above, please feel free to comment below and I’ll help you out!

Originally published by Joseph Lee Wei En at medium.freecodecamp.org

===================================================================

Thanks for reading :heart: If you liked this post, share it with all of your programming buddies! Follow me on Facebook | Twitter

Learn More

A Complete Machine Learning Project Walk-Through in Python

Machine Learning In Node.js With TensorFlow.js

An A-Z of useful Python tricks

Top 10 Algorithms for Machine Learning Newbies

Automated Machine Learning on the Cloud in Python

Introduction to PyTorch and Machine Learning

Python Tutorial for Beginners (2019) - Learn Python for Machine Learning and Web Development

Machine Learning A-Z™: Hands-On Python & R In Data Science

Python for Data Science and Machine Learning Bootcamp

Data Science, Deep Learning, & Machine Learning with Python

Deep Learning A-Z™: Hands-On Artificial Neural Networks

Artificial Intelligence A-Z™: Learn How To Build An AI

Deep Learning from Scratch and Using Tensorflow in Python

In this article, we will learn how deep learning works and get familiar with its terminology — such as backpropagation and batch size

Originally published by Milad Toutounchian at https://towardsdatascience.com

Deep learning is one of the most popular approaches currently being used in real-world data science applications. It's been an effective model in areas that range from image to text to voice/music. With the increase in its use, the ability to quickly and scalably implement deep learning becomes paramount. The rise of deep learning platforms such as Tensorflow helps developers implement what they need in easier ways.

In this article, we will learn how deep learning works and get familiar with its terminology — such as backpropagation and batch size. We will implement a simple deep learning model — from theory to scratch implementation — for a predefined input and output in Python, and then do the same using deep learning platforms such as Keras and Tensorflow. We have written this simple deep learning model using Keras and Tensorflow version 1.x and version 2.0 with three different levels of complexity and ease of coding.

Deep Learning Implementation from Scratch

Consider a simple multi-layer perceptron with four input neurons, one hidden layer with three neurons and an output layer with one neuron. We have three data samples for the input, denoted as X, and three data samples for the desired output, denoted as yt. So, each input data sample has four features.

# Inputs and outputs of the neural net:
import numpy as np

X=np.array([[1.0, 0.0, 1.0, 0.0],[1.0, 0.0, 1.0, 1.0],[0.0, 1.0, 0.0, 1.0]])
yt=np.array([[1.0],[1.0],[0.0]])

The x(m) in this figure is one sample of X, h(m) is the output of the hidden layer for input x(m), and Wi and Wh are the weights.

The goal of a neural net (NN) is to obtain weights and biases such that for a given input, the NN provides the desired output. But, we do not know the appropriate weights and biases in advance, so we update the weights and biases such that the error between the output of NN, yp(m), and desired ones, yt(m), is minimized. This iterative minimization process is called the NN training.

Assume the activation functions for both hidden and output layers are sigmoid functions. Therefore,

The size of weights, biases and the relationships between input and outputs of the neural net

where the activation function is the sigmoid, m is the mth data sample, and yp(m) is the NN output.

The error function, which measures the difference between the output of NN with the desired one, can be expressed mathematically as:

The Error defined for the neural net which is squared error
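
Written out (a sketch that assumes sigmoid activations in both layers and uses the W_h, b_h, W_o, b_o names that appear in the code later on), the forward pass and the error are:

$$h^{(m)} = \sigma\left(W_h^{\top} x^{(m)} + b_h\right), \qquad y_p^{(m)} = \sigma\left(W_o^{\top} h^{(m)} + b_o\right)$$

$$E = \sum_{m=1}^{N} \left(y_p^{(m)} - y_t^{(m)}\right)^2$$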

The pseudocode for the above NN has been summarized below:

pseudocode for the neural net training

From our pseudocode, we realize that the partial derivative of Error (E) with respect to parameters (weights and biases) should be computed. Using the chain rule from calculus we can write:

We have two options here for updating the weights and biases in the backward path (the backward path means updating the weights and biases such that the error is minimized):

  1. Use all N samples of the training data
  2. Use one sample (or a couple of samples)

For the first one, we say the batch size is N. For the second one, we say the batch size is 1 if we use one sample to update the parameters. So the batch size is simply the number of data samples used for each update of the weights and biases.

You can find the implementation of the above neural net, in which the gradient of the error with respect to the parameters is calculated symbolically, with different batch sizes here.
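
As a condensed sketch of what such a from-scratch implementation looks like (batch size N, i.e. all samples used per update; the gradients are the manually derived chain-rule expressions, and X and yt are the arrays defined above):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Initialize the weights and biases (hidden layer: 4 -> 3, output layer: 3 -> 1)
np.random.seed(0)
W_h, b_h = np.random.randn(4, 3), np.zeros((1, 3))
W_o, b_o = np.random.randn(3, 1), np.zeros((1, 1))

lr = 0.1
for _ in range(2000):
    # Forward pass
    h = sigmoid(X @ W_h + b_h)    # hidden activations, shape (3, 3)
    yp = sigmoid(h @ W_o + b_o)   # predictions, shape (3, 1)

    # Backward pass: chain-rule gradients of E = sum((yp - yt)^2)
    d_out = 2.0 * (yp - yt) * yp * (1.0 - yp)   # dE/d(output pre-activation)
    d_hid = (d_out @ W_o.T) * h * (1.0 - h)     # dE/d(hidden pre-activation)

    # Gradient-descent updates using all N samples at once
    W_o -= lr * (h.T @ d_out); b_o -= lr * d_out.sum(axis=0, keepdims=True)
    W_h -= lr * (X.T @ d_hid); b_h -= lr * d_hid.sum(axis=0, keepdims=True)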

As you can see with the above example, creating a simple deep learning model from scratch involves methods that are very complex. In the next section, we will see how deep learning frameworks can assist in introducing scalability and greater ease of implementation to our model.

Deep Learning implementation using Keras, Tensorflow 1.x and 2.0

In the previous section, we computed the gradient of the error w.r.t. the parameters using the chain rule. We saw first-hand that it is not an easy or scalable approach. Also, keep in mind that we evaluate the partial derivatives at each iteration, so only their values are needed, not the symbolic gradient itself. This is where deep learning frameworks such as Keras and Tensorflow come into play. They use an AutoDiff method for the numerical calculation of partial gradients. If you're not familiar with AutoDiff, StackExchange has a great example to walk through.

The AutoDiff decomposes the complex expression into a set of primitive ones, i.e. expressions consisting of at most a single function call. As the differentiation rules for each separate expression are already known, the final results can be computed in an efficient way.
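
A tiny sketch of this idea with TensorFlow 2.0's tf.GradientTape: the tape records the primitive operations and combines their known derivatives to produce the numerical gradient.

import tensorflow as tf

x = tf.Variable(2.0)
with tf.GradientTape() as tape:
    y = tf.sin(x) * x + x ** 2   # a composite expression built from primitives
# d/dx [sin(x)*x + x^2] = cos(x)*x + sin(x) + 2*x, evaluated numerically at x = 2
print(tape.gradient(y, x).numpy())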

We have implemented the NN model with three different levels in Keras, Tensorflow 1.x and Tensorflow 2.0:

1- High-Level (Keras and Tensorflow 2.0): High-Level Tensorflow 2.0 with Batch Size 1

2- Medium-Level (Tensorflow 1.x and 2.0): Medium-Level Tensorflow 1.x with Batch Size 1, Medium-Level Tensorflow 1.x with Batch Size N, Medium-Level Tensorflow 2.0 with Batch Size 1, Medium-Level Tensorflow 2.0 with Batch Size N

3- Low-Level (Tensorflow 1.x): Low-Level Tensorflow 1.x with Batch Size N

Code Snippets:

For the High-Level, we have accomplished the implementation using Keras and Tensorflow v 2.0 with model.train_on_batch:

# High-Level implementation of the neural net in Tensorflow:
model.compile(loss=mse, optimizer=optimizer)
for _ in range(2000):
    for step, (x, y) in enumerate(zip(X_data, y_data)):
        model.train_on_batch(np.array([x]), np.array([y]))
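
For the snippet above to run, model, mse, optimizer, X_data and y_data must already be defined; a minimal setup consistent with the network described earlier (the exact choices in the linked notebooks may differ) is:

import numpy as np
import tensorflow as tf

# The same toy inputs and targets used throughout the article
X_data = np.array([[1.0, 0.0, 1.0, 0.0], [1.0, 0.0, 1.0, 1.0], [0.0, 1.0, 0.0, 1.0]])
y_data = np.array([[1.0], [1.0], [0.0]])

# 4 inputs -> 3 hidden sigmoid units -> 1 sigmoid output
model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation='sigmoid', input_shape=(4,)),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
mse = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)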

In the Medium-Level using Tensorflow 1.x, we have defined:

E = tf.reduce_sum(tf.pow(ypred - Y, 2))
optimizer = tf.train.GradientDescentOptimizer(0.1)
grads = optimizer.compute_gradients(E, [W_h, b_h, W_o, b_o])
updates = optimizer.apply_gradients(grads)

This ensures that the parameters are updated every time the updates operation is run inside the for loop. For the Medium-Level approach, the gradients and their update operations are defined outside the for loop and then applied iteratively inside it. In the Medium-Level approach using Tensorflow 2.x, we have used:

# Medium-Level implementation of the neural net in Tensorflow

# In for_loop
with tf.GradientTape() as tape:
   x = tf.convert_to_tensor(np.array([x]), dtype=tf.float64)
   y = tf.convert_to_tensor(np.array([y]), dtype=tf.float64)
   ypred = model(x)
   loss = mse(y, ypred)
gradients = tape.gradient(loss, model.trainable_weights)
optimizer.apply_gradients(zip(gradients, model.trainable_weights))

In the Low-Level implementation, each weight and bias is updated separately. In the Low-Level approach using Tensorflow 1.x, we have defined:

# Low-Level implementation of the neural net in Tensorflow:
E = tf.reduce_sum(tf.pow(ypred - Y, 2))
dE_dW_h = tf.gradients(E, [W_h])[0]
dE_db_h = tf.gradients(E, [b_h])[0]
dE_dW_o = tf.gradients(E, [W_o])[0]
dE_db_o = tf.gradients(E, [b_o])[0]
# In for_loop:
evaluated_dE_dW_h = sess.run(dE_dW_h,
                             feed_dict={W_h: W_h_i, b_h: b_h_i, W_o: W_o_i, b_o: b_o_i, X: X_data.T, Y: y_data.T})
W_h_i = W_h_i - 0.1 * evaluated_dE_dW_h
evaluated_dE_db_h = sess.run(dE_db_h,
                             feed_dict={W_h: W_h_i, b_h: b_h_i, W_o: W_o_i, b_o: b_o_i, X: X_data.T, Y: y_data.T})
b_h_i = b_h_i - 0.1 * evaluated_dE_db_h
evaluated_dE_dW_o = sess.run(dE_dW_o,
                             feed_dict={W_h: W_h_i, b_h: b_h_i, W_o: W_o_i, b_o: b_o_i, X: X_data.T, Y: y_data.T})
W_o_i = W_o_i - 0.1 * evaluated_dE_dW_o
evaluated_dE_db_o = sess.run(dE_db_o,
                             feed_dict={W_h: W_h_i, b_h: b_h_i, W_o: W_o_i, b_o: b_o_i, X: X_data.T, Y: y_data.T})
b_o_i = b_o_i - 0.1 * evaluated_dE_db_o

As you can see from the above low-level implementation, the developer has more control over every single step of the numerical operations and calculations.

Conclusion

We have now shown that implementing even a simple deep learning model from scratch, using symbolic gradient computation for the weight and bias updates, is not an easy or scalable approach. Using deep learning frameworks accelerates this process as a result of using AutoDiff, which is essentially stable numerical gradient computation for updating the weights and biases.

Thanks for reading

If you liked this post, share it with all of your programming buddies!

Follow us on Facebook | Twitter

Further reading

Machine Learning A-Z™: Hands-On Python & R In Data Science

Python for Data Science and Machine Learning Bootcamp

Machine Learning, Data Science and Deep Learning with Python

Deep Learning A-Z™: Hands-On Artificial Neural Networks

Artificial Intelligence A-Z™: Learn How To Build An AI

A Complete Machine Learning Project Walk-Through in Python

Machine Learning: how to go from Zero to Hero

Top 18 Machine Learning Platforms For Developers

10 Amazing Articles On Python Programming And Machine Learning

100+ Basic Machine Learning Interview Questions and Answers

Best Python Libraries For Data Science & Machine Learning

Best Python Libraries For Data Science & Machine Learning

Best Python Libraries For Data Science & Machine Learning | Data Science Python Libraries

This video will focus on the top Python libraries that you should know to master Data Science and Machine Learning. Here’s a list of topics that are covered in this session:

  • Introduction To Data Science And Machine Learning
  • Why Use Python For Data Science And Machine Learning?
  • Python Libraries for Data Science And Machine Learning
  • Python libraries for Statistics
  • Python libraries for Visualization
  • Python libraries for Machine Learning
  • Python libraries for Deep Learning
  • Python libraries for Natural Language Processing

Thanks for reading

If you liked this post, share it with all of your programming buddies!

Follow us on Facebook | Twitter

Further reading about Python

Complete Python Bootcamp: Go from zero to hero in Python 3

Machine Learning A-Z™: Hands-On Python & R In Data Science

Python and Django Full Stack Web Developer Bootcamp

Complete Python Masterclass

Python Tutorial - Python GUI Programming - Python GUI Examples (Tkinter Tutorial)

Computer Vision Using OpenCV

OpenCV Python Tutorial - Computer Vision With OpenCV In Python

Python Tutorial: Image processing with Python (Using OpenCV)

A guide to Face Detection in Python

Machine Learning Tutorial - Image Processing using Python, OpenCV, Keras and TensorFlow

PyTorch Tutorial for Beginners

The Pandas Library for Python

Introduction To Data Analytics With Pandas