**Not Hotdog with Keras and TensorFlow.js**

If you were ever confused about whether something was a hotdog or not, don’t worry! I’ve got the web app just for you!

In this short tutorial, I’ll walk you through training a Keras model for image classification and then using that model in a web app via TensorFlow.js. The problem we’ll be solving is Not Hotdog: given an image, our model has to correctly classify the object as a hotdog or not a hotdog. The classification task itself is not particularly exciting; for this tutorial, we’ll focus on the process of using a pre-trained Keras model in the browser with TensorFlow.js.

**Training a Keras Model**

Let’s begin by building our dataset. I used the Google Images download utility, but you can use whatever you prefer; Instabot is another good option. Just make sure you have a few hundred images for both classes and that you split them into training, validation and test sets in the format that Keras expects:
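For reference, the on-disk layout that Keras’ `flow_from_directory` expects looks roughly like this; the `data/` root and the class folder names here are just an assumed example:

```
data/
  train/
    hotdog/
    not_hotdog/
  validation/
    hotdog/
    not_hotdog/
  test/
    hotdog/
    not_hotdog/
```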


Next, we’ll build a simple deep net to train on the dataset that we have. The neural network I used is composed of 3 chunks of convolutions with ReLU activations and maxpool layers after them. On top, we have two fully connected layers with a ReLU activation, a dropout layer and a sigmoid for binary classification.

```
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Dropout, Flatten, Dense

# 150x150 RGB inputs (the same size the web app will resize images to later)
input_shape = (150, 150, 3)

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
```

We’ll use binary cross-entropy as the loss function and RMSProp as the optimization algorithm. We train for 50 epochs to achieve ~93% accuracy on the validation set, which is good enough for the purposes of this tutorial. To train the model yourself or play around with the code, check out the notebook here. The code is largely based on the first parts of this Keras tutorial.

Once we have a trained model, we need to make sure we save it to disk before we proceed to the next section:

```
model.save('simplemodel.h5')
```

Be sure to use the correct method for saving the model. Keras provides three ways of saving models:

- `model.save_weights('<filename>')` saves just the weights of the model;
- `model.to_json()` / `model.to_yaml()` saves just the architecture, in JSON or YAML format;
- `model.save('<filename>')` saves the weights, the architecture and even the optimizer state, so training can be resumed.

We need to use the last method because, unsurprisingly, TensorFlow.js needs both the weights and the architecture of our model before it can utilize it.

**Converting a Keras Model into a TensorFlow.js Model**

Now that we have the model saved, install the `tensorflowjs` Python package and run the following command:

```
tensorflowjs_converter --input_format keras <path-to-h5-file> <path-to-dir>
```

Alternatively, we could have used the `tensorflowjs` Python API to save the model directly as a TensorFlow.js model:

```
tensorflowjs.converters.save_keras_model(model, model_dir)
```

In either case, we should now have several files in our model directory: a `model.json` file and several weight files in binary format. It’s important to note that this conversion is only supported for standard Keras classes and methods; custom layers or metrics cannot be safely converted from Python to JavaScript.

Once we have the model converted, let’s use it in a small web application. On the HTML side of things, we’ll simply have an `image_upload` file select element, an `image` element to show the selected image, and a `result` div to show the model’s classification.

The JavaScript side of things is a bit more complicated. Let’s take a look at the code and then we can step through it:

```
var wait = ms => new Promise((r, j) => setTimeout(r, ms));

async function main() {
  const model = await tf.loadModel('./model/model.json');
  document.getElementById('image_upload').onchange = function(ev) {
    var f = ev.target.files[0];
    var fr = new FileReader();
    var makePrediction = async function(img) {
      // We need to ensure that the image is actually loaded before we proceed.
      while (!img.complete) {
        await wait(100);
      }
      var tensor = tf.fromPixels(img)
        .resizeNearestNeighbor([150, 150])
        .toFloat().expandDims();
      const prediction = model.predict(tensor);
      var data = prediction.dataSync();
      // The sigmoid outputs a value in [0, 1]; below 0.5 means class 0 ("hotdog").
      document.getElementById('result').innerHTML =
        data[0] < 0.5 ? "Now, that's a hotdog! :)" : "Not hotdog! :(";
    };
    var fileReadComplete = function(ev2) {
      document.getElementById('image').src = ev2.target.result;
      var img = new Image();
      img.src = ev2.target.result;
      makePrediction(img);
    };
    fr.onload = fileReadComplete;
    fr.readAsDataURL(f);
  };
}

main();
```

First, we load our model, using `await` to make sure the operation actually finishes before we proceed:

```
const model = await tf.loadModel('./model/model.json');
```

Next, we need to set an event handler that responds to the file selector being used. We’ll use the `FileReader` API, setting another callback for when an image is loaded and triggering the actual loading of the image with `readAsDataURL(...)`:

```
document.getElementById('image_upload').onchange = function(ev) {
  var f = ev.target.files[0];
  var fr = new FileReader();
  var makePrediction = async function(img) { ... };
  var fileReadComplete = function(ev2) { ... };
  fr.onload = fileReadComplete;
  fr.readAsDataURL(f);
}
```

Once the file has been read, we’ll show the image on our page and then create an `Image` object that will be passed to the actual prediction function:

```
var fileReadComplete = function(ev2) {
  document.getElementById('image').src = ev2.target.result;
  var img = new Image();
  img.src = ev2.target.result;
  makePrediction(img);
};
```

At this point, we have to ensure that the `Image` object is ready, or the rest of the code will not be happy. That’s why we use the `wait` lambda defined at the top of our code: it makes the function wait until the image is ready to be used.

Then, we have to convert our `Image` object into a tensor with the correct formatting. We’ll use the `fromPixels(...)` method to transform the image into a tensor, resize it to what our model expects using `resizeNearestNeighbor(...)`, convert it to floating-point values using `toFloat()`, and then use `expandDims()` to insert another dimension into the tensor so that it matches the batched input format our model was trained on.

```
var makePrediction = async function(img) {
  // Make sure the image is actually loaded before we proceed.
  while (!img.complete) {
    await wait(100);
  }
  var tensor = tf.fromPixels(img)
    .resizeNearestNeighbor([150, 150])
    .toFloat().expandDims();
  const prediction = model.predict(tensor);
  var data = prediction.dataSync();
  // The sigmoid outputs a value in [0, 1]; below 0.5 means class 0 ("hotdog").
  document.getElementById('result').innerHTML =
    data[0] < 0.5 ? "Now, that's a hotdog! :)" : "Not hotdog! :(";
}
```

After we have pre-processed our image, we can pass it into our model using the `predict(...)` method and get a prediction. To get the actual data out of the prediction tensor, we use the `dataSync()` method. At this point, you can do whatever you need with the prediction. In this case, we’ll add a simple message to our web page that answers the age-old question: “Is this a hotdog?” We truly live in the future.
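One subtlety worth calling out: the sigmoid produces a continuous score in [0, 1], so the usual way to pick a class is to compare the score against 0.5 rather than check for an exact value. A minimal helper might look like the sketch below; note that which class ends up as index 0 depends on how Keras ordered your training directories, so "hotdog"-as-class-0 here is an assumption.

```javascript
// Hypothetical helper: maps a sigmoid score to one of the two messages.
// Scores below 0.5 are treated as class 0, which we assume is "hotdog".
function labelFromScore(score) {
  return score < 0.5 ? "Now, that's a hotdog! :)" : "Not hotdog! :(";
}

console.log(labelFromScore(0.12)); // low score  -> the hotdog message
console.log(labelFromScore(0.97)); // high score -> the not-hotdog message
```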

**Further reading:**

☞ Deep Learning from Scratch and Using Tensorflow in Python

☞ Real Computer Vision for mobile and embedded

☞ What is TensorFrames? TensorFlow + Apache Spark

☞ 5 TensorFlow and ML Courses for Programmers

☞ How to Deploy Machine Learning Models on Mobile and Embedded Devices

☞ How to classify butterflies with deep learning in Keras

This full course introduces the concept of client-side artificial neural networks. We will learn how to deploy and run models along with full deep learning applications in the browser! To implement this cool capability, we’ll be using TensorFlow.js (TFJS), TensorFlow’s JavaScript library.

By the end of this video tutorial, you will have built and deployed a web application that runs a neural network in the browser to classify images! To get there, we'll learn about client-server deep learning architectures, converting Keras models to TFJS models, serving models with Node.js, tensor operations, and more!

⭐️Course Sections⭐️

⌨️ 0:00 - Intro to deep learning with client-side neural networks

⌨️ 6:06 - Convert Keras model to Layers API format

⌨️ 11:16 - Serve deep learning models with Node.js and Express

⌨️ 19:22 - Building UI for neural network web app

⌨️ 27:08 - Loading model into a neural network web app

⌨️ 36:55 - Explore tensor operations with VGG16 preprocessing

⌨️ 45:16 - Examining tensors with the debugger

⌨️ 1:00:37 - Broadcasting with tensors

⌨️ 1:11:30 - Running MobileNet in the browser

This TensorFlow tutorial is for professionals and enthusiasts who are interested in applying deep learning algorithms using TensorFlow to solve various problems.

TensorFlow is an open source deep learning library that is based on the concept of data flow graphs for building models. It allows you to create large-scale neural networks with many layers. Learning the use of this library is also a fundamental part of the AI & Deep Learning course curriculum. Following are the topics that will be discussed in this TensorFlow tutorial:

- What is TensorFlow
- TensorFlow Code Basics
- TensorFlow Use Case

In this **TensorFlow tutorial**, before talking about TensorFlow itself, let us first understand *what tensors are*. **Tensors** are the de facto standard for representing data in deep learning.

Tensors are just multidimensional arrays, which allow you to represent data with higher dimensions. In general, in deep learning you deal with high-dimensional data sets, where the dimensions refer to the different features present in the data set. In fact, the name “**TensorFlow**” is derived from the operations that neural networks perform on tensors; it is literally a flow of tensors. Now that you understand what tensors are, let us move ahead in this **TensorFlow** tutorial and understand *what TensorFlow is*.

**TensorFlow **is a library based on Python that provides different types of functionality for implementing **Deep Learning Models**. As discussed earlier, the term **TensorFlow** is made up of two terms – Tensor & Flow:

In **TensorFlow**, the term *tensor* refers to the representation of data as a multi-dimensional array, whereas the term *flow* refers to the series of operations that one performs on tensors.
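To make the idea of a tensor concrete, here is a small plain-Python illustration (no TensorFlow required): a rank-0 tensor is a scalar, rank-1 a vector, rank-2 a matrix, and so on, and the rank is simply the depth of nesting:

```python
# Tensors of increasing rank, represented as (nested) Python lists.
scalar = 5                     # rank 0: a single number
vector = [1, 2, 3]             # rank 1: a 1-D array
matrix = [[1, 2], [3, 4]]      # rank 2: a 2-D array

def rank(t):
    """Depth of nesting = rank of the tensor (sketch; assumes non-empty lists)."""
    if not isinstance(t, list):
        return 0
    return 1 + rank(t[0])

print(rank(scalar), rank(vector), rank(matrix))  # 0 1 2
```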

Now we have covered enough background about **TensorFlow**.

Next up in this TensorFlow tutorial, we will discuss TensorFlow code basics.

**TensorFlow Tutorial: Code Basics**

Basically, the overall process of writing a **TensorFlow program** involves two steps:

- Building a Computational Graph
- Running a Computational Graph

Let me explain the above two steps one by one:

So, *what is a computational graph?* Well, a computational graph is a series of TensorFlow operations arranged as nodes in a graph. Each node takes zero or more tensors as input and produces a tensor as output. Let me give you an example of a simple computational graph, which consists of three nodes: *a*, *b* and *c*.


Basically, one can think of a computational graph as an alternative way of conceptualizing the mathematical calculations that take place in a TensorFlow program. The operations assigned to different nodes of a computational graph can be performed in parallel, thus providing better computational performance.

At this stage, we have only described the computation; the graph doesn’t compute anything and doesn’t hold any values, it just defines the operations specified in your code.
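The “describe now, compute later” idea can be sketched in plain Python (this is an illustrative analogy, not TensorFlow’s actual mechanism): each node is a zero-argument function, and nothing runs until the output node is called.

```python
# Each node is a zero-argument function; building the "graph" wires nodes
# together but performs no arithmetic.
def constant(value):
    return lambda: value

def multiply(node_a, node_b):
    return lambda: node_a() * node_b()

a = constant(5.0)
b = constant(6.0)
c = multiply(a, b)   # nothing has been computed yet

print(c())           # only now is 5.0 * 6.0 evaluated -> 30.0
```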

Let us take the computational graph from the example above and understand how to execute it. Following is the code for that example:

```
import tensorflow as tf
# Build a graph
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
```

Now, in order to get the output of node c, we need to run the computational graph within a **session**. A session places the graph operations onto devices, such as CPUs or GPUs, and provides methods to execute them.

A session encapsulates the control and state of the *TensorFlow *runtime i.e. it stores the information about the order in which all the operations will be performed and passes the result of already computed operation to the next operation in the pipeline. Let me show you how to run the above computational graph within a session (Explanation of each line of code has been added as a comment):

```
# Create the session object
sess = tf.Session()
# Run the graph within a session and store the output in a variable
output_c = sess.run(c)
# Print the output of node c
print(output_c)
# Close the session to free up resources
sess.close()
```

**Output:**

```
30.0
```

So, this was all about session and running a computational graph within it. Now, let us talk about variables and placeholders that we will be using extensively while building deep learning model using *TensorFlow*.

In *TensorFlow*, constants, placeholders and variables are used to represent different parameters of a deep learning model. Since I have already discussed constants, I will start with placeholders.

A *TensorFlow* constant allows you to store a value but, what if you want your nodes to take inputs at run time? For this kind of functionality, placeholders are used: they allow your graph to take external inputs as parameters. Basically, a placeholder is a promise to provide a value later, during runtime. Let me give you an example to make things simpler:

```
import tensorflow as tf
# Creating placeholders
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
# Assigning the multiplication operation w.r.t. a & b to node mul
mul = a * b
# Create session object
sess = tf.Session()
# Executing mul by passing the values [1, 3] and [2, 4] for a and b respectively
output = sess.run(mul, {a: [1, 3], b: [2, 4]})
print('Multiplying a and b:', output)
```

**Output:**

```
[ 2. 12.]
```


Now, let us move ahead and understand variables.

In deep learning, placeholders are used to take arbitrary inputs into your model or graph. Apart from taking input, you also need to modify the graph such that it can produce new outputs for the same inputs. For this you will use variables. In a nutshell, a variable allows you to add trainable parameters or nodes to the graph, i.e. nodes whose values can be modified over time. Variables are defined by providing their initial value and type, as shown below:

```
var = tf.Variable([0.4], dtype=tf.float32)
```

**Note:** Constants are initialized as soon as you call `tf.constant`, and their value can never change. Variables, by contrast, are not initialized when you call `tf.Variable`. To initialize all the variables in a *TensorFlow* program, you must explicitly run:

```
init = tf.global_variables_initializer()
sess.run(init)
```

Always remember that a variable must be initialized before a graph is used for the first time.

**Note:** *TensorFlow variables are in-memory buffers that contain tensors, but unlike normal tensors that are only instantiated when a graph is run and are immediately deleted afterwards, variables survive across multiple executions of a graph.*

Now that we have covered enough basics of *TensorFlow*, let us go ahead and understand how to implement a linear regression model using *TensorFlow*.

A linear regression model is used for predicting the unknown value of a variable (the dependent variable) from the known value of another variable (the independent variable), using the linear regression equation `y = W * x + b`, where `W` is the slope and `b` is the bias (intercept).

Therefore, for creating a linear model, you need:

- variables for the trainable parameters: the slope `W` and the bias `b`;
- a placeholder `x` for providing the input (the independent variable);
- the regression equation itself, built as a computational graph and run within a session.

So, let us begin building a linear model using TensorFlow:


```
import tensorflow as tf
# Creating a variable for the parameter slope (W) with initial value 0.4
W = tf.Variable([.4], tf.float32)
# Creating a variable for the parameter bias (b) with initial value -0.4
b = tf.Variable([-0.4], tf.float32)
# Creating a placeholder for providing the input or independent variable, denoted by x
x = tf.placeholder(tf.float32)
# Equation of linear regression
linear_model = W * x + b
# Initializing all the variables
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
# Running the regression model to calculate the output w.r.t. the provided x values
print(sess.run(linear_model, {x: [1, 2, 3, 4]}))
```

**Output:**

```
[ 0. 0.40000001 0.80000007 1.20000005]
```

The code above just represents the basic idea behind the implementation of a regression model, i.e. how you follow the equation of the regression line to get output for a set of input values. But there are two more things to be added to make it a complete regression model: a loss function to measure how far off the model’s output is, and an optimizer to adjust the parameters accordingly.


Now let us see how to incorporate these functionalities into the code for the regression model.

A loss function measures how far the current output of the model is from the desired or target output. I’ll use the most commonly used loss function for linear regression, the Sum of Squared Errors (SSE). The SSE is calculated from the model output (represented by `linear_model`) and the desired or target output (`y`):

```
y = tf.placeholder(tf.float32)
error = linear_model - y
squared_errors = tf.square(error)
loss = tf.reduce_sum(squared_errors)
print(sess.run(loss, {x: [1, 2, 3, 4], y: [2, 4, 6, 8]}))
```

**Output:**

```
90.24
```
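As a sanity check, the same number can be reproduced in a few lines of plain Python (no TensorFlow), using the initial parameter values W = 0.4 and b = -0.4 from earlier:

```python
# Sum of squared errors between the linear model's predictions and the targets.
W, b = 0.4, -0.4
xs = [1, 2, 3, 4]
ys = [2, 4, 6, 8]

predictions = [W * x + b for x in xs]
sse = sum((p - y) ** 2 for p, y in zip(predictions, ys))
print(sse)  # ~90.24, matching the TensorFlow output above
```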

As you can see, we are getting a high loss value. Therefore, we need to adjust our weights (W) and bias (b) so as to reduce the error that we are receiving.

TensorFlow provides **optimizers** that slowly change each variable in order to minimize the loss function or error. The simplest optimizer is **gradient descent**. It modifies each variable according to the magnitude of the derivative of loss with respect to that variable.

```
# Creating an instance of the gradient descent optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
for i in range(1000):
    sess.run(train, {x: [1, 2, 3, 4], y: [2, 4, 6, 8]})
print(sess.run([W, b]))
```

**Output:**

```
[array([ 1.99999964], dtype=float32), array([ 9.86305167e-07], dtype=float32)]
```
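To see what the optimizer is doing under the hood, the same full-batch gradient descent can be sketched in plain Python (an illustrative re-implementation, not TensorFlow’s internals): compute the gradient of the SSE loss with respect to W and b, then step each parameter against its gradient.

```python
# Full-batch gradient descent on the sum-of-squared-errors loss,
# mirroring GradientDescentOptimizer(0.01) for this tiny model.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
W, b = 0.4, -0.4          # same initial values as the TensorFlow variables
lr = 0.01                 # same learning rate as above

for _ in range(1000):
    errors = [W * x + b - y for x, y in zip(xs, ys)]
    # Gradients of the SSE loss: dL/dW = sum(2*e*x), dL/db = sum(2*e)
    grad_W = sum(2 * e * x for e, x in zip(errors, xs))
    grad_b = sum(2 * e for e in errors)
    W -= lr * grad_W
    b -= lr * grad_b

print(W, b)  # W approaches 2.0 and b approaches 0.0, matching the TF output
```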

So, this is how you create a linear model using TensorFlow and train it to get the desired output.

Libraries play an important role when developers decide to work on Machine Learning or Deep Learning research. According to one survey, based on a sample of 1,616 **ML developers** and **data scientists**, for every one developer using **PyTorch** there are 3.4 developers using **TensorFlow**. In this article, we list down 10 comparisons between these two **Machine Learning Libraries**.

**PyTorch** has been developed by Facebook and is based on Torch, while **TensorFlow**, an open-source **Machine Learning library** developed by Google Brain, is based on the idea of data-flow graphs for building models.

**TensorFlow** has some attractive features, such as TensorBoard, which is a great option for visualising a **Machine Learning** model, and **TensorFlow** Serving, a gRPC server used for deploying models in production. On the other hand, **PyTorch** has several distinguishing features too, such as dynamic computation graphs, native support for **Python**, and support for CUDA, which ensures less time running the code and an increase in performance.

**TensorFlow** is adopted by many researchers in various fields, from academia to business organisations. It has a much bigger community than **PyTorch**, which means it is easier to find resources and solutions: there is a vast number of tutorials, code samples and support channels for **TensorFlow**. **PyTorch**, being the newcomer compared to **TensorFlow**, still lacks these benefits.

**Visualisation** plays a central role when presenting any project in an organisation. **TensorFlow** has TensorBoard for visualising **Machine Learning** models, which helps during training and lets you spot errors quickly. It is a real-time representation of a model’s graphs, depicting not only the graph structure but also accuracy curves in real time. **PyTorch** lacks this eye-catching feature.

In **TensorFlow**, defining a computational graph is a lengthy process, as you have to build and run the computations within sessions, and you also have to use constructs such as placeholders and variable scoping. **PyTorch** wins this point with its dynamic computation graphs, which are built at every point of execution, so you can manipulate the graph at run time.

**PyTorch**, with its dynamic computation graphs, makes debugging a painless process. You can easily use **Python** debugging tools such as pdb or ipdb; for instance, you can put `pdb.set_trace()` at any line of code and then step through further computations to pinpoint the cause of errors. With **TensorFlow**, on the other hand, you have to rely on a dedicated tool such as the TensorFlow debugger (tfdbg), which makes debugging more cumbersome.

For now, deployment in **TensorFlow** is much better supported than in **PyTorch**, thanks in large part to TensorFlow Serving.

The documentation of both frameworks is broadly available as there are examples and tutorials in abundance for both the libraries. You can say, it is a tie between both the frameworks.


**Serialisation** is another point in **TensorFlow**’s favour: the entire graph can be saved as a protocol buffer, including parameters and operations, and the saved model can then be loaded in languages other than Python, such as C++ or Java. **PyTorch**’s model saving is simpler and more Pythonic, but it remains tied to Python.

By default, **TensorFlow** maps nearly all of the GPU memory of all GPUs visible to the process, which is a drawback, although it stems from well-set defaults that assume you want to run your code on the GPU and that result in fair management of the device. **PyTorch**, on the other hand, allocates GPU memory on demand, giving you finer-grained control over memory usage.