I'm training my CNN model with 14k+ images for 30 epochs, and in the 28th epoch I find an abnormal validation accuracy and loss, as shown below:

```
- 67s - loss: 0.0767 - acc: 0.9750 - val_loss: 0.6755 - val_acc: 0.8412
Epoch 27/30
- 67s - loss: 0.1039 - acc: 0.9630 - val_loss: 0.3671 - val_acc: 0.9018
Epoch 28/30
- 67s - loss: 0.0639 - acc: 0.9775 - val_loss: 1.1921e-07 - val_acc: 0.1190
Epoch 29/30
- 66s - loss: 0.0767 - acc: 0.9744 - val_loss: 0.8091 - val_acc: 0.8306
Epoch 30/30
- 66s - loss: 0.0534 - acc: 0.9815 - val_loss: 0.2091 - val_acc: 0.9433
Using TensorFlow backend.
```

Can anyone explain why this happened?

This full course introduces the concept of client-side artificial neural networks. We will learn how to deploy and run models along with full deep learning applications in the browser! To implement this cool capability, we’ll be using TensorFlow.js (TFJS), TensorFlow’s JavaScript library.

By the end of this video tutorial, you will have built and deployed a web application that runs a neural network in the browser to classify images! To get there, we'll learn about client-server deep learning architectures, converting Keras models to TFJS models, serving models with Node.js, tensor operations, and more!

⭐️Course Sections⭐️

⌨️ 0:00 - Intro to deep learning with client-side neural networks

⌨️ 6:06 - Convert Keras model to Layers API format

⌨️ 11:16 - Serve deep learning models with Node.js and Express

⌨️ 19:22 - Building UI for neural network web app

⌨️ 27:08 - Loading model into a neural network web app

⌨️ 36:55 - Explore tensor operations with VGG16 preprocessing

⌨️ 45:16 - Examining tensors with the debugger

⌨️ 1:00:37 - Broadcasting with tensors

⌨️ 1:11:30 - Running MobileNet in the browser

This TensorFlow tutorial is for professionals and enthusiasts who are interested in applying deep learning algorithms using TensorFlow to solve various problems.

TensorFlow is an open source deep learning library that is based on the concept of data flow graphs for building models. It allows you to create large-scale neural networks with many layers. Learning the use of this library is also a fundamental part of the AI & Deep Learning course curriculum. Following are the topics that will be discussed in this TensorFlow tutorial:

- What is TensorFlow?
- TensorFlow Code Basics
- TensorFlow UseCase

In this **TensorFlow tutorial**, before talking about TensorFlow, let us first understand *what tensors are*. **Tensors** are the de facto standard for representing data in deep learning.

As shown in the image above, tensors are just multidimensional arrays, which allow you to represent data with higher dimensions. In general, in deep learning you deal with high-dimensional data sets, where the dimensions refer to the different features present in the data set. In fact, the name “**TensorFlow**” is derived from the operations that neural networks perform on tensors. It’s literally a flow of tensors. Now that you understand what tensors are, let us move ahead in this **TensorFlow** tutorial and understand – *what is TensorFlow?*
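As a rough illustration (plain Python nested lists standing in for real tensors — not TensorFlow's own representation), the rank of a tensor corresponds to its nesting depth, and a hypothetical `shape` helper can recover the dimensions:

```python
# Hypothetical helper: reports the shape (and hence rank) of a "tensor"
# represented as plain Python nested lists.
def shape(t):
    s = []
    while isinstance(t, list):
        s.append(len(t))
        t = t[0]
    return s

scalar = 5.0                       # rank 0: a single number
vector = [1.0, 2.0, 3.0]           # rank 1: shape [3]
matrix = [[1.0, 2.0], [3.0, 4.0]]  # rank 2: shape [2, 2]

print(shape(vector))  # [3]
print(shape(matrix))  # [2, 2]
```

The rank is simply the length of the shape list, so `scalar` has rank 0, `vector` rank 1, and `matrix` rank 2.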

**TensorFlow **is a library based on Python that provides different types of functionality for implementing **Deep Learning Models**. As discussed earlier, the term **TensorFlow** is made up of two terms – Tensor & Flow:

In **TensorFlow**, the term tensor refers to the representation of data as a multi-dimensional array, whereas the term flow refers to the series of operations performed on tensors, as shown in the above image.

Now we have covered enough background about **TensorFlow**.

Next up in this TensorFlow tutorial, we will discuss TensorFlow code basics.

TensorFlow Tutorial: Code Basics

Basically, the overall process of writing a **TensorFlow program** involves two steps:

- Building a Computational Graph
- Running a Computational Graph

Let me explain the above two steps one by one:

So, *what is a computational graph?* Well, a computational graph is a series of TensorFlow operations arranged as nodes in a graph. Each node takes zero or more tensors as input and produces a tensor as output. Let me give you an example of a simple computational graph, which consists of three nodes – *a*, *b*, and *c*.


Basically, one can think of a computational graph as an alternative way of conceptualizing the mathematical calculations that take place in a TensorFlow program. The operations assigned to different nodes of a computational graph can be performed in parallel, thus providing better computational performance.

At this stage we just describe the computation; the graph doesn't compute anything and doesn't hold any values – it merely defines the operations specified in your code.
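This build-then-run idea can be mimicked in plain Python (a rough analogy only, not how TensorFlow is actually implemented): wrapping each computation in a zero-argument function defers it until explicitly called.

```python
# "Build" phase: nothing is computed yet, we only describe the computation.
a = lambda: 5.0
b = lambda: 6.0
c = lambda: a() * b()   # c is a description of "multiply a by b"

# "Run" phase: calling c() actually performs the multiplication.
print(c())  # 30.0
```

Until `c()` is called, `c` holds no value – just like a TensorFlow graph node before a session runs it.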

Let us take the previous example of computational graph and understand how to execute it. Following is the code from previous example:

```
import tensorflow as tf
# Build a graph
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
```

Now, in order to get the output of node c, we need to run the computational graph within a **session**. A session places the graph operations onto devices, such as CPUs or GPUs, and provides methods to execute them.

A session encapsulates the control and state of the *TensorFlow *runtime i.e. it stores the information about the order in which all the operations will be performed and passes the result of already computed operation to the next operation in the pipeline. Let me show you how to run the above computational graph within a session (Explanation of each line of code has been added as a comment):

```
# Create the session object
sess = tf.Session()
# Run the graph within a session and store the output in a variable
output_c = sess.run(c)
# Print the output of node c
print(output_c)
# Close the session to free up resources
sess.close()
```

**Output:**

```
30.0
```

So, this was all about sessions and running a computational graph within one. Now, let us talk about the variables and placeholders that we will be using extensively while building deep learning models with *TensorFlow*.

In *TensorFlow*, constants, placeholders and variables are used to represent different parameters of a deep learning model. Since I have already discussed constants earlier, I will start with placeholders.

A *TensorFlow* constant allows you to store a value, but what if you want your nodes to take inputs at run time? For this kind of functionality, placeholders are used, which allow your graph to take external inputs as parameters. Basically, a placeholder is a promise to provide a value later, during runtime. Let me give you an example to make things simpler:

```
import tensorflow as tf
# Creating placeholders
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
# Assigning multiplication operation w.r.t. a & b to node mul
mul = a * b
# Create session object
sess = tf.Session()
# Executing mul by passing the values [1, 3] and [2, 4] for a and b respectively
output = sess.run(mul, {a: [1, 3], b: [2, 4]})
print('Multiplying a b:', output)
```

**Output:**

```
Multiplying a b: [ 2. 12.]
```
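As a sanity check, the element-wise multiplication fed through the placeholders above can be reproduced with plain Python (no TensorFlow needed):

```python
# Element-wise multiplication of the two input lists fed to the placeholders.
a = [1, 3]
b = [2, 4]
output = [x * y for x, y in zip(a, b)]
print(output)  # [2, 12]
```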


Now, let us move ahead and understand variables.

In deep learning, placeholders are used to take arbitrary inputs into your model or graph. Apart from taking input, you also need to modify the graph so that it can produce new outputs w.r.t. the same inputs. For this you will be using variables. In a nutshell, a variable allows you to add parameters or nodes to the graph that are trainable, i.e. whose value can be modified over a period of time. Variables are defined by providing their initial value and type, as shown below:

```
var = tf.Variable([0.4], dtype=tf.float32)
```

**Note:** Constants are initialized when you call `tf.constant`, and their value can never change. Variables, by contrast, are not initialized when you call `tf.Variable`; to initialize all the variables in a TensorFlow program, you need to run:

```
init = tf.global_variables_initializer()
sess.run(init)
```

Always remember that a variable must be initialized before a graph is used for the first time.

**Note:** *TensorFlow variables are in-memory buffers that contain tensors, but unlike normal tensors that are only instantiated when a graph is run and are immediately deleted afterwards, variables survive across multiple executions of a graph.*

Now that we have covered enough basics of *TensorFlow*, let us go ahead and understand how to implement a linear regression model using *TensorFlow*.

A Linear Regression Model is used for predicting the unknown value of a variable (the Dependent Variable) from the known value of another variable (the Independent Variable) using the linear regression equation y = W*x + b, as shown below:

Therefore, for creating a linear model, you need:

- Building a Computational Graph
- Running a Computational Graph

So, let us begin building a linear model using TensorFlow:


```
import tensorflow as tf
# Creating variable for parameter slope (W) with initial value 0.4
W = tf.Variable([.4], tf.float32)
# Creating variable for parameter bias (b) with initial value -0.4
b = tf.Variable([-0.4], tf.float32)
# Creating placeholder for providing input or independent variable, denoted by x
x = tf.placeholder(tf.float32)
# Equation of Linear Regression
linear_model = W * x + b
# Initializing all the variables
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
# Running regression model to calculate the output w.r.t. provided x values
print(sess.run(linear_model, {x: [1, 2, 3, 4]}))
```

**Output:**

```
[ 0. 0.40000001 0.80000007 1.20000005]
```

The above code just represents the basic idea behind the implementation of a regression model, i.e. how you follow the equation of the regression line to get output w.r.t. a set of input values. But there are two more things left to be added to make it a complete regression model: a loss function and an optimizer.


Now let us understand how to incorporate these functionalities into the code for the regression model.

A loss function measures how far the current output of the model is from the desired or target output. I'll use the most commonly used loss function for my linear regression model, called Sum of Squared Error, or SSE. SSE is calculated w.r.t. the model output (represented by linear_model) and the desired or target output (y) as:

```
y = tf.placeholder(tf.float32)
error = linear_model - y
squared_errors = tf.square(error)
loss = tf.reduce_sum(squared_errors)
print(sess.run(loss, {x: [1, 2, 3, 4], y: [2, 4, 6, 8]}))
```

**Output:**

```
90.24
```
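As a sanity check, the same SSE value can be computed with plain Python, using the initial parameters W = 0.4 and b = -0.4 from the model above:

```python
W, b = 0.4, -0.4
xs = [1, 2, 3, 4]
ys = [2, 4, 6, 8]
# Sum of squared differences between model output (W*x + b) and target y
loss = sum((W * x + b - y) ** 2 for x, y in zip(xs, ys))
print(round(loss, 2))  # 90.24
```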

As you can see, we are getting a high loss value. Therefore, we need to adjust our weights (W) and bias (b) so as to reduce the error that we are receiving.

TensorFlow provides **optimizers** that slowly change each variable in order to minimize the loss function or error. The simplest optimizer is **gradient descent**. It modifies each variable according to the magnitude of the derivative of loss with respect to that variable.

```
# Creating an instance of gradient descent optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
# Running the training step 1000 times
for i in range(1000):
    sess.run(train, {x: [1, 2, 3, 4], y: [2, 4, 6, 8]})
print(sess.run([W, b]))
```

**Output:**

```
[array([ 1.99999964], dtype=float32), array([ 9.86305167e-07], dtype=float32)]
```
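To make the update rule concrete, here is the same gradient-descent loop written in plain Python – a sketch of what the optimizer does internally, not TensorFlow's actual implementation:

```python
# Gradient descent on the SSE loss for linear_model = W*x + b.
W, b = 0.4, -0.4   # same initial values as the TensorFlow variables above
xs = [1, 2, 3, 4]
ys = [2, 4, 6, 8]
lr = 0.01          # same learning rate as GradientDescentOptimizer(0.01)

for _ in range(1000):
    # Gradients of SSE = sum((W*x + b - y)^2) w.r.t. W and b
    grad_W = sum(2 * (W * x + b - y) * x for x, y in zip(xs, ys))
    grad_b = sum(2 * (W * x + b - y) for x, y in zip(xs, ys))
    # Move each parameter against its gradient, scaled by the learning rate
    W -= lr * grad_W
    b -= lr * grad_b

print(W, b)  # W approaches 2.0 and b approaches 0.0
```

After 1000 iterations the parameters converge to roughly W = 2 and b = 0, matching the TensorFlow output above, since y = 2x fits the data exactly.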

So, this is how you create a linear model using TensorFlow and train it to get the desired output.

Libraries play an important role when developers decide to work on Machine Learning or Deep Learning research. According to one survey, based on a sample of 1,616 **ML developers** and **data scientists**, for every one developer using **PyTorch** there are 3.4 developers using **TensorFlow**. In this article, we list down 10 comparisons between these two **Machine Learning libraries**.

**PyTorch** has been developed by Facebook and is based on Torch, while **TensorFlow**, an open-source **Machine Learning library** developed by Google Brain, is based on the idea of data flow graphs for building models.

**TensorFlow** has some attractive features, such as TensorBoard, which serves as a great option for visualising a **Machine Learning** model, and **TensorFlow** Serving, a dedicated gRPC server used when deploying models in production. On the other hand, **PyTorch** has several distinguishing features too, such as dynamic computation graphs, native support for **Python**, and support for CUDA, which ensures less time running the code and an increase in performance.

**TensorFlow** is adopted by many researchers in various fields, from academia to business organisations. It has a much bigger community than **PyTorch**, which implies that it is easier to find resources or solutions for **TensorFlow**. There is a vast amount of tutorials, code, and support for **TensorFlow**, while **PyTorch**, being the newcomer compared to **TensorFlow**, lacks these benefits.

**Visualisation** plays a protagonist's role when presenting any project in an organisation. **TensorFlow** has TensorBoard for visualising **Machine Learning** models, which helps during training and lets you spot errors quickly. It is a real-time representation of a model's graphs, showing not only the graphic representation but also the accuracy graphs in real time. **PyTorch** lacks this eye-catching feature.

In **TensorFlow**, defining a computational graph is a lengthy process, as you have to build and run the computations within sessions. You also have to use other constructs such as placeholders, variable scoping, etc. On the other hand, **PyTorch** wins this point with its dynamic computation graphs, which help in building graphs dynamically. Here, the graph is built at every point of execution, and you can manipulate it at run time.

**PyTorch**, with its dynamic computation, makes debugging a painless process. You can easily use **Python debugging tools** like pdb or ipdb; for instance, you can put `pdb.set_trace()` at any line of code and then proceed to execute further computations, pinpoint the cause of errors, etc. Debugging **TensorFlow** code, by contrast, requires a dedicated debugger, tfdbg.

For now, deployment in **TensorFlow** is much better supported as compared to **PyTorch**.

The documentation of both frameworks is broadly available, as there are examples and tutorials in abundance for both libraries. You could say it is a tie between the two frameworks.


The **serialisation** story also favours **TensorFlow**: the entire graph can be saved as a protocol buffer and then loaded in other languages such as C++ or Java, whereas **PyTorch**'s pickle-based serialisation is tied to Python.

By default, **TensorFlow** maps nearly all of the GPU memory of all GPUs visible to the process, which is a drawback; however, its well-set defaults assume you want to run your code on the GPU and thus result in fair management of the device. **PyTorch**, on the other hand, allocates GPU memory only as it is needed.