Visualizing our computational graph

Welcome to the fourth part of this series, where we’ve been building a deep learning library in JavaScript.

In the first part of the series, we talked about automatic differentiation (autograd) and demonstrated a simple example in JavaScript.

In the second part, we dove into implementing some of the core parts of a neural network: tensors, linear layers, and the ReLU and softmax activation functions.

And in the third part, we discussed how to create a Sequential model, implemented stochastic gradient descent, and implemented cross-entropy loss.

In this fourth part of the series, our aim is to implement a visualization feature similar to **TensorBoard**:

- Visualizing the computational graph of our neural network.

To visualize the computational graph of the Sequential neural network created in part 3, we need to update the model as follows:

- Assign a name to each layer of the Sequential model
- Assign a name to the weight and bias of each layer
- Assign a name to the input tensors

Assigning a name to an input tensor is very easy:
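The original snippet isn't shown here, but a minimal sketch might look like the following, assuming the Tensor class from part 2 accepts an optional name argument (the exact constructor signature is an assumption):

```javascript
// Minimal sketch: a Tensor that carries a name for graph visualization.
// The constructor signature is an assumption; the real Tensor class from
// part 2 may take different arguments.
class Tensor {
  constructor(data, name = "") {
    this.data = data;
    this.name = name; // shown as the node label in the computational graph
  }
}

const x = new Tensor([1.0, 2.0], "input");
console.log(x.name); // "input"
```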

With the above snippet, we’ve assigned a name to the tensor. Assigning a name to each of the layers, however, is less obvious. One option is to pass a name to each layer when creating it:

```
model = new Sequential([
  new Linear(2, 3, "linear_1"),
  new ReLU("relu_1"),
]);
```

This works for a small model, but imagine doing it by hand for a deeper network with many layers: naming every layer manually quickly becomes tedious and error-prone.

Fortunately, the Sequential model already holds all of the layers, so it can take care of the naming for us.

In the Sequential model, we loop through the `models` list and assign a name to each layer; for layers that have weights and biases, those tensors are named as well. The naming scheme appends a numeric suffix to the layer's class name.
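The loop described above might look something like this sketch, which assumes each layer optionally exposes `weight` and `bias` tensors (those property names, and the lowercase-class-name-plus-counter scheme, are assumptions rather than the library's actual API):

```javascript
// Sketch of automatic naming in the Sequential constructor. We derive a
// base name from each layer's class name, append a running counter, and
// propagate names to the layer's weight and bias tensors if present.
class Sequential {
  constructor(models) {
    this.models = models;
    const counts = {}; // per-class counters, e.g. { linear: 2, relu: 1 }
    for (const layer of this.models) {
      const base = layer.constructor.name.toLowerCase();
      counts[base] = (counts[base] || 0) + 1;
      layer.name = `${base}_${counts[base]}`; // e.g. "linear_1"
      if (layer.weight) layer.weight.name = `${layer.name}.weight`;
      if (layer.bias) layer.bias.name = `${layer.name}.bias`;
    }
  }
}
```

With this in place, a model built from two Linear layers and a ReLU would get the names `linear_1`, `relu_1`, and `linear_2` automatically, and each Linear layer's parameters would be labeled `linear_1.weight`, `linear_1.bias`, and so on.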
