TensorFlow.js Crash Course – Machine Learning For The Web – Handwriting Recognition

This post has been published first on CodingTheSmartWay.com.

In the first part, TensorFlow.js Crash Course – Machine Learning For The Web – Getting Started, we covered the following topics:

  • What TensorFlow.js is
  • How TensorFlow.js is added to your web application
  • How TensorFlow.js can be used to add machine learning capabilities to your web application

In this part we’re going one step further and will explore another use case: the recognition of handwritten digits. It is therefore assumed that you’re familiar with the basic building blocks of TensorFlow.js which were introduced in the first episode.

What We’re Going To Build

Let’s take a look at the application which we’re going to build in this tutorial. The application will use the MNIST data set to train a neural network. The model is built and trained when the website is loaded. The progress can be seen in the Log Output area:

Once the training procedure is completed, the user is informed with the message “Training complete” and the button in the Predict area is activated.

Pressing the button randomly selects one data set from the MNIST data source to perform a prediction with the trained model. The output looks like the following:

The image of the handwritten digit is presented, and the original value and the predicted value are output. If the prediction is correct, the text “Value recognized successfully” is visible as well. This shows us that the trained neural network was able to recognize the digit from the image correctly.

The user is able to use the button multiple times. The output is extended as you can see in the following screenshot:

The MNIST Database Of Handwritten Digits

MNIST is a data set which contains images of handwritten digits from 0–9. We’ll use that database of images to train the model of our application. Furthermore, we’ll make use of randomly selected images from the MNIST data set to test whether the neural network is able to perform predictions.

Preparing The Project

Again let’s start with setting up the project by creating a new folder:

$ mkdir tfjs02

Change into that newly created project folder:

$ cd tfjs02

Inside the folder we’re now ready to create a package.json file, so that we’re able to manage dependencies by using the Node.js Package Manager:

$ npm init -y

Because we’ll be installing the dependencies (e.g. the TensorFlow.js library) locally in our project folder, we need to use a module bundler for our web application. To keep things as easy as possible we’re going to use the Parcel web application bundler, because Parcel works with zero configuration. Let’s install the Parcel bundler by executing the following command in the project directory:

$ npm install -g parcel-bundler

Next, let’s create two new empty files for our implementation:

$ touch index.html index.js

Finally let’s add the Bootstrap library as a dependency because we will be using some Bootstrap CSS classes for our user interface elements:

$ npm install bootstrap
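Because Bootstrap is installed as a local npm dependency, its stylesheet also has to be pulled into the bundle. A minimal way to do that (assuming the default file layout of the bootstrap npm package) is to import the CSS file at the top of index.js later on; Parcel picks it up automatically:

import 'bootstrap/dist/css/bootstrap.css';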

Now let’s add further dependencies to the project to make sure that we’re able to use the latest ECMAScript features like async / await:

$ npm install --save-dev babel-plugin-transform-runtime babel-runtime

Create a .babelrc file and add:

{
    "plugins": [
        ["transform-runtime",
        {
            "polyfill": false,
            "regenerator": true
        }]
    ]
}

Last but not least, we must not forget to install TensorFlow.js as well:

$ npm install @tensorflow/tfjs
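Later on, once the implementation is in place, the application can be started with Parcel’s built-in development server from the project directory:

$ parcel index.html

Parcel bundles index.js (and everything it imports), serves the application at http://localhost:1234 by default and rebuilds automatically whenever a file is changed.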

Building The Convolutional Neural Network Model

Creating A Sequential Model Instance

Before we start building the convolutional neural network model, we define a variable model which will hold the model and a function createModel which will contain the code that is needed to create and compile the machine learning model:

var model;

function createModel() {
    // Insert the following pieces of code here
}

Let’s first create the sequential model instance, as we already learned in episode 1 of this series, and insert the following code into the createModel function:

createLogEntry('Create model ...'); 
model = tf.sequential(); 
createLogEntry('Model created');

Additionally we’re making use of a function named createLogEntry. This function will be implemented later on and is used to output text messages to the Log Output area.

Adding The First Layer

First, let’s add a two-dimensional convolutional layer by using the following code:

createLogEntry('Add layers ...'); 
model.add(tf.layers.conv2d({ 
 inputShape: [28, 28, 1],
 kernelSize: 5,
 filters: 8,
 strides: 1,
 activation: 'relu',
 kernelInitializer: 'VarianceScaling'
}));

The layer is created via tf.layers.conv2d. To configure the layer, a configuration object is passed as a parameter to this method. The new layer is added to the model by passing it into the call of the method model.add.

The configuration object which is passed to conv2d contains six configuration properties in total:

  • inputShape: This is the shape of the input data of the first layer. The MNIST data contains images of 28×28 pixels. The pixels can only be black or white (a single color channel), so we’re assigning the shape [28, 28, 1] here.
  • kernelSize: The kernelSize value is the size of the filter window of the convolutional layer which is applied to the input data. We’re using the value 5 here to define a square filter window of 5×5 pixels.
  • filters: This is the number of filter windows (of size kernelSize) which are applied to the input data.
  • strides: This value specifies by how many pixels the filter window is sliding over the input image.
  • activation: The activation function which is applied to the data once the filter windows have been applied. Here we’re using the Rectified Linear Unit (ReLU) function, which is a very common activation function in machine learning.
  • kernelInitializer: We’re using VarianceScaling (which is a common initializer) to initialize the model’s weights.
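As a quick sanity check (assuming the layer’s default 'valid' padding, i.e. no padding added around the image): a 5×5 kernel moved with stride 1 over a 28×28 input produces (28 − 5) / 1 + 1 = 24 positions per dimension, so this first layer transforms the [28, 28, 1] input into a [24, 24, 8] output, one 24×24 feature map per filter.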

Adding The Second Layer

The next layer we’re going to add to our neural network model is a two-dimensional max pooling layer. We’re using that layer to down-sample the image, so that it is half the size of the input from the previous layer, by defining the max pooling layer in the following way:

model.add(tf.layers.maxPooling2d({ 
 poolSize: [2, 2], 
 strides: [2, 2] 
}));

The layer is configured by passing over a configuration object with two configuration properties:

  • poolSize: This is the size of the sliding window (2×2 pixels) which is applied to the input.
  • strides: This value specifies by how many pixels the filter window is sliding over the input image.

Since both values are set to [2, 2], the pooling windows are completely non-overlapping. As a result, this will cut the size of the input from the previous layer in half.
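Applied to the [24, 24, 8] output of the first convolutional layer, this max pooling layer therefore produces a [12, 12, 8] output.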

Adding Another Convolutional Layer

A common pattern in convolutional neural network models used for image recognition is to repeat the pattern of a convolutional layer followed by a max pooling layer. So let’s again add a two-dimensional convolutional layer as the third layer in our model:

model.add(tf.layers.conv2d({ 
 kernelSize: 5, 
 filters: 16, 
 strides: 1, 
 activation: 'relu', 
 kernelInitializer: 'VarianceScaling'  
}));

This time we do not need to define the input shape, because the shape is determined automatically by the output shape of the previous layer.

Adding Another MaxPooling Layer

The fourth layer is again a max pooling layer to further down-sample the result:

model.add(tf.layers.maxPooling2d({ 
 poolSize: [2, 2],  
 strides: [2, 2] 
}));

Adding A Flatten Layer

Having repeated the pattern of a convolutional layer followed by a max pooling layer a second time, we now add a flatten layer as the fifth layer of our model:

model.add(tf.layers.flatten());

This layer will flatten the output from the previous layer to a vector.
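Following the shapes through the model (again assuming the default 'valid' padding): the second convolutional layer turns the [12, 12, 8] input into an [8, 8, 16] output ((12 − 5) + 1 = 8 positions, 16 filters), the second max pooling layer halves that to [4, 4, 16], and the flatten layer therefore produces a vector with 4 × 4 × 16 = 256 values.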

Adding A Dense Layer (Fully Connected Layer) For Performing The Final Classification

The final layer which is added to our model is a dense layer (fully connected layer). This layer will perform the final classification:

model.add(tf.layers.dense({ 
 units: 10, 
 kernelInitializer: 'VarianceScaling', 
 activation: 'softmax' 
})); 
createLogEntry('Layers created');

The dense layer configuration consists of the following properties:

  • units: The size of the output. As we’d like to do a 10-class classification to predict digits between zero and nine (0–9), we’re setting the value to 10.
  • kernelInitializer: Set to VarianceScaling.
  • activation: The activation function which is used for classification. The softmax activation function creates a probability distribution over the 10 classes.
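If you’d like to verify the layer shapes discussed above yourself, you can optionally print an overview of all layers to the browser console once they have been added. model.summary() is part of the TensorFlow.js Layers API and lists each layer together with its output shape and number of parameters:

// optional: print each layer with its output shape and parameter count to the browser console
model.summary();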

Compiling The Model

All needed layers have been added to the model. Before we train the model with the MNIST data we need to make sure that the model is compiled:

createLogEntry('Start compiling ...'); 
model.compile({ 
 optimizer: tf.train.sgd(0.15), 
 loss: 'categoricalCrossentropy' 
}); 
createLogEntry('Compiled');

The object which is passed to the call of model.compile contains two properties:

  • optimizer: The convolutional neural network model will make use of an SGD (Stochastic Gradient Descent) optimizer with a learning rate of 0.15.
  • loss: As the loss function we choose categoricalCrossentropy, which is often used for classification tasks.
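If you’d also like to track the classification accuracy while training (optional and not required for the rest of this tutorial), the configuration object of model.compile accepts a metrics property as well:

model.compile({
 optimizer: tf.train.sgd(0.15),
 loss: 'categoricalCrossentropy',
 metrics: ['accuracy']
});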

Loading The Data And Training The Model

Let’s start training the model with MNIST data sets of handwritten digits. To access the MNIST data from a remote server we’re using the MnistData class from the project https://github.com/tensorflow/tfjs-examples/tree/master/mnist. To make that class available, just download the file data.js from that repository and place it in our project directory. In index.js use the following import statement to make the MnistData class available:

import {MnistData} from './data';

The data should be kept in a variable named data. A load function is added to our application to load the data by calling the MnistData method load:

let data; 
async function load() { 
 createLogEntry('Loading MNIST data ...'); 
 data = new MnistData(); 
 await data.load(); 
 createLogEntry('Data loaded successfully'); 
}

With the MNIST data records available we’re now ready to prepare for training. Let’s first define two constants:

const BATCH_SIZE = 64; 
const TRAIN_BATCHES = 150;

The training will not be performed in one operation. Instead we’ll perform the training in batches of data. The size of a batch and the number of batches to be trained are defined by those constants. The training logic is encapsulated in the function train:

async function train() {
 createLogEntry('Start training ...');
 for (let i = 0; i < TRAIN_BATCHES; i++) {
  // tf.tidy cleans up all intermediate tensors created while preparing the batch
  const batch = tf.tidy(() => {
   const batch = data.nextTrainBatch(BATCH_SIZE);
   // reshape the flat image vectors into the [batch, height, width, channels] format the model expects
   batch.xs = batch.xs.reshape([BATCH_SIZE, 28, 28, 1]);
   return batch;
  });

  // train the model on the current batch for a single epoch
  await model.fit(
   batch.xs, batch.labels, {batchSize: BATCH_SIZE, epochs: 1}
  );

  // free the memory held by the tensors of this batch
  tf.dispose(batch);

  // hand control back to the browser so the UI (log output) can update between batches
  await tf.nextFrame();
 }
 createLogEntry('Training complete');
}

Implementing the User Interface

In the next step let’s add the HTML / CSS code which is needed to implement the user interface of our application in index.html:

<html>
<body>
    <style>
        .prediction-canvas{
            width: 100px;
            margin: 20px;
        }
        .prediction-div{
            display: inline-block;
            margin: 10px;
        }
    </style>
    <div class="container" style="padding-top: 20px">
        <div class="card">
            <div class="card-header">
                <strong>TensorFlow.js Demo - Handwriting Recognition</strong>
            </div>
            <div class="card-body">
                <div class="card">
                    <div class="card-body">
                        <h5 class="card-title">Log Output:</h5>
                        <div id="log"></div>
                    </div>
                </div>
                <br>
                <div class="card">
                    <div class="card-body">
                        <h5 class="card-title">Predict</h5>
                        <button type="button" class="btn btn-primary" id="selectTestDataButton" disabled>Please wait until model is ready ...</button>
                        <div id="predictionResult"></div>
                    </div>
                </div>
            </div>
        </div>
    </div>

    <script src="./index.js"></script>
</body>
</html>

Here we’re making use of various Bootstrap CSS classes.

For the output which is written to the log output area, a <div> element is added as a placeholder with the ID log. The button which the user can use to perform a prediction based on a randomly selected MNIST data set gets assigned the ID selectTestDataButton.

The output area for the prediction result is the <div> element with the ID predictionResult.

The createLogEntry function has already been used several times to output messages in the log area. Now let’s add the missing implementation of that function in index.js as well:

function createLogEntry(entry) { 
   document.getElementById('log').innerHTML += '<br>' + entry; 
}

Finally, let’s bring everything together and implement the function main which calls createModel, load and train:

async function main() {
    createModel();
    await load();
    await train();
    document.getElementById('selectTestDataButton').disabled = false;
    document.getElementById('selectTestDataButton').innerText = "Randomly Select Test Data And Predict";
}

main();

Furthermore we’re making sure that the button is enabled and its label is updated after the training procedure has been completed successfully.

Prediction

Let’s move on to the final task and add the code which is needed to perform the prediction based on our trained convolutional neural network. Therefore we’re adding the predict function in the following way:

async function predict(batch) {
    tf.tidy(() => {
        // the labels are one-hot encoded, so argMax yields the original digit
        const input_value = Array.from(batch.labels.argMax(1).dataSync());

        const div = document.createElement('div');
        div.className = 'prediction-div';

        // run the trained model; argMax over the 10 class probabilities yields the predicted digit
        const output = model.predict(batch.xs.reshape([-1, 28, 28, 1]));
        const prediction_value = Array.from(output.argMax(1).dataSync());

        // extract the raw pixel data of the (single) image contained in the batch
        const image = batch.xs.slice([0, 0], [1, batch.xs.shape[1]]);

        // draw the handwritten digit onto a canvas element
        const canvas = document.createElement('canvas');
        canvas.className = 'prediction-canvas';
        draw(image.flatten(), canvas);

        // output the original value, the predicted value and the result of the comparison
        const label = document.createElement('div');
        label.innerHTML = 'Original Value: ' + input_value;
        label.innerHTML += '<br>Prediction Value: ' + prediction_value;
        console.log(prediction_value + '-' + input_value);
        if (prediction_value - input_value == 0) {
            label.innerHTML += '<br>Value recognized successfully!';
        } else {
            label.innerHTML += '<br>Recognition failed!';
        }

        div.appendChild(canvas);
        div.appendChild(label);
        document.getElementById('predictionResult').appendChild(div);
    });
}

Part of the output is the image of the handwritten digit. To draw the image we’re making use of a custom draw function. The implementation of that function needs to be added to index.js as well:

function draw(image, canvas) {
    const [width, height] = [28, 28];
    canvas.width = width;
    canvas.height = height;
    const ctx = canvas.getContext('2d');
    const imageData = new ImageData(width, height);
    const data = image.dataSync();
    for (let i = 0; i < height * width; ++i) {
      const j = i * 4;
      imageData.data[j + 0] = data[i] * 255;
      imageData.data[j + 1] = data[i] * 255;
      imageData.data[j + 2] = data[i] * 255;
      imageData.data[j + 3] = 255;
    }
    ctx.putImageData(imageData, 0, 0);
}

Finally we need to add the click event handler function for the selectTestDataButton:

document.getElementById('selectTestDataButton').addEventListener('click', async (el, ev) => {
    const batch = data.nextTestBatch(1);
    await predict(batch);
});

Inside this function we’re using the method nextTestBatch from the MnistData class to retrieve a batch of test data of size 1 (which means that only one data set is included). Next we’re calling the asynchronous predict function by using the keyword await and passing over the test data set.
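If you’d like to get a rough impression of the overall accuracy instead of checking one random sample at a time, a small optional experiment (a sketch that reuses the same MnistData test batches and the helpers defined above) is to request a larger test batch and count how many predictions match the labels:

function evaluate(sampleCount) {
    // fetch sampleCount test images and run them through the trained model in one go
    const batch = data.nextTestBatch(sampleCount);
    const output = model.predict(batch.xs.reshape([-1, 28, 28, 1]));
    // argMax over the 10 class probabilities yields the predicted / original digits
    const predictions = Array.from(output.argMax(1).dataSync());
    const labels = Array.from(batch.labels.argMax(1).dataSync());
    const correct = predictions.filter((value, index) => value === labels[index]).length;
    createLogEntry('Accuracy on ' + sampleCount + ' test samples: ' + (100 * correct / sampleCount).toFixed(1) + '%');
    // free the tensors created for this evaluation
    tf.dispose([batch.xs, batch.labels, output]);
}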

Top Machine Learning Framework: 5 Machine Learning Frameworks of 2019

Machine Learning (ML) is one of the fastest-growing technologies today. ML has a lot of frameworks to build a successful app, and so as a developer, you might be getting confused about choosing the right framework. Here we have curated the top 5 machine learning frameworks that put cutting-edge technology in your hands.

Through machine learning frameworks, mobile phones and tablets are getting powerful enough to run software that can learn and react in real time. Machine learning is a complex discipline, but implementing ML models is far less daunting and difficult than it used to be. Models now automatically improve their performance over time, with interactions and experiences, and most importantly with the acquisition of useful data pertaining to the tasks allocated.

As we know, ML is considered a subset of Artificial Intelligence (AI): the scientific study of statistical models and algorithms that help a computing system accomplish designated tasks efficiently. Now, as a mobile app developer, when you are planning to choose a machine learning framework you must keep the following things in mind:

  • The framework should be performance-oriented
  • Grasping and coding should be quick
  • It should allow the computational process to be distributed, i.e. the framework must support parallelization
  • It should provide a facility to create models and offer developer-friendly tooling

Let’s learn about the top five machine learning frameworks to make the right choice for your next ML application development project. Before we dive deeper into these frameworks, here are the different types of ML frameworks that are available on the web:

  • Mathematically oriented
  • Neural-network-based
  • Linear algebra tools
  • Statistical tools

Now, let’s have an insight into the ML frameworks that will help you in selecting the right framework for your ML application.

Don’t Miss Out on These 5 Machine Learning Frameworks of 2019
#1 TensorFlow
TensorFlow is an open-source software library for dataflow programming across a range of tasks. The framework is based on computational graphs, which are essentially networks of nodes. Each node represents a mathematical operation that can be as simple or as complex as a multivariate analysis. This framework is said to be the best among all ML libraries as it supports regressions, classifications, and complicated tasks and algorithms such as neural networks.

This machine learning library demands additional effort while learning the TensorFlow Python framework. Working with the framework’s n-dimensional arrays becomes easy once you have grasped the Python frameworks and libraries.

The main benefit of this framework is flexibility, although TensorFlow only allows non-automatic migration to newer versions. It runs on GPUs, CPUs, servers, desktops, and mobile devices, and it provides automatic differentiation and good performance. A few goliaths like Airbus, Twitter and IBM have used the TensorFlow framework innovatively.

#2 FireBase ML Kit
The Firebase machine learning framework is a library that allows the use of highly accurate, pre-trained deep models with minimal code and effort. We at Space-O Technologies use this machine learning technology for image classification and object detection. The Firebase framework offers models both locally and on the Google Cloud.

This is one of our ML tutorials to help you understand the Firebase framework. First of all, we collected photos of an empty glass, a half-filled glass and a full glass of water, and fed them into the machine learning algorithms. This helped the machine to search and analyze according to the nature, behavior, and patterns of the object placed in front of it.

  • The first photo that we targeted through the machine learning algorithms was meant to recognize an empty glass. So that the app could do its analysis and search for the correct answer, we provided it with a number of empty glass images prior to the experiment.
  • The other photo that we targeted was a half-filled glass of water. The core of a machine learning app is to assemble data and to manage it as per its analysis. The app was able to recognize the image accurately because of the bits and pieces of glass data given to it beforehand.
  • The last one was a full glass recognition image.

Note: For correct recognition, there has to be one label that carries at least 100 images of a particular object.

#3 CAFFE (Convolutional Architecture for Fast Feature Embedding)
The CAFFE framework is the fastest way to apply deep neural networks. It is a machine learning framework best known for its Model Zoo, a set of pre-trained ML models that are capable of performing a great variety of tasks. Image classification, machine vision, and recommender systems are some of the tasks performed easily through this ML library.

This framework is mostly written in C++. It can run on multiple hardware platforms and can switch between CPU and GPU with a single flag. It provides well-organized MATLAB and Python interfaces.

Now, if you plan to do machine learning app development, this framework is mainly used in academic research projects and for designing startup prototypes. It is the most apt machine learning technology for research experiments and industry deployment. This framework can process 60 million images per day with a single Nvidia K40 GPU.

#4 Apache Spark
Apache Spark is a cluster-computing framework with APIs in different languages like Java, Scala, R, and Python. Spark’s machine learning library, MLlib, is considered foundational for Spark’s success. Building MLlib on top of Spark makes it possible to tackle distinct needs with a single tool instead of many disjointed ones.

The advantages of such an ML library are lower learning curves and less complex development and production environments, which ultimately results in a shorter time to deliver high-performing models. The key benefit of MLlib is that it allows data scientists to solve multiple data problems in addition to their machine learning problems.

It can easily handle graph computations (via GraphX), streaming (real-time calculations), and real-time interactive query processing with Spark SQL and DataFrames. Data professionals can focus on solving the data problems instead of learning and maintaining a different tool for each scenario.

#5 Scikit-Learn
Scikit-learn is said to be one of the greatest feats of the Python community. This machine learning framework efficiently handles data mining and supports multiple practical tasks. It is built on foundations like SciPy, NumPy, and matplotlib. This framework is known for supervised and unsupervised learning algorithms as well as cross-validation. Scikit-learn is largely written in Python, with some core algorithms implemented in Cython to achieve performance.

This machine learning framework can work on multiple tasks without compromising on speed. Some remarkable products and organisations use this framework, such as Spotify, Evernote, AWeber, and Inria.

With the help of machine learning, building iOS and Android apps powered by ML has become quite an easy process. With this emerging technology trend, varieties of data are available, computational processing has become cheaper and more powerful, and data storage is affordable. So if you are an app developer or have an idea for a machine learning app, you should definitely dive into this niche.

Conclusion
Still have any queries or confusion regarding ML frameworks, the machine learning app development guide, the difference between Artificial Intelligence and machine learning, ML algorithms from scratch, or how this technology is helpful for your business? Just fill in our contact form. Our sales representatives will get back to you shortly and resolve your queries. The consultation is absolutely free of cost.

Author Bio: This blog was written with the help of Jigar Mistry, who has over 13 years of experience in the web and mobile app development industry. He has guided the development of over 200 mobile apps and has special expertise in different mobile app categories like Uber-like apps, health and fitness apps, on-demand apps and machine learning apps. So, we took his help to write this complete guide on machine learning technology and machine learning app development.

Introduction to Machine Learning with TensorFlow.js

Learn how to build and train Neural Networks using the most popular Machine Learning framework for javascript, TensorFlow.js.

This is a practical workshop where you'll learn "hands-on" by building several different applications from scratch using TensorFlow.js.

If you have ever been interested in Machine Learning, if you want to get a taste for what this exciting field has to offer, if you want to be able to talk to other Machine Learning/AI specialists in a language they understand, then this workshop is for you.

Thanks for reading

If you liked this post, share it with all of your programming buddies!

Follow us on Facebook | Twitter

Further reading about Machine Learning and TensorFlow.js

Machine Learning A-Z™: Hands-On Python & R In Data Science

Machine Learning In Node.js With TensorFlow.js

Machine Learning in JavaScript with TensorFlow.js

A Complete Machine Learning Project Walk-Through in Python

Top 10 Machine Learning Algorithms You Should Know to Become a Data Scientist

TensorFlow Vs PyTorch: Comparison of the Machine Learning Libraries


Libraries play an important role when developers decide to work in Machine Learning or Deep Learning research.

According to this article, based on a survey of a sample of 1,616 ML developers and data scientists, for every one developer using PyTorch, there are 3.4 developers using TensorFlow. In this article, we list down 10 comparisons between these two Machine Learning libraries.

1 - Origin

PyTorch has been developed by Facebook and is based on Torch, while TensorFlow, an open-source Machine Learning library developed by Google Brain, is based on the idea of data flow graphs for building models.

2 - Features

TensorFlow has some attractive features such as TensorBoard, which serves as a great option for visualising a Machine Learning model, and TensorFlow Serving, a dedicated gRPC server that is used during the deployment of models in production. On the other hand, PyTorch has several distinguished features too, such as dynamic computation graphs, native support for Python, and support for CUDA, which ensures less time for running the code and an increase in performance.

3 - Community

TensorFlow is adopted by many researchers in various fields like academics, business organisations, etc. It has a much bigger community than PyTorch, which implies that it is easier to find resources or solutions for TensorFlow. There is a vast amount of tutorials, code, and support for TensorFlow, while PyTorch, being the newcomer compared to TensorFlow, lacks these benefits.

4 - Visualisation

Visualisation plays a leading role when presenting any project in an organisation. TensorFlow has TensorBoard for visualising Machine Learning models, which helps during training and makes it possible to spot errors quickly. It is a real-time representation of the graphs of a model which not only depicts the graph structure but also shows the accuracy graphs in real time. PyTorch lacks this eye-catching feature.

5 - Defining Computational Graphs

In TensorFlow, defining a computational graph is a lengthy process as you have to build and run the computations within sessions. Also, you will have to use other constructs such as placeholders, variable scoping, etc. On the other hand, PyTorch wins this point as it has dynamic computation graphs, which help in building graphs dynamically. Here, the graph is built at every point of execution and you can manipulate the graph at run time.

6 - Debugging

Since PyTorch uses a dynamic computational process, debugging is painless. You can easily use Python debugging tools like pdb or ipdb; for instance, you can put “pdb.set_trace()” at any line of code and then proceed with the execution of further computations, pinpoint the cause of the errors, etc. For TensorFlow, you have to use the TensorFlow debugger tool, tfdbg, which lets you view the internal structure and states of running TensorFlow graphs during training and inference.

7 - Deployment

For now, deployment in TensorFlow is much better supported as compared to PyTorch. It has the advantage of TensorFlow Serving, which is a flexible, high-performance serving system for deploying Machine Learning models, designed for production environments. In PyTorch, however, you can use Flask, a microframework for Python, for the deployment of models.

8 - Documentation

The documentation of both frameworks is broadly available, as there are examples and tutorials in abundance for both libraries. You can say it is a tie between the two frameworks.

See the official TensorFlow documentation and the official PyTorch documentation for details.

9 - Serialisation

Serialisation in TensorFlow can be seen as one of the advantages for users of this framework. Here, you can save your entire graph as a protocol buffer, which can later be loaded in other supported languages; PyTorch, however, lacks this feature.

10 - Device Management

By default, TensorFlow maps nearly all of the GPU memory of all GPUs visible to the process, which is a drawback, but it automatically presumes that you want to run your code on the GPU because of its well-set defaults, which results in reasonable device management. On the other hand, PyTorch keeps track of the currently selected GPU, and all CUDA tensors created will be allocated on that device.