1595575740

I’ve taken up the 6-week course by Jovian.ml on **“Deep Learning with PyTorch: Zero to GANs”**, taught by **Aakash N S**. In this article (and the following ones), I will share what I’ve learned during the course.

PyTorch is an open-source machine learning and deep learning framework developed by Facebook. It was created for large-scale workloads such as image analysis, including object detection, recognition, and classification, among several other tasks. It is written in Python and C++ and can be combined with other Python libraries to implement complex algorithms.

PyTorch is built on tensors. A PyTorch tensor is an n-dimensional array, similar to NumPy arrays.

Open the **Anaconda** terminal, type and run the following command:

conda install pytorch torchvision cudatoolkit=10.2 -c pytorch

Start by importing the PyTorch library into your Jupyter Notebook. I’m importing it as tr (too lazy to type the whole thing). You can just import it as **torch** if you wish.
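As a minimal sketch of that import (the alias tr is just my shorthand; any valid name works):

```python
import torch as tr  # alias for brevity; plain `import torch` works the same

# create a small 2x2 tensor to confirm the import works
t = tr.tensor([[1., 2.], [3., 4.]])
print(t.shape)  # torch.Size([2, 2])
```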

#tensor #pytorch #deep-learning #machine-learning #framework

1596031200

**ALONG WITH SOME INTERESTING FUNCTIONS.**

What is Machine Learning? We know that humans learn from their past experiences, or at least they try to. A computer or machine, however, needs step-by-step instructions on what to do, i.e. it works on pure logic, following instructions provided through programs or scripts. So, to oversimplify, Machine Learning is training computers to learn from past experiences (past data) so that they no longer require human intervention to make decisions. There are many algorithms and methods to achieve this, which may be supervised, unsupervised, or a mix of both.

PyTorch is an open-source machine learning library. It can be seen as a Python implementation of Torch, which was initially implemented in C with a wrapper in the Lua scripting language. It is a library for manipulating the tensors used in machine learning and other complex mathematical applications.

Tensors are mathematical entities that can loosely be thought of as generalized matrices. A tensor can be 0-D (a scalar), 1-D (a vector), 2-D (a matrix), or higher-dimensional. The rank of a tensor represents the number of its axes, so the rank of a vector is one, as it requires only one directional indicator (axis) per component.

PyTorch was developed by Facebook’s AI research team. The library provides an end-to-end research framework that allows the chaining of high-level neural network layers.

PyTorch provides libraries for basic tensor manipulation on CPUs as well as GPUs, a built-in neural network library, model training utilities, and a multiprocessing library that can work with shared memory.

The chief advantage of PyTorch is that it lets the developer use existing Python libraries and software. Packages like NumPy, SciPy, and Cython (for compiling Python to C for the sake of speed) all work hand-in-hand with PyTorch. The developers also emphasize its memory efficiency, thanks to a custom-written GPU memory allocator. Its tensor computations can serve as a replacement for similar functions in NumPy.

PyTorch is mainly employed for applications such as computer vision (the use of image processing algorithms to gain high-level understanding from digital images or videos) and natural language processing (the ability of a computer program to understand human language).

**1. torch.set_default_tensor_type(t)**

A tensor is declared with the floating-point (float32) datatype by default. This function in the torch package can be used to set a different default type for newly declared tensors. The torch package defines nine CPU and nine GPU tensor types. The parameter of the function is:

t :- the tensor type to set as the default.

Example for torch.set_default_tensor_type(t)
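A minimal sketch of such an example (the double-precision type chosen here is just for illustration):

```python
import torch

# by default, new floating-point tensors are float32
a = torch.tensor([1.0, 2.0])
print(a.dtype)  # torch.float32

# change the default to double precision (float64)
torch.set_default_tensor_type(torch.DoubleTensor)
b = torch.tensor([1.0, 2.0])
print(b.dtype)  # torch.float64
```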

**2. torch.arange(start=0, end, step=1, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)**

This function returns a 1-D tensor of size ⌈(end - start)/step⌉ with values in the interval [start, end); that is, the sequence starts at start, and end itself is excluded. step is the gap or difference between two consecutive numbers in the sequence, and we can control the spacing by adjusting its value.
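A quick illustration of arange with a custom step:

```python
import torch

# values in [1, 10) with a gap of 2: size is ceil((10 - 1) / 2) = 5
t = torch.arange(start=1, end=10, step=2)
print(t)  # tensor([1, 3, 5, 7, 9])
```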

#tensor #torch #basics #pytorch #machine-learning #deep learning

1596687300

This is a prequel to my previous blog post **My first deep learning model using PyTorch**. In this tutorial I will be covering the basic know-how about PyTorch tensors.

PyTorch is a Python-based deep learning framework developed by Facebook’s AI Research. It has gained popularity because of its flexibility and speed. It has been integrated with cloud platforms including Amazon’s SageMaker, Google’s GCP, and Azure Machine Learning service.

So let’s start with understanding what exactly are tensors!!

Tensors are containers that can hold data in N dimensions. A general tensor of rank R in N dimensions has N^R components. From this perspective, a rank-2 tensor (requiring N^2 numbers to describe) is equivalent to an N×N matrix. A scalar can be considered a rank-0 tensor, and a vector can be introduced as a rank-1 tensor.

Tensors are similar to NumPy ndarrays. PyTorch is targeted as a replacement for NumPy that can **use the power of GPUs**. By copying an array to GPU memory, performance improves thanks to parallel computing.

The tensor creation documentation by PyTorch lists many ways to create tensors. Here, I am discussing a few popular methods.

**a) torch.tensor** — This is used to create a tensor from pre-existing data.

**b) torch.zeros** — This method returns a tensor filled with zeros, with the shape defined by its arguments.

**c) torch.rand** — This method returns a tensor filled with random numbers from a uniform distribution on the interval [0, 1). Similarly, torch.randn can be used for a normal distribution.

**d) torch.from_numpy** — This creates a tensor from a NumPy ndarray. The returned tensor and the NumPy ndarray **share the same memory**: modifications to the tensor will be reflected in the ndarray and vice versa.
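A short sketch showing each of these creation methods, including the shared-memory behaviour of from_numpy:

```python
import numpy as np
import torch

a = torch.tensor([[1, 2], [3, 4]])  # from pre-existing data
z = torch.zeros(2, 3)               # 2x3 tensor of zeros
r = torch.rand(2, 2)                # uniform samples on [0, 1)

arr = np.array([1.0, 2.0])
t = torch.from_numpy(arr)           # shares memory with arr
arr[0] = 9.0                        # the change shows up in the tensor
print(t[0].item())  # 9.0
```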

PyTorch aims to be an efficient library for computation. It avoids memory copying whenever it can, and that is the core difference between creating a tensor using torch.from_numpy and using torch.tensor.

Other methods like torch.randn, torch.as_tensor, torch.empty, etc. can be explored for tensor creation too; you can refer to the PyTorch documentation for the full list.

To view the shape of a tensor in PyTorch, both size() and shape can be used. shape is an alias for size(), which is what NumPy users are accustomed to; it was added to match NumPy closely.

You can notice similar outputs for both size() and shape.
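A quick sketch comparing the two:

```python
import torch

x = torch.rand(2, 3, 4)
print(x.size())  # torch.Size([2, 3, 4])
print(x.shape)   # torch.Size([2, 3, 4]) -- identical output
```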

To change the shape of a tensor, view and reshape can be explored. Let’s understand both of these methods in more detail.
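As a minimal sketch of the difference: view requires the tensor’s memory to be contiguous and always shares storage with the original, while reshape returns a view when possible and a copy otherwise.

```python
import torch

x = torch.arange(12)
v = x.view(3, 4)     # always a view; requires contiguous memory
r = x.reshape(2, 6)  # a view when possible, otherwise a copy

x[0] = 99            # the change is visible through the view
print(v[0, 0].item())  # 99
```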

#tensor #autograd #pytorch #deep-learning #neural-networks #deep learning

1595580780

PyTorch is a scientific package based on Python, which is used to perform advanced operations using a special datatype known as Tensor. A Tensor is a number, vector, matrix, or multi-dimensional array with regular shape and numbers of the same datatype. PyTorch is an alternative to the NumPy package, which can additionally be used with the power of GPUs. It is also used as a framework for conducting research in deep learning.

Understanding tensors

The 5 operations are:

- expand()
- permute()
- tolist()
- narrow()
- where()

The existing tensor is expanded to new dimensions along any dimension of size 1. The tensor can be expanded along one dimension or multiple dimensions at once. If you do not want to expand the tensor along a particular dimension, you can pass -1 for it.

Note: only singleton dimensions (dimensions of size 1) can be expanded.

In this example, the tensor has original dimensions as [1,2,3]. It is expanded to dimensions [2,2,3].

This function can be used to expand existing tensors along singleton dimensions. It only returns a new view and does not allocate new memory. Therefore, it can be used to study how the tensor behaves at larger dimensions.
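Following the example above, a minimal sketch of expand on a tensor of shape [1, 2, 3]:

```python
import torch

t = torch.tensor([[[1, 2, 3], [4, 5, 6]]])  # shape [1, 2, 3]
e = t.expand(2, -1, -1)  # expand singleton dim 0 to size 2; -1 keeps the others
print(e.shape)  # torch.Size([2, 2, 3])
# no new memory is allocated: both "copies" share the original storage
```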

This function returns a view of the tensor with the order of the dimensions of the original tensor changed according to our choice. For example, if the original dimensions are [1,2,3], we can change them to [3,2,1]. The function takes the required order of dimensions as its parameters.
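A short sketch of permute reordering the dimensions as described:

```python
import torch

t = torch.rand(1, 2, 3)
p = t.permute(2, 1, 0)  # move dim 2 first and dim 0 last
print(p.shape)  # torch.Size([3, 2, 1])
```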

#deep-learning #tensor #operations #jovian #pytorch #deep learning

1591344961

Five PyTorch Tensor Operations you had absolutely no idea about

Getting Started with PyTorch

PyTorch is a relatively new and robust deep learning framework with a dynamic computation graph and great flexibility. It was designed primarily by Soumith Chintala at Facebook AI Research.

#tensor #pytorch #machine-learning #programming