For a long time, I wanted to learn TensorFlow, and I finally decided to start this month. I had used TensorFlow on my robotics team during my undergrad, but only superficially.

TensorFlow, developed by Google, is one of the most popular and widely used libraries for developing and deploying machine learning models, and it offers a large collection of mathematical operations.

Google launched TensorFlow to provide an ecosystem with a collection of workflows for developing and training models, making it possible to apply machine learning in almost any application. In fact, we all use TensorFlow without realizing it: when you use Google Photos or Google voice search, you are indirectly using TensorFlow models, which run on large clusters of Google hardware and are powerful at perceptual tasks.

TensorFlow has over 150K GitHub stars and a whopping 83.2K forks; its open-source repository on GitHub is a wonderful resource. This blog is a first-hand introduction to TensorFlow: what it is, how to use it, and why to use it.

## Introduction

TensorFlow = Tensor + Flow

The core components of TensorFlow are tensors and the computational graph, in which data flows between nodes along edges. So, let's first understand what TensorFlow is made of:
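To make the "Tensor + Flow" idea concrete, here is a minimal sketch (assuming TensorFlow 2.x is installed): `tf.function` traces ordinary Python code into a computational graph whose nodes are operations and whose edges carry tensors. The function name and values below are just illustrative.

```python
import tensorflow as tf

# Two constant tensors -- the "Tensor" part.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

# tf.function traces this Python function into a computational
# graph -- the "Flow" part: ops are nodes, tensors flow on edges.
@tf.function
def add_and_scale(x, y):
    return (x + y) * 2.0

result = add_and_scale(a, b)
print(result.numpy())  # [[12. 16.] [20. 24.]]
```

The same code also runs eagerly (without `@tf.function`); the decorator simply tells TensorFlow to build and reuse a graph, which is how TensorFlow 1.x worked by default.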

### Tensor

Machine learning generally involves numerical computation, so data is usually represented numerically. To define a tensor: it is a container that can house data in N dimensions.

In a nutshell, tensors are mathematical objects used to describe physical quantities such as scalars and vectors. In fact, tensors are simply a generalization of scalars and vectors: a scalar is a zero-rank tensor, while a vector is a first-rank tensor. You can determine the rank of a tensor by the number of directions (indices) required to represent it, which is also the dimensionality of the array.
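A quick sketch of these ranks in TensorFlow (assuming TensorFlow 2.x): `tf.rank` reports how many indices a tensor needs, matching the scalar/vector/matrix progression above.

```python
import tensorflow as tf

scalar = tf.constant(3.0)                        # rank 0: no indices needed
vector = tf.constant([1.0, 2.0, 3.0])            # rank 1: one index
matrix = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # rank 2: two indices

print(tf.rank(scalar).numpy())  # 0
print(tf.rank(vector).numpy())  # 1
print(tf.rank(matrix).numpy())  # 2
print(matrix.shape)             # (2, 2)
```

Note that rank here means the number of axes (the length of the shape), not the linear-algebra notion of matrix rank.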