This blog will be a 3-part series where I will explain the different Recurrent Neural Network architectures:

Part 1: Explanation of the Traditional Recurrent Neural Networks.

Part 2: Explanation of GRUs.

Part 3: Explanation of LSTMs.

A recurrent neural network is a type of Artificial Neural Network (ANN) where the output of the previous step is fed as an input to the current step. RNNs are primarily used in sequence-prediction problems, such as predicting the weather, a stock's share price, or the next word in a sentence given the previous words.

Before understanding how a Recurrent Neural Network works, let us understand how its weights are updated during training. The standard update rule is: new weight = old weight − learning rate × gradient. The terms involved are:

**Weights:** These are the numbers which, when multiplied by the inputs, produce the predicted outputs.

**Learning Rate:** As we approach accurate predictions, the learning rate controls how large a step we take toward the right solution during each iteration.

**Gradient:** Tells us the direction in which to move to reach the right solution.
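The update rule above can be sketched in a few lines. This is a minimal, illustrative example assuming a single weight, a squared-error loss, and made-up numbers (none of these values come from the article):

```python
# Minimal sketch of the weight update rule:
# new weight = old weight - learning_rate * gradient
w = 0.5                 # current weight (illustrative value)
learning_rate = 0.1     # step size per iteration
x, target = 2.0, 3.0    # one input and its desired output

prediction = w * x                    # weight times input predicts the output
error = prediction - target          # how far off we are
gradient = 2 * error * x             # d(error^2)/dw, the "uphill" direction
w = w - learning_rate * gradient     # step against the gradient

print(round(w, 4))  # → 1.3
```

Repeating this step many times nudges the weight toward values that make the prediction match the target.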

Let us see how we can relate the way we think to the way an RNN works. Say you want to buy an air cooler for your home, and you are reading its product reviews. A review could look something like this:

“_Great product, consumes less power, keeps room really cool. Would definitely suggest you buy, thumbs up._” — {1}

Thus at any point, our mind can remember words like:

“_Great product… less power… cool.. definitely suggest.. thumbs up.._”

Now if someone asked you about the product, you would say:

“_It’s a great product that consumes less power, keeps the room cool, and I definitely suggest you buy it._”

Now let us see how this intuition can be expressed mathematically in RNNs.

In an RNN, for the sentence in {1}, the words are first converted to machine-readable vectors, then the algorithm processes these vectors one by one.
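One simple way to get machine-readable vectors is one-hot encoding, where each word becomes a vector with a single 1 at that word's index. A hedged sketch, using a tiny illustrative vocabulary built from the review (the vocabulary and words here are assumptions for demonstration):

```python
import numpy as np

# Build a toy vocabulary from a few review words (illustrative only)
words = "great product consumes less power".split()
vocab = {w: i for i, w in enumerate(sorted(set(words)))}

def one_hot(word):
    # Vector of zeros with a single 1 at the word's vocabulary index
    v = np.zeros(len(vocab))
    v[vocab[word]] = 1.0
    return v

# One vector per time step, fed to the RNN one by one
sequence = [one_hot(w) for w in words]
print(len(sequence), sequence[0].shape)  # → 5 (5,)
```

In practice, learned embeddings are usually preferred over one-hot vectors, but the idea is the same: each word becomes a vector the network can process at one time step.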

Let us have a look at the representation of an RNN.

An unrolled version of a Recurrent Neural Network

Where,

**‘ht’** is the hidden state at time step t (in our sentence example, this is equivalent to the memory of the words we retain from the product review),

**‘A’** is the activation function (here, the tanh activation),

**‘xt’** is the input at time step **‘t’**,

**‘t’** is the time step.

The algorithm takes the first input vector x0 and processes it to produce the first hidden state h0; this hidden state becomes an input at the next time step, which computes the next hidden state h1, and so on. The hidden state thus acts as the neural network's memory: it holds information from the previous steps.
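The loop described above can be sketched directly. This is a minimal forward pass using the common recurrence ht = tanh(Whh · ht−1 + Wxh · xt); the sizes, random weights, and random inputs are illustrative assumptions, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size, steps = 4, 3, 5

# Weight matrices (illustrative random initialization)
W_xh = rng.normal(size=(hidden_size, input_size)) * 0.1   # input -> hidden
W_hh = rng.normal(size=(hidden_size, hidden_size)) * 0.1  # hidden -> hidden

h = np.zeros(hidden_size)  # initial hidden state: empty memory
for t in range(steps):
    x_t = rng.normal(size=input_size)       # stand-in for a word vector
    h = np.tanh(W_hh @ h + W_xh @ x_t)      # new memory mixes old memory + input

print(h.shape)
```

Each iteration folds the current input into the running hidden state, which is exactly how the network "remembers" earlier words when it reaches the end of the review.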

