Recurrent Neural Networks (RNNs) are a popular supervised Deep Learning method. Other commonly used Deep Learning networks are Convolutional Neural Networks (CNNs) and Artificial Neural Networks (ANNs). The main goal of Deep Learning is to replicate the functioning of the brain in a machine, and as a result, loosely speaking, each neural network architecture mirrors a part of the brain.

Figure: Brain lobes (image source: https://www.nbia.ca/brain-structure-function/). ANN corresponds to the temporal lobe, CNN to the occipital lobe, and RNN to the frontal lobe.

An Artificial Neural Network (ANN) stores data for a long time, and so does the temporal lobe, so ANNs are associated with the temporal lobe. Convolutional Neural Networks (CNNs) are used for image classification and computer vision tasks; in the brain, that work is done by the occipital lobe, so CNNs can be linked with the occipital lobe. RNNs, finally, are mainly used for time-series analysis and for tasks that involve sequences of data. In such tasks the network learns from what it has just observed, i.e. short-term memory, which is why it resembles the frontal lobe of the brain.

Importing Data

In this article, we will work on text classification using the IMDB movie review dataset. The dataset contains 50,000 reviews of different movies and is a benchmark dataset used in text classification to train and test machine learning and deep learning models. We will build a model that predicts whether a movie review is positive or negative, which makes this a binary classification problem. The dataset can be imported directly with TensorFlow or downloaded from Kaggle.

from tensorflow.keras.datasets import imdb
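
As a minimal sketch, the dataset can be loaded directly from Keras; the reviews arrive already encoded as sequences of integer word indices. The num_words=10000 vocabulary cap below is an assumption for illustration, not a fixed requirement.

# Load the IMDB reviews, keeping only the 10,000 most frequent words (assumed cap).
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=10000)

print(len(x_train), len(x_test))  # 25000 train and 25000 test reviews
print(y_train[:5])                # labels: 1 = positive, 0 = negative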

Preprocessing the Data

The reviews of a movie are not uniform: one review may consist of 4–5 words, another of 17–18. But when we feed the data to our neural network, the inputs need to have a uniform shape, so we pad the data. There are two steps to follow before passing the data into the neural network: embedding and padding. In the embedding step, words are represented as vectors; a word's position in the vector space is learned from the text, largely from the words that surround it. The embedding layer in Keras needs input of uniform length, so we pad the data to a fixed length.

sentence=['Fast cars are good',
          'Football is a famous sport',
          'Be happy Be positive']

After padding:
[[364  50  95 313   0   0   0   0   0   0]  
 [527 723 350 333 722   0   0   0   0   0]  
 [238 216 238 775   0   0   0   0   0   0]]

In the above snippet, each sentence is converted to a sequence of word indices and padded with trailing zeros to a fixed input length of 10. You can find the complete code for word embedding and padding on my GitHub profile.
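
For reference, here is a minimal sketch of how such padded output can be produced with Keras' Tokenizer and pad_sequences. The exact integer IDs depend on the vocabulary the tokenizer is fitted on, so they will differ from the numbers shown above.

from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

sentence = ['Fast cars are good',
            'Football is a famous sport',
            'Be happy Be positive']

# Learn an integer index for each word in the text.
tokenizer = Tokenizer(num_words=1000, oov_token='<OOV>')
tokenizer.fit_on_texts(sentence)

# Convert each sentence to a sequence of word indices.
sequences = tokenizer.texts_to_sequences(sentence)

# Pad every sequence with trailing zeros to a fixed length of 10.
padded = pad_sequences(sequences, maxlen=10, padding='post')
print(padded)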

