Agnes Sauer

Understanding Convolutions and Pooling in Neural Networks

Why do convolutional networks work? What's the _magic_ behind them that allows us to succeed in a broad range of applications such as image classification, object detection, face recognition, and others?

It turns out that all of this is possible thanks to two astonishingly simple, yet powerful concepts: **convolution** and **pooling**.

In this post, I will try to explain them in a really **intuitive** and **visual** way, leaving the math behind. I have found a lot of documentation on the internet with a strong mathematical foundation, but I think the core ideas are sometimes watered down by all the formulas and calculations.

All the examples that we will cover in this post can be found in this notebook.

Convolutions: it’s all about filters

The first key concept that we need to understand is the **convolution** operation. This is simple: we will apply a **filter** to an image to get a resulting image.

As an example, let’s suppose we have an input image and a filter:

[Image: an example input image and filter. Source: own elaboration]

Remember that any image can be represented as a matrix of pixels, with each pixel representing a color intensity:

[Image: the input image as a matrix of pixel intensities]

What the convolution does is the following:

[Image: applying the filter to the input image to produce a new image]

That is, we are building another image by applying the filter to our input image. Note that depending on the filter we apply (its shape and values), we will get a different image.

But why would we want to do this? It seems that it doesn't make much sense!

The truth is that it does make a lot of sense. To understand why, let's take a simple image of a black and white grid.

[Image: a black and white grid]

(Please note that this is no longer a matrix. It is in fact an image)

And apply some filters to it:

[Image: the grid after applying a vertical-edge filter]

We can see that with that particular filter, the output image only contains the vertical edges. Let’s try some more:

[Image: the grid after applying a horizontal-edge filter]

In this case, we only get the horizontal lines in the output image. Another different filter allows us to emphasize the edges, regardless of the orientation:

[Image: the grid after applying an edge-emphasizing filter]

And, obviously, we can apply a filter that leaves the input image the same:

[Image: the grid after applying an identity filter]

This is the idea behind filters, and now you may better understand why they are called that: they allow us to **retain** some kind of information from a picture and **ignore** the rest. And this is possible because of how the convolution operation works.
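To make the idea concrete, here is a minimal numpy/scipy sketch of applying such filters to a grid image. The grid and the particular filter values below are illustrative choices (a classic Prewitt-style pair), not the exact ones from the figures above:

import numpy as np
from scipy.signal import convolve2d

# A black and white grid: alternating 0/255 blocks, 64 x 64 pixels
grid = np.kron([[0, 255] * 4, [255, 0] * 4] * 4, np.ones((8, 8)))

# A filter that responds to left-right intensity changes (vertical edges)
vertical = np.array([[-1, 0, 1],
                     [-1, 0, 1],
                     [-1, 0, 1]])
horizontal = vertical.T  # its transpose responds to horizontal edges

# An identity filter: a single 1 in the centre leaves the image unchanged
identity = np.zeros((3, 3))
identity[1, 1] = 1

edges_v = convolve2d(grid, vertical, mode='same')    # keeps only vertical edges
edges_h = convolve2d(grid, horizontal, mode='same')  # keeps only horizontal edges
unchanged = convolve2d(grid, identity, mode='same')  # identical to the input

Plotting edges_v and edges_h (e.g. with matplotlib's imshow) reproduces the retain/ignore behaviour described above: each output is non-zero only where the corresponding kind of edge appears.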

#computer-vision #deep-learning

Nat Kutch

Are Convolutional Neural Networks much different?

In this publication I will write about the mathematics behind the simplest 2D Convolutional Neural Network (CNN), and I'll try to make a connection to the Multi-Layer Perceptron (MLP) that I wrote about in my last publication. I'll present a model architecture, forward and back propagation calculations, and finally show some other techniques that are widely used in modern CNNs.

The CNN was conceived as a model that could handle images in a more efficient way than the MLP. Through the use of kernel masks containing weights, the network finds characteristic patterns in the input images like edges, corners, circles, etc. If the network is complex enough, it may even find more detailed structures like whole faces, cars, animals and so on.

The kernel masks extract features from the inputs by sliding over them. At each position, a mask multiplies its weights element-wise with the input values it covers and sums the result. For better understanding, the next image represents a kernel mask with shape 2 x 2 sliding over an input image of 3 x 3 pixels, our input:

[Image: a 2 x 2 kernel mask sliding over a 3 x 3 input image]

As you can see, the mask goes from left to right and top to bottom.

I use a 3 x 3 image for simplicity, despite images generally being of much greater size.

Kernel masks can be whatever size we want. Generally, 3 x 3 masks provide good results, but the size can be tuned like any other hyperparameter.

Let's say we choose to have a filter of three masks. The results are three net input matrices that are then activated by an activation function; we'll use the sigmoid (not represented below).
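As a minimal numpy sketch of this sliding operation (the mask weights below are arbitrary placeholders, not trained values):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def slide(image, mask):
    # Move the mask left to right, top to bottom; at each position,
    # multiply element-wise with the covered values and sum the result
    kh, kw = mask.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * mask)
    return out

image = np.array([[1., 0., 1.],
                  [0., 1., 0.],
                  [1., 0., 1.]])  # our 3 x 3 input

# A filter of three 2 x 2 masks
masks = [np.array([[1., 0.], [0., 1.]]),
         np.array([[0., 1.], [1., 0.]]),
         np.array([[1., -1.], [-1., 1.]])]

# Three 2 x 2 net input matrices, then the sigmoid activation
activations = [sigmoid(slide(image, m)) for m in masks]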

#pooling #batch-normalization #mathematics #convolutional-network #neural-networks

Convolutional Neural Networks: An Intuitive Approach

A simple yet comprehensive approach to the concepts


Convolutional Neural Networks

Artificial intelligence has seen tremendous growth over the last few years, and the gap between machines and humans is slowly but steadily decreasing. One important difference between humans and machines is (or rather was!) the human perception of images and sound. How do we train a machine to recognize images and sound as we do?

At this point we can ask ourselves a few questions!!!

How would the machines perceive images and sound?

How would the machines be able to differentiate between different images for example say between a cat and a dog?

Can machines identify and differentiate between different human beings, for example differentiate a male from a female, or identify Leonardo DiCaprio or Brad Pitt just by feeding their images to it?

Let’s attempt to find out!!!

The Colour coding system:

Let's get a basic idea of what the colour coding system for machines is.

RGB decimal system: It is denoted as rgb(255, 0, 0). It consists of three channels representing RED, GREEN and BLUE respectively. RGB defines how much red, green or blue you'd like to have displayed, as a decimal value somewhere between 0, which is no representation of the colour, and 255, the highest possible concentration of the colour. So, in the example rgb(255, 0, 0), we'd get a very bright red. If we wanted all green, our RGB would be rgb(0, 255, 0). For a simple blue, it would be rgb(0, 0, 255). As all colours can be obtained as a combination of red, green and blue, we can obtain the coding for any colour we want.

Grayscale: Grayscale consists of just one channel (0 to 255), with 0 representing black and 255 representing white. The values in between represent the different shades of gray.

Computers ‘see’ in a different way than we do. Their world consists of only numbers.

Every image can be represented as a 2-dimensional array of numbers, known as pixels.

But the fact that they perceive images in a different way, doesn’t mean we can’t train them to recognize patterns, like we do. We just have to think of what an image is in a different way.

[Image: an image represented as a grid of pixel values]
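As a small illustration of these representations in code (the pixel values below are arbitrary examples):

import numpy as np

# A single RGB pixel: bright red, as in rgb(255, 0, 0)
red_pixel = np.array([255, 0, 0], dtype=np.uint8)

# A tiny 3 x 3 grayscale image: 0 is black, 255 is white
gray_image = np.array([[  0, 128, 255],
                       [128, 255, 128],
                       [255, 128,   0]], dtype=np.uint8)

# A colour image adds a third axis: height x width x 3 channels
colour_image = np.zeros((3, 3, 3), dtype=np.uint8)
colour_image[:, :, 0] = 255  # fill the red channel -> an all-red image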

Now that we have a basic idea of how images can be represented, let us try to understand the architecture of a CNN.

CNN architecture

Convolutional Neural Networks have a different architecture than regular Neural Networks. Regular Neural Networks transform an input by putting it through a series of hidden layers. Every layer is made up of a set of neurons, and each layer is fully connected to all neurons in the layer before. Finally, there is a last fully-connected layer, the output layer, that represents the predictions.

Convolutional Neural Networks are a bit different. First of all, the layers are organised in 3 dimensions: width, height and depth. Further, the neurons in one layer do not connect to all the neurons in the next layer, but only to a small region of it. Lastly, the final output will be reduced to a single vector of probability scores, organized along the depth dimension.

[Image: a typical CNN architecture]

As can be seen above, CNNs have two components:

  • The Hidden layers/Feature extraction part

In this part, the network will perform a series of **convolutions** and **pooling** operations during which the features are detected. If you had a picture of a tiger, this is the part where the network would recognize the stripes, four legs, two eyes, one nose, the distinctive orange colour, etc.

  • The Classification part

Here, the fully connected layers will serve as a classifier on top of these extracted features. They will assign a **probability** for the object in the image being what the algorithm predicts it is.
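A minimal Keras sketch of this two-part layout is shown below; the input size (64 x 64 RGB images), the filter counts and the 10 output classes are illustrative assumptions, not values from the article:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    # Feature extraction: convolutions and pooling detect local patterns
    Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    # Classification: fully connected layers on top of the extracted features
    Flatten(),
    Dense(64, activation='relu'),
    Dense(10, activation='softmax'),  # one probability per class
])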

Before we proceed any further, we need to understand what "convolution" is; we will come back to the architecture later:

What do we mean by the “convolution” in Convolutional Neural Networks?

Let us decode!!!

#convolutional-neural-net #convolution #computer-vision #neural-networks

A Comparative Analysis of Recurrent Neural Networks

Recurrent neural networks, also known as RNNs, are a class of neural networks that allow previous outputs to be used as inputs while having hidden states. RNN models are mostly used in the fields of natural language processing and speech recognition.

The vanishing and exploding gradient phenomena are often encountered in the context of RNNs. They happen because it is difficult to capture long-term dependencies: the multiplicative gradient can decrease or increase exponentially with respect to the number of layers.

Gated Recurrent Unit (GRU) and Long Short-Term Memory units (LSTM) deal with the vanishing gradient problem encountered by traditional RNNs, with LSTM being a generalization of GRU.

A 1D convolution layer creates a convolution kernel that is convolved with the layer input over a single spatial (or temporal) dimension to produce a tensor of outputs. It is very effective for deriving features from a fixed-length segment of the overall dataset. A 1D CNN works well for natural language processing (NLP).
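As a rough illustration, a tiny Keras model of this kind could look as follows; the vocabulary size, embedding width and kernel size are arbitrary choices for the sketch:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Conv1D, GlobalMaxPooling1D, Dense

model = Sequential([
    Embedding(10000, 16, input_length=120),  # integer word ids -> dense vectors
    Conv1D(128, 5, activation='relu'),       # each kernel spans 5 consecutive words
    GlobalMaxPooling1D(),                    # keep the strongest response per filter
    Dense(1, activation='sigmoid'),          # binary sentiment output
])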

DATASET: IMDb Movie Review

TensorFlow Datasets is a collection of datasets ready to use with TensorFlow or other Python ML frameworks, such as Jax. All datasets are exposed as [_tf.data.Datasets_](https://www.tensorflow.org/api_docs/python/tf/data/Dataset), enabling easy-to-use and high-performance input pipelines.

“imdb_reviews”

This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. It provides a set of 25,000 highly polar movie reviews for training, and 25,000 for testing.

Import Libraries

# Standard data-science imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

Load the Dataset

import tensorflow as tf
import tensorflow_datasets

# Load the IMDb reviews dataset along with its metadata
imdb, info = tensorflow_datasets.load("imdb_reviews", with_info=True, as_supervised=True)
imdb

[Image: notebook output for `imdb`]

info

[Image: notebook output for `info`]

Training and Testing Data

train_data, test_data = imdb['train'], imdb['test']

# Materialize the tf.data pipelines into Python lists of sentences and labels
training_sentences = []
training_label = []
testing_sentences = []
testing_label = []
for s, l in train_data:
  training_sentences.append(str(s.numpy()))
  training_label.append(l.numpy())
for s, l in test_data:
  testing_sentences.append(str(s.numpy()))
  testing_label.append(l.numpy())

# Keras expects the labels as numpy arrays
training_label_final = np.array(training_label)
testing_label_final = np.array(testing_label)

Tokenization and Padding

# Hyperparameters for the text pipeline and the embedding layer
vocab_size = 10000
embedding_dim = 16
max_length = 120
trunc_type = 'post'
oov_tok = '<oov>'

from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Fit the tokenizer on the training sentences only
tokenizer = Tokenizer(num_words=vocab_size, oov_token=oov_tok)
tokenizer.fit_on_texts(training_sentences)
word_index = tokenizer.word_index

# Convert the sentences to integer sequences and pad them to a fixed length
sequences = tokenizer.texts_to_sequences(training_sentences)
padded = pad_sequences(sequences, maxlen=max_length, truncating=trunc_type)
testing_sequences = tokenizer.texts_to_sequences(testing_sentences)
testing_padded = pad_sequences(testing_sequences, maxlen=max_length)

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Embedding

Multi-layer Bidirectional LSTM
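A minimal sketch of a multi-layer bidirectional LSTM for this setup, reusing the parameters defined above; the layer widths (64 and 32 units) and the 0.5 dropout rate are illustrative choices:

from tensorflow.keras.layers import Bidirectional, LSTM

model = Sequential([
    Embedding(vocab_size, embedding_dim, input_length=max_length),
    Bidirectional(LSTM(64, return_sequences=True)),  # pass full sequences to the next layer
    Bidirectional(LSTM(32)),                         # second layer returns only the final state
    Dense(64, activation='relu'),
    Dropout(0.5),
    Dense(1, activation='sigmoid'),                  # positive/negative review
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()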

#imdb #convolutional-network #long-short-term-memory #recurrent-neural-network #gated-recurrent-unit #neural-networks

How Graph Convolutional Networks (GCN) Work

In this post, we're gonna take a close look at one of the well-known graph neural networks: the GCN. First, we'll get the intuition for how it works, then we'll go deeper into the maths behind it.

Why Graphs?

Many problems are graphs by nature. In our world, a lot of data comes as graphs, such as molecules, social networks, and paper citation networks.

[Image: examples of graphs. (Picture from [1])]

Tasks on Graphs

  • Node classification: Predict the type of a given node
  • Link prediction: Predict whether two nodes are linked
  • Community detection: Identify densely linked clusters of nodes
  • Network similarity: How similar are two (sub)networks?

Machine Learning Lifecycle

In the graph, we have node features (the data of nodes) and the structure of the graph (how nodes are connected).

For the former, we can easily get the data from each node. But when it comes to the structure, it is not trivial to extract useful information from it. For example, if 2 nodes are close to one another, should we treat them differently from other pairs? What about high- and low-degree nodes? In fact, each specific task can consume a lot of time and effort just on feature engineering, i.e., distilling the structure into our features.

[Image: feature engineering on graphs. (Picture from [1])]

It would be much better to somehow feed both the node features and the structure as the input, and let the machine figure out what information is useful by itself.

That’s why we need Graph Representation Learning.

[Image: we want the network to learn the "feature engineering" by itself. (Picture from [1])]

Graph Convolutional Networks (GCNs)

Paper: Semi-Supervised Classification with Graph Convolutional Networks (2017) [3]

GCN is a type of convolutional neural network that can work directly on graphs and take advantage of their structural information.

It solves the problem of classifying nodes (such as documents) in a graph (such as a citation network) where labels are only available for a small subset of nodes (semi-supervised learning).

[Image: example of semi-supervised learning on graphs. Some nodes don't have labels (unknown nodes).]
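Concretely, a single GCN layer implements the propagation rule from [3]: H' = σ(D̂^(-1/2) Â D̂^(-1/2) H W), where Â is the adjacency matrix with added self-loops and D̂ is its degree matrix. A minimal numpy sketch of one such layer (the toy graph and the random weights are for illustration only):

import numpy as np

def gcn_layer(A, H, W):
    # One propagation step: normalize the adjacency, aggregate
    # neighbour features, apply the weights and a ReLU
    A_hat = A + np.eye(A.shape[0])                        # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt              # symmetric normalization
    return np.maximum(0, A_norm @ H @ W)                  # ReLU activation

# Toy graph: 4 nodes with 2-dimensional features
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 1.],
              [0., 1., 0., 1.],
              [0., 1., 1., 0.]])
H = np.random.randn(4, 2)  # node features
W = np.random.randn(2, 3)  # weights (trainable in a real model)

H_next = gcn_layer(A, H, W)  # a new 3-dimensional embedding per node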

#graph-neural-networks #graph-convolution-network #deep-learning #neural-networks

Mckenzie Osiki

No Code introduction to Neural Networks

The simple architecture explained

Neural networks have been around for a long time, having been developed in the 1960s as a way to simulate neural activity for the development of artificial intelligence systems. Since then, however, they have developed into a useful analytical tool, often used in place of, or in conjunction with, standard statistical models such as regression or classification, as they can be used to predict or model a specific output. The main difference, and advantage, in this regard is that neural networks make no initial assumptions about the form of the relationship or distribution that underlies the data. This means they can be more flexible and capture non-standard and non-linear relationships between input and output variables, making them incredibly valuable in today's data-rich environment.

In this sense, their use has taken off over the past decade or so, with the fall in costs and increase in general computing power, the rise of large datasets allowing these models to be trained, and the development of frameworks such as TensorFlow and Keras. These have allowed people with sufficient hardware (in some cases this is no longer even a requirement, thanks to cloud computing), the correct data and an understanding of a given coding language to implement them. This article therefore seeks to provide a no-code introduction to their architecture and how they work so that their implementation and benefits can be better understood.

Firstly, the way these models work is that there is an input layer, one or more hidden layers and an output layer, each of which is connected by layers of synaptic weights¹. The input layer (X) is used to take in scaled values of the input, usually within a standardised range of 0–1. The hidden layers (Z) are then used to define the relationship between the input and output using weights and activation functions. The output layer (Y) then transforms the results from the hidden layers into the predicted values, often also scaled to be within 0–1. The synaptic weights (W) connecting these layers are used in model training to determine the weights assigned to each input and prediction in order to get the best model fit. Visually, this is represented as:

#machine-learning #python #neural-networks #tensorflow #neural-network-algorithm #no-code-introduction-to-neural-networks