
This blog post will summarise the paper “Simplifying Graph Convolutional Networks [1]”, which tries to reverse engineer Graph Convolutional Networks. So, let us evolve Graph Convolutional Networks backward.

Graphs are pervasive models of structure. They are everywhere, from social networks to chemical molecules, and many kinds of data can be represented as graphs. However, applying machine learning to these structures is not something that came to us directly. Most ideas in machine learning started as a small, simple model that grew more complex over time as the need arose: the Perceptron evolved into the Multi-Layer Perceptron, and linear image filters evolved into non-linear CNNs. Graph Convolutional Networks (GCNs), however, were derived directly from existing ideas and had a more complex start. Thus, to demystify GCNs, the paper reverse engineers them and proposes a simplified linear model called **Simple Graph Convolution (SGC)**. SGC gives performance comparable to GCNs and is faster even than FastGCN.

Inputs to the Graph convolutional network are:

1. Node features (with labels known for a subset of nodes during training)

2. Adjacency matrix

**Adjacency matrix:** The adjacency matrix *A* is an *n x n* matrix, where *n* is the number of nodes, with a(i,j) = 1 if node i is connected to node j and a(i,j) = 0 otherwise. If the edge is weighted, then a(i,j) = edge weight.

**Diagonal matrix:** The diagonal (degree) matrix *D* is an *n x n* matrix with d(i,i) = sum of the *i*th row of the adjacency matrix.

**Input features:** *X* is an input feature matrix of size *n x d*, where *d* is the number of features per node.
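To make these inputs concrete, here is a minimal NumPy sketch (not from the paper; the 3-node graph and its features are made up purely for illustration) that builds *A*, *D*, and *X* for a tiny path graph:

```python
import numpy as np

# Adjacency matrix A (n x n) for a 3-node path graph: 0 -- 1 -- 2.
A = np.array([
    [0, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
], dtype=float)

# Diagonal (degree) matrix D: d(i,i) is the sum of row i of A.
D = np.diag(A.sum(axis=1))

# Input feature matrix X (n x d): each node carries d = 2 features here.
X = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
    [1.0, 1.0],
])

print(np.diag(D))  # node degrees: [1. 2. 1.]
```

Note that *A* is symmetric for an undirected graph, and *D* collects each node's degree on the diagonal.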

Let us see how GCNs actually work before reverse engineering it.
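As a reference point for the reverse engineering, a single GCN layer propagates features over the (self-loop-augmented, symmetrically normalized) adjacency matrix, applies a learned linear map, and then a nonlinearity. A minimal NumPy sketch follows; the random weight matrix `W` and the tiny graph are assumptions for illustration only:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W),
    where D is the degree matrix of A + I (self-loops added)."""
    A_hat = A + np.eye(A.shape[0])                          # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    S = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]   # symmetric normalization
    return np.maximum(S @ H @ W, 0.0)                       # ReLU nonlinearity

# Tiny example: 3-node path graph, 2 input features, 2 hidden units.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
W = np.random.default_rng(0).normal(size=(2, 2))            # placeholder weights

H1 = gcn_layer(A, X, W)
print(H1.shape)  # (3, 2)
```

Stacking such layers mixes each node's features with those of increasingly distant neighbours; SGC's key observation is what survives when the nonlinearities between layers are removed.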
