A very brief introduction to graph convolutional networks (GCNs), a versatile type of neural network.

Origin

GCNs were first introduced in [Spectral Networks and Deep Locally Connected Networks on Graphs](https://arxiv.org/pdf/1312.6203.pdf) (Bruna et al., 2014) as a method for applying neural networks to graph-structured data. Existing types of neural networks (e.g. fully connected, CNN, RNN) produce excellent results for regularly structured data such as images, text, and categorical data, but none of them is flexible enough to handle relational or irregularly structured data represented as a graph. Relational data is extremely common in the real world, and GCNs were created to fill that gap.

Applications for GCNs

  • Knowledge graph embedding
  • Community detection in social networks
  • Drug discovery
  • Material classification
  • Graph clustering
  • Collaborative filtering
  • Word embeddings

What Makes Them Convolutional?

GCNs use spectral graph convolutions, which are a generalization of image convolutions that operate on graphs with arbitrary numbers of vertices and edges. When viewed as a graph, image convolution looks something like this:

[Figure: an image convolution viewed as a graph (source: author)]

Each pixel is updated by aggregating the values of its neighboring pixels, so an image can be converted to a graph by adding an edge between every pair of adjacent pixels. Spectral convolutions work similarly, aggregating the values of neighboring vertices in the graph, as sketched below.
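
To make this concrete, here is a minimal NumPy sketch of one graph-convolution layer using the neighborhood-aggregation rule popularized by Kipf & Welling (2017): node features are averaged over neighbors via a normalized adjacency matrix and then linearly transformed. The toy path graph, one-hot features, and random weight matrix below are illustrative assumptions, not taken from the article.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: aggregate neighbor features, then transform.

    A : (n, n) adjacency matrix of the graph
    H : (n, d_in) node feature matrix
    W : (d_in, d_out) weight matrix (learned in a real network)
    """
    A_hat = A + np.eye(A.shape[0])             # add self-loops so each node keeps its own value
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric degree normalization
    return np.maximum(A_norm @ H @ W, 0)       # aggregate neighbors, project, ReLU

# Toy example: 4 nodes connected in a path 0-1-2-3 (an illustrative assumption)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.eye(4)                                  # one-hot node features
W = np.random.randn(4, 2)                      # random weights, for illustration only
print(gcn_layer(A, H, W).shape)                # (4, 2): a 2-dimensional embedding per node
```

Stacking several such layers lets information propagate across multi-hop neighborhoods, which is what gives GCNs their convolution-like receptive field on graphs.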


