Exploring the Latent Space of a ConvNet Image Classifier. Linearity in the latent space of hidden layers!
A fascinating property of neural networks, especially convolutional neural networks, is that each of the fully connected layers at the end of the network can be treated as a vector space. All the useful information the ConvNet has extracted from the image is stored in *compact form* in the final layer as a feature vector. When this final layer is treated as a vector space and subjected to vector algebra, it starts to yield some very interesting and useful properties.
I trained a ResNet50 classifier on the iMaterialist Challenge (Fashion) dataset from Kaggle. The task is to classify every apparel image into appropriate attributes like pattern, neckline, sleeve length, and style. I used the pre-trained ResNet model and applied transfer learning on this dataset, with an added output layer for each of the labels. Once the model was trained, I wanted to look at the vector space of the last layer and search for patterns.
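One way to set up the per-label output layers described above is with a small module of independent linear heads on top of the backbone's feature vector. This is only a minimal sketch: the attribute names and class counts below are illustrative placeholders, not the actual iMaterialist label schema.

```python
import torch
import torch.nn as nn

class MultiAttributeHead(nn.Module):
    """One linear classifier per attribute, sharing the backbone features."""

    def __init__(self, feature_dim=1000, attribute_classes=None):
        super().__init__()
        # Hypothetical attributes and class counts, for illustration only.
        attribute_classes = attribute_classes or {
            "pattern": 10, "neckline": 8, "sleeve_length": 5,
        }
        self.heads = nn.ModuleDict({
            name: nn.Linear(feature_dim, n_classes)
            for name, n_classes in attribute_classes.items()
        })

    def forward(self, features):
        # features: (batch, feature_dim) vectors from the pre-trained backbone
        return {name: head(features) for name, head in self.heads.items()}

# Usage: features would come from ResNet50's final fully connected layer.
model = MultiAttributeHead()
feats = torch.randn(4, 1000)   # a batch of 4 stand-in feature vectors
out = model(feats)
```

During transfer learning, the backbone can be kept frozen while only these heads are trained, which is a common way to adapt a pre-trained model to new labels.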
The last fully connected layer of ResNet50 has length 1000. After transfer learning on the fashion dataset, I apply PCA to this layer and reduce the dimension to 50, which retains more than 98% of the variance.
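The dimensionality reduction step can be sketched with scikit-learn's PCA. The random matrix below is only a stand-in for the real 1000-dimensional feature vectors; on random data the retained variance will be far below the >98% reported for the actual features.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for the extracted feature vectors: 500 images x 1000 dims.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 1000))

# Reduce 1000 dims -> 50 dims and check how much variance survives.
pca = PCA(n_components=50)
reduced = pca.fit_transform(features)
retained = pca.explained_variance_ratio_.sum()
```

The `retained` value is what justifies the choice of 50 components: if it were much lower on the real features, more components would be needed before doing vector algebra in the reduced space.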
The concept of latent space directions is popular with GANs and Variational Autoencoders, where the input is a vector and the output is an image. It has been observed that translating the input vector in a certain direction changes a certain attribute of the output image. The example below is from StyleGAN, trained by NVIDIA; you can have a look at this website, where a fake face is generated from a random vector every time you refresh the page.
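A common heuristic for finding such a direction, sketched below with synthetic data, is to take the difference between the mean vectors of two groups: images with an attribute and images without it. The attribute split here is hypothetical; with real data, the groups would come from the classifier's labels.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in 50-d latent vectors: one group with the attribute, one without.
with_attr = rng.normal(loc=1.0, size=(100, 50))
without_attr = rng.normal(loc=0.0, size=(100, 50))

# The direction is the (normalized) difference of the group means.
direction = with_attr.mean(axis=0) - without_attr.mean(axis=0)
direction /= np.linalg.norm(direction)

# Translating a vector along the direction increases its projection onto it,
# which is the algebraic counterpart of "adding more of the attribute".
v = without_attr[0]
v_shifted = v + 2.0 * direction
```

The same mean-difference trick is often used with GAN latent codes; in a classifier's feature space it instead moves a point toward the region occupied by images that have the attribute.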