Transfer learning is one of the state-of-the-art techniques in machine learning and is widely used in image classification. In this article, I will discuss transfer learning, the VGG model, and feature extraction. In the last section, I will demonstrate an interesting example of transfer learning in which the technique performs unexpectedly poorly at classifying the MNIST digit dataset.

**VGG** is a convolutional neural network with a specific architecture proposed in the paper "Very Deep Convolutional Networks for Large-Scale Image Recognition" by the Visual Geometry Group at the University of Oxford. The group entered the annual ImageNet Large Scale Visual Recognition Challenge (ILSVRC), submitting the now-famous VGG model to compete in the object localization task (detecting objects within an image across 200 classes) and the image classification task (1,000-class classification). The ImageNet dataset, the major computer vision benchmark behind the competition, contains more than 14 million hand-annotated images; the ILSVRC classification task uses a subset spanning 1,000 classes. The VGG model achieved 92.7% top-5 test accuracy and took first place in the localization task and second place in the classification task at ILSVRC 2014.
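As a concrete starting point, the sketch below shows how a pre-trained VGG16 can be loaded in Keras and used as a fixed feature extractor, which is the setup examined later with MNIST. This is a minimal illustration under my own assumptions (a 48x48 input size and a 1,000-image subset), not the article's exact code.

```python
# Minimal sketch: VGG16 as a fixed feature extractor for MNIST with Keras.
# MNIST digits are 28x28 grayscale, so they must be converted to 3-channel
# images and resized before VGG16 (which expects RGB inputs of at least 32x32).
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

# Load MNIST and adapt it to VGG16's expected input format.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[:1000]                              # small subset for illustration
x_train = np.stack([x_train] * 3, axis=-1)            # grayscale -> 3 channels
x_train = tf.image.resize(x_train, (48, 48)).numpy()  # 28x28 -> 48x48
x_train = preprocess_input(x_train.astype("float32"))

# Pre-trained convolutional base, with the 1000-class ImageNet head removed.
base = VGG16(weights="imagenet", include_top=False, input_shape=(48, 48, 3))
base.trainable = False  # freeze the ImageNet weights

# Extract fixed features for the MNIST images.
features = base.predict(x_train, batch_size=64)
print(features.shape)  # (1000, 1, 1, 512) for 48x48 inputs
```

A small dense classifier would then be trained on these 512-dimensional features; this "frozen base plus new head" setup is the feature-extraction flavor of transfer learning that the rest of the article examines.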
