
I was scrolling through notebooks in a Kaggle competition and found that almost everyone was using EfficientNet as their backbone, which I had not heard about until then. It was introduced by Google AI in this paper, where they propose a method that, as the name suggests, makes models more efficient while also improving state-of-the-art results. Conventionally, models are scaled up by making them wider, deeper, or by increasing the input resolution. Increasing any one of these helps at first, but the gains quickly saturate, and the resulting model just has more parameters without being more accurate, i.e. it is not efficient. In EfficientNet these three dimensions are scaled in a more principled way: all of them are increased gradually, together.


Model scaling (figure from the EfficientNet paper). (a) is a baseline network example; (b)-(d) are conventional scaling methods that increase only one dimension of network width, depth, or resolution; (e) is the paper's compound scaling method that uniformly scales all three dimensions with a fixed ratio.
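To make the compound scaling idea concrete, here is a tiny Python sketch of the rule from the paper: depth, width, and resolution are all tied to a single coefficient phi through the constants alpha = 1.2, beta = 1.1, gamma = 1.15 that Google found for the B0 baseline. The actual B1-B7 models also round these multipliers to valid layer counts and channel sizes, so treat this as illustrative:

# Compound scaling as described in the EfficientNet paper:
#   depth multiplier      = alpha ** phi
#   width multiplier      = beta ** phi
#   resolution multiplier = gamma ** phi
# with alpha * beta**2 * gamma**2 ~= 2, so FLOPS grow roughly as 2**phi.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi):
    """Return (depth, width, resolution) multipliers for scaling coefficient phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

for phi in range(4):
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")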

Didn't understand what's going on? Don't worry, you will once you see the architecture. But first, let's see the results they got with this.


Model size vs. ImageNet accuracy

With considerably fewer parameters, this family of models is not only more efficient but also delivers better results. So now we have seen why these might become the standard pre-trained models, but something's missing. I remember an article by Raimi Karim where he illustrated the architectures of common pre-trained models, which helped me a lot in understanding them and in creating similar architectures.

Illustrated: 10 CNN Architectures

A compiled visualization of the common convolutional neural networks

towardsdatascience.com

As I could not find a similar visualization for EfficientNet anywhere on the net, I decided to understand the architecture and create one for all of you.

Common Things In All

The first thing in any network is its stem, after which all the experimenting with the architecture begins. The stem and the final layers are common to all eight models.

The stem, common to all EfficientNet models
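If you want to see the stem in code, here is a minimal Keras sketch matching the layers you will find in the model summary. This is an assumption-laden illustration: 32 stem filters is the B0 value (larger variants scale this up), and the real Keras model bakes ImageNet statistics into its Normalization layer, which this sketch does not.

import tensorflow as tf
from tensorflow.keras import layers

# A rough sketch of the EfficientNet stem: rescale and normalize the input,
# then a strided 3x3 conv followed by batch norm and the swish activation.
inputs = layers.Input(shape=(224, 224, 3))
x = layers.Rescaling(1.0 / 255)(inputs)        # bring pixels into [0, 1]
x = layers.Normalization()(x)                  # real model uses ImageNet stats here
x = layers.ZeroPadding2D(padding=((0, 1), (0, 1)))(x)
x = layers.Conv2D(32, 3, strides=2, padding="valid", use_bias=False)(x)  # 32 filters = B0 value
x = layers.BatchNormalization()(x)
x = layers.Activation("swish")(x)
stem = tf.keras.Model(inputs, x)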

After the stem, each model contains 7 blocks. These blocks in turn contain a varying number of sub-blocks, and that number increases as we move from EfficientNet-B0 to EfficientNet-B7. To have a look at the layers of the models in Colab, run this code:

# EfficientNet was only available in the nightly TensorFlow build at the time of writing
!pip install tf-nightly-gpu

import tensorflow as tf

IMG_SHAPE = (224, 224, 3)
model0 = tf.keras.applications.EfficientNetB0(input_shape=IMG_SHAPE, include_top=False, weights="imagenet")
tf.keras.utils.plot_model(model0)  # to draw and visualize the architecture
model0.summary()                   # to see the list of layers and parameters

If you count the total number of layers, EfficientNet-B0 has 237 while EfficientNet-B7 comes out to 813! But don't worry: all of these layers can be made from the 5 modules shown below plus the stem above.
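The building block underlying these modules is the MBConv (mobile inverted bottleneck) block with squeeze-and-excitation, which EfficientNet inherits from MobileNetV2 and MnasNet. Below is a minimal Keras sketch of one such sub-block; mbconv_block and its default hyperparameters are my own illustrative choices, not the exact configuration of any specific block in the family.

import tensorflow as tf
from tensorflow.keras import layers

def mbconv_block(x, filters_out, expand_ratio=6, kernel_size=3, strides=1, se_ratio=0.25):
    """Illustrative MBConv sub-block: expand -> depthwise conv -> SE -> project."""
    filters_in = x.shape[-1]
    inputs = x

    # 1. Expansion: a 1x1 conv widens the channels by expand_ratio.
    x = layers.Conv2D(filters_in * expand_ratio, 1, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("swish")(x)

    # 2. Depthwise convolution on the expanded representation.
    x = layers.DepthwiseConv2D(kernel_size, strides=strides, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("swish")(x)

    # 3. Squeeze-and-excitation: reweight channels using global context.
    se = layers.GlobalAveragePooling2D()(x)
    se = layers.Reshape((1, 1, filters_in * expand_ratio))(se)
    se = layers.Conv2D(max(1, int(filters_in * se_ratio)), 1, activation="swish")(se)
    se = layers.Conv2D(filters_in * expand_ratio, 1, activation="sigmoid")(se)
    x = layers.Multiply()([x, se])

    # 4. Projection: a 1x1 conv back down to filters_out, with no activation.
    x = layers.Conv2D(filters_out, 1, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)

    # Residual connection only when the input and output shapes line up.
    if strides == 1 and filters_in == filters_out:
        x = layers.Add()([inputs, x])
    return x

# Hypothetical usage on a feature map:
inp = layers.Input(shape=(56, 56, 24))
out = mbconv_block(inp, filters_out=24)  # shape preserved, so the residual is added

Stacking variants of this block, with different kernel sizes, expansion ratios, and repeat counts, is essentially what the 7 blocks and their sub-blocks amount to.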

