This short post highlights the structural differences between popular normalization techniques used when training deep neural networks.

I am hoping that a quick two-minute glance at this will refresh my memory on the concept sometime in the not-so-distant future.


Let us first establish some notation that will make the rest of the content easy to follow. We assume that the activations at any layer have dimensions N×C×H×W (and, of course, live in the real number space), where N = batch size, C = number of channels (filters) in that layer, H = height of each activation map, and W = width of each activation map.
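With that notation in place, here is a minimal NumPy sketch (the tensor shape and values are made up purely for illustration) of which axes each technique computes its statistics over, given activations of shape N×C×H×W:

```python
import numpy as np

# Hypothetical activation tensor of shape (N, C, H, W).
N, C, H, W = 8, 16, 32, 32
x = np.random.randn(N, C, H, W).astype(np.float32)
eps = 1e-5  # small constant for numerical stability

# Batch Normalization: statistics over (N, H, W) -- one mean/variance per channel C.
bn_mean = x.mean(axis=(0, 2, 3), keepdims=True)   # shape (1, C, 1, 1)
bn_var  = x.var(axis=(0, 2, 3), keepdims=True)
x_bn = (x - bn_mean) / np.sqrt(bn_var + eps)

# Layer Normalization: statistics over (C, H, W) -- one mean/variance per sample N.
ln_mean = x.mean(axis=(1, 2, 3), keepdims=True)   # shape (N, 1, 1, 1)
ln_var  = x.var(axis=(1, 2, 3), keepdims=True)
x_ln = (x - ln_mean) / np.sqrt(ln_var + eps)

# Instance Normalization: statistics over (H, W) -- one mean/variance per (sample, channel) pair.
in_mean = x.mean(axis=(2, 3), keepdims=True)      # shape (N, C, 1, 1)
in_var  = x.var(axis=(2, 3), keepdims=True)
x_in = (x - in_mean) / np.sqrt(in_var + eps)
```

In short, the three techniques apply the same normalization formula; they differ only in which axes of the N×C×H×W tensor the mean and variance are computed over.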

