Neural networks (NN) are the backbone of many of today’s machine learning (ML) models, loosely mimicking the neurons of the human brain to recognize patterns from input data. As a result, numerous types of neural network topologies have been designed over the years, built using different types of neural network layers.
And with today’s vast array of ML frameworks and tools, just about anyone with a bit of ML knowledge can build a model from these layer types. For the most part, success comes down to knowing which problems each type of neural network excels at solving and tuning its hyperparameter configuration.
The four most common types of neural network layers are *fully connected*, *convolutional*, *deconvolutional*, and *recurrent*, and below you will find what they are and how they can be used.
Fully connected layers connect every neuron in one layer to every neuron in the next. They appear in many different network types, from standard feedforward networks to convolutional neural networks (CNNs).
Fully connected layers can become computationally expensive as their inputs grow: the weight matrix scales with the product of the input and output sizes, so wide layers quickly accumulate parameters and scale poorly. As such, they are typically reserved for specific roles within a network, such as the final stage of a CNN that maps extracted image features to class scores.
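To make the cost concrete, here is a minimal NumPy sketch of a fully connected layer (the layer sizes and ReLU activation are illustrative choices, not from the original text). Note how the weight matrix alone holds `n_in * n_out` parameters, which is where the expense comes from:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a flattened feature vector mapped to 256 outputs.
n_in, n_out = 1024, 256
W = rng.standard_normal((n_in, n_out)) * 0.01  # one weight per input-output pair
b = np.zeros(n_out)                            # one bias per output neuron

def dense(x, W, b):
    """Fully connected layer: y = xW + b, followed by a ReLU activation."""
    return np.maximum(x @ W + b, 0.0)

x = rng.standard_normal((8, n_in))  # a batch of 8 inputs
y = dense(x, W, b)

print(y.shape)          # (8, 256)
print(W.size + b.size)  # 262400 parameters in this single layer
```

Doubling both the input and output width of such a layer quadruples its parameter count, which is why deep models use fully connected layers sparingly.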
Use Cases
Hyperparameters Commonly Associated with Fully Connected Layers