Neural network topology describes how neurons are connected to form a network. This architecture is endlessly adaptable, and novel topologies are often hailed as breakthroughs in neural network research. From the advent of the Perceptron in 1958, to feedforward neural networks, to Long Short-Term Memory (LSTM) models, to — more recently — Generative Adversarial Networks (GANs) developed by Ian Goodfellow, innovative architectures have driven major advances in machine learning.

But how does one find novel, effective architectures for specific problem sets? Until recently, solely through human ingenuity. This brings us to Neural Architecture Search (NAS), an algorithmic approach that discovers optimal network topologies through raw computational power. The approach is essentially a massive grid search: test many combinations of hyperparameters, such as the number of hidden layers, the number of neurons in each layer, and the activation function, to find the best-performing architecture. If this sounds incredibly resource intensive, it is.
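To make the grid-search framing concrete, here is a minimal sketch in Python, using scikit-learn's MLPClassifier on a toy dataset as a stand-in for a real training pipeline. The search space (number of hidden layers, neurons per layer, activation function) and the dataset are illustrative assumptions, not any particular NAS system's setup.

```python
import itertools

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy stand-in for a real task: the scikit-learn digits dataset.
X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Illustrative search space over architectural hyperparameters.
search_space = {
    "num_hidden_layers": [1, 2, 3],
    "neurons_per_layer": [32, 64],
    "activation": ["relu", "tanh"],
}

best_config, best_score = None, 0.0
# Exhaustively enumerate every combination of hyperparameters (grid search).
for values in itertools.product(*search_space.values()):
    config = dict(zip(search_space.keys(), values))
    model = MLPClassifier(
        hidden_layer_sizes=(config["neurons_per_layer"],) * config["num_hidden_layers"],
        activation=config["activation"],
        max_iter=300,
        random_state=0,
    )
    model.fit(X_train, y_train)        # the expensive step in real NAS
    score = model.score(X_val, y_val)  # validation accuracy
    if score > best_score:
        best_config, best_score = config, score

print(f"Best architecture: {best_config} (val accuracy {best_score:.3f})")
```

Even this tiny space already requires training twelve networks end to end; realistic NAS search spaces contain thousands or millions of candidates, which is exactly where the enormous resource cost comes from.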

#neuralarchitecturesearch #machine-learning #convolutional-network #deep-learning #transfer-learning
