Neural networks, as their name implies, are computer algorithms modeled after networks of neurons in the human brain. Like their counterparts in the brain, neural networks work by connecting a series of nodes organized in layers, where each node is connected to its neighbors in adjacent layers by weighted edges. In a fully connected layer, the forward pass applies these weights as a matrix multiplication.
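As a rough sketch of that forward pass, here is one fully connected layer in NumPy; the layer sizes, weights, and input are made up for illustration:

```python
import numpy as np

# Hypothetical layer: 3 input nodes connected to 2 output nodes.
rng = np.random.default_rng(0)
W = rng.standard_normal((2, 3))  # weighted edges between the two layers
b = np.zeros(2)                  # one bias per output node
x = np.array([1.0, 0.5, -0.2])   # activations of the input layer

# Forward pass of a fully connected layer: a matrix multiplication plus bias.
z = W @ x + b
print(z.shape)  # (2,)
```

Each entry of `z` is the weighted sum of all input-node values feeding one output node, which is exactly what the weighted edges describe.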

Sitting atop most layers is something called an *activation function*, which squashes a layer’s output into some predefined range. Common examples are ReLU and sigmoid. A ReLU layer replaces each node’s value with the maximum of that value and 0; a sigmoid layer applies the sigmoid function to each node’s value.

Sigmoid function:

f(x) = 1 / (1 + e^(−x))
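Both activations are one-liners in NumPy; this small sketch applies them element-wise to a layer's output:

```python
import numpy as np

def relu(z):
    # ReLU: the maximum of each node's value and 0.
    return np.maximum(z, 0.0)

def sigmoid(z):
    # Sigmoid: squashes each value into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-2.0, 0.0, 2.0])  # example layer output
print(relu(z))        # [0. 0. 2.]
print(sigmoid(0.0))   # 0.5
```

Note that ReLU only clips from below (its range is [0, ∞)), while sigmoid bounds the output on both sides.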

While the simplest types of neural networks are constructed as above, more complicated architectures have been created to handle specialized tasks. One such architecture is called a *convolutional neural network (CNN)* and is used extensively in computer vision applications. These networks comprise *convolutional layers*, which apply the convolution of a filter with local areas of an image.

Convolutional neural networks perform particularly well on image data because the same filter is slid across the whole image, so it can detect a pattern wherever it occurs; this reuse of filter weights across locations is a form of weight sharing between network nodes. CNNs also use *pooling layers*, which reduce the spatial resolution of the feature maps, cutting computation and making the network less sensitive to small shifts in the input. The input layer of such a network is often the set of pixels representing an input image, and the output layer might be a vector assigning probabilities to predefined classes, enabling categorization of an image.
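The two layer types above can be sketched in a few lines of NumPy. The filter and image here are toy values chosen for illustration, and the convolution is the "valid" variant (no padding):

```python
import numpy as np

def conv2d(image, kernel):
    # Convolutional layer: apply the filter to every local patch of the image.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2x2(fmap):
    # Pooling layer: keep the max of each 2x2 block, halving the resolution.
    h, w = fmap.shape
    return fmap[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

image = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "image"
edge = np.array([[1.0, -1.0], [1.0, -1.0]])        # toy vertical-edge filter

fmap = conv2d(image, edge)    # feature map, shape (3, 3)
pooled = max_pool2x2(fmap)    # reduced resolution, shape (1, 1)
```

Because the single `edge` filter is reused at every position of the loop, the same pattern detector covers the entire image with only four weights.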
