This short article covers regularization techniques: what they mean, why they are necessary, their advantages, and how to apply them. I am not going to explain how to design neural networks, or cover forward and backpropagation, weights, biases (thresholds), or normalization; perhaps I will cover those topics in a future article. However, you will need those concepts to understand regularization techniques.

First, we need to understand the problem with neural networks. When we design and build a neural network, we have a goal in mind. For example, if my goal is to recognize the digits 0 to 9, I need training samples that cover the many ways people write those digits, plus separate samples to test the model. This is important because, as you know, handwriting varies widely: the lines and circles may be well formed or not, and this can depend on many factors such as age, illness, blood alcohol level, anxiety, writing technique, and more. What do you think of doctors' handwriting? Yes, that's another topic. Back to the problem: we need to choose our samples carefully, aiming for data that represents the datasets the model will see in the future. Many problems can arise here, but in this article we will talk only about one of them: overfitting.
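A quick way to see overfitting is to compare a model's error on the data it was trained on against its error on held-out data. The sketch below is my own toy example (not from the article, and a simple 1-D regression problem stands in for the digit-recognition task): as the model gets more flexible, training error keeps falling while the train/test gap widens.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem: a noisy sine wave, split into
# a training half and a held-out test half.
x = rng.uniform(-3, 3, 40)
y = np.sin(x) + rng.normal(0, 0.2, 40)
x_train, y_train = x[:20], y[:20]
x_test, y_test = x[20:], y[20:]

def errors(degree):
    """Fit a polynomial of the given degree on the training half only,
    then report mean squared error on both halves."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

# A more flexible model always fits its training data at least as well,
# but past some point the train/test gap widens: that is overfitting.
for degree in (1, 3, 9):
    train_mse, test_mse = errors(degree)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

The same diagnostic applies to a neural network: when training accuracy keeps improving while test accuracy stalls or drops, the model is memorizing the training samples instead of generalizing.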

To understand overfitting, it is necessary to know what bias and variance mean. I recommend this video because it gives a very good explanation: https://www.youtube.com/watch?v=EuBBz3bI-aA
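The bias/variance trade-off can also be measured numerically. The sketch below (my own illustrative example, not from the article or the video) refits a rigid model and a flexible model on many freshly sampled training sets: the rigid model's predictions are stable but systematically off (high bias, low variance), while the flexible model's predictions are accurate on average but swing from dataset to dataset (low bias, high variance).

```python
import numpy as np

rng = np.random.default_rng(1)

def true_f(x):
    return np.sin(x)

x0 = 1.0                 # point at which we measure bias and variance
preds = {1: [], 10: []}  # polynomial degree -> predictions at x0

# Refit each model on many independently sampled training sets and record
# its prediction at x0. The spread of those predictions is the variance;
# the squared distance of their average from the truth is the squared bias.
for _ in range(200):
    x = rng.uniform(-3, 3, 25)
    y = true_f(x) + rng.normal(0, 0.3, 25)
    for degree in preds:
        preds[degree].append(np.polyval(np.polyfit(x, y, degree), x0))

for degree, p in preds.items():
    p = np.array(p)
    bias2 = (p.mean() - true_f(x0)) ** 2
    variance = p.var()
    print(f"degree {degree:2d}: bias^2 = {bias2:.4f}, variance = {variance:.4f}")
```

Regularization sits exactly on this trade-off: it deliberately adds a little bias in exchange for a larger reduction in variance.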
