Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised or unsupervised...
In this video I analyze all the mentioned techniques one by one, starting with Momentum, then RMSprop, and finally Adam, which, as theory says, combines SGD with Momentum and RMSprop.
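As a rough illustration of that combination (a hypothetical NumPy sketch, not the video's code): Adam keeps a Momentum-style running mean of the gradients and an RMSprop-style running mean of their squares.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: Momentum-style mean (m) plus RMSprop-style variance (v)."""
    m = beta1 * m + (1 - beta1) * grad       # Momentum: running mean of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2  # RMSprop: running mean of squared gradients
    m_hat = m / (1 - beta1 ** t)             # bias correction for the zero-initialized m
    v_hat = v / (1 - beta2 ** t)             # bias correction for the zero-initialized v
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

The function name and signature here are illustrative; deep learning frameworks expose the same update through their optimizer APIs.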
In this blog we will learn how to apply scaled-out, or in other words distributed, machine learning techniques on the cloud. We will see how to go from a Jupyter notebook, the most agile way of building ML models, to a production-ready training script that can run on a cluster of GPUs using Azure ML and Horovod.
In this tutorial, we'll build an Air Piano using OpenCV and Python. What's special about it? Why are so many people looking forward to it?
Best websites where you can learn how to code by playing games | Exceed Team Tech Blog
This video explains what gradient descent is and how it works with a simple example, covering the basic intuition behind the method.
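For readers who want the intuition in code form, here is a minimal sketch (not the video's own example): repeatedly step in the direction opposite the gradient until the function stops decreasing.

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function by repeatedly stepping opposite its gradient."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # move downhill by a fraction (lr) of the slope
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3); the minimum is at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

Each step shrinks the distance to the minimum by a constant factor here, which is why the loop converges quickly on this toy function.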
Top 10 Deep Learning Algorithms One Should Know in 2021: Convolutional Neural Networks, Long Short-Term Memory Networks, Recurrent Neural Networks, Generative Adversarial Networks, Radial Basis Function Networks, Multilayer Perceptrons, Self-Organizing Maps, Deep Belief Networks, Restricted Boltzmann Machines, Autoencoders
Learn about deploying an application on a serverless architecture using different AWS services (Lambda, API Gateway, S3, etc.). How to Build and Deploy a Serverless Machine Learning App on AWS.
The battle of AI vs. ML is hardly unheard of. In fact, the difference between artificial intelligence and machine learning is one of the most searched questions related to these fascinating technologies. Surely, you too would be interested in knowing how these technologies differ from each other and how they are doing around the world. Well! The fact of the matter is that AI &
Andrew Ng is the best teacher I've ever seen. These two materials from Coursera changed my life: his machine learning course, and the deep learning specialization. Here are the differences between the two.
Greetings! In this blog, I will be talking about gradient descent. It is one of the basic topics that anyone studying machine learning must know. I will try to explain it in a very simple way, along with the different types of gradient descent and their mathematical equations. Let's get started!
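The types mentioned above differ mainly in how much data each update uses. As a hypothetical sketch (names like `minibatch_gd` are illustrative, not from the blog): setting `batch_size` to the full dataset gives batch gradient descent, and `batch_size=1` gives stochastic gradient descent.

```python
import numpy as np

def minibatch_gd(X, y, lr=0.1, epochs=200, batch_size=2):
    """Fit linear-regression weights with mini-batch gradient descent.
    batch_size = len(X) -> batch GD; batch_size = 1 -> stochastic GD."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        idx = np.random.permutation(len(X))        # shuffle each epoch
        for start in range(0, len(X), batch_size):
            b = idx[start:start + batch_size]
            grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)  # MSE gradient on the batch
            w -= lr * grad
    return w
```

On a noiseless linear dataset, all three variants recover the true weights; they differ in how noisy each individual step is.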
In the first tutorial, I introduced the most basic reinforcement learning method, Q-learning, to solve the CartPole problem. Because of its computational limitations, it only works in simple environments, where the number of states and possible actions is relatively small.
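The core of tabular Q-learning is a single update rule over a table indexed by state and action. A minimal sketch (the table sizes here are illustrative, not the tutorial's CartPole discretization):

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * np.max(Q[s_next])   # best value reachable from s'
    Q[s, a] += alpha * (td_target - Q[s, a])    # move Q(s,a) toward the target
    return Q

Q = np.zeros((4, 2))   # e.g. 4 discretized states, 2 actions
Q = q_update(Q, s=0, a=1, r=1.0, s_next=2)
```

Because the table grows with the number of states times actions, this approach breaks down in large or continuous environments, which is exactly the limitation noted above.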
One of the key problems every machine learning model faces is over-fitting. So what is over-fitting, and how do we minimize it? What is regularization? By the end of the article, you will be clear on these concepts.
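To make the connection concrete, one common regularization technique, L2 (ridge) regularization, adds a penalty on large weights to the training loss. A hypothetical sketch (the article may use a different variant):

```python
import numpy as np

def ridge_loss(w, X, y, lam=0.1):
    """Mean squared error plus an L2 penalty that shrinks weights toward zero,
    discouraging the model from fitting noise in the training data."""
    residual = X @ w - y
    return np.mean(residual ** 2) + lam * np.sum(w ** 2)
```

Larger `lam` means stronger shrinkage: the model trades a little training accuracy for simpler weights that generalize better.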
This is the sixth article in my series on Reinforcement Learning (RL). We now have a good understanding of the concepts that form the building blocks of an RL problem, and the techniques used to solve them. We have also taken a detailed look at two value-based algorithms, Q-Learning and Deep Q Networks (DQN), which were our first step into Deep Reinforcement Learning.
This blog will be a three-part series in which I explain different Reinforcement Learning algorithms.
A simple guide to applying traditional machine learning and deep learning techniques using Python on Kaggle's Fake News Dataset. Also includes a brief text and stylometric analysis of the articles.
In vanilla federated learning, the centralized server sends a global model to each participant before training takes place. After every round of federated training, each participant sends its local gradients back, and the server updates the global model with the average of all the local gradients.
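The server-side step described above can be sketched in a few lines (a simplified illustration; real frameworks also handle weighting by client data size, secure aggregation, and so on):

```python
import numpy as np

def server_round(global_w, local_grads, lr=0.1):
    """Aggregate one round: average the participants' gradients, update the model."""
    avg_grad = np.mean(local_grads, axis=0)  # average over all participants
    return global_w - lr * avg_grad          # apply the averaged gradient

w = np.array([1.0, 2.0])                                  # current global model
grads_from_clients = [np.array([0.2, 0.0]),               # participant 1
                      np.array([0.0, 0.4])]               # participant 2
w = server_round(w, grads_from_clients)
```

The key point is that only gradients travel to the server; the raw training data never leaves the participants.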
The ways in which researchers train these models vary drastically. The downstream tasks are how these methods are evaluated, and they are the focus of this article.
Machine learning algorithms are tunable via multiple knobs called hyperparameters. Recent deep learning models are tunable by tens of hyperparameters, which, together with data augmentation parameters and training-procedure parameters, create quite a complex search space. In the reinforcement learning domain, you should also count environment parameters.
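One simple way to explore such a space is random search. A hypothetical sketch, where `score_fn` stands in for a real training-and-evaluation run and the space below is purely illustrative:

```python
import random

# Illustrative hyperparameter space (not from any specific model).
space = {
    "lr": [1e-3, 1e-2, 1e-1],
    "batch_size": [16, 32, 64],
    "dropout": [0.0, 0.2, 0.5],
}

def random_search(score_fn, space, n_trials=20, seed=0):
    """Sample random configurations and keep the best-scoring one."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in space.items()}  # random configuration
        s = score_fn(cfg)
        if best is None or s > best[0]:
            best = (s, cfg)
    return best
```

In practice each trial is a full training run, which is why the size of this combined space matters so much for tuning cost.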
Modern technologies make our lives more comfortable, and you probably have no idea what technologies stand behind them. Have you ever thought about how fridges adjust their temperature themselves, or how Siri works?
In this article, let's discuss: What makes Deep Learning different from traditional Machine Learning methods?