Deep Learning

Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised or unsupervised...
Salman Ankit


Deep Learning Tutorial | How to Choose an Activation Function for Deep Learning

How to Choose an Activation Function for Deep Learning
In this video, we cover the different activation functions used in neural networks to provide the output of a given node, or neuron, given its set of inputs: linear, step, sigmoid / logistic, tanh / hyperbolic tangent, ReLU, Leaky ReLU, PReLU, Maxout, and more.
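The basic shapes of a few of these functions are easy to sketch in plain Python (a toy illustration, not the code from the video):

```python
import math

def sigmoid(x):
    # Logistic function: squashes any input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Hyperbolic tangent: squashes any input into (-1, 1)
    return math.tanh(x)

def relu(x):
    # Rectified linear unit: zero for negative inputs, identity otherwise
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Like ReLU, but lets a small slope through for x < 0
    return x if x > 0 else alpha * x
```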


#deep-learning #machine-learning

Deep Learning Tutorial | How to Choose an Activation Function for Deep Learning
Phil Tabor


Should You Go to Grad School for Artificial Intelligence?

Should you go to graduate school for artificial intelligence? As a physics PhD I have some insights for you that you may not have heard elsewhere.

Graduate school is immensely rewarding, yet also incredibly difficult intellectually and emotionally. You’ll have to solve novel and complex problems, and learn to cope with sacrificing your social life.

Learn about how to choose your PhD committee as well as how to get things done in the face of immense pressure.

#artificial-intelligence #deep-learning #machine-learning

Should You Go to Grad School for Artificial Intelligence?
Phil Tabor


Dueling Deep Q Learning with Tensorflow 2 & Keras

Dueling Deep Q Learning is easier than ever with Tensorflow 2 and Keras. In this tutorial for deep reinforcement learning beginners we’ll code up the dueling deep q network and agent from scratch, with no prior experience needed. We’ll train an agent to land a spacecraft on the surface of the moon, using the lunar lander environment from the OpenAI Gym.

The dueling network can be applied to both regular and double q learning, as it’s just a new network architecture. It doesn’t require any change to the q learning or double q learning algorithms. We simply have to change up our feed forward to accommodate the new value and advantage streams, and combine them in a way that makes sense.
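The aggregation step described above can be sketched in a few lines of plain Python (a simplified illustration; the real network computes these values as tensors in the feed forward):

```python
def dueling_q(value, advantages):
    # Combine the value stream V(s) and advantage stream A(s, a) into
    # Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a)).
    # Subtracting the mean advantage keeps the decomposition identifiable,
    # since V and A are otherwise only determined up to a constant.
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]
```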

#deep-learning #python #machine-learning #tensorflow #artificial-intelligence

Dueling Deep Q Learning with Tensorflow 2 & Keras
Phil Tabor


Everything You Need To Master Actor Critic Methods | Tensorflow 2 Tutorial

In this brief tutorial you’re going to learn the fundamentals of deep reinforcement learning, and the basic concepts behind actor critic methods. We’ll cover the Markov decision process, the agent’s policy, reward discounting and why it’s necessary, and the actor critic algorithm. We’ll implement an actor critic algorithm using Tensorflow 2 to handle the cart pole environment from the OpenAI Gym.
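The reward-discounting step mentioned above can be sketched in plain Python (a toy version of what the tutorial implements with tensors):

```python
def discounted_returns(rewards, gamma=0.99):
    # Compute G_t = r_t + gamma * G_{t+1} by working backwards
    # from the end of the episode. Discounting makes near-term
    # rewards worth more than distant ones and keeps the sum finite.
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))
```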

Actor critic methods form the basis for more advanced algorithms such as deep deterministic policy gradients, soft actor critic, and twin delayed deep deterministic policy gradients, among others.

#deep-learning #python #machine-learning #artificial-intelligence #tensorflow

Everything You Need To Master Actor Critic Methods | Tensorflow 2 Tutorial
Phil Tabor


Deep Deterministic Policy Gradients (DDPG) | Tensorflow 2 Tutorial

Deep Deterministic Policy Gradients (DDPG) is an actor critic algorithm designed for use in environments with continuous action spaces. This makes it great for fields like robotics, which rely on applying continuous voltages to electric motors. You’ll get a crash course with a quick lecture, followed by a live coding tutorial.

Despite being an actor critic method, DDPG makes use of a number of innovations from deep Q learning. We’re going to make use of a replay memory for training our agent, as well as target actor and target critic networks for learning stability. One key difference is that DDPG uses a soft update rule for the target network parameters, rather than a direct hard copy of the online networks.
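The soft update rule can be sketched in plain Python (a simplified illustration over flat parameter lists; in the tutorial this is applied to the network weights):

```python
def soft_update(target_params, online_params, tau=0.005):
    # Polyak averaging: theta_target <- tau * theta_online + (1 - tau) * theta_target.
    # With a small tau, the target network trails the online network slowly,
    # which stabilizes learning compared to a hard copy.
    return [tau * o + (1.0 - tau) * t
            for t, o in zip(target_params, online_params)]
```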

In this tutorial we’re going to use Tensorflow 2 to implement a deep deterministic policy gradient agent in the pendulum environment from the Open AI gym.

#python #deep-learning #artificial-intelligence #tensorflow #machine-learning

Deep Deterministic Policy Gradients (DDPG) | Tensorflow 2 Tutorial
Phil Tabor


Soft Actor Critic (SAC) in Tensorflow 2

The Soft Actor Critic Algorithm is a powerful tool for solving cutting edge deep reinforcement learning problems involving continuous action space environments. It’s a variation of the actor critic method that leverages a maximum entropy framework, double Q networks, and target value networks.

The entropy is controlled by scaling the reward, with an inverse relationship between the reward scale and the entropy of our agent. A larger reward scale means more deterministic behavior, and a smaller reward scale means more stochastic behavior.
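That trade-off can be illustrated with a toy soft-value calculation (plain Python; treating the entropy temperature as the inverse of the reward scale is an assumption based on the relationship described above, not code from the video):

```python
def soft_value(q_value, log_prob, reward_scale=1.0):
    # Maximum-entropy soft value target: Q(s, a) - alpha * log pi(a|s),
    # with alpha taken as 1 / reward_scale. A larger reward scale shrinks
    # the entropy bonus, pushing the policy toward deterministic behavior.
    return q_value - log_prob / reward_scale
```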

We’re going to implement this algorithm using the Tensorflow 2 framework and test it out on the Inverted Pendulum environment found in the PyBullet package.

#deep-learning #machine-learning #artificial-intelligence #python #reinforcement-learning #data-science

Soft Actor Critic (SAC) in Tensorflow 2

New Algorithm Improves ML Model Training Over The Internet

Typically, training a deep learning model starts with a forward pass, where the loss function is evaluated, followed by a backward pass, where the gradients that reduce the loss are computed; these gradients are then pushed to servers, which apply the updates.
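That forward/backward cycle can be sketched with a one-parameter toy model (plain Python, purely illustrative; train_step and its squared-error loss are made up for the example):

```python
def train_step(w, x, y, lr=0.1):
    # Forward pass: evaluate the squared-error loss for a
    # one-parameter linear model pred = w * x
    pred = w * x
    loss = (pred - y) ** 2
    # Backward pass: gradient of the loss with respect to w
    grad = 2.0 * (pred - y) * x
    # Update step: on a parameter server, this is where the
    # pushed gradient would be applied
    return w - lr * grad, loss
```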

#ml #deep-learning

New Algorithm Improves ML Model Training Over The Internet

Advanced Deep Learning with TensorFlow 2 and Keras

Advanced Deep Learning with TensorFlow 2 and Keras (Updated for 2nd Edition)

This is the code repository for Advanced Deep Learning with TensorFlow 2 and Keras, published by Packt. It contains all the supporting project files necessary to work through the book from start to finish.

Please note that the code examples have been updated to support TensorFlow 2.0 Keras API only.

About the Book

Advanced Deep Learning with TensorFlow 2 and Keras, Second Edition is a completely updated edition of the bestselling guide to the advanced deep learning techniques available today. Revised for TensorFlow 2.x, this edition introduces you to the practical side of deep learning with new chapters on unsupervised learning using mutual information, object detection (SSD), and semantic segmentation (FCN and PSPNet), further allowing you to create your own cutting-edge AI projects.

Using Keras as an open-source deep learning library, the book features hands-on projects that show you how to create more effective AI with the most up-to-date techniques.

Starting with an overview of multi-layer perceptrons (MLPs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs), the book then introduces more cutting-edge techniques as you explore deep neural network architectures, including ResNet and DenseNet, and how to create autoencoders. You will then learn about GANs, and how they can unlock new levels of AI performance.

Next, you’ll discover how a variational autoencoder (VAE) is implemented, and how GANs and VAEs have the generative power to synthesize data that can be extremely convincing to humans. You’ll also learn to implement DRL such as Deep Q-Learning and Policy Gradient Methods, which are critical to many modern results in AI.

Related Products


It is recommended to run within a conda environment. Please download Anaconda from the Anaconda website. To install Anaconda:

sh <name-of-downloaded-Anaconda3-installer>

A machine with at least 1 NVIDIA GPU (1060 or better) is required. The code examples have been tested on 1060, 1080Ti, RTX 2080Ti, V100, RTX Quadro 8000 on Ubuntu 18.04 LTS. Below is a rough guide to install NVIDIA driver and CuDNN to enable GPU support.

sudo add-apt-repository ppa:graphics-drivers/ppa

sudo apt update

sudo ubuntu-drivers autoinstall

sudo reboot


At the time of writing, nvidia-smi shows the NVIDIA driver version is 440.64 and the CUDA version is 10.2.

We are almost there. The last set of packages must be installed as follows. Some steps might require sudo access.

conda create --name packt

conda activate packt

cd <github-dir>

git clone

cd Advanced-Deep-Learning-with-Keras

pip install -r requirements.txt

sudo apt-get install python-pydot

sudo apt-get install ffmpeg

Test if a simple model can be trained without errors:

cd chapter1-keras-quick-tour


The final output shows that the accuracy of the trained model on the MNIST test dataset is about 98.2%.

Alternative TensorFlow Installation

If you are having problems with the CUDA libraries (i.e., TensorFlow could not load or find them), TensorFlow and the CUDA libraries can be installed together using conda:

pip uninstall tensorflow-gpu
conda install -c anaconda tensorflow-gpu

Advanced Deep Learning with TensorFlow 2 and Keras code examples used in the book.

Chapter 1 - Introduction

  1. MLP on MNIST
  2. CNN on MNIST
  3. RNN on MNIST

Chapter 2 - Deep Networks

  1. Functional API on MNIST
  2. Y-Network on MNIST
  3. ResNet v1 and v2 on CIFAR10
  4. DenseNet on CIFAR10

Chapter 3 - AutoEncoders

  1. Denoising AutoEncoders

Sample outputs for random digits:

Random Digits

  1. Colorization AutoEncoder

Sample outputs for random cifar10 images:

Colorized Images

Chapter 4 - Generative Adversarial Network (GAN)

  1. Deep Convolutional GAN (DCGAN)

Radford, Alec, Luke Metz, and Soumith Chintala. “Unsupervised representation learning with deep convolutional generative adversarial networks.” arXiv preprint arXiv:1511.06434 (2015).

Sample outputs for random digits:

Random Digits

  1. Conditional (GAN)

Mirza, Mehdi, and Simon Osindero. “Conditional generative adversarial nets.” arXiv preprint arXiv:1411.1784 (2014).

Sample outputs for digits 0 to 9:

Zero to Nine

Chapter 5 - Improved GAN

  1. Wasserstein GAN (WGAN)

Arjovsky, Martin, Soumith Chintala, and Léon Bottou. “Wasserstein GAN.” arXiv preprint arXiv:1701.07875 (2017).

Sample outputs for random digits:

Random Digits

  1. Least Squares GAN (LSGAN)

Mao, Xudong, et al. “Least squares generative adversarial networks.” 2017 IEEE International Conference on Computer Vision (ICCV). IEEE, 2017.

Sample outputs for random digits:

Random Digits

  1. Auxiliary Classifier GAN (ACGAN)

Odena, Augustus, Christopher Olah, and Jonathon Shlens. “Conditional image synthesis with auxiliary classifier GANs.” Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017.

Sample outputs for digits 0 to 9:

Zero to Nine

Chapter 6 - GAN with Disentangled Latent Representations

  1. Information Maximizing GAN (InfoGAN)

Chen, Xi, et al. “Infogan: Interpretable representation learning by information maximizing generative adversarial nets.” Advances in Neural Information Processing Systems. 2016.

Sample outputs for digits 0 to 9:

Zero to Nine

  1. Stacked GAN

Huang, Xun, et al. “Stacked generative adversarial networks.” IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Vol. 2. 2017

Sample outputs for digits 0 to 9:

Zero to Nine

Chapter 7 - Cross-Domain GAN

  1. CycleGAN

Zhu, Jun-Yan, et al. “Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks.” 2017 IEEE International Conference on Computer Vision (ICCV). IEEE, 2017.

Sample outputs for random cifar10 images:

Colorized Images

Sample outputs for MNIST to SVHN:


Chapter 8 - Variational Autoencoders (VAE)

  3. Conditional VAE and Beta VAE

Kingma, Diederik P., and Max Welling. “Auto-encoding Variational Bayes.” arXiv preprint arXiv:1312.6114 (2013).

Sohn, Kihyuk, Honglak Lee, and Xinchen Yan. “Learning structured output representation using deep conditional generative models.” Advances in Neural Information Processing Systems. 2015.

I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner. β-VAE: Learning basic visual concepts with a constrained variational framework. ICLR, 2017.

Generated MNIST by navigating the latent space:


Chapter 9 - Deep Reinforcement Learning

  1. Q-Learning
  2. Q-Learning on Frozen Lake Environment
  3. DQN and DDQN on Cartpole Environment

Mnih, Volodymyr, et al. “Human-level control through deep reinforcement learning.” Nature 518.7540 (2015): 529

DQN on Cartpole Environment:


Chapter 10 - Policy Gradient Methods

  1. REINFORCE, REINFORCE with Baseline, Actor-Critic, A2C

Sutton and Barto, Reinforcement Learning: An Introduction

Mnih, Volodymyr, et al. “Asynchronous methods for deep reinforcement learning.” International conference on machine learning. 2016.

Policy Gradient on MountainCar Continuous Environment:


Chapter 11 - Object Detection

  1. Single-Shot Detection

Single-Shot Detection on 3 Objects

Chapter 12 - Semantic Segmentation

  1. FCN

  2. PSPNet

Semantic Segmentation

Semantic Segmentation

Chapter 13 - Unsupervised Learning using Mutual Information

  1. Invariant Information Clustering

  2. MINE: Mutual Information Estimation



If you find this work useful, please cite:

  title={Advanced Deep Learning with TensorFlow 2 and Keras: Apply DL, GANs, VAEs, deep RL, unsupervised learning, object detection and segmentation, and more},
  author={Atienza, Rowel},
  publisher={Packt Publishing Ltd}

Download Details:

Author: PacktPublishing
License: MIT


#deep-learning #tensorflow #keras #machine-learning

Advanced Deep Learning with TensorFlow 2 and Keras
Franz Bosco


Introduction to Deep Learning

This Eduonix video, “Introduction to Deep Learning”, part of our upcoming Live Machine Learning Program, will introduce you to the concept of deep learning and help you understand it in a clear, detailed, and concise manner.

➡️ Topics Covered in the video -

⭐ Learn what Deep Learning is through an interactive teacher-led discussion (slides/vocab/etc)
⭐ Get introduced to the objective of the day - digit recognition, discuss challenges in handwriting, etc.
⭐ Load in the MNIST dataset from Keras, discuss data shapes and what’s there (multidimensional arrays, data types, etc.)
⭐ Data normalization, why we do it, how to do it
⭐ Discuss how the model is coming along, what has been executed so far, and what to expect in the next session.
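The data-normalization step in the list above can be sketched in plain Python (a toy version; real code would apply the same scaling with array operations):

```python
def normalize(pixels):
    # MNIST pixels are integers in [0, 255]; scaling them to [0, 1]
    # keeps the inputs in a small, consistent range, which makes
    # training faster and more stable
    return [p / 255.0 for p in pixels]
```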

🔶 Happy E-Learning 🔶

#deep-learning #data-science #developer

Introduction to Deep Learning

Object detection with Tensorflow model and OpenCV

Using a trained model to identify objects on static images and live video

In this article, I’m going to demonstrate how to use a trained model to detect objects in images and videos using two of the best libraries for this kind of problem. For the detection, we need a model capable of predicting multiple classes in an image and returning the location of those objects so that we can place boxes on the image.

The Model

We are going to use a model from the TensorFlow Hub library, which has multiple ready-to-deploy models trained on all kinds of datasets to solve all kinds of problems. For our use case, I filtered for models trained for object detection tasks and available in the TFLite format. This format is usually used for IoT applications because of its small size and faster performance compared to bigger models. I chose this format because I intend to use this model on a Raspberry Pi in future projects.

The chosen model was the EfficientDet-Lite2 object detection model. It was trained on the COCO17 dataset with 91 different labels and optimized for TFLite. This model returns:

  1. The box boundaries of the detection;
  2. The detection scores (probabilities of a given class);
  3. The detection classes;
  4. The number of detections.
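A minimal sketch of how those four outputs might be combined (plain Python; filter_detections and the 0.5 threshold are illustrative choices, not the article's code):

```python
def filter_detections(boxes, scores, classes, threshold=0.5):
    # Keep only detections whose score clears the threshold,
    # pairing each surviving box with its class label and score
    # so a box can be drawn and labeled on the image
    return [(box, cls, score)
            for box, cls, score in zip(boxes, classes, scores)
            if score >= threshold]
```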

#object-detection #artificial-intelligence #deep-learning #opencv #tensorflow

Object detection with Tensorflow model and OpenCV

What is BERT? | Deep Learning Tutorial (TensorFlow, Keras & Python)

What is BERT (Bidirectional Encoder Representations from Transformers), and how is it used to solve NLP tasks? This video provides a very simple explanation. I am not going to go into the details of how the transformer-based architecture works; instead, I will give an overview so that you understand how BERT is used in NLP tasks. In the coding section, we will generate sentence and word embeddings using BERT for some sample text.

We will cover various topics such as,

  • Word2vec vs BERT
  • How BERT is trained on the masked language model and next sentence prediction tasks
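The masked-language-model idea can be sketched with a toy masking routine (plain Python; mask_tokens is illustrative and not BERT's actual preprocessing, though the 15% rate matches the convention):

```python
import random

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    # BERT-style masked language modeling: hide a fraction of the
    # tokens and ask the model to predict the originals from context
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append("[MASK]")
            labels.append(tok)   # the model must recover this token
        else:
            masked.append(tok)
            labels.append(None)  # not predicted
    return masked, labels
```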

⭐️ Timestamps ⭐️

  • 00:00 Introduction
  • 00:39 Theory
  • 11:00 Coding in tensorflow


#deep-learning #data-science

What is BERT? | Deep Learning Tutorial (TensorFlow, Keras & Python)

First Steps to the OpenCV-Python

You can go to my Github account to find some entry level projects. I share them with their sources, so you can check them out, and find more projects for yourself to learn 🌈


In my last article, I mentioned computer vision briefly. The whole idea behind computer vision is what computers can tell from a digital video or an image. It is a field that aims to automate tasks that human vision can do. Computer vision operations require methods for processing, analyzing, and extracting information from images. Obviously, we cannot feed a model with raw images. As you know, computers only understand numbers, so in order to train a model we must convert the pictures to matrices or tensors. We can also make changes to the images to make these operations easier.

🤔 _What is the OpenCV library?_

OpenCV-Python is a library of Python bindings designed to solve computer vision problems.

OpenCV supports a wide variety of programming languages like Python, C++, Java, etc. It can process images and videos to identify objects, faces, or even the handwriting of a human.

In this article, I’ll try to give you beginner-friendly information about OpenCV’s image preprocessing functions. We will cut, transform, rotate, and change the colors of pictures. Let’s dive in 🚀
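Before reaching for OpenCV, the point that pictures are just matrices can be seen with plain Python lists (a toy sketch, not OpenCV code):

```python
def crop(image, top, bottom, left, right):
    # An image is just a matrix of pixel values, so cropping
    # is nothing more than slicing rows and columns
    return [row[left:right] for row in image[top:bottom]]

def rotate90(image):
    # Rotate 90 degrees clockwise: reverse the rows, then transpose
    return [list(row) for row in zip(*image[::-1])]
```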

#computer-vision #deep-learning #data-science #opencv #python

First Steps to the OpenCV-Python
Phil Tabor


Proximal Policy Optimization (PPO) is Easy With PyTorch | Full PPO Tutorial

Proximal Policy Optimization is an advanced actor critic algorithm designed to improve performance by constraining updates to our actor network. It’s relatively straightforward to implement in code, and in this full tutorial you’re going to get a mini lecture covering the essential concepts behind the PPO algorithm, as well as a complete implementation in the PyTorch framework. We’ll test our algorithm in a simple OpenAI Gym environment: the cartpole.
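The constraint on actor updates is PPO's clipped surrogate objective, which can be sketched for a single sample in plain Python (illustrative only; the tutorial implements this over batches of tensors):

```python
def ppo_clip_objective(ratio, advantage, epsilon=0.2):
    # Clipped surrogate objective:
    #   L = min(r * A, clip(r, 1 - eps, 1 + eps) * A)
    # where r is the probability ratio pi_new(a|s) / pi_old(a|s).
    # Clipping removes the incentive to move the policy far from
    # the old one in a single update.
    clipped = max(1.0 - epsilon, min(1.0 + epsilon, ratio))
    return min(ratio * advantage, clipped * advantage)
```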


#python #deep-learning #machine-learning #artificial-intelligence #reinforcement-learning

Proximal Policy Optimization (PPO) is Easy With PyTorch | Full PPO Tutorial
Phil Tabor


Artificial Intelligence Learns to Walk with Actor Critic (TD3)

Twin Delayed Deep Deterministic Policy Gradients (TD3) is a state-of-the-art actor critic algorithm for mastering environments with continuous action spaces. It’s based on the deep deterministic policy gradients algorithm, but deals with the overestimation bias that arises from using deep neural networks as function approximators.
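TD3's fix for overestimation bias, clipped double Q-learning, can be sketched for a single transition in plain Python (illustrative; the tutorial computes this over batches of tensors):

```python
def td3_target(reward, q1_next, q2_next, gamma=0.99, done=False):
    # Clipped double Q-learning: take the minimum of the two critics'
    # estimates of the next state-action value, so an overestimate
    # from either critic is less likely to propagate into the target
    if done:
        return reward
    return reward + gamma * min(q1_next, q2_next)
```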

This is one of my favorite deep reinforcement learning algorithms, and we’re going to use it on the Bipedal Walker environment from the OpenAI Gym in this interactive Tensorflow 2 coding tutorial.


#machine-learning #deep-learning #artificial-intelligence #tensorflow #python

Artificial Intelligence Learns to Walk with Actor Critic (TD3)
Phil Tabor


Deep Q Learning Beats Pong | Keras Tutorial

Today I’ll show you how to beat Pong with a Deep Q Learning Agent in the Keras Framework. No prior experience needed, I’ll cover everything you need to know as we go along.

As a bonus, we’ll learn how to use the OpenAI Gym Environment wrappers to stack frames and preprocess our frames to get faster processing time and to give our agent a sense of motion.
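The frame-stacking idea can be sketched with a small helper (plain Python; FrameStack is an illustrative name, not the Gym wrapper's actual API):

```python
from collections import deque

class FrameStack:
    # Keep the last `size` frames so the agent can infer motion:
    # a single frame cannot tell which way the ball is moving.
    def __init__(self, size=4):
        self.size = size
        self.frames = deque(maxlen=size)

    def reset(self, frame):
        # At episode start, fill the stack with copies of the first frame
        for _ in range(self.size):
            self.frames.append(frame)
        return list(self.frames)

    def step(self, frame):
        # Push the newest frame; the oldest falls off automatically
        self.frames.append(frame)
        return list(self.frames)
```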

#machine-learning #artificial-intelligence #deep-learning #python #reinforcement-learning

Deep Q Learning Beats Pong | Keras Tutorial