Conditional and Controllable Generative Adversarial Networks

In this article, we will look at conditional and controllable GANs, why they are needed, and how to implement a naive conditional GAN using TensorFlow 2.x. Before you read further, I would like you to be familiar with DCGANs, which you can find here.

Why Conditional GAN

Until now, the generator generated images at random, and we had no control over the class of image it produced. While training a GAN on MNIST, for example, the generator might output a one, a six, or a three; we never knew which digit it would generate. With a conditional GAN, however, we can tell the generator to generate an image of a one or a six. This is where conditional GANs come in handy: you can generate images of the class of your choice.

How does it work?

Until now, we fed images as the only input to our generator and discriminator. Now we will also feed class information to both networks.

  1. The generator takes random noise and a one-hot encoded class label as input, and outputs a fake image of that particular class.
  2. The discriminator takes an image with the one-hot label appended as extra depth (channel) dimensions, i.e. if the image is 28 * 28 * 1 and the one-hot vector has size n, the combined input is 28 * 28 * (n + 1).
  3. The discriminator outputs whether the image is a real or fake example of that class (see the sketch below).
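Here is a minimal sketch of what such a pair of networks could look like in TensorFlow 2.x, assuming 28 * 28 * 1 MNIST-style images and 10 classes. The function names (`build_generator`, `build_discriminator`), constants, and layer sizes are illustrative choices, not the article's exact code.

```python
import tensorflow as tf
from tensorflow.keras import layers

NOISE_DIM = 100      # illustrative noise size
NUM_CLASSES = 10     # MNIST digits

def build_generator():
    noise = layers.Input(shape=(NOISE_DIM,))
    label = layers.Input(shape=(NUM_CLASSES,))        # one-hot class label
    x = layers.Concatenate()([noise, label])          # condition the noise on the class
    x = layers.Dense(7 * 7 * 128, use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.LeakyReLU()(x)
    x = layers.Reshape((7, 7, 128))(x)
    x = layers.Conv2DTranspose(64, 4, strides=2, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.LeakyReLU()(x)
    fake = layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh")(x)
    return tf.keras.Model([noise, label], fake)

def build_discriminator():
    # The input is the image stacked with NUM_CLASSES label channels: 28 x 28 x (1 + 10)
    image = layers.Input(shape=(28, 28, 1 + NUM_CLASSES))
    x = layers.Conv2D(64, 4, strides=2, padding="same")(image)
    x = layers.LeakyReLU()(x)
    x = layers.Conv2D(128, 4, strides=2, padding="same")(x)
    x = layers.LeakyReLU()(x)
    x = layers.Flatten()(x)
    real_or_fake = layers.Dense(1)(x)                 # logit: real vs. fake for that class
    return tf.keras.Model(image, real_or_fake)
```

During training, the generator's output would be stacked with the same label channels before being passed to the discriminator, so that both real and fake images carry their class information.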

Code

The code for this article is almost the same as that of DCGAN, with a few modifications. Let us look at those differences.

Note: The following implementation is a naive approach and is very slow. You can refer [here](https://machinelearningmastery.com/how-to-develop-a-conditional-generative-adversarial-network-from-scratch/) for a much better way of coding conditional GANs.

Combining Images and Labels

  1. First, we load the MNIST dataset and normalize the images.
  2. Then, we define an add_channels function that takes an image and its corresponding one-hot label as inputs, and outputs the image with additional depth channels representing the one-hot label. Of all the channels that are added, only the channel for the true class contains ones; all other channels contain zeros.
  3. To do this, we iterate over all the images and their corresponding labels. For each entry in the one-hot label, we create an array with the image's spatial shape filled with that entry's value, and stack these arrays behind the image (see the sketch below).
  4. Since we have 10 classes here, we loop over the 10 entries of the one-hot label.
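The sketch below shows one way this could look, again assuming 28 * 28 * 1 MNIST images and 10 classes. The function name `add_channels` follows the description above, but the exact helper names and shapes are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf

NUM_CLASSES = 10

# Load MNIST and normalize the images to [-1, 1]
(train_images, train_labels), _ = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(-1, 28, 28, 1).astype("float32")
train_images = (train_images - 127.5) / 127.5
train_onehot = tf.one_hot(train_labels, NUM_CLASSES).numpy()

def add_channels(image, onehot_label):
    """Stack one constant channel per class behind the image.

    Only the channel for the true class is filled with ones;
    the other channels are all zeros.
    """
    label_channels = []
    for value in onehot_label:                        # loop over the 10 one-hot entries
        label_channels.append(np.full(image.shape[:2] + (1,), value, dtype="float32"))
    return np.concatenate([image] + label_channels, axis=-1)   # 28 x 28 x (1 + 10)

# Example: the first training image becomes a 28 x 28 x 11 array
combined = add_channels(train_images[0], train_onehot[0])
print(combined.shape)   # (28, 28, 11)
```

Looping over every image in Python like this is what makes the naive approach slow; the faster implementations linked above build the label channels with a single embedding or broadcast operation instead.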
