Understanding the Differences Between DCGAN and WGAN, and Implementing WGAN with TensorFlow 2.x

In this article, we will try to understand the differences between two basic types of GAN, DCGAN and WGAN, and will also look at the implementation of WGAN with TensorFlow 2.x. I have used TensorFlow's official DCGAN tutorial code as the foundation for this tutorial and modified it for WGAN. You can find it here.

- Like every other GAN, DCGAN consists of two neural networks: a generator and a discriminator.
- The generator takes random noise as input and outputs a generated fake image.
- The discriminator takes real and fake images as inputs and outputs a value between 0 and 1, i.e. its confidence that an image is real or fake.
- DCGAN uses binary cross-entropy as its loss function.
- The generator never sees real images and learns only via feedback from the discriminator.
- The generator's goal is to fool the discriminator by generating realistic fake images, while the discriminator's goal is to correctly tell real and fake images apart.
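The roles above translate directly into loss functions. As a rough sketch of how the TensorFlow DCGAN tutorial (which this article builds on) expresses them with binary cross-entropy (exact variable names here are illustrative):

```python
import tensorflow as tf

# from_logits=True because the discriminator outputs raw scores,
# not sigmoid probabilities.
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(real_output, fake_output):
    # Real images should be classified as 1, fakes as 0.
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
    return real_loss + fake_loss

def generator_loss(fake_output):
    # The generator wants the discriminator to output 1 for its fakes.
    return cross_entropy(tf.ones_like(fake_output), fake_output)
```

A confident-and-correct discriminator drives both of its loss terms toward zero, while the generator's loss grows — the adversarial tug-of-war the bullets describe.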

Some issues in DCGAN arise from the use of binary cross-entropy loss; the main ones are as follows.

**Mode Collapse**: In the context of GANs, this term describes a GAN's inability to generate images of different classes. For example, when trained on the MNIST dataset, a GAN may only be able to generate one type of digit instead of all 10, i.e. it may generate only '2' or some other number. Similarly, a GAN trained on all kinds of dog breeds may only be able to generate dogs of one breed, such as Husky.

**Vanishing Gradient**: The discriminator's output is a single confidence value between 0 and 1, and the generator's goal is to push it as close to 1 as possible. As the discriminator becomes confident, the calculated gradients approach zero, so the generator receives little information and is not able to learn. A strong discriminator therefore leads to a poor generator.
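The vanishing gradient can be seen numerically. For the original minimax generator objective log(1 − D(G(z))), the gradient with respect to the discriminator's logit x works out to −sigmoid(x), which shrinks toward zero as the discriminator grows confident a sample is fake (a small illustrative sketch, not taken from the tutorial code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Gradient of log(1 - sigmoid(x)) w.r.t. x is -sigmoid(x).
# The more negative the logit (discriminator sure the image is fake),
# the smaller the gradient flowing back to the generator.
for logit in [0.0, -2.0, -5.0, -10.0]:
    grad = -sigmoid(logit)
    print(f"logit={logit:6.1f}  gradient={grad:.6f}")
```

At logit 0 the gradient magnitude is 0.5; by logit −10 it is below 0.0001, so the generator effectively stops learning.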

One solution to the issues discussed above is Wasserstein loss, which approximates the Earth Mover's Distance (EMD is the amount of effort needed to transform one distribution into another; in our case, we want the generated image distribution to match the real image distribution). WGAN makes use of Wasserstein loss, so let us now talk about WGAN.
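In WGAN, the discriminator becomes a "critic" that outputs an unbounded score rather than a 0–1 probability, and the BCE losses are replaced by plain means of those scores. A minimal TF 2.x sketch of the WGAN losses plus the weight clipping used in the original WGAN paper (function names and the 0.01 clip value are illustrative choices, not from the tutorial):

```python
import tensorflow as tf

def critic_loss(real_output, fake_output):
    # The critic maximises score(real) - score(fake); minimising the
    # negation approximates the Wasserstein distance between the two.
    return tf.reduce_mean(fake_output) - tf.reduce_mean(real_output)

def generator_loss(fake_output):
    # The generator tries to maximise the critic's score on its fakes.
    return -tf.reduce_mean(fake_output)

def clip_critic_weights(critic, clip_value=0.01):
    # Crude enforcement of the 1-Lipschitz constraint the Wasserstein
    # approximation requires: clamp every critic weight to a small range.
    for var in critic.trainable_variables:
        var.assign(tf.clip_by_value(var, -clip_value, clip_value))
```

Because the scores are unbounded, the gradients do not saturate the way sigmoid-plus-BCE gradients do, which is exactly what mitigates the vanishing-gradient problem above.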
