Building a simple Generative Adversarial Network (GAN) using Keras

A complete guide for building a Generative Adversarial Network (GAN) to make your very own anime characters with Keras

In this post, we will learn to develop a Generative Adversarial Network (GAN) for generating realistic manga or anime characters.

Photo by Moujib Aghrout on Unsplash

I’ve always been amazed by vivid animations, especially Manga and their bold looks and strokes. Wouldn’t it be awesome to be able to draw a few ourselves, to experience the thrill of creating them with the help of a self-developed Neural Network?!

So what makes a GAN different?

The best way to master a skill is to practice and refine it until you're satisfied with your efforts. For a machine, or a neural network, the best output it can generate is one that matches human-generated outputs, or even fools a human into believing that a human actually produced it. That's exactly what a GAN does, well, at least figuratively ;)

Generative adversarial networks have lately been a hot topic in Deep Learning.

Quick Overview of Generative Adversarial Networks

In Generative Adversarial Networks (GANs), two networks train and compete against each other, each pushing the other to improve. The generator creates convincing fake inputs and tries to fool the discriminator into accepting them as real. The discriminator decides whether an input is real or fake.

GAN Architecture

There are 3 major steps in the training of a GAN:

  1. Using the generator to create fake inputs from random noise (in our case, random normal noise).
  2. Training the discriminator on both real and fake inputs (either simultaneously, by concatenating real and fake inputs, or one after the other, the latter being preferred).
  3. Training the whole model, where the combined model is built by stacking the generator and the discriminator.

An important point to note is that the discriminator’s weights are frozen during the last step.

The reason for combining both networks is that the generator gets no direct feedback on its outputs. The ONLY guide is whether the discriminator accepts the generator's output.

The minimax objective function:

min_G max_D V(D, G) = E_{x∼p_data(x)}[log D(x)] + E_{z∼p_z(z)}[log(1 − D(G(z)))]

You could say they are rivals destined for each other. The main character is the generator, which strives to get better and better by learning from its fights with its rival, the discriminator, until our goal is realized.


For the task at hand, we use a DCGAN (Deep Convolutional Generative Adversarial Network).

A few guidelines to follow with DCGANs:

  1. Replace all max pooling with strided convolutions.
  2. Use transposed convolution for upsampling.
  3. Eliminate fully connected layers.
  4. Use batch normalization everywhere except the output layer of the generator and the input layer of the discriminator.
  5. Use ReLU in the generator, except for the output, which uses tanh.
  6. Use LeakyReLU in the discriminator.

Setup Details

  1. Keras==2.2.4
  2. TensorFlow==1.8.0
  3. Jupyter Notebook
  4. Matplotlib and other utility libraries such as NumPy and Pandas
  5. Python==3.5.7

The Dataset

The dataset of anime faces can be built by crawling various manga websites, downloading images, cropping the faces out of them, and resizing them to a standard size. A Python script for this is linked here: Anime-Face-GAN-Keras

A Glimpse of the Dataset

glimpse of the dataset

The Model

Now let’s have a look at the architecture of our neural network! Do remember the points we discussed earlier about DCGANs.

This implementation of the GAN uses transposed-convolution (deconv) layers in Keras. I've tried various combinations of upsampling layers, such as:

  • Conv + Upsampling
  • Conv + bilinear
  • Conv + Subpixel Upscaling

The Generator

The generator consists of transposed-convolution layers, each followed by batch normalization and a leaky ReLU activation, for upsampling. We use the strides parameter in the convolution layers instead of pooling; this is done to avoid unstable training. Leaky ReLUs are one attempt to fix the "dying ReLU" problem: instead of the function being zero when x < 0, a leaky ReLU has a small negative slope (of 0.01, or so).
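A minimal sketch of such a generator in Keras, assuming 64×64 RGB output and a 100-dimensional noise vector; the layer widths here are illustrative, not the post's exact code:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_generator(latent_dim=100):
    # Project the noise vector to a small feature map, then upsample with
    # strided transposed convolutions (no max pooling), each followed by
    # batch norm + LeakyReLU; tanh on the output layer per the DCGAN rules.
    return keras.Sequential([
        keras.Input(shape=(latent_dim,)),
        layers.Dense(4 * 4 * 256),
        layers.Reshape((4, 4, 256)),                                 # 4x4
        layers.Conv2DTranspose(128, 4, strides=2, padding="same"),   # 8x8
        layers.BatchNormalization(),
        layers.LeakyReLU(0.2),
        layers.Conv2DTranspose(64, 4, strides=2, padding="same"),    # 16x16
        layers.BatchNormalization(),
        layers.LeakyReLU(0.2),
        layers.Conv2DTranspose(32, 4, strides=2, padding="same"),    # 32x32
        layers.BatchNormalization(),
        layers.LeakyReLU(0.2),
        layers.Conv2DTranspose(3, 4, strides=2, padding="same",
                               activation="tanh"),                   # 64x64x3
    ])

gen = build_generator()
noise = np.random.normal(size=(2, 100)).astype("float32")
fakes = gen.predict(noise, verbose=0)
print(fakes.shape)  # (2, 64, 64, 3)
```

Note that tanh keeps the outputs in [−1, 1], so the real training images should be rescaled to the same range.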



The Discriminator

The discriminator also consists of convolution layers, where we use strides for downsampling and batch normalization for stability.
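A matching sketch of the discriminator, again with illustrative layer sizes: strided convolutions instead of pooling, LeakyReLU activations, no batch norm on the input layer, and a sigmoid real/fake score at the end:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_discriminator():
    # Downsample with strided convolutions; LeakyReLU throughout, batch
    # norm on all but the first conv block, sigmoid output = P(real).
    model = keras.Sequential([
        keras.Input(shape=(64, 64, 3)),
        layers.Conv2D(32, 4, strides=2, padding="same"),   # 32x32
        layers.LeakyReLU(0.2),
        layers.Conv2D(64, 4, strides=2, padding="same"),   # 16x16
        layers.BatchNormalization(),
        layers.LeakyReLU(0.2),
        layers.Conv2D(128, 4, strides=2, padding="same"),  # 8x8
        layers.BatchNormalization(),
        layers.LeakyReLU(0.2),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

disc = build_discriminator()
scores = disc.predict(np.zeros((2, 64, 64, 3), dtype="float32"), verbose=0)
print(scores.shape)  # (2, 1)
```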



The Compiled GAN

To backpropagate into the generator, guided by the discriminator's judgment of its outputs, we compile a combined network in Keras: the generator followed by the discriminator.

In this network, the input is the random noise for the generator, and the output is the generator's output fed through the discriminator, with the discriminator's weights frozen to avoid adversarial collapse. Sounds cool, right? Look it up!
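A sketch of this wiring, with small stand-in sub-networks so it runs on its own; in practice you would plug in the generator and discriminator built above:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 100  # assumed noise dimensionality

# Small stand-in networks for illustration only.
generator = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    layers.Dense(4 * 4 * 8),
    layers.Reshape((4, 4, 8)),
    layers.Conv2DTranspose(8, 4, strides=4, padding="same"),            # 16x16
    layers.LeakyReLU(0.2),
    layers.Conv2DTranspose(3, 4, strides=4, padding="same",
                           activation="tanh"),                          # 64x64x3
])
discriminator = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),
    layers.Conv2D(8, 4, strides=8, padding="same"),
    layers.LeakyReLU(0.2),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])

# Freeze the discriminator inside the combined model: when the GAN is
# trained, only the generator's weights receive updates.
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

# Noise in, real/fake probability out.
probs = gan.predict(np.random.normal(size=(2, latent_dim)).astype("float32"),
                    verbose=0)
print(probs.shape)  # (2, 1)
```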


Combined GAN

Training the Model

minimax objective realized during training

Basic Configurations of the model

  • Generate random normal noise as input

  • Concatenate real data sampled from the dataset with the generated fake images

  • Add noise to the input labels

  • Train only the generator

  • Train only the discriminator

  • Train the combined GAN

  • Save instances of the generator and the discriminator
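The steps above can be sketched as a single training step. Tiny stand-in networks keep the sketch self-contained, and the softened 0.9/0.1 labels are an assumed form of the label noise mentioned above, not the post's exact values:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 16  # small for illustration; substitute the real value

# Tiny stand-in networks; plug in the DCGAN generator and discriminator
# from the previous sections in practice.
generator = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    layers.Dense(8 * 8 * 3, activation="tanh"),
    layers.Reshape((8, 8, 3)),
])
discriminator = keras.Sequential([
    keras.Input(shape=(8, 8, 3)),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

n = 4
real_batch = np.random.uniform(-1, 1, size=(n, 8, 8, 3)).astype("float32")
noise = np.random.normal(size=(n, latent_dim)).astype("float32")
fakes = generator.predict(noise, verbose=0)

# Train only the discriminator, on real then fake inputs, with softened
# ("noisy") labels instead of hard 1/0 targets.
d_real = discriminator.train_on_batch(real_batch, 0.9 * np.ones((n, 1)))
d_fake = discriminator.train_on_batch(fakes, 0.1 * np.ones((n, 1)))

# Freeze the discriminator, then train the combined GAN: the target
# label 1 ("real") pushes the generator to fool the discriminator.
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")
g_loss = gan.train_on_batch(noise, np.ones((n, 1)))

losses = [float(d_real), float(d_fake), float(g_loss)]
print(losses)
```

Repeating this step, saving generator/discriminator snapshots every so often, gives the full training loop.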

I trained this code on my Acer Predator Helios 300 with an Nvidia GeForce GTX 1050 Ti GPU; 10,000 steps over around 32,000 images took almost half an hour.

Manga-Generator Results

After training for 10,000 steps, the results came out looking pretty cool and satisfying. Have a look!

Transition of Images throughout Training

Final Output Images

In terms of improving the model, I think training for a longer duration and on a bigger dataset would improve the results further (some of the faces were scary weird, not conventional manga, I must say :D).


The task of generating manga-style faces was certainly interesting.

But there is still quite a bit of room for improvement through better training, models, and datasets. Our model can't yet make a human wonder whether the generated faces are real or fake; even so, it does an appreciably good job of generating manga-style images. Go ahead and try this with complete manga poses, too.
