The name itself says “Pixel to Pixel”: the model takes the pixels of one image and converts them into the pixels of another.

The goal of this model is to convert one image into another; in other words, to learn the mapping from an input image to an output image.

But why do we need this, and what applications can we think of?

Well, there are tons of applications we can think of:

[Figure: Pix2Pix GAN]

The Pix2Pix GAN has been demonstrated on a range of image-to-image translation tasks such as converting maps to satellite photographs, black and white photographs to color, and sketches of products to product photographs.

The reason we use GANs for this is that they can synthesize photos from one domain to another.

Pix2Pix is a Generative Adversarial Network, or GAN, model designed for general-purpose image-to-image translation.

The approach was introduced by Phillip Isola, et al. in their 2016 paper titled “Image-to-Image Translation with Conditional Adversarial Networks” and presented at CVPR in 2017.

Introduction to GANs

The GAN architecture comprises two models (a minimal sketch of each follows the list):

1. A generator model that outputs new, plausible synthetic images, and

2. A discriminator model that classifies images as real (from the dataset) or fake (generated).
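
To make this concrete, here is a minimal Keras sketch of the two models. These are deliberately tiny stand-ins, not the actual Pix2Pix architectures (which use a U-Net generator and a PatchGAN discriminator); the layer sizes and `img_shape` are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_generator(img_shape=(256, 256, 3)):
    """Toy generator: maps an input image to an output image.
    (The real Pix2Pix generator is a U-Net; this is only a sketch.)"""
    inputs = tf.keras.Input(shape=img_shape)
    x = layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(inputs)
    x = layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh")(x)
    return tf.keras.Model(inputs, x, name="generator")

def build_discriminator(img_shape=(256, 256, 3)):
    """Toy discriminator: classifies an image as real or fake.
    (The real Pix2Pix discriminator is a conditional PatchGAN.)"""
    inputs = tf.keras.Input(shape=img_shape)
    x = layers.Conv2D(64, 4, strides=2, padding="same")(inputs)
    x = layers.LeakyReLU(0.2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(1)(x)  # single logit: real vs. fake
    return tf.keras.Model(inputs, x, name="discriminator")
```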

The discriminator model is updated directly, whereas the generator model is updated via the discriminator model. As such, the two models are trained simultaneously in an adversarial process where the generator seeks to better fool the discriminator and the discriminator seeks to better identify the counterfeit images.
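The sketch below shows what that simultaneous update looks like as a single TensorFlow training step. It assumes a generator and discriminator like the ones above and uses standard binary cross-entropy GAN losses; the real Pix2Pix objective additionally conditions the discriminator on the input image and adds an L1 term, both omitted here for brevity.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
d_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)

@tf.function
def train_step(generator, discriminator, input_images, real_images):
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(input_images, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)

        # Discriminator is updated directly: push real -> 1, fake -> 0.
        d_loss = bce(tf.ones_like(real_logits), real_logits) + \
                 bce(tf.zeros_like(fake_logits), fake_logits)
        # Generator is updated *via* the discriminator: it is rewarded
        # when the discriminator labels its fakes as real.
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)

    d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
    g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
    d_opt.apply_gradients(zip(d_grads, discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_grads, generator.trainable_variables))
    return g_loss, d_loss
```

In practice you would call `train_step` in a loop over batches of (input, target) image pairs. Note that the generator's gradients flow through the discriminator's output, which is exactly what it means for the generator to be "updated via the discriminator model."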

