In recent years, novel neural networks called “generative adversarial networks” — GANs for short — have entered areas that were previously reserved exclusively for humans. Creativity and art are generally not perceived as the domain of computers. However, since the introduction of GANs in 2014, generative models have increasingly found their way into this area.

On my blog, I showed how to use progressive generative adversarial networks for image synthesis to create artistic images of watches. This article is about the customizable generation of watch images with StyleGAN that look quite realistic.


AI generated images of watches

Generative adversarial networks

Instead of just repeating what others have already explained very well and in an easy-to-understand way, I refer to this article. In short, the StyleGAN architecture makes it possible to control the style of generated examples inside the image-synthesis network. This means that the high-level styles of an image can be adjusted by applying different vectors w drawn from the intermediate latent space W. Furthermore, it is possible to transfer a style from one generated image to another. These styles are fed into the generator's LOD (level of detail) sub-networks, so their effects vary from coarse (pose, shape) to fine (color scheme, micro-texture).
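The style-mixing idea described above can be sketched in a few lines. The snippet below is a simplified, self-contained illustration, not the actual StyleGAN implementation: the learned 8-layer mapping network is stood in for by a single random linear map, and the layer count of 18 matches a 1024×1024 generator. The point is the mechanism: each latent z is mapped to a style vector w, w is replicated once per synthesis layer, and mixing means taking the coarse layers from one image's w and the fine layers from another's.

```python
import numpy as np

NUM_LAYERS = 18   # synthesis layers in a 1024x1024 StyleGAN generator
LATENT_DIM = 512  # dimensionality of both Z and W in the paper

rng = np.random.default_rng(0)
# Toy stand-in for the learned mapping network f: Z -> W
# (in the real model this is an 8-layer MLP).
W_matrix = rng.standard_normal((LATENT_DIM, LATENT_DIM)) * 0.01

def mapping(z):
    """Map a latent code z to an intermediate style vector w."""
    return z @ W_matrix

def broadcast(w):
    """Replicate w once per synthesis layer: shape (NUM_LAYERS, LATENT_DIM)."""
    return np.tile(w, (NUM_LAYERS, 1))

def style_mix(w_a, w_b, crossover):
    """Coarse styles (layers < crossover) come from A, fine styles from B."""
    mixed = broadcast(w_b).copy()
    mixed[:crossover] = broadcast(w_a)[:crossover]
    return mixed

# Two independent latents -> two styles, mixed at layer 4.
z_a = rng.standard_normal(LATENT_DIM)
z_b = rng.standard_normal(LATENT_DIM)
w_mixed = style_mix(mapping(z_a), mapping(z_b), crossover=4)
```

In the real generator, `w_mixed` would then be passed to the synthesis network, where the early rows control coarse attributes and the later rows control fine ones.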

The StyleGAN paper was released roughly one year ago (Jan 2019) and showed some major improvements over previous generative adversarial networks. Furthermore, StyleGAN2 was released about five months ago (Dec 2019), which adds some enhancements.


Architecture from the original StyleGAN paper

The StyleGAN paper used the Flickr-Faces-HQ (FFHQ) dataset to produce artificial human faces, where the style can be interpreted as the pose, shape, and colorization of the image. The results of the paper received some media attention through a website showcasing such generated faces.


Creating Artificial Watch Images with StyleGAN