Since their introduction in 2014, generative adversarial networks (GANs) have spawned a “GAN Zoo” of more than 500 published architectures and variations. Some of the most surprising applications of GANs have been in the creation and modification of art. This video presents a selection of GAN systems, along with the art that each produces.

Deep Dream (2015): A technique in which an image is repeatedly modified to maximize the response of selected layers in a trained image-recognition network, often leading to hallucinogenic results. Deep Dream remains popular among a worldwide community of AI artists.
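The core of Deep Dream is gradient ascent on the input image itself. The sketch below illustrates that idea only: the real system uses a deep convolutional network (GoogLeNet), whereas here a fixed random linear map stands in for the chosen layer, so the gradient can be written out by hand.

```python
import numpy as np

# Toy illustration of the Deep Dream idea, NOT the real GoogLeNet pipeline:
# repeatedly nudge the image in the direction that increases the activation
# of a chosen "layer". The layer here is a hypothetical random linear map.

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))  # stand-in layer weights (16 units, 8x8 input)

def layer_activation(img):
    """Mean activation of the toy layer for a flattened 8x8 image."""
    return float(np.mean(W @ img.ravel()))

def deep_dream_step(img, lr=0.1):
    """One gradient-ascent step on the image. For this linear layer the
    gradient of the mean activation is just the mean of the rows of W."""
    grad = (W.sum(axis=0) / W.shape[0]).reshape(img.shape)
    return img + lr * grad

img = rng.standard_normal((8, 8))
before = layer_activation(img)
for _ in range(50):
    img = deep_dream_step(img)
after = layer_activation(img)
# after > before: the modified image now excites the layer more strongly
```

In the real system the same loop runs over a deep network's feature maps, with the gradient computed by backpropagation; the "hallucinations" appear because the image drifts toward patterns those layers respond to.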

Style Transfer (and Artbreeder) (2016): An image-modification technique in which the AI system extracts the style of one image or artist and applies it to another image. Artists using these tools often put an image through many generations of style transformations, mixing and tuning different styles along the way.

VQGAN+CLIP (2021): Two networks work together to produce an image from a text prompt. An image-generating network (the VQGAN) tries to produce what a network that scores image–text match (CLIP) is looking for, and with each iteration it gets closer to that match. Finding the verbal description that yields an interesting image requires skill; the general approach is called “prompt engineering” because the human artist creates the work by writing and fine-tuning the prompts.
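The loop the text describes can be sketched in miniature. Every component below is a hypothetical stand-in (a linear "generator", a fixed vector for the text embedding, a cosine score for CLIP); the point is only the shape of the iteration: score the current image against the prompt, then adjust the latent to raise the score.

```python
import numpy as np

# Toy sketch of the VQGAN+CLIP loop. Real systems use large pretrained
# networks; here a random linear map stands in for the generator and a
# fixed vector stands in for the CLIP embedding of the text prompt.

rng = np.random.default_rng(1)
G = rng.standard_normal((32, 8))    # stand-in generator: latent -> image embedding
text_emb = rng.standard_normal(32)  # stand-in embedding of the text prompt
text_emb /= np.linalg.norm(text_emb)

def clip_score(z):
    """Cosine similarity between the generated image and the prompt."""
    img_emb = G @ z
    return float(img_emb @ text_emb / np.linalg.norm(img_emb))

def step(z, lr=0.05, eps=1e-4):
    """Nudge the latent uphill, using a numerical gradient of the score."""
    grad = np.zeros_like(z)
    for i in range(len(z)):
        dz = np.zeros_like(z)
        dz[i] = eps
        grad[i] = (clip_score(z + dz) - clip_score(z - dz)) / (2 * eps)
    return z + lr * grad

z = rng.standard_normal(8)
scores = [clip_score(z)]
for _ in range(100):
    z = step(z)
    scores.append(clip_score(z))
# scores rise over the iterations: the image matches the prompt better
```

In the real pipeline the gradient flows by backpropagation through both networks, but the artist's control point is the same: the text prompt that defines what "a better match" means.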


Miquel Perelló Nieto, Deep Dream through all the layers of GoogleNet (excerpts), 2016

Style Transfer and Artbreeder