Location: Room 355 DEF, Calvin L. Rampton Salt Palace Convention Center in Salt Lake City, Utah
Generative adversarial networks (GANs) have been at the forefront of research on generative models over the past few years. GANs have been used for image generation, image processing, image synthesis from captions, image editing, visual domain adaptation, data generation for visual recognition, and many other applications, often leading to state-of-the-art results.
This tutorial aims to provide a broad overview of generative adversarial networks, organized in three parts.
Examples of the power of generative adversarial networks include generating realistic high-resolution portraits and dash-cam driving scenes.
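To make the adversarial idea concrete, here is a minimal sketch of GAN training on toy 1-D data. It assumes PyTorch; the network sizes, toy data distribution, and hyperparameters are illustrative only and are not drawn from any of the talks below.

```python
import torch
import torch.nn as nn

latent_dim = 8

# Generator maps noise to a 1-D sample; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = 0.5 * torch.randn(64, 1) + 2.0    # toy "real" data: N(2, 0.25)
    fake = G(torch.randn(64, latent_dim))    # generated samples

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step (non-saturating loss): push D(G(z)) toward 1.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```

The detach() call keeps the discriminator update from propagating gradients into the generator; the generator step uses the non-saturating loss from the original GAN formulation, implemented here as cross-entropy against "real" labels.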
10:00 Unpaired Image-to-Image Translation with CycleGAN [slides] [pdf] [video] (a minimal cycle-consistency sketch appears after the schedule)
Taesung Park, UC Berkeley; Jun-Yan Zhu, MIT
10:30 Coffee Break
11:00 Can GANs actually learn the distribution? Some obstacles [slides] [video]
Sanjeev Arora, Princeton
11:45 Learning Disentangled Representations with an Adversarial Loss [slides] [video]
Emily Denton, NYU
12:15 Lunch break
15:00 Coffee break
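As a companion to the CycleGAN talk above, here is a minimal sketch of the cycle-consistency term at the heart of unpaired translation. It assumes PyTorch, and the generators G and F are hypothetical stand-in linear layers on small vectors; in the talk's setting they are convolutional image-to-image networks.

```python
import torch
import torch.nn as nn

# Stand-in generators on 3-D vectors; real CycleGAN uses conv nets on images.
G = nn.Linear(3, 3)   # hypothetical generator G: X -> Y
F = nn.Linear(3, 3)   # hypothetical generator F: Y -> X
l1 = nn.L1Loss()

x = torch.randn(16, 3)   # unpaired batch from domain X
y = torch.randn(16, 3)   # unpaired batch from domain Y

# Cycle consistency: translating to the other domain and back should
# reconstruct the input, i.e. F(G(x)) ~ x and G(F(y)) ~ y.
loss_cycle = l1(F(G(x)), x) + l1(G(F(y)), y)
print(loss_cycle.item())
```

In the full CycleGAN objective this L1 term is combined with an adversarial loss for each generator and weighted against them by a hyperparameter; the cycle constraint is what lets training proceed without paired examples from the two domains.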
This tutorial was organized by Jun-Yan Zhu, Taesung Park, Mihaela Rosca, Phillip Isola, and Ian Goodfellow.
Contact the organizers at cvpr2018gantutorial@googlegroups.com.