Generative Adversarial Text to Image Synthesis
Scott Reed, Zeynep Akata, Xinchen Yan,
Lajanugen Logeswaran, Bernt Schiele and Honglak Lee
Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories such as faces, album covers, room interiors and flowers. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.
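The released implementation accompanying the paper is in Torch. As a rough illustration of the conditional architecture described above, the sketch below shows a text-conditional DCGAN-style generator and discriminator in PyTorch: the generator concatenates a noise vector with a compressed text embedding, and the discriminator tiles the text code over its image feature map before judging real vs. fake. All layer sizes, the `project_txt` compression, and the 64x64 output resolution are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a text-conditional GAN (illustrative; not the released
# Torch/Lua implementation). Layer widths and dimensions are assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, txt_dim=1024, txt_proj=128, ngf=64):
        super().__init__()
        # Compress the sentence embedding, then concatenate with noise z.
        self.project_txt = nn.Sequential(
            nn.Linear(txt_dim, txt_proj), nn.LeakyReLU(0.2))
        self.net = nn.Sequential(  # 1x1 -> 64x64 via strided deconvolutions
            nn.ConvTranspose2d(z_dim + txt_proj, ngf * 8, 4, 1, 0),
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1),
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1),
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1),
            nn.BatchNorm2d(ngf), nn.ReLU(True),
            nn.ConvTranspose2d(ngf, 3, 4, 2, 1), nn.Tanh())  # 64x64 RGB

    def forward(self, z, txt):
        t = self.project_txt(txt)
        x = torch.cat([z, t], dim=1)[:, :, None, None]  # (B, z+t, 1, 1)
        return self.net(x)

class Discriminator(nn.Module):
    def __init__(self, txt_dim=1024, txt_proj=128, ndf=64):
        super().__init__()
        self.project_txt = nn.Sequential(
            nn.Linear(txt_dim, txt_proj), nn.LeakyReLU(0.2))
        self.conv = nn.Sequential(  # 64x64 image -> 4x4 feature map
            nn.Conv2d(3, ndf, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1),
            nn.BatchNorm2d(ndf * 2), nn.LeakyReLU(0.2),
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1),
            nn.BatchNorm2d(ndf * 4), nn.LeakyReLU(0.2),
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1),
            nn.BatchNorm2d(ndf * 8), nn.LeakyReLU(0.2))
        # Score real/fake after tiling the text code over the feature map.
        self.judge = nn.Sequential(
            nn.Conv2d(ndf * 8 + txt_proj, ndf * 8, 1),
            nn.BatchNorm2d(ndf * 8), nn.LeakyReLU(0.2),
            nn.Conv2d(ndf * 8, 1, 4), nn.Sigmoid())

    def forward(self, img, txt):
        h = self.conv(img)                              # (B, ndf*8, 4, 4)
        t = self.project_txt(txt)[:, :, None, None].expand(-1, -1, 4, 4)
        return self.judge(torch.cat([h, t], dim=1)).view(-1)
```

In the paper's matching-aware training (GAN-CLS), the discriminator is additionally shown real images paired with mismatched descriptions and must score them as fake, which forces it to judge text-image alignment rather than image realism alone.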
Paper, Code and Data
- If you use our code, please cite:
@inproceedings{RAYLLS16,
  title     = {Generative Adversarial Text to Image Synthesis},
  booktitle = {International Conference on Machine Learning (ICML)},
  year      = {2016},
  author    = {Scott Reed and Zeynep Akata and Xinchen Yan and Lajanugen Logeswaran and Bernt Schiele and Honglak Lee}
}