
LatteGAN: Visually Guided Language Attention for Multi-Turn Text-Conditioned Image Manipulation

Text-guided image manipulation has recently gained attention in the vision-and-language community. The GeNeVA task is a multi-turn text-conditioned image generation (MTIM) task involving two participants: a Teller, who instructs how the image should be modified, and a Drawer, who draws the image according to the Teller's instructions.
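To make the task setup concrete, the following is a minimal sketch of the multi-turn interaction, assuming a hypothetical Drawer model with a draw(instruction, prev_image) method; the class and function names are illustrative and do not correspond to the LatteGAN codebase.

```python
from typing import List
import numpy as np


class Drawer:
    """Hypothetical Drawer model: given a Teller instruction and the
    previous canvas, produce an updated image (stub implementation)."""

    def draw(self, instruction: str, prev_image: np.ndarray) -> np.ndarray:
        # A real Drawer (e.g., a text-conditioned GAN generator) would
        # condition on the instruction and the previous image; here we
        # simply return the previous canvas as a placeholder.
        return prev_image


def run_dialogue(drawer: Drawer, instructions: List[str],
                 image_size: int = 128) -> List[np.ndarray]:
    """Run one multi-turn session: start from a blank canvas and apply
    each Teller instruction in order, carrying the image from the
    previous turn forward as context."""
    canvas = np.zeros((image_size, image_size, 3), dtype=np.float32)
    history = []
    for instruction in instructions:
        canvas = drawer.draw(instruction, canvas)
        history.append(canvas)
    return history


# Example Teller instructions for a single dialogue.
turns = [
    "add a small red circle in the center",
    "place a blue square to the left of the circle",
]
images = run_dialogue(Drawer(), turns)
```

The key point of the loop is that each turn conditions on both the new instruction and the image produced in the previous turn, which is what distinguishes MTIM from single-shot text-to-image generation.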

Data and Resources

This dataset has no data

Cite this as

Shoya Matsumori, Yuki Abe, Kosuke Shingyouchi, Komei Sugiura, Michita Imai (2024). Dataset: LatteGAN: Visually Guided Language Attention for Multi-Turn Text-Conditioned Image Manipulation. https://doi.org/10.57702/c1w5v3ns

Private DOI: This DOI is not yet resolvable. It is available for use in manuscripts and will be published when the Dataset is made public.

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Author: Shoya Matsumori
More Authors: Yuki Abe, Kosuke Shingyouchi, Komei Sugiura, Michita Imai
Homepage: https://github.com/smatsumori/LatteGAN