
MUSE: Text-to-Image Generation via Masked Generative Transformers

MUSE is a text-to-image generation model based on masked generative Transformers: it is trained to predict randomly masked image tokens in the discrete token space of an image tokenizer, conditioned on text embeddings from a pre-trained large language model.
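For orientation, the sketch below illustrates the general masked-token training objective this family of models is built around: a random subset of discrete image tokens is replaced with a MASK id, and a Transformer predicts the original ids at those positions given a text conditioning vector. All sizes, module names (ToyMaskedTransformer, masked_token_loss), and the toy conditioning scheme are illustrative assumptions, not the released Muse implementation.

```python
# Minimal sketch of masked image-token prediction with text conditioning.
# Every hyperparameter and module here is a hypothetical stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE = 8192      # assumed size of the discrete image-token codebook
MASK_ID = VOCAB_SIZE   # extra id used for masked positions
SEQ_LEN = 256          # e.g. a 16x16 grid of image tokens
TEXT_DIM = 64          # toy dimensionality for the text conditioning vector

class ToyMaskedTransformer(nn.Module):
    """Predicts original token ids at masked positions, conditioned on text."""
    def __init__(self, dim=128, heads=4, layers=2):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB_SIZE + 1, dim)  # +1 for MASK_ID
        self.pos_emb = nn.Parameter(torch.zeros(SEQ_LEN, dim))
        self.text_proj = nn.Linear(TEXT_DIM, dim)
        enc_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)
        self.head = nn.Linear(dim, VOCAB_SIZE)

    def forward(self, tokens, text_embedding):
        x = self.tok_emb(tokens) + self.pos_emb
        # Broadcast the projected text embedding across all token positions.
        x = x + self.text_proj(text_embedding).unsqueeze(1)
        return self.head(self.encoder(x))

def masked_token_loss(model, image_tokens, text_embedding, mask_ratio=0.5):
    """Mask a random subset of image tokens and score predictions only there."""
    mask = torch.rand_like(image_tokens, dtype=torch.float) < mask_ratio
    corrupted = image_tokens.masked_fill(mask, MASK_ID)
    logits = model(corrupted, text_embedding)
    return F.cross_entropy(logits[mask], image_tokens[mask])

if __name__ == "__main__":
    model = ToyMaskedTransformer()
    tokens = torch.randint(0, VOCAB_SIZE, (2, SEQ_LEN))  # pretend VQ tokens
    text = torch.randn(2, TEXT_DIM)                      # pretend text embedding
    print(masked_token_loss(model, tokens, text).item())
```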

Data and Resources

Cite this as

Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, José Lezama, Lu Jiang, Ming Yang, Kevin P. Murphy, William T. Freeman, Michael Rubinstein, Yuanzhen Li, Dilip Krishnan (2024). Dataset: MUSE: Text-to-Image Generation via Masked Generative Transformers. https://doi.org/10.57702/2m8sue9v

DOI retrieved: December 2, 2024

Additional Info

Field          Value
Created        December 2, 2024
Last update    December 2, 2024
Defined In     https://doi.org/10.48550/arXiv.2312.02133
Author         Huiwen Chang
More Authors   Han Zhang, Jarred Barber, AJ Maschinot, José Lezama, Lu Jiang,
               Ming Yang, Kevin P. Murphy, William T. Freeman, Michael Rubinstein,
               Yuanzhen Li, Dilip Krishnan