
MULTI-CONCEPT T2I-ZERO: TWEAKING ONLY THE TEXT EMBEDDINGS AND NOTHING ELSE

The resource associated with this paper is a pretrained text-to-image diffusion model, Stable Diffusion. The authors use this model to generate images from text prompts and evaluate it on multi-concept image synthesis, image manipulation, and personalization tasks.
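Below is a minimal sketch (not the authors' implementation) of generating an image from manually constructed text embeddings with Stable Diffusion via the Hugging Face diffusers library. The idea of modifying only the text embeddings is taken from the paper title; the specific tweak shown here (scaling a few token embeddings) and the model checkpoint are illustrative assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint (assumed; the paper may use a different one).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a cat and a dog playing in a park"

# Encode the prompt into per-token text embeddings with the pipeline's
# CLIP tokenizer and text encoder.
tokens = pipe.tokenizer(
    prompt,
    padding="max_length",
    max_length=pipe.tokenizer.model_max_length,
    truncation=True,
    return_tensors="pt",
).to("cuda")
with torch.no_grad():
    prompt_embeds = pipe.text_encoder(tokens.input_ids)[0]

# Hypothetical tweak: rescale the embeddings of a few tokens (e.g. the
# concept words) before passing them to the diffusion model.
prompt_embeds[:, 2:4] = prompt_embeds[:, 2:4] * 1.1

# Generate directly from the (modified) embeddings instead of a raw prompt string.
image = pipe(prompt_embeds=prompt_embeds, num_inference_steps=50).images[0]
image.save("multi_concept_sample.png")
```

Passing `prompt_embeds` instead of a plain prompt lets the embeddings be edited before denoising, which is the general mechanism a text-embedding-only method would rely on; the actual edits proposed in the paper are described in the publication linked under "Defined In" below.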

Data and Resources

This dataset has no data

Cite this as

Hazarapet Tunanyan, Dejia Xu, Shant Navasardyan, Zhangyang Wang, Humphrey Shi (2024). Dataset: MULTI-CONCEPT T2I-ZERO: TWEAKING ONLY THE TEXT EMBEDDINGS AND NOTHING ELSE. https://doi.org/10.57702/8jbure2o

Private DOI: This DOI is not yet resolvable.
It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Defined In: https://doi.org/10.48550/arXiv.2310.07419
Author: Hazarapet Tunanyan
More Authors: Dejia Xu, Shant Navasardyan, Zhangyang Wang, Humphrey Shi
Homepage: https://multi-concept-t2i-zero.github.io/