
VGDiffZero: Text-to-Image Diffusion Models Can Be Zero-Shot Visual Grounders

VGDiffZero is a zero-shot visual grounding framework that leverages the vision-language alignment learned by pre-trained text-to-image diffusion models to localize the image region described by a text query, without any grounding-specific training.

Data and Resources

This dataset has no data

Cite this as

Xuyang Liu, Siteng Huang, Yachen Kang, Honggang Chen, Donglin Wang (2024). Dataset: VGDiffZero: Text-to-Image Diffusion Models Can Be Zero-Shot Visual Grounders. https://doi.org/10.57702/5c7ldj53

Private DOI: This DOI is not yet resolvable.
It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Defined in: https://doi.org/10.48550/arXiv.2309.01141
Author: Xuyang Liu
More authors: Siteng Huang, Yachen Kang, Honggang Chen, Donglin Wang
Homepage: https://github.com/xuyang-liu16/VGDiffZero