MeaCap: Memory-Augmented Zero-shot Image Captioning

Zero-shot image captioning without well-paired image-text data can be divided into two categories: training-free and text-only-training. Generally, both types of methods realize zero-shot image captioning by integrating a pre-trained vision-language model such as CLIP for image-text similarity evaluation with a pre-trained language model (LM) for caption generation.
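The core mechanism shared by both families of methods is scoring candidate captions against an image by embedding similarity. The sketch below illustrates this with plain NumPy; the `rank_captions` helper and the toy embeddings are illustrative assumptions, since in practice the vectors would come from CLIP's image and text encoders:

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity, the standard CLIP-style image-text score.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_captions(image_emb, caption_embs):
    # Score every candidate caption against the image embedding and
    # return candidate indices sorted from best to worst match.
    scores = [cosine_sim(image_emb, c) for c in caption_embs]
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return order, scores

# Toy vectors standing in for real CLIP embeddings (assumption for illustration).
image_emb = np.array([0.9, 0.1, 0.0])
caption_embs = [
    np.array([0.1, 0.9, 0.0]),  # weakly related caption
    np.array([0.8, 0.2, 0.1]),  # closely related caption
]
order, scores = rank_captions(image_emb, caption_embs)
print(order[0])  # index of the best-matching caption
```

In a text-only-training or training-free pipeline, candidates proposed by the language model would be filtered or re-ranked with exactly this kind of similarity score.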

Data and Resources

This dataset has no data

Cite this as

Zequn Zeng, Yan Xie, Hao Zhang, Chiyu Chen, Bo Chen (2024). Dataset: MeaCap: Memory-Augmented Zero-shot Image Captioning. https://doi.org/10.57702/m1173b82

Private DOI: This DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Field: Value
Created: December 2, 2024
Last update: December 2, 2024
Defined in: https://doi.org/10.48550/arXiv.2403.03715
Author: Zequn Zeng
More authors: Yan Xie, Hao Zhang, Chiyu Chen, Bo Chen
Homepage: https://github.com/joeyz0z/MeaCap