MeaCap: Memory-Augmented Zero-shot Image Captioning

Zero-shot image captioning (IC) without well-paired image-text data falls into two categories: training-free and text-only training. Both types of methods typically realize zero-shot IC by combining a pre-trained vision-language model such as CLIP, used to evaluate image-text similarity, with a pre-trained language model (LM), used to generate candidate captions.
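The combination described above can be sketched as a scoring loop: an LM proposes candidate captions, and a CLIP-style encoder ranks them by image-text similarity. The sketch below is illustrative only; `embed_image`, `embed_text`, and the toy vectors are assumptions standing in for real CLIP embeddings, not part of MeaCap's released code.

```python
import math

# Hypothetical stand-ins for CLIP's encoders: a real pipeline would
# compute embeddings with a pre-trained vision-language model; here we
# use fixed toy vectors purely to illustrate the ranking step.
def embed_image(_image_id):
    return [0.9, 0.1, 0.4]

def embed_text(caption):
    # Toy lookup table of pre-computed "text embeddings" (assumption).
    table = {
        "a dog on the grass": [0.8, 0.2, 0.5],
        "a city skyline at night": [0.1, 0.9, 0.2],
    }
    return table[caption]

def cosine(u, v):
    # Cosine similarity, the usual CLIP image-text score.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_captions(image_id, candidates):
    """Score LM-generated candidate captions by image-text similarity
    and return them best-first, as a training-free method would."""
    img = embed_image(image_id)
    return sorted(candidates,
                  key=lambda c: cosine(img, embed_text(c)),
                  reverse=True)

best = rank_captions("demo",
                     ["a city skyline at night", "a dog on the grass"])[0]
print(best)  # -> a dog on the grass
```

In practice the LM-side generation and the CLIP-side scoring are interleaved (e.g. guiding decoding step by step), but the core signal is the same similarity score shown here.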

Cite this as

Zequn Zeng, Yan Xie, Hao Zhang, Chiyu Chen, Bo Chen (2024). Dataset: MeaCap: Memory-Augmented Zero-shot Image Captioning. https://doi.org/10.57702/m1173b82

DOI retrieved: December 2, 2024

Additional Info

Field         Value
Created       December 2, 2024
Last update   December 2, 2024
Defined in    https://doi.org/10.48550/arXiv.2403.03715
Author        Zequn Zeng
More authors  Yan Xie, Hao Zhang, Chiyu Chen, Bo Chen
Homepage      https://github.com/joeyz0z/MeaCap