MeaCap: Memory-Augmented Zero-shot Image Captioning
Zero-shot image captioning (IC) without well-paired image-text data can be divided into two categories: training-free and text-only-training. Both types of methods typically realize zero-shot IC by combining a pre-trained vision-language model such as CLIP, used to evaluate image-text similarity, with a pre-trained language model (LM) that generates the caption.
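As a rough illustration of the CLIP-based image-text similarity scoring that both families of methods rely on, the sketch below ranks a handful of candidate captions against an image using the Hugging Face `transformers` CLIP API. The checkpoint name, image path, and candidate captions are placeholders for illustration; this is not MeaCap's actual generation procedure.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Placeholder candidate captions (e.g., proposals from a pre-trained LM).
candidates = [
    "a dog playing with a ball on the grass",
    "a group of people sitting around a table",
    "a red car parked on the street",
]

# Load a public CLIP checkpoint (assumed here; MeaCap may use a different one).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder image path

# Encode the image and all candidate captions in one batch.
inputs = processor(text=candidates, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image-text similarity for each candidate caption.
scores = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
best = candidates[scores.argmax().item()]
print(f"Best caption by CLIP similarity: {best}")
```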
BibTeX: