COCO-QA-OBJ
The COCO-QA-OBJ dataset is used for the object-counting task. It consists of 123,287 images, with 78,736 training and 38,948 test questions.
COCO-QA-LOC
The COCO-QA-LOC dataset is used for the location-identification task. It consists of 123,287 images, with 78,736 training and 38,948 test questions.
COCO-QA-ID
The COCO-QA-ID dataset is used for the color-identification task. It consists of 123,287 images, with 78,736 training and 38,948 test questions.
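The three COCO-QA subsets above are typically derived from one underlying release by filtering on question type. A minimal sketch of that split, assuming the commonly distributed layout of parallel text files (questions.txt, answers.txt, types.txt, img_ids.txt, one record per line) with type codes 0=object, 1=number, 2=color, 3=location; these file names and codes are assumptions, not confirmed by this document:

```python
import os
import tempfile

# Assumed COCO-QA type codes; not confirmed by this document.
TYPE_NAMES = {0: "object", 1: "number", 2: "color", 3: "location"}

def split_by_type(split_dir):
    """Group (image_id, question, answer) triples by question type."""
    def read(name):
        with open(os.path.join(split_dir, name)) as f:
            return [line.strip() for line in f]

    questions = read("questions.txt")
    answers = read("answers.txt")
    types = [int(t) for t in read("types.txt")]
    img_ids = read("img_ids.txt")

    subsets = {name: [] for name in TYPE_NAMES.values()}
    for q, a, t, i in zip(questions, answers, types, img_ids):
        subsets[TYPE_NAMES[t]].append((i, q, a))
    return subsets

# Demo on synthetic files mimicking the assumed layout.
demo = tempfile.mkdtemp()
data = {
    "questions.txt": "how many cats are there\nwhat is the color of the bus\nwhere is the dog\n",
    "answers.txt": "two\nred\ngrass\n",
    "types.txt": "1\n2\n3\n",
    "img_ids.txt": "123\n456\n789\n",
}
for name, text in data.items():
    with open(os.path.join(demo, name), "w") as f:
        f.write(text)

subsets = split_by_type(demo)
print({k: len(v) for k, v in subsets.items()})
```

Under this layout, the COCO-QA-OBJ, COCO-QA-LOC, and COCO-QA-ID entries above would correspond to the "object", "location", and "color" buckets respectively.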
Show and tell: A neural image caption generator
From show to tell: A survey on deep learning-based image captioning
Microsoft COCO
The Microsoft COCO dataset was used for training and evaluating the CNNs because it has become a standard benchmark for testing algorithms aimed at scene understanding and...
Self-Supervised Image Captioning with CLIP
Image captioning, a fundamental task in vision-language understanding, seeks to generate accurate natural language descriptions for provided images. Current image captioning...
MeaCap: Memory-Augmented Zero-shot Image Captioning
Zero-shot image captioning without well-paired image-text data can be divided into two categories: training-free and text-only-training. Generally, these two types of methods...