Conceptual Captions 12M and RedCaps
The datasets used in the paper are Conceptual Captions 12M (CC12M) and RedCaps.
Conceptual Captions 3M, Conceptual Captions 12M, RedCaps, and LAION-400M
The datasets used in the paper are Conceptual Captions 3M (CC3M), Conceptual Captions 12M (CC12M), RedCaps, and LAION-400M.
Conceptual Captions
The dataset used in the paper "Scaling Laws of Synthetic Images for Model Training". The dataset is used for supervised image classification and zero-shot classification tasks. -
Conceptual Captions 3M
The Conceptual Captions 3M (CC3M) dataset is a large-scale image-text dataset of roughly 3.3 million image-caption pairs harvested from web alt-text, used for vision-language pre-training.
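Since CC3M is released as caption/URL pairs rather than as images, a minimal loading sketch may be useful; the file name and column layout below are assumptions about a local copy of the official TSV release, and the images themselves would still need to be downloaded separately.

```python
import pandas as pd

# CC3M is distributed as TSV files of (caption, image URL) pairs with no header row.
# The file name and column order here are assumptions about the official release.
cc3m = pd.read_csv(
    "Train_GCC-training.tsv",   # assumed local copy of the CC3M training split
    sep="\t",
    names=["caption", "url"],
)

print(len(cc3m), "caption/URL pairs")
print(cc3m.iloc[0]["caption"], "->", cc3m.iloc[0]["url"])
```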
MSCOCO dataset
The MSCOCO dataset is a large-scale image captioning dataset; under the widely used Karpathy split it provides 113,287 training images, 5,000 validation images, and 5,000 test images, each paired with five human-written captions. The dataset is used for training and evaluating image captioning models.
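The caption annotations can be read with the pycocotools library, as in the sketch below; the annotation path assumes the official 2014 caption files have been extracted to annotations/, and the snippet does not itself reproduce the Karpathy split, which is distributed separately.

```python
from pycocotools.coco import COCO

# Load the official COCO 2014 caption annotations (path is an assumption
# about where the annotation zip was extracted).
coco = COCO("annotations/captions_train2014.json")

img_ids = coco.getImgIds()
print(len(img_ids), "training images with captions")

# Each image is paired with several human-written captions.
ann_ids = coco.getAnnIds(imgIds=img_ids[0])
for ann in coco.loadAnns(ann_ids):
    print(ann["caption"])
```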
COCO Captions
COCO Captions pairs the MS COCO images with human-written captions and is a standard benchmark for image captioning; like object detection, captioning requires large annotated datasets that are difficult to collect.
Visual Genome
The Visual Genome dataset is a large-scale collection of densely annotated images, containing roughly 108,000 images, each with tens of annotated objects, attributes, and pairwise relationships, along with region descriptions and question-answer pairs; it is widely used for visual question answering and scene-graph tasks.
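Visual Genome ships its annotations as large JSON files (objects.json, relationships.json, and so on); the sketch below assumes a local copy of objects.json and the commonly documented field layout (image_id, objects), which should be treated as an assumption rather than a guaranteed schema.

```python
import json

# Parse the Visual Genome object annotations. Field names reflect the
# commonly documented layout and are an assumption about the release used.
with open("objects.json") as f:
    images = json.load(f)  # one entry per image

first = images[0]
print("image", first["image_id"], "has", len(first["objects"]), "objects")

# Per-image object counts give a sense of annotation density.
counts = [len(entry["objects"]) for entry in images]
print("average objects per image:", sum(counts) / len(counts))
```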