9 datasets found

Tags: image-text pairs

  • Conceptual Captions 12M and RedCaps

The datasets used in the paper are Conceptual Captions 12M (CC12M) and RedCaps.
  • Conceptual Captions 3M, Conceptual Captions 12M, RedCaps, and LAION-400M

The datasets used in the paper are Conceptual Captions 3M (CC3M), Conceptual Captions 12M (CC12M), RedCaps, and LAION-400M.
  • Conceptual Captions

The dataset is used in the paper "Scaling Laws of Synthetic Images for Model Training" for supervised image classification and zero-shot classification tasks.
  • Conceptual Captions 3M

    The Conceptual Captions 3M dataset is a large-scale image-text dataset used for vision-language pre-training.
  • MSCOCO dataset

The MSCOCO dataset is a large-scale image captioning dataset, containing 113,287 training images, 5,000 validation images, and 5,000 test images (the Karpathy split). The dataset is used for training and evaluating captioning models; see the loading sketch after this list.
  • COCO Captions

    Object detection is a fundamental task in computer vision, requiring large annotated datasets that are difficult to collect.
  • Visual Genome

The Visual Genome dataset is a large-scale dataset of densely annotated images widely used for visual question answering, containing roughly 108K images, each with 15-30 annotated entities, attributes, and relationships.
  • MS-COCO

Large-scale datasets [18, 17, 27, 6] have boosted the quality of text-conditional image generation. However, in some domains it can be difficult to build such datasets, and usually it could...
  • MSCOCO

Human Pose Estimation (HPE) aims to estimate the position of each joint of the human body in a given image. HPE supports a wide range of downstream tasks such as...
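
For concreteness, the sketch below shows one common way to consume the kind of image-text pairs these datasets provide, using COCO Captions via torchvision (which wraps pycocotools). This is a minimal illustration, not part of any listing above; the file paths are placeholder assumptions for a local COCO download.

```python
# Minimal sketch: iterating image-caption pairs from COCO Captions.
# Assumes torchvision and pycocotools are installed and COCO has been
# downloaded locally; both paths below are hypothetical placeholders.
from torchvision.datasets import CocoCaptions

dataset = CocoCaptions(
    root="coco/val2017",                               # hypothetical image directory
    annFile="coco/annotations/captions_val2017.json",  # hypothetical annotation file
)

image, captions = dataset[0]  # a PIL image and its reference captions (typically 5)
print(len(dataset), captions[0])
```

Each item yields an (image, captions) pair, the same image-text format that the pre-training corpora above (CC3M, CC12M, RedCaps, LAION-400M) supply, though those are usually distributed as URL-caption pairs rather than local files.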