11 datasets found

Tags: vision-language pre-training

  • Open Images

    The Open Images dataset is a large-scale image dataset of roughly 9 million images annotated with image-level labels, object bounding boxes, and visual relationships, covering a wide range of indoor, outdoor, and street scenes.
  • COCO 5K

    A 5,000-image test split of MS-COCO commonly used to evaluate image-text retrieval; in the source paper it is used for unpaired vision-language pre-training via cross-modal CutMix.
  • Conceptual Captions 3M

    The Conceptual Captions 3M dataset is a large-scale image-text dataset of roughly 3.3 million image-caption pairs harvested from web alt-text, widely used for vision-language pre-training.
  • EPIC: Leveraging Per Image-Token Consistency for Vision-Language Pre-training

    EPIC is a pre-training method rather than a dataset: it leverages per image-token consistency so that more text tokens contribute to learning vision-language associations.
  • BLIP

    BLIP (Bootstrapping Language-Image Pre-training) is a vision-language pre-training framework that bootstraps its training corpus by generating synthetic captions for web images and filtering out noisy image-text pairs.
  • BookCorpus

    BookCorpus is a text corpus drawn from over 11,000 free, unpublished books; in the source paper it is used for unsupervised sentence representation learning.
  • SBU Captions

    The SBU Captions dataset is a large-scale image-text dataset of approximately 1 million Flickr images paired with user-written captions, used for vision-language pre-training.
  • COCO Captions

    COCO Captions pairs MS-COCO images with human-written captions (five per image) and is widely used for image captioning and vision-language pre-training.
  • ALIGN

    ALIGN refers to the noisy web-scale corpus of roughly 1.8 billion image-alt-text pairs introduced in "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision".
  • Visual Genome

    The Visual Genome dataset is a large-scale scene-understanding dataset of roughly 108,000 images, each densely annotated with objects, attributes, relationships, region descriptions, and question-answer pairs.
  • MSCOCO

    MS-COCO is a large-scale dataset for object detection, instance segmentation, keypoint-based human pose estimation, and image captioning, with over 200,000 labeled images supporting a wide range of downstream tasks.
You can also access this registry using the API (see API Docs).
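A minimal sketch of querying such a registry programmatically. The endpoint URL and parameter names (`tag`, `page`) are hypothetical assumptions, not taken from the API docs; substitute the real values from the registry's documentation.

```python
# Hypothetical sketch of a registry query; the endpoint and parameter
# names below are assumptions, not the documented API.
from urllib.parse import urlencode

BASE_URL = "https://example.org/api/v1/datasets"  # hypothetical endpoint


def build_query_url(tag: str, page: int = 1) -> str:
    """Build a search URL filtering the registry by tag."""
    params = urlencode({"tag": tag, "page": page})
    return f"{BASE_URL}?{params}"


# The resulting URL could then be fetched with any HTTP client,
# e.g. requests.get(url).json(), per the real API docs.
url = build_query_url("vision-language pre-training")
print(url)
```

Keeping URL construction separate from the HTTP call makes the query easy to test and to adapt once the real endpoint is known.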