2 datasets found

Tags: cross-modal learning

  • Places

    Places is a large cross-modal dataset of roughly 400k pairs, each an image from the Places 205 dataset matched with a corresponding spoken audio caption (a minimal loading sketch for this kind of paired data follows the list).
  • MSR-VTT

    MSR-VTT is a large video description dataset for bridging video and language. It contains 10k video clips with lengths ranging from 10 to...
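
Neither listing specifies a loader, but both datasets reduce to (media file, caption) pairs: image/spoken-caption for Places, video-clip/sentence for MSR-VTT. Below is a minimal PyTorch-style sketch of that shared structure. The manifest filename `pairs.json`, its schema, and the directory layout are assumptions made for illustration; they are not part of either dataset's official distribution.

```python
import json
from pathlib import Path

from torch.utils.data import Dataset


class PairedCaptionDataset(Dataset):
    """Minimal wrapper over (media, caption) file pairs.

    Hypothetical manifest schema, assumed for illustration:
        [{"media": "images/abc.jpg", "caption": "audio/abc.wav"}, ...]
    For Places the pair is image / spoken caption; for MSR-VTT it
    would be video clip / sentence.
    """

    def __init__(self, root: str, manifest: str = "pairs.json"):
        self.root = Path(root)
        # Hypothetical manifest file; real distributions ship their
        # own metadata formats.
        with open(self.root / manifest) as f:
            self.pairs = json.load(f)

    def __len__(self) -> int:
        return len(self.pairs)

    def __getitem__(self, idx: int):
        entry = self.pairs[idx]
        # Return raw paths; decoding (image/audio/video) is left to
        # the transform stage of the training pipeline.
        return self.root / entry["media"], self.root / entry["caption"]
```

Usage would look like `PairedCaptionDataset("/data/places_audio")` (hypothetical path), wrapped in a `torch.utils.data.DataLoader` with a collate function appropriate to the modality, e.g. padding variable-length audio or sampling fixed-length video clips.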