5 datasets found

Tags: multimodal learning

  • MSRVTT-QA

    MSRVTT-QA is a video question answering (VideoQA) benchmark built on the MSR-VTT videos; systems must understand the visual content of a clip and answer open-ended natural language questions about it.
  • MSVD-QA

    The MSVD-QA dataset is a benchmark for video question answering, containing 1,970 video clips paired with open-ended questions and short answers.
  • EgoSchema

    EgoSchema is a diagnostic benchmark for assessing very long-form video-language understanding capabilities of modern multimodal systems.
  • MSVD

    MSVD (Microsoft Research Video Description Corpus) contains 1,970 short video clips, each annotated with multiple natural language descriptions; it is widely used for video captioning and text-video retrieval.
  • MSR-VTT

    MSR-VTT is a large-scale video description dataset for bridging video and language, containing 10k video clips, each annotated with multiple natural language captions.
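
Open-ended VideoQA benchmarks such as MSVD-QA and MSRVTT-QA are commonly scored by exact-match accuracy over short answers. The sketch below illustrates that metric on hypothetical records; the field names (`video_id`, `question`, `answer`) are assumptions for illustration, not the datasets' actual file format.

```python
# Hypothetical VideoQA records; real MSVD-QA / MSRVTT-QA files differ in layout.
predictions = [
    {"video_id": "vid001", "question": "what is the man riding?", "answer": "Horse"},
    {"video_id": "vid002", "question": "how many dogs are shown?", "answer": "two"},
]
ground_truth = {
    ("vid001", "what is the man riding?"): "horse",
    ("vid002", "how many dogs are shown?"): "three",
}

def exact_match_accuracy(preds, gt):
    """Fraction of predictions whose normalized answer matches the reference."""
    if not preds:
        return 0.0
    correct = sum(
        1
        for p in preds
        if gt.get((p["video_id"], p["question"]), "").strip().lower()
        == p["answer"].strip().lower()
    )
    return correct / len(preds)

print(exact_match_accuracy(predictions, ground_truth))  # 0.5
```

Normalizing case and whitespace before comparing avoids penalizing trivially different surface forms, which matters because these benchmarks' answers are typically single words.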