MAD: A Large-Scale Benchmark for Long-Form Video Temporal Grounding
MAD is a large-scale benchmark for long-form video temporal grounding, containing over 384K natural language queries derived from the high-quality audio descriptions of mainstream movies.
TACoS Speech
The TACoS Speech dataset is a spoken-query variant of the TACoS benchmark for temporal sentence grounding.
Charades-STA Speech
The Charades-STA Speech dataset is the spoken-query counterpart of Charades-STA, with speech versions of its sentence annotations.
ActivityNet Speech
The ActivityNet Speech dataset pairs spoken queries with ActivityNet's open-world videos, which contain more shot transitions than those of TACoS and Charades-STA.
Dense regression network for video grounding
Semantic conditioned dynamic modulation for temporal sentence grounding in videos
Multilevel language and vision integration for text-to-clip retrieval
TALL: Temporal activity localization via language query
Support-Set Based Cross-Supervision for Video Grounding
Charades-STA
The Charades-STA dataset contains 12,408/3,720 segment-sentence pairs and 5,338/1,334 videos in the training and test sets, respectively.
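For reference, the Charades-STA annotations are commonly distributed as plain-text files in which each line packs a video ID, start/end timestamps in seconds, and a sentence separated by '##'. A minimal parsing sketch in Python, assuming that line format and a hypothetical file name charades_sta_train.txt:

from pathlib import Path

def parse_charades_sta(path):
    # Assumed line format: 'VIDEO_ID START END##sentence'
    # e.g. 'AO8RW 0.0 6.9##a person is putting a book on a shelf.'
    pairs = []
    for line in Path(path).read_text().splitlines():
        if not line.strip():
            continue
        meta, sentence = line.split('##', 1)
        video_id, start, end = meta.split()
        pairs.append((video_id, float(start), float(end), sentence.strip()))
    return pairs

# pairs = parse_charades_sta('charades_sta_train.txt')  # ~12,408 training pairs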
Language-free Training for Zero-shot Video Grounding
Given an untrimmed video and a natural language query, video grounding aims to localize the time interval that the query describes by jointly understanding the text and the video.
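Concretely, a grounding model maps a (video, query) pair to a predicted interval (t_start, t_end), and predictions on these benchmarks are typically scored by temporal IoU against the ground-truth span (e.g., the common 'R@1, IoU=0.5' metric). A minimal sketch of that score; the function name is illustrative, not from any specific codebase:

def temporal_iou(pred, gt):
    # IoU of two time intervals, each given as (start, end) in seconds.
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

# A prediction is counted correct at threshold 0.5 when temporal_iou(pred, gt) >= 0.5.
print(temporal_iou((2.0, 7.5), (3.0, 8.0)))  # 0.75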
Localizing moments in video with natural language
ActivityNet Captions
ActivityNet Captions is a benchmark dataset proposed for dense video captioning. It contains 20K untrimmed videos in total, and each video has several annotated segments, each paired with a descriptive sentence.
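The ActivityNet Captions annotations are released as JSON keyed by video ID, with each entry holding the video duration plus parallel lists of segment timestamps and sentences. A minimal loading sketch, assuming that layout and a hypothetical file name train.json:

import json

with open('train.json') as f:  # hypothetical path
    anns = json.load(f)

for video_id, entry in anns.items():
    # Each (timestamp, sentence) pair is one grounding target within the video.
    for (start, end), sentence in zip(entry['timestamps'], entry['sentences']):
        print(video_id, entry['duration'], start, end, sentence)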