UniCtrl: Improving the Spatiotemporal Consistency of Text-to-Video Diffusion ...
Video diffusion models have been developed for video generation, usually integrating text and image conditioning to enhance control over the generated content.
MSR-VTT and UCF-101
The paper uses MSR-VTT and UCF-101, two public video-text datasets. MSR-VTT contains 10,000 video clips with 20 manually annotated captions each; UCF-101 contains 13,320 videos spanning 101 action categories.
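For reference, a minimal sketch of grouping MSR-VTT captions by clip; the file name and field names are assumptions based on the public train_val_videodatainfo.json release, which carries top-level "videos" and "sentences" lists:

```python
import json
from collections import defaultdict

# Group MSR-VTT captions by clip. Assumes the public release's
# "train_val_videodatainfo.json" layout, where each entry in the
# top-level "sentences" list carries a "video_id" and a "caption".
with open("train_val_videodatainfo.json") as f:
    anno = json.load(f)

captions = defaultdict(list)
for sent in anno["sentences"]:
    captions[sent["video_id"]].append(sent["caption"])

# Each clip should come back with 20 human-written captions.
some_id = next(iter(captions))
print(some_id, len(captions[some_id]), captions[some_id][0])
```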
CustomStudio
A comprehensive benchmark for multi-subject-driven text-to-video generation, covering a wide range of subject categories and diverse subject pairs.
Emu Video Edit Training Dataset
The Emu Video Edit model's training dataset, containing 1,600 videos with 7 editing instructions each.
Tune-A-Video
The dataset used in the paper for video editing tasks.
Open-Sora Plan
The dataset used in this paper for text-to-video generation, consisting of short video clips.
VideoCrafter1
The dataset used in this paper for text-to-video generation, consisting of short video clips.
VideoCrafter2
The dataset used in this paper for text-to-video generation, consisting of short video clips.
ModelScope text-to-video
The dataset used in the paper for text-to-video diffusion models.
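The ModelScope T2V model is also available through the diffusers library; a minimal inference sketch, assuming the damo-vilab/text-to-video-ms-1.7b checkpoint on the Hugging Face Hub (prompt and output path are placeholders):

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# Minimal text-to-video inference sketch using the diffusers port of
# the ModelScope T2V model; requires a CUDA GPU for fp16 inference.
pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Generate a short clip and write it to disk.
frames = pipe("a panda surfing a wave", num_frames=16).frames[0]
export_to_video(frames, "panda.mp4")
```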
DAVIS and WebVid datasets
The dataset is not described in detail; the authors report using 26 text-video pairs drawn from the public DAVIS and WebVid datasets.
WebVid-10M: A large-scale video dataset for text-to-video generation
A large-scale dataset of roughly 10 million video-caption pairs collected from the web, widely used for training text-to-video generation models.
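A short sketch for inspecting the WebVid-10M metadata; the CSV file name and column names ("videoid", "name" for the caption, "contentUrl") are assumptions based on the public download:

```python
import pandas as pd

# Inspect WebVid-10M metadata. Assumes the released CSV schema with
# "videoid", "name" (the caption), and "contentUrl" columns; the file
# name is an assumption based on the public download.
meta = pd.read_csv("results_10M_train.csv")
print(f"{len(meta)} video-caption pairs")
print(meta[["videoid", "name", "contentUrl"]].head())
```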
MTVG: Multi-text Video Generation with Text-to-Video Models
The authors used the pre-trained diffusion-based text-to-video (T2V) generation model without additional fine-tuning.
UCF101 dataset
The UCF101 dataset is used to test the proposed text-to-video model. It contains 13,320 videos spanning 101 action categories, and each video is labeled with its action class.
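For evaluation pipelines, UCF-101 clips can be loaded with torchvision's built-in dataset class (which requires the PyAV video backend); a minimal sketch with placeholder paths:

```python
from torchvision.datasets import UCF101

# Load UCF-101 clips with torchvision's built-in dataset class.
# Both paths are placeholders; the annotation folder holds the
# official trainlist/testlist split files from the UCF101 download.
dataset = UCF101(
    root="UCF-101/",
    annotation_path="ucfTrainTestlist/",
    frames_per_clip=16,
    train=True,
)

video, audio, label = dataset[0]  # video: (T, H, W, C) uint8 tensor
print(video.shape, dataset.classes[label])
```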