Video Generative Patch Nearest Neighbors (VGPNN)
A non-parametric approach to video generation from a single video that outperforms single-video GANs in visual quality and realism.
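To illustrate the core primitive behind such a non-parametric approach, here is a minimal NumPy sketch of spatio-temporal patch nearest-neighbor matching. All function names are hypothetical and this is not the authors' implementation, only the generic patch-matching idea.

```python
import numpy as np

def extract_patches(video, size):
    """Extract all overlapping spatio-temporal patches (flattened) from a video.

    video: array of shape (T, H, W); size: (t, h, w) patch size.
    Hypothetical helper for illustration only.
    """
    t, h, w = size
    T, H, W = video.shape
    patches = []
    for i in range(T - t + 1):
        for j in range(H - h + 1):
            for k in range(W - w + 1):
                patches.append(video[i:i + t, j:j + h, k:k + w].ravel())
    return np.stack(patches)

def nearest_patch(query, patch_bank):
    """Return the index of the bank patch closest to `query` in L2 distance."""
    d = np.sum((patch_bank - query.ravel()) ** 2, axis=1)
    return int(np.argmin(d))

# Toy example: a 4-frame, 8x8 "video" of random values
rng = np.random.default_rng(0)
video = rng.standard_normal((4, 8, 8))
bank = extract_patches(video, (2, 3, 3))

# A patch taken directly from the video matches itself (distance 0)
q = video[0:2, 0:3, 0:3]
idx = nearest_patch(q, bank)
print(idx)  # 0
```

In a full pipeline, every patch of the video being synthesized would be replaced by its nearest neighbor from the source video, coarse-to-fine; the sketch above only shows one query.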
SoloDance Dataset
The SoloDance dataset contains 179 solo dance videos in real scenes, collected online.
iPER Dataset
The iPER dataset, proposed by [25], was collected in a laboratory environment.
REMOT: A Region-to-Whole Framework for Realistic Human Motion Transfer
Human Video Motion Transfer (HVMT) aims to generate, given an image of a source person, a video of that person imitating the motion of a driving person.
Events-to-Video: Bringing Modern Computer Vision to Event Cameras
E2VID is an event-to-video pipeline that converts event-camera data into a video sequence.
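As a rough sketch of the preprocessing step such pipelines rely on, the snippet below accumulates an event stream of (timestamp, x, y, polarity) tuples into a small stack of frames. This is a simplified illustration, not E2VID's actual input representation; the real method feeds event tensors to a learned recurrent reconstruction network.

```python
import numpy as np

def events_to_frames(events, num_bins, height, width):
    """Accumulate a (t, x, y, polarity) event stream into `num_bins` frames.

    Each event adds its signed polarity at pixel (y, x) of the temporal bin
    its timestamp falls into. Simplified sketch for illustration only.
    """
    frames = np.zeros((num_bins, height, width), dtype=np.float32)
    t = events[:, 0]
    t0, t1 = t.min(), t.max()
    # Map each timestamp to a bin index in [0, num_bins - 1]
    bins = np.clip(((t - t0) / max(t1 - t0, 1e-9) * num_bins).astype(int),
                   0, num_bins - 1)
    for b, x, y, p in zip(bins,
                          events[:, 1].astype(int),
                          events[:, 2].astype(int),
                          events[:, 3]):
        frames[b, y, x] += p
    return frames

# Three events: two positive at (1, 1), one negative at (2, 2)
events = np.array([
    [0.00, 1, 1, +1],
    [0.40, 2, 2, -1],
    [0.90, 1, 1, +1],
])
frames = events_to_frames(events, num_bins=2, height=4, width=4)
print(frames[0, 1, 1], frames[1, 1, 1])  # 1.0 1.0
```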
Sky Time-lapse
A dataset of time-lapse sky videos used for video generation.
StyleGAN-V
A GAN-based video generation model that extends StyleGAN2 with a continuous-time motion representation.
MoCoGAN-HD
A video generation model that synthesizes videos by traversing the latent space of a pre-trained image GAN.
WebVid dataset
The WebVid dataset is used for text-to-video generation tasks.
UCF-101, Sky Time-lapse, and Taichi datasets
The UCF-101, Sky Time-lapse, and Taichi datasets are used for video generation tasks.
Video-Infinity: Distributed Long Video Generation
Diffusion models have recently achieved remarkable results for video generation. Despite the encouraging performance, the generated videos are typically constrained to a small...
DeepHDRVideo dataset
The DeepHDRVideo dataset encompasses both real-world dynamic scenes and static scenes enhanced with random global motion.
Lumiere: A Space-Time Diffusion Model for Video Generation
A space-time diffusion model that generates the full temporal duration of a video at once, rather than synthesizing keyframes followed by temporal super-resolution.
Videofusion: Decomposed diffusion models for high-quality video generation
A diffusion-based video generator that decomposes the per-frame noise into a base noise shared across frames and a per-frame residual noise.
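The decomposition idea can be sketched in a few lines of NumPy: each frame's noise mixes one base noise shared across all frames with an independent per-frame residual, so frames stay correlated while each marginal remains standard normal. The mixing ratio `lam` below is an illustrative parameter, not the paper's exact parameterization.

```python
import numpy as np

def decomposed_noise(num_frames, shape, lam=0.5, rng=None):
    """Build per-frame noise as a shared base plus per-frame residuals.

    noise_f = sqrt(lam) * base + sqrt(1 - lam) * residual_f
    With weights sqrt(lam) and sqrt(1 - lam), each frame's noise still has
    unit variance. Illustrative sketch, not the authors' implementation.
    """
    if rng is None:
        rng = np.random.default_rng()
    base = rng.standard_normal(shape)                    # shared across frames
    residuals = rng.standard_normal((num_frames, *shape))  # independent per frame
    return np.sqrt(lam) * base[None] + np.sqrt(1 - lam) * residuals

noise = decomposed_noise(num_frames=16, shape=(8, 8), lam=0.5,
                         rng=np.random.default_rng(0))
print(noise.shape)  # (16, 8, 8)
```

Setting `lam=1.0` makes every frame's noise identical (fully shared), while `lam=0.0` recovers fully independent per-frame noise.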
Kinetics-400 and Kinetics-600
The Kinetics-400 and Kinetics-600 datasets are video understanding datasets used for learning rich and multi-scale spatiotemporal semantics from high-dimensional videos.