15 datasets found

Groups: Computer Vision · Organizations: No Organization · Formats: JSON

  • Multi-View HDR Datasets

    High dynamic range (HDR) novel view synthesis (NVS) aims to create photorealistic images from novel viewpoints using HDR imaging techniques.
  • Synthia

    The Synthia dataset is a large-scale urban scene understanding dataset, containing 9000 samples. It is used for semantic segmentation tasks.
  • Cross-Ray Neural Radiance Fields for Novel-view Synthesis from Unconstrained Image Collections

    The dataset used in the paper for novel-view synthesis from unconstrained image collections.
  • Cross-View Image Synthesis

    Cross-view image synthesis aims to translate images between two distinct views, such as synthesizing ground images from aerial images, and vice versa.
  • LLFF dataset

    The LLFF dataset, which contains real-world forward-facing scenes, is used for training and testing the proposed neural radiance field model.
  • SPair-71k

    SPair-71k is a large-scale benchmark for semantic correspondence. The proposed method, dubbed Dynamic Hyperpixel Flow, learns to compose hypercolumn features on the fly by selecting a small number of relevant layers from a deep convolutional neural network.
  • Diffusion Models Beat GANs on Image Synthesis

    Diffusion models have recently emerged as the state of the art in generative modeling, demonstrating remarkable results in image synthesis and across other modalities.
  • Zero-1-to-3

    Zero-1-to-3: Zero-shot one image to 3D object.
  • AFHQ

    The dataset used in the paper is a subset of the AFHQ dataset containing 1.5K images of different animal faces.
  • LSUN-Church

    Progress in GANs has enabled the generation of high-resolution photorealistic images of astonishing quality. StyleGANs allow for compelling attribute modification on such...
  • LSUN

    The dataset is used for training and validation of the proposed approach, which combines semantic segmentation and dense outlier detection.
  • CLIP

    The CLIP model and its variants are becoming the de facto backbone in many applications. However, training a CLIP model from hundreds of millions of image-text pairs can be...
  • FFHQ

    Large-scale datasets [18, 17, 27, 6] have boosted the quality of text-conditional image generation. However, in some domains it can be difficult to build such datasets, and usually it could...
  • COCO

    Large-scale datasets [18, 17, 27, 6] have boosted the quality of text-conditional image generation. However, in some domains it can be difficult to build such datasets, and usually it could...
  • LLFF

    The LLFF dataset contains 8 forward-facing scenes. Following [19, 26, 35, 42], we take every 8th image as the novel views for testing. The input views are evenly sampled across... (a minimal sketch of this split follows below).
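
For concreteness, here is a minimal sketch of the every-8th-image hold-out described in the LLFF entry above. The directory layout (an `images/` folder inside each scene directory) and the function name `llff_train_test_split` are illustrative assumptions, not part of the catalog entry.

```python
from pathlib import Path


def llff_train_test_split(scene_dir, test_every=8):
    """Hold out every `test_every`-th image of an LLFF scene as a test view.

    Assumes each scene folder contains an `images/` directory of frames
    (this layout is an assumption, not stated in the catalog entry).
    The remaining images become the input (training) views.
    """
    exts = {".jpg", ".jpeg", ".png"}
    images = sorted(
        p for p in Path(scene_dir, "images").iterdir() if p.suffix.lower() in exts
    )
    # Every 8th image (by sorted filename) is a held-out novel view.
    test_views = [p for i, p in enumerate(images) if i % test_every == 0]
    train_views = [p for i, p in enumerate(images) if i % test_every != 0]
    return train_views, test_views


# Example usage (hypothetical path):
# train, test = llff_train_test_split("data/nerf_llff_data/fern")
```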