11 datasets found

Groups: 3D Reconstruction

  • Mip-NeRF 360 and Tanks & Temples datasets

    The Mip-NeRF 360 and Tanks & Temples datasets are used to evaluate the performance of the Pixel-GS method.
  • EscherNet: A Generative Model for Scalable View Synthesis

    EscherNet is a multi-view conditioned diffusion model designed for scalable view synthesis. It leverages Stable Diffusion's 2D architecture empowered by the innovative Camera...
  • Neural scene flow fields for space-time view synthesis of dynamic scenes

A method that learns neural scene flow fields for space-time view synthesis of dynamic scenes.
  • LLFF dataset

    The dataset used in the paper is the LLFF dataset, which contains real-world scenes and is used for training and testing the proposed neural radiance field model.
  • 360Roam

The dataset used in the 360Roam paper for fast radiance field reconstruction via omnidirectional Gaussian Splatting on omnidirectional images.
  • Hypersim

A dataset of synthetic photorealistic images for training and testing material-aware diffusion models.
  • RealEstate-10K

    The dataset used in the paper for training and testing the NeRF model.
  • NeRF

NeRF [33] has demonstrated an amazing ability to synthesize images of 3D scenes from novel views. However, it relies upon specialized volumetric rendering algorithms based on ray...
  • DTU dataset

    The DTU dataset is a large-scale dataset for multi-view stereo depth inference. It contains over 100 scans taken under 7 different lighting conditions and fixed camera...
  • DTU

    The DTU dataset is a large-scale dataset for 3D reconstruction and editing. It contains 15 scenes with images of 1600x1200 resolution and accompanying foreground masks.
  • LLFF

    The LLFF dataset contains 8 forward-facing scenes. Following [19, 26, 35, 42], we take every 8-th image as the novel views for testing. The input views are evenly sampled across...
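The hold-out protocol described above (every 8th image reserved as a novel test view) can be sketched as follows. This is a minimal illustration assuming the scene's images are already sorted by filename; the function name `llff_split` is hypothetical, not part of any released codebase.

```python
def llff_split(image_paths, hold_every=8):
    """Split a sorted list of image paths into (train, test) views.

    Every `hold_every`-th image (indices 0, 8, 16, ...) is held out as a
    novel test view, matching the common LLFF evaluation protocol; the
    remaining images are used for training.
    """
    test = [p for i, p in enumerate(image_paths) if i % hold_every == 0]
    train = [p for i, p in enumerate(image_paths) if i % hold_every != 0]
    return train, test


# Example with a hypothetical 20-image forward-facing scene:
images = [f"img_{i:03d}.png" for i in range(20)]
train, test = llff_split(images)
```

With 20 images this holds out indices 0, 8, and 16 for testing; the remaining 17 views, evenly spread across the capture, form the training set.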