Implicit Neural 3D Representation
Implicit neural 3D representation has achieved impressive results in surface or scene reconstruction and novel view synthesis, which typically uses the coordinate-based...
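As a rough illustration of the coordinate-based formulation referred to above, the following is a minimal sketch of an implicit field parameterized by an MLP over 3D coordinates; the network width, depth, and positional-encoding frequencies are illustrative assumptions rather than any particular paper's configuration.

```python
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """Minimal coordinate-based network: maps a 3D point to a scalar field value
    (e.g. density or signed distance). Sizes here are illustrative assumptions."""
    def __init__(self, num_freqs: int = 6, hidden: int = 128):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 3 + 3 * 2 * num_freqs  # xyz plus sin/cos positional encoding
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def positional_encoding(self, x: torch.Tensor) -> torch.Tensor:
        # Encode each coordinate with sin/cos at increasing frequencies.
        feats = [x]
        for i in range(self.num_freqs):
            feats.append(torch.sin((2.0 ** i) * x))
            feats.append(torch.cos((2.0 ** i) * x))
        return torch.cat(feats, dim=-1)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        return self.net(self.positional_encoding(xyz))

# Query the implicit field at a batch of 3D points.
points = torch.rand(1024, 3)       # (N, 3) coordinates inside the scene volume
values = CoordinateMLP()(points)   # (N, 1) predicted field values
```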
Scene Reconstruction Benchmark
The dataset used in the paper for scene reconstruction.
Stanford 3D Scanning Repository
The dataset is not explicitly described in the paper, but the authors mention testing their model on various signal reconstruction tasks: 1D sinusoidal...
Instant-NGP dataset
The Instant-NGP dataset accompanies the Instant Neural Graphics Primitives work and consists of 8 scenes of varying complexity.
Latent Diffusion Model for 3D Scene Synthesis
A latent diffusion model that samples and reconstructs large real-world scenes represented as 3D Gaussians in as little as 0.2 seconds.
Mutant and LEGO Dataset
The Mutant and LEGO dataset is a dynamic scene dataset; 90% of its images are used for training and 10% for evaluation.
Local Light Field Fusion (LLFF) dataset
This dataset of real-world, forward-facing scenes is used for novel view synthesis and scene reconstruction.
HoloDreamer
The dataset used in the HoloDreamer paper for 3D scene generation from text descriptions.
Tanks and Temples
Tanks and Temples is a benchmark of real-world indoor and outdoor scenes commonly used to evaluate Neural Radiance Fields (NeRFs), which model a 3D scene as a volumetric function that can be rendered from arbitrary viewpoints to generate highly realistic images.
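To make the volumetric-function description concrete, below is a minimal sketch of the standard NeRF-style compositing of densities and colors along a single ray; the variable names and uniform sample spacing are illustrative assumptions.

```python
import torch

def composite_along_ray(densities: torch.Tensor,
                        colors: torch.Tensor,
                        deltas: torch.Tensor) -> torch.Tensor:
    """Standard volume-rendering quadrature: accumulate colors along one ray.

    densities: (S,) non-negative volume densities at S samples along the ray
    colors:    (S, 3) RGB values predicted at each sample
    deltas:    (S,) distances between consecutive samples
    returns:   (3,) composited pixel color
    """
    alphas = 1.0 - torch.exp(-densities * deltas)           # per-sample opacity
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=0)      # transmittance products
    trans = torch.cat([torch.ones(1), trans[:-1]])          # shift so T_1 = 1
    weights = trans * alphas                                 # contribution weights
    return (weights.unsqueeze(-1) * colors).sum(dim=0)

# Example: 64 uniformly spaced samples along one ray.
S = 64
pixel = composite_along_ray(torch.rand(S), torch.rand(S, 3), torch.full((S,), 0.05))
```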
DTU dataset
The DTU dataset is a large-scale dataset for multi-view stereo depth inference. It contains over 100 scans taken under 7 different lighting conditions and fixed camera...