- Implicit Neural 3D Representation: Implicit neural 3D representations have achieved impressive results in surface and scene reconstruction and in novel view synthesis. They typically use a coordinate-based multi-layer perceptron (MLP) that maps continuous spatial coordinates to quantities such as signed distance, occupancy, or radiance; a minimal sketch follows below.
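To make the idea concrete, here is a minimal PyTorch sketch of such a coordinate-based MLP. It maps a 3D point, expanded with the sinusoidal positional encoding common in this literature, to a signed distance value; the layer sizes, frequency count, and SDF output are illustrative assumptions, not any particular paper's architecture.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    """Expand 3D coordinates into sin/cos features at growing frequencies."""
    feats = [x]
    for i in range(num_freqs):
        feats.append(torch.sin((2.0 ** i) * x))
        feats.append(torch.cos((2.0 ** i) * x))
    return torch.cat(feats, dim=-1)

class ImplicitSDF(nn.Module):
    """Coordinate-based MLP: continuous 3D point -> signed distance."""
    def __init__(self, num_freqs=6, hidden=256):
        super().__init__()
        in_dim = 3 + 3 * 2 * num_freqs  # raw coords + sin/cos bands
        self.num_freqs = num_freqs
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar SDF value
        )

    def forward(self, xyz):  # xyz: (N, 3)
        return self.net(positional_encoding(xyz, self.num_freqs))

# The field can be queried at arbitrary continuous locations; the surface
# is the zero-level set of the learned function rather than an explicit
# mesh or voxel grid.
model = ImplicitSDF()
points = torch.rand(1024, 3) * 2 - 1  # points in [-1, 1]^3
sdf = model(points)                   # (1024, 1)
```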
- Consistent-1-to-3: A framework that generates view-consistent images of an object from arbitrary viewpoints given a single input image.
- KITTI-360 NVS Benchmark: A NeRF novel view synthesis benchmark for large-scale urban scene reconstruction.
- Synthetic Indoor Scene: A synthetic indoor-scene dataset used in the paper for 3D geometry reconstruction and novel view synthesis.
- Stanford 3D models: A collection of Stanford 3D models used in the paper for 3D geometry reconstruction and novel view synthesis.
- NVIDIA Dynamic Scene Dataset: Contains 8 diverse scenes, with 12 sequences captured by synchronized cameras at fixed positions.
- LLFF, DTU, and Blender datasets: Used for training and testing the Uncertainty-guided Optimal Transport (UGOT) approach to depth supervision in sparse-view 3D Gaussian splatting for novel view synthesis.
- LLFF dataset: Real-world scenes used for training and testing the proposed neural radiance field model.
- DTU, BlendedMVS, and H3DS datasets: Used to evaluate the performance of the VQ-NeRF method.
- IBRNet (Learning Multi-View Image-Based Rendering): A method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby source views; a toy sketch of the idea follows below.
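As a hedged illustration of the interpolation idea, the toy module below blends features sampled from nearby source views with learned softmax weights to predict a color. ViewAggregator, the feature dimension, and the single-linear blending head are assumptions for this sketch, not IBRNet's actual ray-transformer architecture.

```python
import torch
import torch.nn as nn

class ViewAggregator(nn.Module):
    """Toy image-based rendering head: blend per-view features with
    learned weights to predict one query sample's color."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)   # per-view blending logit
        self.to_rgb = nn.Linear(feat_dim, 3)  # color from blended feature

    def forward(self, view_feats):
        # view_feats: (num_views, feat_dim), features sampled from the
        # source images at the query point's projections.
        weights = torch.softmax(self.score(view_feats), dim=0)  # (V, 1)
        blended = (weights * view_feats).sum(dim=0)             # (feat_dim,)
        return torch.sigmoid(self.to_rgb(blended))              # RGB in [0, 1]

agg = ViewAggregator()
feats = torch.randn(8, 32)  # features from 8 nearby source views
rgb = agg(feats)            # (3,)
```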
- Mip-NeRF360: A dataset of scenes captured from 360-degree viewpoints, with an emphasis on minimizing photometric variations.
- CoR-GS (Sparse-View 3D Gaussian Splatting via Co-Regularization): Introduces a co-regularization perspective for improving sparse-view 3DGS: two 3D Gaussian radiance fields trained on the same scene exhibit different behaviors for the same view, and suppressing this disagreement serves as a regularization signal; a minimal sketch follows below.
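Assuming two 3DGS models that can each render the same camera pose, the co-regularization idea can be sketched as a disagreement penalty between their renderings. co_regularization_loss and the plain L1 form below are illustrative stand-ins, not the paper's exact point- and rendering-disagreement terms.

```python
import torch
import torch.nn.functional as F

def co_regularization_loss(img_a, img_b):
    """Penalize rendering disagreement between two radiance fields
    evaluated at the same viewpoint (illustrative L1 form)."""
    return F.l1_loss(img_a, img_b)

# Hypothetical usage: two independently initialized 3DGS models render
# the same camera, and their disagreement is added to each model's
# photometric training loss.
img_a = torch.rand(3, 64, 64)  # stand-in for render(model_a, cam)
img_b = torch.rand(3, 64, 64)  # stand-in for render(model_b, cam)
reg = co_regularization_loss(img_a, img_b)
```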