-
Mip-NeRF 360 and Tanks & Temples datasets
Real-world unbounded and large-scale scene datasets used to evaluate the performance of the Pixel-GS method. -
RFFR (Real Forward-Facing with Reflections) dataset
Real-world forward-facing scenes with strong reflection effects caused by glass and mirrors. -
Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis
A paper proposing semantically consistent few-shot view synthesis, rather than a dataset in its own right. -
Phototourism dataset
A dataset of images of buildings and landmarks, used for training and testing image-based rendering and view synthesis algorithms. -
LLFF dataset
Real-world forward-facing scenes used for training and testing neural radiance field models. -
Real-world Forward-Facing
Real-world forward-facing dataset for view synthesis, containing 1008×756 images. -
RealEstate-10K
Camera trajectories and frames extracted from YouTube real-estate videos, used for training and testing view synthesis models. -
Soft 3D Reconstruction for View Synthesis
View synthesis via a soft volumetric scene reconstruction that retains depth uncertainty instead of committing to a single surface. -
Nex: Real-Time View Synthesis with Neural Basis Expansion
Real-time view synthesis based on multiplane images, with view-dependent appearance modeled by a learned neural basis expansion. -
Stereo Magnification: Learning View Synthesis using Multi-Plane Images
Learns to predict a multiplane image (MPI) from a narrow-baseline stereo pair for extrapolated view synthesis; the paper also introduced the RealEstate10K dataset. -
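Rendering an MPI reduces to back-to-front alpha compositing of its RGBA planes with the standard "over" operator. A minimal NumPy sketch (function name and plane layout are illustrative, not from the paper):

```python
import numpy as np

def over_composite(planes):
    """Composite MPI planes back-to-front with the 'over' operator.

    planes: (D, 4) array of per-plane RGBA values, ordered back to front.
    """
    out = np.zeros(3)
    for rgba in planes:
        rgb, a = rgba[:3], rgba[3]
        # 'over': new plane's color weighted by its alpha, rest shows through
        out = a * rgb + (1.0 - a) * out
    return out
```

In the full method each plane is an RGBA image at a fixed depth; homography warping of the planes into the target view precedes this compositing step.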
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
Represents a scene as a continuous radiance field: an MLP maps 3D position and viewing direction to color and density, which are rendered by differentiable volume rendering. -
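The volume rendering step NeRF uses can be sketched as follows, assuming per-sample densities, colors, and inter-sample distances along one ray (variable names are illustrative):

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """NeRF-style alpha compositing of samples along one ray.

    densities: (N,) non-negative sigma per sample
    colors:    (N, 3) RGB per sample
    deltas:    (N,) distance between consecutive samples
    """
    # opacity of each ray segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alpha = 1.0 - np.exp(-densities * deltas)
    # transmittance T_i = prod_{j<i} (1 - alpha_j)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    # per-sample contribution weights, then weighted color sum
    weights = alpha * trans
    return (weights[:, None] * colors).sum(axis=0)
```

A dense first sample dominates the output color because later samples receive near-zero transmittance; zero density everywhere yields black.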
DTU and LLFF
Real-world multi-view datasets DTU and LLFF for view synthesis from sparse inputs. -
MobileNeRF
MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures -
Sparse Neural Radiance Grids (SNeRG)
Recent work has addressed this issue by “baking” NeRFs into a sparse 3D voxel grid [21, 51]. For example, Hedman et al. introduced Sparse Neural Radiance Grids (SNeRG) [21],...
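The "baking" idea stores precomputed values only for occupied voxels, so empty space costs nothing at render time. A toy sketch of such a sparse grid (a plain dictionary stand-in, not SNeRG's actual data layout):

```python
import numpy as np

class SparseGrid:
    """Toy sparse voxel grid: store baked features only where the scene is occupied."""

    def __init__(self, voxel_size):
        self.voxel_size = voxel_size
        self.cells = {}  # (i, j, k) -> baked feature

    def _index(self, point):
        # quantize a continuous 3D point to its voxel index
        return tuple(int(c) for c in np.floor(np.asarray(point) / self.voxel_size))

    def bake(self, point, feature):
        self.cells[self._index(point)] = feature

    def query(self, point):
        # None signals empty space, letting a renderer skip the sample entirely
        return self.cells.get(self._index(point))
```

SNeRG itself additionally uses deferred shading, evaluating a small view-dependence network once per pixel rather than per sample; that part is omitted here.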