13 datasets found

Formats: JSON · Tags: indoor scenes

  • LayoutNet

    The LayoutNet dataset is a collection of 360-degree images of indoor scenes.
  • SUNCG

    The SUNCG dataset is a large collection of 3D indoor scenes with missing objects, used to evaluate the performance of scene augmentation and context-based object...
  • ScanNet: Richly-annotated 3D reconstructions of indoor scenes

    ScanNet is a dataset of richly-annotated 3D reconstructions of indoor scenes.
  • 3DMatch

    3DMatch [26] is a well-known indoor registration dataset of 62 scenes captured with an RGB-D sensor.
  • NYU-Depth V2

    The NYU-Depth V2 dataset contains pairs of RGB and depth images collected with a Microsoft Kinect across 464 indoor scenes.
  • ScanNetV2

    ScanNetV2 is a real-world dataset of indoor scenes. It contains 1205 training scenes and 312 testing scenes, with instance-level object bounding...
  • ScanNet and ArkitScenes

    ScanNet and ArkitScenes are the datasets used in the Point2Pix paper; they contain point clouds and camera parameters for indoor scenes.
  • Habitat

    The Habitat dataset is a large-scale indoor simulation dataset containing 145 semantically-annotated scenes.
  • NTIRE2018-Dehazing challenge dataset

    The NTIRE2018-Dehazing challenge dataset contains indoor and outdoor hazy images.
  • RESIDE dataset

    The RESIDE dataset contains both synthesized and real-world hazy/clean image pairs of indoor and outdoor scenes.
  • NYUv2

    NYUv2 is an indoor RGB-D dataset widely used as a benchmark for multi-task learning (MTL), e.g. joint semantic segmentation, depth estimation, and surface normal prediction.
  • SUN RGB-D

    SUN RGB-D is an indoor RGB-D scene recognition benchmark; approaches often train two standalone backbones for the RGB and depth modalities with the same Places or ImageNet pre-training.
  • LSUN

    LSUN is a large-scale scene understanding dataset, used for training and validation of an approach that combines semantic segmentation and dense outlier detection.
You can also access this registry using the API (see API Docs).
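
Below is a minimal sketch of fetching the same filtered listing programmatically. The API Docs are not reproduced here, so the base URL, the /datasets path, and the format/tag parameter names are illustrative assumptions, not the documented interface; adapt them to whatever the API Docs specify.

```python
# Hypothetical sketch: query the registry API with the same filters shown above
# (Formats: JSON, Tags: indoor scenes). Endpoint and parameter names are assumptions.
import requests

API_BASE = "https://example-registry.org/api"  # hypothetical base URL; see API Docs


def search_datasets(fmt: str, tag: str) -> list[dict]:
    """Return dataset records matching a format and tag filter (assumed endpoint)."""
    resp = requests.get(
        f"{API_BASE}/datasets",
        params={"format": fmt, "tag": tag},  # mirrors the "Formats" / "Tags" filters
        timeout=10,
    )
    resp.raise_for_status()
    # Assumed response shape: a JSON list of {"name": ..., "description": ...} records.
    return resp.json()


if __name__ == "__main__":
    for ds in search_datasets("JSON", "indoor scenes"):
        print(ds["name"], "-", ds.get("description", ""))
```

A paginated API would instead return results in pages (e.g. a `next` link or an offset parameter); check the API Docs for how the registry handles result sets larger than one page.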