DTU MVS Dataset and Local Light Field Fusion Dataset
The DTU MVS Dataset and the Local Light Field Fusion Dataset are used to evaluate the performance of the proposed GARF model. -
Office-Home, Office-31, and DomainNet
Office-Home, Office-31, and DomainNet are benchmark datasets for semi-supervised domain adaptation. -
Compositional Diffusion-Based Continuous Constraint Solvers
The dataset covers four constraint-solving tasks: 2D triangle packing, 2D shape arrangement with qualitative constraints, 3D object stacking with stability constraints, and 3D object packing with robots. -
Segment Anything Model
The dataset used in this paper is the one released alongside Meta Research's Segment Anything Model (SAM), which consists of images paired with segmentation masks. -
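If the masks are distributed in COCO run-length encoding, as the SA-1B release describes, a single mask can be decoded with pycocotools; the filename and JSON field names below are assumptions about that format, shown only as a sketch.

```python
# Decode one COCO-RLE mask from an SA-1B-style per-image JSON file;
# the field names ("annotations", "segmentation") are assumptions.
import json
from pycocotools import mask as mask_utils

with open("sa_000000.json") as f:                # placeholder filename
    record = json.load(f)

rle = record["annotations"][0]["segmentation"]   # {"size": [h, w], "counts": ...}
binary_mask = mask_utils.decode(rle)             # H x W uint8 array of 0/1
```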
RotNIST dataset
A variant of the MNIST handwritten digit dataset in which the digit images are rotated. -
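If RotNIST is, as the name suggests, MNIST with rotated digits, a hypothetical stand-in can be generated from MNIST with torchvision; the rotation range below is an assumption for illustration, not the dataset's actual construction protocol.

```python
# Hypothetical rotated-MNIST construction; RotNIST's real rotation
# protocol may differ.
from torchvision import datasets, transforms

rotate = transforms.Compose([
    transforms.RandomRotation(degrees=180),  # assumed rotation range
    transforms.ToTensor(),
])
rot_mnist = datasets.MNIST(root="data/mnist", train=True, download=True, transform=rotate)
image, label = rot_mnist[0]  # 1 x 28 x 28 tensor with a random rotation applied
```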
ScanNet Dataset
The ScanNet dataset is a large-scale indoor RGB-D dataset composed of monocular sequences with ground-truth camera poses and depth images. -
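As an illustration only: assuming a ScanNet scene has been exported into per-frame color images, 16-bit depth PNGs in millimeters, and 4x4 camera-to-world pose text files (the directory layout and conventions below are assumptions, not guaranteed by the dataset), a frame could be loaded roughly like this:

```python
# Sketch of loading one ScanNet-style frame; paths, filenames, and the
# millimeter depth convention are assumptions for illustration.
import numpy as np
from PIL import Image

def load_frame(scene_dir, frame_id):
    color = np.array(Image.open(f"{scene_dir}/color/{frame_id}.jpg"))     # H x W x 3, uint8
    depth_mm = np.array(Image.open(f"{scene_dir}/depth/{frame_id}.png"))  # H x W, uint16 (assumed millimeters)
    depth_m = depth_mm.astype(np.float32) / 1000.0                        # convert to meters
    pose = np.loadtxt(f"{scene_dir}/pose/{frame_id}.txt")                 # 4 x 4 camera-to-world matrix (assumed)
    return color, depth_m, pose
```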
CIFAR-10, STL-10, and ImageNet
The datasets used in the paper are CIFAR-10, STL-10, and ImageNet. -
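A minimal sketch of loading these three benchmarks with torchvision (CIFAR-10 and STL-10 can be downloaded automatically; ImageNet/ILSVRC-2012 must already be present on disk); the root paths are placeholders.

```python
# Minimal torchvision loaders; root paths are placeholders.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

cifar10 = datasets.CIFAR10(root="data/cifar10", train=True, download=True, transform=to_tensor)
stl10   = datasets.STL10(root="data/stl10", split="train", download=True, transform=to_tensor)
# ImageNet cannot be downloaded automatically; it expects the extracted
# ILSVRC-2012 archives under the given root.
imagenet = datasets.ImageNet(root="data/imagenet", split="train", transform=to_tensor)

image, label = cifar10[0]   # image: 3 x 32 x 32 tensor, label: int
```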
Joint Prediction of Monocular Depth and Structure using Planar and Parallax G...
The datasets used in the paper are the KITTI Vision Benchmark and the Cityscapes dataset, used for monocular depth estimation and structure prediction. -
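For reference, KITTI's depth ground truth is commonly stored as 16-bit PNGs where the pixel value divided by 256 gives depth in meters and zero marks missing measurements; a small reading sketch under that assumption:

```python
# Read a KITTI-style ground-truth depth map; assumes the usual uint16 PNG
# encoding where value / 256 = depth in meters and 0 means "no measurement".
import numpy as np
from PIL import Image

def read_kitti_depth(path):
    raw = np.array(Image.open(path), dtype=np.uint16)
    depth = raw.astype(np.float32) / 256.0
    valid = raw > 0
    return depth, valid
```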
Real-World Motion-Blurred Hand Dataset
A real-world dataset for the motion-blurred hand extraction task. -
Synthetic Motion-Blurred Hand Dataset
A synthetic dataset for the motion-blurred hand extraction task. -
CIFAR-10 and ILSVRC-2012
The datasets used in the paper are CIFAR-10 and ILSVRC-2012. -
ResNet50, ResNet34, and ResNet18
ResNet50, ResNet34, and ResNet18 are the network architectures used in this paper rather than datasets. -
VGG19-BN, CIFAR-10, and CIFAR-100
VGG19-BN is the network architecture used in this paper; the datasets are CIFAR-10 and CIFAR-100. -
SemanticPOSS
A point cloud dataset with a large quantity of dynamic instances, consisting of 2,988 real-world scans with point-level annotations. -
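SemanticPOSS reports using the SemanticKITTI data format; assuming that layout (float32 x, y, z, intensity point files and uint32 label files whose lower 16 bits hold the semantic class), a scan could be read as follows. The bit packing and file extensions are assumptions taken from that format, not verified against the release.

```python
# Sketch of reading one SemanticKITTI-format scan (the format SemanticPOSS
# reports using); file layout and bit packing are assumptions here.
import numpy as np

def read_scan(bin_path, label_path):
    points = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)  # x, y, z, intensity
    labels = np.fromfile(label_path, dtype=np.uint32)
    semantic = labels & 0xFFFF   # lower 16 bits: semantic class id
    instance = labels >> 16      # upper 16 bits: instance id
    return points, semantic, instance
```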
Geometry-aware Single-image Full-body Human Relighting
Single-image human relighting aims to relight a target human under new lighting conditions by decomposing the input image into albedo, shape and lighting.
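One common way to make that decomposition concrete (an assumed Lambertian model with spherical-harmonics lighting, not necessarily this paper's exact formulation) is:

```latex
% Assumed Lambertian image formation with second-order SH lighting.
I(p) = A(p)\,\odot\,S\big(n(p), L\big),
\qquad
S\big(n(p), L\big) = \sum_{k=1}^{9} L_k\,H_k\big(n(p)\big)
```

Here I(p) is the observed image at pixel p, A(p) the albedo, n(p) the surface normal implied by the recovered shape, L_k the spherical-harmonics lighting coefficients, and H_k the SH basis functions; relighting replaces L with the target lighting while keeping the albedo and normals fixed.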