OCRTOC Dataset
A dataset for object recognition and manipulation from the Open Cloud Robot Table Organization Challenge (OCRTOC).
Grasp-Anything++
Grasp-Anything++ is a large-scale language-driven grasp detection dataset featuring 1M samples, over 3M objects, and upwards of 10M grasp instructions.
iTHOR Rearrangement
The dataset used in the paper for the Rearrangement task, in which an agent must rearrange objects in a room to match a target configuration.
Amazon Picking Challenge 2016 Dataset
The dataset collected for the Amazon Picking Challenge 2016 by Team Applied Robotics, whose entry was a vision-based robotic picking system.
YCB Object and Model Set
The YCB object and model set is a benchmark for manipulation research, consisting of everyday objects in 15 object categories together with their 3D models.
Object manipulation via visual target localization