- Amazon Picking Challenge 2016 Dataset: The dataset used by the vision-based robotic picking system that Team Applied Robotics developed for the Amazon Picking Challenge 2016.
- LIMP Dataset: A set of 35 complex and ambiguous object-goal navigation and mobile pick-and-place instructions used in the paper.
- Picking task dataset: A collection of motor-action sequences used to train a deep visuomotor policy.
- Shadow Hand Dataset: A dataset of 1,140 grasp trajectories of a robotic hand interacting with a turntable.
- MPL Hand Dataset: A dataset of 132 sensor readings, predicted up to 100 steps into the future, for a robotic hand interacting with an unknown and uncertain external world.
- LocoMuJoCo: Motion-capture datasets for robotic continuous-control (locomotion) tasks.
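Motion-capture data contains expert states but no expert actions, so imitation-from-observation pipelines typically split each trajectory into (state, next state) pairs. A minimal sketch, assuming trajectories arrive as plain NumPy arrays; the shapes and names here are illustrative, not the library's API:

```python
import numpy as np

def mocap_to_transitions(trajectory: np.ndarray):
    """Split a motion-capture trajectory of shape (T, obs_dim) into
    (state, next_state) pairs for imitation-from-observation methods."""
    states = trajectory[:-1]
    next_states = trajectory[1:]
    return states, next_states

# Hypothetical example: a 100-step trajectory with a 30-dimensional
# observation (joint positions and velocities).
traj = np.random.randn(100, 30)
s, s_next = mocap_to_transitions(traj)
print(s.shape, s_next.shape)  # (99, 30) (99, 30)
```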
- PRS-delivery: A human-centered, in-building embodied delivery benchmark, comprising a simulation environment and a dataset generated with a large language model.
- PR2 Manipulation Dataset: The dataset used for the PR2 manipulation task.
- Inverted Pendulum Dataset: The dataset used for the inverted pendulum task.
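Such a dataset is commonly built by rolling out a policy in a simulated pendulum environment and logging transitions. A minimal sketch using Gymnasium's `InvertedPendulum-v4`; the entry does not say how the paper's data was actually gathered, so the environment, random policy, and step budget below are assumptions:

```python
import gymnasium as gym

# Illustrative collection loop: roll out a random policy and store
# (obs, action, reward, next_obs) tuples as the dataset.
env = gym.make("InvertedPendulum-v4")
transitions = []
obs, _ = env.reset(seed=0)
for _ in range(1000):
    action = env.action_space.sample()  # random exploration policy
    next_obs, reward, terminated, truncated, _ = env.step(action)
    transitions.append((obs, action, reward, next_obs))
    obs = next_obs
    if terminated or truncated:
        obs, _ = env.reset()
env.close()
print(f"collected {len(transitions)} transitions")
```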
- Slot Attention: A dataset of videos of a robot interacting with blocks of different shapes and colors on a table in a simulation environment.
- SemanticKITTI: A large-scale dataset providing dense, point-wise semantic annotations for all 22 sequences of the KITTI odometry benchmark, supporting semantic scene understanding of LiDAR sequences.
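SemanticKITTI stores each scan as a binary file of float32 (x, y, z, remission) points, with per-point labels in a companion `.label` file whose uint32 entries pack the semantic class in the lower 16 bits and the instance id in the upper 16 bits. A short loader sketch; the file paths are hypothetical:

```python
import numpy as np

def load_semantickitti_scan(bin_path: str, label_path: str):
    """Load one SemanticKITTI LiDAR scan and its point-wise labels."""
    # Scans: float32 quadruples (x, y, z, remission).
    points = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)
    raw = np.fromfile(label_path, dtype=np.uint32)
    semantic = raw & 0xFFFF  # lower 16 bits: semantic class id
    instance = raw >> 16     # upper 16 bits: instance id
    assert len(semantic) == len(points)
    return points, semantic, instance

# Hypothetical paths into a local copy of the dataset.
pts, sem, inst = load_semantickitti_scan(
    "sequences/00/velodyne/000000.bin",
    "sequences/00/labels/000000.label",
)
```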
- TORNADO-Net: Multiview Total Variation Semantic Segmentation with Diamond Inception Module: A neural network for 3D LiDAR point cloud semantic segmentation; semantic segmentation of point clouds is a key component of scene understanding for robotics and autonomous driving.
- YCB Object and Model Set: A benchmark object and model set for manipulation research, consisting of everyday objects spanning several categories together with their 3D mesh models.
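The mesh models can be inspected with standard geometry tooling; the sketch below uses `trimesh` to load one object and read off quantities useful for grasp planning. The file path follows the layout of common YCB downloads but should be treated as an assumption:

```python
import trimesh

# Hypothetical path to one of the YCB object meshes.
mesh = trimesh.load("ycb/003_cracker_box/google_16k/textured.obj", force="mesh")

# Object extents are useful for checking gripper aperture limits.
print(mesh.bounding_box.extents)
# Use the mass center if the scan is watertight, else the vertex centroid.
print(mesh.center_mass if mesh.is_watertight else mesh.centroid)
```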
- Dex-Net 2.0: A dataset of 6.7 million synthetic point clouds, parallel-jaw grasps, and analytic grasp-quality metrics for learning robust grasping policies.
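Dex-Net 2.0 turns its analytic grasp-quality values into binary success labels for training a grasp-quality network; the paper thresholds its robust epsilon metric at 0.002. A sketch of that labeling step, assuming the metrics are already loaded as a NumPy array (the function name is ours, not the dataset's tooling):

```python
import numpy as np

# Threshold on the robust epsilon (Ferrari-Canny) metric, as reported
# in the Dex-Net 2.0 paper.
ROBUSTNESS_THRESHOLD = 0.002

def binarize_grasp_labels(robustness: np.ndarray) -> np.ndarray:
    """Convert analytic grasp-quality metrics into binary success labels."""
    return (robustness > ROBUSTNESS_THRESHOLD).astype(np.int64)

labels = binarize_grasp_labels(np.array([0.0, 0.0015, 0.01]))
print(labels)  # [0 0 1]
```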
- Crowd-sourced Language Annotations Dataset: 5,600 episode-instruction pairs, obtained by labeling 2,800 episodes with two hindsight instructions each.
- Robot Trajectories Dataset: 80,000 robot trajectories collected via human teleoperation, of which 2,800 demonstrations are labeled by crowd-sourced language annotators.
- Data-driven Instruction Augmentation for Language-conditioned Control (DIAL): A method that uses pre-trained vision-language models (VLMs) to relabel offline robot datasets with natural-language instructions for language-conditioned control.
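Conceptually, DIAL-style augmentation maps each unlabeled episode through a captioning model to produce (episode, instruction) training pairs. A minimal sketch with a stub captioner; `Episode`, `caption_fn`, and the field names are hypothetical, not the paper's actual interfaces:

```python
from typing import Callable, List, Tuple

# Hypothetical episode record, e.g. {"frames": [...], "actions": [...]}.
Episode = dict

def augment_with_instructions(
    episodes: List[Episode],
    caption_fn: Callable[[Episode], str],
) -> List[Tuple[Episode, str]]:
    """Relabel unannotated episodes with VLM-generated instructions,
    yielding (episode, instruction) pairs for language-conditioned
    policy training."""
    return [(ep, caption_fn(ep)) for ep in episodes]

# Usage with a stub captioner standing in for a real VLM.
pairs = augment_with_instructions(
    [{"frames": [], "actions": []}],
    caption_fn=lambda ep: "pick up the green block",
)
print(pairs[0][1])
```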
- YCB dataset: Used for testing the FFHFlow-lvm model in real-world scenarios.
- BIGBIRD and KIT datasets: The datasets used for training and testing the FFHFlow-lvm model for generative grasp synthesis.