The dataset used in the paper is RLBench, a standard benchmark for vision-based robotics which has been shown to serve as a proxy for real-robot experiments.
Specifically, the data consist of images of cups with varying attributes (shape, color, position), each paired with a corresponding goal position.
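To make the sample structure concrete, here is a minimal sketch of how such (cup image, attributes, goal position) records could be represented and loaded. The field names, the `.npz` layout, and the `load_dataset` helper are illustrative assumptions, not the paper's actual schema or code.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class CupSample:
    """One example: an observation image of a cup plus its goal position.

    All field names here are hypothetical, chosen for illustration only.
    """
    image: np.ndarray          # RGB observation, e.g. shape (H, W, 3)
    shape: str                 # cup shape attribute, e.g. "mug"
    color: str                 # cup color attribute, e.g. "red"
    position: np.ndarray       # current cup position (x, y, z)
    goal_position: np.ndarray  # target position (x, y, z) for the cup


def load_dataset(path: str) -> list[CupSample]:
    """Hypothetical loader: reads an .npz archive of parallel arrays
    and zips them into per-example CupSample records."""
    data = np.load(path, allow_pickle=True)
    return [
        CupSample(img, shp, col, pos, goal)
        for img, shp, col, pos, goal in zip(
            data["images"],
            data["shapes"],
            data["colors"],
            data["positions"],
            data["goal_positions"],
        )
    ]
```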