The dataset used in the paper is RLBench, a standard benchmark for vision-based robot manipulation that has been shown to serve as a proxy for real-robot experiments.
The OCHuman dataset is more challenging: each human instance is heavily occluded by one or several others, and the poses of the human bodies are more complex.
The Visual Genome dataset is a large-scale dataset for scene understanding and visual question answering, containing over 108,000 images, each annotated with tens of entities, attributes, and relationships on average.
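To make the structure of such annotations concrete, the sketch below shows one way per-image entity, attribute, and relationship annotations could be represented and turned into (subject, predicate, object) triples. The schema (field names such as `entities`, `relationships`, `bbox`) is a simplified illustration, not the exact format of the official Visual Genome release files.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Simplified, illustrative schema; the official Visual Genome release
# distributes objects, attributes, and relationships in its own JSON
# files with different field names.

@dataclass
class Entity:
    entity_id: int
    name: str                              # e.g. "dog"
    bbox: Tuple[int, int, int, int]        # (x, y, w, h)
    attributes: List[str] = field(default_factory=list)  # e.g. ["brown"]

@dataclass
class Relationship:
    subject_id: int   # refers to Entity.entity_id
    predicate: str    # e.g. "on"
    object_id: int    # refers to Entity.entity_id

@dataclass
class ImageAnnotation:
    image_id: int
    entities: List[Entity]
    relationships: List[Relationship]

def to_triples(ann: ImageAnnotation) -> List[Tuple[str, str, str]]:
    """Convert one image's annotations into (subject, predicate, object) triples."""
    by_id = {e.entity_id: e for e in ann.entities}
    return [(by_id[r.subject_id].name, r.predicate, by_id[r.object_id].name)
            for r in ann.relationships]

# Toy example: two entities linked by one relationship.
ann = ImageAnnotation(
    image_id=1,
    entities=[Entity(0, "dog", (10, 20, 50, 40), ["brown"]),
              Entity(1, "sofa", (0, 30, 120, 60))],
    relationships=[Relationship(subject_id=0, predicate="on", object_id=1)],
)
print(to_triples(ann))  # [('dog', 'on', 'sofa')]
```

Triples of this form are what scene-graph and grounding methods typically consume, which is why the dense relationship annotations are the distinguishing feature of this dataset compared to plain detection or captioning corpora.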
Large-scale datasets [18, 17, 27, 6] have boosted the quality of text-conditional image generation. However, in some domains it can be difficult to collect such datasets, and usually it could...