4 datasets found

Tags: keypoints

  • RLBench

    RLBench is a standard benchmark for vision-based robot learning; its simulated manipulation tasks have been shown to serve as a proxy for real-robot experiments.
  • OCHuman

    The OCHuman (Occluded Human) dataset is a particularly challenging human-pose benchmark in which each human instance is heavily occluded by one or several others and the body postures are more complex than in standard pose datasets.
  • Visual Genome

    The Visual Genome dataset is a large-scale collection of densely annotated images, pairing each image with structured annotations of entities, attributes, and relationships as well as question-answer pairs; it contains about 108K images in total.
  • COCO

    COCO (Common Objects in Context) is a large-scale dataset for object detection, segmentation, captioning, and keypoint estimation; its person instances are annotated with 17 body keypoints, making it the standard benchmark for multi-person pose estimation (see the keypoint-loading sketch after this list).
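
As a quick illustration of how COCO keypoint annotations are structured, here is a minimal sketch using the pycocotools package; the annotation file path and the choice of the val2017 split are assumptions, not something specified on this page.

```python
from pycocotools.coco import COCO

# Assumed local path to the COCO person-keypoints annotation file (val2017 split).
coco = COCO("annotations/person_keypoints_val2017.json")

person_cat = coco.getCatIds(catNms=["person"])
img_id = coco.getImgIds(catIds=person_cat)[0]  # take one image as an example
ann_ids = coco.getAnnIds(imgIds=img_id, catIds=person_cat, iscrowd=None)

for ann in coco.loadAnns(ann_ids):
    # Keypoints are stored as a flat list [x1, y1, v1, x2, y2, v2, ...]
    # with 17 (x, y, visibility) triplets per person; v is 0, 1, or 2.
    kps = ann["keypoints"]
    xs, ys, vis = kps[0::3], kps[1::3], kps[2::3]
    print(ann["image_id"], sum(v > 0 for v in vis), "labelled keypoints")
```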
You can also access this registry using the API (see API Docs).
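
The page does not name the API endpoint, so the following is only a minimal sketch, assuming a CKAN-style portal (as the footer wording suggests) and a placeholder base URL; it reproduces the "keypoints" tag filter applied on this page.

```python
import requests

BASE_URL = "https://example-registry.org"  # placeholder; replace with the portal's address

# CKAN-style portals expose dataset search through the package_search action;
# fq=tags:keypoints mirrors the tag filter applied on this page.
resp = requests.get(
    f"{BASE_URL}/api/3/action/package_search",
    params={"fq": "tags:keypoints", "rows": 10},
    timeout=30,
)
resp.raise_for_status()
result = resp.json()["result"]

print(f"{result['count']} datasets found")
for pkg in result["results"]:
    print("-", pkg["title"])
```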