
HumanML3D

HumanML3D is a text-to-motion dataset built upon the AMASS and HumanAct12 datasets. It provides a wide range of motion-language pairs covering ordinary activities such as ‘jumping’, ‘walking’, and ‘running’.
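A minimal loading sketch, assuming the standard HumanML3D release layout (per-sample motion feature files under new_joint_vecs/, caption files under texts/, and dataset-wide Mean.npy/Std.npy for normalisation); the data root and sample id below are illustrative placeholders, not part of this page.

```python
import numpy as np

DATA_ROOT = "HumanML3D"   # hypothetical local path to the extracted dataset
SAMPLE_ID = "000001"      # hypothetical sample identifier

# Motion features: one array of per-frame pose features for this sample.
motion = np.load(f"{DATA_ROOT}/new_joint_vecs/{SAMPLE_ID}.npy")

# Normalise with the dataset-wide statistics shipped alongside the data.
mean = np.load(f"{DATA_ROOT}/Mean.npy")
std = np.load(f"{DATA_ROOT}/Std.npy")
motion = (motion - mean) / std

# Captions: each line pairs a free-form caption with extra annotation fields
# separated by '#'; keep only the caption text here.
with open(f"{DATA_ROOT}/texts/{SAMPLE_ID}.txt") as f:
    captions = [line.strip().split("#")[0] for line in f if line.strip()]

print(motion.shape, captions[:1])
```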

Data and Resources

Cite this as

Chuan Guo, Xinxin Zuo, Sen Wang, Li Cheng (2024). Dataset: HumanML3D. https://doi.org/10.57702/6mrdnygc

DOI retrieved: December 2, 2024

Additional Info

Field         Value
Created       December 2, 2024
Last update   December 2, 2024
Defined In    https://doi.org/10.48550/arXiv.2207.01696
Citation      • https://doi.org/10.48550/arXiv.2306.14795
              • https://doi.org/10.48550/arXiv.2309.01372
              • https://doi.org/10.48550/arXiv.2302.05905
              • https://doi.org/10.48550/arXiv.2407.15408
              • https://doi.org/10.48550/arXiv.2211.16016
              • https://doi.org/10.1007/s11263-024-02042-6
              • https://doi.org/10.48550/arXiv.2312.11994
Author        Chuan Guo
More Authors  Xinxin Zuo, Sen Wang, Li Cheng
Homepage      https://korrawe.github.io/dno-project/