T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations

Generating motion from textual descriptions has numerous applications in the game industry, film-making, and robot animation. For example, the typical way to obtain new motion in the game industry is motion capture, which is expensive. Automatically generating meaningful motion data from textual descriptions could therefore save time and be more economical.

Data and Resources

This dataset has no data

Cite this as

Jianrong Zhang, Yangsong Zhang, Xiaodong Cun, Shaoli Huang, Hongwei Zhao, Hongtao Lu, Xi Shen (2024). Dataset: T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations. https://doi.org/10.57702/zz8w5z3w

Private DOI: This DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Field: Value
Created: December 2, 2024
Last update: December 2, 2024
Author: Jianrong Zhang
More Authors: Yangsong Zhang, Xiaodong Cun, Shaoli Huang, Hongwei Zhao, Hongtao Lu, Xi Shen
Homepage: https://mael-zys.github.io/T2M-GPT/