T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations

Generating motion from textual descriptions has numerous applications in the game industry, film-making, and robot animation. For example, a typical way to obtain new motion data in the game industry is motion capture, which is expensive. Automatically generating meaningful motion from textual descriptions could therefore save time and be more economical.

Data and Resources

Cite this as

Jianrong Zhang, Yangsong Zhang, Xiaodong Cun, Shaoli Huang, Hongwei Zhao, Hongtao Lu, Xi Shen (2024). Dataset: T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations. https://doi.org/10.57702/zz8w5z3w

DOI retrieved: December 2, 2024

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Author: Jianrong Zhang
More Authors: Yangsong Zhang, Xiaodong Cun, Shaoli Huang, Hongwei Zhao, Hongtao Lu, Xi Shen
Homepage: https://mael-zys.github.io/T2M-GPT/