GES-X

A large-scale co-speech gesture dataset containing more than 40M high-quality 3D meshed postures from 4.3K speakers, collected from in-the-wild talk show videos.

Data and Resources

Cite this as

Xingqun Qi, Hengyuan Zhang, Yatian Wang, Jiahao Pan, Chen Liu, Peng Li, Xiaowei Chi, Mengfei Li, Qixun Zhang, Wei Xue, Shanghang Zhang, Qifeng Liu, Yike Guo (2024). Dataset: GES-X. https://doi.org/10.57702/b87darri

DOI retrieved: December 2, 2024

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Defined In: https://doi.org/10.48550/arXiv.2405.16874
Author: Xingqun Qi
More Authors: Hengyuan Zhang, Yatian Wang, Jiahao Pan, Chen Liu, Peng Li, Xiaowei Chi, Mengfei Li, Qixun Zhang, Wei Xue, Shanghang Zhang, Qifeng Liu, Yike Guo
Homepage: https://mattie-e.github.io/GES-X/