DiffusionTalker: Personalization and Acceleration for Speech-Driven 3D Face

Speech-driven 3D facial animation has attracted attention in both academia and industry. Traditional methods mostly focus on learning a deterministic mapping from speech to animation. Recent approaches account for the non-deterministic nature of speech-driven 3D facial animation and employ diffusion models for the task.

Data and Resources

Cite this as

Peng Chen, Xiaobao Wei, Ming Lu, Yitong Zhu, Naiming Yao, Xingyu Xiao, Hui Chen (2024). Dataset: DiffusionTalker: Personalization and Acceleration for Speech-Driven 3D Face. https://doi.org/10.57702/am9oj3nk

DOI registered: December 2, 2024

Additional Info

Field Value
Created: December 2, 2024
Last update: December 2, 2024
Authors: Peng Chen, Xiaobao Wei, Ming Lu, Yitong Zhu, Naiming Yao, Xingyu Xiao, Hui Chen
Homepage: https://chenvoid.github.io/DiffusionTalker/