
FaceDiffuser: Speech-Driven 3D Facial Animation Synthesis Using Diffusion

FaceDiffuser is a non-deterministic deep learning model for generating speech-driven 3D facial animation, trained on both 3D vertex-based and blendshape-based datasets.
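To illustrate the general idea behind a non-deterministic diffusion-based animation model, here is a minimal toy sketch of conditional reverse-diffusion sampling. This is not the paper's implementation: the function name, the fixed linear "denoiser", the step schedule, and all dimensions are illustrative assumptions, standing in for the trained network described in the paper.

```python
import numpy as np

def sample_animation(audio_features, num_steps=50, num_vertices=12, seed=0):
    """Toy reverse-diffusion loop: start from Gaussian noise and
    iteratively denoise it, conditioned on per-frame audio features.
    The 'denoiser' here is a fixed random linear map (a stand-in),
    NOT the trained network from the FaceDiffuser paper."""
    rng = np.random.default_rng(seed)
    # Initialize the animation as pure noise: one vector per audio frame.
    x = rng.standard_normal((audio_features.shape[0], num_vertices))
    # Hypothetical conditioning: project audio features to vertex space.
    W = rng.standard_normal((audio_features.shape[1], num_vertices)) * 0.1
    target = audio_features @ W  # stand-in for the model's prediction
    for t in range(num_steps, 0, -1):
        alpha = t / num_steps
        noise = rng.standard_normal(x.shape) if t > 1 else 0.0
        # Move the sample toward the (stand-in) prediction while
        # re-injecting a small amount of noise, DDPM-style; the noise
        # is what makes the output non-deterministic across seeds.
        x = alpha * x + (1 - alpha) * target + 0.05 * np.sqrt(alpha) * noise
    return x

frames = np.random.default_rng(1).standard_normal((30, 8))  # 30 audio frames, 8 features
anim = sample_animation(frames)
print(anim.shape)  # (30, 12): one vertex-offset vector per audio frame
```

Because the sampler starts from noise and re-injects noise at each step, different seeds yield different but plausible animations for the same audio, which is the non-determinism the dataset description refers to.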

Data and Resources

This dataset has no data

Cite this as

Kazi Injamamul Haque, Stefan Stan, Zerrin Yumak (2024). Dataset: FaceDiffuser: Speech-Driven 3D Facial Animation Synthesis Using Diffusion. https://doi.org/10.57702/yb4ge7ag

Private DOI: This DOI is not yet resolvable.
It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Defined in: https://doi.org/10.48550/arXiv.2309.11306
Authors: Kazi Injamamul Haque, Stefan Stan, Zerrin Yumak