FastDiff: A Fast Conditional Diffusion Model for High-Quality Speech Synthesis

FastDiff is a fast conditional diffusion model for high-quality speech synthesis. It employs a stack of time-aware location-variable convolutions with diverse receptive field patterns to model long-term time dependencies with adaptive conditions.
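The core idea of a location-variable convolution can be illustrated with a minimal NumPy sketch. This is a hypothetical illustration, not the paper's implementation: each segment of the waveform is filtered with its own kernel (in FastDiff these kernels would be predicted from local conditioning such as mel-spectrogram frames and the diffusion step), and varying the dilation produces the diverse receptive-field patterns described above.

```python
import numpy as np

def location_variable_conv(x, kernels, dilation=1):
    """Apply a different 1-D kernel to each equal-length segment of x.

    x        : (T,) input signal; T must be divisible by the number of segments
    kernels  : (S, K) one odd-length kernel per segment -- in FastDiff these
               would come from a kernel predictor over conditioning features
               (assumption: here they are simply passed in)
    dilation : spacing between kernel taps (larger => wider receptive field)
    """
    T = x.shape[0]
    S, K = kernels.shape
    seg_len = T // S
    pad = dilation * (K - 1) // 2          # "same" padding for odd K
    xp = np.pad(x, (pad, pad))
    y = np.empty(T)
    for s in range(S):
        for i in range(seg_len):
            t = s * seg_len + i
            # K dilated taps centred on position t of the padded signal
            taps = xp[t : t + dilation * K : dilation]
            y[t] = taps @ kernels[s]       # segment-specific kernel
    return y
```

For example, with an identity kernel `[0, 1, 0]` in every segment the output reproduces the input at any dilation, while distinct per-segment kernels let the filter adapt to local acoustic conditions.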


Cite this as

Rongjie Huang, Max W. Y. Lam, Jun Wang, Dan Su, Dong Yu, Yi Ren, Zhou Zhao (2024). Dataset: FastDiff: A Fast Conditional Diffusion Model for High-Quality Speech Synthesis. https://doi.org/10.57702/btbb2gy0

DOI retrieved: December 2, 2024

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Defined in: https://doi.org/10.48550/arXiv.2204.09934
Author: Rongjie Huang
More authors: Max W. Y. Lam, Jun Wang, Dan Su, Dong Yu, Yi Ren, Zhou Zhao