TESS: Text-to-Text Self-Conditioned Simplex Diffusion

Diffusion models have emerged as a powerful paradigm for generation, achieving strong performance in various continuous domains. However, applying continuous diffusion models to natural language remains challenging due to the discrete nature of text and the large number of diffusion steps required to generate it, which makes diffusion-based generation expensive.

Cite this as

Rabeeh Karimi Mahabadi, James Henderson, Iz Beltagy, Hamish Ivison, Matthew E. Peters, Jaesung Tae, Arman Cohan (2024). Dataset: TESS: Text-to-Text Self-Conditioned Simplex Diffusion. https://doi.org/10.57702/bvpixosy

DOI retrieved: December 3, 2024
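
A BibTeX sketch of the citation above, for convenience; the entry type and citation key are illustrative and not prescribed by the dataset record:

    @misc{karimimahabadi2024tess,
      author = {Karimi Mahabadi, Rabeeh and Henderson, James and Beltagy, Iz and
                Ivison, Hamish and Peters, Matthew E. and Tae, Jaesung and Cohan, Arman},
      title  = {Dataset: {TESS}: Text-to-Text Self-Conditioned Simplex Diffusion},
      year   = {2024},
      doi    = {10.57702/bvpixosy},
      url    = {https://doi.org/10.57702/bvpixosy}
    }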

Additional Info

Created: December 3, 2024
Last update: December 3, 2024
Defined in: https://doi.org/10.48550/arXiv.2305.08379
Authors: Rabeeh Karimi Mahabadi, James Henderson, Iz Beltagy, Hamish Ivison, Matthew E. Peters, Jaesung Tae, Arman Cohan
Homepage: https://github.com/allenai/tess-diffusion