
EAT: Enhanced ASR-TTS for Self-Supervised Speech Recognition

Self-supervised ASR-TTS models degrade on out-of-domain data. Here we propose an enhanced ASR-TTS model that incorporates two main features: 1) the ASR→TTS direction is equipped with a language-model reward that penalizes ASR hypotheses before they are forwarded to TTS; 2) in the TTS→ASR direction, a hyper-parameter is introduced to scale the attention context from synthesized speech before it is sent to ASR, to handle out-of-domain data.
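The two mechanisms above can be sketched in a few lines. The following is a minimal, illustrative PyTorch sketch, not the authors' implementation: the function names, the softmax-based form of the language-model reward, and the default values of lm_weight and alpha are assumptions made purely for illustration.

```python
import torch

def asr_to_tts_loss(tts_losses, lm_log_probs, lm_weight=0.1):
    """ASR->TTS direction (sketch): weight each ASR hypothesis's TTS
    reconstruction loss by a language-model reward, so that hypotheses the
    LM considers unlikely are penalized before reaching TTS.

    tts_losses   -- per-hypothesis TTS loss, shape (n_hypotheses,)
    lm_log_probs -- per-hypothesis LM log-probability, shape (n_hypotheses,)
    lm_weight    -- assumed scaling hyper-parameter for the LM reward
    """
    # Higher LM log-probability -> larger weight on that hypothesis's loss.
    weights = torch.softmax(lm_weight * lm_log_probs, dim=0)
    return (weights * tts_losses).sum()

def scale_attention_context(context, alpha=0.5):
    """TTS->ASR direction (sketch): scale the attention context computed
    from synthesized speech by a hyper-parameter alpha before the ASR
    decoder consumes it, damping the mismatch on out-of-domain data."""
    return alpha * context

if __name__ == "__main__":
    tts_losses = torch.tensor([1.2, 0.8, 2.0])    # per-hypothesis TTS loss
    lm_scores = torch.tensor([-3.0, -1.0, -7.0])  # per-hypothesis LM log-prob
    print(asr_to_tts_loss(tts_losses, lm_scores))
```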

Data and Resources

This dataset has no data

Cite this as

Murali Karthick Baskar, Lukáš Burget, Shinji Watanabe, Ramon Fernandez Astudillo, Jan "Honza" Černocký (2024). Dataset: EAT: Enhanced ASR-TTS for Self-Supervised Speech Recognition. https://doi.org/10.57702/24jypnyc

Private DOI: this DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Field Value
Created December 16, 2024
Last update December 16, 2024
Defined In https://doi.org/10.48550/arXiv.2104.07474
Author Murali Karthick Baskar
More Authors
Lukáš Burget
Shinji Watanabe
Ramon Fernandez Astudillo
Jan "Honza" Černocký