10 datasets found

Groups: Speech Synthesis · Formats: JSON

  • Mega-TTS: Zero-Shot Text-to-Speech at Scale with Intrinsic Inductive Bias

    Scaling text-to-speech to large, in-the-wild datasets has proven highly effective for generalizing timbre and speech style, particularly in zero-shot TTS....
  • DeviceTTS

    A small-footprint, fast, and stable network for on-device text-to-speech synthesis.
  • SNIPER Training: Single-Shot Sparse Training for Text-to-Speech

    Text-to-speech (TTS) models have achieved remarkable naturalness in recent years, yet like most deep neural models, they have more parameters than necessary (a single-shot pruning sketch follows this list). Sparse TTS models...
  • Non-Attentive Tacotron

    Non-Attentive Tacotron is a neural text-to-speech model that combines a robust duration predictor with an autoregressive decoder (a duration-based upsampling sketch follows this list).
  • Style Tokens

    Global Style Tokens (GSTs) are a recently-proposed method to learn latent disentangled representations of high-dimensional data. GSTs can be used within Tacotron, a...
  • Tacotron

    Tacotron is an end-to-end generative text-to-speech model that synthesizes speech directly from characters, using a sequence-to-sequence architecture with attention to predict spectrogram frames that are then converted to a waveform.
  • Global Style Tokens

    Global Style Tokens (GSTs) are a recently-proposed method to learn latent disentangled representations of high-dimensional data (a minimal attention-over-tokens sketch follows this list). GSTs can be used within Tacotron, a...
  • Text-Predicted Global Style Tokens

    Text-Predicted Global Style Tokens (TP-GST) extend GST-based Tacotron by predicting the style rendering from the input text alone, either as combination weights over the style tokens or as a style embedding directly, so no reference audio is required at synthesis time.
  • VCTK

    The CSTR VCTK Corpus is a multi-speaker English speech dataset with recordings from over a hundred speakers of various accents, each reading around 400 sentences; it is widely used for multi-speaker text-to-speech and voice conversion research.
  • LibriTTS

    LibriTTS is a multi-speaker English corpus of approximately 585 hours of read speech at a 24 kHz sampling rate, derived from the LibriSpeech audiobook materials and designed for text-to-speech research.
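
The single-shot sparsity idea behind the SNIPER entry can be illustrated with a short sketch: one binary mask per weight matrix is computed before training and re-applied after every optimizer step so that pruned weights stay at zero. This is a minimal sketch only; the magnitude-based criterion, the 80% sparsity level, and the helper names `single_shot_masks` / `apply_masks` are assumptions for the example, not the procedure from the SNIPER paper.

```python
import torch
import torch.nn as nn

def single_shot_masks(model: nn.Module, sparsity: float = 0.8) -> dict:
    """Compute one fixed binary mask per weight matrix at initialization.
    The keep/prune criterion here is plain weight magnitude -- an assumption
    made for this sketch, not necessarily the criterion used by SNIPER."""
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:                        # skip biases and norm scales
            continue
        k = max(1, int(param.numel() * sparsity))  # number of weights to prune
        threshold = param.detach().abs().flatten().kthvalue(k).values
        masks[name] = (param.detach().abs() > threshold).float()
    return masks

def apply_masks(model: nn.Module, masks: dict) -> None:
    """Zero out pruned weights; call once after every optimizer step
    so the pruned positions remain zero throughout training."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])
```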
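
Non-Attentive Tacotron replaces decoder attention with predicted durations that upsample encoder states to frame rate before the autoregressive decoder runs. The sketch below shows the simplest form of that step, plain repetition of each encoder state; the model itself uses a smoother Gaussian upsampling, and the function name and tensor shapes here are illustrative assumptions.

```python
import torch

def length_regulate(encoder_out: torch.Tensor, durations: torch.Tensor) -> torch.Tensor:
    """Expand phoneme-level encoder states to frame level by repeating each
    state durations[i] times. Non-Attentive Tacotron uses Gaussian upsampling
    instead of hard repetition; repetition is shown here for brevity.

    encoder_out: (num_phonemes, hidden_dim)
    durations:   (num_phonemes,) integer frame counts from a duration predictor
    """
    return torch.repeat_interleave(encoder_out, durations, dim=0)

# Example: 3 phonemes with hidden size 4, predicted to last 2, 1 and 3 frames.
enc = torch.randn(3, 4)
dur = torch.tensor([2, 1, 3])
frames = length_regulate(enc, dur)   # shape (6, 4), fed to the autoregressive decoder
```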
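
The GST mechanism shared by several entries above amounts to attention over a small bank of learned token embeddings: a reference embedding queries the bank, and the softmax-weighted sum becomes a style embedding that conditions Tacotron. Below is a minimal single-head sketch; the published layer uses multi-head attention, and the class name, dimensions, and single head are assumptions made for brevity.

```python
import torch
import torch.nn as nn

class StyleTokenLayer(nn.Module):
    """Minimal global-style-token layer: a learned token bank plus
    single-head dot-product attention from a reference embedding."""

    def __init__(self, num_tokens: int = 10, token_dim: int = 256, ref_dim: int = 128):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(num_tokens, token_dim))
        self.query_proj = nn.Linear(ref_dim, token_dim)

    def forward(self, ref_embedding: torch.Tensor) -> torch.Tensor:
        # ref_embedding: (batch, ref_dim), e.g. from a reference-audio encoder
        query = self.query_proj(ref_embedding)                          # (batch, token_dim)
        scores = query @ self.tokens.t() / self.tokens.size(1) ** 0.5   # (batch, num_tokens)
        weights = torch.softmax(scores, dim=-1)                         # interpretable token weights
        return weights @ torch.tanh(self.tokens)                        # (batch, token_dim) style embedding

layer = StyleTokenLayer()
style = layer(torch.randn(2, 128))   # two style embeddings of size 256
```

Because the style embedding is just a weighted sum over a fixed token bank, the weights can also be set by hand or, as in the Text-Predicted Global Style Tokens entry above, predicted from the input text so that no reference audio is needed at synthesis time.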