97 datasets found

Tags: Speech Recognition

Filter Results
  • MuST-C

    MuST-C is a multilingual speech translation dataset, which contains at least 385 hours of audio recordings from TED Talks, with their manual transcriptions and translations at...
  • Switchboard

    Human speech data comprises a rich set of domain factors such as accent, syntactic and semantic variety, and acoustic environment. Switchboard itself is a corpus of roughly 2,400 two-sided telephone conversations in English among several hundred speakers.
  • Dictation dataset

    The dictation dataset spans 39 locales, covering Latin-script languages (Albanian, Icelandic, Slovak), Arabic (Levantine, Maghrebi), Cyrillic (Macedonian, Kazakh), Devanagari (Nepali), etc.
  • TIMIT, Aurora-4, AMI, and LibriSpeech

    Four corpora are used for our experiments: TIMIT, Aurora-4, AMI, and LibriSpeech. TIMIT contains broadband 16 kHz recordings of phonetically balanced read...
  • BABEL dataset

    The dataset used in this paper is BABEL (Bodies, Action and Language), which contains 10,881 motion sequences with 65,926 subsequences and corresponding textual labels. Note that this is a 3D human-motion dataset, distinct from the IARPA Babel multilingual speech corpus.
  • Librispeech

    The LibriSpeech dataset is a large-scale, multi-speaker corpus of roughly 1,000 hours of read English speech derived from audiobooks.
  • LibriLight

    Libri-Light is a roughly 60,000-hour corpus of unlabelled English speech derived from LibriVox audiobooks, intended as a benchmark for ASR with limited or no supervision.
  • CREMA-D

    The CREMA-D dataset is an audio-visual dataset for emotion recognition; each video contains both facial and acoustic emotional expressions.
  • Generating holistic 3D human motion from speech

    A dataset and task for generating holistic 3D human motion (face, body, and hands) from speech audio.
  • VoxCeleb

    VoxCeleb is a large-scale speaker recognition dataset of interview speech collected in the wild from online video. Speaker verification systems experience significant performance degradation on short-duration trial recordings; to address this challenge, a multi-scale feature...
  • UCONV-CONFORMER: HIGH REDUCTION OF INPUT SEQUENCE LENGTH FOR END-TO-END SPEEC...

    Optimizing modern ASR architectures is a high-priority task, since it saves substantial computational resources during model training and inference. The work proposes a...
  • VoxCeleb1

    Speaker recognition aims to identify speaker information from input speech. One sub-task is speaker verification (SV), which determines whether the test speaker's...
  • AudioMNIST

    The AudioMNIST dataset consists of 60 speakers, 33% female, who were recorded speaking individual digits (0-9) 50 times each.
  • Google Speech Commands Dataset Version II

    The Google Speech Commands Dataset Version II contains 105,829 utterances of 35 words from 2,618 speakers with a sampling rate of 16 kHz.
  • TED-LIUM dataset

    The TED-LIUM corpus consists of TED talk audio recordings with aligned manual transcriptions, widely used for ASR training and evaluation.
  • LibriSpeech dataset

    The dataset used in the paper is the LibriSpeech dataset, which contains about 1,000 hours of English speech derived from audiobooks.
  • Deep Speech model

    Deep Speech is an end-to-end deep-learning speech recognition model (a model rather than a dataset).
You can also access this registry using the API (see API Docs).
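A programmatic query against the registry might look like the sketch below. The base URL and the `tags`/`q` parameter names are assumptions for illustration only; consult the API Docs for the real endpoint and parameters.

```python
from urllib.parse import urlencode

# Hypothetical base URL -- the registry's actual endpoint is in the API Docs.
BASE_URL = "https://example.org/api/datasets"

def build_query_url(tags=None, query=None):
    """Build a registry search URL from optional tag and free-text filters."""
    params = {}
    if tags:
        # Assumed convention: multiple tags joined with commas.
        params["tags"] = ",".join(tags)
    if query:
        params["q"] = query
    return BASE_URL + ("?" + urlencode(params) if params else "")

# Reproduce the filter shown on this page: datasets tagged "Speech Recognition".
url = build_query_url(tags=["Speech Recognition"], query="librispeech")
```

The URL could then be fetched with any HTTP client (e.g. `requests.get(url)`) and the JSON response iterated to list matching datasets.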