MEAD

The MEAD dataset is a large-scale, high-quality emotional audio-visual dataset of talking-head videos from 60 actors, covering 8 basic emotions at 3 emotional-intensity levels.

Data and Resources

This dataset has no data

Cite this as

Chao Xu, Junwei Zhu, Jiangning Zhang, Yue Han, Wenqing Chu, Ying Tai, Chengjie Wang, Zhifeng Xie, Yong Liu (2024). Dataset: MEAD. https://doi.org/10.57702/nr14hlyo

Private DOI: This DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Field: Value
Created: December 2, 2024
Last update: December 2, 2024
Defined in: https://doi.org/10.1016/j.neunet.2024.106120
Citation:
  • https://doi.org/10.48550/arXiv.2405.15758
  • https://doi.org/10.48550/arXiv.2305.02572
Author: Chao Xu
More authors:
  • Junwei Zhu
  • Jiangning Zhang
  • Yue Han
  • Wenqing Chu
  • Ying Tai
  • Chengjie Wang
  • Zhifeng Xie
  • Yong Liu
Homepage: https://arxiv.org/abs/2002.10137