MEAD

The MEAD dataset is a large-scale, high-quality emotional audio-visual dataset of talking-head videos, covering 60 actors, 8 basic emotions, and 3 levels of emotional intensity.
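The scale implied by those numbers can be sketched as follows; the actor, emotion, and intensity names below are illustrative placeholders, not the dataset's actual labels or file layout.

```python
from itertools import product

# Figures from the description above: 60 actors, 8 basic emotions,
# 3 intensity levels. All identifiers here are hypothetical stand-ins.
ACTORS = [f"actor_{i:02d}" for i in range(1, 61)]
EMOTIONS = [f"emotion_{e}" for e in range(1, 9)]
INTENSITIES = ["level_1", "level_2", "level_3"]

# Every actor/emotion/intensity cell of the dataset grid.
cells = list(product(ACTORS, EMOTIONS, INTENSITIES))
print(len(cells))  # 60 * 8 * 3 = 1440 cells
```

This only enumerates the label grid; the number of video clips per cell is not stated in the description above.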

Cite this as

Chao Xu, Junwei Zhu, Jiangning Zhang, Yue Han, Wenqing Chu, Ying Tai, Chengjie Wang, Zhifeng Xie, Yong Liu (2024). Dataset: MEAD. https://doi.org/10.57702/nr14hlyo

DOI retrieved: December 2, 2024

Additional Info

Field          Value
Created        December 2, 2024
Last update    December 2, 2024
Defined In     https://doi.org/10.1016/j.neunet.2024.106120
Citation       • https://doi.org/10.48550/arXiv.2405.15758
               • https://doi.org/10.48550/arXiv.2305.02572
Author         Chao Xu
More Authors   Junwei Zhu, Jiangning Zhang, Yue Han, Wenqing Chu, Ying Tai, Chengjie Wang, Zhifeng Xie, Yong Liu
Homepage       https://arxiv.org/abs/2002.10137