Self-Distillation Prototypes Network: Learning Robust Speaker Representations Without Supervision

Training discriminative and robust speaker verification systems without explicit speaker labels remains a persistent challenge. In this paper, we propose a new self-supervised speaker verification approach, the Self-Distillation Prototypes Network (SDPN), which effectively facilitates self-supervised speaker representation learning.
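The core idea of self-distillation with prototypes can be illustrated with a minimal NumPy sketch. This is an illustrative assumption based on the general self-distillation-with-prototypes paradigm (teacher and student embeddings scored against a shared set of learnable prototypes, with the teacher updated as an exponential moving average of the student), not the authors' exact implementation; the function names, temperatures, and shapes below are hypothetical.

```python
import numpy as np

def softmax(logits, temp):
    # Temperature-scaled softmax over the prototype dimension.
    z = logits / temp
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sdpn_loss(student_emb, teacher_emb, prototypes, t_student=0.1, t_teacher=0.04):
    """Cross-entropy between teacher and student prototype assignments.

    student_emb, teacher_emb: (batch, dim) speaker embeddings from two
    augmented views of the same utterance. prototypes: (K, dim) learnable
    cluster centers. A sharper (lower-temperature) teacher distribution
    serves as the soft target for the student, as in DINO-style distillation.
    """
    s_logits = student_emb @ prototypes.T          # (batch, K)
    t_logits = teacher_emb @ prototypes.T          # (batch, K)
    p_teacher = softmax(t_logits, t_teacher)       # soft targets (no gradient)
    log_p_student = np.log(softmax(s_logits, t_student) + 1e-12)
    return float(-(p_teacher * log_p_student).sum(axis=-1).mean())

def ema_update(teacher_params, student_params, momentum=0.996):
    # Teacher weights track the student via an exponential moving average.
    return momentum * teacher_params + (1.0 - momentum) * student_params
```

In a full training loop, only the student receives gradients; the teacher is refreshed with `ema_update` after each step, so no speaker labels are needed at any point.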

Data and Resources

Cite this as

Yafeng Chen, Siqi Zheng, Hui Wang, Luyao Cheng, Qian Chen, Shiliang Zhang, Wen Wang (2024). Dataset: Self-Distillation Prototypes Network: Learning Robust Speaker Representations Without Supervision. https://doi.org/10.57702/8qa3kqao

DOI retrieved: December 2, 2024

Additional Info

Field         Value
Created       December 2, 2024
Last update   December 2, 2024
Defined in    https://doi.org/10.48550/arXiv.2308.02774
Author        Yafeng Chen
More authors  Siqi Zheng, Hui Wang, Luyao Cheng, Qian Chen, Shiliang Zhang, Wen Wang
Homepage      https://github.com/modelscope/3D-Speaker