Self-Distillation Prototypes Network: Learning Robust Speaker Representations without Supervision

📅 2023-08-05
🏛️ IEEE International Conference on Acoustics, Speech, and Signal Processing
📈 Citations: 2
Influential: 0
🤖 AI Summary
For unsupervised speaker verification—where no speaker labels are available during training—this paper proposes the Self-Distillation Prototypes Network (SDPN). Methodologically, SDPN enforces consistency between augmented and original audio views by mapping both into a shared set of learnable speaker prototypes. It introduces a prototype-guided self-distillation mechanism and an embedding diversity regularization term to mitigate the representation collapse that arises in the absence of explicit negative samples. The approach thus combines self-distillation, learnable prototype clustering, and self-supervised representation learning. Evaluated on the VoxCeleb1 benchmark, SDPN achieves state-of-the-art performance with EERs of 1.80% (O), 1.99% (E), and 3.62% (H), all without using any speaker identity labels during training.
📝 Abstract
Training speaker-discriminative and robust speaker verification systems without explicit speaker labels remains a persistent challenge. In this paper, we propose a novel self-supervised speaker verification approach, the Self-Distillation Prototypes Network (SDPN), which effectively facilitates self-supervised speaker representation learning. SDPN assigns the representations of augmented views of an utterance to the same prototypes as the representation of the original view, thereby enabling effective knowledge transfer between the augmented and original views. Due to the lack of negative pairs in the SDPN training process, the network tends to align positive pairs very closely in the embedding space, a phenomenon known as model collapse. To mitigate this problem, we introduce a diversity regularization term on the embeddings in SDPN. Comprehensive experiments on the VoxCeleb datasets demonstrate the superiority of SDPN among self-supervised speaker verification approaches. SDPN sets a new state of the art on the VoxCeleb1 speaker verification evaluation benchmark, achieving Equal Error Rates of 1.80%, 1.99%, and 3.62% on the VoxCeleb1-O, VoxCeleb1-E, and VoxCeleb1-H trials, without using any speaker labels in training. Ablation studies show that both the proposed learnable prototypes in the self-distillation network and the diversity regularization contribute to the verification performance.
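The abstract describes two ingredients: (1) matching the augmented view's soft assignment over shared learnable prototypes to the original view's assignment, and (2) a diversity term that keeps batch embeddings from collapsing onto each other. The paper does not give the exact formulation here, so the following is only a hedged sketch of how such an objective could look, with a DINO-style temperature-sharpened teacher and a pairwise-similarity penalty as the assumed diversity term; all function names, temperatures, and the weight `lam` are illustrative, not the authors' actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def sdpn_loss_sketch(emb_orig, emb_aug, prototypes,
                     tau_teacher=0.04, tau_student=0.1, lam=0.1):
    """Hypothetical sketch of a prototype self-distillation objective.

    emb_orig:   (B, D) embeddings of the original views (teacher branch).
    emb_aug:    (B, D) embeddings of the augmented views (student branch).
    prototypes: (K, D) shared learnable prototype vectors.
    """
    emb_orig = l2_normalize(emb_orig)
    emb_aug = l2_normalize(emb_aug)
    prototypes = l2_normalize(prototypes)

    # Teacher: sharper assignment over prototypes (lower temperature);
    # in a real training loop this branch would carry no gradient.
    p_teacher = softmax(emb_orig @ prototypes.T / tau_teacher)
    # Student: softer assignment over the same shared prototypes.
    p_student = softmax(emb_aug @ prototypes.T / tau_student)

    # Cross-entropy pulls the augmented view toward the original
    # view's prototype assignment (knowledge transfer between views).
    distill = -(p_teacher * np.log(p_student + 1e-9)).sum(axis=1).mean()

    # Assumed diversity regularizer: penalize mean off-diagonal cosine
    # similarity within the batch so embeddings do not all align.
    sim = emb_aug @ emb_aug.T
    n = sim.shape[0]
    diversity = (sim.sum() - np.trace(sim)) / (n * (n - 1))

    return distill + lam * diversity

# Toy usage: a batch of 8 random 16-dim embeddings, 32 prototypes.
rng = np.random.default_rng(0)
orig = rng.normal(size=(8, 16))
aug = orig + 0.05 * rng.normal(size=(8, 16))  # mild "augmentation"
protos = rng.normal(size=(32, 16))
loss = sdpn_loss_sketch(orig, aug, protos)
```

With no negative pairs, minimizing the distillation term alone would admit the degenerate solution where every embedding lands on one prototype; the diversity penalty is what removes that incentive.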
Problem

Research questions and friction points this paper is trying to address.

Speaker Verification
Limited Training Data
Speaker Discrimination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Distillation Prototype Network
Speaker Verification
Feature Diversification
Yafeng Chen
University of Science and Technology of China
Large Audio Language Model · Speech Signal Processing · Deep Learning
Siqi Zheng
Speech Lab, Alibaba Group
Hui Wang
Speech Lab, Alibaba Group
Luyao Cheng
Speech Lab, Alibaba Group
Qian Chen
Speech Lab, Alibaba Group
Shiliang Zhang
Department of Computer Science, School of EECS, Peking University
Multimedia Information Retrieval · Multimedia Systems · Visual Search
Wen Wang
Speech Lab, Alibaba Group