🤖 AI Summary
This paper addresses the challenge of unsupervised speaker representation learning by proposing a learnable prototype-guided self-distillation framework for robust, label-free speaker verification. Methodologically, it integrates self-distillation, contrastive-style view matching, and prototype-based clustering, operating solely on raw speech signals without speaker annotations. Its key contributions are: (1) an end-to-end learnable prototype generation and matching mechanism, replacing hand-crafted or static cluster centroids, and (2) an embedding diversity regularization term that effectively mitigates representation collapse during self-distillation. Evaluated on the VoxCeleb1 benchmark, the method achieves state-of-the-art performance among unsupervised approaches, with EERs of 1.80% (VoxCeleb1-O), 1.99% (VoxCeleb1-E), and 3.62% (VoxCeleb1-H).
📝 Abstract
Training speaker-discriminative and robust speaker verification systems without explicit speaker labels remains a persistent challenge. In this paper, we propose a novel self-supervised speaker verification approach, the Self-Distillation Prototypes Network (SDPN), which effectively facilitates self-supervised speaker representation learning. SDPN assigns the representations of augmented views of an utterance to the same prototypes as the representation of the original view, thereby enabling effective knowledge transfer between the augmented and original views. Due to the lack of negative pairs in SDPN training, the network tends to align positive pairs too closely in the embedding space, a phenomenon known as model collapse. To mitigate this problem, we introduce a diversity regularization term on the embeddings in SDPN. Comprehensive experiments on the VoxCeleb datasets demonstrate the superiority of SDPN among self-supervised speaker verification approaches. SDPN sets a new state of the art on the VoxCeleb1 speaker verification benchmark, achieving Equal Error Rates of 1.80%, 1.99%, and 3.62% on the VoxCeleb1-O, VoxCeleb1-E, and VoxCeleb1-H trials, without using any speaker labels in training. Ablation studies show that both the learnable prototypes in the self-distillation network and the diversity regularization contribute to verification performance.
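The abstract describes two components: matching the augmented view's prototype assignment to the original view's, and a diversity term that keeps embeddings from collapsing. The sketch below is a minimal NumPy illustration of that general idea, not the paper's implementation; the temperatures, the hinge-style diversity penalty, and all function names are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def sdpn_style_loss(z_orig, z_aug, prototypes, tau_t=0.04, tau_s=0.1, lam=1.0):
    """Illustrative prototype-matching self-distillation loss (not the paper's exact form).
    z_orig: embeddings of original views, shape (B, D) -- teacher branch
    z_aug:  embeddings of augmented views, shape (B, D) -- student branch
    prototypes: learnable prototype matrix, shape (K, D)
    """
    z_t, z_s, P = l2_normalize(z_orig), l2_normalize(z_aug), l2_normalize(prototypes)
    # Teacher assignment over prototypes: sharper softmax (lower temperature),
    # treated as the target (no gradient would flow through it in practice).
    targets = softmax(z_t @ P.T / tau_t)
    # Student assignment over prototypes for the augmented view.
    preds = softmax(z_s @ P.T / tau_s)
    # Cross-entropy pulls the augmented view toward the original view's prototypes.
    ce = -(targets * np.log(preds + 1e-9)).sum(axis=1).mean()
    # Diversity regularization (assumed hinge form): penalize dimensions whose
    # per-batch standard deviation falls below 1, discouraging collapse.
    std = z_s.std(axis=0)
    diversity = np.maximum(0.0, 1.0 - std).mean()
    return ce + lam * diversity
```

Without the diversity term, minimizing only the cross-entropy would admit the degenerate solution where all embeddings map to a single prototype; the penalty keeps per-dimension variance alive across the batch.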