DELULU: Discriminative Embedding Learning Using Latent Units for Speaker-Aware Self-Supervised Speech Foundational Model

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current self-supervised speech models excel at content modeling but exhibit insufficient speaker discriminability, limiting their effectiveness in speaker verification, diarization, and zero-shot attribute analysis. To address this, we propose the first method that explicitly incorporates highly discriminative speaker embeddings—extracted by ReDimNet—into the self-supervised pretraining clustering process, thereby establishing a strong speaker-discriminative inductive bias for joint content-speaker representation learning. Specifically, frame-level speaker embeddings guide k-means clustering to generate pseudo-labels, while a dual self-supervised objective—combining masked prediction and denoising—is employed. No fine-tuning is required to learn general speaker-aware representations. Our approach achieves up to a 62% relative reduction in equal error rate on speaker verification and significantly outperforms baselines on zero-shot analysis of gender, age, accent, and number of speakers.
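The pseudo-labeling step described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the random array stands in for ReDimNet frame-level embeddings, and the cluster count and function name are assumptions.

```python
# Hypothetical sketch of DELULU-style pseudo-label generation: frame-level
# speaker embeddings (random stand-ins for ReDimNet outputs here) are
# clustered with k-means, and the resulting cluster IDs serve as discrete
# targets ("latent units") for masked prediction during pre-training.
import numpy as np
from sklearn.cluster import KMeans

def make_pseudo_labels(frame_embeddings: np.ndarray,
                       n_units: int = 500,
                       seed: int = 0) -> np.ndarray:
    """Cluster (num_frames, dim) embeddings into n_units discrete labels."""
    km = KMeans(n_clusters=n_units, n_init=10, random_state=seed)
    return km.fit_predict(frame_embeddings)

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 192))        # stand-in for ReDimNet embeddings
labels = make_pseudo_labels(emb, n_units=50)
print(labels.shape)
```

Because the targets come from speaker-discriminative embeddings rather than acoustic features alone, frames from the same speaker tend to share labels, which is the inductive bias the paper exploits.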

📝 Abstract
Self-supervised speech models have achieved remarkable success on content-driven tasks, yet they remain limited in capturing speaker-discriminative features critical for verification, diarization, and profiling applications. We introduce DELULU, a speaker-aware self-supervised foundational model that addresses this limitation by integrating external supervision into the pseudo-label generation process. DELULU leverages frame-level embeddings from ReDimNet, a state-of-the-art speaker verification model, to guide the k-means clustering step during pre-training, introducing a strong speaker-discriminative inductive bias that aligns representation learning with speaker identity. The model is trained using a dual objective that combines masked prediction and denoising, further enhancing robustness and generalization. DELULU significantly outperforms prior self-supervised learning (SSL) models across a range of speaker-centric tasks, achieving up to 62% relative improvement in equal error rate (EER) for speaker verification and consistent gains on zero-shot profiling tasks such as gender, age, accent, and speaker counting. Our findings demonstrate that DELULU is a strong universal encoder for speaker-aware speech processing, enabling superior performance even without task-specific fine-tuning.
Problem

Research questions and friction points this paper is trying to address.

Enhancing speaker-discriminative features in self-supervised speech models
Integrating external supervision into pseudo-label generation for speaker identity
Improving performance on verification, diarization, and profiling tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates external supervision into pseudo-label generation
Uses speaker embeddings to guide k-means clustering
Combines masked prediction and denoising for training
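The dual objective listed above can be sketched in a simplified form. This is an assumed formulation, not the paper's exact loss: cross-entropy over masked frames for the prediction term, cross-entropy over noise-augmented inputs for the denoising term, and a weighting coefficient `alpha` that is purely illustrative.

```python
# Hedged sketch of a dual self-supervised objective (assumed form):
# masked-prediction loss on masked frames + denoising loss on all frames,
# both computed against the same k-means pseudo-labels.
import numpy as np

def cross_entropy(logits: np.ndarray, targets: np.ndarray) -> float:
    """Mean cross-entropy for (frames, units) logits vs. integer targets."""
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(len(targets)), targets].mean())

def dual_objective(masked_logits, noisy_logits, labels, mask, alpha=0.5):
    """Weighted sum of masked-prediction and denoising losses."""
    l_mask = cross_entropy(masked_logits[mask], labels[mask])
    l_denoise = cross_entropy(noisy_logits, labels)
    return alpha * l_mask + (1 - alpha) * l_denoise

rng = np.random.default_rng(0)
T, K = 200, 50                       # frames, number of latent units
labels = rng.integers(0, K, size=T)  # pseudo-labels from clustering
mask = rng.random(T) < 0.4           # which frames were masked
loss = dual_objective(rng.normal(size=(T, K)),  # logits from masked input
                      rng.normal(size=(T, K)),  # logits from noisy input
                      labels, mask)
print(loss > 0)
```

With random logits the loss sits near log(K); during training it would fall as the model learns to predict the speaker-derived units from masked and noisy speech.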