Pretraining Multi-Speaker Identification for Neural Speaker Diarization

📅 2025-05-30
🤖 AI Summary
End-to-end speaker diarization relies heavily on large-scale annotated conversational data, which is scarce in real-world scenarios; generating synthetic dialogues is costly and generalizes poorly. To address this, the paper proposes multi-speaker joint identification as a pretraining task, leveraging abundant speaker-identification data, consisting of non-overlapping or lightly overlapping utterances, without requiring synthetic dialogues. Notably, it pretrains on multi-speaker recognition over fully overlapping speech mixtures. The method combines multi-speaker embedding modeling, multi-label classification pretraining, and a lightweight local diarization fine-tuning architecture. Evaluated on multiple benchmarks, the approach achieves state-of-the-art performance with 40% fewer parameters, eliminates reliance on simulated data entirely, significantly reduces storage and I/O overhead, and improves generalization to real-world overlapping speech and deployment efficiency.

📝 Abstract
End-to-end speaker diarization enables accurate overlap-aware diarization by jointly estimating multiple speakers' speech activities in parallel. This approach is data-hungry, requiring a large amount of labeled conversational data, which cannot be fully obtained from real datasets alone. To address this issue, large-scale simulated data is often used for pretraining, but it requires enormous storage and I/O capacity, and simulating data that closely resembles real conversations remains challenging. In this paper, we propose pretraining a model to identify multiple speakers from an input fully overlapped mixture as an alternative to pretraining a diarization model. This method eliminates the need to prepare a large-scale simulated dataset while leveraging large-scale speaker recognition datasets for training. Through comprehensive experiments, we demonstrate that the proposed method enables a highly accurate yet lightweight local diarization model without simulated conversational data.
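The pretraining objective described in the abstract, identifying all speakers present in a fully overlapped mixture, amounts to multi-label classification over the speaker inventory. A minimal sketch of such a loss is below, assuming a sigmoid-per-speaker output head; the function name and setup are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def multilabel_bce(logits, targets):
    """Binary cross-entropy over per-speaker logits (hypothetical sketch).

    logits:  array of shape (num_speakers,), one score per enrolled speaker
    targets: multi-hot array, 1.0 for each speaker present in the mixture
    """
    # Independent sigmoid per speaker: each speaker may be active or not,
    # so this is multi-label, not softmax multi-class.
    probs = 1.0 / (1.0 + np.exp(-logits))
    # Clip for numerical stability before taking logarithms.
    eps = 1e-7
    probs = np.clip(probs, eps, 1.0 - eps)
    return -np.mean(
        targets * np.log(probs) + (1.0 - targets) * np.log(1.0 - probs)
    )
```

With this objective, a mixture of two speakers supervises the model to score both active speakers high and all others low, without any diarization-style frame labels.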
Problem

Research questions and friction points this paper is trying to address.

Lack of labeled data for end-to-end speaker diarization
Challenges in simulating realistic conversational data
Need for lightweight diarization without simulated data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pretrains multi-speaker identification from overlapped mixtures
Leverages large-scale speaker recognition datasets
Eliminates need for simulated conversational data
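The innovations above hinge on building fully overlapped training examples directly from a speaker-recognition corpus rather than simulating conversations. A minimal sketch of that construction, assuming single-speaker utterances with known speaker IDs (names and the two-speaker choice are assumptions for illustration):

```python
import numpy as np

def make_overlapped_example(utts, speaker_ids, num_speakers, rng):
    """Build one fully overlapped pretraining example (hypothetical sketch).

    utts:        list of 1-D waveforms, one single-speaker utterance each
    speaker_ids: speaker index for each utterance
    num_speakers: size of the speaker inventory
    rng:         numpy random Generator
    """
    # Draw two distinct utterances to mix.
    idx = rng.choice(len(utts), size=2, replace=False)
    # Truncate to the shorter utterance so the overlap is total,
    # then sum the waveforms into a single fully overlapped mixture.
    length = min(len(utts[i]) for i in idx)
    mixture = sum(utts[i][:length] for i in idx)
    # Multi-hot target: 1 for every speaker present in the mixture.
    target = np.zeros(num_speakers, dtype=np.float32)
    for i in idx:
        target[speaker_ids[i]] = 1.0
    return mixture, target
```

Because examples are mixed on the fly from an existing speaker-ID dataset, no simulated conversational corpus needs to be generated or stored, which is the source of the storage and I/O savings the abstract mentions.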