SE-DiCoW: Self-Enrolled Diarization-Conditioned Whisper

📅 2026-01-27
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses two challenges faced by existing cross-domain multi-speaker automatic speech recognition (ASR) systems: accurately disentangling speakers in overlapping speech and limited generalization across domains. The authors propose a self-enrollment mechanism that leverages speaker diarization to locate the segment of the conversation where the target speaker is most active and injects it as a fixed conditioning embedding via cross-attention at each layer of the Whisper encoder. This mitigates the Silence-Target-Non-target-Overlap (STNO) mask ambiguity inherent in multi-talker ASR, where fully overlapping speakers receive nearly identical conditioning despite differing transcriptions. Combined with refined data segmentation, model initialization, and augmentation strategies, the method substantially improves the model's ability to separate overlapping speakers and its cross-domain generalization. On the EMMA MT-ASR benchmark, the proposed system achieves a 52.4% relative reduction in macro-averaged tcpWER compared to DiCoW.
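The STNO ambiguity is easy to see in a toy example. A minimal NumPy sketch of per-frame STNO mask construction from a binary diarization matrix (the `stno_mask` helper and its class ordering are illustrative assumptions, not the paper's code): when two speakers talk over each other for an entire segment, both targets receive identical masks, so the conditioning alone cannot say which speaker to transcribe.

```python
import numpy as np

def stno_mask(diar: np.ndarray, target: int) -> np.ndarray:
    """Build per-frame STNO one-hot rows for one target speaker.

    diar: (num_speakers, num_frames) binary speaker-activity matrix.
    Returns (num_frames, 4) over [Silence, Target, Non-target, Overlap].
    (Hypothetical helper sketching the DiCoW-style conditioning input.)
    """
    tgt = diar[target].astype(bool)
    others = np.delete(diar, target, axis=0).any(axis=0)
    mask = np.zeros((diar.shape[1], 4))
    mask[~tgt & ~others, 0] = 1.0  # Silence: nobody speaks
    mask[tgt & ~others, 1] = 1.0   # Target speaks alone
    mask[~tgt & others, 2] = 1.0   # Non-target speaker(s) only
    mask[tgt & others, 3] = 1.0    # Overlap: target plus others
    return mask

# Two speakers fully overlapping for four frames:
diar = np.array([[1, 1, 1, 1],
                 [1, 1, 1, 1]])
m0 = stno_mask(diar, 0)  # conditioning when speaker 0 is the target
m1 = stno_mask(diar, 1)  # conditioning when speaker 1 is the target
assert (m0 == m1).all()  # identical masks -> ambiguous conditioning
```

This is exactly the failure case SE-DiCoW's self-enrollment is designed to break: the enrollment segment adds speaker identity that the frame-level mask cannot carry.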

📝 Abstract
Speaker-attributed automatic speech recognition (ASR) in multi-speaker environments remains a major challenge. While some approaches achieve strong performance when fine-tuned on specific domains, few systems generalize well across out-of-domain datasets. Our prior work, Diarization-Conditioned Whisper (DiCoW), leverages speaker diarization outputs as conditioning information and, with minimal fine-tuning, demonstrated strong multilingual and multi-domain performance. In this paper, we address a key limitation of DiCoW: ambiguity in Silence-Target-Non-target-Overlap (STNO) masks, where two or more fully overlapping speakers may have nearly identical conditioning despite differing transcriptions. We introduce SE-DiCoW (Self-Enrolled Diarization-Conditioned Whisper), which uses diarization output to locate an enrollment segment anywhere in the conversation where the target speaker is most active. This enrollment segment is used as fixed conditioning via cross-attention at each encoder layer. We further refine DiCoW with improved data segmentation, model initialization, and augmentation. Together, these advances yield substantial gains: SE-DiCoW reduces macro-averaged tcpWER by 52.4% relative to the original DiCoW on the EMMA MT-ASR benchmark.
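The two mechanisms the abstract describes, selecting an enrollment segment where the target speaker is most active and injecting it as fixed conditioning via cross-attention at each encoder layer, can be sketched roughly as follows. This is a toy NumPy illustration under stated assumptions: the sliding-window selection, the single-head unprojected attention, and the residual injection are simplifications, not SE-DiCoW's actual implementation.

```python
import numpy as np

def select_enrollment(diar_target: np.ndarray, win: int) -> tuple[int, int]:
    """Return (start, end) frames of the window in which the target
    speaker is most active, anywhere in the conversation, according to
    the diarization track (hypothetical selection rule)."""
    activity = np.convolve(diar_target, np.ones(win), mode="valid")
    start = int(activity.argmax())
    return start, start + win

def cross_attend(hidden: np.ndarray, enroll: np.ndarray) -> np.ndarray:
    """Toy single-head cross-attention: encoder frames (queries) attend
    to the fixed enrollment frames (keys/values); learned projections
    and multi-head structure are omitted for clarity."""
    scores = hidden @ enroll.T / np.sqrt(hidden.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return hidden + weights @ enroll  # residual injection at each layer

# Target speaker's binary activity over 10 frames; pick the densest window.
diar = np.array([0, 1, 1, 1, 1, 0, 1, 0, 0, 1], dtype=float)
s, e = select_enrollment(diar, win=4)  # frames 1..5 are fully active

# The same enrollment features would condition every encoder layer.
rng = np.random.default_rng(0)
hidden = rng.normal(size=(6, 8))   # 6 encoder frames, dim 8
enroll = rng.normal(size=(4, 8))   # 4 enrollment frames, dim 8
out = cross_attend(hidden, enroll)
```

Because the enrollment segment is fixed for the whole utterance, each target speaker gets a distinct conditioning signal even where the STNO masks of overlapping speakers coincide.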
Problem

Research questions and friction points this paper is trying to address.

speaker-attributed ASR
multi-speaker environments
cross-domain generalization
speech overlap
STNO ambiguity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Enrollment
Diarization-Conditioned ASR
Speaker-Attributed ASR
Cross-Attention Conditioning
Whisper Enhancement