🤖 AI Summary
To address the output inconsistency and scenario mismatch arising from treating speaker extraction and diarization as decoupled tasks in complex overlapping speech, this paper proposes an end-to-end jointly optimized framework that unifies frequency-domain speech separation with time-domain speaker activity annotation. The method integrates deep clustering, mask estimation, speaker activity detection, and waveform-level separation modules, supporting a variable number of speakers and arbitrary overlap ratios. A bidirectional cooperation mechanism enables mutual enhancement between extraction and diarization, breaking away from conventional cascaded pipelines. Evaluated on LibriMix, SparseLibriMix, and the real-world telephone conversation dataset CALLHOME, the approach achieves significant improvements over state-of-the-art methods on both tasks, demonstrating simultaneous gains in extraction quality (e.g., SI-SNRi) and diarization accuracy (e.g., DER).
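The extraction metric named above, SI-SNRi (scale-invariant signal-to-noise ratio improvement), has a standard definition independent of this paper: the SI-SNR of the extracted signal minus the SI-SNR of the unprocessed mixture, both measured against the reference. The sketch below is a minimal NumPy implementation of that standard definition, not the paper's evaluation code; the function names are illustrative.

```python
import numpy as np


def si_snr(est, ref, eps=1e-8):
    """Scale-invariant SNR in dB between an estimate and a reference signal."""
    est = est - est.mean()  # remove DC offset from both signals
    ref = ref - ref.mean()
    # Project the estimate onto the reference to get the "target" component.
    proj = (est @ ref) / (ref @ ref + eps) * ref
    noise = est - proj  # everything orthogonal to the reference counts as noise
    return 10 * np.log10((proj @ proj) / (noise @ noise + eps))


def si_snr_improvement(est, mix, ref):
    """SI-SNRi: gain of the estimate over the raw mixture, in dB."""
    return si_snr(est, ref) - si_snr(mix, ref)
```

A positive SI-SNRi means the system moved the signal closer to the target than the mixture was; reporting the improvement rather than the raw SI-SNR normalizes for how hard each mixture is.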
📝 Abstract
Speaker extraction and diarization are two enabling techniques for real-world speech applications. Speaker extraction aims to extract a target speaker's voice from a speech mixture, while speaker diarization demarcates speech segments by speaker, annotating "who spoke when". Previous studies have typically treated the two tasks independently. In practical applications, it is more meaningful to know "who spoke what and when", which is jointly captured by the two tasks. The two tasks share a similar objective of disentangling speakers: speaker extraction operates in the frequency domain, whereas diarization operates in the temporal domain. It is logical to believe that speaker activities obtained from speaker diarization can benefit speaker extraction, while the extracted speech offers more accurate speaker activity detection than the speech mixture. In this paper, we propose a unified model called Universal Speaker Extraction and Diarization (USED) to address output inconsistency and scenario mismatch issues. It is designed to manage speech mixtures with varying overlap ratios and a variable number of speakers. We show that the USED model significantly outperforms the competitive baselines for speaker extraction and diarization tasks on the LibriMix and SparseLibriMix datasets. We further validate the diarization performance on CALLHOME, a dataset based on real recordings, and experimental results indicate that our model surpasses recently proposed approaches.
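The diarization metric mentioned above, DER (diarization error rate), is the sum of missed speech, false alarm, and speaker confusion time divided by total reference speech time. As a hedged illustration of that standard definition (not this paper's scoring code), the sketch below computes a simplified frame-level DER, assuming frame-aligned binary activity matrices and a fixed speaker mapping; real evaluations typically use NIST-style scoring with an optimal speaker mapping and forgiveness collars.

```python
import numpy as np


def frame_der(ref, hyp):
    """Simplified frame-level DER with a fixed speaker mapping.

    ref, hyp: (frames, speakers) binary speaker-activity matrices,
    with hypothesis speakers already aligned to reference speakers.
    Assumes the reference contains at least one active speech frame.
    """
    ref = np.asarray(ref, dtype=bool)
    hyp = np.asarray(hyp, dtype=bool)
    n_ref = ref.sum(axis=1)                 # active ref speakers per frame
    n_hyp = hyp.sum(axis=1)                 # active hyp speakers per frame
    n_correct = (ref & hyp).sum(axis=1)     # speakers both agree are active
    miss = np.maximum(n_ref - n_hyp, 0).sum()
    false_alarm = np.maximum(n_hyp - n_ref, 0).sum()
    confusion = (np.minimum(n_ref, n_hyp) - n_correct).sum()
    return (miss + false_alarm + confusion) / n_ref.sum()
```

Because the denominator is total reference speech time rather than total time, DER can exceed 1.0 on heavily over-segmented output; lower is better, with 0.0 meaning a perfect match under the given mapping.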