USED: Universal Speaker Extraction and Diarization

📅 2023-09-19
🏛️ arXiv.org
📈 Citations: 3
Influential: 1
📄 PDF
🤖 AI Summary
To address the inconsistency and scene mismatch arising from the decoupled treatment of speaker extraction and diarization in complex overlapping speech, this paper proposes the first end-to-end jointly optimized framework that unifies frequency-domain speech separation with time-domain speaker activity annotation. The method integrates deep clustering, mask estimation, speaker activity detection, and waveform-level separation modules, supporting variable numbers of speakers and arbitrary overlap ratios. A bidirectional协同 mechanism enables mutual enhancement between extraction and diarization, breaking away from conventional cascaded pipelines. Evaluated on LibriMix, SparseLibriMix, and the real-world telephone conversation dataset CALLHOME, the approach achieves significant improvements over state-of-the-art methods on both tasks—marking the first demonstration of simultaneous gains in extraction quality (e.g., SI-SNRi) and diarization accuracy (e.g., DER).
📝 Abstract
Speaker extraction and diarization are two enabling techniques for real-world speech applications. Speaker extraction aims to extract a target speaker's voice from a speech mixture, while speaker diarization demarcates speech segments by speaker, annotating `who spoke when'. Previous studies have typically treated the two tasks independently. In practical applications, it is more meaningful to have knowledge about `who spoke what and when', which is captured by the two tasks. The two tasks share a similar objective of disentangling speakers. Speaker extraction operates in the frequency domain, whereas diarization is in the temporal domain. It is logical to believe that speaker activities obtained from speaker diarization can benefit speaker extraction, while the extracted speech offers more accurate speaker activity detection than the speech mixture. In this paper, we propose a unified model called Universal Speaker Extraction and Diarization (USED) to address output inconsistency and scenario mismatch issues. It is designed to manage speech mixtures with varying overlap ratios and variable number of speakers. We show that the USED model significantly outperforms the competitive baselines for speaker extraction and diarization tasks on LibriMix and SparseLibriMix datasets. We further validate the diarization performance on CALLHOME, a dataset based on real recordings, and experimental results indicate that our model surpasses recently proposed approaches.
Problem

Research questions and friction points this paper is trying to address.

Speaker Diarization
Speaker Extraction
Audio Separation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Universal Speaker Extraction and Diarization
Joint Optimization
Robust Performance
🔎 Similar Papers
No similar papers found.
Junyi Ao
Junyi Ao
The Chinese University of Hong Kong, Shenzhen
Speech RecognitionSelf-Supervised Learning
M
Mehmet Sinan Yildirim
Department of Electrical and Computer Engineering, National University of Singapore, Singapore 119077
M
Mengyao Ge
Saw Swee Hock School of Public Health, National University of Singapore, Singapore 117549
S
Shuai Wang
Shenzhen Research Institute of Big Data, Shenzhen 518172, China
R
Ruijie Tao
Department of Electrical and Computer Engineering, National University of Singapore, Singapore 119077
Y
Yan-min Qian
Auditory Cognition and Computational Acoustics Lab, Department of Computer Science and Engineering and the MoE Key Laboratory of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai 200240, China
L
Liqun Deng
L
Longshuai Xiao
Haizhou Li
Haizhou Li
The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), China; NUS, Singapore
Automatic Speech RecognitionSpeaker RecognitionLanguage RecognitionVoice ConversionMachine Translation