USEF-TSE: Universal Speaker Embedding-Free Target Speaker Extraction

📅 2024-09-04
🏛️ arXiv.org
📈 Citations: 2
Influential: 1
🤖 AI Summary
Target speaker extraction (TSE) typically relies on pre-trained speaker verification models to extract reference embeddings, which suffers from suboptimal embedding representations and difficulty in model selection. This paper proposes USEF-TSE, a universal embedding-free TSE framework that bypasses explicit speaker embeddings and instead learns frame-level target-speaker features directly from the enrollment utterance. The key contributions are: (1) a multi-head cross-attention mechanism serving as a frame-level target speaker feature extractor; (2) removal of the dependency on speaker recognition models, allowing fuller use of the speaker characteristics and contextual details in the enrollment speech; and (3) seamless integration with end-to-end time-domain and time-frequency-domain separation architectures. USEF-TSE achieves state-of-the-art SI-SDR performance on WSJ0-2mix, WHAM!, and WHAMR!, and demonstrates strong generalization on LibriMix and the ICASSP 2023 DNS Challenge blind test set.

📝 Abstract
Target speaker extraction aims to separate the voice of a specific speaker from mixed speech. Traditionally, this process has relied on extracting a speaker embedding from a reference speech, in which a speaker recognition model is required. However, identifying an appropriate speaker recognition model can be challenging, and using the target speaker embedding as reference information may not be optimal for target speaker extraction tasks. This paper introduces a Universal Speaker Embedding-Free Target Speaker Extraction (USEF-TSE) framework that operates without relying on speaker embeddings. USEF-TSE utilizes a multi-head cross-attention mechanism as a frame-level target speaker feature extractor. This innovative approach allows mainstream speaker extraction solutions to bypass the dependency on speaker recognition models and better leverage the information available in the enrollment speech, including speaker characteristics and contextual details. Additionally, USEF-TSE can seamlessly integrate with other time-domain or time-frequency domain speech separation models to achieve effective speaker extraction. Experimental results show that our proposed method achieves state-of-the-art (SOTA) performance in terms of Scale-Invariant Signal-to-Distortion Ratio (SI-SDR) on the WSJ0-2mix, WHAM!, and WHAMR! datasets, which are standard benchmarks for monaural anechoic, noisy and noisy-reverberant two-speaker speech separation and speaker extraction. The results on the LibriMix and the blind test set of the ICASSP 2023 DNS Challenge demonstrate that the model performs well on more diverse and out-of-domain data. For access to the source code, please visit: https://github.com/ZBang/USEF-TSE.
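The core idea of the frame-level extractor can be illustrated with a minimal sketch: mixture frames act as queries and enrollment frames act as keys and values in multi-head cross-attention, so each mixture frame attends over the whole enrollment utterance instead of a single pooled speaker embedding. This is not the paper's implementation; the function name, dimensions, and random weights (standing in for learned projections) are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_extractor(mix, enroll, num_heads, rng):
    """Frame-level target-speaker features via multi-head cross-attention.

    mix:    (T_mix, d) frame features of the mixture (queries)
    enroll: (T_enr, d) frame features of the enrollment speech (keys/values)
    Returns (T_mix, d): one target-speaker feature per mixture frame,
    with no fixed-length speaker embedding anywhere in the pipeline.
    """
    T_mix, d = mix.shape
    d_head = d // num_heads
    # Random projections stand in for learned weights (illustration only).
    Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(4))
    # Split projections into heads: (H, T, d_head).
    q = (mix @ Wq).reshape(T_mix, num_heads, d_head).transpose(1, 0, 2)
    k = (enroll @ Wk).reshape(-1, num_heads, d_head).transpose(1, 0, 2)
    v = (enroll @ Wv).reshape(-1, num_heads, d_head).transpose(1, 0, 2)
    # Each mixture frame attends over all enrollment frames.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)  # (H, T_mix, T_enr)
    attn = softmax(scores, axis=-1)
    # Merge heads back to (T_mix, d) and apply the output projection.
    out = (attn @ v).transpose(1, 0, 2).reshape(T_mix, d)
    return out @ Wo
```

The resulting (T_mix, d) feature map can then condition any time-domain or time-frequency domain separation backbone, which is what allows the framework to plug into mainstream separation models.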
Problem

Research questions and friction points this paper is trying to address.

Selecting an appropriate pre-trained speaker recognition model for embedding extraction is difficult
Fixed-length speaker embeddings may be suboptimal reference information for extraction
Pooled embeddings discard contextual details present in the enrollment speech
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-head cross-attention serves as a frame-level target speaker feature extractor
Removes the dependency on pre-trained speaker recognition models
Integrates seamlessly with time-domain or time-frequency domain separation models