🤖 AI Summary
Existing target speaker extraction methods rely on static speaker embeddings, neglecting the dynamic time-frequency interactions between the mixed speech and the enrolled utterance—leading to target speaker confusion and insufficient robustness. To address this, we propose a dual-stream contextual fusion framework. Specifically, we design a Dual-Stream Fusion Block (DSFB) in the time-frequency domain that jointly models bidirectional interactions between the mixture and enrollment signals across both spatial and channel dimensions. We further introduce contextualized representation learning, cross-modal feature alignment, and an adaptive gating fusion mechanism. Our approach substantially mitigates target confusion, reducing the error rate to only 0.4%, and attains a state-of-the-art SI-SDR improvement (SI-SDRi) of 21.6 dB on standard benchmarks. Moreover, it demonstrates strong robustness under challenging acoustic conditions, including noise and reverberation.
📝 Abstract
Target speaker extraction aims to extract a target speech signal from a multi-speaker environment by leveraging an enrollment utterance. Existing methods predominantly rely on speaker embeddings obtained from the enrollment, potentially disregarding the contextual information and the internal interactions between the mixture and the enrollment. In this paper, we propose a novel Dual-Stream Contextual Fusion Network (DCF-Net) in the time-frequency (T-F) domain. Specifically, a Dual-Stream Fusion Block (DSFB) is introduced to obtain contextual information and capture the interactions between the contextualized enrollment and mixture representations across both spatial and channel dimensions; the resulting rich and consistent representations then guide the extraction network toward better extraction. Experimental results demonstrate that DCF-Net outperforms state-of-the-art (SOTA) methods, achieving a scale-invariant signal-to-distortion ratio improvement (SI-SDRi) of 21.6 dB on the benchmark dataset, and that it remains robust and effective in both noisy and reverberant scenarios. In addition, the rate of wrong extractions by our model, known as the target confusion problem, is reduced to 0.4%, which highlights the potential of DCF-Net for practical applications.
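The abstract does not spell out the fusion equations, but the adaptive gating fusion it mentions can be illustrated with a minimal sketch: a learned sigmoid gate, computed from the concatenated mixture and enrollment features, interpolates between the two streams element-wise. All names, shapes, and the single-projection parameterization below are hypothetical simplifications, not the paper's actual DSFB.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(mixture, enrollment, w, b):
    """Toy adaptive gated fusion of two feature streams.

    mixture, enrollment: (T, C) arrays of T-F feature frames.
    w: (2C, C) projection and b: (C,) bias -- hypothetical
    stand-ins for the learned gating parameters.
    """
    # Gate in (0, 1), computed from both streams jointly.
    gate = sigmoid(np.concatenate([mixture, enrollment], axis=-1) @ w + b)
    # Element-wise convex combination of the two streams.
    return gate * mixture + (1.0 - gate) * enrollment

rng = np.random.default_rng(0)
T, C = 5, 8
mix = rng.standard_normal((T, C))
enr = rng.standard_normal((T, C))
w = rng.standard_normal((2 * C, C)) * 0.1
b = np.zeros(C)

fused = gated_fusion(mix, enr, w, b)
print(fused.shape)  # → (5, 8)
```

Because the gate lies in (0, 1), each fused value is bounded by the corresponding mixture and enrollment values, which is what makes this a soft, per-element interpolation rather than a hard selection of one stream.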