🤖 AI Summary
Target Speaker Extraction (TSE) aims to isolate a target speaker's speech from a mixture using an enrollment utterance as a reference; however, existing generative approaches often rely on complex pipelines and pre-trained components, resulting in high computational overhead and limited modeling capacity. This paper proposes FlowTSE, the first end-to-end TSE framework leveraging Conditional Flow Matching (CFM), which directly generates the target speaker's spectrogram conditioned on the mixture's mel-spectrogram and the enrollment speech. The authors further introduce a novel vocoder conditioned on the complex-valued STFT of the mixture, improving phase reconstruction fidelity. FlowTSE requires no pre-trained modules or cascaded processing stages. Evaluated on standard TSE benchmarks, it matches or surpasses state-of-the-art baselines while offering architectural simplicity and computational efficiency.
📝 Abstract
Target speaker extraction (TSE) aims to isolate a specific speaker's speech from a mixture using speaker enrollment as a reference. While most existing approaches are discriminative, recent generative methods achieve strong results; however, generative TSE remains underexplored, with most existing approaches relying on complex pipelines and pretrained components that incur computational overhead. In this work, we present FlowTSE, a simple yet effective TSE approach based on conditional flow matching. Our model receives an enrollment audio sample and a mixed speech signal, both represented as mel-spectrograms, and extracts the target speaker's clean speech. Furthermore, for tasks where phase reconstruction is crucial, we propose a novel vocoder conditioned on the complex STFT of the mixed signal, enabling improved phase estimation. Experimental results on standard TSE benchmarks show that FlowTSE matches or outperforms strong baselines.
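To make the conditional flow matching objective concrete, here is a minimal, assumption-based sketch of one CFM training step. This is not the paper's implementation: the `model` callable, the linear interpolation path, and the tensor shapes (batch of mel-spectrograms conditioned on mixture/enrollment features) are illustrative choices; the actual FlowTSE architecture and conditioning scheme are defined in the paper.

```python
import numpy as np

def cfm_training_step(x1, cond, model, rng):
    """One conditional flow matching step (hedged sketch, not the paper's code).

    x1:    batch of target clean mel-spectrograms, shape (B, T, F)
    cond:  conditioning features (e.g. mixture mel + enrollment features)
    model: callable predicting a velocity field v(x_t, t, cond)
    rng:   numpy random Generator
    """
    B = x1.shape[0]
    x0 = rng.standard_normal(x1.shape)       # Gaussian noise sample
    t = rng.uniform(size=(B, 1, 1))          # random time in [0, 1] per example
    x_t = (1.0 - t) * x0 + t * x1            # linear probability path from noise to data
    v_target = x1 - x0                       # velocity of the linear path
    v_pred = model(x_t, t, cond)             # network's velocity prediction
    loss = np.mean((v_pred - v_target) ** 2) # flow matching regression loss (MSE)
    return loss
```

At inference, one would integrate the learned velocity field from noise at t=0 to t=1 (e.g. with an Euler solver) to obtain the target speaker's spectrogram, then pass it to the vocoder.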