🤖 AI Summary
This work proposes a single-step conditional generative model to address the challenges of high latency and reliance on unreliable mixture proportion estimates in target speaker extraction from multi-talker mixed speech. Built upon the AlphaFlow framework, the method enables efficient end-to-end extraction by learning the mean velocity transport trajectory from the mixture to the target speech. It introduces a Jacobian-vector-product-free AlphaFlow objective, eliminating the need for auxiliary mixture proportion prediction, and integrates flow matching with interval-consistent teacher-student training to enhance stability. Evaluated on Libri2Mix and REAL-T datasets, the model significantly improves target speaker similarity and generalization to real-world mixtures, while also yielding notable gains in downstream automatic speech recognition performance.
📄 Abstract
In target speaker extraction (TSE), we aim to recover target speech from a multi-talker mixture using a short enrollment utterance as a reference. Recent studies on diffusion and flow-matching generators have improved target-speech fidelity. However, multi-step sampling increases latency, and one-step solutions often rely on a mixture-dependent time coordinate that can be unreliable for real-world conversations. We present AlphaFlowTSE, a one-step conditional generative model trained with a Jacobian-vector product (JVP)-free AlphaFlow objective. AlphaFlowTSE learns mean-velocity transport along a mixture-to-target trajectory starting from the observed mixture, eliminating auxiliary mixing-ratio prediction, and stabilizes training by combining flow matching with an interval-consistency teacher-student target. Experiments on Libri2Mix and REAL-T confirm that AlphaFlowTSE improves target-speaker similarity and real-mixture generalization for downstream automatic speech recognition (ASR).
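To make the "mean-velocity transport along a mixture-to-target trajectory" concrete, here is a minimal toy sketch (not the paper's implementation). It assumes a straight-line flow-matching path whose starting point is the observed mixture rather than Gaussian noise; along such a linear path the instantaneous velocity is constant, so the mean velocity over the whole interval lets a single integration step map the mixture to the target:

```python
import numpy as np

# Hypothetical toy signals; the real model operates on speech features.
rng = np.random.default_rng(0)
target = rng.standard_normal(8)       # target-speaker signal
interferer = rng.standard_normal(8)   # interfering speech
mixture = target + interferer         # observed multi-talker mixture

def z(t):
    """Point on the straight-line path from mixture (t=0) to target (t=1)."""
    return (1.0 - t) * mixture + t * target

# On a linear path the velocity dz/dt = target - mixture is constant,
# so the mean velocity over any interval [r, t] equals it exactly.
mean_velocity = target - mixture

# One-step extraction: integrate the mean velocity over [0, 1] in one step.
extracted = mixture + 1.0 * mean_velocity
print(np.allclose(extracted, target))  # True for this idealized linear path
```

In the actual model a network must predict this mean velocity from the mixture and the enrollment embedding; the abstract's interval-consistency teacher-student target trains that prediction without Jacobian-vector products.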