🤖 AI Summary
This work proposes a training-free, multi-step inference framework for target speaker extraction that overcomes a limitation of conventional single-step approaches, which often leave separation quality under-optimized. Building on a frozen pre-trained model, the method introduces test-time scaling to this task for the first time: it iteratively generates candidate signals by interpolating the mixture with the previous estimate, and progressively refines the output via a joint optimization strategy that combines candidate selection with multiple objective metrics, namely SI-SDRi, UTMOS, and SpkSim. Experiments demonstrate significant improvements across multiple evaluation metrics when clean ground-truth target speech is available. Even in its absence, the framework enables preference-guided, high-quality extraction through controllable optimization, exhibiting strong practical adaptability for real-world deployment.
📝 Abstract
Target speaker extraction (TSE) aims to recover a target speaker's speech from a mixture using a reference utterance as a cue. Most TSE systems adopt conditional auto-encoder architectures with one-step inference. Inspired by test-time scaling, we propose a training-free multi-step inference method that enables iterative refinement with a frozen pretrained model. At each step, new candidates are generated by interpolating the original mixture and the previous estimate, and the best candidate is selected for further refinement until convergence. Experiments show that, when ground-truth target speech is available, optimizing an intrusive metric (SI-SDRi) yields consistent gains across multiple evaluation metrics. Without ground truth, optimizing non-intrusive metrics (UTMOS or SpkSim) improves the corresponding metric but may hurt others. We therefore introduce joint metric optimization to balance these objectives, enabling controllable extraction preferences for practical deployment.
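The iterative refinement loop described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the `model` callable, the interpolation weights `alphas`, the convergence tolerance, and the `si_sdr` helper are all assumptions standing in for the frozen pre-trained TSE model and the paper's metrics.

```python
import numpy as np

def si_sdr(est, ref, eps=1e-8):
    """Scale-invariant SDR in dB (standard intrusive metric; needs a reference)."""
    scale = np.dot(est, ref) / (np.dot(ref, ref) + eps)
    target = scale * ref
    noise = est - target
    return 10.0 * np.log10((np.sum(target ** 2) + eps) / (np.sum(noise ** 2) + eps))

def multi_step_tse(model, mixture, spk_ref, metric,
                   alphas=(0.2, 0.4, 0.6, 0.8), max_steps=10, tol=1e-3):
    """Training-free multi-step inference sketch.

    model   : frozen TSE model, model(mixture_like, spk_ref) -> estimate
    metric  : scores a candidate (higher is better), e.g. SI-SDRi, UTMOS,
              SpkSim, or a weighted combination for joint metric optimization
    alphas  : interpolation weights between the original mixture and the
              previous estimate (illustrative values)
    """
    estimate = model(mixture, spk_ref)          # conventional one-step output
    best_score = metric(estimate)
    for _ in range(max_steps):
        # Generate candidates by re-running the frozen model on interpolations
        # of the original mixture and the previous estimate.
        candidates = [model(a * mixture + (1.0 - a) * estimate, spk_ref)
                      for a in alphas]
        scores = [metric(c) for c in candidates]
        i = int(np.argmax(scores))
        if scores[i] <= best_score + tol:       # converged: no candidate improves
            break
        estimate, best_score = candidates[i], scores[i]
    return estimate
```

Swapping the `metric` closure is what makes the extraction preference controllable: with ground truth it can score SI-SDRi, and without it a weighted sum of non-intrusive scores such as UTMOS and SpkSim can balance the competing objectives.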