🤖 AI Summary
This work identifies and quantifies the impact of probability-path straightness on generative speech enhancement: conventional flow-based methods, such as Schrödinger bridges, learn curved probability paths, which makes training less efficient and limits generalization. To address this, the authors propose Independent Conditional Flow Matching (ICFM), which explicitly models a straight probability path between noisy and clean speech. ICFM uses a time-independent variance, removing the reliance on time-varying gradients; empirically, this variance choice matters more for sample quality than the gradient. Combined with a one-step inference strategy, in which the trained flow model is queried as if it were directly predictive, ICFM achieves high-fidelity reconstruction while significantly accelerating inference. Experiments show that straight-path modeling consistently outperforms curved-path alternatives on objective metrics, including PESQ, STOI, and ESTOI, and that ICFM improves both training efficiency and generalization for flow-based speech enhancement.
📝 Abstract
Current flow-based generative speech enhancement methods learn curved probability paths that model a mapping between clean and noisy speech. Despite impressive performance, the implications of curved probability paths are not well understood. Methods such as Schrödinger bridges focus on curved paths, where time-dependent gradients and variance do not promote straight paths. Findings in machine learning research suggest that straight paths, such as those in conditional flow matching, are easier to train and offer better generalisation. In this paper we quantify the effect of path straightness on speech enhancement quality. We report experiments with the Schrödinger bridge, showing that certain configurations lead to straighter paths. In contrast, we propose independent conditional flow matching for speech enhancement, which models straight paths between noisy and clean speech. We demonstrate empirically that a time-independent variance has a greater effect on sample quality than the gradient. Although conditional flow matching improves several speech quality metrics, it requires multiple inference steps. We address this with a one-step solution, inferring the trained flow-based model as if it were directly predictive. Our work suggests that straighter, time-independent probability paths improve generative speech enhancement over curved, time-dependent paths.
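The two ingredients the abstract describes, a straight probability path with time-independent variance and one-step predictive inference, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the function names, the constant `sigma`, and the oracle velocity model in the usage example are all assumptions for demonstration.

```python
import numpy as np

def icfm_training_pair(x0, x1, t, sigma=0.05, rng=None):
    """Sample a point on a straight probability path between noisy speech
    x0 and clean speech x1 at time t in [0, 1].

    The mean is a linear interpolation of the endpoints and the variance
    `sigma` is constant in time (unlike curved Schrödinger-bridge paths,
    whose gradients and variance are time-dependent), so the regression
    target (the velocity) does not depend on t.  Constant sigma and these
    names are illustrative assumptions, not the paper's exact choices.
    """
    rng = rng or np.random.default_rng(0)
    mean = (1.0 - t) * x0 + t * x1              # straight-line mean
    x_t = mean + sigma * rng.standard_normal(np.shape(x0))
    v_target = x1 - x0                          # time-independent velocity target
    return x_t, v_target

def one_step_enhance(velocity_model, x0):
    """One-step inference: treat the trained flow model as if it were
    directly predictive.  For a straight path, a single Euler step of
    dx/dt = v(x, t) from t=0 to t=1 gives x0 + v(x0, 0)."""
    return x0 + velocity_model(x0, 0.0)
```

As a sanity check, an oracle velocity model `lambda x, t: x1 - x` recovers the clean signal exactly in one step, which is the intuition behind replacing multi-step ODE integration with a single predictive call.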