🤖 AI Summary
This study investigates the impact of training objectives on model performance in generative speech enhancement, aiming to improve speech clarity, intelligibility, and subjective perceptual quality under noisy conditions. We propose a novel perception-aware loss function within the Schrödinger Bridge (SB) framework, jointly incorporating psychoacoustic priors and objective intelligibility constraints. A systematic comparison is conducted between score-matching-based and SB-based diffusion modeling paradigms, revealing differences in convergence behavior and generalization capability. Experimental results demonstrate that our method achieves notable improvements over strong baselines: +0.32 DNS-MOS and +2.1% STOI. All code and pre-trained models are publicly released.
📝 Abstract
Generative speech enhancement has recently shown promising advances in improving speech quality in noisy environments. Multiple diffusion-based frameworks exist, each employing distinct training objectives and learning techniques. This paper aims to explain the differences between these frameworks, focusing on score-based generative models and the Schrödinger bridge. We conduct a series of comprehensive experiments to compare their performance and highlight differing training behaviors. Furthermore, we propose a novel perceptual loss function tailored to the Schrödinger bridge framework, demonstrating improved performance and perceptual quality of the enhanced speech signals. All experimental code and pre-trained models are publicly available to facilitate further research and development in this domain.