🤖 AI Summary
This paper addresses training instability and magnitude imbalance in diffusion-based speech enhancement by proposing a Schrödinger Bridge framework. Methodologically: (i) time-dependent scalings of the network's inputs and outputs (preconditioning) stabilize and improve training; (ii) a magnitude-preserving network architecture normalizes all weights and activations to unit length and learns the contribution of the noisy input within each block, while two skip-connection configurations let the network predict either environmental noise or clean speech, each improving different metrics; (iii) after training, different exponential moving average (EMA) profiles are approximated, and, in contrast to common image-generation practice, shorter EMA lengths consistently perform better. Experiments demonstrate improvements over prior methods on standard metrics including PESQ and STOI. Code, pre-trained checkpoints, and audio examples are publicly released.
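The unit-norm weight normalization mentioned in (ii) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the names `forced_weight_norm` and `mp_linear` are hypothetical, and the sketch assumes a plain linear layer whose weight rows are renormalized to unit L2 norm before every forward pass:

```python
import numpy as np

def forced_weight_norm(w, eps=1e-8):
    """Rescale each output row of w to unit L2 norm, keeping the
    weights at unit length (illustrative sketch of the idea)."""
    norms = np.linalg.norm(w, axis=1, keepdims=True)
    return w / (norms + eps)

def mp_linear(x, w):
    """Magnitude-preserving linear layer: with unit-norm weight rows
    and roughly uncorrelated unit-variance inputs, the output keeps
    approximately the same root-mean-square magnitude as the input."""
    return x @ forced_weight_norm(w).T
```

Because each weight row has unit norm, the variance of every output unit matches the input variance when input components are uncorrelated, so no extra fan-in scaling is needed to keep activation magnitudes stable.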
📝 Abstract
This paper presents a new framework for diffusion-based speech enhancement. Our method employs a Schrödinger bridge to transform the noisy speech distribution into the clean speech distribution. To stabilize and improve training, we employ time-dependent scalings of the inputs and outputs of the network, known as preconditioning. We consider two skip connection configurations, which either include or omit the current process state in the denoiser's output, enabling the network to predict either environmental noise or clean speech. Each approach leads to improved performance on different speech enhancement metrics. To maintain stable magnitude levels and balance during training, we use a magnitude-preserving network architecture that normalizes all activations and network weights to unit length. Additionally, we propose learning the contribution of the noisy input within each network block for effective input conditioning. After training, we apply a method to approximate different exponential moving average (EMA) profiles and investigate their effects on the speech enhancement performance. In contrast to image generation tasks, where longer EMA lengths often enhance mode coverage, we observe that shorter EMA lengths consistently lead to better performance on standard speech enhancement metrics. Code, audio examples, and checkpoints are available online.
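The effect of the EMA length studied above can be illustrated with a plain parameter EMA. This is a minimal sketch, not the paper's profile-approximation method (which reconstructs EMA profiles post hoc from stored snapshots); `ema_trajectory` and the linear weight drift are illustrative assumptions. A shorter EMA length weights recent training steps more heavily, so the averaged weights track the final weights more closely:

```python
import numpy as np

def ema_trajectory(weights, ema_length):
    """Run an exponential moving average over a sequence of weight
    snapshots. `ema_length` (in steps) sets the decay rate: a short
    length tracks recent weights closely, a long one averages broadly
    and lags behind when the weights are still drifting."""
    decay = 1.0 - 1.0 / ema_length
    ema = weights[0].copy()
    for w in weights[1:]:
        ema = decay * ema + (1.0 - decay) * w
    return ema

# Illustrative setup: weights drift linearly from 0 toward 1 over training.
snapshots = [np.full(4, t / 99.0) for t in range(100)]
short = ema_trajectory(snapshots, ema_length=5)   # stays near final weights
long_ = ema_trajectory(snapshots, ema_length=50)  # lags further behind
```

The steady-state lag of an EMA tracking drifting weights grows with the EMA length, which is one intuition for why a shorter length can suit a metric-driven task like speech enhancement, where staying close to the converged weights matters more than the mode coverage sought in image generation.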