🤖 AI Summary
In speech enhancement, predictive models often suffer from over-suppression and distortion because they perform deterministic mean estimation. To address this, we propose a full-band collaborative enhancement framework tailored for real-time streaming: it integrates a lightweight predictive network with a conditional generative adversarial network (cGAN) to establish a stochastic regeneration mechanism, enabling distribution-level modeling and circumventing the bias inherent in point-wise mean estimation; additionally, noisy-signal conditioning is introduced to enhance robustness. The resulting model comprises only 3.58M parameters and supports low-latency streaming inference. Measured with the NISQA-MOS metric, it improves over its first-stage predictive model, empirically validating the efficacy of distributional modeling in mitigating over-suppression. The system was entered in the 2025 Urgent Challenge and subsequently improved further.
📝 Abstract
In this work, we propose a full-band real-time speech enhancement system with GAN-based stochastic regeneration. Predictive models focus on estimating the mean of the target distribution, whereas generative models aim to learn the full distribution. This behavior of predictive models may lead to over-suppression, i.e., the removal of speech content. In the literature, it has been shown that combining a predictive model with a generative one within the stochastic regeneration framework can reduce distortion in the output. We use this framework to obtain a real-time speech enhancement system. With 3.58M parameters and low latency, our lightweight system is designed for real-time streaming. Experiments show that our system improves over the first stage in terms of the NISQA-MOS metric. Finally, through an ablation study, we show the importance of noisy conditioning in our system. We participated in the 2025 Urgent Challenge with our model and later made further improvements.
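The two-stage pipeline described above can be sketched as follows. This is a minimal illustrative mock, not the paper's implementation: the predictive stage, the cGAN generator, and all coefficients are hypothetical stand-ins; the point is the data flow of stochastic regeneration, where a generator refines the first-stage estimate conditioned on both that estimate and the noisy input, with a random latent `z` making the output a sample from a distribution rather than a point estimate.

```python
import numpy as np

def predictive_stage(noisy):
    # Stand-in for the lightweight predictive network:
    # a deterministic point-wise (mean-style) estimate of clean speech.
    return 0.8 * noisy

def generator_stage(initial, noisy, z):
    # Stand-in for the cGAN generator: refines the first-stage estimate,
    # conditioned on the initial estimate AND the noisy input (the
    # "noisy conditioning" the ablation study highlights).
    return initial + 0.1 * z * np.tanh(noisy - initial)

def stochastic_regeneration(noisy, rng):
    # Stage 1: deterministic predictive estimate.
    initial = predictive_stage(noisy)
    # Stage 2: stochastic refinement; z samples the learned distribution,
    # so repeated calls yield different plausible enhanced signals.
    z = rng.standard_normal(noisy.shape)
    return generator_stage(initial, noisy, z)

rng = np.random.default_rng(0)
noisy = rng.standard_normal(160)  # one illustrative 10 ms frame at 16 kHz
enhanced = stochastic_regeneration(noisy, rng)
```

In a streaming deployment, `stochastic_regeneration` would be called frame by frame with causal (low-latency) networks in place of the two stand-in stages.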