🤖 AI Summary
Generative diffusion models for online speech enhancement suffer from high computational overhead and latency (320–960 ms), which hinders real-time streaming applications. To address this, we propose the Diffusion Buffer, a generative diffusion-based enhancement method tailored to online scenarios that requires only a single network call per incoming frame. Its core contributions are: (1) aligning the physical time axis with the diffusion time-steps; (2) a 2D-convolutional UNet architecture designed to match the buffer's look-ahead; and (3) a data-prediction loss that makes the latency-quality trade-off controllable at inference time. Running on a consumer-grade GPU, the Diffusion Buffer processes each audio frame with only one neural network forward pass, reducing algorithmic latency from 320–960 ms to 32–176 ms with even improved performance. Evaluations on perceptual quality and intelligibility metrics (e.g., PESQ, STOI) show that it also outperforms a comparable predictive model on unseen noisy speech.
📝 Abstract
Online Speech Enhancement has largely been reserved for predictive models. A key advantage of these models is that, for each incoming signal frame of a data stream, the model is called only once for enhancement. In contrast, generative Speech Enhancement models often require multiple calls, resulting in a computational complexity that is too high for many online speech enhancement applications. This work presents the Diffusion Buffer, a generative diffusion-based Speech Enhancement model that requires only one neural network call per incoming signal frame and performs enhancement online on a consumer-grade GPU. The key idea of the Diffusion Buffer is to align physical time with diffusion time-steps: frames are progressively denoised as physical time advances, so past frames have had more noise removed. Consequently, an enhanced frame is output to the listener with a delay defined by the Diffusion Buffer, and the output frame has a corresponding look-ahead. In this work, we extend our previous work by carefully designing a 2D convolutional UNet architecture that specifically aligns with the Diffusion Buffer's look-ahead. We observe that the proposed UNet improves performance, particularly when the algorithmic latency is low. Moreover, we show that using a Data Prediction loss instead of a Denoising Score Matching loss enables flexible control over the trade-off between algorithmic latency and quality during inference. The extended Diffusion Buffer, equipped with the novel NN architecture and loss function, drastically reduces the algorithmic latency from 320–960 ms to 32–176 ms while even increasing performance. While it has been shown before that offline generative diffusion models outperform predictive approaches on unseen noisy speech data, we confirm that the online Diffusion Buffer also outperforms its predictive counterpart on unseen noisy speech data.
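The buffer mechanism described in the abstract can be sketched in a few lines of Python. This is a minimal, hypothetical sketch: `toy_denoise_step`, the buffer length `B = 4`, and the list-based buffer are illustrative placeholders, not the paper's implementation; the real system replaces the toy per-frame update with a single UNet forward pass over all buffered frames.

```python
# Toy sketch of the Diffusion Buffer scheduling idea (illustrative only).
# A buffer holds B frames; the frame in slot k sits at diffusion step k,
# so older frames are progressively cleaner. Each incoming frame triggers
# exactly one "denoising call" that advances every buffered frame by one
# step, and the oldest frame leaves fully denoised -- giving an
# algorithmic latency of B frames.

B = 4  # buffer length in frames (assumed value; this sets the latency)

def toy_denoise_step(frame, step):
    """Stand-in for one reverse-diffusion step of the real network:
    deterministically shrinks the frame's residual toward zero."""
    return frame * step / (step + 1)

def diffusion_buffer_push(buffer, noisy_frame):
    """Push one incoming frame; return an enhanced frame once the
    oldest buffered frame has reached diffusion step 0, else None."""
    buffer.append((noisy_frame, B))  # newest frame enters at the noisiest step
    # One "network call" per incoming frame: every frame advances one step.
    buffer[:] = [(toy_denoise_step(f, s), s - 1) for f, s in buffer]
    if buffer[0][1] == 0:            # oldest frame is fully denoised
        return buffer.pop(0)[0]
    return None

buffer = []
outputs = [diffusion_buffer_push(buffer, float(i)) for i in range(6)]
# The first B-1 pushes return None; from the B-th push on, one enhanced
# frame is emitted per incoming frame (delay = B frames).
```

With `B = 4`, the first three pushes return `None` and every later push emits exactly one enhanced frame, mirroring the fixed delay-versus-look-ahead behavior the abstract describes.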