Diffusion Buffer for Online Generative Speech Enhancement

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Generative diffusion models for online speech enhancement suffer from high computational overhead and latency (320–960 ms), hindering real-time streaming applications. To address this, we propose the Diffusion Buffer, a generative enhancement method tailored to online scenarios that needs only a single network call per frame. Its core innovations are: (1) aligning the physical time axis with diffusion steps; (2) designing a 2D-convolutional U-Net architecture matched to the buffer's look-ahead; and (3) introducing a data-prediction loss that enables a controllable trade-off between latency and enhancement quality. Deployed on a consumer-grade GPU, the Diffusion Buffer processes each audio frame with only one neural network forward pass, reducing algorithmic latency from 320–960 ms to 32–176 ms. Quantitative evaluations demonstrate superior performance over state-of-the-art predictive models and better generalization to unseen noise types.

📝 Abstract
Online Speech Enhancement has mainly been reserved for predictive models. A key advantage of these models is that for an incoming signal frame from a stream of data, the model is called only once for enhancement. In contrast, generative Speech Enhancement models often require multiple calls, resulting in a computational complexity that is too high for many online speech enhancement applications. This work presents the Diffusion Buffer, a generative diffusion-based Speech Enhancement model which requires only one neural network call per incoming signal frame from a stream of data and performs enhancement in an online fashion on a consumer-grade GPU. The key idea of the Diffusion Buffer is to align physical time with Diffusion time-steps. The approach progressively denoises frames through physical time, where past frames have more noise removed. Consequently, an enhanced frame is output to the listener with a delay defined by the Diffusion Buffer, and the output frame has a corresponding look-ahead. In this work, we extend our previous work by carefully designing a 2D convolutional UNet architecture that specifically aligns with the Diffusion Buffer's look-ahead. We observe that the proposed UNet improves performance, particularly when the algorithmic latency is low. Moreover, we show that using a Data Prediction loss instead of a Denoising Score Matching loss enables flexible control over the trade-off between algorithmic latency and quality during inference. The extended Diffusion Buffer, equipped with a novel NN and loss function, drastically reduces the algorithmic latency from 320–960 ms to 32–176 ms while even increasing performance. While it has been shown before that offline generative diffusion models outperform predictive approaches on unseen noisy speech data, we confirm that the online Diffusion Buffer also outperforms its predictive counterpart on unseen noisy speech data.
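The buffer mechanism the abstract describes can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the class name `DiffusionBuffer`, its `push` method, and the placeholder `fake_denoiser` are all assumptions, and a real system would operate on STFT frames with a trained score or data-prediction network rather than a toy decay.

```python
import numpy as np

def fake_denoiser(buffer, timesteps):
    """Placeholder for the UNet: one forward pass that partially
    denoises every frame in the buffer by one diffusion step.
    (Stand-in only; a real model predicts clean speech.)"""
    return buffer * 0.9

class DiffusionBuffer:
    """Sketch of aligning physical time with diffusion time-steps:
    a FIFO of B frames, where older buffer slots sit at lower
    diffusion noise levels. Each incoming frame triggers ONE
    network call over the whole buffer; the oldest frame leaves
    fully denoised, so the algorithmic latency is B frames."""

    def __init__(self, num_frames, frame_dim):
        self.B = num_frames
        self.buffer = np.zeros((num_frames, frame_dim))
        # one diffusion time-step per slot: slot 0 (oldest) ~ clean,
        # slot B-1 (newest) ~ fully noisy
        self.timesteps = np.linspace(0.0, 1.0, num_frames)

    def push(self, noisy_frame):
        # emit the oldest (most denoised) frame, delayed by B frames
        out = self.buffer[0].copy()
        # shift the buffer and append the newest noisy frame
        self.buffer = np.roll(self.buffer, -1, axis=0)
        self.buffer[-1] = noisy_frame
        # single network call per incoming frame
        self.buffer = fake_denoiser(self.buffer, self.timesteps)
        return out
```

Note that the first `B` outputs are warm-up frames; after that, every call returns one enhanced frame per one network pass, which is the property that makes the method viable for online streaming.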
Problem

Research questions and friction points this paper is trying to address.

Enabling online generative speech enhancement with single neural network call
Reducing computational complexity for real-time speech processing applications
Achieving low algorithmic latency while maintaining enhanced speech quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses a diffusion buffer aligning physical time with diffusion steps
Introduces a 2D convolutional UNet for low-latency enhancement
Employs data prediction loss for latency-quality trade-off control
Bunlong Lay
Signal Processing Group, Department of Informatics, Universität Hamburg, 22527 Hamburg Germany
Rostislav Makarov
ML Scientist in Speech
Speech processing
Simon Welker
Universität Hamburg
Deep learning, Speech processing, Generative modeling, X-ray imaging, Ptychography
Maris Hillemann
Signal Processing Group, Department of Informatics, Universität Hamburg, 22527 Hamburg Germany
Timo Gerkmann
Signal Processing, Computer Science Department, Universität Hamburg, Germany
Speech Enhancement, Speech and Audio Processing, Acoustic Signal Processing