Speech Enhancement and Dereverberation With Diffusion-Based Generative Models

📅 2022-08-11
🏛️ IEEE/ACM Transactions on Audio Speech and Language Processing
📈 Citations: 266
Influential: 58
🤖 AI Summary
This work addresses speech enhancement and dereverberation with a diffusion-based generative model built on stochastic differential equations (SDEs). Methodologically, it (1) models the forward process as a realistic speech degradation trajectory rather than idealized Gaussian noise addition, aligning it more closely with physical degradation mechanisms; (2) initializes reverse sampling from a mixture of noisy speech and Gaussian noise rather than pure Gaussian noise, substantially accelerating convergence; and (3) unifies enhancement and dereverberation within a single conditional U-Net framework that supports diverse samplers (e.g., DDIM, Euler). Experiments demonstrate that high-fidelity clean speech is reconstructed in only 30 sampling steps, yielding significant computational efficiency gains. The model generalizes across datasets better than state-of-the-art discriminative models, and achieves top performance on both real-world recordings and subjective listening evaluations.
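The summary's key idea is a forward process whose drift pulls clean speech toward the noisy observation instead of toward pure Gaussian noise. As a minimal illustrative sketch, assuming an Ornstein–Uhlenbeck-style drift `gamma * (y - x)` toward the noisy speech `y` (the stiffness `gamma` and its value are placeholders, not the paper's trained configuration), the mean of the forward-process state interpolates exponentially from clean to noisy:

```python
import numpy as np

def forward_mean(x_clean, y_noisy, t, gamma=1.5):
    # Mean of the forward SDE state at time t under the drift
    # gamma * (y - x): it starts at the clean speech (t = 0) and
    # decays exponentially toward the noisy mixture as t grows,
    # matching the "clean -> noisy" degradation trajectory.
    return y_noisy + (x_clean - y_noisy) * np.exp(-gamma * t)
```

At `t = 0` this returns the clean signal exactly; for large `t` it converges to the noisy mixture, which is why the reverse process can sensibly start from the noisy speech plus Gaussian noise rather than from pure noise.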
📝 Abstract
In this work, we build upon our previous publication and use diffusion-based generative models for speech enhancement. We present a detailed overview of the diffusion process that is based on a stochastic differential equation and delve into an extensive theoretical examination of its implications. In contrast to usual conditional generation tasks, we do not start the reverse process from pure Gaussian noise but from a mixture of noisy speech and Gaussian noise. This matches our forward process, which moves from clean speech to noisy speech by including a drift term. We show that this procedure enables using only 30 diffusion steps to generate high-quality clean speech estimates. By adapting the network architecture, we are able to significantly improve the speech enhancement performance, indicating that the network, rather than the formalism, was the main limitation of our original approach. In an extensive cross-dataset evaluation, we show that the improved method can compete with recent discriminative models and achieves better generalization when evaluating on a different corpus than used for training. We complement the results with an instrumental evaluation using real-world noisy recordings and a listening experiment, in which our proposed method is rated best. Examining different sampler configurations for solving the reverse process allows us to balance the performance and computational speed of the proposed method. Moreover, we show that the proposed method is also suitable for dereverberation and thus not limited to additive background noise removal.
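The abstract's sampling procedure can be sketched as a reverse-time Euler–Maruyama integration over 30 steps, initialized from the noisy speech plus scaled Gaussian noise rather than from pure noise. The noise schedule, stiffness `gamma`, diffusion coefficient, and score function below are illustrative placeholders (the paper uses a trained score network and its own SDE coefficients), so this is a structural sketch, not the authors' implementation:

```python
import numpy as np

def sigma(t, s_min=0.05, s_max=0.5):
    # Placeholder geometric noise schedule; the paper's exact
    # parameterization differs.
    return s_min * (s_max / s_min) ** t

def reverse_sample(y, score_fn, n_steps=30, gamma=1.5, seed=0):
    """Euler-Maruyama integration of the reverse SDE from t=1 to t=0.

    y        : noisy speech observation (conditioning signal)
    score_fn : stand-in for the trained score model, called as
               score_fn(x, y, t)
    """
    rng = np.random.default_rng(seed)
    # Mixture initialization: noisy speech plus Gaussian noise,
    # not pure Gaussian noise.
    x = y + sigma(1.0) * rng.standard_normal(y.shape)
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = 1.0 - i * dt
        # Illustrative diffusion coefficient tied to the schedule.
        g = sigma(t) * np.sqrt(2.0 * np.log(0.5 / 0.05))
        # Reverse-SDE drift: forward drift toward y minus the
        # score correction term.
        drift = gamma * (y - x) - g ** 2 * score_fn(x, y, t)
        x = x - drift * dt + g * np.sqrt(dt) * rng.standard_normal(y.shape)
    return x
```

A toy score, e.g. `lambda x, y, t: (y - x) / sigma(t) ** 2`, keeps the loop runnable; in the actual method this role is played by the conditional score network, and the choice of sampler (number of steps, predictor type) is exactly the performance/speed trade-off the abstract discusses.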
Problem

Research questions and friction points this paper is trying to address.

Enhancing noisy speech with diffusion-based generative models
Removing both reverberation and additive background noise
Reducing the number of diffusion steps needed for efficient sampling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates high-quality clean speech estimates in only 30 diffusion steps
Starts the reverse process from a mixture of noisy speech and Gaussian noise instead of pure noise
Improves the network architecture, identified as the main limitation of the original approach