🤖 AI Summary
This work addresses the challenge of simultaneously achieving dynamic adaptation to non-stationary noise, low latency, and model interpretability in real-time speech denoising. The authors propose a fully interpretable, end-to-end speech enhancement architecture that combines the explicit modeling strengths of digital signal processing with the adaptive capabilities of deep learning. Specifically, a lightweight neural network is employed to predict time-varying coefficients of a 35-band differentiable cascaded IIR filter in real time, enabling explicit spectral shaping. Experiments on the Valentini-Botinhao dataset demonstrate that the proposed method significantly outperforms both static DDSP baselines and purely data-driven deep learning approaches under dynamic noise conditions, while maintaining low latency, high adaptability, and strong interpretability.
📝 Abstract
We present TVF (Time-Varying Filtering), a low-latency speech enhancement model with 1 million parameters. Combining the interpretability of Digital Signal Processing (DSP) with the adaptability of deep learning, TVF bridges the gap between traditional filtering and modern neural speech modeling. The model utilizes a lightweight neural network backbone to predict the coefficients of a differentiable 35-band IIR filter cascade in real time, allowing it to adapt dynamically to non-stationary noise. Unlike "black-box" deep learning approaches, TVF offers a completely interpretable processing chain, where spectral modifications are explicit and adjustable. We demonstrate the efficacy of this approach on a speech denoising task using the Valentini-Botinhao dataset and compare the results to a static DDSP approach and a fully deep-learning-based solution, showing that TVF achieves effective adaptation to changing noise conditions.
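The abstract describes a network that emits per-frame coefficients for an IIR biquad cascade, which the filter chain then applies sample-by-sample. The paper's actual predictor and coefficient parameterization are not given here, so the following is only a minimal NumPy sketch of the filtering side: frame-wise `(b0, b1, b2, a0, a1, a2)` coefficients (assumed layout) applied as a cascade of direct-form-II-transposed biquads, with filter state carried across frame boundaries as a real-time implementation would require. The band count and coefficient source are hypothetical.

```python
import numpy as np

def apply_tv_biquad_cascade(x, coeffs, frame_len=256):
    """Apply a cascade of biquads whose coefficients change every frame.

    x      : 1-D signal, length n_frames * frame_len.
    coeffs : (n_frames, n_bands, 6) array of (b0, b1, b2, a0, a1, a2)
             per band per frame. In TVF these would come from the
             lightweight neural predictor; here they are just inputs.
    """
    n_frames, n_bands, _ = coeffs.shape
    state = np.zeros((n_bands, 2))  # DF2T state per band, kept across frames
    out = np.empty_like(x, dtype=float)
    for f in range(n_frames):
        start, stop = f * frame_len, (f + 1) * frame_len
        seg = x[start:stop].astype(float)
        for band in range(n_bands):
            b0, b1, b2, a0, a1, a2 = coeffs[f, band]
            # normalize so a0 == 1
            b0, b1, b2, a1, a2 = b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0
            z1, z2 = state[band]
            res = np.empty_like(seg)
            for n, s in enumerate(seg):
                v = b0 * s + z1          # direct form II transposed
                z1 = b1 * s - a1 * v + z2
                z2 = b2 * s - a2 * v
                res[n] = v
            state[band] = z1, z2
            seg = res                    # feed this band's output to the next
        out[start:stop] = seg
    return out
```

With identity coefficients (`b0 = a0 = 1`, the rest zero) the cascade passes the signal through unchanged, which is a quick sanity check; a learned predictor would instead shape each frame's spectrum by moving the biquad poles and zeros.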