🤖 AI Summary
Addressing the long-standing challenge of poor generalization in adaptive filtering (AF) under non-Gaussian noise, this paper proposes a novel deep neural network (DNN)-driven AF framework. Methodologically, the DNN is formulated as a universal nonlinear operator that maps the residual error directly to the learning gradient, bypassing explicit cost function design and performing implicit maximum likelihood estimation for data-driven gradient generation. The framework tightly integrates DNNs with classical filter architectures through implicit cost modeling and direct gradient projection. Mean and mean-square stability of the algorithm are rigorously established. Experimental results demonstrate that the proposed method significantly improves convergence speed, steady-state accuracy, and robustness across diverse non-Gaussian environments, outperforming state-of-the-art AF algorithms in generalization capability.
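To make the residual-to-gradient mapping concrete, below is a minimal sketch in PyTorch of what such a DNN-driven update could look like: a small network stands in for the analytic cost-function gradient inside an LMS-style loop. The names (`GradientNet`, `dnn_driven_af_step`, `mu`), the architecture, and the scalar-gain formulation are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class GradientNet(nn.Module):
    """Small MLP mapping a scalar residual e_k to a scalar gradient gain.

    Illustrative stand-in for the paper's DNN operator; the layer sizes
    and activations here are assumptions, not taken from the paper.
    """
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, e: torch.Tensor) -> torch.Tensor:
        return self.net(e)

def dnn_driven_af_step(w: torch.Tensor, x: torch.Tensor, d: torch.Tensor,
                       g_net: GradientNet, mu: float = 0.01) -> torch.Tensor:
    """One adaptive-filter update driven by the learned gradient map.

    Classical LMS would use the MSE-derived gradient -e * x; here the
    scalar factor applied to x comes from the DNN instead of a fixed,
    explicitly designed cost function.
    """
    e = d - torch.dot(w, x)                # filtering residual e_k
    g = g_net(e.reshape(1, 1)).squeeze()   # learned nonlinear map of e_k
    return w + mu * g * x                  # gradient-style weight update
```

In a data-driven setting, the network's parameters would presumably be trained offline (for example, by unrolling many such updates over noisy sequences and minimizing the final estimation error), which is where the implicit maximum-likelihood behavior would be acquired; those training details are the paper's and are not sketched here.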
📝 Abstract
This paper proposes a deep neural network (DNN)-driven framework to address the long-standing generalization challenge in adaptive filtering (AF). In contrast to traditional AF frameworks that emphasize explicit cost function design, the proposed framework shifts the paradigm toward direct gradient acquisition. The DNN, functioning as a universal nonlinear operator, is structurally embedded into the core architecture of the AF system, establishing a direct mapping between filtering residuals and learning gradients. Maximum likelihood is adopted as the implicit cost criterion, rendering the derived algorithm inherently data-driven and endowing it with strong generalization capability, as validated by extensive numerical experiments across a spectrum of non-Gaussian scenarios. Detailed mean and mean-square stability analyses are also provided.
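Schematically, and in standard adaptive-filtering notation (the symbols below are our assumed notation for illustration, not necessarily the paper's), the paradigm shift can be summarized by contrasting the two update rules:

```latex
% Traditional AF: the gradient is derived from an explicitly designed cost J
\mathbf{w}_{k+1} = \mathbf{w}_k - \mu\,\nabla_{\mathbf{w}} J(e_k),
\qquad e_k = d_k - \mathbf{w}_k^{\mathsf{T}}\mathbf{x}_k .

% DNN-driven AF (schematic): a network g_\theta maps the residual
% directly to the update direction, with no explicit J
\mathbf{w}_{k+1} = \mathbf{w}_k + \mu\, g_\theta(e_k)\,\mathbf{x}_k .
```

The first rule requires choosing and differentiating a cost function tailored to the assumed noise; the second learns the mapping from residual to update direction directly from data, which is the sense in which the cost function remains implicit.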