DenoGrad: Deep Gradient Denoising Framework for Enhancing the Performance of Interpretable AI Models

📅 2025-11-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Explainable AI (XAI) models suffer performance degradation under noise in training and production data; existing denoising methods often distort the original data distribution, compromising the fidelity of data patterns essential for interpretability. To address this, we propose the first deep learning gradient-based instance-level denoising framework. Leveraging task-specific high-quality model gradients, it performs gradient inversion to detect and adaptively correct noise, dynamically refining noisy instances while strictly preserving the underlying data distribution. We introduce a novel task-adaptive noise definition mechanism, enabling effective application to both tabular and time-series data. Extensive experiments demonstrate that our method consistently outperforms state-of-the-art approaches across diverse noise settings, simultaneously improving model accuracy, robustness, and interpretability—crucially maintaining strict fidelity to the true data distribution.

📝 Abstract
The performance of Machine Learning (ML) models, particularly those operating within the Interpretable Artificial Intelligence (Interpretable AI) framework, is significantly affected by the presence of noise in both training and production data. Denoising has therefore become a critical preprocessing step, typically categorized into instance removal and instance correction techniques. However, existing correction approaches often degrade performance or oversimplify the problem by altering the original data distribution. This leads to unrealistic scenarios and biased models, which is particularly problematic in contexts where interpretable AI models are employed, as their interpretability depends on the fidelity of the underlying data patterns. In this paper, we argue that defining noise independently of the solution may be ineffective, as its nature can vary significantly across tasks and datasets. Using a task-specific, high-quality solution as a reference can provide a more precise and adaptable noise definition. To this end, we propose DenoGrad, a novel Gradient-based instance Denoiser framework that leverages gradients from an accurate Deep Learning (DL) model trained on the target data -- regardless of the specific task -- to detect and adjust noisy samples. Unlike conventional approaches, DenoGrad dynamically corrects noisy instances, preserving the problem's data distribution and improving AI models' robustness. DenoGrad is validated on both tabular and time-series datasets under various noise settings against the state of the art. DenoGrad outperforms existing denoising strategies, enhancing the performance of interpretable AI models while standing out as the only high-quality approach that preserves the original data distribution.
Problem

Research questions and friction points this paper is trying to address.

Noise in training data impairs interpretable AI model performance
Existing correction methods distort data distribution causing model bias
Task-independent noise definitions are ineffective across diverse datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

DenoGrad leverages gradients from a deep learning model trained on the target data
It dynamically corrects noisy instances rather than removing them
The framework preserves the original data distribution while denoising
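The idea behind the bullets above can be sketched in a few lines. This is a toy illustration, not the paper's actual algorithm: it assumes a simple linear reference model `y ≈ X @ w` standing in for DenoGrad's trained deep model, flags instances whose input-gradient norm is large, and nudges only those instances along the negative gradient so that the rest of the data (and hence its distribution) is left untouched. The function name `denoise_instances` and the `threshold` parameter are illustrative choices.

```python
import numpy as np

def denoise_instances(X, y, w, lr=0.1, steps=20, threshold=1.0):
    """Toy gradient-based instance denoiser for a linear reference model.

    For each instance, the gradient of the squared error with respect to
    the *input features* indicates how that instance would need to move to
    better agree with the reference model. Instances whose gradient norm
    exceeds `threshold` are treated as noisy and corrected; all others are
    left exactly as they are.
    """
    X = X.copy()
    for _ in range(steps):
        residual = X @ w - y                   # per-instance prediction error
        grad = residual[:, None] * w[None, :]  # d(0.5 * err^2) / dX
        norms = np.linalg.norm(grad, axis=1)
        noisy = norms > threshold              # flag suspected noisy rows
        X[noisy] -= lr * grad[noisy]           # correct only flagged rows
    return X

# Usage: corrupt one instance of otherwise clean data, then denoise.
rng = np.random.default_rng(0)
w = np.array([2.0, -1.0])
X_clean = rng.normal(size=(50, 2))
y = X_clean @ w
X_noisy = X_clean.copy()
X_noisy[0] += 5.0                              # inject noise into one row
X_denoised = denoise_instances(X_noisy, y, w, lr=0.1, steps=50, threshold=0.5)
```

Because the gradient vanishes on instances the reference model already fits, the clean rows are never modified, which is the distribution-preserving property the paper emphasizes.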
J. J. Alonso-Ramos
Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain.
Ignacio Aguilera-Martos
Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain.
Andrés Herrera-Poyatos
Lecturer at the University of Granada, Department of Algebra. PhD from the University of Oxford.
Randomised Algorithms, Computational Complexity, Combinatorics, Deep Learning
Francisco Herrera
Professor of Computer Science and AI, DaSCI Research Institute, University of Granada, Spain
Artificial Intelligence, Computational Intelligence, Data Science, Trustworthy AI