QUCE: The Minimisation and Quantification of Path-Based Uncertainty for Generative Counterfactual Explanations

📅 2024-02-27
🏛️ Industrial Conference on Data Mining
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Deep neural network (DNN) interpretability degrades with increasing model complexity, particularly under out-of-distribution (OOD) path traversal, where gradient instability severely undermines the reliability of path-based explainers (e.g., AGI) and counterfactual generation. Method: We propose QUCE—a unified framework that jointly quantifies and minimizes path uncertainty, integrating explanation credibility assessment and counterfactual optimization. QUCE explicitly mitigates OOD gradient perturbations via uncertainty calibration, Monte Carlo path sampling, and gradient regularization—breaking the implicit reliance on gradient stability inherent in conventional path-integral methods. Contribution/Results: Evaluated on multiple benchmark datasets, QUCE reduces explanation uncertainty by 32–47% and improves counterfactual validity by 19–35% over state-of-the-art approaches, establishing a new foundation for robust, uncertainty-aware DNN interpretation and counterfactual reasoning.

Technology Category

Application Category

📝 Abstract
Deep Neural Networks (DNNs) stand out as one of the most prominent approaches within the Machine Learning (ML) domain. The efficacy of DNNs has surged alongside recent increases in computational capacity, allowing these approaches to scale to significant complexities for addressing predictive challenges in big data. However, as the complexity of the DNN models increases, interpretability diminishes. In response to this challenge, explainable models such as Adversarial Gradient Integration (AGI) leverage path-based gradients provided by DNNs to elucidate their decisions. Yet, the performance of path-based explainers can be compromised when gradients exhibit irregularities during out-of-distribution path traversal. In this context, we introduce Quantified Uncertainty Counterfactual Explanations (QUCE), a method designed to mitigate out-of-distribution traversal by minimizing path uncertainty. QUCE not only quantifies uncertainty when presenting explanations but also generates more certain counterfactual examples. We showcase the performance of the QUCE method by comparing it with competing methods for both path-based explanations and generative counterfactual examples. The code repository for the QUCE method is available at: https://github.com/jamie-duell/QUCE_ICDM.
Problem

Research questions and friction points this paper is trying to address.

Addresses interpretability decline in complex DNN models.
Mitigates out-of-distribution gradient irregularities in path-based explainers.
Quantifies uncertainty and generates certain counterfactual explanations.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Minimizes path uncertainty in DNNs
Quantifies uncertainty for explanations
Generates certain counterfactual examples
🔎 Similar Papers
No similar papers found.
J
J. Duell
School of Mathematics and Computer Science, Swansea University
H
Hsuan Fu
Department of Finance, Insurance and Real Estate, Université Laval
M
M. Seisenberger
School of Mathematics and Computer Science, Swansea University
Xiuyi Fan
Xiuyi Fan
Nanyang Technological University
Artificial Intelligence