Towards Better Visualizing the Decision Basis of Networks via Unfold and Conquer Attribution Guidance

📅 2023-06-26
🏛️ AAAI Conference on Artificial Intelligence
📈 Citations: 3
Influential: 0
🤖 AI Summary
Deep neural networks (DNNs) lack fine-grained interpretability in their decision-making, often overlooking the auxiliary detail features that are critical for robust inference. Method: UCAG is a post-hoc attribution framework built around an "Unfold and Conquer" attribution guidance mechanism, which jointly models global confidence and local feature fidelity to spatially decompose the contribution of input features to model outputs. UCAG combines per-patch, confidence-driven scanning of the image with quantitative evaluation via deletion/insertion curves, positive/negative density maps, and (energy-based) pointing games. Results: UCAG outperforms state-of-the-art attribution methods across these metrics. Qualitatively, it generates attribution maps that are sharper, richer, and more semantically coherent, uncovering key auxiliary cues that are otherwise masked by dominant regions.
📝 Abstract
Revealing the transparency of Deep Neural Networks (DNNs) has been widely studied to describe the decision mechanisms of network inner structures. In this paper, we propose a novel post-hoc framework, Unfold and Conquer Attribution Guidance (UCAG), which enhances the explainability of network decisions by spatially scrutinizing the input features with respect to the model confidence. Addressing the phenomenon of missing detailed descriptions, UCAG sequentially follows the confidence of slices of the image, providing an abundant and clear interpretation. It thereby enhances the representational ability of the explanation by preserving the detailed descriptions of auxiliary input features, which are commonly overwhelmed by the main meaningful regions. We conduct extensive evaluations to validate performance on several metrics: i) deletion and insertion, ii) (energy-based) pointing games, and iii) positive and negative density maps. Experimental results, including qualitative comparisons, demonstrate that our method outperforms existing methods, producing clear and detailed explanations with broad applicability.
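Of the metrics listed in the abstract, deletion (and its mirror, insertion) is the most mechanical to reproduce: pixels are removed in decreasing order of attributed importance, and a faithful map should make the model's confidence collapse quickly. The sketch below is a minimal illustration of that metric, not the paper's evaluation code; the toy "model" (mean intensity of a fixed patch), image size, and step count are assumptions chosen only to make the example self-contained.

```python
import numpy as np

def deletion_curve(model, image, attribution, steps=20, baseline=0.0):
    """Zero out pixels in decreasing attribution order and record the
    model's confidence after each step; a faithful attribution map makes
    the confidence drop quickly (lower area under the curve is better)."""
    order = np.argsort(attribution.ravel())[::-1]   # most important first
    per_step = int(np.ceil(order.size / steps))
    img = image.astype(float).ravel().copy()
    scores = [float(model(img.reshape(image.shape)))]
    for s in range(steps):
        img[order[s * per_step:(s + 1) * per_step]] = baseline
        scores.append(float(model(img.reshape(image.shape))))
    return np.array(scores)

def curve_auc(scores):
    # Discrete approximation of the area under the confidence curve.
    return float(np.mean(scores))

# Toy setup (illustrative assumption): the "model" is the mean intensity
# of the top-left 4x4 patch, so a faithful attribution map highlights
# exactly that patch, while its inverse highlights everything else.
rng = np.random.default_rng(0)
image = rng.random((8, 8))
model = lambda x: x[:4, :4].mean()
faithful = np.zeros((8, 8)); faithful[:4, :4] = 1.0
good = curve_auc(deletion_curve(model, image, faithful))
bad = curve_auc(deletion_curve(model, image, 1.0 - faithful))
# good < bad: deleting the truly important pixels first kills confidence sooner.
```

Insertion is the same procedure run in reverse (starting from a blank image and revealing the most-attributed pixels first), where a higher area under the curve is better.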
Problem

Research questions and friction points this paper is trying to address.

Enhancing DNN decision explainability via spatial feature analysis
Addressing missing detailed descriptions in network interpretations
Improving explanation representation by preserving auxiliary input features
Innovation

Methods, ideas, or system contributions that make the work stand out.

Post-hoc framework UCAG enhances DNN explainability
Spatially scrutinizes input features with respect to model confidence
Preserves detailed descriptions of auxiliary features
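The paper's exact guidance procedure is not reproduced here, but the "unfold" idea of scrutinizing image slices against model confidence can be sketched as follows. Everything concrete in this snippet (the window size, stride, masking strategy, and the toy confidence model) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def patch_confidence_map(model, image, win=4, stride=2):
    """Unfold the image into overlapping slices, query the model's
    confidence on each slice shown in isolation, and average the
    confidences back onto the pixels each slice covers."""
    h, w = image.shape
    heat = np.zeros((h, w))
    count = np.zeros((h, w))
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            masked = np.zeros_like(image, dtype=float)
            masked[y:y+win, x:x+win] = image[y:y+win, x:x+win]
            conf = float(model(masked))        # confidence for this slice alone
            heat[y:y+win, x:x+win] += conf
            count[y:y+win, x:x+win] += 1.0
    return heat / np.maximum(count, 1.0)       # average over overlapping windows

# Toy model (assumption): confidence is the mean intensity of the top-left
# 4x4 region, so slices overlapping that region should dominate the map.
rng = np.random.default_rng(0)
image = rng.random((8, 8))
model = lambda x: x[:4, :4].mean()
heat = patch_confidence_map(model, image)
```

Because each slice is scored independently, weaker auxiliary regions contribute their own confidence to the map instead of being drowned out by the single most salient region, which is the intuition behind preserving detailed descriptions.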
Jung-Ho Hong
Korea University
Artificial Intelligence, Deep Learning, Explainable AI
Woo-Jeoung Nam
Kyungpook National University
Machine Learning, Explainable AI, Deep Learning
Kyu-Sung Jeon
Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea
Seong-Whan Lee
Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea