🤖 AI Summary
Explanations of deep neural network (DNN) decisions often lack fine-grained interpretability, overlooking the auxiliary detail features that are critical for robust inference.
Method: We propose UCAG, a post-hoc attribution framework built around a novel "Unfold–Conquer" attribution guidance mechanism. It jointly models global confidence and local feature fidelity to spatially decompose the contribution of input features to model outputs. UCAG integrates energy-driven per-patch confidence modeling, attribution-map density analysis, and deletion/insertion evaluation, and is validated quantitatively via pointing games.
Results: UCAG significantly outperforms state-of-the-art methods across multiple metrics—including deletion/insertion curves, positive/negative density map consistency, and energy-based pointing accuracy. Qualitatively, it generates attribution maps that are sharper, richer, and more semantically coherent, effectively uncovering key auxiliary cues masked by dominant regions.
📝 Abstract
Revealing the inner decision mechanisms of Deep Neural Networks (DNNs) has been widely studied as a way to make their behavior transparent. In this paper, we propose a novel post-hoc framework, Unfold and Conquer Attribution Guidance (UCAG), which enhances the explainability of network decisions by spatially scrutinizing input features with respect to model confidence. To address the common loss of detailed descriptions in attribution maps, UCAG sequentially evaluates the model's confidence on slices of the image, yielding abundant and clear interpretations. This preserves the detailed contributions of auxiliary input features, which are commonly overwhelmed by the main meaningful regions, and thereby strengthens the representational quality of the explanation. We conduct extensive evaluations across several metrics: i) deletion and insertion, ii) (energy-based) pointing games, and iii) positive and negative density maps. Experimental results, including qualitative comparisons, demonstrate that our method outperforms existing methods, producing clear and detailed explanations with broad applicability.
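The abstract does not spell out the mechanics, but the "unfold and conquer" idea of scoring image slices by model confidence can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the sliding-window "unfold", the masked-slice confidence scoring, and the averaging aggregation (`unfold_conquer_attribution`, `softmax`, and `toy_model` are hypothetical names, not the paper's implementation).

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def unfold_conquer_attribution(image, model, target, patch=8, stride=4):
    """Hypothetical sketch of patch-confidence attribution.

    "Unfold": slide a window over the image to produce slices.
    "Conquer": score each slice by the model's confidence in the
    target class when only that slice is visible, then accumulate
    the confidences into a per-pixel attribution map.
    """
    H, W = image.shape
    attribution = np.zeros((H, W))
    counts = np.zeros((H, W))
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            # Keep only the current slice; zero out everything else.
            masked = np.zeros_like(image)
            masked[y:y+patch, x:x+patch] = image[y:y+patch, x:x+patch]
            conf = softmax(model(masked))[target]
            attribution[y:y+patch, x:x+patch] += conf
            counts[y:y+patch, x:x+patch] += 1
    # Average overlapping contributions per pixel.
    return attribution / np.maximum(counts, 1)
```

On a toy 16×16 image whose evidence for class 0 sits in the top-left quadrant, slices covering that quadrant receive high confidence and the resulting map highlights it, while uninformative slices contribute only the uniform baseline; this is the intuition, not the paper's exact procedure.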