Fusion-CAM: Integrating Gradient and Region-Based Class Activation Maps for Robust Visual Explanations

📅 2026-03-05
🤖 AI Summary
This work addresses the limitations of existing Class Activation Mapping (CAM) methods, which often suffer from noise interference, incomplete coverage, or excessive smoothing, thereby struggling to balance discriminativeness and completeness in visualizing neural network decisions. To overcome these issues, the authors propose a unified framework that integrates gradient- and region-based paradigms by synergistically combining Grad-CAM and Score-CAM. The approach employs gradient map denoising, region contribution weighting, and an adaptive pixel-level similarity fusion mechanism to generate input-adaptive, high-quality explanations. Evaluated on standard benchmarks, the method significantly outperforms current CAM techniques, achieving superior performance in both qualitative visualizations and quantitative evaluation metrics.

📝 Abstract
Interpreting the decision-making process of deep convolutional neural networks remains a central challenge in achieving trustworthy and transparent artificial intelligence. Explainable AI (XAI) techniques, particularly Class Activation Map (CAM) methods, are widely adopted to visualize the input regions influencing model predictions. Gradient-based approaches (e.g. Grad-CAM) provide highly discriminative, fine-grained details by computing gradients of class activations but often yield noisy and incomplete maps that emphasize only the most salient regions rather than the complete objects. Region-based approaches (e.g. Score-CAM) aggregate information over larger areas, capturing broader object coverage at the cost of over-smoothing and reduced sensitivity to subtle features. We introduce Fusion-CAM, a novel framework that bridges this explanatory gap by unifying both paradigms through a dedicated fusion mechanism to produce robust and highly discriminative visual explanations. Our method first denoises gradient-based maps, yielding cleaner and more focused activations. It then combines the refined gradient map with region-based maps using contribution weights to enhance class coverage. Finally, we propose an adaptive similarity-based pixel-level fusion that evaluates the agreement between both paradigms and dynamically adjusts the fusion strength. This adaptive mechanism reinforces consistent activations while softly blending conflicting regions, resulting in richer, context-aware, and input-adaptive visual explanations. Extensive experiments on standard benchmarks show that Fusion-CAM consistently outperforms existing CAM variants in both qualitative visualization and quantitative evaluation, providing a robust and flexible tool for interpreting deep neural networks.
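The three fusion steps described in the abstract can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's implementation: the function name `fusion_cam`, the percentile-based denoising threshold, and the blending weight `alpha` are all assumptions chosen for illustration; the paper's exact denoising, weighting, and similarity formulations are not reproduced here.

```python
import numpy as np

def fusion_cam(grad_map, region_map, noise_percentile=60, alpha=0.5):
    """Illustrative Grad-CAM / Score-CAM fusion sketch (hypothetical
    parameters; not the paper's exact formulation)."""

    def norm(m):
        # Min-max normalize a map to [0, 1]; constant maps become zeros.
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

    # 1. Denoise the gradient-based map: zero out low-magnitude activations
    #    below an (assumed) percentile threshold.
    thresh = np.percentile(grad_map, noise_percentile)
    denoised = np.where(grad_map >= thresh, grad_map, 0.0)

    # 2. Normalize both maps so the paradigms are directly comparable.
    g, r = norm(denoised), norm(region_map)

    # 3. Pixel-level agreement: close to 1 where the two paradigms concur,
    #    close to 0 where they conflict.
    agreement = 1.0 - np.abs(g - r)

    # 4. Adaptive fusion: reinforce consistent activations (element-wise max)
    #    and softly blend conflicting regions (weighted average), with the
    #    agreement score steering between the two behaviors per pixel.
    reinforced = np.maximum(g, r)
    blended = alpha * g + (1.0 - alpha) * r
    return norm(agreement * reinforced + (1.0 - agreement) * blended)
```

The key idea this sketch captures is that the fusion strength is input-adaptive: where the gradient- and region-based maps agree, their joint evidence is amplified, while disagreements are averaged rather than letting one paradigm dominate.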
Problem

Research questions and friction points this paper is trying to address.

Class Activation Map
Explainable AI
Visual Explanation
Gradient-based CAM
Region-based CAM
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fusion-CAM
Class Activation Map
Explainable AI
Gradient-based Visualization
Adaptive Fusion