Matching-Based Few-Shot Semantic Segmentation Models Are Interpretable by Design

📅 2025-11-22
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Few-shot semantic segmentation (FSS) models suffer from poor interpretability, hindering their trustworthy deployment and support-set optimization in data-scarce scenarios. Method: We present the first systematic study of FSS interpretability, proposing a structured attribution method based on multi-level feature matching scores to generate pixel-wise attribution maps that quantify each support pixel's contribution to the query prediction. Contribution/Results: We introduce novel, task-specific evaluation metrics for FSS interpretability and extend classical interpretability assessment frameworks to accommodate the support-query paradigm. Experiments on PASCAL-5i and COCO-20i demonstrate that our method significantly outperforms existing attribution techniques, yielding attribution maps with strong structural consistency and semantic plausibility. These maps effectively facilitate model diagnosis and provide actionable guidance for support-set selection.

๐Ÿ“ Abstract
Few-Shot Semantic Segmentation (FSS) models achieve strong performance in segmenting novel classes with minimal labeled examples, yet their decision-making processes remain largely opaque. While explainable AI has advanced significantly in standard computer vision tasks, interpretability in FSS remains virtually unexplored despite its critical importance for understanding model behavior and guiding support-set selection in data-scarce scenarios. This paper introduces the first dedicated method for interpreting matching-based FSS models by leveraging their inherent structural properties. Our Affinity Explainer approach extracts attribution maps that highlight which pixels in support images contribute most to query segmentation predictions, using matching scores computed between support and query features at multiple feature levels. We extend standard interpretability evaluation metrics to the FSS domain and propose additional metrics to better capture the practical utility of explanations in few-shot scenarios. Comprehensive experiments on FSS benchmark datasets, using different models, demonstrate that our Affinity Explainer significantly outperforms adapted standard attribution methods. Qualitative analysis reveals that our explanations provide structured, coherent attention patterns that align with model architectures and enable effective model diagnosis. This work establishes a foundation for interpretable FSS research, enabling better model understanding and diagnosis for more reliable few-shot segmentation systems. The source code is publicly available at https://github.com/pasqualedem/AffinityExplainer.
Problem

Research questions and friction points this paper is trying to address.

Interpreting decision-making processes in few-shot semantic segmentation models
Developing attribution methods for matching-based FSS model explanations
Evaluating explanation utility in data-scarce few-shot learning scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages inherent structural properties of matching-based models
Extracts attribution maps using multi-level feature matching scores
Establishes interpretability foundation with specialized FSS evaluation metrics
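The idea in the bullets above can be illustrated with a minimal sketch (not the authors' exact Affinity Explainer; function name, mask weighting, and level aggregation here are assumptions): at each feature level, compute cosine affinities between every support pixel and every query pixel, weight the query side by the predicted foreground mask, and sum the result into a per-support-pixel attribution map.

```python
import torch
import torch.nn.functional as F

def affinity_attribution(support_feats, query_feats, query_mask):
    """Hedged sketch of multi-level matching-based attribution.

    support_feats, query_feats: lists of (C, H, W) feature maps,
    one per feature level. query_mask: (H0, W0) binary prediction
    for the query image. Returns an (Hs0, Ws0) attribution map over
    support pixels, normalized to [0, 1].
    """
    attribution = None
    out_size = support_feats[0].shape[1:]  # resolution of the first level
    for s, q in zip(support_feats, query_feats):
        C, Hs, Ws = s.shape
        _, Hq, Wq = q.shape
        # L2-normalize channels so the dot product is cosine similarity
        s_flat = F.normalize(s.reshape(C, -1), dim=0)   # (C, Hs*Ws)
        q_flat = F.normalize(q.reshape(C, -1), dim=0)   # (C, Hq*Wq)
        affinity = s_flat.t() @ q_flat                  # (Hs*Ws, Hq*Wq)
        # Weight query pixels by the predicted foreground mask
        m = F.interpolate(query_mask[None, None].float(),
                          size=(Hq, Wq), mode="nearest").reshape(-1)
        level_attr = (affinity * m).sum(dim=1).reshape(Hs, Ws)
        # Upsample each level to a common resolution before summing
        level_attr = F.interpolate(level_attr[None, None], size=out_size,
                                   mode="bilinear", align_corners=False)[0, 0]
        attribution = level_attr if attribution is None else attribution + level_attr
    # Min-max normalize for visualization as a heatmap
    attribution = (attribution - attribution.min()) / \
                  (attribution.max() - attribution.min() + 1e-8)
    return attribution
```

A map produced this way can be overlaid on the support image to show which annotated pixels the matching module actually relied on, which is the diagnostic use case the paper targets.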