Generating Part-Based Global Explanations Via Correspondence

📅 2025-09-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Global interpretability of deep learning models is hindered by the high cost of concept-level annotation and the limited generalizability of local explanations. To address this, we propose a cross-image explanation transfer framework that requires only a small set of user-defined part-level labels (e.g., "wheel", "beak"). Our method first extracts part-level visual responses using local explanation techniques, then models inter-image part correspondences to transfer explanatory knowledge from the labeled subset to large-scale unlabeled data, and finally aggregates these responses into global, symbolic decision explanations. This approach breaks the conventional reliance on exhaustive concept annotations, achieving high interpretability from only ~100 annotated images while preserving explanation accuracy, consistency, and human comprehensibility. Experiments across multiple vision tasks demonstrate scalable and verifiable analysis of model behavior.
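The three-step pipeline above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and cosine similarity over part features stands in for whatever correspondence model the paper actually uses.

```python
import numpy as np

def transfer_part_labels(labeled_feats, labeled_parts, unlabeled_feats):
    """Step 2 (sketch): assign each unlabeled part-region the label of its
    nearest labeled part-region by cosine similarity in feature space.
    A stand-in for the paper's inter-image correspondence model."""
    a = labeled_feats / np.linalg.norm(labeled_feats, axis=1, keepdims=True)
    b = unlabeled_feats / np.linalg.norm(unlabeled_feats, axis=1, keepdims=True)
    sim = b @ a.T                       # (n_unlabeled, n_labeled)
    nearest = sim.argmax(axis=1)        # index of best-matching labeled part
    return [labeled_parts[i] for i in nearest]

def aggregate_global_explanation(part_labels, part_importances):
    """Step 3 (sketch): average per-part attribution scores across images
    into one global, symbolic summary (part name -> mean importance)."""
    totals, counts = {}, {}
    for part, weight in zip(part_labels, part_importances):
        totals[part] = totals.get(part, 0.0) + weight
        counts[part] = counts.get(part, 0) + 1
    return {part: totals[part] / counts[part] for part in totals}
```

In this sketch, step 1 (extracting part-level responses with a local explanation method such as a saliency or attribution map) is assumed to have already produced the per-region feature vectors and importance scores that these functions consume.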

📝 Abstract
Deep learning models are notoriously opaque. Existing explanation methods often focus on localized visual explanations for individual images. Concept-based explanations, while offering global insights, require extensive annotations, incurring significant labeling cost. We propose an approach that leverages user-defined part labels from a limited set of images and efficiently transfers them to a larger dataset. This enables the generation of global symbolic explanations by aggregating part-based local explanations, ultimately providing human-understandable explanations for model decisions on a large scale.
Problem

Research questions and friction points this paper is trying to address.

Generating global symbolic explanations from part-based annotations
Transferring limited user-defined labels to larger datasets efficiently
Providing human-understandable explanations for deep learning decisions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages user-defined part labels from a small annotated subset
Transfers those labels to a larger unlabeled dataset via part correspondence
Aggregates part-based local explanations into global symbolic explanations