Towards Human-Understandable Multi-Dimensional Concept Discovery

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the semantic ambiguity and low human interpretability of Multi-Dimensional Concept Discovery (MCD) in concept-based eXplainable AI (C-XAI). To enhance both concept interpretability and model faithfulness, the authors propose a novel method with three core contributions: (1) the first integration of the Segment Anything Model (SAM) into concept identification, coupled with a CNN-tailored input masking mechanism that preserves the semantic integrity of salient regions; (2) a multi-dimensional concept subspace decomposition framework; and (3) explicit modeling of the concept completeness relation to improve semantic clarity and decision consistency. Extensive quantitative evaluations—covering fidelity, plausibility, and accuracy metrics—alongside human-subject studies demonstrate that the approach significantly outperforms state-of-the-art C-XAI methods in explanation accuracy, faithfulness, and human comprehensibility. The implementation is publicly available.

📝 Abstract
Concept-based eXplainable AI (C-XAI) aims to overcome the limitations of traditional saliency maps by converting pixels into human-understandable concepts that are consistent across an entire dataset. A crucial aspect of C-XAI is completeness, which measures how well a set of concepts explains a model's decisions. Among C-XAI methods, Multi-Dimensional Concept Discovery (MCD) effectively improves completeness by breaking down the CNN latent space into distinct and interpretable concept subspaces. However, MCD's explanations can be difficult for humans to understand, raising concerns about their practical utility. To address this, we propose Human-Understandable Multi-dimensional Concept Discovery (HU-MCD). HU-MCD uses the Segment Anything Model for concept identification and implements a CNN-specific input masking technique to reduce noise introduced by traditional masking methods. These changes to MCD, paired with the completeness relation, enable HU-MCD to enhance concept understandability while maintaining explanation faithfulness. Our experiments, including human subject studies, show that HU-MCD provides more precise and reliable explanations than existing C-XAI methods. The code is available at https://github.com/grobruegge/hu-mcd.
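The abstract's core masking idea, keeping a SAM-proposed segment intact while suppressing the rest of the image, can be sketched as below. This is a minimal illustration, not the paper's CNN-specific technique: the `mask_input` function and the per-channel mean baseline are assumptions chosen to show how segment-aware masking avoids the hard-zero fill that traditional masking uses (a common source of out-of-distribution noise for CNNs).

```python
import numpy as np

def mask_input(image, segment_mask, baseline=None):
    """Keep only the pixels of one segment (e.g., a SAM proposal) and
    fill everything else with a baseline value.

    Hypothetical sketch: the paper's actual CNN-tailored masking is
    more involved; the per-channel mean fill here is an assumption.
    """
    if baseline is None:
        # per-channel dataset/image mean as a soft baseline instead of zeros
        baseline = image.mean(axis=(0, 1))
    # broadcast the boolean segment mask over the channel axis
    return np.where(segment_mask[..., None], image, baseline)

# toy example: 4x4 RGB image, keep only the top-left 2x2 segment
img = np.random.rand(4, 4, 3)
seg = np.zeros((4, 4), dtype=bool)
seg[:2, :2] = True
masked = mask_input(img, seg)
```

Pixels inside the segment are untouched, while the remainder is replaced by the mean color rather than black, keeping the masked image closer to the CNN's input distribution.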
Problem

Research questions and friction points this paper is trying to address.

Improving human-understandability of concept-based AI explanations
Reducing noise in concept identification using advanced masking
Maintaining explanation faithfulness while enhancing concept clarity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Segment Anything Model for concept identification
Implements CNN-specific input masking technique
Enhances concept understandability with completeness relation
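The completeness relation the bullets refer to measures how much of the model's decision a concept set accounts for. A minimal sketch of one such measure, assuming a linear classification head and concept directions in the CNN latent space (both the function name and the energy-ratio formulation are illustrative assumptions, not the paper's exact metric):

```python
import numpy as np

def completeness_score(features, concept_basis, head_weights):
    """Fraction of a linear head's output energy preserved when latent
    features are projected onto the span of the concept directions.

    Illustrative sketch of a completeness-style measure; the paper's
    actual formulation may differ.
    """
    # orthonormalize the concept directions (columns)
    Q, _ = np.linalg.qr(concept_basis)
    projected = features @ Q @ Q.T      # keep only the concept subspace
    full = features @ head_weights      # logits from full features
    approx = projected @ head_weights   # logits from concepts alone
    return 1.0 - np.linalg.norm(full - approx) ** 2 / np.linalg.norm(full) ** 2

# sanity check: a basis spanning the whole latent space is fully complete
X = np.random.rand(10, 4)              # toy latent features
W = np.random.rand(4, 3)               # toy linear head
score_full = completeness_score(X, np.eye(4), W)
```

A score near 1 means the concept subspaces suffice to reproduce the model's decisions; lower scores indicate the discovered concepts miss decision-relevant directions.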