Concepts from Representations: Post-hoc Concept Bottleneck Models via Sparse Decomposition of Visual Representations

📅 2026-01-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes PCBM-ReD, a novel post-hoc concept bottleneck model that transforms any pretrained deep network into a highly accurate and interpretable system without requiring retraining. Addressing the limitations of existing approaches—such as poor interpretability of standard deep models and the reliance of conventional concept bottleneck models on manual annotations, unreliable concepts, or missing visual grounding—PCBM-ReD leverages CLIP’s vision-language alignment and multimodal large language models (MLLMs) to automatically identify task-relevant, human-understandable visual concepts. It further refines these concepts through reconstruction-guided sparse decomposition to enhance representational fidelity. Evaluated across 11 image classification benchmarks, the method achieves state-of-the-art performance, substantially narrowing the accuracy gap with end-to-end models while delivering high-fidelity, human-interpretable explanations.

📝 Abstract
Deep learning has achieved remarkable success in image recognition, yet the inherent opacity of deep models poses challenges for deployment in critical domains. Concept-based interpretations aim to address this by explaining model reasoning through human-understandable concepts. However, existing post-hoc methods and ante-hoc concept bottleneck models (CBMs) suffer from limitations such as unreliable concept relevance, non-visual or labor-intensive concept definitions, and model- or data-agnostic assumptions. This paper introduces the Post-hoc Concept Bottleneck Model via Representation Decomposition (PCBM-ReD), a novel pipeline that retrofits interpretability onto pretrained opaque models. PCBM-ReD automatically extracts visual concepts from a pretrained encoder, employs multimodal large language models (MLLMs) to label and filter concepts by visual identifiability and task relevance, and selects an independent subset via reconstruction-guided optimization. Leveraging CLIP's visual-text alignment, it decomposes image representations into a linear combination of concept embeddings to fit the CBM abstraction. Extensive experiments across 11 image classification tasks show that PCBM-ReD achieves state-of-the-art accuracy, narrows the performance gap with end-to-end models, and exhibits better interpretability.
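The core mechanism the abstract describes, approximating an image embedding as a sparse linear combination of concept embeddings, can be sketched with a lasso-style objective. The snippet below is an illustrative stand-in, not the paper's actual implementation: the ISTA solver, function names, and hyperparameters are all assumptions, and the paper's reconstruction-guided optimization may differ in its exact formulation.

```python
import numpy as np

def sparse_decompose(z, concepts, lam=0.01, lr=0.1, n_iter=500):
    """Approximate an image embedding z (shape (d,)) as a sparse linear
    combination of concept embeddings (rows of `concepts`, shape (k, d)).

    Solves min_w 0.5 * ||z - concepts.T @ w||^2 + lam * ||w||_1
    with ISTA (gradient step + soft-thresholding). Illustrative sketch
    of sparse decomposition; hyperparameters are placeholders.
    """
    k, _ = concepts.shape
    w = np.zeros(k)
    for _ in range(n_iter):
        # gradient of the reconstruction term w.r.t. w
        grad = concepts @ (concepts.T @ w - z)
        w = w - lr * grad
        # soft-threshold to encourage sparsity in the concept weights
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

# Toy usage: a synthetic embedding built from two "concepts"
rng = np.random.default_rng(0)
concepts = rng.normal(size=(5, 16))
concepts /= np.linalg.norm(concepts, axis=1, keepdims=True)  # unit-norm rows
z = 0.7 * concepts[0] + 0.3 * concepts[2]
w = sparse_decompose(z, concepts)
```

In the paper's setting, `z` would come from CLIP's image encoder and each row of `concepts` from CLIP's text encoder applied to a concept name, so the recovered weights `w` indicate how strongly each human-readable concept contributes to the representation.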
Problem

Research questions and friction points this paper is trying to address.

concept-based interpretation
post-hoc interpretability
concept bottleneck models
visual concept extraction
model opacity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Post-hoc interpretability
Concept Bottleneck Models
Sparse decomposition
Multimodal LLMs
CLIP