Insight: Interpretable Semantic Hierarchies in Vision-Language Encoders

📅 2026-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited interpretability of current vision-language foundation models and the inadequacy of existing explanation methods, which suffer from poor spatial localization and applicability only to classification tasks. The authors propose a novel framework that integrates hierarchical sparse autoencoders with a semantically rich vision-language foundation model, enabling, for the first time, fine-grained, spatially precise, and hierarchically structured interpretable concept representations within a vision-language encoder. By modeling local co-occurrence relationships among concepts to construct semantic hierarchies and incorporating automatic concept naming, the method supports high-quality explanations for both image classification and segmentation. Experiments demonstrate that the model matches the performance of non-interpretable baselines on standard benchmarks while providing human-understandable, multi-granular concept-level explanations.
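The summary's core mechanism, a sparse autoencoder that decomposes patch embeddings into interpretable concepts at multiple granularities, can be illustrated with a toy sketch. The paper's exact architecture is not specified here; the top-k sparsity rule, the layer widths, and the two-level coarse/fine stacking below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def topk_sparse(a, k):
    # keep only the k largest activations per row, zero the rest,
    # then ReLU -- a simple way to enforce a sparsity constraint
    out = np.zeros_like(a)
    idx = np.argsort(a, axis=1)[:, -k:]
    np.put_along_axis(out, idx, np.take_along_axis(a, idx, axis=1), axis=1)
    return np.maximum(out, 0.0)

class SparseAutoencoder:
    """Toy SAE: linear encoder with top-k sparsity, linear decoder."""
    def __init__(self, d_in, d_concepts, k):
        self.W_enc = rng.normal(0.0, 0.1, (d_in, d_concepts))
        self.W_dec = rng.normal(0.0, 0.1, (d_concepts, d_in))
        self.k = k
    def encode(self, x):
        return topk_sparse(x @ self.W_enc, self.k)
    def decode(self, z):
        return z @ self.W_dec

# a hierarchy can be approximated by SAEs at different widths/sparsities:
# few active concepts per patch = coarse semantics, many = fine-grained
coarse = SparseAutoencoder(d_in=64, d_concepts=256, k=4)
fine = SparseAutoencoder(d_in=64, d_concepts=1024, k=16)

x = rng.normal(size=(8, 64))  # stand-in for 8 patch embeddings from a ViT
z_coarse, z_fine = coarse.encode(x), fine.encode(x)
```

Because each concept activation is tied to a specific patch, the spatial grounding described in the summary falls out directly: visualizing where a concept fires yields a localization map over the image.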

📝 Abstract
Language-aligned vision foundation models perform strongly across diverse downstream tasks. Yet their learned representations remain opaque, making it hard to interpret their decision-making. Recent works decompose these representations into human-interpretable concepts, but they provide poor spatial grounding and are limited to image classification tasks. In this work, we propose Insight, a language-aligned concept foundation model that provides fine-grained concepts that are human-interpretable and spatially grounded in the input image. We leverage a hierarchical sparse autoencoder and a foundation model with strong semantic representations to automatically extract concepts at various granularities. Examining local co-occurrence dependencies among concepts allows us to define concept relationships. Through these relations we further improve concept naming and obtain richer explanations. On benchmark data, we show that Insight delivers classification and segmentation performance competitive with opaque foundation models while providing fine-grained, high-quality concept-based explanations. Code is available at https://github.com/kawi19/Insight.
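The abstract's idea of deriving concept relationships from local co-occurrence can be sketched as follows. The rule used here, linking a child concept to a parent when the parent fires in nearly every location the child does, is one plausible reading; the threshold, the binary-activation assumption, and the toy "animal/dog/cat" data are illustrative, not taken from the paper.

```python
import numpy as np

# binary concept activations per image patch (rows: patches, cols: concepts)
# toy data: concept 0 ("animal") fires wherever concept 1 ("dog")
# or concept 2 ("cat") fires, so 1 and 2 should become its children
A = np.array([
    [1, 1, 0],
    [1, 1, 0],
    [1, 0, 1],
    [1, 0, 0],
    [0, 0, 0],
], dtype=float)

co = A.T @ A                        # pairwise co-occurrence counts
counts = np.diag(co)                # per-concept activation counts
# cond[i, j] = P(concept i fires | concept j fires)
cond = co / np.maximum(counts, 1)

threshold = 0.9
parents = {}
for child in range(A.shape[1]):
    for parent in range(A.shape[1]):
        # parent must cover the child's locations and be more general
        if parent != child and cond[parent, child] >= threshold \
                and counts[parent] > counts[child]:
            parents[child] = parent

print(parents)  # {1: 0, 2: 0} -- "dog" and "cat" attach under "animal"
```

The resulting parent map is a semantic hierarchy in miniature; on real data one would additionally resolve multiple candidate parents (e.g. keep the most specific one) and use the hierarchy to propagate names from parents to unnamed children.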
Problem

Research questions and friction points this paper is trying to address.

interpretable representations
vision-language models
spatial grounding
concept decomposition
model explainability
Innovation

Methods, ideas, or system contributions that make the work stand out.

interpretable concepts
spatial grounding
hierarchical sparse autoencoder
vision-language foundation models
semantic hierarchies