Interpretable and Steerable Concept Bottleneck Sparse Autoencoders

📅 2025-12-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Sparse autoencoders (SAEs) suffer from low neuron interpretability and steerability, and their unsupervised dictionaries often omit user-desired concepts, limiting their utility for interpreting and controlling large language and vision-language models. To address this, the paper proposes Concept Bottleneck Sparse Autoencoders (CB-SAE), a post-hoc framework that prunes low-utility neurons and augments the latent space with a lightweight concept bottleneck aligned to a user-defined concept set, while preserving sparse encoding. Evaluated on vision-language models and image generation tasks, CB-SAE improves interpretability by +32.1% and steerability by +14.5% over baseline SAEs. The authors state that code and model weights will be released.

📝 Abstract
Sparse autoencoders (SAEs) promise a unified approach for mechanistic interpretability, concept discovery, and model steering in LLMs and LVLMs. However, realizing this potential requires that the learned features be both interpretable and steerable. To that end, we introduce two new computationally inexpensive interpretability and steerability metrics and conduct a systematic analysis on LVLMs. Our analysis uncovers two observations: (i) a majority of SAE neurons exhibit low interpretability, low steerability, or both, rendering them ineffective for downstream use; and (ii) due to the unsupervised nature of SAEs, user-desired concepts are often absent from the learned dictionary, limiting their practical utility. To address these limitations, we propose Concept Bottleneck Sparse Autoencoders (CB-SAE), a novel post-hoc framework that prunes low-utility neurons and augments the latent space with a lightweight concept bottleneck aligned to a user-defined concept set. The resulting CB-SAE improves interpretability by +32.1% and steerability by +14.5% across LVLMs and image generation tasks. We will make our code and model weights available.
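The pipeline described above, an SAE whose latent space is augmented with a lightweight concept bottleneck aligned to user-defined concepts, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the weights are random, the dimensions are toy-sized, and `concept_scores` merely stands in for where the paper's concept-alignment head attaches.

```python
import random

random.seed(0)

def relu(v):
    return [max(0.0, x) for x in v]

def matvec(W, x):
    # Dense matrix-vector product over plain Python lists.
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

# Toy sparse autoencoder: d_model activations -> d_sae sparse latents.
d_model, d_sae = 4, 8
W_enc = [[random.gauss(0, 0.5) for _ in range(d_model)] for _ in range(d_sae)]
W_dec = [[random.gauss(0, 0.5) for _ in range(d_sae)] for _ in range(d_model)]

def sae_encode(x):
    return relu(matvec(W_enc, x))   # sparse, non-negative latents

def sae_decode(z):
    return matvec(W_dec, z)         # reconstruction of the activation

# Hypothetical concept bottleneck: a lightweight linear head mapping
# SAE latents to scores for a user-defined concept set.
concepts = ["red", "dog", "outdoors"]
W_cb = [[random.gauss(0, 0.5) for _ in range(d_sae)] for _ in concepts]

def concept_scores(x):
    return matvec(W_cb, sae_encode(x))
```

In the paper the bottleneck is trained post hoc so that each score tracks one user concept; here the random weights only show where such a head plugs into a frozen SAE.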
Problem

Research questions and friction points this paper is trying to address.

Improves interpretability and steerability of sparse autoencoders
Addresses low-utility neurons and missing user-desired concepts
Enhances concept discovery and model steering in vision-language models
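Steering in this setting typically means shifting a model activation along a feature direction, e.g. the SAE decoder column for the target neuron or concept, scaled by some strength. A hedged sketch, with a made-up direction vector and scale `alpha` (not the paper's values):

```python
def steer(activation, direction, alpha=2.0):
    """Shift an activation along a feature direction, scaled by alpha."""
    return [a + alpha * d for a, d in zip(activation, direction)]

h = [0.5, -0.2, 0.1, 0.0]              # toy residual-stream activation
dog_direction = [0.1, 0.3, -0.1, 0.2]  # hypothetical decoder column
h_steered = steer(h, dog_direction)
```

Larger `alpha` pushes the output more strongly toward the concept; negative `alpha` suppresses it.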
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prunes low-utility neurons for efficiency
Adds lightweight concept bottleneck for alignment
Improves interpretability and steerability metrics significantly
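The pruning idea can be illustrated as keeping only the top fraction of neurons under a per-neuron utility score. The scores below are synthetic stand-ins; the paper derives them from its interpretability and steerability metrics.

```python
def prune_mask(scores, keep_frac=0.5):
    """Binary mask keeping the highest-scoring fraction of neurons."""
    k = max(1, int(len(scores) * keep_frac))
    ranked = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    kept = set(ranked[:k])
    return [1.0 if i in kept else 0.0 for i in range(len(scores))]

# Synthetic per-neuron utility scores for an 8-neuron toy SAE.
scores = [0.9, 0.1, 0.7, 0.05, 0.6, 0.2, 0.8, 0.3]
mask = prune_mask(scores)
# A pruned latent is then z[i] * mask[i] for each neuron i.
```

Masking (rather than deleting) the low-utility neurons keeps the decoder shapes intact, which is convenient for a post-hoc framework operating on a frozen SAE.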