Language Models Can Explain Visual Features via Steering

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of automatically interpreting the vast number of features extracted by sparse autoencoders (SAEs) in visual models without manual intervention. The authors propose a causal intervention–based feature steering method: by activating individual SAE features in isolation on an empty (zeroed) input image and prompting a vision–language model to describe what it “sees,” semantic explanations for the corresponding visual concepts are generated end-to-end. To further enhance interpretability, they introduce a Steering-informed Top-k fusion strategy that combines the strengths of causal interventions and representative input examples, significantly improving explanation quality without additional computational cost. The approach is highly scalable, with interpretation fidelity increasing alongside the size of the language model, marking the first fully automated pipeline for high-quality visual feature explanation.

📝 Abstract
Sparse Autoencoders uncover thousands of features in vision models, yet explaining these features without requiring human intervention remains an open challenge. While previous work has proposed generating correlation-based explanations from top activating input examples, we present a fundamentally different alternative based on causal interventions. We leverage the structure of Vision-Language Models and steer individual SAE features in the vision encoder after providing an empty image. Then, we prompt the language model to explain what it "sees", effectively eliciting the visual concept represented by each feature. Results show that Steering offers a scalable alternative that complements traditional approaches based on input examples, serving as a new axis for automated interpretability in vision models. Moreover, the quality of explanations improves consistently with the scale of the language model, highlighting our method as a promising direction for future research. Finally, we propose Steering-informed Top-k, a hybrid approach that combines the strengths of causal interventions and input-based approaches to achieve state-of-the-art explanation quality without additional computational cost.
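The steering intervention described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: every name below (`steer_feature`, `explain_feature`, the toy encoder, the feature direction, and the steering strength `alpha`) is hypothetical, and real systems would operate on the vision encoder's actual token activations inside a VLM.

```python
# Hedged sketch: inject one SAE feature's decoder direction into the vision
# tokens produced for an empty (all-zero) image, then hand the steered tokens
# to a language model and ask it to describe what it "sees".

def steer_feature(encode_image, feature_direction, alpha):
    """Return vision tokens for a zeroed image, steered along one SAE feature."""
    empty_image = [[0.0] * 8 for _ in range(8)]   # zeroed input (toy size)
    tokens = encode_image(empty_image)            # vision encoder output
    # Causal intervention: add the feature's decoder direction to every token.
    return [[t + alpha * d for t, d in zip(tok, feature_direction)]
            for tok in tokens]

def explain_feature(language_model, steered_tokens):
    """Prompt the language model to name the injected visual concept."""
    return language_model(steered_tokens, "Describe what you see in the image.")

# Toy stand-ins so the sketch runs end to end (purely illustrative).
def toy_encoder(image):
    return [[0.0, 0.0, 0.0] for _ in image]       # one zero token per image row

direction = [1.0, 0.0, -1.0]                      # hypothetical feature direction
steered = steer_feature(toy_encoder, direction, alpha=10.0)
mock_lm = lambda tokens, prompt: "a description of the steered concept"
explanation = explain_feature(mock_lm, steered)
```

Because the input image is empty, anything the language model reports is attributable to the injected feature direction, which is what makes the explanation causal rather than correlational.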
Problem

Research questions and friction points this paper is trying to address.

visual features
automated interpretability
sparse autoencoders
vision models
feature explanation
Innovation

Methods, ideas, or system contributions that make the work stand out.

causal intervention
feature steering
vision-language models
sparse autoencoders
automated interpretability