Steerable Visual Representations

πŸ“… 2026-04-02
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing visual representations struggle to attend precisely to non-salient yet semantically relevant objects in images under natural language guidance, while large multimodal models often sacrifice general visual capability to improve language alignment. To address this, the authors propose a visual representation method that integrates a lightweight cross-attention mechanism into intermediate layers of a Vision Transformer, enabling early-stage textual guidance over both global and local visual features. This is the first approach to endow pretrained visual representations with fine-grained language controllability without degrading their performance on general vision tasks. The authors also introduce a new benchmark for evaluating representational steerability and demonstrate the method's effectiveness and strong zero-shot generalization on tasks such as referential object focusing, anomaly detection, and personalized recognition.
πŸ“ Abstract
Pretrained Vision Transformers (ViTs) such as DINOv2 and MAE provide generic image features that can be applied to a variety of downstream tasks such as retrieval, classification, and segmentation. However, such representations tend to focus on the most salient visual cues in the image, with no way to direct them toward less prominent concepts of interest. In contrast, Multimodal LLMs can be guided with textual prompts, but the resulting representations tend to be language-centric and lose their effectiveness for generic visual tasks. To address this, we introduce Steerable Visual Representations, a new class of visual representations, whose global and local features can be steered with natural language. While most vision-language models (e.g., CLIP) fuse text with visual features after encoding (late fusion), we inject text directly into the layers of the visual encoder (early fusion) via lightweight cross-attention. We introduce benchmarks for measuring representational steerability, and demonstrate that our steerable visual features can focus on any desired objects in an image while preserving the underlying representation quality. Our method also matches or outperforms dedicated approaches on anomaly detection and personalized object discrimination, exhibiting zero-shot generalization to out-of-distribution tasks.
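The abstract's core idea, injecting text into the visual encoder's layers via lightweight cross-attention (early fusion, as opposed to CLIP-style late fusion), can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the module names (`TextCrossAttention`, `SteeredViTBlock`), dimensions, and the choice of a residual pre-norm update are assumptions.

```python
import torch
import torch.nn as nn

class TextCrossAttention(nn.Module):
    """Lightweight cross-attention: image tokens (queries) attend to text tokens
    (keys/values), so language can steer the visual features. Hypothetical module,
    not the paper's exact architecture."""
    def __init__(self, dim=768, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens, text_tokens):
        # Residual update keeps the pretrained representation intact when the
        # text signal is weak or absent.
        steered, _ = self.attn(self.norm(img_tokens), text_tokens, text_tokens)
        return img_tokens + steered

class SteeredViTBlock(nn.Module):
    """Wraps a pretrained ViT block and injects text guidance before it runs
    (early fusion). Only the cross-attention adapter is trainable; the original
    block stays frozen, which is one plausible way to preserve generic quality."""
    def __init__(self, vit_block, dim=768):
        super().__init__()
        self.cross_attn = TextCrossAttention(dim)
        self.vit_block = vit_block
        for p in self.vit_block.parameters():
            p.requires_grad = False  # keep the pretrained features untouched

    def forward(self, img_tokens, text_tokens):
        return self.vit_block(self.cross_attn(img_tokens, text_tokens))
```

In use, `img_tokens` would be the `(batch, patches, dim)` activations inside a ViT such as DINOv2, and `text_tokens` the `(batch, words, dim)` output of a text encoder projected to the same width; inserting such an adapter at several intermediate depths gives the "early-stage textual guidance over both global and local visual features" the summary describes.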
Problem

Research questions and friction points this paper is trying to address.

steerable representations
vision-language models
visual attention
feature guidance
multimodal learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Steerable Visual Representations
early fusion
cross-attention
vision-language models
zero-shot generalization
πŸ”Ž Similar Papers
No similar papers found.