Textual Steering Vectors Can Improve Visual Understanding in Multimodal Large Language Models

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Steering methods can guide large language models (LLMs) without modifying their parameters, but multimodal large language models (MLLMs) currently lack a comparable suite of techniques, owing in part to their recency and architectural diversity. This paper investigates whether MLLMs can be steered with vectors derived from their text-only LLM backbone, extracted via sparse autoencoders (SAEs), mean shift, and linear probing, and applied at inference time without fine-tuning. Text-derived steering consistently improves multimodal accuracy across diverse MLLM architectures and visual tasks: on CV-Bench, mean-shift steering yields up to +7.3% accuracy on spatial relationship recognition and up to +3.3% on counting, outperforming prompting and generalizing well to out-of-distribution datasets, all with minimal additional data collection and computational overhead.
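The mean-shift variant described above can be sketched in a few lines: take the difference between the mean hidden-state activations of concept-positive and concept-negative text examples, then add the (scaled) vector to the model's activations at inference. This is an illustrative sketch, not the paper's code; the unit normalization and the `alpha` scale are assumptions, and in practice the addition would be done via a forward hook at a chosen transformer layer.

```python
import numpy as np

def mean_shift_steering_vector(pos_acts: np.ndarray, neg_acts: np.ndarray) -> np.ndarray:
    """Mean-shift steering vector: difference between the mean activation
    of concept-positive and concept-negative text examples.

    pos_acts, neg_acts: (num_examples, hidden_dim) hidden states collected
    from the text-only LLM backbone at some layer.
    """
    v = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
    return v / np.linalg.norm(v)  # unit-normalize so alpha sets the scale

def apply_steering(hidden_states: np.ndarray, vector: np.ndarray,
                   alpha: float = 4.0) -> np.ndarray:
    """Add the scaled steering vector to every token's hidden state.

    hidden_states: (num_tokens, hidden_dim) activations at the steered layer.
    """
    return hidden_states + alpha * vector
```

In an actual MLLM this addition would be registered as a hook on one of the backbone's decoder layers, steering both text and projected image tokens with a vector computed from text alone.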

📝 Abstract
Steering methods have emerged as effective and targeted tools for guiding large language models' (LLMs) behavior without modifying their parameters. Multimodal large language models (MLLMs), however, do not currently enjoy the same suite of techniques, due in part to their recency and architectural diversity. Inspired by this gap, we investigate whether MLLMs can be steered using vectors derived from their text-only LLM backbone, via sparse autoencoders (SAEs), mean shift, and linear probing. We find that text-derived steering consistently enhances multimodal accuracy across diverse MLLM architectures and visual tasks. In particular, mean shift boosts spatial relationship accuracy on CV-Bench by up to +7.3% and counting accuracy by up to +3.3%, outperforming prompting and exhibiting strong generalization to out-of-distribution datasets. These results highlight textual steering vectors as a powerful, efficient mechanism for enhancing grounding in MLLMs with minimal additional data collection and computational overhead.
Problem

Research questions and friction points this paper is trying to address.

Enhancing visual understanding in multimodal LLMs using textual steering vectors
Improving MLLM accuracy across architectures without parameter modification
Boosting spatial and counting accuracy in visual tasks via text-derived steering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Steering vectors derived from the text-only LLM backbone transfer to MLLMs
Mean-shift steering boosts spatial relationship and counting accuracy on CV-Bench
Vectors extracted via sparse autoencoders, mean shift, and linear probing