🤖 AI Summary
This work addresses key limitations of existing approaches to vision language model (VLM) behavior control, namely reliance on internal model access, easy detectability, and incompatibility with closed API deployments, by proposing VISOR: a non-intrusive, input-only method for stealthy, bidirectional behavioral redirection. VISOR requires no architectural modifications, text instructions, or system prompts; instead, it learns compact, universal steering images (~150 KB) end to end that manipulate VLM behavior solely through the visual input channel. By optimizing images to induce target activation patterns, VISOR achieves behavioral shifts of up to 25% from baseline while preserving 99.9% performance on 14,000 unrelated MMLU tasks. Validated on LLaVA-1.5-7B, this is reportedly the first demonstration of effective, hard-to-detect behavioral manipulation through the vision channel alone, exposing a critical security vulnerability at the visual input layer of VLMs.
📝 Abstract
Vision Language Models (VLMs) are increasingly being used in a broad range of applications, bringing their security and behavioral control to the forefront. Existing approaches to behavioral control or output redirection, such as system prompting, are easily detectable and often ineffective, while activation-based steering vectors require invasive runtime access to model internals, which is incompatible with API-based services and closed-source deployments. We introduce VISOR (Visual Input-based Steering for Output Redirection), a novel method that achieves sophisticated behavioral control through optimized visual inputs alone. By crafting universal steering images that induce target activation patterns, VISOR enables practical deployment across all VLM serving modalities while remaining imperceptible compared to explicit textual instructions. We validate VISOR on LLaVA-1.5-7B across three critical alignment tasks: refusal, sycophancy, and survival instinct. A single 150 KB steering image matches steering-vector performance within 1-2% for positive behavioral shifts while dramatically exceeding it for negative steering, achieving up to 25% shifts from baseline compared to steering vectors' modest changes. Unlike system prompting (3-4% shifts), VISOR provides robust bidirectional control while maintaining 99.9% performance on 14,000 unrelated MMLU tasks. Beyond eliminating runtime overhead and model access requirements, VISOR exposes a critical security vulnerability: adversaries can achieve sophisticated behavioral manipulation through visual channels alone, bypassing text-based defenses. Our work fundamentally reimagines multimodal model control and highlights the urgent need for defenses against visual steering attacks.
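The abstract's core idea, optimizing a fixed image so that it pushes the model's internal activations along a chosen steering direction, can be illustrated with a minimal sketch. The sketch below is not the paper's implementation: it replaces the frozen VLM with a single linear "encoder" and a known steering direction, and runs plain gradient descent on the pixels only. In the real method, gradients would flow through the full frozen VLM back to the image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a VLM's vision pathway: one frozen linear "encoder".
# (A hypothetical simplification; the actual method backpropagates
# through the frozen VLM itself to optimize the steering image.)
d_pix, d_act = 64, 16
W = rng.normal(size=(d_act, d_pix)) / np.sqrt(d_pix)

x0 = rng.normal(size=d_pix)            # baseline "image" pixels
v = rng.normal(size=d_act)
v /= np.linalg.norm(v)                 # unit steering direction
target = W @ x0 + 2.0 * v              # desired activation pattern

# Optimize the pixels only (the model stays frozen):
# loss = ||W x - target||^2
x, lr = x0.copy(), 0.1
for _ in range(500):
    grad = 2 * W.T @ (W @ x - target)
    x -= lr * grad

# Activation shift achieved along the steering direction
shift = (W @ x - W @ x0) @ v
print(round(float(shift), 3))          # ≈ 2.0
```

The key property the toy preserves is that only the input is modified: the weights `W` never change, mirroring how VISOR steers behavior without any runtime access to model internals.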