🤖 AI Summary
This work proposes the first physical prompt injection attack against large vision-language models that requires no model access, no input modification, and no prior knowledge of user queries—addressing a critical limitation of existing attacks, which rely on digital input channels and are thus impractical in real-world settings. By embedding malicious textual instructions into the physical environment, the method manipulates model behavior solely through visual observation under black-box conditions. It leverages high-saliency visual cue selection and an environment-adaptive placement strategy guided by spatiotemporal attention, effectively exploiting the model’s own perception pipeline to achieve robust control. Evaluated across ten state-of-the-art models on tasks including visual question answering, planning, and navigation, the attack achieves success rates of up to 98% and performs consistently across varying distances, viewpoints, and lighting conditions.
📝 Abstract
Large Vision-Language Models (LVLMs) are increasingly deployed in real-world intelligent systems for perception and reasoning in open physical environments. While LVLMs are known to be vulnerable to prompt injection attacks, existing methods either require access to input channels or depend on knowledge of user queries—assumptions that rarely hold in practical deployments. We propose the first Physical Prompt Injection Attack (PPIA), a black-box, query-agnostic attack that embeds malicious typographic instructions into physical objects perceivable by the LVLM. PPIA requires no access to the model, its inputs, or its internal pipeline, and operates solely through visual observation. It combines offline selection of highly recognizable and semantically effective visual prompts with strategic environment-aware placement guided by spatiotemporal attention, ensuring that the injected prompts are both perceivable and influential on model behavior. We evaluate PPIA across 10 state-of-the-art LVLMs in both simulated and real-world settings on tasks including visual question answering, planning, and navigation. PPIA achieves attack success rates of up to 98%, with strong robustness under varying physical conditions such as distance, viewpoint, and illumination. Our code is publicly available at https://github.com/2023cghacker/Physical-Prompt-Injection-Attack.