Adversarial Prompt Injection Attack on Multimodal Large Language Models

📅 2026-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of multimodal large language models (MLLMs) to prompt injection attacks during instruction following, noting that existing approaches are often detectable because they rely on explicit textual or visual cues. The authors propose a stealthy visual prompt injection method that adaptively embeds malicious instructions via bounded text overlays and iteratively optimizes imperceptible image perturbations to align the attacked image's features with those of adversarial visual and textual targets at both coarse and fine granularities. By introducing dynamically optimized text-rendered images as visual targets, the approach achieves, for the first time, concealed attacks on closed-source MLLMs, substantially enhancing semantic fidelity and cross-model transferability. Experimental results demonstrate that the proposed method significantly outperforms current attacks on two comprehension tasks, offering both high effectiveness and strong imperceptibility.
📝 Abstract
Although multimodal large language models (MLLMs) are increasingly deployed in real-world applications, their instruction-following behavior leaves them vulnerable to prompt injection attacks. Existing prompt injection methods predominantly rely on textual prompts or perceptible visual prompts that are observable by human users. In this work, we study imperceptible visual prompt injection against powerful closed-source MLLMs, where adversarial instructions are embedded in the visual modality. Our method adaptively embeds the malicious prompt into the input image via a bounded text overlay to provide semantic guidance. Meanwhile, the imperceptible visual perturbation is iteratively optimized to align the feature representations of the attacked image with those of the malicious visual and textual targets at both coarse- and fine-grained levels. Specifically, the visual target is instantiated as a text-rendered image and progressively refined during optimization to more faithfully represent the desired semantics and improve transferability. Extensive experiments on two multimodal understanding tasks across multiple closed-source MLLMs demonstrate the superior performance of our approach compared to existing methods.
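The core loop the abstract describes — iteratively optimizing a norm-bounded perturbation so that the attacked image's features approach a malicious target's features — can be sketched in miniature. Everything below is an illustrative assumption, not the paper's method: the toy linear "encoder" `W` stands in for the real vision encoder, a single squared feature distance stands in for the coarse- and fine-grained alignment losses, and the text-overlay and target-refinement steps are omitted.

```python
import numpy as np

# Toy setup: a random linear map W plays the role of a frozen feature
# encoder; the image and the (text-rendered) target are random vectors.
rng = np.random.default_rng(0)
dim_img, dim_feat = 64, 16
W = rng.standard_normal((dim_feat, dim_img)) / np.sqrt(dim_img)

image = rng.standard_normal(dim_img)             # clean input (flattened)
target_feat = W @ rng.standard_normal(dim_img)   # features of the malicious target

eps, lr, steps = 0.1, 0.1, 300                   # L_inf bound, step size, iterations
delta = np.zeros(dim_img)                        # imperceptible perturbation

for _ in range(steps):
    feat = W @ (image + delta)
    grad = 2.0 * W.T @ (feat - target_feat)      # gradient of squared feature distance
    delta = np.clip(delta - lr * grad, -eps, eps)  # projected gradient step onto the box

before = np.linalg.norm(W @ image - target_feat)
after = np.linalg.norm(W @ (image + delta) - target_feat)
print(before, after)  # the aligned distance should shrink
```

The projection via `np.clip` is what keeps the perturbation imperceptible in this sketch: every coordinate of `delta` stays within `±eps`, while gradient steps pull the perturbed image's features toward the target.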
Problem

Research questions and friction points this paper is trying to address.

adversarial prompt injection
multimodal large language models
imperceptible visual attack
instruction-following vulnerability
Innovation

Methods, ideas, or system contributions that make the work stand out.

adversarial prompt injection
multimodal large language models
imperceptible visual perturbation
feature alignment
text-rendered image
Meiwen Ding
Rapid-Rich Object Search Lab, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
Song Xia
NTU
Machine Learning
Chenqi Kong
Rapid-Rich Object Search Lab, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
Xudong Jiang
IEEE Fellow, Nanyang Technological University, Singapore
Pattern Recognition · Computer Vision · Machine Learning · Image Processing · Biometrics