TRAP: Hijacking VLA CoT-Reasoning via Adversarial Patches

📅 2026-03-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
The safety of Chain-of-Thought (CoT) reasoning in vision-language-action (VLA) models remains underexplored, leaving an opening for malicious exploitation that hijacks robotic behavior. This work proposes TRAP, the first framework to selectively disrupt intermediate CoT reasoning through physically realizable adversarial patches, inducing the model to execute attacker-specified high-risk actions (such as erroneously handing over a knife) without altering the user's instructions. TRAP couples adversarial patch generation with a CoT-specific adversarial loss function and demonstrates effectiveness across three mainstream VLA architectures and CoT paradigms. The results show that even semantically inconsistent CoT outputs can dominate behavioral decisions, and that the attack patches require only standard printing for deployment in real-world environments.

📝 Abstract
By integrating Chain-of-Thought (CoT) reasoning, Vision-Language-Action (VLA) models have demonstrated strong capabilities in robotic manipulation, particularly by improving generalization and interpretability. However, the security of CoT-based reasoning mechanisms remains largely unexplored. In this paper, we show that CoT reasoning introduces a novel attack vector for targeted control hijacking (for example, causing a robot to mistakenly deliver a knife to a person instead of an apple) without modifying the user's instruction. We first provide empirical evidence that CoT strongly governs action generation, even when it is semantically misaligned with the input instructions. Building on this observation, we propose TRAP, the first targeted adversarial attack framework for CoT-reasoning VLA models. TRAP uses an adversarial patch (e.g., a coaster placed on the table) to corrupt intermediate CoT reasoning and hijack the VLA's output. By optimizing the CoT adversarial loss, TRAP induces specific, adversary-defined behaviors. Extensive evaluations across three mainstream VLA architectures and three CoT reasoning paradigms validate the effectiveness of TRAP. Notably, we deployed the patch in a real-world setting simply by printing it on paper. Our findings highlight the urgent need to secure CoT reasoning in VLA systems.
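The attack pipeline the abstract describes, optimizing a physical patch region under a CoT adversarial loss so the model's reasoning tokens drift toward an attacker-chosen target, can be sketched with a toy differentiable stand-in. Everything below (the linear "model", the loss, the PGD-style update) is an illustrative assumption for exposition, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a VLA's next-CoT-token head: logits = W @ image.
# (Hypothetical; the paper attacks large vision-language-action transformers.)
NUM_PIXELS, NUM_TOKENS = 64, 10
W = rng.normal(size=(NUM_TOKENS, NUM_PIXELS))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def cot_adv_loss_and_grad(image, target_token):
    """Cross-entropy pushing the predicted CoT token toward the attacker's target."""
    p = softmax(W @ image)
    loss = -np.log(p[target_token] + 1e-12)
    onehot = np.zeros(NUM_TOKENS)
    onehot[target_token] = 1.0
    grad = W.T @ (p - onehot)  # analytic d(loss)/d(image) for this linear model
    return loss, grad

# Scene image with pixels in [0, 1]; the patch occupies a fixed region
# (physically, e.g., a printed coaster placed on the table).
image = rng.uniform(size=NUM_PIXELS)
patch_mask = np.zeros(NUM_PIXELS, dtype=bool)
patch_mask[:16] = True        # only these pixels are attacker-controlled
target_token = 3              # attacker-specified CoT token (e.g., "knife")

loss0, _ = cot_adv_loss_and_grad(image, target_token)
for _ in range(200):          # PGD-style signed-gradient updates on the patch only
    _, grad = cot_adv_loss_and_grad(image, target_token)
    image[patch_mask] -= 0.01 * np.sign(grad[patch_mask])
    image = np.clip(image, 0.0, 1.0)  # keep pixels in a printable/valid range
loss_final, _ = cot_adv_loss_and_grad(image, target_token)
```

The key structural points carry over to the real setting: the loss is taken on the intermediate reasoning tokens rather than the final action head, and only the patch region is updated, so the optimized pattern remains physically realizable by printing.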
Problem

Research questions and friction points this paper is trying to address.

Chain-of-Thought reasoning
Vision-Language-Action models
adversarial attack
robotic manipulation
security vulnerability
Innovation

Methods, ideas, or system contributions that make the work stand out.

adversarial patch
Chain-of-Thought reasoning
Vision-Language-Action model
targeted attack
robotic manipulation
Zhengxian Huang
Zhejiang University, Hangzhou, China
Wenjun Zhu
Zhejiang University, Hangzhou, China
Haoxuan Qiu
Harbin Institute of Technology, Harbin, China
Xiaoyu Ji
Professor, Zhejiang University
IoT Security, Sensor Security, AI Security
Wenyuan Xu
Professor, IEEE Fellow, Zhejiang University, College of EE
Wireless Network Security, Embedded System Security, Analog Cyber Security, IoT Security