BadRobot: Jailbreaking Embodied LLMs in the Physical World

📅 2024-07-16
📈 Citations: 3
Influential: 0
📄 PDF
🤖 AI Summary
This work exposes critical safety vulnerabilities in embodied large language models (LLMs) when executing harmful physical actions under voice指令—induced commands in real-world environments, introducing the first “jailbreak” attack paradigm for embodied AI in the physical world. The method systematically identifies and exploits three novel vulnerability classes: inherent LLM manipulability, semantic misalignment between linguistic outputs and physical actions, and hazardous behaviors arising from world-model deficiencies. We develop a voice-interaction modeling framework, a cross-platform adversarial testing infrastructure (encompassing Voxposer, Code-as-Policies, and ProgPrompt), and a benchmark of malicious physical-action queries. Our attacks successfully trigger diverse policy violations on mainstream embodied LLM systems. Experiments demonstrate the attack’s broad applicability and tangible physical risks. This work establishes the first reproducible, standardized evaluation benchmark and methodology for assessing safety in embodied AI systems.

Technology Category

Application Category

📝 Abstract
Embodied AI represents systems where AI is integrated into physical entities. Large Language Model (LLM), which exhibits powerful language understanding abilities, has been extensively employed in embodied AI by facilitating sophisticated task planning. However, a critical safety issue remains overlooked: could these embodied LLMs perpetrate harmful behaviors? In response, we introduce BadRobot, a novel attack paradigm aiming to make embodied LLMs violate safety and ethical constraints through typical voice-based user-system interactions. Specifically, three vulnerabilities are exploited to achieve this type of attack: (i) manipulation of LLMs within robotic systems, (ii) misalignment between linguistic outputs and physical actions, and (iii) unintentional hazardous behaviors caused by world knowledge's flaws. Furthermore, we construct a benchmark of various malicious physical action queries to evaluate BadRobot's attack performance. Based on this benchmark, extensive experiments against existing prominent embodied LLM frameworks (e.g., Voxposer, Code as Policies, and ProgPrompt) demonstrate the effectiveness of our BadRobot.
Problem

Research questions and friction points this paper is trying to address.

Identifies vulnerabilities in embodied large language models.
Explores harmful behaviors in AI-physical system integrations.
Assesses safety risks through voice-based interaction attacks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exploits LLM vulnerabilities
Manipulates linguistic-physical misalignment
Evaluates malicious action queries
🔎 Similar Papers
No similar papers found.
Hangtao Zhang
Hangtao Zhang
Huazhong University of Science and Technology (HUST)
AI Security
C
Chenyu Zhu
Huazhong University of Science and Technology, Wuhan, China
Xianlong Wang
Xianlong Wang
Ph.D. student, City University of Hong Kong
Trustworthy LLM/VLMEmbodied AIUnlearnable Example3D Point CloudPoisoning/Adversarial Attack
Z
Ziqi Zhou
Huazhong University of Science and Technology, Wuhan, China
C
Changgan Yin
Huazhong University of Science and Technology, Wuhan, China
Minghui Li
Minghui Li
Huazhong University of Science and Technology
AI Security
L
Lulu Xue
Huazhong University of Science and Technology, Wuhan, China
Y
Yichen Wang
Huazhong University of Science and Technology, Wuhan, China
Shengshan Hu
Shengshan Hu
School of CSE, Huazhong University of Science and Technology (HUST)
AI SecurityEmbodied AIAutonomous Driving
A
Aishan Liu
Beihang University, Beijing, China
P
Peijin Guo
Huazhong University of Science and Technology, Wuhan, China
L
Leo Yu Zhang
Griffith University, Queensland, Australia