Manipulation Facing Threats: Evaluating Physical Vulnerabilities in End-to-End Vision Language Action Models

📅 2024-09-20
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
This work addresses the physical fragility of end-to-end vision-language-action models (VLAMs) in open-vocabulary robotic manipulation tasks. It introduces PVEP, the first unified framework for evaluating VLAMs' physical-world robustness. PVEP systematically benchmarks three realistic threats: out-of-distribution samples, typography-based visual prompts, and adversarial patch attacks. It combines multimodal safety analysis, vision-based adversarial attack generation, vision-action joint reasoning evaluation, and physics-based simulation testing to establish a generalizable paradigm for analyzing how VLAMs respond to physical security threats. Experiments reveal significant, consistent performance degradation across all threat types. PVEP also demonstrates high discriminability, strong reproducibility, and cross-architecture applicability, validated on state-of-the-art models including RT-2 and VoxPoser. The framework provides a scalable benchmark and methodological foundation for robustness and safety research on VLAMs.

📝 Abstract
Recently, driven by advances in Multimodal Large Language Models (MLLMs), Vision Language Action Models (VLAMs) have been proposed to achieve better performance in open-vocabulary scenarios for robotic manipulation tasks. Since manipulation tasks involve direct interaction with the physical world, ensuring robustness and safety during their execution is a critical issue. In this paper, by synthesizing current safety research on MLLMs with the specific physical-world application scenarios of manipulation tasks, we comprehensively evaluate VLAMs in the face of potential physical threats. Specifically, we propose the Physical Vulnerability Evaluating Pipeline (PVEP), which incorporates as many visual-modal physical threats as possible for evaluating the physical robustness of VLAMs. The physical threats in PVEP include Out-of-Distribution samples, Typography-based Visual Prompts, and Adversarial Patch Attacks. By comparing the performance fluctuations of VLAMs before and after being attacked, we provide generalizable analyses of how VLAMs respond to different physical security threats. Our project page: https://chaducheng.github.io/Manipulat-Facing-Threats/.
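The abstract's core evaluation idea, comparing a VLAM's task performance before and after each attack, can be sketched as a simple success-rate degradation metric. This is a minimal illustrative sketch, not the paper's implementation; the function name and trial data are hypothetical:

```python
# Hypothetical PVEP-style robustness scoring: compare a model's task
# success rate on clean inputs vs. under a physical threat (e.g. an
# adversarial patch). All names and numbers are illustrative.

def degradation(clean_successes, attacked_successes):
    """Relative drop in success rate after an attack (0.0 = no effect,
    1.0 = complete failure under attack)."""
    clean_rate = sum(clean_successes) / len(clean_successes)
    attacked_rate = sum(attacked_successes) / len(attacked_successes)
    if clean_rate == 0:
        return 0.0  # no baseline competence to degrade
    return (clean_rate - attacked_rate) / clean_rate

# Example: 10 manipulation trials per condition (1 = success, 0 = failure).
clean = [1] * 10                                  # 100% success, no attack
patched = [1, 0, 0, 1, 0, 0, 1, 0, 1, 0]         # 40% under a patch attack
print(f"degradation: {degradation(clean, patched):.2f}")  # prints 0.60
```

A relative (rather than absolute) drop makes scores comparable across models with different clean baselines, which matters when the same threats are benchmarked across architectures.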
Problem

Research questions and friction points this paper is trying to address.

Evaluating physical vulnerabilities in Vision Language Action Models
Assessing robustness against visual-modal physical threats
Analyzing VLAMs' responses to Out-of-Distribution samples, typography-based visual prompts, and adversarial patch attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comprehensively evaluating physical vulnerabilities in VLAMs
Proposing the Physical Vulnerability Evaluating Pipeline (PVEP)
Testing Out-of-Distribution, Typography-based, and Adversarial Patch attacks