Attention-Guided Patch-Wise Sparse Adversarial Attacks on Vision-Language-Action Models

📅 2025-11-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing adversarial attacks on vision-language-action (VLA) models rely on costly end-to-end training and produce perceptible perturbations. This paper proposes a lightweight adversarial attack framework tailored for embodied intelligence: instead of pixel-level manipulation, it injects attention-guided, locally sparse perturbations into the textual feature space produced by the visual encoder's projection. By integrating Top-K masking, sensitivity enhancement, and gradient-based optimization under an ℓ∞ constraint of ≤4/255, the method achieves a near-100% attack success rate while perturbing fewer than 10% of image patches. Perturbations concentrate densely on semantically critical regions, remaining visually imperceptible, with an average per-step latency of ~0.06 seconds. To the authors' knowledge, this is the first work to transfer adversarial perturbations into cross-modal feature space and leverage attention mechanisms for efficient sparsification, significantly reducing both training overhead and perceptual visibility.
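The core mechanics described above (attention-guided Top-K patch selection plus sign-gradient ascent under an ℓ∞ budget) can be sketched in a few lines. This is a minimal illustrative sketch in plain Python, not the paper's implementation: the function names, the step size, and the use of per-patch scalar values in place of feature tensors are all assumptions made for clarity.

```python
# Hypothetical sketch of attention-guided Top-K sparse perturbation under an
# l_inf budget, as described in the summary. Plain Python scalars stand in
# for per-patch feature tensors; names are illustrative, not from the paper.

EPS = 4 / 255          # l_inf amplitude constraint reported in the paper
TOP_K_FRACTION = 0.10  # perturb fewer than 10% of patches

def topk_mask(attention, k_fraction=TOP_K_FRACTION):
    """Return a 0/1 mask keeping only the k most-attended patches."""
    k = max(1, int(len(attention) * k_fraction))
    threshold = sorted(attention, reverse=True)[k - 1]
    return [1.0 if a >= threshold else 0.0 for a in attention]

def pgd_step(delta, grad, mask, step_size=1 / 255, eps=EPS):
    """One sign-gradient ascent step on the perturbation, restricted to the
    masked patches and clipped back into the l_inf ball of radius eps."""
    out = []
    for d, g, m in zip(delta, grad, mask):
        sign = (g > 0) - (g < 0)          # sign of the loss gradient
        d = d + step_size * sign * m      # update only attended patches
        out.append(max(-eps, min(eps, d)))  # project onto the eps-ball
    return out
```

In the actual framework the gradient would come from backpropagating an attack loss through the frozen VLA model to the projected visual features; the sketch only shows the masking and projection structure that makes the perturbation sparse and low-amplitude.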

📝 Abstract
In recent years, Vision-Language-Action (VLA) models in embodied intelligence have developed rapidly. However, existing adversarial attack methods require costly end-to-end training and often generate noticeable perturbation patches. To address these limitations, we propose ADVLA, a framework that directly applies adversarial perturbations to features projected from the visual encoder into the textual feature space. ADVLA efficiently disrupts downstream action predictions under low-amplitude constraints, and attention guidance allows the perturbations to be both focused and sparse. We introduce three strategies that enhance sensitivity, enforce sparsity, and concentrate perturbations. Experiments demonstrate that under an $L_{\infty}=4/255$ constraint, ADVLA combined with Top-K masking modifies less than 10% of the patches while achieving an attack success rate of nearly 100%. The perturbations concentrate on critical regions, remain almost imperceptible in the overall image, and each iteration step takes only about 0.06 seconds, significantly outperforming conventional patch-based attacks. In summary, ADVLA effectively weakens downstream action predictions of VLA models under low-amplitude and locally sparse conditions, avoids the high training costs and conspicuous perturbations of traditional patch attacks, and demonstrates unique effectiveness and practical value for attacking VLA feature spaces.
Problem

Research questions and friction points this paper is trying to address.

Attack Vision-Language-Action models efficiently with low-amplitude perturbations
Generate sparse, focused adversarial patches using attention guidance
Disrupt downstream action predictions without costly end-to-end training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Direct adversarial perturbations on visual-textual feature projections
Attention-guided sparse patch modifications under low-amplitude constraints
Three strategies for sensitivity enhancement, sparsity enforcement, and perturbation concentration
Naifu Zhang
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Wei Tao
Huazhong University of Science and Technology
Quantization · LLM · Time-Series
Xi Xiao
Oak Ridge National Laboratory | University of Alabama at Birmingham
LLM / MLLM Efficiency · Image / Video Generation · Image / Video Understanding
Qianpu Sun
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Yuxin Zheng
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Wentao Mo
Tsinghua University
Trustworthy Artificial Intelligence · Multimodal Learning
Peiqiang Wang
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Nan Zhang
Ping An Technology, Shenzhen, China