DTP: A Simple yet Effective Distracting Token Pruning Framework for Vision-Language Action Models

📅 2026-01-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the vulnerability of existing vision-language-action (VLA) models to task-irrelevant regions in input images—referred to as distracting tokens—which induce attentional drift and degrade task success rates. The study is the first to demonstrate a strong negative correlation between the attention weights assigned to such distracting tokens and downstream task performance. To mitigate this issue, the authors propose Distracting Token Pruning (DTP), a plug-and-play framework that dynamically identifies and removes distracting tokens during inference without altering the model architecture or requiring additional inputs, thereby realigning the attention distribution toward task-relevant features. Extensive experiments on the SIMPLER benchmark show that DTP consistently and significantly improves task success rates across multiple state-of-the-art VLA models, confirming its effectiveness and generalizability.

📝 Abstract
Vision-Language Action (VLA) models have shown remarkable progress in robotic manipulation by leveraging the powerful perception abilities of Vision-Language Models (VLMs) to understand environments and directly output actions. However, by default, VLA models may overly attend to image tokens in task-irrelevant regions, which we describe as 'distracting tokens'. This behavior can disturb the generation of the desired action tokens at each step, lowering the task success rate. In this paper, we introduce a simple yet effective plug-and-play Distracting Token Pruning (DTP) framework, which dynamically detects and prunes these distracting image tokens. By correcting the model's visual attention patterns, we aim to improve the task success rate and explore the performance upper bound of the model without altering its original architecture or adding additional inputs. Experiments on the SIMPLER Benchmark (Li et al., 2024) show that our method consistently achieves relative improvements in task success rates across different types of novel VLA models, demonstrating generalizability to transformer-based VLAs. Further analysis reveals a negative correlation between the task success rate and the amount of attention placed on task-irrelevant regions for all models tested, highlighting a common phenomenon of VLA models that could guide future research. We also publish our code at: https://anonymous.4open.science/r/CBD3.
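The abstract describes DTP as dynamically detecting and pruning image tokens that draw attention despite lying in task-irrelevant regions. As a rough, hypothetical sketch of that idea (the function name, the fixed threshold, and the availability of a task-relevance mask are all assumptions for illustration, not details taken from the paper), the core pruning step might look like:

```python
import numpy as np

def prune_distracting_tokens(attn, relevant_mask, threshold=0.1):
    """Illustrative sketch of attention-based token pruning.

    attn: per-image-token attention weights (shape (N,)) received
          during action-token generation.
    relevant_mask: boolean array (shape (N,)); True marks tokens in
          the task-relevant region.
    A token is treated as 'distracting' if it lies outside the
    task-relevant region yet still receives attention above
    `threshold`. Returns the indices of the tokens to keep.
    """
    attn = np.asarray(attn, dtype=float)
    relevant_mask = np.asarray(relevant_mask, dtype=bool)
    # Distracting: task-irrelevant tokens that attract high attention.
    distracting = (~relevant_mask) & (attn > threshold)
    return np.flatnonzero(~distracting)

# Example: token 2 is irrelevant but highly attended, so it is pruned;
# token 1 is irrelevant but weakly attended, so it is kept.
kept = prune_distracting_tokens(
    attn=[0.30, 0.05, 0.40, 0.25],
    relevant_mask=[True, False, False, True],
)
print(kept.tolist())  # [0, 1, 3]
```

In the actual framework, how the task-relevant region and the pruning criterion are determined dynamically at inference time is the paper's contribution; this sketch only shows the general shape of filtering image tokens by attention before the next action-generation step.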
Problem

Research questions and friction points this paper is trying to address.

Vision-Language Action models, distracting tokens, visual attention, task success rate, robotic manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distracting Token Pruning, Vision-Language Action Models, Attention Correction, Plug-and-Play Framework, Task-Irrelevant Region
Authors
Chenyang Li, Australian National University
Jieyuan Liu, University of California, San Diego
Bin Li, Chinese Academy of Sciences
Bo Gao, Beijing Institute of Graphic Communication
Yilin Yuan, Beijing Institute of Graphic Communication
Yangfan He, University of Minnesota - Twin Cities
Yuchen Li, Baidu Search
Jingqun Tang, ByteDance Inc.