VLA-IAP: Training-Free Visual Token Pruning via Interaction Alignment for Vision-Language-Action Models

📅 2026-03-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high inference cost of existing vision-language-action (VLA) models on resource-constrained platforms, which stems from their large number of visual tokens. Conventional pruning methods often neglect the physical interaction characteristics inherent in robotic tasks, inadvertently removing structurally critical regions and causing early-stage behavioral instability. To overcome this, the authors propose a training-free, dynamic visual token pruning approach that introduces an “interaction-first” paradigm by explicitly modeling physical interaction as the pruning criterion. The method leverages geometric priors to preserve structural anchor points and adaptively modulates pruning intensity with a semantic-motion alignment metric, yielding a schedule that transitions from conservative to aggressive pruning. The plug-and-play technique requires no fine-tuning, achieves a 97.8% success rate on the LIBERO benchmark with a 1.25× speedup (up to 1.54× while remaining comparable to the unpruned backbone), and performs consistently across multiple models and both simulated and real-world robotic platforms.
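
To make the summarized pipeline concrete, the sketch below is a minimal, illustrative Python/PyTorch approximation of interaction-aligned token pruning, not the authors' implementation: the function name, the use of attention-derived saliency, and the fixed anchor mask from a geometric prior are all assumptions.

```python
import torch

def interaction_aligned_prune(tokens, saliency, anchor_mask, keep_ratio):
    """Illustrative token pruning with structural anchors (assumed interface).

    tokens:      (N, D) visual token embeddings
    saliency:    (N,) per-token importance, e.g. attention toward the language/action query
    anchor_mask: (N,) bool, True for tokens covering interaction-critical regions
                 (e.g. gripper / target-object neighborhood from a geometric prior)
    keep_ratio:  fraction of tokens to retain after pruning
    """
    n = tokens.shape[0]
    # Never drop below the number of anchor tokens.
    n_keep = max(int(n * keep_ratio), int(anchor_mask.sum()))

    # Anchor tokens are exempt from pruning: push their scores above all others.
    scores = saliency.clone()
    scores[anchor_mask] = scores.max() + 1.0

    # Keep the top-scoring tokens, restoring the original spatial order.
    keep_idx = torch.topk(scores, n_keep).indices.sort().values
    return tokens[keep_idx], keep_idx
```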

📝 Abstract
Vision-Language-Action (VLA) models have rapidly advanced embodied intelligence, enabling robots to execute complex, instruction-driven tasks. However, as model capacity and visual context length grow, the inference cost of VLA systems becomes a major bottleneck for real-world deployment on resource-constrained platforms. Existing visual token pruning methods rely mainly on semantic saliency or simple temporal cues, overlooking continuous physical interaction, a fundamental property of VLA tasks. Consequently, current approaches often prune visually sparse yet structurally critical regions that support manipulation, leading to unstable behavior during early task phases. To overcome this, we propose a shift toward an explicit interaction-first paradigm. Our training-free method, VLA-IAP (Interaction-Aligned Pruning), introduces a geometric prior mechanism to preserve structural anchors and a dynamic scheduling strategy that adapts pruning intensity based on semantic-motion alignment. This enables a conservative-to-aggressive transition, ensuring robustness during early uncertainty and efficiency once the interaction is locked in. Extensive experiments show that VLA-IAP achieves a 97.8% success rate with a 1.25× speedup on the LIBERO benchmark, and up to a 1.54× speedup while maintaining performance comparable to the unpruned backbone. Moreover, the method delivers consistent performance across multiple model architectures, three simulation environments, and a real robot platform, validating its strong generalization and practical applicability. Project website: https://chengjt1999.github.io/VLA-IAP.github.io/
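
The conservative-to-aggressive transition described in the abstract can be read as a keep-ratio schedule driven by a semantic-motion alignment score. The snippet below is a hypothetical sketch of such a schedule; the alignment range, ratio bounds, and linear interpolation are illustrative assumptions, not the paper's actual formula.

```python
def keep_ratio_schedule(alignment, r_conservative=0.9, r_aggressive=0.4,
                        align_lo=0.2, align_hi=0.8):
    """Map a semantic-motion alignment score in [0, 1] to a token keep ratio.

    Low alignment (early, uncertain phase)  -> prune conservatively (keep more tokens).
    High alignment (interaction locked in)  -> prune aggressively (keep fewer tokens).
    All constants and the interpolation form are illustrative only.
    """
    t = (alignment - align_lo) / (align_hi - align_lo)
    t = min(max(t, 0.0), 1.0)  # clamp the transition variable to [0, 1]
    return r_conservative + t * (r_aggressive - r_conservative)
```

Under this reading, the keep ratio returned here would feed directly into a pruning routine such as the one sketched after the AI summary above.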
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action models
visual token pruning
embodied intelligence
inference efficiency
physical interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

training-free pruning
interaction alignment
vision-language-action models
geometric prior
dynamic scheduling