🤖 AI Summary
Vision-language-action (VLA) models for long-horizon robotic manipulation incur high computational overhead, much of it from dynamically redundant visual tokens. This paper proposes the first action-aware dynamic visual token pruning framework, built on two key ideas: (1) visual redundancy is strongly correlated with the manipulation phase, and (2) a dual-path gating mechanism, guided jointly by textual instructions and recent action trajectories, can adaptively modulate the token retention rate in each phase. Evaluated on the LIBERO benchmark and real-world robotic tasks, the approach significantly reduces computational cost (e.g., a 1.35× speedup in FLOPs and inference latency for OpenVLA-OFT) while also improving task success rate by up to 25.8% with OpenVLA. This joint gain in efficiency and accuracy points toward resource-efficient VLA modeling for embodied AI.
📝 Abstract
Robotic manipulation with Vision-Language-Action models requires efficient inference over long-horizon multi-modal context, where attention over dense visual tokens dominates computational cost. Existing methods speed up inference by reducing visual redundancy within VLA models, but they overlook how that redundancy varies across manipulation stages. We observe that visual token redundancy is higher in coarse manipulation phases than in fine-grained operations, and is strongly correlated with the action dynamics. Motivated by this observation, we propose Action-aware Dynamic Pruning (ADP), a multi-modal pruning framework that integrates text-driven token selection with action-aware trajectory gating. Our method introduces a gating mechanism that conditions the pruning signal on recent action trajectories, using past motion windows to adaptively adjust token retention ratios in accordance with the dynamics, thereby balancing computational efficiency and perceptual precision across manipulation stages. Extensive experiments on the LIBERO suites and diverse real-world scenarios demonstrate that our method significantly reduces FLOPs and action inference latency (e.g., a 1.35× speedup on OpenVLA-OFT) while maintaining competitive success rates (e.g., a 25.8% improvement with OpenVLA) compared to baselines, providing a simple plug-in path to efficient robot policies that advances the efficiency and performance frontier of robotic manipulation. Our project website is: [ADP](https://vla-adp.github.io/).
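The abstract only sketches the gating idea, so here is a minimal, hypothetical illustration of action-aware retention: a window of recent actions is summarized into a scalar dynamics measure, a sigmoid gate maps high dynamics (coarse motion, high redundancy) to a lower token retention ratio, and the top-scoring visual tokens are kept. All function names, the sigmoid gate, and the parameters `r_min`, `r_max`, `tau` are assumptions for illustration, not the paper's actual ADP implementation.

```python
import numpy as np

def action_dynamics(actions: np.ndarray) -> float:
    """Mean per-step displacement over a window of recent action vectors (T, D)."""
    return float(np.linalg.norm(np.diff(actions, axis=0), axis=1).mean())

def retention_ratio(dyn: float, r_min: float = 0.3, r_max: float = 0.9,
                    tau: float = 0.1) -> float:
    """Sigmoid gate (illustrative): high dynamics -> prune more (lower ratio),
    low dynamics (fine-grained phase) -> keep more tokens."""
    gate = 1.0 / (1.0 + np.exp((dyn - tau) / (0.25 * tau)))
    return r_min + (r_max - r_min) * gate

def prune_tokens(tokens: np.ndarray, scores: np.ndarray, ratio: float) -> np.ndarray:
    """Keep the top-k visual tokens by relevance score, preserving token order."""
    k = max(1, int(round(ratio * len(tokens))))
    idx = np.sort(np.argsort(scores)[-k:])
    return tokens[idx]
```

In a real VLA pipeline the relevance scores would come from text-conditioned attention and the gate would be learned; the point of the sketch is only the control flow, i.e. the retention ratio is recomputed per step from the recent motion window rather than fixed globally.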