🤖 AI Summary
This work addresses the high inference latency of Vision-Language-Action models caused by processing dense visual tokens, a challenge exacerbated by existing static pruning methods that select tokens by attention magnitude alone and often degrade policy performance. To overcome this, the authors propose TIES, a novel framework that, for the first time, introduces inter-layer token ranking consistency as a dynamic pruning criterion, combining it with attention magnitude to adaptively identify and retain critical tokens without retraining. Evaluated on the CogACT+SIMPLER benchmark, TIES reduces token usage by 78% while improving the average task success rate by 6%. The method generalizes across diverse decoders and benchmarks, establishing a new paradigm for efficient and effective token pruning in embodied AI systems.
📝 Abstract
Vision-Language-Action (VLA) models excel in robotic manipulation but suffer from significant inference latency due to processing dense visual tokens. Existing token reduction methods predominantly rely on attention magnitude as a static selection criterion. In this work, we challenge this assumption, revealing that high-attention tokens are task-dependent and can even degrade policy performance. To address this, we introduce \textbf{TIES} (\textbf{T}au-guided \textbf{I}nter-layer \textbf{E}fficient \textbf{S}election), a dynamic framework guided by inter-layer token ranking consistency. By adaptively balancing attention magnitude with ranking consistency, TIES ensures robust token selection without requiring additional training. On the CogACT + SIMPLER benchmark, TIES improves average success rates by 6\% while reducing token usage by 78\%, and demonstrates strong generalization across diverse decoders and benchmarks.
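The abstract only sketches the idea: score visual tokens by combining attention magnitude with how consistently each token ranks across layers, then keep the top-scoring subset. A minimal illustrative sketch of that combination is below, under loud assumptions: the function names, the 0.5 weighting `alpha`, the 22% keep ratio (chosen to mirror the reported 78% reduction), and the use of rank stability as the consistency proxy are all placeholders, not the paper's actual formulation.

```python
import numpy as np

def rank_of(scores):
    # Rank 0 = highest score; convert scores to per-token ranks.
    order = np.argsort(-scores)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(scores))
    return ranks

def select_tokens(attn, keep_ratio=0.22, alpha=0.5):
    """attn: (num_layers, num_tokens) attention mass each layer assigns
    to each visual token. Returns indices of tokens to keep.

    NOTE: alpha, keep_ratio, and the rank-stability consistency proxy
    are illustrative assumptions, not TIES's exact criterion.
    """
    num_layers, num_tokens = attn.shape
    ranks = np.stack([rank_of(attn[l]) for l in range(num_layers)])
    # Magnitude term: mean attention across layers, min-max normalized.
    mag = attn.mean(axis=0)
    mag = (mag - mag.min()) / (np.ptp(mag) + 1e-8)
    # Consistency term: tokens whose rank is stable across layers score higher.
    consistency = 1.0 / (1.0 + ranks.std(axis=0))
    score = alpha * mag + (1 - alpha) * consistency
    k = max(1, int(round(keep_ratio * num_tokens)))
    return np.argsort(-score)[:k]

rng = np.random.default_rng(0)
attn = rng.random((4, 100))        # toy input: 4 layers, 100 visual tokens
kept = select_tokens(attn)
print(len(kept))                   # 22 tokens kept -> ~78% token reduction
```

A training-free scheme like this only reorders and truncates the token set, which is why no fine-tuning of the policy is needed; the open design question the paper addresses is how to weight the two terms adaptively rather than with a fixed `alpha`.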