Beyond Attention Magnitude: Leveraging Inter-layer Rank Consistency for Efficient Vision-Language-Action Models

📅 2026-03-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high inference latency of Vision-Language-Action models caused by processing dense visual tokens, a problem that existing static pruning methods based on attention magnitude often worsen by degrading policy performance. To overcome this, the authors propose TIES, a training-free framework that is the first to use inter-layer token ranking consistency as a dynamic pruning criterion, combining it with attention magnitude to adaptively identify and retain critical tokens. Evaluated on the CogACT+SIMPLER benchmark, TIES reduces token usage by 78% while improving the average task success rate by 6%. The method generalizes well across diverse decoders and benchmarks, establishing a new paradigm for efficient and effective token pruning in embodied AI systems.

📝 Abstract
Vision-Language-Action (VLA) models excel in robotic manipulation but suffer from significant inference latency due to processing dense visual tokens. Existing token reduction methods predominantly rely on attention magnitude as a static selection criterion. In this work, we challenge this assumption, revealing that high-attention tokens are task-dependent and can even degrade policy performance. To address this, we introduce TIES (Tau-guided Inter-layer Efficient Selection), a dynamic framework guided by inter-layer token ranking consistency. By adaptively balancing attention magnitude with ranking consistency, TIES ensures robust token selection without requiring additional training. On the CogACT + SIMPLER benchmark, TIES improves average success rates by 6% while reducing token usage by 78%, and demonstrates strong generalization across diverse decoders and benchmarks.
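The abstract does not spell out the selection rule, but the idea it describes — blending attention magnitude with inter-layer rank consistency, with a tau statistic guiding the balance — can be sketched as follows. All function names, the keep ratio, and the blending rule below are assumptions inferred from the abstract and the expansion "Tau-guided", not the paper's actual algorithm.

```python
# Hypothetical sketch: score visual tokens by attention magnitude blended
# with how consistently each token ranks across layers; Kendall's tau
# between consecutive-layer rankings sets the blend weight (assumption).

def _ranks(scores):
    """Rank of each token by descending score (0 = highest)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    r = [0] * len(scores)
    for rank, idx in enumerate(order):
        r[idx] = rank
    return r

def _kendall_tau(ra, rb):
    """Kendall rank correlation between two rankings (distinct ranks)."""
    n = len(ra)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            s += 1 if (ra[i] - ra[j]) * (rb[i] - rb[j]) > 0 else -1
    return s / (n * (n - 1) / 2)

def select_tokens(layer_attn, keep_ratio=0.22):
    """Return sorted indices of tokens to keep.
    layer_attn: per-layer attention scores, one list per layer."""
    n = len(layer_attn[0])
    rank_per_layer = [_ranks(a) for a in layer_attn]

    # Global ranking stability across consecutive layers, tau in [-1, 1].
    taus = [_kendall_tau(rank_per_layer[k], rank_per_layer[k + 1])
            for k in range(len(rank_per_layer) - 1)]
    tau = sum(taus) / len(taus)
    alpha = (tau + 1) / 2  # stable rankings -> trust magnitude more

    # Per-token attention magnitude, averaged over layers.
    magnitude = [sum(a[i] for a in layer_attn) / len(layer_attn)
                 for i in range(n)]
    # Per-token consistency: low spread of the token's rank across layers.
    consistency = []
    for i in range(n):
        rs = [r[i] for r in rank_per_layer]
        mean_r = sum(rs) / len(rs)
        spread = sum((r - mean_r) ** 2 for r in rs) ** 0.5
        consistency.append(1.0 / (1.0 + spread))

    score = [alpha * magnitude[i] + (1 - alpha) * consistency[i]
             for i in range(n)]
    k = max(1, round(keep_ratio * n))
    return sorted(sorted(range(n), key=lambda i: -score[i])[:k])
```

The default `keep_ratio=0.22` mirrors the reported 78% token reduction; the paper's actual scoring and tau usage may differ.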
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action models
inference latency
token reduction
attention magnitude
robotic manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Inter-layer Rank Consistency
Dynamic Token Selection
Vision-Language-Action Models
Efficient Inference
TIES
Peiju Liu, Fudan University
Jinming Liu, Shanghai Jiao Tong University (VLM, LLM, Computer Vision, Image/Video Compression)
Xipeng Qiu, Fudan University
Xuanjing Huang, Fudan University