VLA-InfoEntropy: A Training-Free Vision-Attention Information Entropy Approach for Vision-Language-Action Models Inference Acceleration and Success

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-Language-Action models face significant computational overhead and low inference efficiency due to their joint processing of high-dimensional visual inputs, complex language instructions, and continuous action spaces, hindering real-time deployment. To address this, this work proposes a training-free focusing strategy that dynamically guides the model during inference from global features to local critical regions. The approach leverages image entropy to quantify the textural informativeness of visual tokens and attention entropy to assess their textual semantic relevance, integrating spatial, semantic, and temporal cues to identify and prioritize task-relevant regions. This method substantially reduces redundant computation and the number of active parameters during inference, achieving markedly higher inference speed while maintaining or even improving task performance compared to existing approaches.
📝 Abstract
Vision-Language-Action (VLA) models integrate visual perception, language understanding, and action decision-making for cross-modal semantic alignment, exhibiting broad application potential. However, the joint processing of high-dimensional visual features, complex linguistic inputs, and continuous action sequences incurs significant computational overhead and degrades inference efficiency, thereby hindering real-time deployment and reliability. To address this issue, we use image entropy to quantify the grayscale distribution characteristics of each visual token and introduce attention entropy to capture the distribution of attention scores over task-related text. Visual entropy identifies texture-rich or structurally informative regions, while attention entropy pinpoints semantically relevant tokens. Combined with timestep information, these metrics enable a dynamic transition strategy that shifts the model's focus from global visual features to attention-guided local informative regions. The resulting VLA-InfoEntropy method thus integrates spatial, semantic, and temporal cues to reduce redundancy while preserving critical content. Extensive experiments show that our method reduces inference parameters, accelerates inference speed, and outperforms existing approaches.
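The abstract's two entropy measures and the timestep-driven transition can be sketched as follows. The paper's exact scoring rule is not given here; the function names, the normalization, and the linear `t/T` blend are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def image_entropy(patch, bins=32):
    """Shannon entropy of a grayscale patch's intensity histogram.
    Texture-rich patches have flatter histograms and higher entropy."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def attention_entropy(attn_row, eps=1e-12):
    """Shannon entropy of one visual token's attention distribution over
    the instruction's text tokens (low entropy = focused relevance)."""
    p = np.asarray(attn_row, dtype=np.float64)
    p = p / max(p.sum(), eps)
    return float(-(p * np.log2(p + eps)).sum())

def select_tokens(patches, attn, t, T, keep_ratio=0.5, alpha=0.5):
    """Score each visual token by combining image entropy and (negated)
    attention entropy, blending toward attention guidance as timestep
    t/T grows; keep the top-scoring fraction. Illustrative weighting."""
    img_h = np.array([image_entropy(p) for p in patches])
    att_h = np.array([attention_entropy(a) for a in attn])

    def norm(x):  # rescale both metrics to [0, 1] so the blend is comparable
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    w = alpha * (t / T)  # shift focus from global texture to semantics over time
    score = (1 - w) * norm(img_h) + w * (1 - norm(att_h))
    k = max(1, int(len(patches) * keep_ratio))
    return np.argsort(score)[::-1][:k]  # indices of tokens to keep
```

Only the selected token indices would be forwarded through later layers, which is where the parameter and speed savings the abstract reports come from.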
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action models
inference efficiency
computational overhead
real-time deployment
cross-modal semantic alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

information entropy
vision-language-action models
inference acceleration
attention mechanism
training-free method
Chuhang Liu
Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China; Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Yayun He
Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China
Zuheng Kang
Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China
Xiaoyang Qu
Ping An Technology (Shenzhen) Co., Ltd., Shenzhen, China
Jianzong Wang
Postdoctoral Researcher of Department of Electrical and Computer Engineering, University of Florida
Big Data · Storage System · Cloud Computing