🤖 AI Summary
Current evaluations of vision-language-action (VLA) models predominantly rely on inference-centric metrics such as parameter count and FLOPs, which poorly reflect actual execution performance in real-world robotic tasks. This work introduces the first systematic suite of embodied efficiency metrics—including task completion time, trajectory smoothness, cumulative joint rotation, and motion energy consumption—and employs this framework to re-evaluate prominent VLA models. Through comparative experiments involving model compression, token sparsification, action sequence compression, contextual prompting, and supervised fine-tuning, we demonstrate that conventional efficiency optimizations often degrade execution quality, while existing adaptation methods yield only marginal and metric-specific improvements. Our findings advocate a paradigm shift in VLA evaluation from pure inference efficiency toward embodied execution efficiency.
📝 Abstract
Vision-Language-Action (VLA) models have recently enabled embodied agents to perform increasingly complex tasks by jointly reasoning over visual, linguistic, and motor modalities. However, we find that the prevailing notion of "efficiency" in current VLA research, characterized by parameter count, FLOPs, or token decoding throughput, does not reflect actual performance on robotic platforms. In real-world execution, efficiency is determined by system-level embodied behaviors such as task completion time, trajectory smoothness, cumulative joint rotation, and motion energy. Through controlled studies across model compression, token sparsification, and action sequence compression, we make several observations that challenge common assumptions. (1) Methods that reduce computation under conventional metrics often increase end-to-end execution cost or degrade motion quality, despite maintaining task success rates. (2) System-level embodied efficiency metrics reveal performance differences in the learned action policies that remain hidden under conventional evaluations. (3) Common adaptation methods such as in-context prompting or supervised fine-tuning show only mild and metric-specific improvements in embodied efficiency. While these methods can improve targeted embodied-efficiency metrics such as jerk or action rate, the resulting gains may come with trade-offs in other metrics, such as longer completion time. Taken together, our results suggest that conventional inference efficiency metrics can overlook important aspects of embodied execution. Incorporating embodied efficiency provides a more complete view of policy behavior and practical performance, enabling fairer and more comprehensive comparisons of VLA models.
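To make the proposed metrics concrete, here is a minimal sketch of how the four embodied-efficiency quantities named above (completion time, smoothness, cumulative joint rotation, motion energy) could be computed from a sampled joint-angle trajectory. The formulas are plausible instantiations, not the paper's exact definitions: smoothness is proxied by mean squared jerk, and motion energy by integrated squared joint velocity under unit joint inertia.

```python
import numpy as np

def embodied_efficiency_metrics(q, dt):
    """Illustrative embodied-efficiency metrics for a joint trajectory.

    q  : array of shape (T, J), joint angles in radians sampled every dt seconds.
    dt : sampling period in seconds.

    Note: these are hypothetical metric definitions for illustration; the
    paper's exact formulas may differ.
    """
    q = np.asarray(q, dtype=float)
    vel = np.diff(q, axis=0) / dt    # joint velocities, shape (T-1, J)
    acc = np.diff(vel, axis=0) / dt  # joint accelerations, shape (T-2, J)
    jerk = np.diff(acc, axis=0) / dt # joint jerks, shape (T-3, J)

    return {
        # wall-clock duration of the executed trajectory
        "completion_time_s": (len(q) - 1) * dt,
        # total joint travel: sum of absolute per-step angle changes (rad)
        "cumulative_rotation_rad": float(np.abs(np.diff(q, axis=0)).sum()),
        # smoothness proxy: mean squared jerk (lower = smoother)
        "mean_squared_jerk": float((jerk ** 2).mean()),
        # kinetic-energy proxy assuming unit joint inertia: 0.5 * sum(v^2) * dt
        "motion_energy_proxy": float(0.5 * (vel ** 2).sum() * dt),
    }
```

Two policies that both "succeed" at a task can then be ranked by these system-level quantities, which is exactly the comparison that parameter counts and FLOPs cannot make.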