🤖 AI Summary
Existing vision-language-action (VLA) models rely on fixed action chunking lengths, which struggle to balance responsiveness and action consistency. This work proposes an inference-time adaptive chunking strategy that, for the first time, leverages action prediction entropy as a measure of uncertainty to dynamically adjust chunk sizes. By doing so, the method enhances responsiveness while preserving execution coherence, without requiring any modifications to the training pipeline—adaptation occurs solely during inference. Evaluated across diverse simulated and real-world robotic manipulation tasks, the approach significantly outperforms baseline methods, achieving higher task success rates and improved action smoothness.
📝 Abstract
In Vision-Language-Action (VLA) models, action chunking (i.e., executing a sequence of predicted actions without intermediate replanning) is a key technique for improving robotic manipulation. However, a large chunk size reduces the model's responsiveness to new information, while a small one increases the likelihood of mode-jumping: jerky behavior resulting from discontinuities between chunks. Selecting the right chunk size is therefore essential for balancing the model's reactivity and consistency. Unfortunately, current VLA models predominantly rely on an empirically fixed chunk length at inference time, which limits their performance and scalability across diverse manipulation tasks. To address this issue, we propose a novel Adaptive Action Chunking (AAC) strategy, which exploits action entropy as a cue to adaptively determine the chunk size from the current predictions. Extensive experiments on a wide range of simulated and real-world robotic manipulation tasks demonstrate that our approach substantially outperforms state-of-the-art alternatives. Videos and source code are publicly available at https://lance-lot.github.io/adaptive-chunking.github.io/.
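The core idea of entropy-guided chunking can be illustrated with a minimal sketch. The function below, a hypothetical illustration rather than the paper's exact criterion, computes the Shannon entropy of each per-step action distribution in a predicted chunk and executes actions only while the model remains confident, replanning once entropy crosses a threshold. The threshold value, the discrete-distribution representation, and the stop-at-first-uncertain-step rule are all assumptions for illustration.

```python
import numpy as np

def action_entropy(probs):
    """Shannon entropy of one discrete action distribution (assumed form)."""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def adaptive_chunk_size(chunk_probs, entropy_threshold=1.0,
                        min_size=1, max_size=None):
    """Return how many leading steps of a predicted chunk to execute.

    chunk_probs: (H, A) array, one action distribution per step of the
    H-step predicted chunk. Steps are executed while their entropy stays
    below `entropy_threshold`; the first uncertain step triggers
    replanning. All parameter values here are illustrative assumptions.
    """
    H = len(chunk_probs)
    max_size = H if max_size is None else min(max_size, H)
    size = min_size  # always execute at least min_size steps
    for t in range(min_size, max_size):
        if action_entropy(chunk_probs[t]) > entropy_threshold:
            break  # model uncertain here: stop and replan
        size = t + 1
    return size
```

In this sketch, a confident model keeps executing its full chunk (high consistency), while rising uncertainty shortens the chunk and forces earlier replanning (high reactivity), which is the trade-off the abstract describes.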