🤖 AI Summary
In autoregressive action-oriented world models, continuous action generation degrades significantly as sequence length grows, because errors from early action predictions propagate and amplify. To address this, we propose the first unified framework integrating vision-language-action (VLA) understanding with an implicit world model. Our method introduces two core innovations: (1) a bidirectional co-modeling mechanism between actions and images that jointly learns environmental dynamics and physical constraints; and (2) a dynamic attention masking strategy that explicitly suppresses error propagation during autoregressive decoding. Evaluated across multiple benchmarks, our approach substantially outperforms both standalone action prediction models and conventional world models, achieving up to a 12.7% improvement in action token generation tasks. These results validate the effectiveness and generalizability of our bidirectional co-modeling and error-suppression mechanisms for long-horizon action forecasting.
📝 Abstract
We present WorldVLA, an autoregressive action world model that unifies action and image understanding and generation. WorldVLA integrates a Vision-Language-Action (VLA) model and a world model in a single framework. The world model predicts future images by leveraging both action and image understanding, with the purpose of learning the underlying physics of the environment to improve action generation. Meanwhile, the action model generates subsequent actions based on image observations, aiding visual understanding and, in turn, the visual generation of the world model. We demonstrate that WorldVLA outperforms standalone action and world models, highlighting the mutual enhancement between the world model and the action model. In addition, we find that the performance of the action model deteriorates when generating sequences of actions autoregressively. This phenomenon can be attributed to the model's limited generalization capability for action prediction, which causes errors from earlier actions to propagate to subsequent ones. To address this issue, we propose an attention mask strategy that selectively masks prior actions during the generation of the current action, yielding significant performance improvements on the action chunk generation task.
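The attention mask strategy can be sketched roughly as follows: start from a standard causal mask, then block each action token from attending to tokens of earlier actions in the chunk, while keeping attention to observation (image/text) tokens intact. This is a minimal illustration under assumed conventions, not the paper's implementation; the function name and the `token_types` encoding (action index per token, `-1` for observation tokens) are assumptions.

```python
import numpy as np

def build_action_attention_mask(token_types):
    """Sketch of an attention mask that suppresses error propagation
    between actions during autoregressive action chunk generation.

    token_types[i] is the action index of token i within the chunk,
    or -1 for an observation (image/text) token.

    Returns a boolean matrix M where M[i, j] = True means token i may
    attend to token j.
    """
    n = len(token_types)
    # Standard causal (lower-triangular) mask as the starting point.
    mask = np.tril(np.ones((n, n), dtype=bool))
    for i in range(n):
        if token_types[i] < 0:
            continue  # observation tokens keep plain causal attention
        for j in range(i):
            # Mask out tokens belonging to any earlier action, so
            # errors in prior actions do not condition the current one.
            if 0 <= token_types[j] < token_types[i]:
                mask[i, j] = False
    return mask
```

For example, with two observation tokens followed by two two-token actions (`[-1, -1, 0, 0, 1, 1]`), tokens of action 1 can still attend to the observations and to themselves causally, but not to the tokens of action 0.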