🤖 AI Summary
Existing vision-language-action (VLA) models struggle to capture the spatiotemporal dynamics of physical interactions, leading to insufficient understanding of environmental changes. To address this limitation, this work proposes incorporating privileged 4D information during training via a lightweight point trajectory prediction head, using future 3D point trajectories as a supervisory signal. This enables the model to implicitly learn the evolution of scene geometry within its shared representation, without incurring additional computational overhead at inference time. Built upon a standard VLA architecture, the approach jointly optimizes action prediction and trajectory forecasting in an end-to-end manner. Experiments demonstrate significant performance gains (10% on LIBERO-Long and 40% on RoboCasa), highlighting enhanced physical awareness and execution capability in complex manipulation tasks.
📝 Abstract
Humans learn not only how their bodies move, but also how the surrounding world responds to their actions. In contrast, while recent Vision-Language-Action (VLA) models exhibit impressive semantic understanding, they often fail to capture the spatiotemporal dynamics governing physical interaction. In this paper, we introduce Pri4R, a simple yet effective approach that endows VLA models with an implicit understanding of world dynamics by leveraging privileged 4D information during training. Specifically, Pri4R augments VLAs with a lightweight point track head that predicts 3D point tracks. By injecting VLA features into this head to jointly predict future 3D trajectories, the model learns to incorporate evolving scene geometry within its shared representation space, enabling more physically aware context for precise control. Due to its architectural simplicity, Pri4R is compatible with dominant VLA design patterns and requires minimal changes. During inference, we run the original VLA architecture unchanged; Pri4R adds no extra inputs, outputs, or computational overhead. Across simulation and real-world evaluations, Pri4R significantly improves performance on challenging manipulation tasks, including a +10% gain on LIBERO-Long and a +40% gain on RoboCasa. We further show that 3D point track prediction is an effective supervision target for learning action-world dynamics, and validate our design choices through extensive ablations.
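The training setup described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's implementation: the module names (`Pri4RSketch`, `joint_loss`), the feature and head dimensions, the MSE losses, and the loss weight `lam` are all hypothetical assumptions. It only shows the structural idea that a shared VLA feature feeds both an action head (kept at inference) and a lightweight point-track head (used solely for auxiliary supervision during training).

```python
import torch
import torch.nn as nn

# Hypothetical sizes; the actual backbone, head widths, point count,
# horizon, and loss weighting are not specified in this summary.
FEAT_DIM, ACT_DIM = 256, 7
N_POINTS, HORIZON = 32, 8  # tracked 3D points, future timesteps

class Pri4RSketch(nn.Module):
    """Shared VLA feature decoded by two heads: an action head
    (retained at inference) and a lightweight point-track head
    (training-only auxiliary supervision)."""
    def __init__(self):
        super().__init__()
        self.action_head = nn.Linear(FEAT_DIM, ACT_DIM)
        # Lightweight head predicting future 3D trajectories:
        # (x, y, z) for N_POINTS points over HORIZON steps.
        self.track_head = nn.Sequential(
            nn.Linear(FEAT_DIM, 512), nn.GELU(),
            nn.Linear(512, N_POINTS * HORIZON * 3),
        )

    def forward(self, feat, predict_tracks=True):
        action = self.action_head(feat)
        if not predict_tracks:  # inference path: original VLA outputs only
            return action, None
        tracks = self.track_head(feat).view(-1, N_POINTS, HORIZON, 3)
        return action, tracks

def joint_loss(model, feat, gt_action, gt_tracks, lam=0.1):
    """End-to-end objective: action loss plus weighted track loss."""
    action, tracks = model(feat)
    l_act = nn.functional.mse_loss(action, gt_action)
    l_trk = nn.functional.mse_loss(tracks, gt_tracks)
    return l_act + lam * l_trk
```

At deployment, calling the model with `predict_tracks=False` (or simply deleting `track_head`) recovers the original VLA's inputs, outputs, and compute, which is how the no-inference-overhead property follows from the design.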