🤖 AI Summary
In embodied intelligence, learning compact yet task-relevant state representations for efficient world modeling and decision-making remains a key challenge; existing approaches struggle to balance expressivity and conciseness. This paper proposes an unsupervised framework that uses a lightweight encoder to extract a two-token compact state representation from a single RGB frame, paired with a pretrained Diffusion Transformer (DiT) decoder for generalizable image reconstruction. Crucially, it defines latent actions as interpolated differences between state tokens, enabling implicit, structured dynamics modeling without video supervision or explicit action labels, and it supports joint co-training of states and actions. Evaluated on the LIBERO benchmark, the method improves success rate by 14.3%; on real-robot tasks, it raises success rate by 30%; and in policy co-training it outperforms the prior state of the art by 10.4%. It incurs low inference overhead and scales across data sources including real-world robot data, simulation, and human egocentric video.
📝 Abstract
A fundamental challenge in embodied intelligence is developing expressive and compact state representations for efficient world modeling and decision-making. However, existing methods often fail to achieve this balance, yielding representations that are either overly redundant or lacking in task-critical information. We propose an unsupervised approach that learns a highly compressed two-token state representation using a lightweight encoder and a pre-trained Diffusion Transformer (DiT) decoder, capitalizing on its strong generative prior. Our representation is efficient, interpretable, and integrates seamlessly into existing VLA-based models, improving performance by 14.3% on LIBERO and 30% in real-world task success with minimal inference overhead. More importantly, we find that the difference between these tokens, obtained via latent interpolation, naturally serves as a highly effective latent action, which can be further decoded into executable robot actions. This emergent capability reveals that our representation captures structured dynamics without explicit supervision. We name our method StaMo for its ability to learn generalizable robotic Motion from compact State representations encoded from static images, challenging the prevalent reliance of latent-action learning on complex architectures and video data. The resulting latent actions also enhance policy co-training, outperforming prior methods by 10.4% with improved interpretability. Moreover, our approach scales effectively across diverse data sources, including real-world robot data, simulation, and human egocentric video.
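The core idea above can be sketched in a few lines: a latent action is the (optionally interpolated) difference between the compact state tokens of two frames, and linear interpolation in the latent space yields intermediate states. This is a minimal illustrative sketch, not the paper's implementation; all function names, the token shape `(2, d)`, and the interpolation factor `alpha` are assumptions.

```python
import numpy as np

def latent_action(s_t, s_next, alpha=1.0):
    """Latent action as the interpolated difference between two compact
    state-token representations, each of assumed shape (2, d).
    alpha=1.0 recovers the full transition s_next - s_t."""
    return alpha * (s_next - s_t)

def interpolate_state(s_t, s_next, alpha):
    """Linear interpolation between two latent states (assumed form)."""
    return s_t + alpha * (s_next - s_t)

# Toy example: two-token states with token dimension d = 8.
s_t = np.zeros((2, 8))
s_next = np.ones((2, 8))

a_half = latent_action(s_t, s_next, alpha=0.5)   # half-step latent action
s_mid = interpolate_state(s_t, s_next, 0.5)      # midpoint latent state
```

In this sketch, applying the half-step action to `s_t` lands exactly on the interpolated midpoint state, mirroring the claim that token differences behave as structured dynamics.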