Learning Latent Action World Models In The Wild

📅 2026-01-08
🏛️ arXiv.org
📈 Citations: 4 · Influential: 1
🤖 AI Summary
This work tackles the absence of explicit action labels in in-the-wild videos by proposing an unsupervised learning framework for building generalizable world models for agent reasoning and planning. The approach introduces spatially localized, continuously constrained latent action representations, enabling self-supervised learning of action-state dynamics from diverse real-world videos without requiring a shared embodied structure. By combining continuous latent action modeling, a controller mapping mechanism, and tailored architectural design, the framework is the first to extend latent-action world models to complex in-the-wild scenarios. Experiments show that the model matches baselines trained with ground-truth action labels on cross-video action transfer and planning tasks, supporting latent actions as a universal interface and demonstrating the scalability of the approach.
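
To make the core idea concrete, here is a minimal sketch of a latent-action world model of the kind the summary describes: an inverse dynamics model infers a continuous but constrained latent action from consecutive frames, and a forward model predicts the next state from it. All module names, dimensions, and the tanh constraint below are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of a latent-action world model trained self-supervised
# on video alone (no action labels). Names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentActionWorldModel(nn.Module):
    def __init__(self, state_dim=256, action_dim=16):
        super().__init__()
        # Frame encoder: stand-in for whatever visual backbone is used.
        self.encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(state_dim))
        # Inverse dynamics: infer the latent action from (s_t, s_{t+1}).
        self.inverse = nn.Sequential(
            nn.Linear(2 * state_dim, state_dim), nn.ReLU(),
            nn.Linear(state_dim, action_dim),
        )
        # Forward dynamics: predict s_{t+1} from (s_t, a_t).
        self.forward_model = nn.Sequential(
            nn.Linear(state_dim + action_dim, state_dim), nn.ReLU(),
            nn.Linear(state_dim, state_dim),
        )

    def infer_action(self, s_t, s_next):
        # Continuous but *constrained* latent action: tanh bounds each
        # dimension to [-1, 1] -- one plausible way to restrict capacity
        # without resorting to vector quantization.
        return torch.tanh(self.inverse(torch.cat([s_t, s_next], dim=-1)))

    def forward(self, frames_t, frames_next):
        s_t, s_next = self.encoder(frames_t), self.encoder(frames_next)
        a_t = self.infer_action(s_t, s_next)
        s_pred = self.forward_model(torch.cat([s_t, a_t], dim=-1))
        # Self-supervised objective: reconstruct the next state representation.
        return F.mse_loss(s_pred, s_next.detach()), a_t
```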

📝 Abstract
Agents capable of reasoning and planning in the real world require the ability to predict the consequences of their actions. While world models possess this capability, they most often require action labels, which can be complex to obtain at scale. This motivates the learning of latent action models, which can learn an action space from videos alone. Our work addresses the problem of learning latent action world models on in-the-wild videos, expanding the scope of existing works that focus on simple robotics simulations, video games, or manipulation data. While this allows us to capture richer actions, it also introduces challenges stemming from video diversity, such as environmental noise or the lack of a common embodiment across videos. To address some of these challenges, we discuss properties that actions should satisfy, as well as relevant architectural choices and evaluations. We find that continuous, but constrained, latent actions are able to capture the complexity of actions in in-the-wild videos, something that the commonly used vector quantization does not. We find, for example, that changes in the environment caused by agents, such as humans entering the room, can be transferred across videos. This highlights the capability of learning actions that are specific to in-the-wild videos. In the absence of a common embodiment across videos, we mainly learn latent actions that are localized in space, relative to the camera. Nonetheless, we are able to train a controller that maps known actions to latent ones, allowing us to use latent actions as a universal interface and to solve planning tasks with our world model at a performance similar to action-conditioned baselines. Our analyses and experiments provide a step towards scaling latent action models to the real world.
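
The abstract's controller, which maps known actions into the learned latent space, could plausibly be trained as a simple regression against the frozen inverse dynamics model on the subset of videos where action labels exist. The sketch below builds on the hypothetical `LatentActionWorldModel` above; the network shape and training loop are assumptions for illustration, not the paper's exact implementation.

```python
# Hypothetical controller: maps a known action (e.g., a robot command) to a
# latent action, so the frozen world model can be driven during planning.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActionController(nn.Module):
    def __init__(self, known_dim, latent_dim=16, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(known_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, known_action):
        # Match the constraint used for latent actions (tanh-bounded here).
        return torch.tanh(self.net(known_action))

def train_controller(controller, world_model, batches, lr=1e-3):
    # Supervised regression on action-labeled transitions: fit the controller
    # so its output matches the latent action that the frozen inverse
    # dynamics model infers for the same (s_t, s_{t+1}) pair.
    opt = torch.optim.Adam(controller.parameters(), lr=lr)
    for frames_t, frames_next, known_action in batches:
        with torch.no_grad():
            s_t = world_model.encoder(frames_t)
            s_next = world_model.encoder(frames_next)
            target = world_model.infer_action(s_t, s_next)
        loss = F.mse_loss(controller(known_action), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return controller
```

With such a controller in hand, planning reduces to rolling the world model forward on latent actions produced from candidate known actions, which is one way the paper's "universal interface" framing could be realized in practice.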
Problem

Research questions and friction points this paper is trying to address.

latent actions
world models
in-the-wild videos
action prediction
embodiment
Innovation

Methods, ideas, or system contributions that make the work stand out.

latent actions
world models
in-the-wild videos
action representation
embodiment-agnostic learning