Olaf-World: Orienting Latent Actions for Video World Modeling

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video world models struggle to learn transferable latent action representations from unlabeled video: without action labels, action semantics become misaligned across scenes. To address this, the paper proposes SeqΔ-REPA, a sequence-level control-effect alignment objective that uses temporal feature differences extracted by a frozen self-supervised video encoder as semantic anchors for an action-conditioned world model. This aligns latent action semantics across scenes without requiring action labels, establishing a shared action coordinate system and thereby improving zero-shot action transferability and data efficiency. Experiments show the method outperforms state-of-the-art approaches across multiple benchmarks.

📝 Abstract
Scaling action-controllable world models is limited by the scarcity of action labels. While latent action learning promises to extract control interfaces from unlabeled video, learned latents often fail to transfer across contexts: they entangle scene-specific cues and lack a shared coordinate system. This occurs because standard objectives operate only within each clip, providing no mechanism to align action semantics across contexts. Our key insight is that although actions are unobserved, their semantic effects are observable and can serve as a shared reference. We introduce Seq$\Delta$-REPA, a sequence-level control-effect alignment objective that anchors integrated latent action to temporal feature differences from a frozen, self-supervised video encoder. Building on this, we present Olaf-World, a pipeline that pretrains action-conditioned video world models from large-scale passive video. Extensive experiments demonstrate that our method learns a more structured latent action space, leading to stronger zero-shot action transfer and more data-efficient adaptation to new control interfaces than state-of-the-art baselines.
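The core mechanism in the abstract can be sketched as a simple alignment loss: integrate the per-step latent actions over a clip and pull the result toward the temporal feature difference produced by a frozen self-supervised encoder. The sketch below is a minimal illustration under assumptions not stated in the abstract (summation as the integration operator, cosine similarity as the alignment metric, and a latent action dimension matching the encoder feature dimension; the actual method likely uses a learned projection and a different integrator). All names here are hypothetical.

```python
import numpy as np

def seq_delta_repa_loss(latent_actions, feat_start, feat_end, eps=1e-8):
    """Hedged sketch of a sequence-level control-effect alignment loss.

    latent_actions : (T, d) array of per-step latent actions from a clip.
    feat_start, feat_end : (d,) features of the first and last frames from
        a frozen self-supervised video encoder (the semantic anchor).
    """
    # Integrate per-step latent actions over the sequence (sum is an
    # assumption; the paper may use a learned aggregator).
    integrated = latent_actions.sum(axis=0)
    # The observable semantic effect of the action sequence is the
    # temporal feature difference under the frozen encoder.
    delta = feat_end - feat_start
    # Align the integrated action with the observed effect via cosine
    # similarity; loss is 0 when they point in the same direction.
    cos = integrated @ delta / (
        np.linalg.norm(integrated) * np.linalg.norm(delta) + eps
    )
    return 1.0 - cos
```

Because the anchor comes from a frozen encoder shared across all clips, the same effect direction means the same latent action in every scene, which is what gives the latent space a shared coordinate system.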
Problem

Research questions and friction points this paper is trying to address.

latent actions
action-controllable world models
cross-context transfer
action semantics alignment
unlabeled video
Innovation

Methods, ideas, or system contributions that make the work stand out.

latent action learning
world modeling
action-controllable video generation
self-supervised alignment
zero-shot transfer
Yuxin Jiang
Show Lab, National University of Singapore
Yuchao Gu
National University of Singapore
Generative Models · Visual Generation · Multi-Modal Generation
Ivor W. Tsang
CFAR & IHPC, Agency for Science, Technology and Research (A*STAR), Singapore
Mike Zheng Shou
Show Lab, National University of Singapore