MoWM: Mixture-of-World-Models for Embodied Planning via Latent-to-Pixel Feature Modulation

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Embodied action planning faces a fundamental tension: pixel-space world models are susceptible to irrelevant visual noise, while latent-space models discard the low-level detail that fine-grained manipulation requires. To address this, we propose MoWM, a hybrid world model that, for the first time, couples motion-aware latent priors with pixel-space generative modeling. MoWM dynamically modulates pixel-level features using latent priors conditioned on high-level semantic goals, enabling semantics-guided learning of fine-grained action representations. Trained end-to-end on the CALVIN benchmark, MoWM significantly improves action decoding accuracy and cross-task generalization. It achieves state-of-the-art task success rates, demonstrating empirically that synergistic modeling across latent and pixel representation spaces is essential for robust embodied planning.

📝 Abstract
Embodied action planning is a core challenge in robotics, requiring models to generate precise actions from visual observations and language instructions. While video generation world models are promising, their reliance on pixel-level reconstruction often introduces visual redundancies that hinder action decoding and generalization. Latent world models offer a compact, motion-aware representation, but overlook the fine-grained details critical for precise manipulation. To overcome these limitations, we propose MoWM, a mixture-of-world-model framework that fuses representations from hybrid world models for embodied action planning. Our approach uses motion-aware representations from a latent model as a high-level prior, which guides the extraction of fine-grained visual features from the pixel space model. This design allows MoWM to highlight the informative visual details needed for action decoding. Extensive evaluations on the CALVIN benchmark demonstrate that our method achieves state-of-the-art task success rates and superior generalization. We also provide a comprehensive analysis of the strengths of each feature space, offering valuable insights for future research in embodied planning. The code is available at: https://github.com/tsinghua-fib-lab/MoWM.
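The abstract describes the latent motion prior "guiding the extraction of fine-grained visual features from the pixel space model," i.e. a latent-to-pixel feature modulation. The exact operator is not specified here; the sketch below assumes a FiLM-style conditioning, where the latent prior predicts per-channel scale and shift applied to pixel-space features. All shapes, weights, and the `film_modulate` helper are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def film_modulate(pixel_feats, latent_prior, w_gamma, w_beta):
    """FiLM-style modulation (assumed mechanism): the motion-aware latent
    prior produces per-channel scale (gamma) and shift (beta) that are
    broadcast over the spatial dimensions of pixel-space features."""
    gamma = latent_prior @ w_gamma            # (B, C)
    beta = latent_prior @ w_beta              # (B, C)
    # Broadcast (B, C) over spatial dims to modulate (B, C, H, W) features.
    return gamma[:, :, None, None] * pixel_feats + beta[:, :, None, None]

# Illustrative shapes: batch B, channels C, spatial H x W, latent dim D.
B, C, H, W, D = 2, 8, 4, 4, 16
pixel_feats = rng.standard_normal((B, C, H, W))   # pixel world-model features
latent_prior = rng.standard_normal((B, D))        # motion-aware latent prior
w_gamma = rng.standard_normal((D, C))             # hypothetical projection weights
w_beta = rng.standard_normal((D, C))

modulated = film_modulate(pixel_feats, latent_prior, w_gamma, w_beta)
print(modulated.shape)  # (2, 8, 4, 4): same shape as the pixel features
```

The design intuition this captures: the compact latent representation decides which channels of the redundant pixel features to amplify or suppress, so the downstream action decoder sees detail that is relevant to the instructed motion.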
Problem

Research questions and friction points this paper is trying to address.

Overcoming pixel-level redundancies in video generation world models
Addressing the neglect of fine-grained details in latent world models
Fusing hybrid world model representations for embodied planning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fuses hybrid world models for embodied planning
Uses latent model as high-level motion prior
Guides pixel space feature extraction for details