Learning 3D Persistent Embodied World Models

📅 2025-05-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current video world models lack long-term memory in partially observable environments, leading to inconsistent long-horizon planning. To address this, the authors propose the first world model with explicit, persistent 3D memory: an RGB-D video diffusion model predicts the agent's future observations, which are continuously fused into a dynamic 3D voxel or point-cloud map; conditioning generation on this spatial map keeps the simulated scene consistent across timesteps. The model supports long-horizon simulation over both observed and unobserved regions and integrates with embodied simulation and policy-learning frameworks. Experiments show substantial improvements in planning consistency, policy robustness, and zero-shot generalization on navigation and manipulation tasks, overcoming the myopia of conventional video-based world models.

📝 Abstract
The ability to simulate the effects of future actions on the world is crucial for intelligent embodied agents, enabling them to anticipate the consequences of their actions and plan accordingly. While a large body of existing work has explored how to construct such world models using video models, these models are often myopic, with no memory of scene content outside the currently observed images, preventing agents from making consistent long-horizon plans in complex environments where much of the scene is only partially observed. We introduce a new persistent embodied world model with an explicit memory of previously generated content, enabling much more consistent long-horizon simulation. At generation time, our video diffusion model predicts RGB-D video of the agent's future observations, which is then aggregated into a persistent 3D map of the environment. By conditioning the video model on this 3D spatial map, we show how video world models can faithfully simulate both seen and unseen parts of the world. Finally, we demonstrate the efficacy of such a world model in downstream embodied applications, enabling effective planning and policy learning.
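The aggregation step above relies on back-projecting each predicted RGB-D frame into world coordinates before fusing it into the map. The paper does not give this routine explicitly; below is a minimal sketch of the standard pinhole back-projection, where the function name, intrinsics `K`, and camera-to-world `pose` are illustrative assumptions:

```python
import numpy as np

def backproject_rgbd(depth, K, pose):
    """Back-project a depth map into world-frame 3D points.

    depth: (H, W) array of depths in metres.
    K: (3, 3) pinhole camera intrinsics.
    pose: (4, 4) camera-to-world transform.
    Returns an (H*W, 3) array of world-frame points.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Pixel rays scaled by depth give camera-frame coordinates.
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    pts_cam = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    # Homogeneous transform into the world frame.
    ones = np.ones((pts_cam.shape[0], 1))
    pts_h = np.concatenate([pts_cam, ones], axis=1)
    return (pose @ pts_h.T).T[:, :3]
```

Fusing the points from successive predicted frames (each with its own pose) is what turns per-frame RGB-D predictions into a persistent scene representation.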
Problem

Research questions and friction points this paper is trying to address.

Simulating long-horizon effects of actions in 3D environments
Addressing myopic vision in current video-based world models
Enhancing consistency in partially observed complex scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Persistent 3D map for long-horizon simulation
RGB-D video prediction with a video diffusion model
Conditioning on 3D spatial map for unseen areas
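The persistent 3D map at the heart of these contributions can be sketched as a simple world-frame occupancy grid that accumulates fused points over time. The class and parameters below are illustrative assumptions (the paper's map may store richer features, e.g. color or latents):

```python
import numpy as np

class PersistentVoxelMap:
    """Minimal sketch of a persistent 3D memory: a fixed-size
    boolean occupancy grid that accumulates world-frame points."""

    def __init__(self, size=64, voxel=0.1, origin=(0.0, 0.0, 0.0)):
        self.grid = np.zeros((size, size, size), dtype=bool)
        self.voxel = voxel          # edge length of one voxel, metres
        self.origin = np.asarray(origin, dtype=float)

    def fuse(self, points):
        """Mark voxels hit by world-frame points of shape (N, 3)."""
        idx = np.floor((points - self.origin) / self.voxel).astype(int)
        inside = np.all((idx >= 0) & (idx < self.grid.shape[0]), axis=1)
        hit = idx[inside]
        self.grid[hit[:, 0], hit[:, 1], hit[:, 2]] = True

    def occupied(self, point):
        """Query whether the voxel containing a world point is filled."""
        i = np.floor((np.asarray(point) - self.origin) / self.voxel).astype(int)
        return bool(self.grid[i[0], i[1], i[2]])
```

Because the map persists across generation steps, conditioning the diffusion model on (a rendering of) this grid lets it regenerate previously seen regions consistently rather than hallucinating them anew.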