TesserAct: Learning 4D Embodied World Models

📅 2025-04-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work introduces the first end-to-end learnable 4D embodied world model, designed to jointly model the spatiotemporally consistent dynamic evolution of 3D scenes under embodied agent actions. Methodologically, it jointly processes RGB-D-N multimodal video sequences by integrating diffusion-based fine-tuning, depth- and normal-guided neural radiance field (NeRF) implicit reconstruction, and geometrically constrained 4D voxelization for direct mapping from raw sensory inputs to explicit 4D geometric-temporal scene representations. The key contribution lies in overcoming the limitations of conventional 2D video world models, enabling—for the first time—high-fidelity inverse dynamics modeling and seamless 4D scene reconstruction. Experiments demonstrate significant improvements: novel-view synthesis PSNR increases by 3.2 dB, and downstream policy learning success rate improves by 27% over prior video-based world models.

Technology Category

Application Category

📝 Abstract
This paper presents an effective approach for learning novel 4D embodied world models, which predict the dynamic evolution of 3D scenes over time in response to an embodied agent's actions, providing both spatial and temporal consistency. We propose to learn a 4D world model by training on RGB-DN (RGB, Depth, and Normal) videos. This not only surpasses traditional 2D models by incorporating detailed shape, configuration, and temporal changes into their predictions, but also allows us to effectively learn accurate inverse dynamic models for an embodied agent. Specifically, we first extend existing robotic manipulation video datasets with depth and normal information leveraging off-the-shelf models. Next, we fine-tune a video generation model on this annotated dataset, which jointly predicts RGB-DN (RGB, Depth, and Normal) for each frame. We then present an algorithm to directly convert generated RGB, Depth, and Normal videos into a high-quality 4D scene of the world. Our method ensures temporal and spatial coherence in 4D scene predictions from embodied scenarios, enables novel view synthesis for embodied environments, and facilitates policy learning that significantly outperforms those derived from prior video-based world models.
Problem

Research questions and friction points this paper is trying to address.

Predicting 3D scene evolution over time for embodied agents
Learning 4D world models from RGB-DN video data
Ensuring spatial-temporal coherence in dynamic 4D scene generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learns 4D world models from RGB-DN videos
Extends datasets with depth and normal data
Converts RGB-DN videos into 4D scenes
🔎 Similar Papers
No similar papers found.