RoboScape: Physics-informed Embodied World Model

📅 2025-06-29
📈 Citations: 0 · Influential citations: 0
🤖 AI Summary
Existing embodied world models exhibit deficiencies in modeling 3D geometry and motion dynamics, leading to physically implausible artifacts in video generation for contact-rich scenes. This paper proposes a physics-aware embodied world model that jointly learns RGB video generation and physics priors, implicitly encoding object shape, material properties, and other physical attributes. Its core innovations are two physics-aware auxiliary tasks, temporal depth prediction and keypoint dynamics modeling, integrated with end-to-end neural rendering and explicit physical constraints in a unified training framework. This joint optimization significantly improves geometric consistency and motion plausibility. Experiments demonstrate that the model generates high-fidelity, physically realistic videos across diverse robotic manipulation scenarios, effectively supporting data-driven policy training and evaluation. The results validate its strong generalization capability and practical utility for embodied AI applications.

📝 Abstract
World models have become indispensable tools for embodied intelligence, serving as powerful simulators capable of generating realistic robotic videos while addressing critical data scarcity challenges. However, current embodied world models exhibit limited physical awareness, particularly in modeling 3D geometry and motion dynamics, resulting in unrealistic video generation for contact-rich robotic scenarios. In this paper, we present RoboScape, a unified physics-informed world model that jointly learns RGB video generation and physics knowledge within an integrated framework. We introduce two key physics-informed joint training tasks: temporal depth prediction that enhances 3D geometric consistency in video rendering, and keypoint dynamics learning that implicitly encodes physical properties (e.g., object shape and material characteristics) while improving complex motion modeling. Extensive experiments demonstrate that RoboScape generates videos with superior visual fidelity and physical plausibility across diverse robotic scenarios. We further validate its practical utility through downstream applications including robotic policy training with generated data and policy evaluation. Our work provides new insights for building efficient physics-informed world models to advance embodied intelligence research. The code is available at: https://github.com/tsinghua-fib-lab/RoboScape.
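
To make the training recipe concrete, here is a minimal sketch of how the joint objective described in the abstract could be wired up: a weighted sum of an RGB video reconstruction loss and the two physics-informed auxiliary losses. The tensor shapes, loss choices, and weights are illustrative assumptions, not the paper's exact formulation; see the linked repository for the actual implementation.

```python
import torch
import torch.nn.functional as F

def joint_physics_informed_loss(
    pred_rgb: torch.Tensor,      # (B, T, 3, H, W) generated video frames
    target_rgb: torch.Tensor,    # (B, T, 3, H, W) ground-truth frames
    pred_depth: torch.Tensor,    # (B, T, 1, H, W) predicted per-frame depth
    target_depth: torch.Tensor,  # (B, T, 1, H, W) reference depth maps
    pred_kpts: torch.Tensor,     # (B, T, K, 2) predicted keypoint trajectories
    target_kpts: torch.Tensor,   # (B, T, K, 2) tracked keypoint trajectories
    w_depth: float = 0.5,        # illustrative weight, not from the paper
    w_kpts: float = 0.5,         # illustrative weight, not from the paper
) -> torch.Tensor:
    """Weighted sum of the video loss and two physics-informed auxiliary losses."""
    loss_rgb = F.mse_loss(pred_rgb, target_rgb)        # main RGB generation task
    loss_depth = F.l1_loss(pred_depth, target_depth)   # temporal depth prediction
    loss_kpts = F.l1_loss(pred_kpts, target_kpts)      # keypoint dynamics learning
    return loss_rgb + w_depth * loss_depth + w_kpts * loss_kpts
```

Training all three terms through a shared backbone is what lets the auxiliary tasks shape the video generator: the depth term pushes toward 3D-consistent rendering, and the keypoint term pushes toward plausible motion.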
Problem

Research questions and friction points this paper is trying to address.

Limited physical awareness in current robotic video generation
Weak modeling of 3D geometry and motion dynamics in embodied world models
Physically implausible video generation for contact-rich robotic scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified physics-informed world model framework
Temporal depth prediction for 3D consistency
Keypoint dynamics learning for motion modeling (see the sketch after this list)
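
Expanding on the keypoint bullet, the sketch below shows one plausible way to set up keypoint dynamics learning: predict frame-to-frame keypoint displacements from observed tracks, so the model has to internalize how objects move. The GRU predictor and displacement targets are assumptions for illustration, not RoboScape's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointDynamicsHead(nn.Module):
    """Predict next-step keypoint displacements from a history of positions."""

    def __init__(self, num_keypoints: int, hidden_dim: int = 128):
        super().__init__()
        self.num_keypoints = num_keypoints
        # Flatten the (K, 2) keypoints of each frame into one feature vector.
        self.rnn = nn.GRU(num_keypoints * 2, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_keypoints * 2)

    def forward(self, tracks: torch.Tensor) -> torch.Tensor:
        # tracks: (B, T, K, 2) keypoint positions over T observed frames.
        B, T, K, _ = tracks.shape
        feats, _ = self.rnn(tracks.reshape(B, T, K * 2))
        # Predict the displacement to the next frame at every time step.
        return self.head(feats).reshape(B, T, K, 2)

def keypoint_dynamics_loss(tracks: torch.Tensor, head: KeypointDynamicsHead) -> torch.Tensor:
    """Supervise predicted displacements against observed frame-to-frame motion."""
    pred_disp = head(tracks[:, :-1])            # (B, T-1, K, 2)
    true_disp = tracks[:, 1:] - tracks[:, :-1]  # observed displacements
    return F.l1_loss(pred_disp, true_disp)
```

Because displacements depend on object shape, mass, and contact, supervising them implicitly encodes physical properties without ever labeling those properties explicitly, which matches the intuition stated in the abstract.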
Yu Shang
Department of Electronic Engineering, Tsinghua University
Multimodal Learning · LLM Agent · Recommender System
Xin Zhang
Manifold AI
Yinzhou Tang
Tsinghua University
Lei Jin
Tsinghua University
Chen Gao
Tsinghua University
Wei Wu
Manifold AI
Yong Li
Tsinghua University