EmbodieDreamer: Advancing Real2Sim2Real Transfer for Policy Training via Embodied World Modeling

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Embodied AI faces a bottleneck in real-world data collection, which is costly and inefficient, while simulation-based training suffers from a substantial Real2Sim2Real gap, especially in physical dynamics and visual appearance. To bridge this gap, we propose EmbodieDreamer, a framework that pairs a differentiable physics-alignment module (PhysAligner) with a conditional video-diffusion visual-alignment module (VisAligner): PhysAligner optimizes robot-specific physical parameters such as control gains and friction coefficients to align simulated dynamics with real-world observations, while VisAligner translates low-fidelity simulated renderings into photorealistic video conditioned on simulation states. Experiments demonstrate that PhysAligner reduces physical parameter estimation error by 3.74% and accelerates optimization by 89.91% compared with simulated annealing. Policies trained in the generated photorealistic environment achieve an average 29.17% improvement in real-world task success rate after reinforcement learning, significantly narrowing the sim-to-real domain gap.

📝 Abstract
The rapid advancement of Embodied AI has led to an increasing demand for large-scale, high-quality real-world data. However, collecting such embodied data remains costly and inefficient. As a result, simulation environments have become a crucial surrogate for training robot policies. Yet, the significant Real2Sim2Real gap remains a critical bottleneck, particularly in terms of physical dynamics and visual appearance. To address this challenge, we propose EmbodieDreamer, a novel framework that reduces the Real2Sim2Real gap from both the physics and appearance perspectives. Specifically, we propose PhysAligner, a differentiable physics module designed to reduce the Real2Sim physical gap. It jointly optimizes robot-specific parameters such as control gains and friction coefficients to better align simulated dynamics with real-world observations. In addition, we introduce VisAligner, which incorporates a conditional video diffusion model to bridge the Sim2Real appearance gap by translating low-fidelity simulated renderings into photorealistic videos conditioned on simulation states, enabling high-fidelity visual transfer. Extensive experiments validate the effectiveness of EmbodieDreamer. The proposed PhysAligner reduces physical parameter estimation error by 3.74% compared to simulated annealing methods while improving optimization speed by 89.91%. Moreover, training robot policies in the generated photorealistic environment leads to a 29.17% improvement in the average task success rate across real-world tasks after reinforcement learning. Code, model and data will be publicly available.
Problem

Research questions and friction points this paper is trying to address.

Reducing Real2Sim2Real gap in physical dynamics and visual appearance
Optimizing robot-specific parameters to align simulated with real-world dynamics
Translating low-fidelity simulations into photorealistic videos for visual transfer
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentiable physics module for dynamics alignment
Conditional video diffusion for photorealistic rendering
Joint optimization of robot-specific physical parameters
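The differentiable-physics alignment idea can be illustrated with a toy sketch. Everything below (the 1D friction model, the dual-number autodiff, the parameter names) is a hypothetical illustration, not the paper's PhysAligner code; it shows only the core mechanism of recovering a physical parameter by gradient descent through a differentiable simulated rollout.

```python
# Hypothetical sketch of differentiable-physics parameter alignment
# (NOT the paper's PhysAligner): recover a friction coefficient by
# gradient descent through a simulated rollout, using forward-mode
# dual numbers for exact derivatives.

class Dual:
    """Forward-mode autodiff number: value `a` plus derivative `b`."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a - o.a, self.b - o.b)
    def __rsub__(self, o):
        return Dual(o).__sub__(self)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def rollout(mu, steps=50, dt=0.05, u=1.0):
    """Simulate a 1D point mass, x'' = u - mu*x', from rest; return positions."""
    x, v = Dual(0.0), Dual(0.0)
    positions = []
    for _ in range(steps):
        v = v + dt * (u - mu * v)   # semi-implicit Euler velocity update
        x = x + dt * v
        positions.append(x)
    return positions

# "Real-world" trajectory generated with the true (unknown) friction value.
true_mu = 0.8
real_xs = [p.a for p in rollout(Dual(true_mu))]

# Align the simulated parameter to the observations by gradient descent
# on the mean squared trajectory error.
mu, lr = 0.1, 0.3
for _ in range(300):
    sim = rollout(Dual(mu, 1.0))    # seed derivative: d(mu)/d(mu) = 1
    grad = sum(2.0 * (s.a - r) * s.b for s, r in zip(sim, real_xs)) / len(sim)
    mu = max(mu - lr * grad, 1e-3)  # keep the friction coefficient positive

print(round(mu, 2))  # approaches the true value 0.8
```

The actual framework jointly optimizes several robot-specific parameters (control gains, friction coefficients) against real observations; the sketch reduces this to a single scalar to keep the mechanism visible.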
Boyuan Wang
Institute of Automation, Chinese Academy of Sciences
Computer Vision · AIGC · World Model · Embodied AI
Xinpan Meng
GigaAI, Institute of Automation, Chinese Academy of Sciences
Xiaofeng Wang
GigaAI, Institute of Automation, Chinese Academy of Sciences
Zheng Zhu
GigaAI, Institute of Automation, Chinese Academy of Sciences
Angen Ye
GigaAI, Institute of Automation, Chinese Academy of Sciences
Yang Wang
GigaAI
Zhiqin Yang
The Chinese University of Hong Kong
Reasoning Models · Collaborative Learning
Chaojun Ni
GigaAI, Peking University
Guan Huang
GigaAI
Xingang Wang
Institute of Automation, Chinese Academy of Sciences