High-Fidelity Simulated Data Generation for Real-World Zero-Shot Robotic Manipulation Learning with Gaussian Splatting

📅 2025-10-12
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Real-world robotic learning is hindered by the high cost of real-data collection and labor-intensive annotation, while existing simulation-based approaches suffer from visual and physical domain gaps that impede sim-to-real transfer. To address this, we propose Real2Sim2Real: a novel framework that first reconstructs high-fidelity 3D environments from multi-view real images, jointly leveraging 3D Gaussian splatting for appearance modeling and deformable mesh primitives for physics-aware interaction modeling. Second, it employs a multimodal large language model to autonomously parse scenes, infer object motion structures and physical properties, and generate drivable assets and simulation parameters. Finally, policies are trained exclusively in simulation and deployed zero-shot on real robots. Experiments demonstrate significant improvements over state-of-the-art methods across diverse real-world manipulation tasks, validating the framework's strong generalization capability and practical efficacy.

πŸ“ Abstract
The scalability of robotic learning is fundamentally bottlenecked by the significant cost and labor of real-world data collection. While simulated data offers a scalable alternative, it often fails to generalize to the real world due to significant gaps in visual appearance, physical properties, and object interactions. To address this, we propose RoboSimGS, a novel Real2Sim2Real framework that converts multi-view real-world images into scalable, high-fidelity, and physically interactive simulation environments for robotic manipulation. Our approach reconstructs scenes using a hybrid representation: 3D Gaussian Splatting (3DGS) captures the photorealistic appearance of the environment, while mesh primitives for interactive objects ensure accurate physics simulation. Crucially, we pioneer the use of a Multi-modal Large Language Model (MLLM) to automate the creation of physically plausible, articulated assets. The MLLM analyzes visual data to infer not only physical properties (e.g., density, stiffness) but also complex kinematic structures (e.g., hinges, sliding rails) of objects. We demonstrate that policies trained entirely on data generated by RoboSimGS achieve successful zero-shot sim-to-real transfer across a diverse set of real-world manipulation tasks. Furthermore, data from RoboSimGS significantly enhances the performance and generalization capabilities of SOTA methods. Our results validate RoboSimGS as a powerful and scalable solution for bridging the sim-to-real gap.
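The hybrid representation the abstract describes (3D Gaussian Splatting for static appearance, mesh primitives for interactive objects) can be pictured as a minimal data-structure sketch. All class and field names below are illustrative assumptions, not the paper's actual code; in practice the splats would carry spherical-harmonic color coefficients and the meshes would come from the reconstruction pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class Gaussian:
    mean: tuple       # 3D center of the splat
    scale: tuple      # per-axis extent
    rotation: tuple   # orientation quaternion (w, x, y, z)
    opacity: float
    color: tuple      # RGB here; SH coefficients in a real 3DGS pipeline

@dataclass
class MeshPrimitive:
    vertices: list    # (x, y, z) vertex positions
    faces: list       # triangle index triples
    density: float    # physical property, inferred by the MLLM in the paper
    stiffness: float

@dataclass
class HybridScene:
    splats: list = field(default_factory=list)       # photorealistic background
    interactive: list = field(default_factory=list)  # physics-simulated objects

    def add_object(self, mesh: MeshPrimitive) -> None:
        self.interactive.append(mesh)

# Toy usage: one triangle stands in for a reconstructed object mesh.
scene = HybridScene()
scene.add_object(MeshPrimitive(vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                               faces=[(0, 1, 2)],
                               density=800.0, stiffness=1e4))
print(len(scene.interactive))  # 1
```

The split matters because the renderer consumes the splats while the physics engine consumes only the mesh primitives, which is how the framework keeps photorealism and accurate contact simulation in one scene.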
Problem

Research questions and friction points this paper is trying to address.

Bridging the sim-to-real gap in robotic manipulation learning
Automating creation of physically plausible articulated assets
Generating scalable high-fidelity simulated data from real images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Converts real images into interactive simulation environments
Uses Gaussian Splatting for photorealistic scene reconstruction
Employs MLLM to automate physical property inference
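One way to picture the MLLM-driven asset creation is as structured output: the model is prompted to emit a machine-readable spec of an object's physical properties and kinematic structure, which is then validated before being turned into a simulation asset. The JSON schema and the `parse_asset_spec` helper below are assumptions for illustration, not the paper's actual interface.

```python
import json

# Example of the kind of structured spec an MLLM could return for a
# scene object: physical properties plus an articulated joint.
EXAMPLE_MLLM_OUTPUT = """
{
  "object": "cabinet_door",
  "physics": {"density": 600.0, "stiffness": 5e5, "friction": 0.4},
  "kinematics": {
    "joint_type": "hinge",
    "axis": [0.0, 0.0, 1.0],
    "limits_deg": [0.0, 110.0]
  }
}
"""

VALID_JOINTS = {"hinge", "prismatic", "fixed"}

def parse_asset_spec(raw: str) -> dict:
    """Parse an MLLM-generated asset spec and sanity-check it before
    handing it to the simulator (hypothetical validation rules)."""
    spec = json.loads(raw)
    joint = spec["kinematics"]["joint_type"]
    if joint not in VALID_JOINTS:
        raise ValueError(f"unsupported joint type: {joint}")
    if spec["physics"]["density"] <= 0:
        raise ValueError("density must be positive")
    return spec

spec = parse_asset_spec(EXAMPLE_MLLM_OUTPUT)
print(spec["kinematics"]["joint_type"])  # hinge
```

Validating the model's output against a fixed joint vocabulary is one plausible way to keep hallucinated kinematics from reaching the physics engine.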
Haoyu Zhao
Wuhan University
Cheng Zeng
Tsinghua University
Linghao Zhuang
Xinjiang University
Yaxi Zhao
DAMO Academy, Alibaba Group
Shengke Xue
DAMO Academy, Alibaba Group
Hao Wang
Huazhong University of Science and Technology
Xingyue Zhao
Peking Union Medical College Hospital; Institute of Automation, Chinese Academy of Sciences; Alibaba
Zhongyu Li
The Chinese University of Hong Kong
Kehan Li
Stanford University
Siteng Huang
Alibaba DAMO Academy | ZJU | Westlake University
Mingxiu Chen
DAMO Academy, Alibaba Group
Xin Li
DAMO Academy, Alibaba Group
Deli Zhao
Alibaba DAMO Academy
Hua Zou
Wuhan University