🤖 AI Summary
To address the poor generalization of vision-based navigation agents in sim2real transfer caused by simulation artifacts, this paper proposes a real2sim paradigm: end-to-end construction of photorealistic, physically interactive city-scale neural 3D digital twins from monocular video. The method integrates NeRF-based reconstruction, implicit physical modeling, video-based temporal geometric constraint optimization, and an RL-friendly simulation interface, overcoming the fidelity and interactivity limitations inherent in conventional graphics engines and domain randomization. Experiments demonstrate substantial improvements: navigation success rates increase by 31.2% in digital-twin environments and by 68.3% on real-city benchmarks, significantly outperforming existing simulation-based training approaches. To the authors' knowledge, this is the first work enabling fully automatic, end-to-end generation of embodied, high-fidelity simulation environments directly from ordinary monocular video.
📝 Abstract
The sim-to-real gap has long posed a significant challenge for robot learning in simulation, preventing the deployment of learned models in the real world. Previous work has primarily focused on domain randomization and system identification to mitigate this gap, but these methods are often limited by the inherent constraints of the simulation and graphics engines. In this work, we propose Vid2Sim, a novel framework that effectively bridges the sim2real gap through a scalable and cost-efficient real2sim pipeline for neural 3D scene reconstruction and simulation. Given a monocular video as input, Vid2Sim generates photorealistic and physically interactable 3D simulation environments that enable reinforcement learning of visual navigation agents in complex urban environments. Extensive experiments demonstrate that Vid2Sim significantly improves urban navigation performance in the digital twins and the real world, with success-rate gains of 31.2% and 68.3%, respectively, over agents trained with prior simulation methods.
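The abstract describes turning a reconstructed digital twin into an environment that an RL agent can train against. The sketch below illustrates what such a gym-style interface around a neural digital twin might look like; the paper does not publish this API, so every class, method, and parameter name here (`Vid2SimEnvSketch`, `rollout`, the placeholder observation and reward) is an illustrative assumption, with the renderer and physics replaced by stubs.

```python
import random


class Vid2SimEnvSketch:
    """Hypothetical gym-style wrapper around a reconstructed digital twin.

    All names and signatures are illustrative assumptions, not the
    paper's actual interface. The photorealistic renderer and the
    physics layer are replaced by trivial stubs.
    """

    def __init__(self, scene, max_steps=100):
        self.scene = scene          # stands in for a reconstructed 3D scene
        self.max_steps = max_steps
        self.steps = 0

    def reset(self):
        self.steps = 0
        return self._observe()

    def step(self, action):
        # In a real pipeline this would render the agent's new view and
        # resolve collisions against the scene geometry; here we fake
        # both with a random chance of reaching the goal.
        self.steps += 1
        reached_goal = action == "forward" and random.random() < 0.1
        done = reached_goal or self.steps >= self.max_steps
        reward = 1.0 if reached_goal else -0.01
        return self._observe(), reward, done, {"success": reached_goal}

    def _observe(self):
        # Placeholder for an RGB render of the agent's current viewpoint.
        return {"rgb": [[0.0] * 4 for _ in range(4)], "step": self.steps}


def rollout(env, policy, episodes=5):
    """Run a few episodes and report the empirical success rate."""
    successes = 0
    for _ in range(episodes):
        obs = env.reset()
        done = False
        while not done:
            obs, reward, done, info = env.step(policy(obs))
        successes += int(info["success"])
    return successes / episodes
```

A success-rate metric like the one `rollout` computes is what the reported 31.2% and 68.3% improvements are measured in; the stub environment exists only to show the interface shape, not any real dynamics.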