🤖 AI Summary
This work addresses the sim-to-real performance gap in visual navigation, where policies trained in simulation underperform when deployed on real-world platforms. The proposed approach combines pretrained visual representations, end-to-end deep reinforcement learning, and a lightweight architecture capable of real-time onboard inference. Key contributions include: (i) a frozen pretrained image encoder that extracts robust visual features, mitigating appearance discrepancies between simulation and reality; and (ii) on-policy learning in simulation, identified as a key advantage over training with real data, which enables the simulator-trained policy to surpass purely real-data-trained baselines. On a wheeled mobile robot, the method achieves a 31% higher navigation success rate than its real-data-trained counterpart and outperforms prior state-of-the-art methods by 50%. The same model also transfers to a drone without architectural modification, validating its cross-platform generalizability.
📝 Abstract
This paper investigates how the performance of visual navigation policies trained in simulation compares to that of policies trained with real-world data. Simulator-trained policies often degrade significantly when evaluated in the real world. Despite this well-known sim-to-real gap, we demonstrate that simulator-trained policies can match the performance of their real-world-trained counterparts.
Central to our approach is a navigation policy architecture that bridges the sim-to-real appearance gap by leveraging pretrained visual representations and runs in real time on robot hardware. Evaluations on a wheeled mobile robot show that the proposed policy, when trained in simulation, outperforms its real-world-trained version by 31% and prior state-of-the-art methods by 50% in navigation success rate. Policy generalization is verified by deploying the same model onboard a drone.
Our results highlight the importance of diverse image-encoder pretraining for sim-to-real generalization, and identify on-policy learning as a key advantage of simulated training over training with real data.