🤖 AI Summary
UAV navigation policies trained in simulation often degrade sharply at deployment because of the visual domain shift between real-world stereo-depth estimates and the ground-truth depth available in simulation. This paper proposes an unsupervised domain adaptation method based on VAE latent-space alignment that lets simulation-trained policies run directly on real-world stereo-depth inputs, with no fine-tuning, online optimization, or access to real-world labels. Its core idea is to construct a depth representation space that is consistent across domains, sidestepping the instability inherent in pixel-level alignment. On an IsaacGym obstacle-avoidance task, the method nearly doubles the success rate when switching from ground-truth to stereo depth, and it outperforms state-of-the-art methods on the cross-simulator AvoidBench benchmark. Extensive real-world indoor and outdoor UAV experiments further demonstrate strong robustness and generalization.
📝 Abstract
Sim-to-real transfer is a fundamental challenge in robot reinforcement learning. Discrepancies between simulation and reality can significantly impair policy performance, especially when the policy receives high-dimensional inputs such as dense depth estimates from vision. We propose a novel depth transfer method based on domain adaptation to bridge the visual gap between simulated and real-world depth data. A Variational Autoencoder (VAE) is first trained to encode ground-truth depth images from simulation into a latent space, which serves as input to a reinforcement learning (RL) policy. During deployment, the encoder is refined so that stereo depth images map into this same latent space, enabling direct policy transfer without fine-tuning. We apply our method to autonomous drone navigation through cluttered environments. Experiments in IsaacGym show that our method nearly doubles the obstacle-avoidance success rate when switching from ground-truth to stereo depth input. Furthermore, we demonstrate successful transfer to the photo-realistic simulator AvoidBench using only IsaacGym-generated stereo data, achieving superior performance compared to state-of-the-art baselines. Real-world evaluations in both indoor and outdoor environments confirm the effectiveness of our approach, enabling robust and generalizable depth-based navigation across diverse domains.
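To illustrate the latent-alignment idea in the abstract, the sketch below mimics the key step: the simulation-trained encoder is frozen, and a second encoder for stereo depth is trained so that its latents match the frozen encoder's latents on paired ground-truth/stereo inputs, leaving the downstream policy untouched. This is a minimal NumPy toy with linear encoders, synthetic data, and hypothetical dimensions, not the paper's actual VAE or training code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): flattened depth image -> latent.
D, Z, N = 64, 8, 256

# Frozen "simulation" encoder: stands in for the VAE encoder trained on
# ground-truth depth in simulation (a fixed random linear map here).
W_sim = rng.normal(size=(Z, D)) / np.sqrt(D)

# Paired data: ground-truth depth and a noisier stereo-depth counterpart.
gt_depth = rng.normal(size=(N, D))
stereo_depth = gt_depth + 0.1 * rng.normal(size=(N, D))  # crude stereo artifacts

# Latent-space alignment: fit a stereo encoder W_stereo so that
# stereo latents match the frozen simulation latents (MSE in latent space).
target = gt_depth @ W_sim.T          # latents the policy was trained on
W_stereo = rng.normal(size=(Z, D)) / np.sqrt(D)
lr = 0.05
for _ in range(500):
    pred = stereo_depth @ W_stereo.T
    grad = (2.0 / N) * (pred - target).T @ stereo_depth  # dL/dW for MSE loss
    W_stereo -= lr * grad

mse = np.mean((stereo_depth @ W_stereo.T - target) ** 2)
print(f"latent alignment MSE: {mse:.4f}")
```

Because alignment happens in the low-dimensional latent space rather than pixel space, the policy consuming these latents never sees the domain gap directly, which is the property the paper exploits for transfer without fine-tuning.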