Depth Transfer: Learning to See Like a Simulator for Real-World Drone Navigation

📅 2025-05-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the significant performance degradation of UAV policies during sim-to-real transfer—caused by the visual domain shift between stereo-depth estimation and ground-truth depth in simulation—this paper proposes an unsupervised domain adaptation method based on VAE latent-space alignment. The approach enables direct deployment of simulation-trained policies on real-world stereo-depth inputs without fine-tuning, online optimization, or access to real-world labels. Its core innovation lies in constructing a cross-domain consistent depth representation space, circumventing the instability inherent in pixel-level alignment. In IsaacGym obstacle-avoidance experiments, the method nearly doubles the success rate, and it outperforms state-of-the-art methods on the cross-simulator AvoidBench benchmark. Extensive real-world indoor and outdoor UAV experiments further demonstrate its strong robustness and generalization capability.

📝 Abstract
Sim-to-real transfer is a fundamental challenge in robot reinforcement learning. Discrepancies between simulation and reality can significantly impair policy performance, especially when the policy receives high-dimensional inputs such as dense depth estimates from vision. We propose a novel depth transfer method based on domain adaptation to bridge the visual gap between simulated and real-world depth data. A Variational Autoencoder (VAE) is first trained to encode ground-truth depth images from simulation into a latent space, which serves as input to a reinforcement learning (RL) policy. During deployment, the encoder is refined to align stereo depth images with this latent space, enabling direct policy transfer without fine-tuning. We apply our method to the task of autonomous drone navigation through cluttered environments. Experiments in IsaacGym show that our method nearly doubles the obstacle avoidance success rate when switching from ground-truth to stereo depth input. Furthermore, we demonstrate successful transfer to the photo-realistic simulator AvoidBench using only IsaacGym-generated stereo data, achieving superior performance compared to state-of-the-art baselines. Real-world evaluations in both indoor and outdoor environments confirm the effectiveness of our approach, enabling robust and generalizable depth-based navigation across diverse domains.
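The core idea in the abstract—refine a depth encoder so stereo-depth latents land in the same latent space the policy was trained on—can be illustrated with a toy sketch. This is not the paper's actual architecture: the linear "encoders", shapes, noise model, and hyperparameters below are all illustrative assumptions standing in for the VAE encoder and its refinement step.

```python
# Toy sketch of latent-space alignment for depth transfer (illustrative
# only; the paper uses a VAE, not these linear stand-ins). A frozen
# encoder W_sim represents the simulation-trained encoder; W_stereo is
# refined so its latents for noisy stereo depth match W_sim's latents
# for the paired ground-truth depth (MSE in latent space).
import numpy as np

rng = np.random.default_rng(0)
D, Z = 64, 8  # flattened depth-image size, latent size (assumed)

# Frozen encoder trained on simulation ground-truth depth (stand-in).
W_sim = rng.normal(scale=0.1, size=(Z, D))
# Stereo-depth encoder to be aligned with the simulation latent space.
W_stereo = rng.normal(scale=0.1, size=(Z, D))

def encode(W, x):
    """Deterministic toy encoder: latent vector is a linear map of x."""
    return W @ x

def latent_gap(n=100):
    """Mean latent distance between the two encoders on fresh inputs."""
    xs = rng.normal(size=(n, D))
    return float(np.mean([np.linalg.norm(encode(W_stereo, x) - encode(W_sim, x))
                          for x in xs]))

gap_before = latent_gap()
lr = 0.01
for _ in range(500):
    gt = rng.normal(size=D)                       # sim ground-truth depth
    stereo = gt + rng.normal(scale=0.05, size=D)  # noisy stereo estimate
    err = encode(W_stereo, stereo) - encode(W_sim, gt)
    # Gradient of 0.5 * ||err||^2 w.r.t. W_stereo is outer(err, stereo).
    W_stereo -= lr * np.outer(err, stereo)

gap_after = latent_gap()
print(gap_before, gap_after)  # alignment shrinks the latent gap
```

After alignment, stereo latents approximate the simulation latents, so a policy trained only on simulation latents can consume stereo-depth inputs directly—the "direct policy transfer without fine-tuning" the abstract describes, here reduced to its simplest possible form.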
Problem

Research questions and friction points this paper is trying to address.

Bridging visual gap between simulated and real-world depth data
Improving drone navigation policy transfer without fine-tuning
Enhancing obstacle avoidance success rate with stereo depth input
Innovation

Methods, ideas, or system contributions that make the work stand out.

Domain adaptation bridges sim-real depth gap
VAE encodes simulation depth for RL policy
Encoder aligns stereo depth for direct transfer
Authors

Hang Yu — Faculty of Aerospace Engineering, Delft University of Technology, 2629 HS Delft, The Netherlands
Christophe De Wagter — Assistant Professor, Delft University of Technology (UAV, MAV, Control, Vision, AI)
Guido C. H. E de Croon — Faculty of Aerospace Engineering, Delft University of Technology, 2629 HS Delft, The Netherlands