SkyDreamer: Interpretable End-to-End Vision-Based Drone Racing with Model-Based Reinforcement Learning

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing end-to-end vision-based drone racing approaches struggle to simultaneously achieve sim-to-real transfer, full onboard deployment, and champion-level performance. This paper introduces the first model-based reinforcement learning framework for end-to-end visual drone racing, integrating an enhanced Dreamer architecture, implicit state decoding, online parameter estimation, and model predictive control—operating without external localization or camera calibration. The resulting system enables interpretable, adaptive pixel-to-motor control. It supports real-time, fully onboard inference and achieves 21 m/s speed and 6g acceleration on physical race tracks, successfully executing high-difficulty maneuvers including inverted loops and Split-S turns. Moreover, it demonstrates robustness to battery degradation and low-fidelity visual inputs. Experimental validation confirms strong cross-platform deployability and generalization capability.

📝 Abstract
Autonomous drone racing (ADR) systems have recently achieved champion-level performance, yet remain highly specific to drone racing. While end-to-end vision-based methods promise broader applicability, no system to date simultaneously achieves full sim-to-real transfer, onboard execution, and champion-level performance. In this work, we present SkyDreamer, to the best of our knowledge, the first end-to-end vision-based ADR policy that maps directly from pixel-level representations to motor commands. SkyDreamer builds on informed Dreamer, a model-based reinforcement learning approach where the world model decodes to privileged information only available during training. By extending this concept to end-to-end vision-based ADR, the world model effectively functions as an implicit state and parameter estimator, greatly improving interpretability. SkyDreamer runs fully onboard without external aid, resolves visual ambiguities by tracking progress using the state decoded from the world model's hidden state, and requires no extrinsic camera calibration, enabling rapid deployment across different drones without retraining. Real-world experiments show that SkyDreamer achieves robust, high-speed flight, executing tight maneuvers such as an inverted loop, a split-S and a ladder, reaching speeds of up to 21 m/s and accelerations of up to 6 g. It further demonstrates a non-trivial visual sim-to-real transfer by operating on poor-quality segmentation masks, and exhibits robustness to battery depletion by accurately estimating the maximum attainable motor RPM and adjusting its flight path in real-time. These results highlight SkyDreamer's adaptability to important aspects of the reality gap, bringing robustness while still achieving extremely high-speed, agile flight.
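The core mechanism the abstract describes — a world model whose hidden state is decoded into privileged information (e.g. drone pose) that is only available in simulation, so that at deployment the decoder acts as an implicit state estimator — can be illustrated with a toy sketch. This is not the authors' code; all dimensions, weight shapes, and function names below are illustrative assumptions, with randomly initialized weights standing in for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

H, E, A, S = 32, 16, 4, 9   # hidden, image-embedding, action, privileged-state dims

# Random stand-ins for learned weights (purely illustrative).
W_h = rng.normal(scale=0.1, size=(H, H + E + A))  # recurrent world-model update
W_dec = rng.normal(scale=0.1, size=(S, H))        # privileged-state decoder head

def world_model_step(h, embed, action):
    """One deterministic world-model update: h' = tanh(W [h; embed; a])."""
    x = np.concatenate([h, embed, action])
    return np.tanh(W_h @ x)

def decode_privileged(h):
    """Decoder head mapping the hidden state to an estimated privileged
    state (position, velocity, attitude, ...). In training it would be
    supervised with the simulator's ground-truth state; at deployment it
    is read out as an implicit state estimate, aiding interpretability."""
    return W_dec @ h

h = np.zeros(H)
for t in range(5):
    embed = rng.normal(size=E)   # stand-in for a CNN embedding of the camera image
    action = rng.normal(size=A)  # motor commands produced by the policy
    h = world_model_step(h, embed, action)

est = decode_privileged(h)
print(est.shape)  # estimated privileged-state vector, here of dimension 9
```

The key point is that the motor-command policy conditions only on `h`; the privileged decoder is an auxiliary head, so no external localization is needed at test time.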
Problem

Research questions and friction points this paper is trying to address.

Develops end-to-end vision-based drone racing with full sim-to-real transfer
Creates interpretable autonomous racing using model-based reinforcement learning
Achieves onboard execution without external aids or camera calibration
Innovation

Methods, ideas, or system contributions that make the work stand out.

End-to-end vision-based drone racing policy
Model-based reinforcement learning with world model
Onboard execution without external aid or calibration
Aderik Verraest
Micro Air Vehicle Lab of the Faculty of Aerospace Engineering, Delft University of Technology, 2629 HS Delft, The Netherlands
Stavrow Bahnam
Micro Air Vehicle Lab of the Faculty of Aerospace Engineering, Delft University of Technology, 2629 HS Delft, The Netherlands
Robin Ferede
Micro Air Vehicle Lab of the Faculty of Aerospace Engineering, Delft University of Technology, 2629 HS Delft, The Netherlands
Guido de Croon
Full professor, Delft University of Technology
Bio-inspired Robotics, Micro Air Vehicles, Vision-based Navigation, Swarm Robotics
Christophe De Wagter
Assistant Professor, Delft University of Technology
UAV, MAV, Control, Vision, AI