🤖 AI Summary
Existing end-to-end vision-based drone racing approaches struggle to simultaneously achieve sim-to-real transfer, fully onboard deployment, and champion-level performance. This paper introduces SkyDreamer, the first model-based reinforcement learning framework for end-to-end vision-based drone racing. It builds on an informed Dreamer architecture in which the world model decodes to privileged state and parameter information available only during training, so the world model acts as an implicit state and parameter estimator, and it operates without external localization or extrinsic camera calibration. The resulting policy maps pixel-level representations directly to motor commands, yielding interpretable, adaptive control. Running in real time fully onboard, it reaches speeds of up to 21 m/s and accelerations of up to 6 g on physical race tracks, executing demanding maneuvers including an inverted loop, a split-S, and a ladder. It further demonstrates robustness to battery depletion and low-quality segmentation-mask inputs, and experiments confirm deployability across different drones without retraining.
📝 Abstract
Autonomous drone racing (ADR) systems have recently achieved champion-level performance, yet remain highly specific to drone racing. While end-to-end vision-based methods promise broader applicability, no system to date simultaneously achieves full sim-to-real transfer, onboard execution, and champion-level performance. In this work, we present SkyDreamer, to the best of our knowledge, the first end-to-end vision-based ADR policy that maps directly from pixel-level representations to motor commands. SkyDreamer builds on informed Dreamer, a model-based reinforcement learning approach where the world model decodes to privileged information only available during training. By extending this concept to end-to-end vision-based ADR, the world model effectively functions as an implicit state and parameter estimator, greatly improving interpretability. SkyDreamer runs fully onboard without external aid, resolves visual ambiguities by tracking progress using the state decoded from the world model's hidden state, and requires no extrinsic camera calibration, enabling rapid deployment across different drones without retraining. Real-world experiments show that SkyDreamer achieves robust, high-speed flight, executing tight maneuvers such as an inverted loop, a split-S, and a ladder, reaching speeds of up to 21 m/s and accelerations of up to 6 g. It further demonstrates non-trivial visual sim-to-real transfer by operating on poor-quality segmentation masks, and exhibits robustness to battery depletion by accurately estimating the maximum attainable motor RPM and adjusting its flight path in real time. These results highlight SkyDreamer's adaptability to important aspects of the reality gap, bringing robustness while still achieving extremely high-speed, agile flight.
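To make the informed-Dreamer idea concrete, the following is a minimal NumPy sketch, not the paper's implementation: a recurrent world model folds each observation into a hidden state, and a decoder head is trained to reconstruct privileged simulator state (e.g. pose and motor parameters) that is unavailable at deployment. After training, that same head read off the hidden state gives the implicit state estimate the abstract describes. All dimensions, names, and the linear/tanh update are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the real SkyDreamer networks are far larger.
OBS_DIM, PRIV_DIM, LATENT_DIM = 8, 4, 16

# Random linear maps standing in for the learned networks.
W_rec  = rng.normal(scale=0.1, size=(LATENT_DIM, LATENT_DIM))  # recurrent dynamics
W_enc  = rng.normal(scale=0.1, size=(LATENT_DIM, OBS_DIM))     # observation encoder
W_obs  = rng.normal(scale=0.1, size=(OBS_DIM, LATENT_DIM))     # observation decoder head
W_priv = rng.normal(scale=0.1, size=(PRIV_DIM, LATENT_DIM))    # privileged-state decoder head

def wm_step(h, obs):
    """Fold a new observation into the world model's hidden state."""
    return np.tanh(W_rec @ h + W_enc @ obs)

def training_loss(h, obs, priv):
    """Reconstruction loss plus a privileged-state decoding term.

    `priv` (true state from the simulator) exists only during training;
    at deployment the same head yields an implicit state estimate.
    """
    obs_err = W_obs @ h - obs
    priv_err = W_priv @ h - priv
    return float(obs_err @ obs_err + priv_err @ priv_err)

# Roll the model over a short synthetic episode.
h = np.zeros(LATENT_DIM)
for _ in range(10):
    obs = rng.normal(size=OBS_DIM)    # stand-in for a segmentation-mask embedding
    priv = rng.normal(size=PRIV_DIM)  # stand-in for simulator ground truth
    h = wm_step(h, obs)
    loss = training_loss(h, obs, priv)

# Deployment-time implicit state estimate, decoded from the hidden state.
state_estimate = W_priv @ h
```

The key design point is that the privileged decoding term shapes the hidden state during training without the privileged signal ever being an input, so nothing changes at deployment except that the decoded state becomes an interpretable estimate (used, per the abstract, for progress tracking and adapting to effects such as battery depletion).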