🤖 AI Summary
Autonomous drone navigation in dynamic environments is hampered by the high cost of real-world data acquisition and the difficulty of sim-to-real transfer. Method: This paper proposes the first diffusion-based framework for jointly modeling first-person-view (FPV) video generation and action prediction. Given a single FPV image, it synthesizes physically plausible, diverse flight video sequences while directly generating state-action trajectory pairs. The authors introduce trajectory-level physical constraints and a joint state-action sampling mechanism, coupled with an end-to-end pipeline for simulation validation and real-world deployment evaluation. Results: The generated trajectories achieve an average positional error of 0.25 m and an average angular error of 0.19 rad. In real-world downstream navigation tasks, success rates exceed 60%, with no statistically significant gap relative to simulation, demonstrating substantial improvements in data efficiency, navigation robustness, and sim-to-real generalization.
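The positional and angular error figures above are trajectory-level metrics. As a rough illustration of how such numbers are typically computed (the exact metric definitions are assumptions, not taken from the paper), here is a minimal sketch comparing a generated trajectory against a reference, reporting mean and RMSE for position and yaw:

```python
import math

# Hedged sketch: mean/RMSE position and orientation errors between a
# generated and a reference trajectory. Waypoints, the (x, y, z, yaw)
# state layout, and the angle-wrapping convention are illustrative
# assumptions, not the paper's actual definitions or data.

def traj_errors(generated, reference):
    """generated/reference: equal-length lists of (x, y, z, yaw) waypoints."""
    pos_errs = [math.dist(g[:3], r[:3]) for g, r in zip(generated, reference)]
    # Wrap yaw differences into (-pi, pi] before taking magnitudes.
    ang_errs = [abs((g[3] - r[3] + math.pi) % (2 * math.pi) - math.pi)
                for g, r in zip(generated, reference)]
    mean = lambda xs: sum(xs) / len(xs)
    rmse = lambda xs: math.sqrt(mean([e * e for e in xs]))
    return mean(pos_errs), rmse(pos_errs), mean(ang_errs), rmse(ang_errs)

# Toy three-waypoint trajectories for demonstration only.
gen = [(0.0, 0.0, 1.0, 0.00), (1.1, 0.2, 1.0, 0.10), (2.0, 0.9, 1.2, 0.35)]
ref = [(0.0, 0.0, 1.0, 0.00), (1.0, 0.0, 1.0, 0.05), (2.0, 1.0, 1.1, 0.30)]

mp, rp, ma, ra = traj_errors(gen, ref)
print(f"pos: mean {mp:.3f} m, RMSE {rp:.3f} m; yaw: mean {ma:.3f} rad, RMSE {ra:.3f} rad")
```

Note that RMSE is never smaller than the mean absolute error, which is consistent with the reported pairs (0.25 m vs. 0.28 m, 0.19 rad vs. 0.24 rad).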
📝 Abstract
We present FlightDiffusion, a diffusion-model-based framework for training autonomous drones from first-person view (FPV) video. Our model generates realistic video sequences from a single frame, enriched with corresponding action spaces to enable reasoning-driven navigation in dynamic environments. Beyond direct policy learning, FlightDiffusion leverages its generative capabilities to synthesize diverse FPV trajectories and state-action pairs, facilitating the creation of large-scale training datasets without the high cost of real-world data collection. Our evaluation demonstrates that the generated trajectories are physically plausible and executable, with a mean position error of 0.25 m (RMSE 0.28 m) and a mean orientation error of 0.19 rad (RMSE 0.24 rad). This approach enables improved policy learning and dataset scalability, leading to superior performance in downstream navigation tasks. Results in simulated environments highlight enhanced robustness, smoother trajectory planning, and adaptability to unseen conditions. An ANOVA revealed no statistically significant difference between performance in simulation and reality (F(1, 16) = 0.394, p = 0.541), with success rates of M = 0.628 (SD = 0.162) and M = 0.617 (SD = 0.177), respectively, indicating strong sim-to-real transfer. The generated datasets provide a valuable resource for future UAV research. This work introduces diffusion-based reasoning as a promising paradigm for unifying navigation, action generation, and data synthesis in aerial robotics.
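The reported F(1, 16) statistic implies a one-way ANOVA over two groups (simulation vs. reality) with 18 total runs. A minimal pure-Python sketch of that test is below; the per-run success rates are illustrative placeholders, not the paper's raw data, so the resulting F value will not match the published 0.394:

```python
# Hedged sketch: one-way ANOVA comparing per-run navigation success
# rates in simulation vs. reality. The sample values are invented for
# illustration; only the group structure (2 groups, 9 runs each,
# giving df = (1, 16)) mirrors the reported analysis.

def one_way_anova(*groups):
    """Return (F, df_between, df_within) for a one-way ANOVA."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: group sizes times squared mean offsets.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares: squared deviations from each group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between = k - 1
    df_within = n_total - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

sim  = [0.62, 0.55, 0.71, 0.48, 0.80, 0.60, 0.75, 0.50, 0.64]  # placeholder
real = [0.60, 0.52, 0.70, 0.45, 0.78, 0.58, 0.73, 0.48, 0.71]  # placeholder

F, df1, df2 = one_way_anova(sim, real)
print(f"F({df1}, {df2}) = {F:.3f}")
```

In practice one would use `scipy.stats.f_oneway`, which returns the F statistic and the p-value directly; a small F with a large p, as in the paper's result, indicates the sim and real success-rate distributions are statistically indistinguishable.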