FlightDiffusion: Revolutionising Autonomous Drone Training with Diffusion Models Generating FPV Video

📅 2025-09-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Autonomous drone navigation in dynamic environments faces high real-world data acquisition costs and challenges in sim-to-real transfer. Method: This paper proposes the first diffusion-based framework for joint modeling of first-person-view (FPV) video generation and action prediction. Given a single FPV image, it synthesizes physically plausible and diverse flight video sequences while directly generating state-action trajectory pairs. We introduce trajectory-level physical constraints and a state-action joint sampling mechanism, coupled with an end-to-end pipeline for simulation validation and real-world deployment evaluation. Results: The generated trajectories achieve an average positional error of 0.25 m and angular error of 0.19 rad. In real-world downstream navigation tasks, success rates exceed 60%, with no statistically significant performance gap compared to simulation—demonstrating substantial improvements in data efficiency, navigation robustness, and sim-to-real generalization.

Technology Category

Application Category

📝 Abstract
We present FlightDiffusion, a diffusion-model-based framework for training autonomous drones from first-person view (FPV) video. Our model generates realistic video sequences from a single frame, enriched with corresponding action spaces to enable reasoning-driven navigation in dynamic environments. Beyond direct policy learning, FlightDiffusion leverages its generative capabilities to synthesize diverse FPV trajectories and state-action pairs, facilitating the creation of large-scale training datasets without the high cost of real-world data collection. Our evaluation demonstrates that the generated trajectories are physically plausible and executable, with a mean position error of 0.25 m (RMSE 0.28 m) and a mean orientation error of 0.19 rad (RMSE 0.24 rad). This approach enables improved policy learning and dataset scalability, leading to superior performance in downstream navigation tasks. Results in simulated environments highlight enhanced robustness, smoother trajectory planning, and adaptability to unseen conditions. An ANOVA revealed no statistically significant difference between performance in simulation and reality (F(1, 16) = 0.394, p = 0.541), with success rates of M = 0.628 (SD = 0.162) and M = 0.617 (SD = 0.177), respectively, indicating strong sim-to-real transfer. The generated datasets provide a valuable resource for future UAV research. This work introduces diffusion-based reasoning as a promising paradigm for unifying navigation, action generation, and data synthesis in aerial robotics.
Problem

Research questions and friction points this paper is trying to address.

Generating realistic FPV drone videos from single frames
Creating synthetic training datasets to avoid costly real data collection
Enabling reasoning-driven navigation in dynamic environments through diffusion models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion models generate FPV video sequences
Synthesizes diverse trajectories and state-action pairs
Enables scalable training without real-world data collection
🔎 Similar Papers
No similar papers found.