Approximate Imitation Learning for Event-based Quadrotor Flight in Cluttered Environments

📅 2026-03-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of high-speed quadrotor flight under motion blur, where conventional cameras fail and event cameras—despite their high temporal resolution—pose difficulties for efficient control policy training. The authors propose an end-to-end neural network that directly maps event data to control commands, coupled with an approximate imitation learning framework. This framework leverages offline simulation data to learn task representations and integrates lightweight state information for online policy refinement, thereby circumventing costly event rendering. By decoupling representation learning from policy optimization, the approach substantially reduces simulation overhead and achieves, for the first time, efficient, purely event-driven high-speed obstacle avoidance. Real-world experiments demonstrate robust flight at up to 9.8 m/s through complex environments, outperforming standard imitation learning baselines.

📝 Abstract
Event cameras offer high temporal resolution and low latency, making them ideal sensors for high-speed robotic applications where conventional cameras suffer from image degradations such as motion blur. In addition, their low power consumption can enhance endurance, which is critical for resource-constrained platforms. Motivated by these properties, we present a novel approach that enables a quadrotor to fly through cluttered environments at high speed by perceiving the environment with a single event camera. Our proposed method employs an end-to-end neural network trained to map event data directly to control commands, eliminating the reliance on standard cameras. To enable efficient training in simulation, where rendering synthetic event data is computationally expensive, we propose Approximate Imitation Learning, a novel imitation learning framework. Our approach leverages a large-scale offline dataset to learn a task-specific representation space. Subsequently, the policy is trained through online interactions that rely solely on lightweight, simulated state information, eliminating the need to render events during training. This enables the efficient training of event-based control policies for fast quadrotor flight, highlighting the potential of our framework for other modalities where data simulation is costly or impractical. Our approach outperforms standard imitation learning baselines in simulation and demonstrates robust performance in real-world flight tests, achieving speeds up to 9.8 m/s in cluttered environments.
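The core idea of the abstract — learn a representation from offline event data, then train the policy using only cheap simulator state so no events need to be rendered online — can be sketched as two decoupled fitting stages. The sketch below is a minimal conceptual illustration, not the paper's implementation: all names, dimensions, and the linear least-squares stand-ins for the neural networks are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper)
n, d_event, d_state, d_repr, d_act = 512, 64, 12, 16, 4
events = rng.normal(size=(n, d_event))  # stand-in for offline event features
states = rng.normal(size=(n, d_state))  # lightweight simulated state

# --- Phase 1: offline representation learning ---
# Define a task-specific representation from state (a crude proxy for the
# learned representation space) and fit an event encoder toward it.
W_state = rng.normal(size=(d_state, d_repr))  # fixed state -> repr projection
z_target = states @ W_state
W_event, *_ = np.linalg.lstsq(events, z_target, rcond=None)

# --- Phase 2: online policy training without event rendering ---
# The policy is trained on representations computed from state alone,
# mimicking online interactions that never touch an event renderer.
expert_actions = states @ rng.normal(size=(d_state, d_act))  # mock expert labels
W_policy, *_ = np.linalg.lstsq(z_target, expert_actions, rcond=None)

# --- Deployment: real event features pass through the offline encoder ---
z_live = events @ W_event       # encoder trained in Phase 1
actions = z_live @ W_policy     # policy trained in Phase 2
print(actions.shape)            # (512, 4)
```

Because the two phases only share the representation space, the expensive modality (events) is touched exclusively offline, which is what makes the framework attractive for any sensor whose simulation is costly.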
Problem

Research questions and friction points this paper is trying to address.

event-based vision
quadrotor flight
cluttered environments
imitation learning
high-speed navigation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Approximate Imitation Learning
event camera
quadrotor flight
end-to-end control
simulation-efficient training