Event-Aided Sharp Radiance Field Reconstruction for Fast-Flying Drones

📅 2026-02-24
📈 Citations: 0 · Influential citations: 0
🤖 AI Summary
This work addresses the challenge of reconstructing high-quality 3D radiance fields from fast-moving drones, where motion blur and pose drift severely degrade standard NeRF performance. To overcome this, the authors propose a NeRF framework that incorporates event camera data: an unsupervised joint optimization fuses asynchronous event streams with motion-blurred images to simultaneously refine the neural radiance field and the pose priors from an event-based visual-inertial odometry (Event-VIO) front end. Without requiring ground-truth supervision, the approach significantly improves both radiance field sharpness and trajectory accuracy in high-speed scenarios. Evaluated on real-world high-velocity drone flight data, the method achieves over a 50% improvement in reconstruction quality compared to existing approaches, preserving fine details and demonstrating the efficacy of event-image fusion for radiance field reconstruction under aggressive motion.
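The summary hinges on two measurement models that couple the raw sensors to the radiance field: an event generation model (rendered log-intensity change should explain the accumulated signed event count) and a blur formation model (a blurry frame is the average of sharp renders along the intra-exposure trajectory). Below is a minimal PyTorch sketch of how such a joint, self-supervised objective could look; the function names, the contrast threshold `C`, and the toy renderer in the demo are all illustrative assumptions, not the paper's released code.

```python
# Illustrative sketch only; interface and constants are assumptions.
# Assumes a differentiable renderer `render(pose)` that returns an
# (H, W) log-intensity image from the current radiance field.
import torch

C = 0.25  # assumed per-event contrast threshold (hypothetical value)

def event_loss(render, pose_t0, pose_t1, signed_event_count):
    """Event generation model: the change in rendered log-intensity
    between two instants should match C times the per-pixel signed
    event count accumulated over that interval."""
    pred_delta = render(pose_t1) - render(pose_t0)
    return torch.mean((pred_delta - C * signed_event_count) ** 2)

def blur_loss(render, exposure_poses, blurry_log_frame):
    """Blur formation model: a motion-blurred frame is the average of
    sharp renders along the intra-exposure trajectory, taken in linear
    intensity space (hence the exp/log round trip)."""
    sharp_logs = torch.stack([render(p) for p in exposure_poses])  # (K, H, W)
    pred_blur_log = torch.log(torch.exp(sharp_logs).mean(dim=0))
    return torch.mean((pred_blur_log - blurry_log_frame) ** 2)

def joint_loss(render, exposure_poses, blurry_log_frame,
               signed_event_count, w_event=1.0, w_blur=1.0):
    """Unsupervised objective: both terms use only raw sensor data, so
    the radiance field and the poses can be optimized jointly."""
    le = event_loss(render, exposure_poses[0], exposure_poses[-1],
                    signed_event_count)  # events over the exposure window
    lb = blur_loss(render, exposure_poses, blurry_log_frame)
    return w_event * le + w_blur * lb

if __name__ == "__main__":
    # Toy smoke test: a random "field" and a renderer whose output
    # depends on both the field and the pose, so gradients flow to both.
    H, W = 8, 8
    field = torch.randn(H, W, requires_grad=True)
    def render(pose):
        return field + 0.1 * pose.sum()
    poses = [torch.tensor([0.01 * k]) for k in range(5)]
    loss = joint_loss(render, poses, torch.zeros(H, W), torch.zeros(H, W))
    loss.backward()  # gradients reach the field parameters
    print(float(loss))
```

Averaging in linear intensity rather than log-intensity matters in the blur term: camera pixels integrate photons linearly during the exposure, so averaging the log-images directly would model a different, physically incorrect blur.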

📝 Abstract
Fast-flying aerial robots promise rapid inspection despite tight battery budgets, with direct applications in infrastructure inspection, terrain exploration, and search and rescue. However, high speeds lead to severe motion blur in images and induce significant drift and noise in pose estimates, making dense 3D reconstruction with Neural Radiance Fields (NeRFs) particularly challenging due to their high sensitivity to such degradations. In this work, we present a unified framework that leverages asynchronous event streams alongside motion-blurred frames to reconstruct high-fidelity radiance fields from agile drone flights. By embedding event-image fusion into NeRF optimization and jointly refining event-based visual-inertial odometry priors using both event and frame modalities, our method recovers sharp radiance fields and accurate camera trajectories without ground-truth supervision. We validate our approach on both synthetic data and real-world sequences captured by a fast-flying drone. Despite highly dynamic drone flights, where RGB frames are severely degraded by motion blur and pose priors become unreliable, our method reconstructs high-fidelity radiance fields and preserves fine scene details, delivering a performance gain of over 50% on real-world data compared to state-of-the-art methods.
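The abstract's "jointly refining event-based visual-inertial odometry priors" can be pictured as attaching a small learnable se(3) correction to each VIO pose and letting the radiance-field losses, plus a prior-consistency penalty, drive those corrections. The sketch below is one plausible realization under that assumption; the class name, the penalty weight `w_prior`, and the Rodrigues-based exponential map are illustrative choices, not taken from the paper.

```python
# Hypothetical pose-refinement sketch; not the paper's published code.
import torch

def skew(w: torch.Tensor) -> torch.Tensor:
    """Skew-symmetric matrix of a 3-vector (built with stack to keep autograd)."""
    zero = torch.zeros((), dtype=w.dtype)
    return torch.stack([
        torch.stack([zero, -w[2],  w[1]]),
        torch.stack([ w[2],  zero, -w[0]]),
        torch.stack([-w[1],  w[0],  zero]),
    ])

def so3_exp(w: torch.Tensor) -> torch.Tensor:
    """Rodrigues' formula: axis-angle vector (3,) -> rotation matrix (3, 3)."""
    theta = torch.linalg.norm(w) + 1e-8
    W = skew(w)
    return (torch.eye(3, dtype=w.dtype)
            + torch.sin(theta) / theta * W
            + (1.0 - torch.cos(theta)) / theta ** 2 * (W @ W))

class RefinedPoses(torch.nn.Module):
    """Learnable se(3) corrections on top of frozen Event-VIO pose priors."""
    def __init__(self, prior_R: torch.Tensor, prior_t: torch.Tensor):
        super().__init__()
        n = prior_t.shape[0]
        self.register_buffer("prior_R", prior_R)          # (N, 3, 3) VIO rotations
        self.register_buffer("prior_t", prior_t)          # (N, 3) VIO translations
        self.dw = torch.nn.Parameter(torch.zeros(n, 3))   # rotation corrections
        self.dt = torch.nn.Parameter(torch.zeros(n, 3))   # translation corrections

    def forward(self, i: int):
        """Refined pose for frame i: left-multiplied rotation correction,
        additive translation correction."""
        R = so3_exp(self.dw[i]) @ self.prior_R[i]
        t = self.prior_t[i] + self.dt[i]
        return R, t

    def prior_penalty(self, w_prior: float = 1e-2) -> torch.Tensor:
        """Keeps refined poses near the VIO priors so optimization stays stable
        when the photometric and event losses are uninformative."""
        return w_prior * (self.dw.pow(2).sum() + self.dt.pow(2).sum())
```

In joint training, gradients from the event and blur reconstruction losses would flow through `forward` into `dw` and `dt`, so trajectory refinement and radiance-field sharpening reinforce each other rather than being solved in separate stages.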
Problem

Research questions and friction points this paper addresses.

motion blur
pose drift
Neural Radiance Fields
fast-flying drones
3D reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Event Camera
Neural Radiance Fields
Motion Blur
Visual-Inertial Odometry
Event-Image Fusion