AI Summary
This work addresses the challenge of high-frame-rate imaging of fast-moving, non-rigid extended objects under strong atmospheric turbulence, where scene motion and turbulence-induced distortions are difficult to disentangle. The authors propose the first event-based light-field imaging system that integrates multi-view event data with a machine learning-driven reconstruction algorithm to effectively decouple true object motion from turbulent perturbations. By leveraging the high temporal resolution of event cameras and the spatial diversity of light-field sensing, the method overcomes the perceptual limitations of conventional event-based imaging in turbulent environments. Experimental validation on a desktop-scale setup demonstrates successful high-speed, clear imaging of targets moving at speeds up to 16,000 pixels per second under severe turbulence conditions.
Abstract
This work introduces and demonstrates the first system capable of imaging fast-moving extended non-rigid objects through strong atmospheric turbulence at high frame rates. Event cameras are a novel sensing architecture capable of estimating high-speed imagery at thousands of frames per second. However, on their own, event cameras are unable to disambiguate scene motion from turbulence. In this work, we overcome this limitation using event-based light field cameras: by simultaneously capturing multiple views of a scene, event-based light field cameras and machine learning-based reconstruction algorithms are able to disambiguate motion-induced dynamics, which produce events that are strongly correlated across views, from turbulence-induced dynamics, which produce events that are weakly correlated across views. Tabletop experiments demonstrate that event-based light field cameras can overcome strong turbulence while imaging high-speed objects traveling at up to 16,000 pixels per second.
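The core intuition above — motion-induced events are shared across views while turbulence-induced events are view-specific — can be illustrated with a minimal toy simulation. This is not the paper's reconstruction algorithm; it is a hedged sketch assuming we model each view's event activity as a shared motion signal plus independent turbulence noise, and measure the cross-view Pearson correlation. All names (`view_a`, `cross_view_corr`, etc.) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000  # number of time bins in the simulated event signals

# Motion-induced component: the same object motion drives events in every view,
# so this signal is shared (strongly correlated) across views.
motion = np.sin(np.linspace(0.0, 20.0 * np.pi, T))

# Turbulence-induced component: modeled here as independent noise per view,
# so it is only weakly correlated across views.
turb_a = rng.normal(scale=1.0, size=T)
turb_b = rng.normal(scale=1.0, size=T)

view_a = motion + turb_a
view_b = motion + turb_b


def cross_view_corr(a, b):
    """Pearson correlation between two views' event signals."""
    return float(np.corrcoef(a, b)[0, 1])


corr_with_motion = cross_view_corr(view_a, view_b)  # noticeably positive
corr_turb_only = cross_view_corr(turb_a, turb_b)    # near zero

print(f"cross-view correlation with shared motion: {corr_with_motion:.3f}")
print(f"cross-view correlation, turbulence only:   {corr_turb_only:.3f}")
```

Under this toy model, thresholding the cross-view correlation separates the shared motion component from the per-view turbulence, which is the same statistical cue the abstract attributes to the light-field design (the actual system uses a learned reconstruction rather than a simple correlation test).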