🤖 AI Summary
Atmospheric turbulence induces severe image blur, and existing methods struggle to balance accuracy and efficiency. This work is the first to reveal that turbulence causes polarity alternation in event cameras and that moving objects form distinctive "event tube" structures. Leveraging these insights, the authors propose a dual-module framework, comprising scene refinement and motion decoupling, that overcomes the conventional reliance on multiple frames. By modeling polarity-weighted gradients and imposing event-tube constraints, the method achieves low-latency, high-quality turbulence mitigation. Evaluated on two newly collected real-world event-frame turbulence datasets, the approach substantially outperforms state-of-the-art methods, particularly in dynamic-scene recovery, while reducing data overhead and system latency by 77.3% and 89.5%, respectively.
📝 Abstract
Turbulence mitigation (TM) is highly ill-posed due to the stochastic nature of atmospheric turbulence. Most methods rely on multiple frames recorded by conventional cameras to capture stable patterns in natural scenes. However, they inevitably face a trade-off between accuracy and efficiency: more frames improve restoration at the cost of higher system latency and larger data overhead. Event cameras, with microsecond temporal resolution and efficient sensing of dynamic changes, offer an opportunity to break this bottleneck. In this work, we present EHETM, a high-quality and efficient TM method that exploits the strength of events in modeling motion across continuous sequences. We discover two key phenomena: (1) turbulence-induced events exhibit distinct polarity alternation correlated with sharp image gradients, providing structural cues for restoring scenes; and (2) dynamic objects form spatiotemporally coherent "event tubes," in contrast to the irregular patterns of turbulent events, providing motion priors for disentangling objects from turbulence. Based on these insights, we design two complementary modules that respectively leverage polarity-weighted gradients for scene refinement and event-tube constraints for motion decoupling, achieving high-quality restoration from only a few frames. Furthermore, we construct two real-world event-frame turbulence datasets covering atmospheric and thermal cases. Experiments show that EHETM outperforms SOTA methods, especially in scenes with dynamic objects, while reducing data overhead and system latency by approximately 77.3% and 89.5%, respectively. Our code is available at: https://github.com/Xavier667/EHETM.
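To make the polarity-alternation insight concrete, here is a minimal sketch of how one might count polarity flips per pixel from an event stream and use them to weight image gradients. This is purely illustrative: the function names, the flip-counting heuristic, and the gradient weighting are assumptions for exposition, not the paper's actual (likely learned) scene-refinement module.

```python
import numpy as np

def polarity_alternation_map(events, shape):
    """Count polarity sign flips per pixel.

    `events` is an (N, 4) array of (x, y, t, p) rows with p in {-1, +1},
    assumed sorted by timestamp t. Per the paper's observation, turbulence
    drives polarity to alternate near sharp image gradients, so a high
    flip count hints at stable scene structure. (Hypothetical helper.)
    """
    alt = np.zeros(shape, dtype=np.int32)
    last_p = {}  # last polarity seen at each pixel
    for x, y, t, p in events:
        key = (int(x), int(y))
        if key in last_p and last_p[key] != p:
            alt[int(y), int(x)] += 1
        last_p[key] = p
    return alt

def polarity_weighted_gradient(img, alt_map):
    """Weight image-gradient magnitude by normalized alternation counts,
    emphasizing edges that the events corroborate. (Illustrative stand-in
    for the paper's polarity-weighted gradient modeling.)"""
    gy, gx = np.gradient(img.astype(np.float64))
    grad = np.hypot(gx, gy)
    w = alt_map / max(alt_map.max(), 1)
    return grad * w
```

A quick usage example: three events at pixel (x=2, y=3) with polarities +1, -1, +1 yield two flips at that pixel, so only its gradient survives the weighting.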