Event Camera Guided Visual Media Restoration & 3D Reconstruction: A Survey

📅 2025-09-12
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This survey examines how event cameras, which asynchronously report per-pixel brightness changes, can be fused with conventional frame-based capture to improve visual media restoration and 3D reconstruction. It systematically reviews deep learning contributions along two axes: temporal enhancement (frame interpolation, motion deblurring) and spatial enhancement (super-resolution, low-light and HDR enhancement, and artifact reduction), and traces how event-driven fusion is reshaping the 3D reconstruction domain. The survey also compiles a comprehensive list of openly available paired event-frame datasets to enable reproducible research and benchmarking, consolidating recent progress to guide future work on event-guided restoration under challenging conditions such as fast motion and extreme lighting.

๐Ÿ“ Abstract
Event cameras are bio-inspired sensors that asynchronously capture per-pixel brightness changes and output a stream of events encoding the polarity, location, and time of each change. These systems are witnessing rapid advancement as an emerging field, driven by their low latency, reduced power consumption, and ultra-high capture rates. This survey explores the evolution of fusing event streams with traditional frame-based capture, highlighting how this synergy significantly benefits various video restoration and 3D reconstruction tasks. The paper systematically reviews major deep learning contributions to image/video enhancement and restoration, focusing on two dimensions: temporal enhancement (such as frame interpolation and motion deblurring) and spatial enhancement (including super-resolution, low-light and HDR enhancement, and artifact reduction). The paper also explores how the 3D reconstruction domain evolves with the advancement of event-driven fusion. Diverse topics are covered, with in-depth discussions of recent works for improving visual quality under challenging conditions. Additionally, the survey compiles a comprehensive list of openly available datasets, enabling reproducible research and benchmarking. By consolidating recent progress and insights, this survey aims to inspire further research into leveraging event camera systems, especially in combination with deep learning, for advanced visual media restoration and enhancement.
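To make the event representation described above concrete, here is a minimal sketch (not from the survey) of how an event stream of (time, x, y, polarity) tuples is commonly accumulated into a dense 2D image aligned with the frame grid — a typical first step when fusing events with intensity frames. The function name and the toy event values are illustrative assumptions.

```python
import numpy as np

def accumulate_events(events, height, width):
    """Sum event polarities per pixel into a single-channel image.

    Each event is a tuple (t, x, y, p): timestamp t in seconds, pixel
    location (x, y), and polarity p in {-1, +1} giving the sign of the
    per-pixel log-brightness change.
    """
    img = np.zeros((height, width), dtype=np.float32)
    for t, x, y, p in events:
        img[y, x] += p  # positive and negative events cancel at a pixel
    return img

# Toy stream: two positive events at pixel (x=2, y=1), one negative at (0, 0).
events = [(0.001, 2, 1, +1), (0.002, 2, 1, +1), (0.003, 0, 0, -1)]
frame = accumulate_events(events, height=4, width=4)
print(frame[1, 2])  # 2.0
print(frame[0, 0])  # -1.0
```

In practice, methods surveyed here typically use richer representations (e.g. time-binned voxel grids) rather than a single accumulation image, but the grid-alignment idea is the same.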
Problem

Research questions and friction points this paper is trying to address.

Surveying event camera fusion for video restoration tasks
Exploring event-based 3D reconstruction advancements and methods
Reviewing deep learning approaches for spatial-temporal enhancement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fusing event-stream with frame-based capture
Deep learning for temporal and spatial enhancement
Event-driven fusion advancing 3D reconstruction