A Survey of 3D Reconstruction with Event Cameras: From Event-based Geometry to Neural 3D Rendering

📅 2025-05-13
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This paper addresses the challenges of 3D reconstruction using event cameras under extreme conditions: high-speed motion, low illumination, and high dynamic range. It presents the first systematic survey of event-based 3D reconstruction methods, organizing them historically across three paradigms: geometric modeling, deep learning, and neural rendering (e.g., NeRF and 3D Gaussian Splatting), for stereo, monocular, and multimodal systems. The authors propose a two-dimensional taxonomy based on input modality and reconstruction paradigm, and identify four core challenges: event data sparsity, lack of standardized evaluation metrics, inconsistent geometric representations, and inadequate dynamic scene modeling. The contributions include: (i) constructing the first methodology map for event-driven 3D reconstruction; (ii) unifying all publicly available benchmark datasets; and (iii) providing a theoretical framework and practical guidelines for algorithm design, system deployment, and co-design with next-generation event sensors.

๐Ÿ“ Abstract
Event cameras have emerged as promising sensors for 3D reconstruction due to their ability to capture per-pixel brightness changes asynchronously. Unlike conventional frame-based cameras, they produce sparse and temporally rich data streams, which enable more accurate 3D reconstruction and open up the possibility of performing reconstruction in extreme environments such as high-speed motion, low light, or high dynamic range scenes. In this survey, we provide the first comprehensive review focused exclusively on 3D reconstruction using event cameras. The survey categorises existing works into three major types based on input modality (stereo, monocular, and multimodal systems), and further classifies them by reconstruction approach, including geometry-based, deep learning-based, and recent neural rendering techniques such as Neural Radiance Fields and 3D Gaussian Splatting. Within each of the finest-grained groups, methods with a similar research focus are organised chronologically. We also summarise public datasets relevant to event-based 3D reconstruction. Finally, we highlight current research limitations in data availability, evaluation, representation, and dynamic scene handling, and outline promising future research directions. This survey aims to serve as a comprehensive reference and a roadmap for future developments in event-driven 3D reconstruction.
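To make the sensing model in the abstract concrete: each pixel of an event camera fires independently whenever its log-intensity changes by more than a contrast threshold, emitting a tuple (x, y, t, polarity). The sketch below simulates this standard event-generation model from a frame sequence; it is illustrative only (the function name, threshold value, and frame-based approximation are assumptions, not code from the survey).

```python
import numpy as np

def events_from_frames(frames, timestamps, C=0.2, eps=1e-6):
    """Simulate an event stream from a sequence of intensity frames.

    A pixel (x, y) emits an event with polarity +1/-1 at time t whenever
    |log I(x, y, t) - log I(x, y, t_last)| >= C (the contrast threshold).
    Illustrative sketch; C = 0.2 is an assumed value.
    """
    log_ref = np.log(frames[0].astype(np.float64) + eps)  # log intensity at each pixel's last event
    events = []  # (x, y, t, polarity) tuples
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_now = np.log(frame.astype(np.float64) + eps)
        diff = log_now - log_ref
        ys, xs = np.nonzero(np.abs(diff) >= C)  # pixels whose change crossed the threshold
        for y, x in zip(ys, xs):
            events.append((int(x), int(y), t, 1 if diff[y, x] > 0 else -1))
            log_ref[y, x] = log_now[y, x]  # reset this pixel's reference level
    return events
```

A real sensor reports each event asynchronously per pixel rather than per frame; frame-based simulation like this is only an approximation, but it shows why the output stream is sparse (only changing pixels fire) and temporally rich (every event carries its own timestamp).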
Problem

Research questions and friction points this paper is trying to address.

Surveying 3D reconstruction methods using event cameras
Classifying approaches by input modality and reconstruction technique
Addressing limitations in data, evaluation, representation, and dynamic scene handling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Event cameras enable asynchronous per-pixel brightness capture
Geometry-based and deep learning approaches for 3D reconstruction (see the event-to-voxel sketch after this list)
Neural rendering techniques like NeRF and Gaussian Splatting
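Deep networks cannot consume the raw asynchronous stream directly, so a common practice in the event-vision literature (not a specific method from this survey) is to bin events into a fixed-size spatio-temporal tensor first. The sketch below shows one such voxel-grid conversion; the function name and binning details are illustrative assumptions.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Bin (x, y, t, polarity) events into a (num_bins, height, width) grid.

    Illustrative sketch of a widely used representation for feeding event
    streams to convolutional networks.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    xs, ys, ts, ps = (np.asarray(col) for col in zip(*events))
    # Normalise timestamps into [0, num_bins - 1] so events spread over bins.
    t0, t1 = float(ts.min()), float(ts.max())
    bins = ((ts - t0) / max(t1 - t0, 1e-9) * (num_bins - 1)).astype(int)
    # Accumulate signed polarities; np.add.at handles repeated indices.
    np.add.at(grid, (bins, ys.astype(int), xs.astype(int)), ps.astype(np.float32))
    return grid
```

The resulting tensor, e.g. events_to_voxel_grid(events, num_bins=5, height=480, width=640), can then be fed to a standard CNN backbone, which is one common way learning-based pipelines bridge asynchronous events and conventional network architectures.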
👥 Authors
Chuanzhi Xu
Student, The University of Sydney
Neuromorphic Vision, High-level Vision, Computational Aesthetics
Haoxian Zhou
School of Computer Science, The University of Sydney, NSW, Australia
Langyi Chen
MPhil, University of Sydney
Computer Vision, Artificial Intelligence, Deep Learning
Haodong Chen
School of Computer Science, The University of Sydney, NSW, Australia
Ying Zhou
School of Computer Science, The University of Sydney, NSW, Australia
Vera Chung
School of Computer Science, The University of Sydney, NSW, Australia
Qiang Qu
Professor, Chinese Academy of Sciences, Shenzhen Institutes of Advanced Technology
Blockchain, Data Intelligence, Data-intensive Systems, Data Mining