🤖 AI Summary
To address the degraded 3D reconstruction performance of standard frame-based cameras under low-light and motion-blurred conditions, this paper proposes an event-driven lightweight 3D Gaussian Splatting (3DGS) reconstruction framework. Methodologically, it introduces: (1) a hardware-cooperative event camera with tunable transmittance, enabling controllable-exposure events, leveraged for the first time to assist 3DGS training; (2) a dual-stream event modeling architecture for exposure and motion, coupled with a tri-modal adaptive reconstruction strategy; and (3) the EME-3D dataset, the first real-world event-based 3D dataset featuring calibrated sensor parameters and ground-truth point clouds. Quantitative and qualitative evaluations on the EventNeRF dataset and EME-3D demonstrate significant improvements over EventNeRF and RGB+event baselines, particularly in low-light and overexposed scenarios, where fine geometric details are better preserved. The method achieves faster reconstruction and lower hardware cost while maintaining high fidelity.
📝 Abstract
Achieving 3D reconstruction from images captured under optimal conditions has been extensively studied in the vision and imaging fields. However, in real-world scenarios, challenges such as motion blur and insufficient illumination often limit the performance of standard frame-based cameras in delivering high-quality images. To address these limitations, we incorporate a transmittance adjustment device at the hardware level, enabling event cameras to capture both motion and exposure events for diverse 3D reconstruction scenarios. Motion events (triggered by camera or object movement) are collected in fast-motion scenarios when the device is inactive, while exposure events (generated through controlled camera exposure) are captured during slower motion to reconstruct grayscale images for high-quality training and optimization of event-based 3D Gaussian Splatting (3DGS). Our framework supports three modes: High-Quality Reconstruction using exposure events, Fast Reconstruction relying on motion events, and a Balanced Hybrid mode that optimizes with initial exposure events before switching to high-speed motion events. On the EventNeRF dataset, we demonstrate that exposure events significantly improve fine detail reconstruction compared to motion events and outperform frame-based cameras under challenging conditions such as low illumination and overexposure. Furthermore, we introduce EME-3D, a real-world 3D dataset with exposure events, motion events, camera calibration parameters, and sparse point clouds. Our method achieves faster and higher-quality reconstruction than event-based NeRF methods and is more cost-effective than methods combining event and RGB data. E-3DGS sets a new benchmark for event-based 3D reconstruction, with robust performance in challenging conditions and lower hardware demands. The source code and dataset will be available at https://github.com/MasterHow/E-3DGS.
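The core idea of turning exposure events into training frames can be illustrated with a minimal sketch. Event cameras report per-pixel log-brightness changes as signed polarity events, so summing polarities (scaled by the contrast threshold) approximates a log-intensity map that can be exponentiated into a grayscale image. All names, thresholds, and the toy event list below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def events_to_grayscale(events, height, width, contrast_threshold=0.2):
    """Integrate signed event polarities into a grayscale frame.

    Each event is (x, y, t, polarity) with polarity in {+1, -1}.
    Under a controlled exposure (e.g. a transmittance ramp), the
    accumulated log-brightness change per pixel approximates the
    scene's log intensity. Parameter values here are illustrative.
    """
    log_intensity = np.zeros((height, width), dtype=np.float64)
    for x, y, t, p in events:
        # Each event contributes one contrast-threshold step in log space.
        log_intensity[y, x] += p * contrast_threshold
    # Exponentiate back to linear intensity and normalize to 8-bit.
    img = np.exp(log_intensity)
    img = (img - img.min()) / max(img.max() - img.min(), 1e-8)
    return (img * 255).astype(np.uint8)

# Toy example: pixel (1, 1) brightens twice, pixel (0, 0) darkens once.
events = [(1, 1, 0.0, +1), (1, 1, 0.1, +1), (0, 0, 0.2, -1)]
frame = events_to_grayscale(events, height=2, width=2)
```

Frames reconstructed this way can then supervise 3DGS optimization in place of conventional camera images.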