E-4DGS: High-Fidelity Dynamic Reconstruction from the Multi-view Event Cameras

📅 2025-08-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing RGB-based novel view synthesis and 4D reconstruction methods perform poorly under low-light conditions and high-speed motion because they depend on adequate illumination, suffer from motion blur, and have a narrow dynamic range. To address this, the paper proposes E-4DGS, the first event-driven dynamic Gaussian splatting framework. The method introduces an event-based initialization scheme for stable training, event-adaptive slicing splatting for time-aware reconstruction, and intensity importance pruning to suppress floating artifacts and improve 3D consistency, together with an adaptive contrast threshold for more precise optimization. Evaluated on a newly constructed multi-view event-stream benchmark (six moving event cameras in a 360-degree configuration), the approach outperforms state-of-the-art event-only and event-RGB fusion baselines, with the largest gains in reconstruction accuracy and geometric fidelity appearing in high-speed motion and low-light scenarios.
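
The paper's code is not reproduced here; the following is a minimal sketch of the event-generation-model supervision that event-driven splatting methods of this kind typically optimize, assuming a hypothetical `render(t)` function that rasterizes the Gaussians at timestamp `t`. Each event encodes a log-intensity step of one contrast threshold `C`, so the rendered brightness change over a slice should match `C` times the signed event count:

```python
import torch
import torch.nn.functional as F

def event_supervision_loss(render, t0, t1, event_map, C=0.25, eps=1e-6):
    """L1 loss between the rendered log-brightness change over [t0, t1]
    and the change implied by the accumulated events.

    render:    hypothetical callable mapping a timestamp to an (H, W, 3)
               rendered image with values in [0, 1]
    event_map: (H, W) signed event count (sum of +/-1 polarities) in [t0, t1]
    C:         contrast threshold of the event camera (assumed known here;
               the paper instead optimizes it adaptively)
    """
    # Grayscale renders at the two slice boundaries.
    I0 = render(t0).mean(dim=-1)
    I1 = render(t1).mean(dim=-1)

    # Event generation model: accumulated log-intensity change ~ C * events.
    pred_delta = torch.log(I1 + eps) - torch.log(I0 + eps)
    target_delta = C * event_map

    return F.l1_loss(pred_delta, target_delta)
```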

📝 Abstract
Novel view synthesis and 4D reconstruction techniques predominantly rely on RGB cameras, thereby inheriting inherent limitations such as the dependence on adequate lighting, susceptibility to motion blur, and a limited dynamic range. Event cameras, offering advantages of low power, high temporal resolution and high dynamic range, have brought a new perspective to addressing the scene reconstruction challenges in high-speed motion and low-light scenes. To this end, we propose E-4DGS, the first event-driven dynamic Gaussian Splatting approach, for novel view synthesis from multi-view event streams with fast-moving cameras. Specifically, we introduce an event-based initialization scheme to ensure stable training and propose event-adaptive slicing splatting for time-aware reconstruction. Additionally, we employ intensity importance pruning to eliminate floating artifacts and enhance 3D consistency, while incorporating an adaptive contrast threshold for more precise optimization. We design a synthetic multi-view camera setup with six moving event cameras surrounding the object in a 360-degree configuration and provide a benchmark multi-view event stream dataset that captures challenging motion scenarios. Our approach outperforms both event-only and event-RGB fusion baselines and paves the way for the exploration of multi-view event-based reconstruction as a novel approach for rapid scene capture.
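
The abstract names event-adaptive slicing splatting but does not state the slicing rule; a common heuristic, and the assumption behind this sketch, is to cut slices by event count rather than by fixed duration, so that fast motion receives finer temporal resolution:

```python
import numpy as np

def adaptive_slices(timestamps, events_per_slice=50_000):
    """Cut a sorted event stream into variable-length time windows that
    each hold roughly the same number of events.

    timestamps: sorted 1-D float array of event times (seconds)
    Returns a list of (t_start, t_end) windows; dense bursts of events
    (fast motion) yield short windows, quiet periods yield long ones.
    """
    slices = []
    for i in range(0, len(timestamps), events_per_slice):
        chunk = timestamps[i:i + events_per_slice]
        slices.append((float(chunk[0]), float(chunk[-1])))
    return slices

# Example: one million event times over two seconds -> 20 windows.
ts = np.sort(np.random.uniform(0.0, 2.0, size=1_000_000))
print(len(adaptive_slices(ts)))  # 20
```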
Problem

Research questions and friction points this paper is trying to address.

Dynamic scene reconstruction from multi-view event cameras
Overcoming RGB camera limitations in high-speed motion
Novel view synthesis in low-light and fast-moving scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Event-driven dynamic Gaussian Splatting approach
Event-based initialization for stable training
Intensity importance pruning for artifact elimination (sketched below)
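
The paper does not spell out the pruning criterion; a minimal sketch, assuming the rasterizer tracks each Gaussian's accumulated blending weight (its total contribution to rendered intensity across training views) and the lowest-scoring tail is dropped as floaters:

```python
import torch

def intensity_importance_prune(accum_weight, keep_ratio=0.9):
    """Keep the Gaussians with the highest accumulated contribution.

    accum_weight: (N,) sum of alpha-blending weights each Gaussian
                  received during rendering (a hypothetical statistic
                  the rasterizer would need to track)
    Returns a boolean keep mask of shape (N,).
    """
    k = max(1, int(keep_ratio * accum_weight.numel()))
    threshold = torch.topk(accum_weight, k).values.min()
    return accum_weight >= threshold
```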