E2EGS: Event-to-Edge Gaussian Splatting for Pose-Free 3D Reconstruction

📅 2026-03-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes E2EGS, the first event-driven 3D reconstruction method that operates entirely without pose priors, addressing a key limitation of existing approaches that either rely on known camera poses or are constrained by initial depth estimates. By exploiting the spatio-temporal consistency of edge regions in event streams, the method extracts structured edge information to guide the initialization and optimization of 3D Gaussian splats. Furthermore, it introduces an edge-weighted loss into both tracking and bundle adjustment stages to enhance geometric fidelity. Experiments demonstrate that E2EGS significantly outperforms current state-of-the-art methods on both synthetic and real-world datasets, achieving superior reconstruction quality and trajectory accuracy, thereby validating the feasibility of high-fidelity, purely event-based 3D scene reconstruction.
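The edge-weighted loss mentioned above can be illustrated with a minimal sketch. The paper does not give the exact formulation, so the function name, the L1 base loss, and the weight value below are assumptions; the idea is simply to up-weight residuals at pixels flagged as edges so that structural regions dominate tracking and bundle-adjustment optimization.

```python
import numpy as np

def edge_weighted_loss(pred, target, edge_mask, w_edge=2.0):
    """Hypothetical edge-weighted L1 loss (names and weight are assumptions).

    Pixels flagged as edges receive a larger weight, so residuals along
    edges contribute more to the tracking / bundle-adjustment objective.
    """
    weights = np.where(edge_mask, w_edge, 1.0)   # edge pixels weighted higher
    return np.mean(weights * np.abs(pred - target))
```

In practice such a weight map would come from the extracted edge mask and could be made continuous (e.g. a distance transform of the edges) rather than binary.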

📝 Abstract
The emergence of neural radiance fields (NeRF) and 3D Gaussian splatting (3DGS) has advanced novel view synthesis (NVS). These methods, however, require high-quality RGB inputs and accurate corresponding poses, limiting robustness under real-world conditions such as fast camera motion or adverse lighting. Event cameras, which capture brightness changes at each pixel with high temporal resolution and wide dynamic range, enable precise sensing of dynamic scenes and offer a promising solution. However, existing event-based NVS methods either assume known poses or rely on depth estimation models that are bounded by their initial observations, failing to generalize as the camera traverses previously unseen regions. We present E2EGS, a pose-free framework operating solely on event streams. Our key insight is that edge information provides rich structural cues essential for accurate trajectory estimation and high-quality NVS. To extract edges from noisy event streams, we exploit the distinct spatio-temporal characteristics of edges and non-edge regions. The event camera's movement induces consistent events along edges, while non-edge regions produce sparse noise. We leverage this through a patch-based temporal coherence analysis that measures local variance to extract edges while robustly suppressing noise. The extracted edges guide structure-aware Gaussian initialization and enable edge-weighted losses throughout initialization, tracking, and bundle adjustment. Extensive experiments on both synthetic and real datasets demonstrate that E2EGS achieves superior reconstruction quality and trajectory accuracy, establishing a fully pose-free paradigm for event-based 3D reconstruction.
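The patch-based temporal coherence analysis described in the abstract can be sketched as follows. The paper does not publish its implementation, so the function, thresholds, and statistics below are assumptions: a patch is treated as an edge region when it accumulates enough events whose timestamps cluster tightly (low temporal variance), while sparse noise in non-edge regions fails the count test.

```python
import numpy as np

def extract_edge_patches(events, h, w, patch=4, min_events=3, var_thresh=0.01):
    """Hypothetical patch-based temporal-coherence edge extraction.

    events: (N, 3) array of (x, y, t), with t normalized to [0, 1].
    Returns a (h//patch, w//patch) boolean mask of candidate edge patches.
    """
    ph, pw = h // patch, w // patch
    counts = np.zeros((ph, pw))   # events per patch
    sums = np.zeros((ph, pw))     # sum of timestamps per patch
    sq = np.zeros((ph, pw))       # sum of squared timestamps per patch
    for x, y, t in events:
        i, j = int(y) // patch, int(x) // patch
        counts[i, j] += 1
        sums[i, j] += t
        sq[i, j] += t * t
    # Per-patch timestamp mean and variance (zero where a patch is empty).
    mean = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)
    var = np.divide(sq, counts, out=np.zeros_like(sq), where=counts > 0) - mean**2
    # Edge patches: dense AND temporally coherent; sparse noise is rejected.
    return (counts >= min_events) & (var < var_thresh)
```

The resulting mask could then seed the structure-aware Gaussian initialization and the edge-weighted losses the abstract describes; a real implementation would likely process events in sliding temporal windows rather than a single normalized batch.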
Problem

Research questions and friction points this paper is trying to address.

pose-free
event camera
3D reconstruction
novel view synthesis
trajectory estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Event-based reconstruction
Pose-free 3DGS
Edge extraction
Temporal coherence
Gaussian splatting