Dark-EvGS: Event Camera as an Eye for Radiance Field in the Dark

📅 2025-07-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In low-light conditions, conventional cameras suffer from limited dynamic range and motion blur, hindering acquisition of high-quality multi-view imagery and thus impeding radiance field reconstruction. To address this, we propose an event-camera-guided 3D Gaussian Splatting (GS) framework—the first to leverage asynchronous event streams for radiance field modeling and bright-frame synthesis under extreme illumination constraints. To tackle challenges including high event noise, poor reconstruction fidelity, and chromatic inconsistency in darkness, we introduce a triplet-level supervision scheme jointly optimizing geometric accuracy, radiometric consistency, and color coherence, along with a lightweight color-matching module. Experiments on real-world low-light datasets demonstrate that our method significantly improves radiance field reconstruction accuracy and yields sharper, more photorealistic, and color-faithful novel-view bright frames—outperforming state-of-the-art approaches across all major metrics.

Technology Category

Application Category

📝 Abstract
In low-light environments, conventional cameras often struggle to capture clear multi-view images of objects due to dynamic range limitations and motion blur caused by long exposure. Event cameras, with their high-dynamic range and high-speed properties, have the potential to mitigate these issues. Additionally, 3D Gaussian Splatting (GS) enables radiance field reconstruction, facilitating bright frame synthesis from multiple viewpoints in low-light conditions. However, naively using an event-assisted 3D GS approach still faced challenges because, in low light, events are noisy, frames lack quality, and the color tone may be inconsistent. To address these issues, we propose Dark-EvGS, the first event-assisted 3D GS framework that enables the reconstruction of bright frames from arbitrary viewpoints along the camera trajectory. Triplet-level supervision is proposed to gain holistic knowledge, granular details, and sharp scene rendering. The color tone matching block is proposed to guarantee the color consistency of the rendered frames. Furthermore, we introduce the first real-captured dataset for the event-guided bright frame synthesis task via 3D GS-based radiance field reconstruction. Experiments demonstrate that our method achieves better results than existing methods, conquering radiance field reconstruction under challenging low-light conditions. The code and sample data are included in the supplementary material.
Problem

Research questions and friction points this paper is trying to address.

Reconstructing bright frames from noisy low-light event camera data
Ensuring color consistency in rendered frames during radiance field reconstruction
Overcoming motion blur and dynamic range issues in dark environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Event-assisted 3D Gaussian Splatting framework
Triplet-level supervision for sharp rendering
Color tone matching for consistency
🔎 Similar Papers
No similar papers found.