SaENeRF: Suppressing Artifacts in Event-based Neural Radiance Fields

📅 2025-04-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the concurrent geometric distortion and photometric artifacts that arise when reconstructing scenes from event-camera data, this paper proposes a robust Event-NeRF framework. Methodologically, it introduces (1) a radiance-change normalization mechanism based on polarity-aware event accumulation to mitigate inherent event-stream noise, and (2) a dual-objective regularization loss that jointly suppresses sub-threshold artifacts and enhances brightness contrast at pixels with non-zero events. The method enables end-to-end, self-supervised optimization from raw event streams to neural radiance fields, incorporating polarity-guided radiance modeling. Experiments demonstrate substantial improvements in reconstruction density, geometric consistency, and photometric fidelity for static scenes. Quantitatively, the approach achieves superior PSNR and SSIM scores; qualitatively, it outperforms existing state-of-the-art methods in visual quality and structural accuracy.

📝 Abstract
Event cameras are neuromorphic vision sensors that asynchronously capture changes in logarithmic brightness, offering significant advantages such as low latency, low power consumption, low bandwidth, and high dynamic range. While these characteristics make them ideal for high-speed scenarios, reconstructing geometrically consistent and photometrically accurate 3D representations from event data remains fundamentally challenging. Current event-based Neural Radiance Fields (NeRF) methods partially address these challenges but suffer from persistent artifacts caused by aggressive network learning in early stages and the inherent noise of event cameras. To overcome these limitations, we present SaENeRF, a novel self-supervised framework that effectively suppresses artifacts and enables 3D-consistent, dense, and photorealistic NeRF reconstruction of static scenes solely from event streams. Our approach normalizes predicted radiance variations based on accumulated event polarities, facilitating progressive and rapid learning of the scene representation. Additionally, we introduce regularization losses specifically designed to suppress artifacts in regions where photometric changes fall below the event threshold, while simultaneously enhancing the light-intensity difference of non-zero events, thereby improving the visual fidelity of the reconstructed scene. Extensive qualitative and quantitative experiments demonstrate that our method significantly reduces artifacts and achieves superior reconstruction quality compared to existing methods. The code is available at https://github.com/Mr-firework/SaENeRF.
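The abstract's supervision scheme (normalizing predicted radiance changes against accumulated event polarities, plus regularizers for sub-threshold and non-zero-event pixels) can be illustrated with a minimal NumPy sketch. This is not the authors' exact formulation: the function name, the norm-based normalization, the hinge `margin`, and the weights `lam_zero`/`lam_nz` are illustrative assumptions.

```python
import numpy as np

def saenerf_style_loss(log_I_t1, log_I_t2, event_acc,
                       margin=0.1, lam_zero=1.0, lam_nz=1.0):
    """Illustrative event-based NeRF supervision loss (not the paper's exact form).

    log_I_t1, log_I_t2 : predicted log radiance at two timestamps, shape (H, W)
    event_acc          : accumulated signed event polarities between t1 and t2, (H, W)
    """
    d_pred = log_I_t2 - log_I_t1  # predicted log-brightness change per pixel

    # Normalize both signals so the fit term is scale-free, mimicking the
    # paper's idea of normalizing radiance variations by accumulated polarities.
    eps = 1e-8
    d_pred_n = d_pred / (np.linalg.norm(d_pred) + eps)
    acc_n = event_acc / (np.linalg.norm(event_acc) + eps)
    fit = np.mean((d_pred_n - acc_n) ** 2)

    zero_mask = event_acc == 0
    # Regularizer 1: where no events fired, the predicted change should stay
    # small (suppresses artifacts below the event threshold).
    reg_zero = np.mean(d_pred[zero_mask] ** 2) if zero_mask.any() else 0.0

    # Regularizer 2: where events fired, encourage a visible brightness change
    # of the correct sign via a hinge with an assumed margin.
    nz = ~zero_mask
    reg_nz = (np.mean(np.maximum(0.0, margin - np.sign(event_acc[nz]) * d_pred[nz]))
              if nz.any() else 0.0)

    return fit + lam_zero * reg_zero + lam_nz * reg_nz
```

A prediction whose log-brightness change agrees in sign and (normalized) magnitude with the accumulated polarities drives all three terms toward zero, while sign-flipped or sub-threshold-noisy predictions are penalized.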
Problem

Research questions and friction points this paper is trying to address.

Suppressing artifacts in event-based NeRF reconstructions
Improving 3D consistency and photorealism from event data
Addressing noise and early-stage overfitting in event NeRF training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised framework suppressing NeRF artifacts
Normalizes radiance via accumulated event polarities
Regularization losses enhance photometric fidelity
Yuanjian Wang
College of Computer Science, Sichuan University, Chengdu, China
Yufei Deng
College of Computer Science, Sichuan University, Chengdu, China
Rong Xiao
College of Computer Science, Sichuan University, Chengdu, China
Jiahao Fan
College of Computer Science, Sichuan University, Chengdu, China
Chenwei Tang
Sichuan University
neural network · zero-shot learning · deep learning
Deng Xiong
Stevens Institute of Technology
Artificial Intelligence · Cloud Computing · Algorithms · Natural and Applied Sciences · Engineering
Jiancheng Lv
University of Science and Technology of China
Operations Management · Marketing