SweepEvGS: Event-Based 3D Gaussian Splatting for Macro and Micro Radiance Field Rendering from a Single Sweep

📅 2024-12-16
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
Traditional 3D Gaussian Splatting (3D-GS) for novel-view synthesis in dynamic scenes relies on high-frame-rate, dense, motion-blur-free imagery—leading to inefficient data acquisition and poor adaptability to rapid motion. To address this, we propose the first hardware-cooperative, single-sweep event-driven 3D-GS framework. Our method reconstructs and renders dynamic scenes using only a sparse, asynchronous event stream from an event camera and a single static reference frame, jointly modeling a static-event radiance field augmented with multi-scale geometric priors to unify macro- and micro-scale reconstruction. Unlike prior approaches, it eliminates dependence on dense image inputs, drastically reducing acquisition overhead. Extensive experiments on synthetic and real-world macro/micro-dynamic scenes demonstrate superior rendering quality, faster inference speed, and higher computational efficiency—enabling real-time performance.
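The summary above leaves the event-to-image relationship implicit. As background, here is a minimal NumPy sketch of the standard event-generation model that single-sweep reconstruction of this kind builds on: a brightness image at any time during the sweep can be recovered from the static reference frame plus the accumulated signed event polarities. The contrast threshold, array layout, and function name are illustrative assumptions, not the paper's released code.

```python
import numpy as np

def integrate_events(ref_frame, events, t_end, contrast=0.2, eps=1e-6):
    """Recover the brightness image at time t_end from a static reference
    frame plus an asynchronous event stream, via the standard model
        log I(x, t) = log I(x, 0) + C * (signed polarity count at x up to t).
    events: (N, 4) array of (x, y, t, polarity in {-1, +1}).
    Sketch only: the threshold C (`contrast`) and layout are assumptions.
    """
    log_img = np.log(ref_frame.astype(np.float64) + eps)
    keep = events[:, 2] <= t_end                 # events fired up to t_end
    xs = events[keep, 0].astype(int)
    ys = events[keep, 1].astype(int)
    ps = events[keep, 3]
    np.add.at(log_img, (ys, xs), contrast * ps)  # per-pixel signed accumulation
    return np.exp(log_img) - eps
```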

📝 Abstract
Recent advancements in 3D Gaussian Splatting (3D-GS) have demonstrated the potential of using 3D Gaussian primitives for high-speed, high-fidelity, and cost-efficient novel view synthesis from continuously calibrated input views. However, conventional methods require dense, sharp, high-frame-rate images, which are time-consuming and inefficient to capture, especially in dynamic environments. Event cameras, with their high temporal resolution and ability to capture asynchronous brightness changes, offer a promising alternative for more reliable scene reconstruction without motion blur. In this paper, we propose SweepEvGS, a novel hardware-integrated method that leverages event cameras for robust and accurate novel view synthesis across various imaging settings from a single sweep. SweepEvGS utilizes an initial static frame together with the dense event stream captured during a single camera sweep to effectively reconstruct detailed scene views. We also introduce real-world hardware imaging systems for data collection and evaluation to support future research. We validate the robustness and efficiency of SweepEvGS through experiments in three imaging settings: synthetic objects, real-world macro-level, and real-world micro-level view synthesis. Our results demonstrate that SweepEvGS surpasses existing methods in visual rendering quality, rendering speed, and computational efficiency, highlighting its potential for dynamic practical applications.
Problem

Research questions and friction points this paper is trying to address.

Enables high-fidelity 3D rendering from single-sweep event data
Overcomes motion blur in dynamic environments via event cameras
Supports macro- and micro-level view synthesis efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Event cameras for robust novel view synthesis
Single sweep paired with dense event streams
Hardware-integrated dynamic scene reconstruction (see the loss sketch after this list)
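To make the single-sweep supervision concrete, the following PyTorch sketch shows one plausible loss step: a differentiable splatting renderer (here a hypothetical `render` callable) is queried at two poses along the sweep, and the rendered log-intensity difference is matched against the signed event counts accumulated between those poses, mirroring the event-generation model above. This is an illustrative sketch under these assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def event_supervision_loss(render, gaussians, pose_a, pose_b,
                           event_frame, contrast=0.2, eps=1e-6):
    """One supervision step for event-driven 3D-GS along a camera sweep.
    render(gaussians, pose) -> intensity image  (hypothetical renderer)
    event_frame: signed polarity counts accumulated between pose_a and pose_b.
    Sketch only; the renderer API and threshold `contrast` are assumed.
    """
    img_a = render(gaussians, pose_a)
    img_b = render(gaussians, pose_b)
    # Predicted log-brightness change between the two sweep poses.
    pred = torch.log(img_b + eps) - torch.log(img_a + eps)
    # Event measurement under the generation model: C * polarity count.
    target = contrast * event_frame
    return F.l1_loss(pred, target)
```

Gradients flow from this loss back through both renders into the Gaussian parameters, so the sparse event stream alone constrains the scene between the static reference views.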
👥 Authors
Jingqian Wu
Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
Shu Zhu
Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
Chutian Wang
HKU/IC/USTB
Neuromorphic Imaging, Computational Imaging, Wavefront Sensing
Boxin Shi
Peking University
Computer Vision, Computational Photography
Edmund Y. Lam
Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China