🤖 AI Summary
Traditional 3D Gaussian Splatting (3D-GS) for novel-view synthesis in dynamic scenes relies on dense, high-frame-rate, motion-blur-free imagery, which makes data acquisition inefficient and adapts poorly to rapid motion. To address this, we propose the first hardware-cooperative, single-sweep, event-driven 3D-GS framework. Our method reconstructs and renders dynamic scenes using only a sparse, asynchronous event stream from an event camera and a single static reference frame, jointly modeling a static-event radiance field augmented with multi-scale geometric priors to unify macro- and micro-scale reconstruction. Unlike prior approaches, it eliminates dependence on dense image inputs, drastically reducing acquisition overhead. Extensive experiments on synthetic and real-world macro- and micro-dynamic scenes demonstrate superior rendering quality, faster inference, and higher computational efficiency, enabling real-time performance.
📝 Abstract
Recent advancements in 3D Gaussian Splatting (3D-GS) have demonstrated the potential of using 3D Gaussian primitives for high-speed, high-fidelity, and cost-efficient novel view synthesis from continuously calibrated input views. However, conventional methods require dense, high-frame-rate, sharp images, which are time-consuming and inefficient to capture, especially in dynamic environments. Event cameras, with their high temporal resolution and ability to capture asynchronous brightness changes, offer a promising alternative for reliable scene reconstruction without motion blur. In this paper, we propose SweepEvGS, a novel hardware-integrated method that leverages event cameras for robust and accurate novel view synthesis across various imaging settings from a single sweep. SweepEvGS combines an initial static frame with the dense event stream captured during a single camera sweep to reconstruct detailed scene views. We also introduce real-world hardware imaging systems for data collection and evaluation to support future research. We validate the robustness and efficiency of SweepEvGS through experiments in three imaging settings: synthetic objects, real-world macro-level, and real-world micro-level view synthesis. Our results demonstrate that SweepEvGS surpasses existing methods in visual rendering quality, rendering speed, and computational efficiency, highlighting its potential for practical dynamic applications.
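To make the single-frame-plus-events idea concrete, the sketch below illustrates the standard event-generation model that pipelines like this build on: an event fires at a pixel when the log intensity there changes by roughly a contrast threshold C, so accumulating signed event steps onto the static reference frame approximates the brightness image at any time during the sweep. This is a minimal, generic illustration rather than SweepEvGS's actual formulation (which the abstract does not specify); the function name, the `contrast` value, and the `(x, y, t, polarity)` event layout are assumptions made for the example.

```python
import numpy as np

def reconstruct_log_intensity(ref_frame, events, t_query, contrast=0.2):
    """Approximate the log-intensity image at time t_query by integrating
    events onto a static reference frame captured at t=0.

    Hypothetical helper for illustration (not the SweepEvGS API).
    ref_frame: (H, W) float array, linear intensity of the static frame.
    events:    (N, 4) array of (x, y, t, polarity), polarity in {-1, +1}.
    contrast:  assumed per-event log-intensity step C (camera-dependent).
    """
    # Event model: an event fires when |log I(x, t) - log I(x, t_prev)| >= C,
    # so each event contributes a signed step of size C at its pixel.
    log_img = np.log(ref_frame.astype(np.float64) + 1e-6)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t, p = events[:, 2], events[:, 3]
    mask = t <= t_query  # integrate only events up to the query time
    np.add.at(log_img, (y[mask], x[mask]), contrast * p[mask])
    return log_img  # np.exp() recovers an approximate linear-intensity image

# Usage sketch: synthesize a supervision frame partway through the sweep.
H, W = 4, 4
ref = np.ones((H, W))               # static reference frame
evs = np.array([[1, 2, 0.01, +1.0], # (x, y, t, polarity)
                [3, 0, 0.05, -1.0]])
log_i = reconstruct_log_intensity(ref, evs, t_query=0.03)
```

In practice, per-pixel contrast thresholds drift and event streams are noisy, which is one reason methods in this space typically optimize the radiance field jointly against the event data rather than relying on naive integration alone.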