AI Summary
To address the degraded reconstruction accuracy of 3D Gaussian Splatting (3D-GS) under motion blur, this paper proposes an event-enhanced joint optimization framework. Our method explicitly models intra-exposure intensity dynamics and camera motion using event camera streams, marking the first such integration for 3D-GS. We introduce an event-based double integral (EDI) prior to enforce intermediate-frame consistency, enabling end-to-end joint optimization of motion deblurring, 3D Gaussian parameters, and camera trajectories. The framework unifies event-driven modeling, multi-moment latent image optimization, and a synthetic blur loss, while incorporating bundle adjustment to enhance geometric consistency. Evaluated on both synthetic and real-world motion-blurred datasets, our approach significantly outperforms existing RGB-only and event-augmented methods, yielding more accurate geometry, sharper texture recovery, and substantially improved novel view synthesis quality.
Abstract
While 3D Gaussian Splatting (3D-GS) achieves photorealistic novel view synthesis, its performance degrades under motion blur. In scenarios with rapid motion or low-light conditions, existing RGB-based deblurring methods struggle to model camera pose and radiance changes during exposure, reducing reconstruction accuracy. Event cameras, which capture continuous brightness changes during exposure, can effectively assist in modeling motion blur and improving reconstruction quality. We therefore propose Event-driven Bundle Adjusted Deblur Gaussian Splatting (EBAD-Gaussian), which reconstructs sharp 3D Gaussians from event streams and severely blurred images, jointly learning the Gaussian parameters while recovering the camera motion trajectory during the exposure time. Specifically, we first construct a blur loss by synthesizing multiple latent sharp images within the exposure time and minimizing the difference between the real and the synthesized blurred image. We then use the event stream to supervise the intensity changes between latent sharp images at any two instants within the exposure period, recovering the intensity dynamics that are lost in RGB frames. Furthermore, we optimize the latent sharp images at intermediate exposure times based on the event-based double integral (EDI) prior, applying consistency constraints to enhance the detail and texture of the reconstructed images. Extensive experiments on synthetic and real-world datasets show that EBAD-Gaussian achieves high-quality 3D scene reconstruction given blurred images and event streams as input.
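The two supervision signals described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the MSE form of the penalties, and the event contrast threshold `C` are illustrative assumptions. It shows (1) a synthetic blur loss that models the observed blurred frame as the average of latent sharp images sampled within the exposure, and (2) an event loss that matches the log-intensity change between two latent images against the integrated event polarity.

```python
import numpy as np

def synthetic_blur_loss(latent_images, observed_blur):
    """Blur loss sketch: synthesize a blurred frame by averaging latent
    sharp images sampled along the intra-exposure trajectory, then
    penalize its difference to the real blurred image (MSE)."""
    synthesized = np.mean(latent_images, axis=0)  # average of latent frames
    return float(np.mean((synthesized - observed_blur) ** 2))

def event_supervision_loss(latent_t1, latent_t2, event_integral, C=0.2, eps=1e-6):
    """Event loss sketch: events encode log-intensity change between two
    instants t1 < t2 in the exposure, so the change predicted from the
    latent images should match C times the integrated event polarity.
    C (contrast threshold) is an assumed illustrative value."""
    predicted = np.log(latent_t2 + eps) - np.log(latent_t1 + eps)
    return float(np.mean((predicted - C * event_integral) ** 2))
```

Both losses vanish when the latent images are consistent with the observations: if every latent image equals the observed blur, the blur loss is zero, and if the log-ratio of two latent images equals `C` times the event integral, the event loss is zero. Joint optimization of Gaussian parameters and intra-exposure poses would minimize a weighted sum of such terms.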