BARD-GS: Blur-Aware Reconstruction of Dynamic Scenes via Gaussian Splatting

📅 2025-03-20
🤖 AI Summary
This work addresses the motion blur problem in dynamic scenes captured by handheld monocular cameras, where coupled camera and object motion degrades reconstruction quality, and tackles the poor robustness of existing dynamic reconstruction methods to blurred inputs and pose noise. We propose a robust dynamic reconstruction framework based on 3D Gaussian splatting. Our core contributions are threefold: (1) the first explicit decoupling and separate modeling of camera motion blur and object motion blur; (2) the construction of the first real-world dynamic blur scene dataset; and (3) a dual-path joint deblurring network coupled with a pose-robust optimization strategy. Extensive experiments demonstrate that our method significantly outperforms state-of-the-art approaches on real blurred dynamic scenes, achieving high-fidelity novel-view synthesis and geometric reconstruction with improved accuracy and stability under motion blur and pose uncertainty.

📝 Abstract
3D Gaussian Splatting (3DGS) has shown remarkable potential for static scene reconstruction, and recent advancements have extended its application to dynamic scenes. However, the quality of reconstructions depends heavily on high-quality input images and precise camera poses, which are often difficult to guarantee in real-world scenarios. Capturing dynamic scenes with handheld monocular cameras, for instance, typically involves simultaneous movement of both the camera and objects within a single exposure. This combined motion frequently results in image blur that existing methods cannot adequately handle. To address these challenges, we introduce BARD-GS, a novel approach for robust dynamic scene reconstruction that effectively handles blurry inputs and imprecise camera poses. Our method comprises two main components: 1) camera motion deblurring and 2) object motion deblurring. By explicitly decomposing motion blur into camera motion blur and object motion blur and modeling them separately, we achieve significantly improved rendering results in dynamic regions. In addition, we collect a real-world motion blur dataset of dynamic scenes to evaluate our approach. Extensive experiments demonstrate that BARD-GS effectively reconstructs high-quality dynamic scenes under realistic conditions, significantly outperforming existing methods.
Problem

Research questions and friction points this paper is trying to address.

Blurry inputs degrade the quality of dynamic scene reconstruction
Camera poses estimated from real-world captures are often imprecise
Existing methods leave camera and object motion blur entangled, degrading rendering in dynamic regions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Robust to blurry inputs and imprecise camera poses
Explicitly decomposes motion blur into camera motion blur and object motion blur, modeling each separately
Contributes a real-world motion blur dataset of dynamic scenes for evaluation
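The blur formation implied by the abstract (camera and object motion within a single exposure) is commonly formalized in the deblurring literature as averaging sharp renderings along the exposure trajectory. A minimal sketch of that idea follows; `render_fn`, the linear pose interpolation, and the sample count are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def simulate_motion_blur(render_fn, pose_start, pose_end, n_samples=8):
    """Model a blurry frame as the average of sharp renderings at poses
    sampled across the exposure window (hypothetical simplification:
    poses interpolated linearly between exposure start and end)."""
    blurry = None
    for t in np.linspace(0.0, 1.0, n_samples):
        pose = (1.0 - t) * pose_start + t * pose_end  # linear interpolation
        sharp = render_fn(pose)                       # sharp render at this pose
        blurry = sharp if blurry is None else blurry + sharp
    return blurry / n_samples

# Toy renderer: image brightness equals the (scalar) pose value.
render = lambda p: np.full((2, 2), float(p))
blurred = simulate_motion_blur(render, 0.0, 2.0, n_samples=5)
```

With the toy renderer above, the result averages renders at poses 0.0, 0.5, 1.0, 1.5, and 2.0, giving a uniform image of 1.0. The paper's contribution is to fit this kind of generative model in reverse, recovering sharp Gaussians and trajectories from the blurry observations, with camera-induced and object-induced blur handled separately.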