🤖 AI Summary
Monocular dynamic 3D reconstruction struggles to balance geometric fidelity and temporal consistency in scenes with complex motion. To address this, we propose a hierarchical Gaussian motion modeling framework. Our key contributions are: (1) a novel tree-structured hierarchical motion representation that explicitly encodes multi-scale temporal dynamics, from coarse, near-rigid motion to fine non-rigid detail; (2) a shared motion basis parameterization that reduces redundancy and enforces inter-frame consistency; and (3) a perceptual evaluation metric that reflects reconstruction quality more reliably than pixel-level metrics. Unlike NeRF-based approaches, our method directly deforms explicit 3D Gaussians without an implicit neural representation. Evaluated on challenging monocular videos, it achieves state-of-the-art novel-view synthesis, improving PSNR, SSIM, and LPIPS over existing methods. The framework establishes an efficient, interpretable, and robust paradigm for monocular dynamic scene reconstruction.
📝 Abstract
We present Hierarchical Motion Representation (HiMoR), a novel deformation representation for 3D Gaussian primitives capable of achieving high-quality monocular dynamic 3D reconstruction. The insight behind HiMoR is that motions in everyday scenes can be decomposed into coarser motions that serve as the foundation for finer details. Using a tree structure, HiMoR's nodes represent different levels of motion detail: shallower nodes model coarse motion for temporal smoothness, while deeper nodes capture finer motion. Additionally, our model uses a small number of shared motion bases to represent the motions of different sets of nodes, consistent with the assumption that motion tends to be smooth and simple. This design gives Gaussians a more structured deformation and maximizes the use of temporal relationships to tackle the challenging task of monocular dynamic 3D reconstruction. Since pixel-level metrics for evaluating monocular dynamic 3D reconstruction can fail to reflect the true quality of reconstruction, we also propose a more reliable perceptual metric as an alternative. Extensive experiments demonstrate our method's efficacy in achieving superior novel view synthesis from challenging monocular videos with complex motions.
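To make the coarse-to-fine idea concrete, here is a minimal sketch of a tree of motion nodes whose local motions are linear combinations of a few shared bases, with each deeper node refining its parent's trajectory. This is an illustrative assumption, not the paper's implementation: motions are simplified to 3D translations over discrete timesteps, and the class and variable names (`MotionNode`, `bases`, `coeffs`) are hypothetical.

```python
import numpy as np

class MotionNode:
    """One node in a hypothetical hierarchical motion tree (illustrative only)."""

    def __init__(self, coeffs, parent=None):
        # coeffs: (B,) weights over the shared motion bases.
        self.coeffs = np.asarray(coeffs, dtype=float)
        self.parent = parent

    def trajectory(self, bases):
        # bases: (B, T, 3) shared motion bases. A node's local motion is a
        # linear combination of a few bases, encouraging smooth, simple motion.
        local = np.tensordot(self.coeffs, bases, axes=1)  # (T, 3)
        if self.parent is None:
            return local
        # Coarse-to-fine: a deeper node adds a finer offset on top of
        # its parent's coarser trajectory.
        return self.parent.trajectory(bases) + local

T = 5  # timesteps
bases = np.stack([
    np.linspace([0, 0, 0], [1, 0, 0], T),  # basis 0: steady x-translation
    np.linspace([0, 0, 0], [0, 1, 0], T),  # basis 1: steady y-translation
])
root = MotionNode([1.0, 0.0])               # shallow node: coarse motion
leaf = MotionNode([0.0, 0.5], parent=root)  # deep node: finer refinement

traj = leaf.trajectory(bases)
print(traj[-1])  # final offset = root's [1, 0, 0] plus 0.5 * [0, 1, 0]
```

In the actual method the per-node transforms are richer than translations and drive the deformation of 3D Gaussians, but the structural point is the same: a handful of shared bases plus a tree of coefficient sets yields a compact, temporally coherent motion field.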