🤖 AI Summary
Reconstructing geometrically consistent, part-aware, and motion-accurate digital twins of articulated objects from monocular video remains challenging due to strong coupling among geometry, part segmentation, and joint motion.
Method: This paper proposes a motion-prior-guided 3D Gaussian splatting optimization framework. It decouples camera and part motions using a pre-trained point-tracking prior, introduces a hybrid center-grid part assignment module, and jointly optimizes geometry, part segmentation, and joint parameters via 3D trajectory analysis, motion-aware deformation-field modeling, and differentiable rendering.
Contribution/Results: The authors present, to their knowledge, the first end-to-end articulated object learning framework that leverages tracking priors. It jointly optimizes shape reconstruction, part segmentation, and joint parameter estimation. On standard benchmarks, it reduces joint motion and mesh reconstruction errors by roughly two orders of magnitude relative to state-of-the-art methods, substantially improving the fidelity and practicality of digital twins built from monocular video.
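To make the "3D trajectory analysis" step concrete, here is a minimal, hypothetical sketch of how articulation parameters could be initialized from pre-tracked 3D point trajectories. It handles only the prismatic (sliding) case, estimating the translation axis as the dominant direction of per-frame displacements via SVD; the function name, noise-filtering threshold, and overall simplification are my assumptions, not the paper's actual pipeline (which also handles revolute joints and more careful noise filtering).

```python
import numpy as np

def init_prismatic_axis(tracks, noise_frac=0.1):
    """Estimate a prismatic joint axis from 3D point trajectories.

    tracks: (T, N, 3) array of N tracked 3D points over T frames,
            assumed to belong to one moving part.
    Returns a unit vector along the dominant translation direction.
    NOTE: illustrative sketch only, not the paper's implementation.
    """
    # Per-frame displacement of every tracked point.
    disp = (tracks[1:] - tracks[:-1]).reshape(-1, 3)
    # Crude noise filter: drop near-static displacements.
    mag = np.linalg.norm(disp, axis=1)
    disp = disp[mag > noise_frac * mag.max()]
    # Dominant direction = top right-singular vector of the
    # (uncentered) displacement matrix.
    _, _, vt = np.linalg.svd(disp, full_matrices=False)
    axis = vt[0]
    return axis / np.linalg.norm(axis)
```

The returned axis has an arbitrary sign; a real system would also estimate the per-frame translation magnitude along it, and fit a rotation axis and pivot instead when trajectories are circular.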
📝 Abstract
Building digital twins of articulated objects from monocular video poses a fundamental challenge in computer vision: it requires simultaneously reconstructing object geometry, part segmentation, and articulation parameters from limited-viewpoint inputs. Monocular video is an attractive input format due to its simplicity and scalability; however, it is difficult to disentangle object geometry from part dynamics with visual supervision alone, since the joint movement of the camera and the parts makes the estimation ill-posed. While motion priors from pre-trained tracking models can alleviate this issue, how to effectively integrate them for articulation learning remains largely unexplored. To address this problem, we introduce VideoArtGS, a novel approach that reconstructs high-fidelity digital twins of articulated objects from monocular video. We propose a motion-prior guidance pipeline that analyzes 3D tracks, filters noise, and provides reliable initialization of articulation parameters. We also design a hybrid center-grid part assignment module for articulation-based deformation fields that captures accurate part motion. VideoArtGS demonstrates state-of-the-art performance in articulation and mesh reconstruction, reducing reconstruction error by about two orders of magnitude compared to existing methods. VideoArtGS enables practical digital twin creation from monocular video, establishing a new benchmark for video-based articulated object reconstruction. Our work is made publicly available at: https://videoartgs.github.io.
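The "hybrid center-grid part assignment" can be pictured as combining two signals: soft distances to a small set of learned part centers (global, smooth) and a learnable voxel grid of per-part logits (local, detailed). The sketch below shows one plausible form of such a blend; the function name, grid resolution, temperature, and additive combination are my assumptions for illustration, not the paper's exact parameterization.

```python
import numpy as np

def part_assignment(points, centers, grid_logits, grid_res=16, temp=0.1):
    """Soft part assignment mixing part-center distances with a voxel grid.

    points:      (N, 3) positions normalized to [0, 1]^3.
    centers:     (P, 3) learned part centers.
    grid_logits: (grid_res, grid_res, grid_res, P) learnable per-part logits.
    Returns (N, P) soft part probabilities.
    NOTE: illustrative sketch only, not the paper's implementation.
    """
    # Center branch: closer centers get higher logits.
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    center_logits = -d / temp                                   # (N, P)
    # Grid branch: look up the voxel each point falls into.
    idx = np.clip((points * grid_res).astype(int), 0, grid_res - 1)
    voxel_logits = grid_logits[idx[:, 0], idx[:, 1], idx[:, 2]]  # (N, P)
    # Hybrid: combine both branches, then softmax over parts.
    logits = center_logits + voxel_logits
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

In an actual pipeline these probabilities would weight per-part rigid transforms inside the deformation field, with `centers` and `grid_logits` optimized end-to-end through differentiable rendering.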