🤖 AI Summary
Efficient and high-fidelity 3D reconstruction from monocular video remains a critical challenge for applications such as virtual reality and robotic navigation. This paper proposes an end-to-end sparse-to-dense incremental reconstruction framework that operates without prior camera calibration. First, joint pose and structure optimization is performed via sliding-window stereo matching and sparse point trajectory propagation across frames and fragments. Second, a geometric prior–driven global hierarchical alignment mechanism fuses local fragments while suppressing error accumulation. Finally, dense reconstruction is refined by minimizing reprojection error. The key contribution is the integration of learnable geometric priors into the hierarchical alignment pipeline, which reduces training time by more than 82%. Extensive experiments demonstrate state-of-the-art performance in reconstruction accuracy, visual fidelity, and computational efficiency.
📝 Abstract
Efficiently reconstructing accurate 3D models from monocular video is a key challenge in computer vision, critical for advancing applications in virtual reality, robotics, and scene understanding. Existing approaches typically require pre-computed camera parameters and frame-by-frame reconstruction pipelines, which are prone to error accumulation and entail significant computational overhead. To address these limitations, we introduce VideoLifter, a novel framework that leverages geometric priors from a learnable model to incrementally optimize a global sparse-to-dense 3D representation directly from video sequences. VideoLifter segments the video sequence into local windows, where it matches and registers frames, constructs consistent fragments, and aligns them hierarchically to produce a unified 3D model. By tracking and propagating sparse point correspondences across frames and fragments, VideoLifter incrementally refines camera poses and 3D structure, minimizing reprojection error for improved accuracy and robustness. This approach significantly accelerates the reconstruction process, reducing training time by over 82% while surpassing current state-of-the-art methods in visual fidelity and computational efficiency.
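The refinement step the abstract describes, adjusting pose and structure to minimize reprojection error, can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a simplified translation-only pinhole camera and uses a plain coordinate-descent search, where a real system would optimize full 6-DoF poses and 3D points jointly (e.g., via bundle adjustment). All function names and parameters below are illustrative.

```python
import math

def project(point, cam_t, intrinsics):
    """Pinhole projection of a 3D point for a camera at translation cam_t
    (identity rotation assumed for simplicity)."""
    fx, fy, cx, cy = intrinsics
    x = point[0] - cam_t[0]
    y = point[1] - cam_t[1]
    z = point[2] - cam_t[2]
    return (fx * x / z + cx, fy * y / z + cy)

def reprojection_error(points3d, observations, cam_t, intrinsics):
    """Mean pixel distance between projected points and observed 2D keypoints."""
    total = 0.0
    for p, obs in zip(points3d, observations):
        u, v = project(p, cam_t, intrinsics)
        total += math.hypot(u - obs[0], v - obs[1])
    return total / len(points3d)

def refine_translation(points3d, observations, t_init, intrinsics,
                       step=0.1, iters=50):
    """Toy coordinate descent: nudge each translation axis, keep moves that
    lower the reprojection error, and halve the step when stuck."""
    t = list(t_init)
    best = reprojection_error(points3d, observations, t, intrinsics)
    for _ in range(iters):
        improved = False
        for axis in range(3):
            for d in (-step, step):
                cand = list(t)
                cand[axis] += d
                e = reprojection_error(points3d, observations, cand, intrinsics)
                if e < best:
                    best, t, improved = e, cand, True
        if not improved:
            step *= 0.5  # shrink the search radius once no axis move helps
    return t, best
```

On synthetic data, observations generated from a ground-truth camera translation let the search recover that translation by driving the mean reprojection error toward zero, which is the same objective (in miniature) that incremental pose refinement minimizes.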