🤖 AI Summary
Monocular 3D foundation models suffer from scale ambiguity, leading to cross-view geometric inconsistency and inaccurate scale estimation. To address this, we propose a training-free geometric optimization method. Our approach establishes inter-frame feature correspondences to infer cross-view point matches, approximates local surface geometry via planar priors, and formulates a graph-based optimization framework that explicitly enforces multi-frame geometric consistency constraints. Crucially, we couple graph optimization with local planarity regularization, without altering the original 3D representation or requiring additional training, thereby effectively mitigating scale ambiguity and enhancing geometric fidelity. Experiments demonstrate significant improvements in sparse-view 3D reconstruction accuracy and novel-view synthesis quality, particularly in cross-view consistency and scale alignment.
📝 Abstract
Monocular 3D foundation models offer an extensible solution for perception tasks, making them attractive for broader 3D vision applications. In this paper, we propose MoRe, a training-free Monocular Geometry Refinement method designed to improve cross-view consistency and achieve scale alignment. To establish inter-frame relationships, our method employs feature matching between frames to derive correspondences. Rather than applying simple least squares optimization on these matched points, we formulate a graph-based optimization framework that performs local planar approximation using the 3D points and surface normals estimated by monocular foundation models. This formulation addresses the scale ambiguity inherent in monocular geometric priors while preserving the underlying 3D structure. We further demonstrate that MoRe not only enhances 3D reconstruction but also improves novel view synthesis, particularly in sparse-view rendering scenarios.
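To make the core idea of graph-based scale alignment concrete, the following is a minimal illustrative sketch (not the paper's actual MoRe algorithm, which additionally uses planar priors): given per-frame monocular point maps whose scales are unknown, each cross-view match between frames `i` and `j` yields a constraint `s_i * d_i = s_j * d_j` on the per-frame scales, which becomes linear in log space. Stacking one equation per match over the frame graph and fixing a gauge (frame 0 at scale 1) gives a least-squares problem. All names here (`align_scales`, the synthetic depths) are hypothetical.

```python
import numpy as np

def align_scales(matches, n_frames):
    """Solve per-frame scales from cross-view depth matches.

    matches: list of (i, j, d_i, d_j), where d_i and d_j are the depths
    of the same 3D point as estimated in frames i and j. Each match gives
    log s_i - log s_j = log(d_j / d_i); we solve the stacked system by
    least squares with the gauge constraint log s_0 = 0.
    """
    rows, rhs = [], []
    for i, j, d_i, d_j in matches:
        r = np.zeros(n_frames)
        r[i], r[j] = 1.0, -1.0
        rows.append(r)
        rhs.append(np.log(d_j / d_i))
    # Gauge fixing: anchor frame 0 at scale 1.
    r0 = np.zeros(n_frames)
    r0[0] = 1.0
    rows.append(r0)
    rhs.append(0.0)
    log_s, *_ = np.linalg.lstsq(np.stack(rows), np.array(rhs), rcond=None)
    return np.exp(log_s)

# Synthetic check: three frames observing one shared point at metric
# depth d, each frame's estimated depth off by an unknown scale factor.
true_s = np.array([1.0, 2.0, 0.5])
d = 3.0
matches = [(0, 1, d / true_s[0], d / true_s[1]),
           (1, 2, d / true_s[1], d / true_s[2]),
           (0, 2, d / true_s[0], d / true_s[2])]
scales = align_scales(matches, 3)
print(np.round(scales, 3))  # recovers [1.0, 2.0, 0.5]
```

In practice each edge of the frame graph would carry many noisy matches rather than one exact depth pair, and MoRe further regularizes the solution with local planarity; the least-squares-on-a-graph structure above is only the skeleton of that idea.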