MonST3R: A Simple Approach for Estimating Geometry in the Presence of Motion

📅 2024-10-04
🏛️ arXiv.org
📈 Citations: 69
Influential: 18
🤖 AI Summary
Dynamic scene geometry estimation faces challenges including complex motion modeling and error accumulation in multi-stage pipelines. This paper proposes a geometry-first, single-stage approach that directly regresses spatiotemporal point clouds from video frame sequences, without explicit optical flow or motion modeling, thereby avoiding the limitations of conventional depth–flow decoupling and global optimization. The key contributions are: (i) the first adaptation of the static reconstruction model DUSt3R to dynamic scenes, enabling motion-robust 4D geometry estimation solely via point-wise regression; and (ii) a video data curation and co-training framework that jointly optimizes video depth and camera pose estimation. On standard benchmarks, the method achieves significant improvements over state-of-the-art approaches in both depth and pose accuracy, while maintaining feed-forward inference efficiency and strong generalization.

📝 Abstract
Estimating geometry from dynamic scenes, where objects move and deform over time, remains a core challenge in computer vision. Current approaches often rely on multi-stage pipelines or global optimizations that decompose the problem into subtasks, like depth and flow, leading to complex systems prone to errors. In this paper, we present Motion DUSt3R (MonST3R), a novel geometry-first approach that directly estimates per-timestep geometry from dynamic scenes. Our key insight is that by simply estimating a pointmap for each timestep, we can effectively adapt DUST3R's representation, previously only used for static scenes, to dynamic scenes. However, this approach presents a significant challenge: the scarcity of suitable training data, namely dynamic, posed videos with depth labels. Despite this, we show that by posing the problem as a fine-tuning task, identifying several suitable datasets, and strategically training the model on this limited data, we can surprisingly enable the model to handle dynamics, even without an explicit motion representation. Based on this, we introduce new optimizations for several downstream video-specific tasks and demonstrate strong performance on video depth and camera pose estimation, outperforming prior work in terms of robustness and efficiency. Moreover, MonST3R shows promising results for primarily feed-forward 4D reconstruction.
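The key insight above, that running a DUSt3R-style pairwise pointmap network once per timestep sidesteps explicit motion modeling, can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the `pointmap_model` stub, the frame-pairing stride, and the tensor shapes are all illustrative assumptions.

```python
import numpy as np

def pointmap_model(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Stand-in for a DUSt3R-style network: given two RGB frames, regress a
    per-pixel 3D pointmap for frame_a (hypothetical stub; a real model would
    output points in a shared camera frame). Returns a dummy (H, W, 3) array
    of the right shape."""
    h, w, _ = frame_a.shape
    return np.zeros((h, w, 3), dtype=np.float32)

def video_pointmaps(frames: list[np.ndarray], stride: int = 1) -> list[np.ndarray]:
    """Estimate one pointmap per timestep by pairing each frame with a nearby
    frame; no optical flow or explicit motion representation is involved."""
    maps = []
    for t, frame in enumerate(frames):
        # Pair with a frame `stride` steps ahead; the last frame pairs
        # with itself, which keeps the sketch simple.
        ref = frames[min(t + stride, len(frames) - 1)]
        maps.append(pointmap_model(frame, ref))
    return maps

# Toy video: five 4x6 RGB frames.
frames = [np.random.rand(4, 6, 3).astype(np.float32) for _ in range(5)]
maps = video_pointmaps(frames)
print(len(maps), maps[0].shape)  # 5 (4, 6, 3)
```

The design point the sketch illustrates is that the per-timestep output is just a pointmap, the same representation DUSt3R uses for static scenes, so dynamics are handled by the (fine-tuned) network weights rather than by an added motion module.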
Problem

Research questions and friction points this paper is trying to address.

Estimating geometry in dynamic scenes with moving objects
Overcoming scarcity of labeled training data for dynamic scenes
Improving video depth and camera pose estimation accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Estimates per-timestep geometry directly
Adapts static scene representation to dynamics
Fine-tunes model on limited dynamic data