🤖 AI Summary
This work addresses the challenge of efficiently and densely estimating per-pixel 3D trajectories in world coordinates from monocular video, enabling a comprehensive understanding of scene dynamics. To this end, we propose Track4World, the first end-to-end feedforward model for global, dense, world-centric 3D pixel tracking, overcoming the limitations of prior approaches that are either sparse or reliant on slow optimization. Our method builds a global 3D scene representation with a VGGT-style Vision Transformer and introduces a novel 3D correlation mechanism to jointly predict dense pixel-level 2D and 3D flow between arbitrary frame pairs. Experiments demonstrate that Track4World significantly outperforms existing methods across multiple benchmarks, achieving state-of-the-art robustness and scalability in 2D/3D flow estimation and 3D tracking.
📝 Abstract
Estimating the 3D trajectory of every pixel in a monocular video is crucial for a comprehensive understanding of its 3D dynamics. Recent monocular 3D tracking methods demonstrate impressive performance, but are limited to either tracking sparse points from the first frame or relying on slow optimization-based frameworks for dense tracking. In this paper, we propose a feedforward model, called Track4World, that enables efficient, holistic 3D tracking of every pixel in the world-centric coordinate system. Built on the global 3D scene representation encoded by a VGGT-style ViT, Track4World applies a novel 3D correlation scheme to simultaneously estimate dense pixel-wise 2D and 3D flow between arbitrary frame pairs. The estimated scene flow, together with the reconstructed 3D geometry, enables efficient 3D tracking of every pixel in the video. Extensive experiments on multiple benchmarks demonstrate that our approach consistently outperforms existing methods in 2D/3D flow estimation and 3D tracking, highlighting its robustness and scalability for real-world 4D reconstruction tasks.
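To make the correlation-based flow idea concrete, here is a minimal NumPy sketch of a RAFT-style all-pairs 2D correlation volume between two frames' feature maps, with a coarse argmax flow readout. This is an illustration of the general technique only, not the paper's method: the function names (`correlation_volume`, `argmax_flow`) are hypothetical, and Track4World's actual 3D correlation scheme, which operates on the globally reconstructed point geometry, is not specified in the abstract.

```python
import numpy as np

def correlation_volume(feat1, feat2):
    """All-pairs feature correlation between two frames (RAFT-style sketch).

    feat1, feat2: (C, H, W) feature maps from a shared encoder.
    Returns an (H, W, H, W) volume of dot-product similarities.
    """
    C, H, W = feat1.shape
    f1 = feat1.reshape(C, H * W)
    f2 = feat2.reshape(C, H * W)
    corr = (f1.T @ f2) / np.sqrt(C)  # scale by feature dimension
    return corr.reshape(H, W, H, W)

def argmax_flow(corr):
    """Coarse 2D flow: for each source pixel, displacement to its best match."""
    H, W = corr.shape[:2]
    flat = corr.reshape(H, W, H * W)
    idx = flat.argmax(axis=-1)          # best-matching target pixel index
    ty, tx = np.divmod(idx, W)          # target (row, col) per source pixel
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    return np.stack([tx - xs, ty - ys], axis=-1)  # (H, W, 2) as (dx, dy)
```

In a learned model the volume would be looked up around iteratively refined flow estimates rather than read out with a hard argmax; the sketch only shows why all-pairs similarity suffices to recover dense correspondence between arbitrary frame pairs.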