St4RTrack: Simultaneous 4D Reconstruction and Tracking in the World

📅 2025-04-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses joint 4D reconstruction and point-level tracking for dynamic scenes from monocular RGB video, introducing the first end-to-end framework that operates in world coordinates without requiring 4D ground-truth supervision. The method jointly predicts pairwise pointmaps aligned in a unified world coordinate system, thereby coupling geometric reconstruction with motion estimation. A reprojection-based adaptation scheme and a frame-chaining propagation strategy enable long-term 3D point tracking. Key contributions include: (1) the first benchmark for evaluating long-term 3D correspondences in world coordinates; (2) state-of-the-art performance in both reconstruction accuracy and tracking robustness; and (3) the planned release of code, model, and benchmark.
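The frame-chaining idea above can be sketched in a few lines. Here `predict_pair` is a hypothetical stand-in for the St4RTrack network, which for a (reference, current) frame pair predicts two pointmaps in a shared world frame: where each reference pixel sits in 3D at the current moment (tracking) and the current frame's own geometry (reconstruction). The dummy geometry it returns exists only to make the chaining logic runnable; the function names and shapes are illustrative assumptions, not the paper's API.

```python
import numpy as np

def predict_pair(ref_img, cur_img):
    """Hypothetical stand-in for the St4RTrack network. For a
    (reference, current) frame pair it returns two H x W x 3 pointmaps
    in a shared world frame: the 3D position of each *reference* pixel
    at the current moment (tracking head) and the current frame's own
    geometry (reconstruction head). Dummy planar geometry for illustration."""
    H, W = ref_img.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    z = np.ones((H, W))
    track_pts = np.stack([xs, ys, z], axis=-1)
    recon_pts = np.stack([xs, ys, z], axis=-1)
    return track_pts, recon_pts

def chain_tracks(frames):
    """Chain pairwise predictions against frame 0: stacking the tracking
    pointmaps over time yields one long-range world-frame 3D trajectory
    per reference pixel, with no explicit frame-to-frame matching."""
    ref = frames[0]
    tracks = [predict_pair(ref, f)[0] for f in frames]
    return np.stack(tracks)  # T x H x W x 3

frames = [np.zeros((4, 5, 3)) for _ in range(3)]
traj = chain_tracks(frames)
assert traj.shape == (3, 4, 5, 3)
```

Because every pair shares the reference frame, correspondences come out of the stacking for free; no descriptor matching or optical-flow chaining is needed.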

📝 Abstract
Dynamic 3D reconstruction and point tracking in videos are typically treated as separate tasks, despite their deep connection. We propose St4RTrack, a feed-forward framework that simultaneously reconstructs and tracks dynamic video content in a world coordinate frame from RGB inputs. This is achieved by predicting two appropriately defined pointmaps for a pair of frames captured at different moments. Specifically, we predict both pointmaps at the same moment, in the same world, capturing both static and dynamic scene geometry while maintaining 3D correspondences. Chaining these predictions through the video sequence with respect to a reference frame naturally computes long-range correspondences, effectively combining 3D reconstruction with 3D tracking. Unlike prior methods that rely heavily on 4D ground truth supervision, we employ a novel adaptation scheme based on a reprojection loss. We establish a new extensive benchmark for world-frame reconstruction and tracking, demonstrating the effectiveness and efficiency of our unified, data-driven framework. Our code, model, and benchmark will be released.
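The reprojection loss the abstract refers to can be illustrated with a minimal pinhole-camera sketch: predicted world-frame 3D points are projected into the image and penalized by their 2D distance to observed pixel tracks, so no 3D/4D ground truth is needed. The function name and the assumption of known intrinsics `K` and world-to-camera pose `(R, t)` are illustrative, not taken from the paper.

```python
import numpy as np

def reprojection_loss(points_world, uv_obs, K, R, t):
    """Mean 2D reprojection error of world-frame 3D points against
    observed pixel locations (e.g. from an off-the-shelf 2D tracker).
    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation;
    points_world: N x 3; uv_obs: N x 2."""
    cam = points_world @ R.T + t        # world -> camera coordinates
    uvw = cam @ K.T                     # pinhole projection (homogeneous)
    uv = uvw[:, :2] / uvw[:, 2:3]       # perspective divide
    return np.mean(np.linalg.norm(uv - uv_obs, axis=1))

# Toy check: points projected with the same camera reproject exactly.
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.0]])
uv = np.array([[320., 240.], [445., 240.]])
assert reprojection_loss(pts, uv, K, R, t) < 1e-6
```

Minimizing this quantity over the predicted pointmaps supervises geometry and motion jointly from 2D evidence alone, which is what lets the framework adapt without 4D labels.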
Problem

Research questions and friction points this paper is trying to address.

Simultaneous 4D reconstruction and tracking from RGB videos
Unifying dynamic 3D reconstruction and point tracking tasks
Achieving long-range 3D correspondences without 4D supervision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simultaneous 4D reconstruction and tracking
Predicts pointmaps for static and dynamic geometry
Uses reprojection loss instead of 4D supervision