🤖 AI Summary
To address inter-frame geometric inconsistency in 3D vision foundation models caused by frame-wise prediction in online driving scenarios, this paper proposes a plug-and-play long-term consistency alignment framework. Methodologically, it introduces a Thin-Plate Spline (TPS)-based global control-point propagation mechanism to enable high-degree-of-freedom correction of spatially varying errors, and designs a point-agnostic submap registration strategy that eliminates reliance on local rigidity assumptions and precise point correspondences, thereby significantly improving noise robustness and registration range. The framework is compatible with monocular and surround-view camera setups, as well as diverse 3D foundation models. Experiments demonstrate consistent and substantial reductions in trajectory error across multiple datasets, backbone architectures, and camera configurations, yielding more coherent and stable 3D reconstructions and validating the framework's generality and robustness. The code is publicly available.
📝 Abstract
3D vision foundation models have shown strong generalization in reconstructing key 3D attributes from uncalibrated images through a single feed-forward pass. However, when deployed in online settings such as driving scenarios, predictions are made over temporal windows, making it non-trivial to maintain consistency across time. Recent strategies align consecutive predictions by solving for a global transformation, yet our analysis reveals their fundamental limitations in assumption validity, local alignment scope, and robustness under noisy geometry. In this work, we propose a higher-DOF and long-term alignment framework based on Thin Plate Spline, leveraging globally propagated control points to correct spatially varying inconsistencies. In addition, we adopt a point-agnostic submap registration design that is inherently robust to noisy geometry predictions. The proposed framework is fully plug-and-play, compatible with diverse 3D foundation models and camera configurations (e.g., monocular or surround-view). Extensive experiments demonstrate that our method consistently yields more coherent geometry and lower trajectory errors across multiple datasets, backbone models, and camera setups, highlighting its robustness and generality. Code is publicly available at [https://github.com/Xian-Bei/TALO](https://github.com/Xian-Bei/TALO).
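To make the TPS-based correction concrete, below is a minimal sketch of a generic 3D thin-plate spline warp: given source control points and their propagated target positions, it solves for the spline coefficients and applies the resulting spatially varying (higher-DOF than a single rigid/similarity transform) deformation to a point cloud. This is an illustrative, generic TPS fit, not the authors' implementation; the function name `tps_warp_3d`, the `U(r) = r` kernel choice, and the regularizer are assumptions for the sketch.

```python
import numpy as np

def tps_warp_3d(src_ctrl, dst_ctrl, points, reg=1e-6):
    """Warp `points` with a 3D thin-plate spline fitted so that
    src_ctrl maps onto dst_ctrl. Uses the radial basis U(r) = r
    (a common choice for 3D TPS); `reg` stabilizes the solve."""
    n = src_ctrl.shape[0]
    # Pairwise kernel matrix between control points.
    K = np.linalg.norm(src_ctrl[:, None, :] - src_ctrl[None, :, :], axis=-1)
    K += reg * np.eye(n)
    P = np.hstack([np.ones((n, 1)), src_ctrl])  # affine (polynomial) part
    A = np.zeros((n + 4, n + 4))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 4, 3))
    b[:n] = dst_ctrl
    coef = np.linalg.solve(A, b)
    w, a = coef[:n], coef[n:]  # non-rigid weights and affine coefficients
    # Evaluate the fitted spline at the query points.
    U = np.linalg.norm(points[:, None, :] - src_ctrl[None, :, :], axis=-1)
    Pq = np.hstack([np.ones((points.shape[0], 1)), points])
    return U @ w + Pq @ a
```

With control points that move by a pure translation, the affine part absorbs the motion and the warp reduces to that translation everywhere, while scattered per-region displacements produce a smooth spatially varying correction between control points.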