AI Summary
Existing video point tracking methods suffer from inadequate temporal feature modeling and rely on an inefficient coarse-to-fine two-stage optimization, limiting both accuracy and efficiency. This paper proposes Chrono, a lightweight, end-to-end temporal-aware feature backbone. Its core innovation is the first integration of self-supervised visual representations from DINOv2 with learnable temporal adapters, enabling single-stage, differentiable, high-precision point tracking and eliminating the need for conventional refiner modules. Through transfer learning and end-to-end joint optimization, Chrono achieves state-of-the-art performance on TAP-Vid-DAVIS and TAP-Vid-Kinetics in the refiner-free setting. It significantly outperforms existing methods in inference speed while maintaining superior accuracy and ease of deployment.
Abstract
Point tracking in videos is a fundamental task with applications in robotics, video editing, and more. While many vision tasks benefit from pre-trained feature backbones that improve generalizability, point tracking has primarily relied on simpler backbones trained from scratch on synthetic data, which may limit robustness in real-world scenarios. Additionally, point tracking requires temporal awareness to ensure coherence across frames, yet the use of temporally aware features remains underexplored. Most current methods employ a two-stage process: an initial coarse prediction followed by a refinement stage that injects temporal information and corrects errors from the coarse stage. This approach, however, is computationally expensive and potentially redundant if the feature backbone itself captures sufficient temporal information. In this work, we introduce Chrono, a feature backbone specifically designed for point tracking with built-in temporal awareness. Leveraging pre-trained representations from the self-supervised learner DINOv2 and enhanced with a temporal adapter, Chrono effectively captures long-term temporal context, enabling precise predictions even without a refinement stage. Experimental results demonstrate that Chrono achieves state-of-the-art performance in the refiner-free setting on the TAP-Vid-DAVIS and TAP-Vid-Kinetics datasets, outperforming common point tracking backbones as well as DINOv2, with exceptional efficiency. Project page: https://cvlab-kaist.github.io/Chrono/
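The abstract does not spell out how the temporal adapter operates on the frozen backbone features. As a rough illustration only (Chrono's actual adapter design may differ), the sketch below shows one common adapter pattern: per-frame features from a frozen backbone are mixed across a temporal window and added back through a residual connection, so the pre-trained representation is refined rather than replaced. The function name, the simple 1D temporal kernel, and the `alpha` scaling are all hypothetical.

```python
import numpy as np

def temporal_adapter(feats, kernel, alpha=1.0):
    """Hypothetical sketch of a temporal adapter (not Chrono's actual design).

    feats:  (T, C) array of per-frame feature vectors, e.g. from a
            frozen pre-trained backbone such as DINOv2
    kernel: (K,) temporal mixing weights, K odd (learnable in practice)
    alpha:  residual scaling for the adapter branch
    """
    T, C = feats.shape
    K = len(kernel)
    pad = K // 2
    # Edge-pad along time so every frame has a full temporal window
    padded = np.pad(feats, ((pad, pad), (0, 0)), mode="edge")
    mixed = np.zeros_like(feats)
    for t in range(T):
        window = padded[t:t + K]   # (K, C) neighborhood around frame t
        mixed[t] = kernel @ window  # weighted sum over the window -> (C,)
    # Residual connection: the adapter refines frozen features,
    # it does not overwrite them
    return feats + alpha * mixed
```

With an all-zero kernel the adapter is a no-op and the frozen features pass through unchanged; a learned kernel lets each frame's representation absorb context from neighboring frames, which is the kind of long-term temporal awareness the paper argues removes the need for a separate refinement stage.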