🤖 AI Summary
Monocular 3D object tracking typically relies on dense 3D annotations, which are expensive to acquire and difficult to scale. This work proposes the first sparsely supervised monocular 3D tracking framework, decoupling the task into two stages: 2D query matching and 3D geometric estimation. By leveraging temporal consistency, the method automatically generates high-quality 3D pseudo-labels from as few as four sparse ground-truth annotations per trajectory, enabling dense tracking without extensive labeling. Evaluated on the KITTI and nuScenes benchmarks, the approach significantly outperforms existing methods, achieving gains of up to 15.50 percentage points while substantially reducing annotation costs.
📝 Abstract
Monocular 3D object tracking aims to estimate temporally consistent 3D object poses across video frames, enabling autonomous agents to reason about scene dynamics. However, existing state-of-the-art approaches are fully supervised and rely on dense 3D annotations over long video sequences, which are expensive to obtain and difficult to scale. In this work, we address this fundamental limitation by proposing the first sparsely supervised framework for monocular 3D object tracking. Our approach decomposes the task into two sequential sub-problems: 2D query matching and 3D geometry estimation. Both components leverage the spatio-temporal consistency of image sequences to augment a sparse set of labeled samples and learn rich 2D and 3D representations of the scene. Leveraging these learned cues, our model automatically generates high-quality 3D pseudo-labels across entire videos, transforming sparse supervision into dense 3D track annotations. This enables existing fully supervised trackers to operate effectively under extreme label sparsity. Extensive experiments on the KITTI and nuScenes datasets demonstrate that our method significantly improves tracking performance, achieving an improvement of up to 15.50 percentage points (p.p.) while using at most four ground-truth annotations per track.
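The abstract does not spell out how sparse annotations are densified. As a toy illustration only (not the paper's actual pipeline, which learns 2D matching and 3D geometry), the core idea of exploiting temporal consistency to turn a few ground-truth poses into per-frame pseudo-labels can be reduced to interpolating 3D box centers along a track; the `densify_track` helper below is hypothetical:

```python
import numpy as np

def densify_track(sparse_labels, num_frames):
    """Toy pseudo-label generator: linearly interpolate sparse 3D
    annotations (frame -> (x, y, z) box center) into one label per frame.

    The real method replaces this interpolation with learned 2D query
    matching plus 3D geometry estimation; this sketch only shows how
    four annotations can yield a dense track under a smoothness prior.
    """
    frames = sorted(sparse_labels)
    keys = np.array(frames, dtype=float)
    vals = np.array([sparse_labels[f] for f in frames], dtype=float)
    dense = np.empty((num_frames, 3))
    for axis in range(3):  # interpolate x, y, z independently
        dense[:, axis] = np.interp(np.arange(num_frames), keys, vals[:, axis])
    return dense

# Four sparse ground-truth annotations on a 10-frame track,
# matching the "at most four annotations per track" setting.
sparse = {0: (0.0, 0.0, 10.0), 3: (1.5, 0.0, 11.5),
          6: (3.0, 0.0, 13.0), 9: (4.5, 0.0, 14.5)}
pseudo = densify_track(sparse, 10)  # shape (10, 3): one pseudo-label per frame
```

The dense pseudo-labels produced this way could then be fed to any existing fully supervised tracker, which is the role pseudo-labels play in the proposed framework.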