🤖 AI Summary
Multi-view object tracking (MVOT) suffers from data scarcity and inadequate cross-view feature fusion. To address this, the authors introduce MVTrack, a large-scale multi-view tracking dataset of 234K high-quality annotated frames covering 27 distinct objects, and propose MITracker, an end-to-end trainable framework for robust tracking of arbitrary objects across arbitrary viewpoints and video lengths. The key contributions are: (1) a dimension-lifting pipeline (2D image features → 3D feature volume → bird's-eye-view (BEV) compression) that enables cross-view information fusion; and (2) an attention mechanism that leverages geometric cues from the fused 3D feature volume to refine the tracking result in each view. MITracker achieves state-of-the-art performance on the MVTrack and GMTD datasets, outperforming existing single- and multi-view baselines. The dataset and source code will be publicly released.
📝 Abstract
Multi-view object tracking (MVOT) offers promising solutions to challenges such as occlusion and target loss, which are common in traditional single-view tracking. However, progress has been limited by the lack of comprehensive multi-view datasets and effective cross-view integration methods. To overcome these limitations, we compiled a Multi-View object Tracking (MVTrack) dataset of 234K high-quality annotated frames featuring 27 distinct objects across various scenes. In conjunction with this dataset, we introduce a novel MVOT method, the Multi-View Integration Tracker (MITracker), to efficiently integrate multi-view object features and provide stable tracking outcomes. MITracker can track any object in video frames of arbitrary length from arbitrary viewpoints. The key advancements of our method over traditional single-view approaches come from two aspects: (1) MITracker transforms 2D image features into a 3D feature volume and compresses it into a bird's eye view (BEV) plane, facilitating inter-view information fusion; (2) we propose an attention mechanism that leverages geometric information from the fused 3D feature volume to refine the tracking results in each view. MITracker outperforms existing methods on the MVTrack and GMTD datasets, achieving state-of-the-art performance. The code and the new dataset will be available at https://mii-laboratory.github.io/MITracker/.
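The core idea in point (1), lifting per-view 2D features into a shared 3D volume and then compressing the vertical axis into a BEV plane, can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: it assumes the per-view 3D feature volumes have already been produced (e.g. by unprojecting image features with known camera parameters) and that cross-view fusion and BEV compression are simple averages; the function name and tensor layout are hypothetical.

```python
import numpy as np

def fuse_views_to_bev(view_volumes):
    """Toy sketch of multi-view fusion + BEV compression.

    view_volumes: array of shape (V, C, X, Y, Z), one 3D feature
    volume per camera view, assumed to live in a shared world grid.
    Returns a BEV feature map of shape (C, X, Y).
    """
    # Cross-view fusion: average the V per-view volumes into one.
    fused = np.mean(view_volumes, axis=0)        # (C, X, Y, Z)
    # BEV compression: collapse the vertical (Z) axis by mean pooling.
    bev = fused.mean(axis=-1)                    # (C, X, Y)
    return bev

# Toy example: 3 views, 8 channels, a 16x16x4 voxel grid.
vols = np.random.rand(3, 8, 16, 16, 4)
bev = fuse_views_to_bev(vols)
print(bev.shape)  # (8, 16, 16)
```

In practice a learned reduction (e.g. a convolution over Z) would typically replace the mean pooling, but the shape bookkeeping is the same: multiple view-specific volumes collapse into one viewpoint-invariant BEV plane.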