🤖 AI Summary
This work addresses the limitations of existing vision-language multi-object tracking methods, which are constrained by the narrow field of view of conventional cameras and prone to target loss and contextual discontinuity. To overcome this, we introduce Omnidirectional Referring Multi-Object Tracking (ORMOT), the first formulation of referring multi-object tracking in 360° panoramic scenes, and present ORSet, the first large-scale multimodal dataset for this task, comprising 27 panoramic scenes, 848 natural language expressions, and 3,401 annotated objects. We further propose ORTrack, a novel framework built upon large vision-language models that integrates panoramic image understanding, cross-modal alignment, and temporal tracking mechanisms. Experimental results on the ORSet benchmark demonstrate that ORTrack significantly improves the robustness and accuracy of language-guided multi-object tracking in panoramic environments.
📝 Abstract
Multi-Object Tracking (MOT) is a fundamental task in computer vision, aiming to track targets across video frames. Existing MOT methods perform well in general visual scenes, but face significant challenges when extended to vision-language settings. To bridge this gap, the task of Referring Multi-Object Tracking (RMOT) has recently been proposed, which aims to track objects that correspond to language descriptions. However, current RMOT methods are primarily developed on datasets captured by conventional cameras, which suffer from a limited field of view. This constraint often causes targets to move out of the frame, leading to fragmented tracking and loss of contextual information. In this work, we propose a novel task, called Omnidirectional Referring Multi-Object Tracking (ORMOT), which extends RMOT to omnidirectional imagery, aiming to overcome the field-of-view (FoV) limitation of conventional datasets and improve the model's ability to understand long-horizon language descriptions. To advance the ORMOT task, we construct ORSet, an Omnidirectional Referring Multi-Object Tracking dataset, which contains 27 diverse omnidirectional scenes, 848 language descriptions, and 3,401 annotated objects, providing rich visual, temporal, and language information. Furthermore, we propose ORTrack, a Large Vision-Language Model (LVLM)-driven framework tailored for Omnidirectional Referring Multi-Object Tracking. Extensive experiments on the ORSet dataset demonstrate the effectiveness of our ORTrack framework. The dataset and code will be open-sourced at https://github.com/chen-si-jia/ORMOT.