🤖 AI Summary
Existing referring multi-object tracking (RMOT) methods struggle with instructions requiring complex logical reasoning. To address this, the authors introduce Reasoning-based Multi-Object Tracking (ReaMOT), a new task demanding that models comprehend natural language instructions exhibiting logical reasoning patterns and accurately identify and consistently track the objects satisfying those semantic constraints. They present the first ReaMOT benchmark, ReaMOT Challenge, comprising 1,156 instruction samples, 423,359 image-language pairs, and 869 diverse scenes, spanning three levels of reasoning difficulty, along with dedicated evaluation metrics. They also propose ReaTrack, a training-free framework that integrates a large vision-language model (LVLM) with SAM2 to enable reasoning-driven, language-guided tracking. On the ReaMOT Challenge benchmark, ReaTrack demonstrates strong tracking accuracy and reasoning robustness under complex semantic instructions.
📝 Abstract
Referring multi-object tracking (RMOT) is an important research field in computer vision. Its goal is to guide models to track the objects that conform to a given language instruction. However, RMOT commonly assumes clear, explicit language instructions, and existing methods often fail when complex instructions with reasoning characteristics appear. In this work, we propose a new task, called Reasoning-based Multi-Object Tracking (ReaMOT). ReaMOT is a more challenging task that requires accurate reasoning about which objects match a language instruction with reasoning characteristics, as well as tracking those objects' trajectories. To advance the ReaMOT task and evaluate the reasoning capabilities of tracking models, we construct ReaMOT Challenge, a reasoning-based multi-object tracking benchmark built upon 12 datasets. Specifically, it comprises 1,156 language instructions with reasoning characteristics, 423,359 image-language pairs, and 869 diverse scenes, divided into three levels of reasoning difficulty. In addition, we propose a set of evaluation metrics tailored for the ReaMOT task. Furthermore, we propose ReaTrack, a training-free framework for reasoning-based multi-object tracking built upon large vision-language models (LVLMs) and SAM2, as a baseline for the ReaMOT task. Extensive experiments on the ReaMOT Challenge benchmark demonstrate the effectiveness of our ReaTrack framework.