ReaMOT: A Benchmark and Framework for Reasoning-based Multi-Object Tracking

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing language-guided multi-object tracking (RMOT) methods struggle with instructions requiring complex logical reasoning. To address this, we introduce Reasoning-based Multi-Object Tracking (ReaMOT), a new task demanding that models comprehend natural language instructions exhibiting logical reasoning patterns and accurately identify and consistently track objects satisfying semantic constraints. We present the first ReaMOT benchmark—comprising 1,156 instruction samples, 423K image–language pairs, and 869 diverse scenes—spanning three levels of reasoning difficulty, along with dedicated evaluation metrics. Furthermore, we propose ReaTrack, a zero-shot, training-free framework that synergistically integrates a large vision-language model (LVLM) with SAM2 to enable reasoning-driven, language-guided tracking. On the ReaMOT Challenge, ReaTrack achieves substantial improvements in tracking accuracy and reasoning robustness under complex semantic instructions.

📝 Abstract
Referring Multi-Object Tracking (RMOT) is an important research field in computer vision. Its task is to guide models to track the objects that conform to a language instruction. However, RMOT commonly requires explicit language instructions, and existing methods often fail when complex instructions with reasoning characteristics appear. In this work, we propose a new task, called Reasoning-based Multi-Object Tracking (ReaMOT). ReaMOT is a more challenging task that requires accurately reasoning about which objects match a language instruction with reasoning characteristics and tracking those objects' trajectories. To advance the ReaMOT task and evaluate the reasoning capabilities of tracking models, we construct ReaMOT Challenge, a reasoning-based multi-object tracking benchmark built upon 12 datasets. Specifically, it comprises 1,156 language instructions with reasoning characteristics, 423,359 image-language pairs, and 869 diverse scenes, divided into three levels of reasoning difficulty. In addition, we propose a set of evaluation metrics tailored to the ReaMOT task. Furthermore, we propose ReaTrack, a training-free framework for reasoning-based multi-object tracking built on a large vision-language model (LVLM) and SAM2, as a baseline for the ReaMOT task. Extensive experiments on the ReaMOT Challenge benchmark demonstrate the effectiveness of our ReaTrack framework.
Problem

Research questions and friction points this paper is trying to address.

Tracking objects with complex reasoning-based language instructions
Evaluating reasoning capabilities in multi-object tracking models
Developing a benchmark for reasoning-based object tracking tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Reasoning-based Multi-Object Tracking (ReaMOT)
Uses large vision-language models (LVLM) and SAM2
Proposes training-free framework ReaTrack for ReaMOT
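The two-stage design described above (an LVLM reasons over the instruction to identify matching objects, then SAM2 propagates them through the video) can be sketched as follows. This is a minimal, hypothetical outline, not the paper's actual implementation: `lvlm_select_objects` and `sam2_propagate` are illustrative stand-ins for the LVLM query and the SAM2 video predictor, stubbed here so the control flow is runnable.

```python
# Hedged sketch of a ReaTrack-style zero-shot, training-free pipeline.
# The helper names are hypothetical placeholders, not the paper's API.

def lvlm_select_objects(frame, instruction):
    """Hypothetical LVLM call: reason over a language instruction with
    reasoning characteristics and return bounding boxes of the objects
    in the first frame that satisfy it."""
    # Stub: a real system would query a large vision-language model here.
    return [(10, 20, 50, 80)]  # one box as (x1, y1, x2, y2)

def sam2_propagate(frames, init_boxes):
    """Hypothetical SAM2 wrapper: initialize one track per box on the
    first frame and propagate it through the remaining frames."""
    # Stub: returns one trajectory (a per-frame list of boxes) per object.
    return [[box for _ in frames] for box in init_boxes]

def reatrack(frames, instruction):
    """Reason once on the first frame, then track without any training."""
    init_boxes = lvlm_select_objects(frames[0], instruction)
    return sam2_propagate(frames, init_boxes)

if __name__ == "__main__":
    frames = ["frame0", "frame1", "frame2"]  # placeholder frames
    tracks = reatrack(frames, "the person who is about to cross the road")
    print(len(tracks), len(tracks[0]))
```

Because no component is fine-tuned, the pipeline is zero-shot: the LVLM supplies the reasoning step that plain RMOT models lack, and SAM2 supplies temporally consistent trajectories.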
Sijia Chen
Huazhong University of Science and Technology
Yanqiu Yu
Huazhong University of Science and Technology
En Yu
Huazhong University of Science and Technology
Wenbing Tao
Professor, School of Automation, Huazhong University of Science and Technology
image processing · computer vision · pattern recognition