Tell Me What to Track: Infusing Robust Language Guidance for Enhanced Referring Multi-Object Tracking

📅 2024-12-17
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses three core challenges in referring multi-object tracking (RMOT): weak detection of newborn objects, shallow multimodal fusion, and difficult temporal association. To this end, we propose an end-to-end trainable framework. First, we design a multimodal collaborative matching strategy to mitigate sample imbalance between newborn and existing objects. Second, we construct a cross-modal enhanced encoder and a referring-injection decoder, enabling explicit, query-token-driven language guidance. Third, we introduce a cross-scale, cross-modal feature fusion mechanism to strengthen vision-language alignment and temporal modeling. Evaluated on standard benchmarks, our method achieves a 3.42% improvement in MOTA, significantly enhancing newborn object detection recall and inter-frame localization robustness.
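The query-token-driven language guidance described above can be sketched as a single cross-attention step in which the decoder's object queries attend to language-token features. This is an illustrative NumPy sketch under assumed shapes and names (`referring_injection`, `d`-dimensional embeddings), not the paper's actual decoder.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def referring_injection(queries, text_tokens):
    """One cross-attention step: object queries attend to language tokens.

    queries:     (num_queries, d) decoder query embeddings
    text_tokens: (num_tokens, d) language features from a text encoder
    Returns language-conditioned queries of the same shape (residual add).
    """
    d = queries.shape[-1]
    scores = queries @ text_tokens.T / np.sqrt(d)  # (num_queries, num_tokens)
    attn = softmax(scores, axis=-1)                # row-normalized weights
    guided = attn @ text_tokens                    # aggregate language context
    return queries + guided                        # inject guidance residually

rng = np.random.default_rng(0)
q = rng.normal(size=(8, 16))   # 8 object queries, dim 16
t = rng.normal(size=(5, 16))   # 5 language tokens
out = referring_injection(q, t)
```

In a real decoder the attention would use learned projection matrices per head; the single unprojected dot-product attention here only shows where the referring signal enters the query stream.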

📝 Abstract
Referring multi-object tracking (RMOT) is an emerging cross-modal task that aims to localize an arbitrary number of targets based on a language expression and continuously track them in a video. This intricate task involves reasoning over multi-modal data and precise target localization with temporal association. However, prior studies overlook the imbalanced data distribution between newborn targets and existing targets that is inherent to the task. In addition, they fuse multi-modal features only indirectly, struggling to deliver clear guidance for newborn target detection. To solve the above issues, we propose a collaborative matching strategy that alleviates the impact of the imbalance, boosting the ability to detect newborn targets while maintaining tracking performance. In the encoder, we integrate and enhance cross-modal and multi-scale fusion, overcoming the bottleneck in previous work where only limited multi-modal information is shared and exchanged between feature maps. In the decoder, we develop a referring-infused adaptation that provides explicit referring guidance through the query tokens. Experiments show the superior performance of our model (+3.42%) compared to prior works, demonstrating the effectiveness of our designs.
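The cross-modal, multi-scale fusion mentioned in the abstract can be illustrated with a minimal FiLM-style gating sketch: a pooled language vector modulates the visual feature map at every scale, so all scales share the same referring signal. The function name, pooling choice, and gating form are assumptions for illustration, not the paper's actual encoder design.

```python
import numpy as np

def cross_scale_text_fusion(feat_maps, text_tokens):
    """Fuse a pooled language vector into visual feature maps at every scale.

    feat_maps:   list of (H_i, W_i, d) arrays, one per pyramid scale
    text_tokens: (num_tokens, d) language features
    Returns fused maps with the same shapes (channel-wise gate + shift).
    """
    text_vec = text_tokens.mean(axis=0)            # pooled sentence feature (d,)
    gate = 1.0 / (1.0 + np.exp(-text_vec))         # sigmoid gate per channel
    return [fm * gate + text_vec for fm in feat_maps]

rng = np.random.default_rng(1)
maps = [rng.normal(size=(h, h, 16)) for h in (8, 4, 2)]  # 3 scales
text = rng.normal(size=(5, 16))
fused = cross_scale_text_fusion(maps, text)
```

Applying one shared language-derived gate across scales is the simplest way to ensure every feature map sees the same referring cue; a learned per-scale projection would be the natural next step.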
Problem

Research questions and friction points this paper is trying to address.

Addresses imbalanced data distribution in referring multi-object tracking.
Enhances cross-modal and multi-scale fusion for better target detection.
Provides explicit referring guidance through query tokens in the decoder.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Collaborative matching strategy for data imbalance
Enhanced cross-modal and multi-scale fusion
Referring-infused adaptation with explicit guidance
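The collaborative matching idea above, balancing supervision between newborn and existing targets, might be sketched as a split assignment: existing targets are matched only against track-propagated queries, leaving detection queries free for newborns. This pure-Python greedy illustration (function `collaborative_match`, dict-based cost tables) is a hypothetical simplification, not the paper's Hungarian-matching pipeline.

```python
def collaborative_match(costs_track, costs_detect, existing_gt, newborn_gt):
    """Assign existing targets to track queries and newborn targets to
    detection queries, so newborn supervision is not crowded out.

    costs_track:  {(query_id, gt_id): cost} for track queries vs existing targets
    costs_detect: {(query_id, gt_id): cost} for detect queries vs newborn targets
    Returns two lists of (query_id, gt_id) pairs, matched greedily by lowest cost.
    """
    def greedy(costs, gts):
        pairs, used_q, used_g = [], set(), set()
        for (q, g), c in sorted(costs.items(), key=lambda kv: kv[1]):
            if q not in used_q and g not in used_g and g in gts:
                pairs.append((q, g))
                used_q.add(q)
                used_g.add(g)
        return pairs
    return greedy(costs_track, set(existing_gt)), greedy(costs_detect, set(newborn_gt))

# Toy example: two existing targets, one newborn.
costs_track = {("t0", "g0"): 0.1, ("t1", "g0"): 0.5,
               ("t0", "g1"): 0.4, ("t1", "g1"): 0.2}
costs_detect = {("d0", "g2"): 0.3}
m_track, m_detect = collaborative_match(costs_track, costs_detect,
                                        ["g0", "g1"], ["g2"])
# → m_track pairs t0↔g0 and t1↔g1; m_detect pairs d0↔g2
```

Separating the two query pools before matching is what keeps the scarce newborn examples from competing with the far more numerous existing targets for the same queries.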