🤖 AI Summary
This paper addresses three core challenges in referring multi-object tracking (RMOT): weak detection of newborn objects, shallow multimodal fusion, and unreliable temporal association. To this end, the authors propose an end-to-end trainable framework. First, they design a multimodal collaborative matching strategy to mitigate the sample imbalance between newborn and existing objects. Second, they construct a cross-modal enhanced encoder and a referring-injection decoder, enabling explicit, query-token-driven language guidance. Third, they introduce a cross-scale, cross-modal feature fusion mechanism to strengthen vision-language alignment and temporal modeling. On standard benchmarks, the method improves MOTA by 3.42%, with notable gains in newborn-object detection recall and inter-frame localization robustness.
📝 Abstract
Referring multi-object tracking (RMOT) is an emerging cross-modal task that aims to localize an arbitrary number of targets based on a language expression and to track them continuously in a video. This intricate task involves reasoning over multi-modal data, precise target localization, and temporal association. However, prior studies overlook the imbalanced data distribution between newborn targets and existing targets that arises from the nature of the task. In addition, they fuse multi-modal features only indirectly, struggling to deliver clear guidance for newborn target detection. To solve these issues, we propose a collaborative matching strategy to alleviate the impact of the imbalance, boosting the ability to detect newborn targets while maintaining tracking performance. In the encoder, we integrate and enhance cross-modal and multi-scale fusion, overcoming the bottleneck in previous work, where only limited multi-modal information is shared and exchanged between feature maps. In the decoder, we further develop a referring-infused adaptation that provides explicit referring guidance through the query tokens. Experiments show the superior performance of our model (+3.42% MOTA) compared to prior works, demonstrating the effectiveness of our designs.
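To make the imbalance argument concrete, the sketch below illustrates one plausible form a collaborative matching strategy could take: since newborn targets are rare relative to existing ones, each newborn ground-truth box is allowed to supervise its top-k detection queries, while existing targets keep strict one-to-one matching. All names, the greedy assignment, and the IoU-only cost are illustrative assumptions, not the paper's exact formulation.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def collaborative_match(pred_boxes, gt_boxes, newborn_mask, k=2):
    """Return (pred_idx, gt_idx) supervision pairs.

    Existing targets: greedy one-to-one assignment by IoU.
    Newborn targets: one-to-many (top-k predictions), a hypothetical way
    to give scarce newborn samples more positive supervision.
    """
    pairs, used = [], set()
    for gi, (gt, newborn) in enumerate(zip(gt_boxes, newborn_mask)):
        scores = sorted(((iou(p, gt), pi) for pi, p in enumerate(pred_boxes)),
                        reverse=True)
        budget = k if newborn else 1  # newborn targets get extra matches
        for s, pi in scores:
            if budget == 0:
                break
            if s > 0 and (newborn or pi not in used):
                pairs.append((pi, gi))
                used.add(pi)
                budget -= 1
    return pairs
```

In a DETR-style tracker this would replace the standard one-to-one Hungarian assignment only for newborn targets; the extra matched queries all receive positive loss, which is one way to counteract the scarcity the paper identifies.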