🤖 AI Summary
This work identifies and systematically quantifies Similar Object Interference (SOI), a long-overlooked yet critical bottleneck in single object tracking (SOT). Controlled Online Interference Masking (OIM) experiments show that eliminating interference sources improves state-of-the-art trackers' AUC by up to 4.35 points, establishing SOI as a primary constraint on robust tracking. To mitigate SOI-induced drift, the authors propose a paradigm that injects semantic priors from large-scale vision-language models (VLMs) into arbitrary RGB-based trackers as external cognitive guidance. They also introduce SOIBench, the first vision-language benchmark explicitly designed for SOI evaluation: it automatically mines SOI frames via multi-tracker collective judgment and generates precise semantic guidance texts through a multi-level annotation protocol. On SOIBench, the VLM-based approach achieves AUC gains of up to 0.93 points, significantly outperforming existing semantic-enhanced approaches. This advances robust, interpretable, semantics-aware tracking.
📝 Abstract
In this paper, we present the first systematic investigation and quantification of Similar Object Interference (SOI), a long-overlooked yet critical bottleneck in Single Object Tracking (SOT). Through controlled Online Interference Masking (OIM) experiments, we quantitatively demonstrate that eliminating interference sources leads to substantial performance improvements (AUC gains of up to 4.35) across all SOTA trackers, directly validating SOI as a primary constraint on robust tracking and highlighting the feasibility of external cognitive guidance. Building on these insights, we adopt natural language as a practical form of external guidance and construct SOIBench, the first semantic cognitive guidance benchmark specifically targeting SOI challenges. It automatically mines SOI frames through multi-tracker collective judgment and introduces a multi-level annotation protocol to generate precise semantic guidance texts. Systematic evaluation on SOIBench reveals a striking finding: existing vision-language tracking (VLT) methods fail to effectively exploit semantic cognitive guidance, achieving only marginal improvements or even performance degradation (AUC changes of -0.26 to +0.71). In contrast, we propose a novel paradigm employing large-scale vision-language models (VLMs) as external cognitive engines that can be seamlessly integrated into arbitrary RGB trackers. This approach achieves substantial improvements under semantic cognitive guidance (AUC gains of up to 0.93), representing a significant advancement over existing VLT methods. We hope SOIBench will serve as a standardized evaluation platform to advance semantic cognitive tracking research and contribute new insights to the tracking research community.
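The multi-tracker collective judgment used to mine SOI frames can be illustrated, in spirit, with a short sketch: run several trackers over a sequence and flag frames where enough of them drift off the ground-truth target. The function names, IoU threshold, and voting rule below are illustrative assumptions for exposition, not SOIBench's actual mining protocol.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def mine_soi_frames(tracker_boxes, gt_boxes, fail_iou=0.3, min_failed=2):
    """Flag frames where at least `min_failed` trackers drift off the
    ground-truth target (IoU below `fail_iou`) -- a simple stand-in
    for multi-tracker collective judgment (thresholds are assumptions).

    tracker_boxes: list of per-tracker box sequences, one box per frame.
    gt_boxes: ground-truth box per frame.
    Returns the indices of candidate SOI frames.
    """
    soi = []
    for t, gt in enumerate(gt_boxes):
        failed = sum(iou(boxes[t], gt) < fail_iou for boxes in tracker_boxes)
        if failed >= min_failed:
            soi.append(t)
    return soi
```

In this toy form, a frame where most trackers simultaneously lock onto the same distractor while the ground truth lies elsewhere is exactly the kind of SOI-dominated frame the benchmark targets; per the abstract, each mined frame would then receive semantic guidance text via the multi-level annotation protocol.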