SOI is the Root of All Evil: Quantifying and Breaking Similar Object Interference in Single Object Tracking

📅 2025-08-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies and systematically quantifies Similar Object Interference (SOI), a long-overlooked critical bottleneck in single object tracking (SOT). Controlled Online Interference Masking (OIM) experiments show that eliminating interference sources improves state-of-the-art trackers’ AUC by up to 4.35 points, validating SOI as a primary constraint on robust tracking. To mitigate SOI-induced drift, the authors propose a paradigm of natural-language-driven external semantic cognition, using large-scale vision-language models (VLMs) as external cognitive engines that can be plugged into arbitrary RGB trackers. They also introduce SOIBench, the first vision-language benchmark explicitly targeting SOI, which mines SOI frames via multi-tracker collective judgment and provides multi-level semantic guidance annotations. On SOIBench, existing vision-language trackers gain little or even degrade, while the proposed VLM-guided approach achieves AUC gains of up to 0.93, advancing robust, interpretable, semantics-aware tracking.

📝 Abstract
In this paper, we present the first systematic investigation and quantification of Similar Object Interference (SOI), a long-overlooked yet critical bottleneck in Single Object Tracking (SOT). Through controlled Online Interference Masking (OIM) experiments, we quantitatively demonstrate that eliminating interference sources leads to substantial performance improvements (AUC gains up to 4.35) across all SOTA trackers, directly validating SOI as a primary constraint for robust tracking and highlighting the feasibility of external cognitive guidance. Building upon these insights, we adopt natural language as a practical form of external guidance, and construct SOIBench, the first semantic cognitive guidance benchmark specifically targeting SOI challenges. It automatically mines SOI frames through multi-tracker collective judgment and introduces a multi-level annotation protocol to generate precise semantic guidance texts. Systematic evaluation on SOIBench reveals a striking finding: existing vision-language tracking (VLT) methods fail to effectively exploit semantic cognitive guidance, achieving only marginal improvements or even performance degradation (AUC changes of -0.26 to +0.71). In contrast, we propose a novel paradigm employing large-scale vision-language models (VLMs) as external cognitive engines that can be seamlessly integrated into arbitrary RGB trackers. This approach demonstrates substantial improvements under semantic cognitive guidance (AUC gains up to 0.93), representing a significant advancement over existing VLT methods. We hope SOIBench will serve as a standardized evaluation platform to advance semantic cognitive tracking research and contribute new insights to the tracking research community.
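The OIM experiment described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's exact protocol: the (x, y, w, h) box format, the threshold grid, and the masking-by-zeroing step are all assumptions for the sketch.

```python
# Hypothetical sketch of an Online Interference Masking (OIM) style
# experiment: zero out annotated distractor regions before the tracker
# sees each frame, then compare success AUC with vs. without masking.
import numpy as np

def iou(a, b):
    """IoU of two boxes in (x, y, w, h) format."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_auc(pred_boxes, gt_boxes, thresholds=np.linspace(0, 1, 21)):
    """Area under the success curve: fraction of frames whose IoU with
    ground truth exceeds each threshold, averaged and scaled to 100."""
    overlaps = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    success = [(overlaps > t).mean() for t in thresholds]
    return 100.0 * float(np.mean(success))

def mask_distractors(frame, distractor_boxes):
    """The 'interference masking' step: blank each annotated distractor
    region so the tracker cannot lock onto it."""
    out = frame.copy()
    for x, y, w, h in distractor_boxes:
        out[y:y + h, x:x + w] = 0
    return out
```

Running a tracker on the original and on the masked sequence, then differencing the two `success_auc` scores, gives the per-tracker AUC gain that the paper reports reaching up to 4.35 points.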
Problem

Research questions and friction points this paper is trying to address.

How much does Similar Object Interference (SOI) constrain single object tracking performance?
Can semantic guidance effectively improve tracking robustness under SOI?
How can external cognition from vision-language models be injected into arbitrary RGB trackers?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online Interference Masking for SOI quantification
SOIBench benchmark with semantic guidance texts
VLM integration as external cognitive engines
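The multi-tracker collective judgment used to mine SOI frames can be sketched as below. This is an illustrative assumption about the mechanism, not the paper's exact protocol: function names, the vote threshold, and the "drifted onto a distractor" criterion are all hypothetical.

```python
# Hypothetical sketch of multi-tracker collective judgment for mining
# SOI frames: run several trackers in parallel and flag a frame when
# enough of their predictions overlap a distractor box more than the
# ground-truth target.
def iou(a, b):
    """IoU of two boxes in (x, y, w, h) format."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def is_soi_frame(tracker_boxes, gt_box, distractor_boxes, min_votes=2):
    """Collective judgment: count trackers whose prediction sits on a
    distractor rather than the target; flag the frame on enough votes."""
    votes = 0
    for box in tracker_boxes:
        on_target = iou(box, gt_box)
        on_distractor = max((iou(box, d) for d in distractor_boxes),
                            default=0.0)
        if on_distractor > on_target:  # this tracker drifted
            votes += 1
    return votes >= min_votes
```

Frames flagged this way would then receive the multi-level semantic guidance annotations that SOIBench uses to describe the target against its distractors.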
👥 Authors
Yipei Wang
Southeast University, China
Shiyu Hu
Research Fellow, Nanyang Technological University (NTU)
Computer Vision · Data-centric AI · AI for Science
Shukun Jia
Southeast University, China
Panxi Xu
University of Science and Technology Beijing, China
Hongfei Ma
University of Science and Technology Beijing, China
Yiping Ma
UPenn, UC Berkeley
security · cryptography · systems
Xiaobo Lu
Southeast University, China
Xin Zhao
University of Science and Technology Beijing, China