Serial Over Parallel: Learning Continual Unification for Multi-Modal Visual Object Tracking and Benchmarking

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing MMVOT methods suffer from training–testing inconsistency: they train on mixed multi-sensor data but evaluate on isolated, single-modality benchmarks, which degrades performance. To address this, we propose a serialized multi-task unification framework, formulated as a progressive continual-learning process that explicitly mitigates catastrophic forgetting, and introduce UniBench300, the first unified benchmark for multi-modal visual object tracking. The approach leverages multi-modal feature alignment and shared representation learning to enable unified single-pass inference across RGB-T, RGB-D, and RGB-E tracking tasks. Evaluation on four benchmarks with two backbone architectures shows a 27% reduction in inference time and markedly suppressed performance degradation. We further identify network capacity and inter-modal discrepancy as the two primary factors governing degradation.

📝 Abstract
Unifying multiple multi-modal visual object tracking (MMVOT) tasks draws increasing attention due to the complementary nature of different modalities in building robust tracking systems. Existing practices mix data of all sensor types in a single training procedure, structuring a parallel paradigm from the data-centric perspective and aiming for a global optimum on the joint distribution of the involved tasks. However, the absence of a unified benchmark where all types of data coexist forces evaluations on separated benchmarks, causing inconsistency between training and testing, thus leading to performance degradation. To address these issues, this work advances in two aspects: ❶ A unified benchmark, coined UniBench300, is introduced to bridge the inconsistency by incorporating multiple task data, reducing inference passes from three to one and cutting time consumption by 27%. ❷ The unification process is reformulated in a serial format, progressively integrating new tasks. In this way, the performance degradation can be specified as knowledge forgetting of previous tasks, which naturally aligns with the philosophy of continual learning (CL), motivating further exploration of injecting CL into the unification process. Extensive experiments conducted on two baselines and four benchmarks demonstrate the significance of UniBench300 and the superiority of CL in supporting a stable unification process. Moreover, dedicated analyses reveal that the performance degradation is negatively correlated with network capacity. Additionally, modality discrepancies contribute to varying degradation levels across tasks (RGBT > RGBD > RGBE in MMVOT), offering valuable insights for future multi-modal vision research. Source code and the proposed benchmark are available at https://github.com/Zhangyong-Tang/UniBench300.
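The serial reformulation above frames performance degradation as knowledge forgetting, which is where regularization-based continual learning applies. As a purely hypothetical, minimal sketch (not the paper's actual architecture or loss), the toy example below trains a linear model on one task, then on a second task with an EWC-style quadratic penalty anchoring important parameters, illustrating how forgetting of the first task is suppressed. All names, the task data, and the constant `lam` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, w0, epochs=300, lr=0.1, anchor=None, importance=None, lam=0.0):
    """Gradient descent on least squares; optional EWC-style penalty
    lam/2 * sum(importance * (w - anchor)**2) discourages drifting away
    from weights learned on an earlier task."""
    w = w0.copy()
    n = len(y)
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / n
        if anchor is not None:
            grad = grad + lam * importance * (w - anchor)  # CL regularizer
        w = w - lr * grad
    return w

# Two toy "tasks" with different ground-truth weights (stand-ins for, e.g.,
# RGB-T and RGB-D tracking losses -- purely illustrative).
X_a = rng.normal(size=(200, 2)); y_a = X_a @ np.array([1.0, -2.0])
X_b = rng.normal(size=(200, 2)); y_b = X_b @ np.array([3.0, 0.5])

w_a = train(X_a, y_a, np.zeros(2))                  # learn task A first
importance = np.mean(X_a ** 2, axis=0)              # diagonal Hessian proxy
w_naive = train(X_b, y_b, w_a)                      # serial, no CL: forgets A
w_cl = train(X_b, y_b, w_a, anchor=w_a, importance=importance, lam=5.0)

mse = lambda w, X, y: float(np.mean((X @ w - y) ** 2))
print("task-A error without CL:", mse(w_naive, X_a, y_a))
print("task-A error with CL   :", mse(w_cl, X_a, y_a))
```

With the penalty active, the model trades a little task-B accuracy for substantially lower error on the earlier task, mirroring the stable serial unification the abstract argues for.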
Problem

Research questions and friction points this paper is trying to address.

Unifying multi-modal visual object tracking tasks with inconsistent benchmarks
Addressing performance degradation due to training-testing inconsistency
Reformulating unification process using continual learning for stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces UniBench300 for unified multi-modal tracking
Reformulates unification process in serial format
Incorporates continual learning to suppress performance degradation
Zhangyong Tang
Jiangnan University
Tianyang Xu
Jiangnan University, Wuxi, China
Xuefeng Zhu
Jiangnan University, Wuxi, China
Chunyang Cheng
Jiangnan University, Wuxi, China
Tao Zhou
Nanjing University of Science and Technology, Nanjing, China
Xiaojun Wu
Jiangnan University, Wuxi, China
Josef Kittler
University of Surrey