🤖 AI Summary
Existing visual object tracking methods suffer from poor generalization to unseen objects due to reliance on explicit matching or strong dependence on annotated training bounding boxes. To address this, we propose an implicit denoising paradigm: tracking is formulated as a progressive denoising process of bounding boxes—bypassing the computational overhead of multi-step sampling in conventional diffusion models. We design a lightweight ViT-based denoising architecture featuring a conditional projection mechanism and multi-stage box refinement. Furthermore, we integrate dual memory modules—trajectory memory and visual memory—to enhance temporal consistency. Our method achieves state-of-the-art performance across multiple benchmarks, demonstrating significant robustness improvements under severe occlusion, large deformations, and rapid motion, while maintaining real-time inference speed (≥30 FPS).
📝 Abstract
Previous visual object tracking methods employ image-feature regression models or coordinate autoregression models for bounding box prediction. Image-feature regression methods heavily depend on matching results and do not utilize positional prior, while the autoregressive approach can only be trained using bounding boxes available in the training set, potentially resulting in suboptimal performance during testing with unseen data. Inspired by the diffusion model, denoising learning enhances the model's robustness to unseen data. Therefore, We introduce noise to bounding boxes, generating noisy boxes for training, thus enhancing model robustness on testing data. We propose a new paradigm to formulate the visual object tracking problem as a denoising learning process. However, tracking algorithms are usually asked to run in real-time, directly applying the diffusion model to object tracking would severely impair tracking speed. Therefore, we decompose the denoising learning process into every denoising block within a model, not by running the model multiple times, and thus we summarize the proposed paradigm as an in-model latent denoising learning process. Specifically, we propose a denoising Vision Transformer (ViT), which is composed of multiple denoising blocks. In the denoising block, template and search embeddings are projected into every denoising block as conditions. A denoising block is responsible for removing the noise in a predicted bounding box, and multiple stacked denoising blocks cooperate to accomplish the whole denoising process. Subsequently, we utilize image features and trajectory information to refine the denoised bounding box. Besides, we also utilize trajectory memory and visual memory to improve tracking stability. Experimental results validate the effectiveness of our approach, achieving competitive performance on several challenging datasets.