🤖 AI Summary
To address the efficiency-performance trade-off in transferring vision-language models (VLMs) to open-world tracking (OWT), where full fine-tuning incurs prohibitive parameter and memory overhead while zero-shot transfer generalizes poorly, this paper proposes a lightweight side-network architecture. The VLM backbone is frozen; only a compact auxiliary network combining Transformer and CNN modules is trained. Crucially, a sparse interaction mechanism within its MLP layers enables selective backpropagation. The method updates merely 1.3% of the parameters required by full fine-tuning, reduces memory consumption by 36.4%, and improves the core metric OWTA on unknown categories by 5.5% (absolute), substantially outperforming existing baselines. The key contribution is the first sparse, learnable side structure for VLM-to-OWT transfer, achieving strong generalization and efficient adaptation simultaneously at minimal computational overhead.
📝 Abstract
Open-World Tracking (OWT) aims to track every object of any category, which requires the model to have strong generalization capability. Trackers can improve their generalization by leveraging Visual Language Models (VLMs). However, fine-tuning strategies pose challenges when VLMs are transferred to OWT: full fine-tuning incurs excessive parameter and memory costs, while the zero-shot strategy yields sub-optimal performance. To solve this problem, EffOWT is proposed for efficiently transferring VLMs to OWT. Specifically, we build a small, independent, learnable side network outside the VLM backbone. By freezing the backbone and executing backpropagation only on the side network, the model's efficiency requirements can be met. In addition, EffOWT enhances the side network with a hybrid Transformer and CNN structure to improve performance in the OWT field. Finally, we implement sparse interactions on the MLP, significantly reducing parameter updates and memory costs. Thanks to the proposed methods, EffOWT achieves an absolute gain of 5.5% on the tracking metric OWTA for unknown categories, while updating only 1.3% of the parameters compared to full fine-tuning and saving 36.4% of memory. Other metrics also show clear improvements.
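The core transfer scheme described above, a frozen backbone with a small trainable hybrid side network, can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's implementation: the module sizes, the `SideBlock` name, and the stand-in backbone are all assumptions chosen only to show how freezing the backbone shrinks the trainable-parameter fraction.

```python
# Hypothetical sketch of the EffOWT transfer scheme (all names and sizes are
# illustrative, not from the paper): freeze a large "VLM backbone" and train
# only a compact side network that mixes Transformer and CNN modules.
import torch
import torch.nn as nn

class SideBlock(nn.Module):
    """Toy hybrid side-network block: self-attention followed by a 1-D conv."""
    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.TransformerEncoderLayer(
            dim, nhead=4, dim_feedforward=dim, batch_first=True)
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        x = self.attn(x)
        # Conv1d expects (batch, channels, tokens)
        return self.conv(x.transpose(1, 2)).transpose(1, 2)

# Stand-in for a large frozen VLM backbone (24 linear layers).
backbone = nn.Sequential(*[nn.Linear(256, 256) for _ in range(24)])
side = SideBlock(64)  # compact learnable side network

# Freeze the backbone: no gradients, so no optimizer state or weight updates.
for p in backbone.parameters():
    p.requires_grad_(False)

trainable = sum(p.numel() for p in side.parameters())
total = trainable + sum(p.numel() for p in backbone.parameters())
print(f"trainable fraction: {trainable / total:.3%}")
```

Only `side.parameters()` would be passed to the optimizer, which is what keeps both the update cost and the backpropagation memory low; even in this toy setup the trainable fraction is a few percent of the total.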