EffOWT: Transfer Visual Language Models to Open-World Tracking Efficiently and Effectively

📅 2025-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the efficiency-performance trade-off in transferring vision-language models (VLMs) to open-world tracking (OWT), where full fine-tuning incurs prohibitive parameter and memory overhead and zero-shot transfer generalizes poorly, this paper proposes a lightweight side-network architecture. The VLM backbone is frozen; only a compact auxiliary network combining Transformer and CNN modules is trained. Crucially, a sparse interaction mechanism within the network's MLP layers enables selective backpropagation. The method updates merely 1.3% of the total parameters, reduces memory consumption by 36.4%, and improves the core metric OWTA by 5.5% on unknown categories, substantially outperforming existing baselines. The key contribution is the first sparse, learnable side structure for VLM-to-OWT transfer, achieving strong generalization and efficient adaptation simultaneously under minimal computational overhead.
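The frozen-backbone plus trainable side-network pattern described above can be sketched in a few lines of PyTorch. This is a minimal illustration of the general idea only, not the paper's implementation: the `SideBlock` layer sizes, head counts, and the stand-in backbone are all hypothetical.

```python
import torch
import torch.nn as nn

class SideBlock(nn.Module):
    """One lightweight side-network block mixing a Transformer-style
    attention step with a CNN step (hypothetical sizes)."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=2, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)

    def forward(self, x):                       # x: (batch, tokens, dim)
        a, _ = self.attn(x, x, x)
        x = self.norm(x + a)
        c = self.conv(x.transpose(1, 2)).transpose(1, 2)
        return x + c

class SideTunedModel(nn.Module):
    """Freeze the (stand-in) backbone; train only the small side network."""
    def __init__(self, backbone, dim=64, n_blocks=2):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():    # no updates to the backbone
            p.requires_grad = False
        self.side = nn.ModuleList(SideBlock(dim) for _ in range(n_blocks))

    def forward(self, x):
        with torch.no_grad():                   # frozen features, no activation
            feats = self.backbone(x)            # graph kept for the backbone
        for blk in self.side:
            feats = blk(feats)
        return feats
```

Because backpropagation only traverses the side blocks, both the optimizer state and the stored activations scale with the small side network rather than the full VLM, which is the source of the parameter and memory savings the summary quotes.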

📝 Abstract
Open-World Tracking (OWT) aims to track every object of any category, which requires the model to have strong generalization capabilities. Trackers can improve their generalization ability by leveraging Visual Language Models (VLMs). However, challenges arise with the fine-tuning strategies when VLMs are transferred to OWT: full fine-tuning results in excessive parameter and memory costs, while the zero-shot strategy leads to sub-optimal performance. To solve the problem, EffOWT is proposed for efficiently transferring VLMs to OWT. Specifically, we build a small and independent learnable side network outside the VLM backbone. By freezing the backbone and only executing backpropagation on the side network, the model's efficiency requirements can be met. In addition, EffOWT enhances the side network by proposing a hybrid structure of Transformer and CNN to improve the model's performance in the OWT field. Finally, we implement sparse interactions on the MLP, thus reducing parameter updates and memory costs significantly. Thanks to the proposed methods, EffOWT achieves an absolute gain of 5.5% on the tracking metric OWTA for unknown categories, while only updating 1.3% of the parameters compared to full fine-tuning, with a 36.4% memory saving. Other metrics also demonstrate obvious improvement.
Problem

Research questions and friction points this paper is trying to address.

Transferring VLMs to Open-World Tracking efficiently
Balancing parameter costs and performance in fine-tuning
Enhancing generalization for tracking unknown object categories
Innovation

Methods, ideas, or system contributions that make the work stand out.

Small learnable side network outside VLM backbone
Hybrid Transformer-CNN structure enhances performance
Sparse MLP interactions reduce parameters and memory
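The third bullet, sparse interactions on the MLP, can be illustrated with a generic gradient-masking trick: only a small fraction of an MLP layer's weights receive updates, so optimizer state and parameter deltas stay small. This is a hedged sketch of the general technique, not the paper's exact mechanism; the 10% sparsity ratio and layer shapes are made up for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
mlp = nn.Sequential(nn.Linear(8, 32), nn.GELU(), nn.Linear(32, 8))

# Keep only ~10% of the first layer's weight entries trainable
# (hypothetical ratio; the paper's selection scheme may differ).
mask = (torch.rand_like(mlp[0].weight) < 0.1).float()

x = torch.randn(4, 8)
loss = mlp(x).pow(2).mean()
loss.backward()
mlp[0].weight.grad *= mask   # masked entries get exactly zero gradient
```

After the masking step, any optimizer applied to `mlp[0].weight` leaves the masked entries untouched, which is one simple way to realize sparse parameter updates.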
Bingyang Wang
Dalian University of Technology, China
Kaer Huang
Lenovo Research
Reinforcement Learning · LLM/MLLM · GUI Agent
Bin Li
Lenovo, China
Yiqiang Yan
Lenovo
Lihe Zhang
Dalian University of Technology
Huchuan Lu
Dalian University of Technology, China
You He
Dalian University of Technology, China, Tsinghua University, China