🤖 AI Summary
To address the substantial accuracy degradation, training complexity, and strong architectural dependence involved in compressing Transformer-based visual object trackers, this paper proposes CompressTracker, a general lightweighting framework. Methodologically, it introduces (1) a novel training strategy that partitions the teacher model into stages and randomly replaces student stages with their teacher counterparts, and (2) a prediction-guided, stage-wise feature-mimicking distillation scheme that removes reliance on any specific backbone architecture. Without modifying the original backbone, CompressTracker-4, a 4-layer variant compressed from OSTrack, retains about 96% of the original performance on LaSOT (66.1% AUC) while accelerating inference by 2.17×. These results outperform state-of-the-art compression methods, demonstrating a superior trade-off among accuracy, efficiency, and architectural generality.
📝 Abstract
Transformer-based trackers have established a dominant role in the field of visual object tracking. While these trackers exhibit promising performance, their deployment on resource-constrained devices remains challenging due to their computational cost. To improve inference efficiency and reduce computation, prior approaches have aimed either to design lightweight trackers or to distill knowledge from larger teacher models into more compact student trackers. However, these solutions often sacrifice accuracy for speed. Thus, we propose a general model compression framework for efficient transformer object tracking, named CompressTracker, which reduces a pre-trained tracking model to a lightweight tracker with minimal performance degradation. Our approach features a novel stage division strategy that segments the transformer layers of the teacher model into distinct stages, enabling the student model to emulate each corresponding teacher stage more effectively. Additionally, we design a replacement training technique that randomly substitutes specific stages of the student model with those of the teacher model, as opposed to training the student model in isolation. Replacement training enhances the student model's ability to replicate the teacher model's behavior. To further force the student model to emulate the teacher model, we incorporate prediction guidance and stage-wise feature mimicking to provide additional supervision during compression. Our framework CompressTracker is structurally agnostic, making it compatible with any transformer architecture. We conduct a series of experiments to verify the effectiveness and generalizability of CompressTracker. Our CompressTracker-4 with 4 transformer layers, compressed from OSTrack, retains about 96% of the original performance on LaSOT (66.1% AUC) while achieving a 2.17x speed-up.
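The stage division and replacement training described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the function names, the stage count of 4, and the use of plain callables in place of transformer layers are all assumptions made for clarity. The key idea shown is that the teacher's layers are partitioned into contiguous stages, the student assigns fewer layers per stage, and at each step some student stages are randomly swapped for their teacher counterparts.

```python
import random

# Hypothetical sketch of CompressTracker-style replacement training:
# a 12-layer teacher is split into 4 stages of 3 layers, the student
# has 1 layer per stage, and during a forward pass each student stage
# is independently replaced by the matching teacher stage with
# probability p, so the student learns to be interchangeable with the
# teacher stage by stage.

def divide_into_stages(layers, num_stages):
    """Split a list of layers into equal contiguous stages."""
    size = len(layers) // num_stages
    return [layers[i * size:(i + 1) * size] for i in range(num_stages)]

def forward_stage(x, stage):
    """Run the input through every layer of one stage."""
    for layer in stage:
        x = layer(x)
    return x

def replacement_forward(x, teacher_stages, student_stages, p=0.5, rng=random):
    """Forward pass that randomly substitutes student stages with teacher stages."""
    for t_stage, s_stage in zip(teacher_stages, student_stages):
        stage = t_stage if rng.random() < p else s_stage
        x = forward_stage(x, stage)
    return x

# Toy demo with scalar "layers" standing in for transformer blocks.
teacher_layers = [lambda x, k=k: x + k for k in range(12)]   # 12-layer teacher
teacher_stages = divide_into_stages(teacher_layers, 4)       # 4 stages of 3
student_stages = [[lambda x: x * 1.0] for _ in range(4)]     # 1 layer per stage
out = replacement_forward(0.0, teacher_stages, student_stages, p=1.0)
```

With `p=1.0` every stage comes from the teacher and with `p=0.0` the pass is purely the student; intermediate values mix the two, which is what lets the supervision signal flow through hybrid teacher/student paths during training.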