🤖 AI Summary
Existing visual tracking methods often struggle to model the continuous dynamics of targets due to limited historical frames or insufficient fusion of memory features. To address this, this work proposes Uni-MDTrack, which introduces a Memory-Aware Compression Prompt (MCP) module that efficiently compresses external memory into prompt tokens and enables deep interaction with the backbone network, alongside a Dynamic State Fusion (DSF) module that explicitly models the target's dynamic evolution. The two modules are plug-and-play and parameter-efficient, decoupling memory retention from dynamic modeling. With only about 30% of parameters fine-tuned, Uni-MDTrack achieves state-of-the-art performance across ten benchmarks spanning five modalities, including RGB, RGB-D/T/E, and language-guided tracking, and significantly enhances diverse baseline trackers.
📝 Abstract
With the advent of Transformer-based one-stream trackers that possess strong capability in inter-frame relation modeling, recent research has increasingly focused on how to introduce spatio-temporal context. However, most existing methods rely on a limited number of historical frames, which not only leads to insufficient utilization of the context, but also inevitably increases the input length and incurs prohibitive computational overhead. Methods that query an external memory bank, on the other hand, suffer from inadequate fusion between the retrieved spatio-temporal features and the backbone. Moreover, using discrete historical frames as context overlooks the rich dynamics of the target. To address these issues, we propose Uni-MDTrack, which consists of two core components: a Memory-Aware Compression Prompt (MCP) module and a Dynamic State Fusion (DSF) module. MCP effectively compresses rich memory features into memory-aware prompt tokens, which interact deeply with the input throughout the entire backbone, significantly enhancing performance while maintaining a stable computational load. DSF complements the discrete memory by capturing continuous dynamics, progressively introducing the updated dynamic state features from shallow to deep layers, while also preserving high efficiency. Uni-MDTrack also supports unified tracking across RGB, RGB-D/T/E, and RGB-Language modalities. Experiments show that in Uni-MDTrack, training only the MCP, DSF, and prediction head, keeping the proportion of trainable parameters around 30%, yields substantial performance gains and achieves state-of-the-art results on 10 datasets spanning five modalities. Furthermore, both MCP and DSF exhibit excellent generality, functioning as plug-and-play components that can boost the performance of various baseline trackers, while significantly outperforming existing parameter-efficient training approaches.
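The key property the abstract claims for MCP is that a growing external memory is compressed into a *fixed* number of prompt tokens, so the backbone's input length, and hence its computational load, stays stable. The abstract gives no implementation details, but a minimal sketch of that idea (hypothetical design: learned query tokens cross-attending over the memory bank, in plain NumPy) looks like this:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compress_memory(memory, queries):
    """Cross-attention compression: k learned query tokens attend over
    N memory tokens, producing k prompt tokens regardless of N."""
    d = queries.shape[-1]
    attn = softmax(queries @ memory.T / np.sqrt(d))  # (k, N) attention weights
    return attn @ memory                             # (k, d) prompt tokens

rng = np.random.default_rng(0)
d, k = 64, 8
queries = rng.standard_normal((k, d))  # hypothetical learned prompt queries

# As the memory bank grows across frames, the prompt length never changes.
for n_mem in (16, 256, 4096):
    memory = rng.standard_normal((n_mem, d))
    prompts = compress_memory(memory, queries)
    assert prompts.shape == (k, d)
```

These fixed-length prompt tokens would then be concatenated with the template/search tokens at every backbone layer; the actual MCP module in the paper may differ in how queries are formed and where fusion happens.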