🤖 AI Summary
To address insufficient feature extraction and target drift caused by modality switching in cross-modal object tracking (CMOT), this paper proposes SwiTrack, a three-state switching framework. Methodologically, SwiTrack introduces: (1) a near-infrared (NIR) gated adapter that dynamically modulates a shared latent space to enhance modality-specific representation; (2) a trajectory prediction module integrating spatiotemporal context to improve robustness under invalid modality inputs; and (3) dynamic template reconstruction with similarity alignment loss to ensure cross-modal feature consistency. Evaluated on the latest CMOT benchmark, SwiTrack achieves absolute improvements of 7.2% in precision rate and 4.3% in success rate over state-of-the-art methods, while maintaining real-time performance at 65 FPS. These results demonstrate significant advances in both tracking accuracy and computational efficiency for CMOT.
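The gated-adapter idea summarized above can be sketched minimally: a small bottleneck MLP computes an NIR-specific correction to the shared encoder features, and a learned sigmoid gate decides, per channel, how strongly that correction calibrates the shared latent space. All module and variable names below are hypothetical illustrations (the paper's actual architecture is not reproduced here), shown with NumPy in place of a deep-learning framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GatedAdapter:
    """Hypothetical sketch of a gated adapter: a bottleneck MLP whose output
    is blended into the shared encoder features via a learned sigmoid gate,
    so NIR frames can calibrate the shared latent space without a full branch."""

    def __init__(self, dim, bottleneck):
        # Small random weights stand in for trained parameters.
        self.w_down = rng.standard_normal((dim, bottleneck)) * 0.02
        self.w_up = rng.standard_normal((bottleneck, dim)) * 0.02
        self.w_gate = rng.standard_normal((dim, dim)) * 0.02

    def __call__(self, feats):
        # Bottleneck ReLU MLP produces a modality-specific correction.
        delta = np.maximum(feats @ self.w_down, 0.0) @ self.w_up
        # Per-channel gate in (0, 1) controls how much correction is applied.
        gate = sigmoid(feats @ self.w_gate)
        # Residual, gated calibration of the shared features.
        return feats + gate * delta

tokens = rng.standard_normal((196, 768))  # e.g. ViT patch tokens for one NIR frame
out = GatedAdapter(768, 64)(tokens)
assert out.shape == tokens.shape
```

The residual form means that when the gate saturates near zero, NIR frames fall back to the unmodified shared features, which keeps the adapter lightweight relative to a parallel modality branch.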
📝 Abstract
Cross-modal object tracking (CMOT) is an emerging task that maintains target consistency while the video stream switches between different modalities, with only one modality available in each frame; research has mostly focused on RGB-Near Infrared (RGB-NIR) tracking. Existing methods typically connect parallel RGB and NIR branches to a shared backbone, which limits the comprehensive extraction of distinctive modality-specific features and fails to address the issue of object drift, especially in the presence of unreliable inputs. In this paper, we propose SwiTrack, a novel state-switching framework that redefines CMOT through the deployment of three specialized streams. Specifically, RGB frames are processed by the visual encoder, while NIR frames undergo refinement via an NIR gated adapter coupled with the visual encoder to progressively calibrate shared latent space features, thereby yielding more robust cross-modal representations. For invalid modalities, a consistency trajectory prediction module leverages spatiotemporal cues to estimate target movement, ensuring robust tracking and mitigating drift. Additionally, we incorporate dynamic template reconstruction to iteratively update template features and employ a similarity alignment loss to reinforce feature consistency. Experimental results on the latest benchmarks demonstrate that our tracker achieves state-of-the-art performance, improving precision rate by 7.2% and success rate by 4.3%, while maintaining real-time tracking at 65 frames per second. Code and models are available at https://github.com/xuboyue1999/SwiTrack.git.
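The trajectory-prediction fallback described above can be illustrated with a deliberately simple stand-in: when the current frame's modality is unreliable, extrapolate the target center from recent motion history instead of trusting appearance matching. This constant-velocity sketch is a hypothetical simplification for illustration only, not the paper's consistency trajectory prediction module.

```python
from collections import deque

class TrajectoryPredictor:
    """Hypothetical constant-velocity fallback: keep a short history of target
    centers and, on an invalid-modality frame, extrapolate the next center
    from the average per-frame displacement."""

    def __init__(self, history=5):
        self.centers = deque(maxlen=history)

    def update(self, cx, cy):
        # Record the confirmed target center from a reliable frame.
        self.centers.append((cx, cy))

    def predict(self):
        # With fewer than two observations there is no motion to extrapolate.
        if len(self.centers) < 2:
            return self.centers[-1] if self.centers else None
        (x0, y0), (x1, y1) = self.centers[0], self.centers[-1]
        n = len(self.centers) - 1
        vx, vy = (x1 - x0) / n, (y1 - y0) / n  # mean displacement per frame
        return (x1 + vx, y1 + vy)

tracker = TrajectoryPredictor()
for t in range(4):
    tracker.update(10.0 + 2.0 * t, 5.0)  # target moving 2 px/frame in x
print(tracker.predict())  # → (18.0, 5.0)
```

Bridging brief invalid-modality stretches this way keeps the search region anchored near the true target, which is the drift-mitigation role the module plays in the framework.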