SwiTrack: Tri-State Switch for Cross-Modal Object Tracking

📅 2025-11-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address insufficient feature extraction and target drift caused by modality switching in cross-modal object tracking (CMOT), this paper proposes SwiTrack, a three-state switching framework. Methodologically, SwiTrack introduces: (1) a near-infrared (NIR) gated adapter that dynamically modulates a shared latent space to enhance modality-specific representation; (2) a trajectory prediction module integrating spatiotemporal context to improve robustness under invalid modality inputs; and (3) dynamic template reconstruction with similarity alignment loss to ensure cross-modal feature consistency. Evaluated on the latest CMOT benchmark, SwiTrack achieves absolute improvements of 7.2% in precision rate and 4.3% in success rate over state-of-the-art methods, while maintaining real-time performance at 65 FPS. These results demonstrate significant advances in both tracking accuracy and computational efficiency for CMOT.
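The NIR gated adapter described above residually calibrates shared-encoder features for NIR frames. The paper's actual module operates inside a transformer visual encoder; the sketch below is a minimal stand-in under assumed names and shapes (`gated_adapter`, a bottleneck adapter with an input-dependent scalar gate), not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_adapter(shared_feat, W_down, W_up, gate_w):
    """Gated residual refinement of a shared feature (hypothetical sketch).

    shared_feat : (d,) feature from the shared visual encoder
    W_down, W_up: bottleneck adapter weights, (r, d) and (d, r)
    gate_w      : (d,) weights producing a scalar gate in (0, 1)
    """
    h = np.maximum(W_down @ shared_feat, 0.0)  # down-project + ReLU
    delta = W_up @ h                           # up-project back to d dims
    g = sigmoid(gate_w @ shared_feat)          # input-dependent gate
    return shared_feat + g * delta             # gated residual calibration

rng = np.random.default_rng(0)
d, r = 8, 2
x = rng.standard_normal(d)
out = gated_adapter(x, rng.standard_normal((r, d)),
                    rng.standard_normal((d, r)),
                    rng.standard_normal(d))
```

With a zero adapter the module reduces to the identity, so RGB-trained behavior is preserved when no NIR-specific correction is needed; this is the usual motivation for residual adapters.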

📝 Abstract
Cross-modal object tracking (CMOT) is an emerging task that maintains target consistency while the video stream switches between different modalities, with only one modality available in each frame, mostly focusing on RGB-Near Infrared (RGB-NIR) tracking. Existing methods typically connect parallel RGB and NIR branches to a shared backbone, which limits the comprehensive extraction of distinctive modality-specific features and fails to address the issue of object drift, especially in the presence of unreliable inputs. In this paper, we propose SwiTrack, a novel state-switching framework that redefines CMOT through the deployment of three specialized streams. Specifically, RGB frames are processed by the visual encoder, while NIR frames undergo refinement via a NIR gated adapter coupled with the visual encoder to progressively calibrate shared latent space features, thereby yielding more robust cross-modal representations. For invalid modalities, a consistency trajectory prediction module leverages spatio-temporal cues to estimate target movement, ensuring robust tracking and mitigating drift. Additionally, we incorporate dynamic template reconstruction to iteratively update template features and employ a similarity alignment loss to reinforce feature consistency. Experimental results on the latest benchmarks demonstrate that our tracker achieves state-of-the-art performance, with precision rate and success rate gains of 7.2% and 4.3%, respectively, while maintaining real-time tracking at 65 frames per second. Code and models are available at https://github.com/xuboyue1999/SwiTrack.git.
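The abstract's similarity alignment loss "to reinforce feature consistency" is not spelled out on this page; a cosine-based alignment term is one common instantiation, sketched below as an assumption (the function name and exact form are hypothetical, not taken from the paper).

```python
import numpy as np

def similarity_alignment_loss(f_a, f_b):
    """Cosine-based alignment loss between two feature vectors.

    Returns 0 when the features point in the same direction and
    grows toward 2 as they become anti-aligned.
    """
    cos = np.dot(f_a, f_b) / (np.linalg.norm(f_a) * np.linalg.norm(f_b))
    return 1.0 - cos

# Identical directions incur no penalty; orthogonal features cost 1.
f = np.array([1.0, 2.0, 3.0])
loss_same = similarity_alignment_loss(f, 2.0 * f)   # ~0.0
loss_orth = similarity_alignment_loss(np.array([1.0, 0.0]),
                                      np.array([0.0, 1.0]))  # ~1.0
```

Minimizing such a term pulls the reconstructed template's features toward the current modality's features, which matches the stated goal of cross-modal feature consistency.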
Problem

Research questions and friction points this paper is trying to address.

Addresses object drift in cross-modal RGB-NIR tracking
Handles unreliable inputs through specialized three-stream framework
Enhances feature consistency with dynamic template reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tri-state switching framework with three specialized streams
NIR gated adapter calibrates shared latent space features
Consistency trajectory prediction module mitigates object drift
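The consistency trajectory prediction module above estimates target movement from spatio-temporal cues when the current modality input is invalid. The paper's module is learned; the simplest stand-in, sketched here purely for illustration, is a constant-velocity extrapolation over the last reliable boxes (function name and box format are assumptions).

```python
def predict_box(history):
    """Extrapolate the next (cx, cy, w, h) box from recent reliable frames.

    history: list of (cx, cy, w, h) tuples from past frames.
    Falls back to the last box when fewer than two frames are available.
    """
    if len(history) < 2:
        return history[-1]
    (x0, y0, _, _), (x1, y1, w1, h1) = history[-2], history[-1]
    # Constant-velocity step for the center; carry the last size forward.
    return (2 * x1 - x0, 2 * y1 - y0, w1, h1)

track = [(10, 10, 5, 5), (14, 12, 5, 5)]
pred = predict_box(track)  # -> (18, 14, 5, 5)
```

When the modality becomes valid again, the tracker would resume appearance-based matching; the predicted box only bridges the unreliable frames to keep the trajectory from drifting.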