Robust RGB-T Tracking via Learnable Visual Fourier Prompt Fine-tuning and Modality Fusion Prompt Generation

📅 2025-09-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Existing parameter-efficient fine-tuning (PEFT) methods for RGB-T tracking exploit only spatial-domain prompts, neglecting complementary frequency-domain information and thereby limiting tracking performance. Method: This paper introduces Visual Fourier Prompt Tracking (VFPTrack), the first PEFT framework to incorporate frequency-domain modeling into RGB-T prompt learning. VFPTrack jointly models spatial and frequency representations via the Fast Fourier Transform (FFT); employs a symmetric, weight-shared encoder with bidirectional cross-modal prompting to enable efficient feature interaction between the RGB and thermal modalities; and freezes the backbone entirely, optimizing only lightweight prompt modules. Contribution/Results: Evaluated on three mainstream RGB-T benchmarks, VFPTrack consistently outperforms existing PEFT-based multimodal trackers, achieving state-of-the-art accuracy and robustness while maintaining high parameter efficiency.
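
To make the frequency-domain prompting concrete, below is a minimal PyTorch sketch of how a visual Fourier prompt could combine a learnable spatial prompt with an FFT-filtered frequency prompt. The class name, tensor shapes, filter parameterization, and initialization are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class VisualFourierPrompt(nn.Module):
    """Adds a learnable spatial prompt plus an FFT-derived frequency prompt to token features."""

    def __init__(self, num_tokens: int, dim: int):
        super().__init__()
        # Learnable spatial-domain prompt (hypothetical shape: one offset per token).
        self.spatial_prompt = nn.Parameter(torch.zeros(1, num_tokens, dim))
        # Learnable complex-valued filter applied in the frequency domain (real/imag parts).
        self.freq_filter = nn.Parameter(torch.randn(num_tokens, dim, 2) * 0.02)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, C) patch embeddings from the frozen encoder.
        spec = torch.fft.fft2(tokens, dim=(1, 2), norm="ortho")        # to the frequency domain
        filt = torch.view_as_complex(self.freq_filter)                  # (N, C) complex weights
        freq_prompt = torch.fft.ifft2(spec * filt, dim=(1, 2), norm="ortho").real
        # Combine spatial-domain and frequency-domain prompts with the original tokens.
        return tokens + self.spatial_prompt + freq_prompt
```

The design intuition is that the FFT branch modulates global frequency components of the token features, complementing the purely local, spatial-domain prompt tokens.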

📝 Abstract
Recently, visual prompt tuning has been introduced to RGB-Thermal (RGB-T) tracking as a parameter-efficient fine-tuning (PEFT) method. However, these PEFT-based RGB-T tracking methods typically rely solely on spatial-domain information as prompts for feature extraction, and they often fall short of optimal performance by overlooking the crucial role of frequency-domain information in prompt learning. To address this issue, we propose an efficient Visual Fourier Prompt Tracking method (named VFPTrack) that learns modality-related prompts via the Fast Fourier Transform (FFT). Our method consists of a symmetric feature extraction encoder with shared parameters, visual Fourier prompts, and a modality fusion prompt generator that produces bidirectional interaction prompts through multi-modal feature fusion. Specifically, we first use a frozen feature extraction encoder to extract RGB and thermal infrared (TIR) modality features. Then, we combine the visual prompts in the spatial domain with the frequency-domain prompts obtained from the FFT, which allows modality features to be fully extracted and understood from different domain information. Finally, unlike previous fusion methods, the modality fusion prompt generation module combines features from different modalities to generate a fused modality prompt, which then interacts with each individual modality to fully enable feature interaction across modalities. Extensive experiments on three popular RGB-T tracking benchmarks show that our method achieves outstanding performance.
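
The abstract's modality fusion prompt generator can be pictured with a small sketch: RGB and TIR token features are fused into a shared prompt that is then injected back into both branches for bidirectional interaction. The linear layers and residual injection below are assumptions for illustration, not the paper's exact module.

```python
import torch
import torch.nn as nn


class ModalityFusionPromptGenerator(nn.Module):
    """Fuses RGB and TIR features into a shared prompt and feeds it back to both branches."""

    def __init__(self, dim: int):
        super().__init__()
        self.fuse = nn.Linear(2 * dim, dim)     # fuse the concatenated modalities
        self.to_rgb = nn.Linear(dim, dim)       # prompt projected into the RGB branch
        self.to_tir = nn.Linear(dim, dim)       # prompt projected into the TIR branch

    def forward(self, rgb: torch.Tensor, tir: torch.Tensor):
        # rgb, tir: (B, N, C) features from the shared, frozen encoder.
        fused = self.fuse(torch.cat([rgb, tir], dim=-1))     # fused modality prompt
        rgb_out = rgb + self.to_rgb(fused)                   # bidirectional interaction:
        tir_out = tir + self.to_tir(fused)                   # the same prompt conditions both branches
        return rgb_out, tir_out
```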
Problem

Research questions and friction points this paper is trying to address.

RGB-T tracking methods overlook frequency-domain information in prompt learning
Existing approaches rely solely on spatial-domain information as prompts for feature extraction
Need to improve feature interaction across RGB and thermal infrared modalities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses the Fast Fourier Transform (FFT) to derive frequency-domain prompts
Combines spatial- and frequency-domain visual prompts
Generates a fused modality prompt for bidirectional cross-modal interaction (training setup sketched below)
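
Complementing the two sketches above, here is a brief sketch of the parameter-efficient training setup implied by the summary: the shared encoder stays frozen and only the prompt modules (Fourier prompts and the fusion prompt generator) receive gradients. The optimizer choice and hyperparameters are illustrative assumptions, not values reported in the paper.

```python
import torch
import torch.nn as nn


def build_peft_optimizer(encoder: nn.Module, prompt_modules: nn.Module) -> torch.optim.Optimizer:
    # Freeze the weight-shared feature extraction encoder entirely.
    for p in encoder.parameters():
        p.requires_grad = False
    # Only the lightweight prompt parameters are optimized.
    trainable = list(prompt_modules.parameters())
    return torch.optim.AdamW(trainable, lr=1e-4, weight_decay=1e-4)  # assumed hyperparameters
```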
Hongtao Yang
Key Laboratory of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin 541004, China, and the Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin 541004, China
Bineng Zhong
Key Laboratory of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin 541004, China, and the Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin 541004, China
Qihua Liang
Key Laboratory of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin 541004, China, and the Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin 541004, China
Zhiruo Zhu
Key Laboratory of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin 541004, China, and the Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin 541004, China
Yaozong Zheng
Guangxi Normal University
Visual Tracking · Multimodal Tracking
Ning Li
Key Laboratory of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin 541004, China, and the Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin 541004, China