SSF-Net: Spatial-Spectral Fusion Network with Spectral Angle Awareness for Hyperspectral Object Tracking

📅 2024-03-09
🏛️ IEEE Transactions on Image Processing
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the insufficient exploitation of spectral information and the weak complementarity of multimodal features in hyperspectral video object tracking, this paper proposes a robust tracking framework built on joint spatial-spectral-temporal modeling. It introduces a spectral angle awareness module (SAAM), trained with a dedicated spectral angle awareness loss (SAAL), to measure region-level spectral similarity between the template and search images. It further constructs a spatial-spectral feature backbone (S2FB) and a spectral attention fusion module (SAFM) to strengthen cross-modal (HS-RGB) collaboration and feature complementarity. Extensive experiments show that the method outperforms state-of-the-art trackers on the HOTC-2020, HOTC-2024, and BihoT benchmarks, with notable gains in tracking accuracy and robustness, particularly under challenging conditions such as complex backgrounds and intra-class visual similarity.

📝 Abstract
Hyperspectral video (HSV) offers valuable spatial, spectral, and temporal information simultaneously, making it highly suitable for handling challenges such as background clutter and visual similarity in object tracking. However, existing methods primarily focus on band regrouping and rely on RGB trackers for feature extraction, resulting in limited exploration of spectral information and difficulties in achieving complementary representations of object features. In this paper, a spatial-spectral fusion network with spectral angle awareness (SSF-Net) is proposed for hyperspectral (HS) object tracking. Firstly, to address the issue of insufficient spectral feature extraction in existing networks, a spatial-spectral feature backbone (S2FB) is designed. With its spatial and spectral extraction branches, a joint representation of texture and spectrum is obtained. Secondly, a spectral attention fusion module (SAFM) is presented to capture the intra- and inter-modality correlation and obtain fused features from the HS and RGB modalities. It incorporates visual information into the HS context to form a robust representation. Thirdly, to ensure a more accurate response to the object position, a spectral angle awareness module (SAAM) is designed to investigate the region-level spectral similarity between the template and search images during the prediction stage. Furthermore, a novel spectral angle awareness loss (SAAL) is developed to offer guidance for the SAAM based on similar regions. Finally, to obtain robust tracking results, a weighted prediction method is considered to combine the HS and RGB predicted motions of objects, leveraging the strengths of each modality. Extensive experiments on the HOTC-2020, HOTC-2024, and BihoT datasets demonstrate the effectiveness of the proposed SSF-Net compared with state-of-the-art trackers. The source code will be available at https://github.com/hzwyhc/hsvt.
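The spectral angle underlying the SAAM is, per the abstract, a region-level spectral similarity measure. As a point of reference, the classical spectral angle between two spectral vectors can be sketched as follows; this is a minimal illustration of the general concept, not the paper's exact implementation:

```python
import numpy as np

def spectral_angle(a, b, eps=1e-8):
    """Angle (radians) between two spectral vectors; 0 means identical spectral shape."""
    cos_sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    # Clip to guard against floating-point drift outside arccos's domain.
    return np.arccos(np.clip(cos_sim, -1.0, 1.0))

# Two pixels with proportional spectra have angle ~0 regardless of brightness,
# which is why spectral angle is robust to illumination changes.
a = np.array([0.2, 0.4, 0.6])
print(spectral_angle(a, 2 * a))  # close to 0
```

Because the angle depends only on the direction of the spectral vector, visually similar objects made of different materials can still be separated by their spectral signatures.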
Problem

Research questions and friction points this paper is trying to address.

Insufficient spectral feature extraction in existing networks
Difficulty in achieving complementary object feature representations
Limited exploration of spectral information in object tracking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spatial-spectral feature backbone for joint representation
Spectral attention fusion module for modality correlation
Spectral angle awareness module for accurate tracking
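The abstract also mentions a weighted prediction method that combines the HS and RGB predicted motions. A simple convex combination of two box predictions illustrates the general idea; the weight `w_hs` and the box format here are hypothetical, since the paper's actual weighting scheme is not specified in this summary:

```python
import numpy as np

def fuse_boxes(box_hs, box_rgb, w_hs=0.5):
    """Convex combination of two (x, y, w, h) box predictions.

    w_hs is a hypothetical modality weight in [0, 1]; the paper's
    weighting scheme may differ.
    """
    box_hs = np.asarray(box_hs, dtype=float)
    box_rgb = np.asarray(box_rgb, dtype=float)
    return w_hs * box_hs + (1.0 - w_hs) * box_rgb

# Weight the HS prediction more heavily when spectral cues are reliable.
print(fuse_boxes([10, 10, 40, 40], [14, 10, 44, 40], w_hs=0.75))  # [11. 10. 41. 40.]
```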
👥 Authors

Hanzheng Wang (Staff Machine Vision Engineer, Tesla)
Wei Li (School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China; also with the Beijing Key Laboratory of Fractional Signals and Systems, Beijing 100081, China)
X. Xia (Department of Electrical and Computer Engineering, University of Delaware, Newark, DE 19716, USA)
Q. Du (Department of Electrical and Computer Engineering, Mississippi State University, Starkville, MS 39762, USA)
Jing Tian (National University of Singapore)