SDF-Net: Structure-Aware Disentangled Feature Learning for Optical-SAR Ship Re-identification

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of cross-modal vessel re-identification between optical and synthetic aperture radar (SAR) images, which arises from their inherent radiometric discrepancies. To tackle this issue, the authors propose a structure-aware decoupled feature learning framework that leverages the geometric stability of vessels as a physical prior. Built upon a Vision Transformer backbone, the method introduces a structural consistency constraint to extract scale-invariant gradient energy statistics. In the final stage, it explicitly disentangles modality-invariant identity features from modality-specific characteristics and enhances discriminability through a parameter-free residual fusion mechanism. Evaluated on the HOSS-ReID dataset, the proposed approach significantly outperforms existing methods, and both code and models have been publicly released.

📝 Abstract
Cross-modal ship re-identification (ReID) between optical and synthetic aperture radar (SAR) imagery is fundamentally challenged by the severe radiometric discrepancy between passive optical imaging and coherent active radar sensing. While existing approaches primarily rely on statistical distribution alignment or semantic matching, they often overlook a critical physical prior: ships are rigid objects whose geometric structures remain stable across sensing modalities, whereas texture appearance is highly modality-dependent. In this work, we propose SDF-Net, a Structure-Aware Disentangled Feature Learning Network that systematically incorporates geometric consistency into optical-SAR ship ReID. Built upon a ViT backbone, SDF-Net introduces a structure consistency constraint that extracts scale-invariant gradient energy statistics from intermediate layers to robustly anchor representations against radiometric variations. At the terminal stage, SDF-Net disentangles the learned representations into modality-invariant identity features and modality-specific characteristics. These decoupled cues are then integrated through a parameter-free additive residual fusion, effectively enhancing discriminative power. Extensive experiments on the HOSS-ReID dataset demonstrate that SDF-Net consistently outperforms existing state-of-the-art methods. The code and trained models are publicly available at https://github.com/cfrfree/SDF-Net.
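Two ingredients in the abstract lend themselves to a small illustration: a gradient energy statistic that is invariant to global intensity scaling (and hence to radiometric differences between modalities), and the parameter-free additive residual fusion of the decoupled features. The sketch below is a minimal NumPy illustration of these two ideas only; function names are illustrative and do not come from the released SDF-Net code, which operates on ViT feature maps rather than raw arrays.

```python
import numpy as np

def gradient_energy_stats(feat_map, eps=1e-8):
    """Normalized gradient energy map of a 2-D feature/intensity map.

    Dividing each pixel's gradient energy by the total energy makes the
    statistic invariant to global intensity scaling: f -> a*f multiplies
    numerator and denominator by a**2, leaving the ratio unchanged.
    """
    gy, gx = np.gradient(feat_map.astype(np.float64))
    energy = gx ** 2 + gy ** 2          # per-pixel gradient energy
    return energy / (energy.sum() + eps)

def residual_fusion(identity_feat, specific_feat):
    """Parameter-free additive residual fusion: a plain sum, no learned weights."""
    return identity_feat + specific_feat

# Scale-invariance check: rescaling the input leaves the statistic unchanged.
x = np.random.rand(8, 8)
s1 = gradient_energy_stats(x)
s2 = gradient_energy_stats(3.0 * x)
print(np.allclose(s1, s2, atol=1e-6))  # True
```

Because the fusion is a plain addition, it adds no trainable parameters; the discriminative gain comes from the disentanglement that precedes it, not from the fusion operator itself.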
Problem

Research questions and friction points this paper is trying to address.

cross-modal
ship re-identification
optical-SAR
radiometric discrepancy
geometric structure
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structure-Aware Disentanglement
Cross-modal Ship Re-identification
Geometric Consistency
Modality-Invariant Feature
Gradient Energy Statistics
Furui Chen
Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing 100094, China; Key Laboratory of Space Utilization, Chinese Academy of Sciences, Beijing 100094, China; University of Chinese Academy of Sciences, Beijing 100049, China
Han Wang
Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing 100094, China; Key Laboratory of Space Utilization, Chinese Academy of Sciences, Beijing 100094, China; University of Chinese Academy of Sciences, Beijing 100049, China
Yuhan Sun
Ph.D. student in Computer Science, Arizona State University
Geospatial Graph Databases
Jianing You
Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing 100094, China; Key Laboratory of Space Utilization, Chinese Academy of Sciences, Beijing 100094, China; University of Chinese Academy of Sciences, Beijing 100049, China
Yixuan Lv
Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing 100094, China; Key Laboratory of Space Utilization, Chinese Academy of Sciences, Beijing 100094, China; School of Software, Beihang University, Beijing 100191, China
Zhuang Zhou
Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing 100094, China; Key Laboratory of Space Utilization, Chinese Academy of Sciences, Beijing 100094, China
Hong Tan
Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing 100094, China; Key Laboratory of Space Utilization, Chinese Academy of Sciences, Beijing 100094, China; University of Chinese Academy of Sciences, Beijing 100049, China
Shengyang Li
Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing 100094, China; Key Laboratory of Space Utilization, Chinese Academy of Sciences, Beijing 100094, China; University of Chinese Academy of Sciences, Beijing 100049, China