🤖 AI Summary
To address the underutilization of frequency-domain features in UAV-based multispectral object detection, this paper proposes a Spatial-Frequency Feature Reconstruction (SFFR) method. SFFR leverages the Kolmogorov–Arnold Network (KAN) to achieve the first spatial-frequency dual-domain feature disentanglement: (i) a Frequency Component Exchange KAN (FCEKAN) module enhances frequency complementarity between RGB and infrared modalities; (ii) a Multi-Scale Gaussian KAN (MSGKAN) module captures spatial nonlinearity and scale adaptivity. Evaluated on SeaDroneSee, DroneVehicle, and DVTOD benchmarks, SFFR achieves consistent improvements in detection accuracy—up to +4.2% mAP—while demonstrating strong robustness and cross-scenario generalization. The proposed framework establishes a novel dual-domain collaborative modeling paradigm for multispectral detection, advancing the integration of spatial and spectral information in UAV vision systems.
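The selective frequency component exchange behind FCEKAN can be sketched roughly as follows. This is a minimal illustration only: the function name `frequency_exchange`, the FFT-based low-frequency mask, and the `low_ratio` parameter are assumptions for exposition, not the paper's actual implementation, which operates on learned KAN features rather than raw maps.

```python
import numpy as np

def frequency_exchange(rgb_feat, ir_feat, low_ratio=0.25):
    """Illustrative sketch: swap the low-frequency bands of two
    modality feature maps (2D arrays) in the Fourier domain."""
    # move both feature maps to the frequency domain, DC centered
    F_rgb = np.fft.fftshift(np.fft.fft2(rgb_feat))
    F_ir = np.fft.fftshift(np.fft.fft2(ir_feat))
    h, w = rgb_feat.shape[-2:]
    # boolean mask selecting a central (low-frequency) window
    mask = np.zeros((h, w), dtype=bool)
    ch, cw = h // 2, w // 2
    rh, rw = int(h * low_ratio / 2), int(w * low_ratio / 2)
    mask[ch - rh:ch + rh, cw - rw:cw + rw] = True
    # exchange the selected low-frequency components across modalities
    F_rgb_new = np.where(mask, F_ir, F_rgb)
    F_ir_new = np.where(mask, F_rgb, F_ir)
    # back to the spatial domain (keep the real part)
    rgb_out = np.real(np.fft.ifft2(np.fft.ifftshift(F_rgb_new)))
    ir_out = np.real(np.fft.ifft2(np.fft.ifftshift(F_ir_new)))
    return rgb_out, ir_out
```

With `low_ratio=0` no components are exchanged and each map round-trips through the FFT unchanged; larger ratios trade increasingly large low-frequency bands between the modalities.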
📝 Abstract
Recent multispectral object detection methods have primarily focused on spatial-domain feature fusion based on CNNs or Transformers, while the potential of frequency-domain features remains underexplored. In this work, we propose a novel Spatial and Frequency Feature Reconstruction (SFFR) method, which leverages the spatial-frequency feature representation mechanisms of the Kolmogorov–Arnold Network (KAN) to reconstruct complementary representations in both the spatial and frequency domains prior to feature fusion. The core components of SFFR are the proposed Frequency Component Exchange KAN (FCEKAN) module and the Multi-Scale Gaussian KAN (MSGKAN) module. FCEKAN introduces a selective frequency component exchange strategy that enhances the complementarity and consistency of cross-modal features based on the frequency characteristics of RGB and infrared (IR) images. MSGKAN provides strong nonlinear feature modeling capability in the spatial domain: by leveraging multi-scale Gaussian basis functions, it captures the feature variations caused by scale changes at different UAV flight altitudes, significantly enhancing the model's adaptability and robustness to scale variation. Experiments validate that the proposed FCEKAN and MSGKAN modules are complementary, capturing frequency-domain and spatial semantic features respectively for better feature fusion. Extensive experiments on the SeaDroneSee, DroneVehicle and DVTOD datasets demonstrate the superior performance and significant advantages of the proposed method on UAV multispectral object perception tasks. Code will be available at https://github.com/qchenyu1027/SFFR.
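The multi-scale Gaussian basis idea behind MSGKAN can be illustrated with a toy KAN-style layer. The class name, the fixed basis centers, and the choice of three bandwidth scales below are hypothetical stand-ins for exposition; the actual module learns these within the detection network.

```python
import numpy as np

def gaussian_basis(x, centers, widths):
    """Evaluate K Gaussian radial basis functions at N input points.
    x: (N,), centers: (K,), widths: (K,) -> (N, K) design matrix."""
    diff = x[:, None] - centers[None, :]
    return np.exp(-(diff ** 2) / (2.0 * widths[None, :] ** 2))

class MultiScaleGaussianLayer:
    """Toy KAN-style univariate function: a learnable combination of
    Gaussian bases replicated at several bandwidth scales, so that
    both coarse and fine spatial variations can be represented."""

    def __init__(self, n_basis=8, scales=(0.25, 0.5, 1.0), rng=None):
        rng = rng or np.random.default_rng(0)
        base = np.linspace(-1.0, 1.0, n_basis)
        # same centers replicated once per bandwidth scale
        self.centers = np.tile(base, len(scales))
        self.widths = np.concatenate([np.full(n_basis, s) for s in scales])
        # learnable mixing coefficients (randomly initialized here)
        self.coef = rng.standard_normal(self.centers.size) * 0.1

    def __call__(self, x):
        # phi(x) = sum_k c_k * exp(-(x - mu_k)^2 / (2 * s_k^2))
        return gaussian_basis(x, self.centers, self.widths) @ self.coef
```

Narrow-bandwidth bases respond to fine local structure while wide ones cover coarse trends, which is one plausible reading of how multi-scale Gaussians help accommodate the object-scale shifts induced by varying UAV flight altitudes.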