🤖 AI Summary
To address the challenges of heterogeneous feature fusion between convolutional and Transformer-based representations, high model complexity, and excessive parameter count in optical remote sensing image (ORSI) object segmentation, this paper proposes a dual-perspective fusion Transformer architecture. Our key contributions are: (1) a global-local mixed attention mechanism that jointly captures long-range dependencies and fine-grained spatial structures; (2) a Fourier-space merging strategy that mitigates the representation mismatch between the two heterogeneous feature types for efficient fusion; and (3) a gated linear feed-forward network coupled with a multi-scale decoder for efficient cross-layer feature aggregation and enhancement. Extensive experiments on multiple remote sensing segmentation benchmarks demonstrate significant improvements over state-of-the-art methods, particularly in boundary accuracy and small-object recall. The source code is publicly available.
📝 Abstract
Automatically segmenting objects from optical remote sensing images (ORSIs) is an important task. Most existing models are based primarily on either convolutional or Transformer features, each offering distinct advantages. Exploiting both advantages is a valuable research direction, but it presents several challenges, including the heterogeneity between the two types of features, high model complexity, and a large parameter count. These issues are often overlooked in existing ORSI methods, leading to sub-optimal segmentation. To address this, we propose a novel Dual-Perspective United Transformer (DPU-Former) with a unique structure designed to integrate long-range dependencies and spatial details simultaneously. In particular, we design a global-local mixed attention that captures diverse information from two perspectives, and introduce a Fourier-space merging strategy that avoids deviations between the heterogeneous features for efficient fusion. Furthermore, we present a gated linear feed-forward network to increase expressive capacity. Additionally, we construct a DPU-Former decoder to aggregate and strengthen features across different layers. Consequently, the DPU-Former model outperforms state-of-the-art methods on multiple datasets. Code: https://github.com/CSYSI/DPU-Former.
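To make the two distinctive components concrete, here is a minimal numpy sketch of (a) merging two heterogeneous feature maps in Fourier space with a frequency-dependent weight, and (b) a GLU-style gated linear feed-forward step. This is an illustrative interpretation only: the function names, the per-frequency weighting rule, and the sigmoid gate are assumptions, not the paper's actual implementation (which lives in the linked repository).

```python
import numpy as np

def fourier_merge(conv_feat, trans_feat, freq_weight):
    """Illustrative Fourier-space merging (hypothetical sketch).

    Transforms both feature maps (shape [C, H, W]) to the frequency
    domain, blends their spectra with a per-frequency weight in [0, 1],
    and transforms back to the spatial domain.
    """
    Fc = np.fft.fft2(conv_feat, axes=(-2, -1))   # convolutional-branch spectrum
    Ft = np.fft.fft2(trans_feat, axes=(-2, -1))  # Transformer-branch spectrum
    merged = freq_weight * Fc + (1.0 - freq_weight) * Ft
    return np.real(np.fft.ifft2(merged, axes=(-2, -1)))

def gated_linear_ffn(x, w_value, w_gate, w_out):
    """GLU-style gated linear feed-forward (hypothetical sketch).

    A sigmoid gate branch modulates a linear value branch element-wise
    before projecting back to the model dimension.
    """
    gate = 1.0 / (1.0 + np.exp(-(x @ w_gate)))   # sigmoid gating branch
    return ((x @ w_value) * gate) @ w_out        # gated projection back

# Usage sketch: merge two 4-channel 8x8 feature maps with equal weighting.
a = np.random.rand(4, 8, 8)
b = np.random.rand(4, 8, 8)
merged = fourier_merge(a, b, freq_weight=0.5)
```

With a scalar weight the merge reduces to a spatial average (FFT linearity); a learned per-frequency weight array of shape `[H, W]` is what would make the fusion genuinely frequency-selective.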