Dual-Perspective United Transformer for Object Segmentation in Optical Remote Sensing Images

📅 2025-06-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of heterogeneous feature fusion between convolutional and Transformer-based representations, high model complexity, and excessive parameter count in optical remote sensing image (ORSI) object segmentation, this paper proposes a dual-perspective fusion Transformer architecture. Our key contributions are: (1) a global-local hybrid attention mechanism that jointly captures long-range dependencies and fine-grained spatial structures; (2) a Fourier-domain feature fusion strategy to mitigate representation mismatch between frequency- and spatial-domain features; and (3) a gated linear feed-forward network coupled with a multi-scale decoder for efficient cross-layer feature aggregation and enhancement. Extensive experiments on multiple remote sensing segmentation benchmarks demonstrate significant improvements over state-of-the-art methods—particularly in boundary accuracy and small-object recall. The source code is publicly available.

📝 Abstract
Automatically segmenting objects from optical remote sensing images (ORSIs) is an important task. Most existing models are based primarily on either convolutional or Transformer features, each offering distinct advantages. Exploiting both advantages is a valuable research direction, but it presents several challenges, including the heterogeneity between the two types of features, high model complexity, and large parameter counts. These issues are often overlooked in existing ORSI methods, causing sub-optimal segmentation. To this end, we propose a novel Dual-Perspective United Transformer (DPU-Former) with a unique structure designed to simultaneously integrate long-range dependencies and spatial details. In particular, we design a global-local mixed attention that captures diverse information from two perspectives, and introduce a Fourier-space merging strategy to obviate deviations for efficient fusion. Furthermore, we present a gated linear feed-forward network to increase expressive ability. Additionally, we construct a DPU-Former decoder to aggregate and strengthen features at different layers. Consequently, the DPU-Former model outperforms state-of-the-art methods on multiple datasets. Code: https://github.com/CSYSI/DPU-Former.
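To make two of the abstract's components concrete, here is a minimal NumPy sketch of a gated linear feed-forward network (assumed here to be GLU-style sigmoid gating) and a Fourier-space merging step (assumed here to be averaging in the frequency domain). All shapes, weight names, and the exact formulations are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_linear_ffn(x, w_gate, w_value, w_out):
    """Gated linear feed-forward (assumed GLU-style): the value branch
    is modulated elementwise by a sigmoid gate before the output
    projection. The paper's exact formulation may differ."""
    gate = 1.0 / (1.0 + np.exp(-(x @ w_gate)))   # sigmoid gate in (0, 1)
    return (gate * (x @ w_value)) @ w_out

def fourier_merge(feat_a, feat_b):
    """Fourier-space merging sketch: transform both feature maps to the
    frequency domain, combine them (here: a simple average), and
    transform back to the spatial domain."""
    fa = np.fft.fft2(feat_a)
    fb = np.fft.fft2(feat_b)
    return np.fft.ifft2(0.5 * (fa + fb)).real

# Toy shapes: 8 tokens, hidden dim 16, expansion dim 32.
x   = rng.standard_normal((8, 16))
w_g = rng.standard_normal((16, 32))
w_v = rng.standard_normal((16, 32))
w_o = rng.standard_normal((32, 16))
y = gated_linear_ffn(x, w_g, w_v, w_o)           # -> shape (8, 16)

# Merge two toy 8x8 feature maps (e.g. CNN- and Transformer-derived).
merged = fourier_merge(rng.standard_normal((8, 8)),
                       rng.standard_normal((8, 8)))  # -> shape (8, 8)
```

Merging in the frequency domain is one plausible way to reconcile heterogeneous feature statistics, since each frequency bin mixes information from the whole spatial extent of both inputs.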
Problem

Research questions and friction points this paper is trying to address.

Integrate convolutional and Transformer features for segmentation
Address feature heterogeneity and model complexity issues
Enhance segmentation accuracy in remote sensing images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-Perspective United Transformer for segmentation
Global-local mixed attention captures diverse information
Gated linear feed-forward network enhances expressiveness
Yanguang Sun
PCA Lab, Nanjing University of Science and Technology, Nanjing, China
Jiexi Yan
School of Computer Science and Technology, Xidian University, Xi'an, China
Jianjun Qian
Nanjing University of Science and Technology
Pattern Recognition · Computer Vision · Face Recognition
Chunyan Xu
PCA Lab, Nanjing University of Science and Technology, Nanjing, China
Jian Yang
PCA Lab, Nanjing University of Science and Technology, Nanjing, China
Lei Luo
Kansas State University
Computer Vision · GANs · Image Restoration