GRAD-Former: Gated Robust Attention-based Differential Transformer for Change Detection

📅 2026-03-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of high-resolution remote sensing change detection, where existing methods struggle to accurately delineate changed regions and conventional Transformers suffer from high computational complexity and low data efficiency. To overcome these limitations, the authors propose GRAD-Former, a lightweight and efficient framework that integrates a Selective Embedding Amplification (SEA) module and a Global-Local Feature Refinement (GLFR) module. By incorporating gated mechanisms and differential attention, GRAD-Former effectively selects salient features while suppressing redundancy, thereby enhancing both global and local contextual awareness. Evaluated on three benchmark datasets, LEVIR-CD, CDD, and DSIFN-CD, the proposed method achieves state-of-the-art performance with fewer parameters, significantly improving detection accuracy and computational efficiency and establishing a new benchmark for remote sensing change detection.

📝 Abstract
Change detection (CD) in remote sensing aims to identify semantic differences between satellite images captured at different times. While deep learning has significantly advanced this field, existing approaches based on convolutional neural networks (CNNs), Transformers, and Selective State Space Models (SSMs) still struggle to precisely delineate change regions. In particular, traditional Transformer-based methods suffer from quadratic computational complexity when applied to very high-resolution (VHR) satellite images and often perform poorly with limited training data, leading to under-utilization of the rich spatial information available in VHR imagery. We present GRAD-Former, a novel framework that enhances contextual understanding while maintaining efficiency through reduced model size. The proposed framework consists of a novel encoder with an Adaptive Feature Relevance and Refinement (AFRAR) module, fusion blocks, and decoder blocks. AFRAR integrates global-local contextual awareness through two proposed components: the Selective Embedding Amplification (SEA) module and the Global-Local Feature Refinement (GLFR) module. SEA and GLFR leverage gating mechanisms and differential attention, respectively, which generate multiple softmax attention maps to capture important features while suppressing irrelevant ones. Extensive experiments across three challenging CD datasets (LEVIR-CD, CDD, DSIFN-CD) demonstrate GRAD-Former's superior performance compared to existing approaches. Notably, GRAD-Former outperforms current state-of-the-art models across all metrics and all datasets while using fewer parameters. Our framework establishes a new benchmark for remote sensing change detection performance. Our code will be released at: https://github.com/Ujjwal238/GRAD-Former
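The differential attention the abstract mentions follows the general idea from the Differential Transformer line of work: compute two separate softmax attention maps and subtract one (scaled by a learnable factor) from the other, so that attention noise common to both maps cancels out and salient features remain. The NumPy sketch below illustrates that mechanism in isolation; the weight shapes and the scaling factor `lam` are illustrative assumptions, not details taken from GRAD-Former's GLFR module.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def differential_attention(x, Wq1, Wk1, Wq2, Wk2, Wv, lam=0.5):
    """Differential attention: two softmax maps; subtracting the second
    (scaled by lam) cancels common-mode attention noise."""
    d = Wq1.shape[1]
    a1 = softmax((x @ Wq1) @ (x @ Wk1).T / np.sqrt(d))  # primary attention map
    a2 = softmax((x @ Wq2) @ (x @ Wk2).T / np.sqrt(d))  # noise-estimate map
    return (a1 - lam * a2) @ (x @ Wv)                   # differential output

# Toy usage: 6 tokens, model dim 8, head dim 4 (hypothetical sizes).
rng = np.random.default_rng(0)
n, d_model, d_head = 6, 8, 4
x = rng.standard_normal((n, d_model))
Ws = [rng.standard_normal((d_model, d_head)) for _ in range(5)]
out = differential_attention(x, *Ws)
print(out.shape)  # (6, 4)
```

In the full paper this would sit alongside the gating mechanism of SEA, which weights feature embeddings before attention; here only the attention subtraction is shown.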
Problem

Research questions and friction points this paper is trying to address.

Change Detection
Remote Sensing
Very High-Resolution Images
Computational Complexity
Limited Training Data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gated Attention
Differential Transformer
Change Detection
Remote Sensing
Feature Refinement