🤖 AI Summary
In remote sensing change detection, spurious changes induced by imaging environmental discrepancies severely hinder accurate identification of genuine changes; existing methods often introduce distortions via style transfer and generalize poorly under domain shifts. To address this, the authors propose DonaNet, a domain-agnostic difference learning network. DonaNet employs local-level statistics as style proxies, introduces a domain difference removal module together with an enhanced feature decorrelation mechanism, and adopts a cross-temporal generalization learning strategy that imitates latent domain shifts. By disentangling style, learning domain-invariant representations, and using a lightweight architecture, DonaNet surpasses state-of-the-art methods on three public benchmarks, reducing model parameters by 32% while improving robustness to unseen domains and effectively suppressing spurious changes.
📝 Abstract
Change detection is of essential significance for regional development, in which pseudo-changes between bitemporal images induced by imaging environmental factors are a key challenge. Existing transformation-based methods regard pseudo-changes as a kind of style shift and alleviate them by transforming bitemporal images into the same style using generative adversarial networks (GANs). However, these efforts are limited by two drawbacks: 1) the transformed images suffer from distortion that reduces feature discrimination, and 2) the alignment hampers the model from learning domain-agnostic representations, which degrades performance on scenes whose domains shift from the training data. Therefore, targeting pseudo-changes caused by style differences, we present a generalizable domain-agnostic difference learning network (DonaNet). For drawback 1), we argue for local-level statistics as style proxies to assist against domain shifts. For drawback 2), DonaNet learns domain-agnostic representations by removing the domain-specific style of encoded features and highlighting the class characteristics of objects. For the removal, we propose a domain difference removal module that reduces feature variance while preserving discriminative properties, and an enhanced version that eliminates more style by decorrelating features. For the highlighting, we propose a cross-temporal generalization learning strategy that imitates latent domain shifts, enabling the model to actively extract feature representations more robust to such shifts. Extensive experiments on three public datasets demonstrate that DonaNet outperforms existing state-of-the-art methods with a smaller model size and is more robust to domain shift.
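The abstract treats style as something carried by feature statistics: the removal module reduces feature variance, its enhanced version decorrelates features, and the cross-temporal strategy imitates latent domain shifts. DonaNet's modules are learned networks whose details are not given here; purely as an illustration of the underlying statistical ideas, the sketch below normalizes per-channel statistics (instance-norm style), whitens channel correlations (ZCA), and swaps statistics across the two temporal features (an AdaIN-style shift simulation). All function names and choices are our own assumptions, not the paper's implementation.

```python
import numpy as np

def remove_style(feat, eps=1e-5):
    """Normalize per-channel mean/std of a (C, H, W) feature map.

    Style-transfer literature treats channel statistics as a style proxy;
    normalizing them removes that style component while keeping the
    spatial (content) structure. Illustrative only.
    """
    mu = feat.mean(axis=(1, 2), keepdims=True)
    sigma = feat.std(axis=(1, 2), keepdims=True)
    return (feat - mu) / (sigma + eps)

def decorrelate(feat, eps=1e-5):
    """ZCA-whiten channel correlations, removing style carried in
    the feature covariance (the 'enhanced' idea, in closed form)."""
    c, h, w = feat.shape
    x = feat.reshape(c, -1)
    x = x - x.mean(axis=1, keepdims=True)
    cov = x @ x.T / (h * w)
    vals, vecs = np.linalg.eigh(cov)
    whitener = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return (whitener @ x).reshape(c, h, w)

def swap_style(content, style, eps=1e-5):
    """Re-stylize `content` with the channel statistics of `style`,
    simulating a domain shift between the bitemporal images."""
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return (content - c_mu) / (c_std + eps) * s_std + s_mu

# Toy bitemporal features with deliberately different "styles"
rng = np.random.default_rng(0)
t1 = rng.normal(3.0, 2.0, size=(4, 8, 8))   # pre-change features
t2 = rng.normal(-1.0, 0.5, size=(4, 8, 8))  # post-change, shifted statistics
g = remove_style(t1)   # per-channel mean ~0, std ~1
d = decorrelate(t1)    # channel covariance ~identity
s = swap_style(t1, t2) # t1 content, t2 statistics
```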