🤖 AI Summary
Weak generalization of remote sensing change detection models stems primarily from dataset-specific training, which leaves them brittle under cross-domain distribution shifts and annotation inconsistencies. To address this, we propose CANet, a lightweight adapter network that enables efficient cross-domain transfer via a shared backbone jointly optimized with dataset-specific lightweight modules. Our key contributions are: (1) an interesting change region mask (ICM) that adaptively focuses model attention on genuine change areas, mitigating annotation inconsistency; (2) dataset-specific batch normalization layers that explicitly model domain-wise distribution shifts; and (3) a parameter-efficient fine-tuning strategy that updates only 4.1%-7.7% of parameters. Extensive experiments across multiple public benchmarks demonstrate state-of-the-art generalization, significant accuracy gains in few-shot settings, and plug-and-play compatibility with existing architectures.
📝 Abstract
Deep learning methods have shown promising performance in remote sensing image change detection (CD). However, existing methods usually train a dataset-specific deep network for each dataset. Because data distributions and labeling conventions differ significantly across datasets, such a dataset-specific network generalizes poorly to other datasets. To solve this problem, this paper proposes a change adapter network (CANet) for more universal and generalized CD. CANet contains dataset-shared and dataset-specific learning modules. The former extracts discriminative image features, while the latter is a lightweight adapter that handles the dataset-specific characteristics of data distribution and labeling. The lightweight adapter can quickly generalize the deep network to new CD tasks at a small computational cost. Specifically, this paper proposes an interesting change region mask (ICM) in the adapter, which adaptively focuses on change objects of interest and reduces the influence of labeling differences across datasets. Moreover, CANet adopts a separate batch normalization layer for each dataset to handle differences in data distribution. Compared with existing deep learning methods, CANet achieves satisfactory CD performance on multiple datasets simultaneously. Experimental results on several public datasets verify the effectiveness and advantages of the proposed CANet for CD. CANet offers stronger generalization, lower training cost (updating only 4.1%-7.7% of parameters), and better performance under limited training data than other deep learning methods, and it can be flexibly integrated into existing deep models.
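The dataset-specific batch normalization idea described above can be sketched in a few lines: a shared backbone produces features, and each dataset routes them through its own normalization statistics and affine parameters, so per-domain distribution shifts are absorbed without touching the shared weights. This is a minimal illustrative sketch, not the paper's implementation; the class name, dataset ids, and shapes are all assumptions.

```python
import numpy as np

class DatasetSpecificBN:
    """Hypothetical sketch: one batch-normalization branch per dataset.
    Each dataset keeps its own affine parameters (gamma, beta), while the
    normalization itself uses the current batch's statistics."""

    def __init__(self, num_features, dataset_ids, eps=1e-5):
        self.eps = eps
        # One set of affine parameters per dataset id.
        self.params = {
            d: {"gamma": np.ones(num_features), "beta": np.zeros(num_features)}
            for d in dataset_ids
        }

    def __call__(self, x, dataset_id):
        # Normalize with batch statistics, then apply the
        # dataset-specific affine transform.
        mean = x.mean(axis=0)
        var = x.var(axis=0)
        x_hat = (x - mean) / np.sqrt(var + self.eps)
        p = self.params[dataset_id]
        return p["gamma"] * x_hat + p["beta"]

# Shared-backbone features for two datasets with different distributions
# (dataset "B" is shifted and scaled relative to dataset "A").
rng = np.random.default_rng(0)
feats_a = rng.normal(0.0, 1.0, size=(8, 4))
feats_b = rng.normal(5.0, 3.0, size=(8, 4))

bn = DatasetSpecificBN(num_features=4, dataset_ids=["A", "B"])
out_a = bn(feats_a, "A")
out_b = bn(feats_b, "B")

# Both outputs are zero-mean and unit-variance per feature, regardless
# of each dataset's original distribution.
print(np.allclose(out_a.mean(axis=0), 0.0, atol=1e-6))
print(np.allclose(out_b.mean(axis=0), 0.0, atol=1e-6))
```

In a full model, only these per-dataset branches (plus the adapter) would be trainable while the shared backbone stays frozen, which is what keeps the updated-parameter fraction small.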