🤖 AI Summary
Existing unsupervised domain adaptation (UDA) methods treat masked image modeling (MIM) merely as input perturbation, lacking theoretical grounding and thus limiting its potential for feature extraction and representation learning. To address this, we propose MaskTwins, a novel framework that, for the first time, reformulates MIM from the perspective of sparse signal recovery. We introduce the dual form of complementary masks and theoretically prove that it enhances domain-invariant feature learning and explicitly models cross-domain structural consistency. Our method employs a dual-branch network that jointly optimizes complementary mask reconstruction and feature alignment, enabling end-to-end UDA for semantic segmentation without requiring pretraining. Extensive experiments on both natural and biomedical image segmentation benchmarks demonstrate significant improvements over state-of-the-art UDA baselines, validating the generalizability and effectiveness of MaskTwins.
📄 Abstract
Recent works have correlated Masked Image Modeling (MIM) with consistency regularization in Unsupervised Domain Adaptation (UDA). However, they merely treat masking as a special form of deformation on the input images and neglect theoretical analysis, which leads to a superficial understanding of masked reconstruction and insufficient exploitation of its potential for enhancing feature extraction and representation learning. In this paper, we reframe masked reconstruction as a sparse signal reconstruction problem and theoretically prove that the dual form of complementary masks possesses superior capabilities in extracting domain-agnostic image features. Based on this insight, we propose MaskTwins, a simple yet effective UDA framework that integrates masked reconstruction directly into the main training pipeline. MaskTwins uncovers intrinsic structural patterns that persist across disparate domains by enforcing consistency between predictions of images masked in complementary ways, enabling domain generalization in an end-to-end manner. Extensive experiments verify the superiority of MaskTwins over baseline methods in natural and biological image segmentation. These results demonstrate the significant advantages of MaskTwins in extracting domain-invariant features without the need for separate pre-training, offering a new paradigm for domain-adaptive segmentation.
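To make the complementary-masking idea concrete, here is a minimal NumPy sketch of the core mechanism the abstract describes: an image is masked in two complementary ways (every pixel is visible in exactly one view), and a consistency objective penalizes disagreement between predictions on the two views. The function names and patch-mask construction are illustrative assumptions for exposition, not the authors' actual implementation, which operates on segmentation model outputs within a dual-branch training pipeline.

```python
import numpy as np

def complementary_masks(shape, patch=4, ratio=0.5, seed=0):
    """Sample a random binary patch mask M and its complement 1 - M.

    Every pixel is visible in exactly one of the two masked views,
    so the pair jointly covers the full image (the dual property).
    """
    rng = np.random.default_rng(seed)
    h, w = shape[0] // patch, shape[1] // patch
    m = (rng.random((h, w)) < ratio).astype(np.float32)
    # Upsample the patch-level mask to the pixel grid.
    m = np.kron(m, np.ones((patch, patch), dtype=np.float32))
    return m, 1.0 - m

def consistency_loss(pred_a, pred_b):
    """Mean squared disagreement between predictions on the two views."""
    return float(np.mean((pred_a - pred_b) ** 2))

# Mask an image in two complementary ways; in MaskTwins a segmentation
# model would predict on both views and be trained to agree.
img = np.random.default_rng(1).random((32, 32)).astype(np.float32)
m, m_c = complementary_masks(img.shape)
view_a, view_b = img * m, img * m_c
```

In the full framework the consistency term would be applied to the model's predictions on the two masked views of an unlabeled target-domain image, rather than to the raw pixels as in this toy sketch.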