🤖 AI Summary
This paper addresses the challenging problem of outlier rejection in dense correspondence sets for cross-scene and cross-domain image matching. To this end, the authors propose CorrMoE, a robust correspondence pruning method. Its key contributions are: (1) a style-decoupled dual-branch architecture that explicitly disentangles content and style features to mitigate domain shift; (2) a Bi-Fusion Mixture of Experts (MoE) module that adaptively fuses multi-view graph features; and (3) linear-complexity attention coupled with a dynamic expert routing mechanism, improving both generalization and computational efficiency. Extensive experiments show that the method achieves state-of-the-art performance on multiple cross-domain benchmarks, outperforming existing approaches in both matching accuracy and cross-domain generalization. To foster reproducibility and further research, the source code and pre-trained models are publicly released.
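The summary mentions dynamic expert routing inside the Bi-Fusion MoE but does not spell out the mechanism. As a rough, hypothetical illustration (the paper's actual module, dimensions, and gating design are not given here), a minimal top-k softmax gate selects a few experts per input and blends their outputs:

```python
import numpy as np

def topk_moe(x, expert_weights, gate_weights, k=2):
    """Sketch of top-k expert routing: route feature x to the k
    highest-scoring experts and return their gated combination."""
    logits = x @ gate_weights                 # one gate score per expert
    chosen = np.argsort(logits)[-k:]          # indices of the k best experts
    g = np.exp(logits[chosen] - logits[chosen].max())
    g = g / g.sum()                           # softmax over selected experts only
    # weighted sum of the selected experts' (linear, for illustration) outputs
    out = sum(w * (expert_weights[e] @ x) for w, e in zip(g, chosen))
    return out, chosen

rng = np.random.default_rng(0)
x = rng.normal(size=8)
experts = rng.normal(size=(4, 8, 8))          # 4 toy experts, each an 8x8 map
gates = rng.normal(size=(8, 4))
out, chosen = topk_moe(x, experts, gates, k=2)
```

Because only k of the experts run per input, compute stays roughly constant as experts are added, which is the usual motivation for sparse routing in MoE layers.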
📝 Abstract
Establishing reliable correspondences between image pairs is a fundamental task in computer vision, underpinning applications such as 3D reconstruction and visual localization. Although recent methods have made progress in pruning outliers from dense correspondence sets, they often assume consistent visual domains and overlook the challenges posed by diverse scene structures. In this paper, we propose CorrMoE, a novel correspondence pruning framework that enhances robustness under cross-domain and cross-scene variations. To address domain shift, we introduce a De-stylization Dual Branch, performing style mixing on both implicit and explicit graph features to mitigate the adverse influence of domain-specific representations. For scene diversity, we design a Bi-Fusion Mixture of Experts module that adaptively integrates multi-perspective features through linear-complexity attention and dynamic expert routing. Extensive experiments on benchmark datasets demonstrate that CorrMoE achieves superior accuracy and generalization compared to state-of-the-art methods. The code and pre-trained models are available at https://github.com/peiwenxia/CorrMoE.
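The abstract's "style mixing" on graph features is not specified in detail here. One common way such de-stylization is realized (a hedged sketch only, not necessarily the paper's formulation) is MixStyle-type mixing of per-channel feature statistics, which perturbs the style component while leaving content structure intact:

```python
import numpy as np

def mix_style(content_feat, style_feat, alpha=0.5, eps=1e-5):
    """Sketch of statistic-level style mixing: re-normalize content
    features with channel statistics interpolated between two inputs.
    Features are (channels, points); names and shapes are illustrative."""
    mu_c = content_feat.mean(-1, keepdims=True)
    sig_c = content_feat.std(-1, keepdims=True)
    mu_s = style_feat.mean(-1, keepdims=True)
    sig_s = style_feat.std(-1, keepdims=True)
    # interpolate the style statistics between the two domains
    mu = alpha * mu_c + (1 - alpha) * mu_s
    sig = alpha * sig_c + (1 - alpha) * sig_s
    # strip the content features' own style, then apply the mixed style
    return sig * (content_feat - mu_c) / (sig_c + eps) + mu

rng = np.random.default_rng(1)
c = rng.normal(size=(4, 16))   # toy content features: 4 channels, 16 points
s = rng.normal(size=(4, 16))   # toy features from another "domain"
mixed = mix_style(c, s, alpha=0.5)
```

With `alpha=1.0` the content features pass through nearly unchanged; with `alpha=0.0` they adopt the other input's channel statistics, so training on mixed statistics discourages the network from relying on domain-specific style cues.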