Domain-Shift Immunity in Deep Deformable Registration via Local Feature Representations

📅 2025-12-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deep learning-based registration models are commonly perceived as vulnerable to domain shift, thought to require large-scale, diverse training data for robustness, and criticized for a lack of interpretability. This work shows that their intrinsic cross-domain robustness stems from local feature consistency rather than global appearance matching and, crucially, identifies data bias in early convolutional layers as the primary source of modality sensitivity. To exploit this, we propose UniReg, a general-purpose registration framework that decouples a pre-trained feature extractor (e.g., an ImageNet-pretrained backbone) from a lightweight UNet-based deformation predictor, making the feature-deformation separation explicit. Trained on only a single source dataset, UniReg matches the accuracy of traditional optimization-based methods on unseen modalities and cross-domain data, significantly outperforming end-to-end CNNs, while offering strong generalization, computational efficiency, and improved interpretability through its modular design and feature-level abstraction.

📝 Abstract
Deep learning has advanced deformable image registration, surpassing traditional optimization-based methods in both accuracy and efficiency. However, learning-based models are widely believed to be sensitive to domain shift, with robustness typically pursued through large and diverse training datasets, without explaining the underlying mechanisms. In this work, we show that domain-shift immunity is an inherent property of deep deformable registration models, arising from their reliance on local feature representations rather than global appearance for deformation estimation. To isolate and validate this mechanism, we introduce UniReg, a universal registration framework that decouples feature extraction from deformation estimation using fixed, pre-trained feature extractors and a UNet-based deformation network. Despite training on a single dataset, UniReg exhibits robust cross-domain and multi-modal performance comparable to optimization-based methods. Our analysis further reveals that failures of conventional CNN-based models under modality shift originate from dataset-induced biases in early convolutional layers. These findings identify local feature consistency as the key driver of robustness in learning-based deformable registration and motivate backbone designs that preserve domain-invariant local features.
Problem

Research questions and friction points this paper is trying to address.

Explains the mechanism behind domain-shift immunity in deep deformable registration models.
Attributes cross-domain robustness to local feature representations rather than global appearance.
Traces modality-shift failures of conventional CNNs to dataset-induced biases in early convolutional layers.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses local feature representations, rather than global appearance, for deformation estimation.
Decouples feature extraction from deformation prediction via fixed, pre-trained feature extractors.
Employs a lightweight UNet-based deformation network for robust cross-domain registration.
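The first bullet rests on a simple observation: locally normalized descriptors survive global appearance changes that break raw-intensity matching. A minimal, self-contained toy (not the paper's implementation; all names are illustrative) shows this on a 1-D signal under a global intensity inversion, a crude stand-in for a modality shift:

```python
def normalize(patch):
    """Zero-mean, unit-norm local descriptor of an intensity patch."""
    m = sum(patch) / len(patch)
    centered = [v - m for v in patch]
    n = sum(v * v for v in centered) ** 0.5 or 1.0
    return [v / n for v in centered]

row = [0, 10, 50, 90, 100, 90, 50, 10, 0]   # 1-D "image" row with one peak
inverted = [100 - v for v in row]            # global intensity inversion

patch = row[2:7]          # raw local appearance in "modality A"
patch_inv = inverted[2:7] # same location in "modality B"

desc = normalize(patch)
desc_inv = normalize(patch_inv)

# The local descriptors agree up to sign, although raw intensities do not.
same_up_to_sign = all(abs(a + b) < 1e-9 for a, b in zip(desc, desc_inv))
```

Zero-mean, unit-norm patches from the inverted signal equal the originals up to sign, so a correlation-style matcher sees the same local structure in both "modalities", while raw intensities diverge completely. A fixed, pre-trained extractor plays the analogous role in UniReg: it supplies domain-stable local features on which the UNet deformation predictor operates.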