FractMorph: A Fractional Fourier-Based Multi-Domain Transformer for Deformable Image Registration

📅 2025-08-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Non-rigid medical image registration struggles to jointly model fine-grained local deformations and large-scale global deformations within a unified framework. To address this, the authors propose the first dual-branch 3D Transformer architecture built on the Fractional Fourier Transform (FrFT): parallel FrFT branches at fractional angles of 0°, 45°, and 90°, together with a log-magnitude branch, extract local, semi-global, and global features simultaneously, and a fractional-domain cross-attention mechanism fuses them across the fixed and moving image streams, removing the need for hierarchical multi-scale designs. A lightweight U-Net style decoder then predicts the dense deformation field. On the ACDC cardiac MRI dataset, the method achieves an overall Dice score of 86.45% and a 95th-percentile Hausdorff distance (HD95) of 1.54 mm, outperforming state-of-the-art methods; a lightweight variant with only 29.6M parameters maintains this accuracy while roughly halving the memory footprint.

📝 Abstract
Deformable image registration (DIR) is a crucial and challenging technique for aligning anatomical structures in medical images and is widely applied in diverse clinical applications. However, existing approaches often struggle to capture fine-grained local deformations and large-scale global deformations simultaneously within a unified framework. We present FractMorph, a novel 3D dual-parallel transformer-based architecture that enhances cross-image feature matching through multi-domain fractional Fourier transform (FrFT) branches. Each Fractional Cross-Attention (FCA) block applies parallel FrFTs at fractional angles of 0°, 45°, 90°, along with a log-magnitude branch, to effectively extract local, semi-global, and global features at the same time. These features are fused via cross-attention between the fixed and moving image streams. A lightweight U-Net style network then predicts a dense deformation field from the transformer-enriched features. On the ACDC cardiac MRI dataset, FractMorph achieves state-of-the-art performance with an overall Dice Similarity Coefficient (DSC) of 86.45%, an average per-structure DSC of 75.15%, and a 95th-percentile Hausdorff distance (HD95) of 1.54 mm on our data split. We also introduce FractMorph-Light, a lightweight variant of our model with only 29.6M parameters, which maintains the superior accuracy of the main model while using approximately half the memory. Our results demonstrate that multi-domain spectral-spatial attention in transformers can robustly and efficiently model complex non-rigid deformations in medical images using a single end-to-end network, without the need for scenario-specific tuning or hierarchical multi-scale networks. The source code of our implementation is available at https://github.com/shayankebriti/FractMorph.
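The abstract's core idea is applying the fractional Fourier transform at several angles so that 0° behaves like the spatial (identity) domain, 90° like the ordinary Fourier domain, and 45° like an intermediate domain. A minimal sketch of one such transform, using an eigendecomposition-based fractional power of the unitary DFT matrix rather than the paper's actual implementation (the 1-D signal, the angle-to-order mapping `a = angle / 90`, and the `log1p` magnitude encoding are illustrative assumptions):

```python
import numpy as np

def frft(x, angle_deg):
    """Discrete fractional Fourier transform of a 1-D signal (one convention).

    The unitary DFT matrix F is diagonalized and raised to the fractional
    power a = angle / 90, so 0 deg is the identity (spatial domain) and
    90 deg is the ordinary DFT (frequency domain).
    """
    n = len(x)
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    F = np.exp(-2j * np.pi * j * k / n) / np.sqrt(n)   # unitary DFT matrix
    lam, V = np.linalg.eig(F)                          # F = V diag(lam) V^-1
    a = angle_deg / 90.0
    Fa = V @ np.diag(lam ** a) @ np.linalg.inv(V)      # fractional power F^a
    return Fa @ x

# Parallel branches at 0, 45, and 90 degrees, plus a log-magnitude branch,
# mirroring the multi-domain feature extraction described in the abstract.
x = np.random.default_rng(0).standard_normal(16)
branches = {deg: frft(x, deg) for deg in (0, 45, 90)}
log_mag = np.log1p(np.abs(branches[90]))  # illustrative log-magnitude encoding
```

The 45° branch mixes spatial and spectral structure, which is what lets a single block see local, semi-global, and global content at once; in practice a fast O(n log n) chirp-based FrFT would replace the dense matrix power used here for clarity.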
Problem

Research questions and friction points this paper is trying to address.

Aligning anatomical structures in medical images accurately
Capturing local and global deformations simultaneously
Improving efficiency in deformable image registration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-parallel transformer with multi-domain FrFT branches
Fractional Cross-Attention blocks for multi-scale features
Lightweight U-Net predicts dense deformation fields
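The cross-attention fusion between the fixed and moving image streams can be sketched in plain numpy. This is a single-head, unbatched illustration under assumed shapes (`n_tokens` flattened patch tokens, feature dimension `d`), not the paper's FCA block:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32         # feature dimension (illustrative)
n_tokens = 64  # flattened patch tokens per volume (illustrative)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention(queries_from, keys_values_from, Wq, Wk, Wv):
    """Single-head cross-attention: one stream queries the other."""
    Q = queries_from @ Wq
    K = keys_values_from @ Wk
    V = keys_values_from @ Wv
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))  # scaled dot-product
    return attn @ V

fixed = rng.standard_normal((n_tokens, d))   # features of the fixed image
moving = rng.standard_normal((n_tokens, d))  # features of the moving image
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))

# Each stream attends to the other; the fused features would then feed
# the lightweight U-Net decoder that predicts the dense deformation field.
fixed_fused = cross_attention(fixed, moving, Wq, Wk, Wv)
moving_fused = cross_attention(moving, fixed, Wq, Wk, Wv)
```

In the paper this exchange happens per fractional domain inside each FCA block, so correspondence between the two images is established at several spectral scales before decoding.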