🤖 AI Summary
This work addresses the challenges of large deformations, complex backgrounds, and regional overlap in chest CT image registration—a domain where existing deep learning approaches, primarily designed for brain images, often underperform. To this end, we propose LDRNet, a fast unsupervised deep registration network that employs a coarse-to-fine multi-resolution strategy. LDRNet integrates two novel components: a Refine Block for multi-scale optimization of the deformation field and a Rigid Block that estimates rigid transformations from high-level features. Evaluated on both a private dataset and the public SegTHOR benchmark, LDRNet significantly outperforms state-of-the-art methods—including VoxelMorph, RCN, and LapIRN—achieving superior registration accuracy while maintaining faster inference speed.
📝 Abstract
Most deep-learning-based medical image registration algorithms focus on brain images. Compared with brain registration, chest CT registration involves larger deformations, more complex backgrounds, and region overlap. In this paper, we propose LDRNet, a fast unsupervised deep learning method for large-deformation registration of chest CT images. We first predict a registration field at coarse resolution and then refine it from coarse to fine. We introduce two novel technical components: 1) a refine block that refines the registration field at different resolutions, and 2) a rigid block that learns a rigid transformation matrix from high-level features. We train and evaluate our model on a private dataset and the public SegTHOR dataset, comparing against state-of-the-art traditional registration methods as well as the deep learning models VoxelMorph, RCN, and LapIRN. The results demonstrate that our model achieves state-of-the-art accuracy for large-deformation image registration while being much faster.
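The coarse-to-fine strategy described above can be illustrated with a minimal sketch: a low-resolution displacement field is upsampled level by level, and a per-level refinement is added at each resolution. The function names (`upsample2x`, `coarse_to_fine`) and the simple additive composition are illustrative assumptions for exposition only, not the paper's actual Refine Block, which is a learned network module.

```python
import numpy as np

def upsample2x(field):
    """Nearest-neighbour 2x upsampling of a 2-D displacement field.

    field: array of shape (2, H, W) holding per-pixel (dy, dx) displacements.
    Displacement magnitudes are doubled because the grid spacing halves.
    """
    up = field.repeat(2, axis=1).repeat(2, axis=2)
    return up * 2.0

def coarse_to_fine(coarse_field, refine_fields):
    """Compose a coarse field with per-level refinements (additive sketch).

    In LDRNet the refinement at each level would be predicted by a
    network; here we just take precomputed refinement fields.
    """
    field = coarse_field
    for refine in refine_fields:
        field = upsample2x(field) + refine
    return field

# Toy example: a 4x4 coarse field refined to 8x8, then to 16x16.
coarse = np.ones((2, 4, 4))
r1 = np.zeros((2, 8, 8))        # no correction at the middle level
r2 = np.full((2, 16, 16), 0.5)  # small correction at the finest level
final = coarse_to_fine(coarse, [r1, r2])
print(final.shape)  # (2, 16, 16)
```

Each upsampling step doubles the stored displacement values because a one-voxel shift at the coarse grid corresponds to a two-voxel shift at the next finer grid; the refinement field then only has to model the residual deformation at that scale.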