Biomechanical Constraints Assimilation in Deep-Learning Image Registration: Application to sliding and locally rigid deformations

📅 2025-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional regularization strategies in medical image registration impose global, uniform constraints that fail to capture the spatially non-uniform deformations induced by tissue heterogeneity, and in particular violate biomechanical priors in low-contrast regions. To address this, we propose a spatially adaptive, solid-mechanics-based regularization framework for deep registration that embeds tissue-specific physical constraints (local rigidity, interfacial sliding, and pseudo-elasticity) directly into CNN/U-Net architectures. Our method employs a physics-guided training loss to enable anatomy-aware deformation estimation. Evaluated end-to-end on synthetic and real 3D thoracoabdominal datasets, it reduces deformation error in hard tissues by 32%, improves interfacial sliding localization accuracy by 27%, and significantly enhances both the biomechanical plausibility and the cross-domain generalizability of predicted deformations. The implementation is publicly available.

📝 Abstract
Regularization strategies in medical image registration often take a one-size-fits-all approach, imposing uniform constraints across the entire image domain. Yet biological structures are anything but regular. Lacking structural awareness, these strategies may fail to capture the panoply of spatially inhomogeneous deformation properties that would faithfully account for the biomechanics of soft and hard tissues, especially in poorly contrasted structures. To bridge this gap, we propose a learning-based image registration approach in which the inferred deformation properties locally adapt to trained biomechanical characteristics. Specifically, we first enforce local rigid displacements, shearing motions, or pseudo-elastic deformations during training using regularization losses inspired by solid mechanics. We then show on synthetic and real 3D thoracic and abdominal images that these mechanical properties of different natures generalize well when inferring the deformations between new image pairs. Our approach enables neural networks to infer tissue-specific deformation patterns directly from input images, ensuring mechanically plausible motion. These networks preserve rigidity within hard tissues while allowing controlled sliding in regions where tissues naturally separate, more faithfully capturing physiological motion. The code is publicly available at https://github.com/Kheil-Z/biomechanical_DLIR .
Problem

Research questions and friction points this paper is trying to address.

Adapting deformation properties to biomechanical characteristics locally
Ensuring mechanically plausible motion in image registration
Preserving rigidity in hard tissues while allowing sliding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive local deformation properties learning
Biomechanical regularization losses training
Tissue-specific motion inference networks
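The biomechanical regularization losses listed above can be illustrated with the classic rigidity penalty on the deformation Jacobian: inside a mask of hard tissue, the local Jacobian of the mapping should stay orthogonal (||J^T J - I||_F^2 ≈ 0). The sketch below is a minimal NumPy illustration of that idea, not the authors' implementation; the function name, the 2D setting, and the finite-difference scheme are assumptions for clarity.

```python
import numpy as np

def rigidity_penalty(disp, mask):
    """Masked rigidity penalty on a 2D displacement field.

    disp: (H, W, 2) displacement field u(x); the mapping is phi(x) = x + u(x).
    mask: (H, W) boolean, True where near-rigid motion is expected (e.g. bone).

    Penalizes the deviation of the Jacobian of phi from an orthogonal matrix,
    i.e. ||J^T J - I||_F^2, which vanishes for locally rigid motion
    (translations and rotations) but grows under shearing or scaling.
    """
    H, W, _ = disp.shape
    # Finite-difference spatial gradients of each displacement component.
    du_dy, du_dx = np.gradient(disp[..., 0])
    dv_dy, dv_dx = np.gradient(disp[..., 1])
    # Jacobian of phi = x + u: J = I + grad(u), stored as (H, W, 2, 2).
    J = np.empty((H, W, 2, 2))
    J[..., 0, 0] = 1.0 + du_dx
    J[..., 0, 1] = du_dy
    J[..., 1, 0] = dv_dx
    J[..., 1, 1] = 1.0 + dv_dy
    # Per-pixel deviation from orthogonality: J^T J - I.
    JtJ = np.einsum('...ki,...kj->...ij', J, J)
    dev = JtJ - np.eye(2)
    per_pixel = np.sum(dev ** 2, axis=(-2, -1))
    # Average the penalty only over the hard-tissue mask.
    return float(per_pixel[mask].mean()) if mask.any() else 0.0
```

In a learning setup this term would be one component of the training loss, weighted against the image-similarity term and restricted by segmentation-derived masks, with analogous penalties relaxing the constraint across sliding interfaces.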
Ziad Kheil
INSERM, Oncopole Claudius Regaud
Computer Vision · Deep Learning · Radiotherapy
Soleakhena Ken
Centre de Recherches en Cancérologie de Toulouse, INSERM UMR1037; Institut Universitaire du Cancer – Oncopole Claudius Régaud, 31059 Toulouse, France
Laurent Risser
CNRS - Toulouse Mathematics Institute - ANITI
XAI · surrogate models · bias mitigation in ML · image analysis