PADReg: Physics-Aware Deformable Registration Guided by Contact Force for Ultrasound Sequences

📅 2025-08-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Ultrasound elastography registration under large deformations suffers from poor anatomical consistency and limited biomechanical interpretability due to low image contrast, severe noise, and ill-defined tissue boundaries. To address this, we propose a deformable registration framework incorporating contact-force-guided physical priors: for the first time, we jointly leverage robotically acquired contact forces and pixel-wise stiffness maps to construct a lightweight, Hooke's-law-constrained physics-aware module that indirectly estimates dense deformation fields from multimodal ultrasound images. Our method significantly improves both anatomical alignment accuracy and biomechanical plausibility. Evaluated on an in vivo dataset, it achieves an HD95 of 12.90 mm, 21.34% lower than the prior state of the art, demonstrating its potential for high-precision, interpretable registration in thyroid and breast disease diagnosis.
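The Hooke's-law constraint above can be pictured as a per-pixel linear force-stiffness relation. Below is a minimal NumPy sketch of that intuition; the function name, the scalar-force simplification, and the direct division are illustrative assumptions, not the paper's actual module, which predicts a dense 2-D deformation field from learned features.

```python
import numpy as np

def hooke_displacement(stiffness_map, contact_force, eps=1e-6):
    """Hypothetical per-pixel Hooke's-law estimate: u = F / k.

    stiffness_map : (H, W) array of per-pixel stiffness k, standing in for
                    the pixel-wise stiffness map described in the paper.
    contact_force : scalar contact force from the robotic ultrasound probe.
    Returns an (H, W) displacement-magnitude map; the real module outputs
    a dense deformation field, so this is only the scalar intuition.
    """
    return contact_force / (stiffness_map + eps)

# Toy example: softer tissue (small k) deforms more under the same force.
k = np.array([[1.0, 5.0],
              [10.0, 2.0]])
print(hooke_displacement(k, contact_force=3.0))
```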

📝 Abstract
Ultrasound deformable registration estimates spatial transformations between pairs of deformed ultrasound images, which is crucial for capturing biomechanical properties and enhancing diagnostic accuracy in diseases such as thyroid nodules and breast cancer. However, ultrasound deformable registration remains highly challenging, especially under large deformation. The inherently low contrast, heavy noise, and ambiguous tissue boundaries in ultrasound images severely hinder reliable feature extraction and correspondence matching. Existing methods often suffer from poor anatomical alignment and lack physical interpretability. To address this problem, we propose PADReg, a physics-aware deformable registration framework guided by contact force. PADReg leverages synchronized contact force measured by robotic ultrasound systems as a physical prior to constrain the registration. Specifically, instead of directly predicting deformation fields, we first construct a pixel-wise stiffness map utilizing the multi-modal information from contact force and ultrasound images. The stiffness map is then combined with force data to estimate a dense deformation field, through a lightweight physics-aware module inspired by Hooke's law. This design enables PADReg to achieve physically plausible registration with better anatomical alignment than previous methods relying solely on image similarity. Experiments on in-vivo datasets demonstrate that it attains an HD95 of 12.90 mm, which is 21.34% better than state-of-the-art methods. The source code is available at https://github.com/evelynskip/PADReg.
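For context on the reported metric: HD95 is the 95th-percentile (symmetric) Hausdorff distance between corresponding anatomical contours after registration. A generic sketch of how it is commonly computed is shown below; this is a standard formulation for illustration, not the paper's evaluation code.

```python
import numpy as np
from scipy.spatial.distance import cdist

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two contours.

    points_a, points_b : (N, 2) and (M, 2) arrays of contour coordinates,
    e.g. in mm. The percentile makes the metric robust to outliers,
    unlike the classical max-based Hausdorff distance.
    """
    d = cdist(points_a, points_b)   # pairwise Euclidean distances
    a_to_b = d.min(axis=1)          # each point of A to its nearest in B
    b_to_a = d.min(axis=0)          # each point of B to its nearest in A
    return max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95))

# Toy contours: a unit square vs. the same square shifted by 0.1.
a = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
print(hd95(a, a + 0.1))  # ~0.141
```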
Problem

Research questions and friction points this paper is trying to address.

Estimating spatial transformations between pairs of deformed ultrasound images
Poor anatomical alignment and limited physical interpretability in existing registration methods
Improving registration accuracy using contact force and pixel-wise stiffness maps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Physics-aware deformable registration guided by contact force
Pixel-wise stiffness map from force and ultrasound data
Lightweight physics-aware module inspired by Hooke's law that outputs a dense deformation field (see the warping sketch below)
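To make "dense deformation field" concrete, the warping step that applies such a field to the moving image is sketched below in PyTorch. This is the standard grid-sampling formulation used by many registration networks, assumed here for illustration rather than taken from the PADReg repository.

```python
import torch
import torch.nn.functional as F

def warp(moving, flow):
    """Warp a moving image with a dense per-pixel displacement field.

    moving : (B, C, H, W) image tensor.
    flow   : (B, 2, H, W) displacement in pixels, channels = (dx, dy).
    """
    B, _, H, W = moving.shape
    # Identity sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0)  # (1, 2, H, W)
    new = grid + flow
    # Normalize to [-1, 1], the convention grid_sample expects.
    new_x = 2 * new[:, 0] / (W - 1) - 1
    new_y = 2 * new[:, 1] / (H - 1) - 1
    sample_grid = torch.stack((new_x, new_y), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(moving, sample_grid, align_corners=True)

# Zero flow is the identity warp: the output equals the input image.
img = torch.rand(1, 1, 64, 64)
assert torch.allclose(warp(img, torch.zeros(1, 2, 64, 64)), img, atol=1e-5)
```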
👥 Authors
Yimeng Geng
State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China and School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
Mingyang Zhao
Academy of Mathematics and Systems Science, CAS
Geometric computation · Artificial intelligence · Statistics
Fan Xu
State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China and School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
Guanglin Cao
State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China and School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
Gaofeng Meng
State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China and Center for Artificial Intelligence and Robotics, HK Institute of Science & Innovation, Chinese Academy of Sciences, Hong Kong SAR
Hongbin Liu
State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China, Center for Artificial Intelligence and Robotics, HK Institute of Science & Innovation, Chinese Academy of Sciences, Hong Kong SAR and School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK