Robotic CBCT Meets Robotic Ultrasound

📅 2025-02-17
📈 Citations: 0 (influential: 0)
🤖 AI Summary
Current imaging systems lack sufficient flexibility and mobility, hindering their integration into standardized clinical workflows and autonomous interventional systems. To address this, we propose the first robotic cone-beam CT (CBCT)–ultrasound (US) bimodal imaging system, enabling registration-free, deformation-invariant cross-modal fusion via pre-calibration and rigid dynamic co-registration. Our key contributions are: (1) a robotic bimodal coordination architecture; (2) a Doppler-signal self-prompted SAM2 method for vascular segmentation; and (3) a multimodal mapping framework supporting vessel highlighting, 3D path planning, and real-time needle guidance. Experimental evaluation yields a mean mapping error of 1.72 ± 0.62 mm; needle insertion time, accuracy, and success rate improve by approximately 50%. The system’s efficacy is validated under challenging conditions—including rib occlusion and flow-simulating phantoms—demonstrating robustness in clinically relevant scenarios.
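The "Doppler-signal self-prompted" idea can be illustrated with a minimal sketch: threshold the Doppler magnitude image and use the centroid of the flow region as a point prompt for a promptable segmenter such as SAM2. The function name and thresholding strategy below are assumptions for illustration; the paper does not specify its prompt generation at this level of detail.

```python
import numpy as np

def doppler_prompt_point(doppler_mag, thresh):
    """Return a (row, col) point prompt from a Doppler magnitude image.

    Pixels above `thresh` are treated as flow signal; their centroid
    serves as a foreground point prompt. Returns None when no flow is
    detected. (Illustrative sketch, not the authors' implementation.)
    """
    ys, xs = np.nonzero(doppler_mag > thresh)
    if ys.size == 0:
        return None
    return (float(ys.mean()), float(xs.mean()))
```

In practice one would pass such a point (plus a foreground label) to the segmenter's prompt interface and take the resulting mask as the vessel segmentation on the B-mode image.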

📝 Abstract
The multi-modality imaging system offers optimal fused images for safe and precise interventions in modern clinical practices, such as computed tomography–ultrasound (CT-US) guidance for needle insertion. However, the limited dexterity and mobility of current imaging devices hinder their integration into standardized workflows and the advancement toward fully autonomous intervention systems. In this paper, we present a novel clinical setup where robotic cone beam computed tomography (CBCT) and robotic US are pre-calibrated and dynamically co-registered, enabling new clinical applications. This setup allows registration-free rigid registration, facilitating multi-modal guided procedures in the absence of tissue deformation. First, a one-time pre-calibration is performed between the systems. To ensure a safe insertion path by highlighting critical vasculature on the 3D CBCT, SAM2 segments vessels from B-mode images, using the Doppler signal as an autonomously generated prompt. Based on the registration, the Doppler image or segmented vessel masks are then mapped onto the CBCT, creating an optimally fused image with comprehensive detail. To validate the system, we used a specially designed phantom, featuring lesions covered by ribs and multiple vessels with simulated moving flow. The mapping error between US and CBCT resulted in an average deviation of 1.72 ± 0.62 mm. A user study demonstrated the effectiveness of CBCT-US fusion for needle insertion guidance, showing significant improvements in time efficiency, accuracy, and success rate. Needle intervention performance improved by approximately 50% compared to the conventional US-guided workflow. We present the first robotic dual-modality imaging system designed to guide clinical applications. The results show significant performance improvements compared to traditional manual interventions.
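The "registration-free" mapping described in the abstract amounts to chaining pre-calibrated rigid transforms so that any US-frame point can be expressed in CBCT coordinates without per-scan registration. The frame names in this sketch (CBCT←robot, robot←probe, probe←image) are hypothetical; the actual calibration chain is system-specific.

```python
import numpy as np

def to_homogeneous(points):
    """Convert (N, 3) Cartesian points to (N, 4) homogeneous coordinates."""
    return np.hstack([points, np.ones((points.shape[0], 1))])

def map_us_to_cbct(points_us, T_cbct_robot, T_robot_probe, T_probe_img):
    """Map 3D points from the US image frame into the CBCT frame.

    Each T_* is a 4x4 rigid transform; composing them yields the full
    US-image-to-CBCT mapping. (Illustrative sketch under assumed frame
    names, not the system's actual calibration pipeline.)
    """
    T_cbct_img = T_cbct_robot @ T_robot_probe @ T_probe_img
    return (T_cbct_img @ to_homogeneous(points_us).T).T[:, :3]
```

Because the transforms are rigid, this mapping is valid only in the absence of tissue deformation, matching the abstract's stated assumption.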
Problem

Research questions and friction points this paper is trying to address.

Enhances multi-modal imaging for clinical interventions
Overcomes limited dexterity in current imaging devices
Improves needle insertion accuracy and efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Robotic CBCT and US co-registration
SAM2 vessel segmentation technology
Dynamic multi-modal image fusion
Feng Li
CAMP, Technical University of Munich, Munich, Germany; Munich Center of Machine Learning, Munich, Germany
Yuanwei Bi
CAMP, Technical University of Munich, Munich, Germany; Munich Center of Machine Learning, Munich, Germany
Dianye Huang
Technical University of Munich
robotic ultrasound, medical robot, intelligent control, human-robot interaction
Zhongliang Jiang
University of Hong Kong
Medical Robotics, Ultrasound Imaging, Robot Learning, Surgical Robotics, Human-Robot Interaction
N. Navab
CAMP, Technical University of Munich, Munich, Germany; Munich Center of Machine Learning, Munich, Germany