Anatomical Prior-Driven Framework for Autonomous Robotic Cardiac Ultrasound Standard View Acquisition

πŸ“… 2026-03-22
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This study addresses the challenges of automated acquisition of standard echocardiographic views, a task that depends heavily on operator expertise and is hindered by anatomically inconsistent segmentations in low-texture images and by heuristic- or black-box probe control strategies. To overcome these limitations, the authors propose an end-to-end framework that integrates anatomical priors: a spatial relationship graph (SRG) module embedded in the YOLOv11s segmentation network enforces anatomical consistency, and, for the first time, a Gaussian prior over quantifiable anatomical features is incorporated into the state representation and reward function of a reinforcement learning agent for autonomous probe control. Experimental results show that SRG-YOLOv11s improves mAP50 by 11.3% and mIoU by 6.8% on the Special Case dataset, while the reinforcement learning agent attains standard-view acquisition success rates of 92.5% in simulation and 86.7% in phantom experiments.

πŸ“ Abstract
Cardiac ultrasound diagnosis is critical for cardiovascular disease assessment, but acquiring standard views remains highly operator-dependent. Existing medical segmentation models often yield anatomically inconsistent results in images with poor textural differentiation between distinct feature classes, while autonomous probe adjustment methods rely on either simplistic heuristic rules or black-box learning. To address these issues, we propose an anatomical prior (AP)-driven framework integrating cardiac structure segmentation and autonomous probe adjustment for standard view acquisition. A YOLO-based multi-class segmentation model augmented with a spatial-relation graph (SRG) module embeds the AP into the feature pyramid. Quantifiable anatomical features of standard views are extracted, and their priors are fitted to Gaussian distributions to construct probabilistic APs. The probe adjustment process of robotic ultrasound scanning is formalized as a reinforcement learning (RL) problem, with the RL state built from real-time anatomical features and the reward reflecting the degree of AP matching. Experiments validate the efficacy of the framework: SRG-YOLOv11s improves mAP50 by 11.3% and mIoU by 6.8% on the Special Case dataset, while the RL agent achieves a 92.5% success rate in simulation and 86.7% in phantom experiments.
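The abstract describes fitting Gaussian distributions over quantifiable anatomical features and using the degree of prior matching as the RL reward. The paper's exact features and reward form are not given here, so the following is only a minimal sketch of one plausible realization: each feature's density is normalized by its peak so a perfect match scores 1.0, and the per-feature scores are multiplied. The feature names, means, and standard deviations are hypothetical, not taken from the paper.

```python
import math

# Hypothetical anatomical features of a standard view, with Gaussian priors
# (mean, std) assumed to have been fitted offline. Illustrative values only.
PRIORS = {
    "lv_area_ratio": (0.35, 0.05),
    "septum_angle_deg": (62.0, 4.0),
}

def gaussian_pdf(x, mu, sigma):
    """Probability density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def prior_match_reward(features):
    """Reward in (0, 1]: product over features of the prior density at the
    observed value, each normalized by its peak density, so an exact match
    with all prior means scores 1.0 and deviations decay the reward."""
    reward = 1.0
    for name, (mu, sigma) in PRIORS.items():
        peak = gaussian_pdf(mu, mu, sigma)
        reward *= gaussian_pdf(features[name], mu, sigma) / peak
    return reward

# An observation at the prior means scores 1.0; off-prior views score less.
exact = prior_match_reward({"lv_area_ratio": 0.35, "septum_angle_deg": 62.0})
off = prior_match_reward({"lv_area_ratio": 0.45, "septum_angle_deg": 70.0})
```

Such a bounded, smooth reward is a common choice for RL with probabilistic priors; the paper may combine it with other terms (e.g., step penalties), which are not reproduced here.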
Problem

Research questions and friction points this paper is trying to address.

cardiac ultrasound
standard view acquisition
anatomical inconsistency
autonomous probe adjustment
operator dependency
Innovation

Methods, ideas, or system contributions that make the work stand out.

anatomical prior
spatial-relation graph
reinforcement learning
autonomous robotic ultrasound
standard view acquisition
Zhiyan Cao
State Key Laboratory of Intelligent Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China
Zhengxi Wu
School of Biomedical Engineering, Harbin Institute of Technology (Shenzhen), Shenzhen, Guangdong 518055, China
Yiwei Wang
Huazhong University of Science and Technology
Control Systems, Surgical Robotics, Human-Robot Interaction
Pei-Hsuan Lin
Information Intelligence Lab, Department of Electrical Engineering, National Chung Hsing University, Taichung 402, Taiwan
Li Zhang
Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Jiefang Avenue 1277, Wuhan, Hubei 430022, China
Zhen Xie
Institute of Systems Science (ISS), National University of Singapore (NUS), Singapore 119615, Singapore
Huan Zhao
Huazhong University of Science and Technology
Robotic machining, Robotic assembly, Medical surgical robot
Han Ding
State Key Laboratory of Intelligent Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China