🤖 AI Summary
This study addresses a limitation of existing echocardiographic interpretation methods, which typically rely on perception or reasoning alone and thus fall short of the integrated, coordinated analysis that clinical practice demands. To bridge this gap, we propose EchoAgent, an end-to-end agentic framework that emulates the full clinical workflow by integrating visual perception, manual measurement, and structured expert knowledge in an "eyes-hands-minds" paradigm. The system comprises a cognitive engine, a hierarchical toolkit, and a multimodal reasoning hub, enabling automated view recognition, anatomical segmentation, quantitative measurement, and explainable inference. Evaluated on the CAMUS and MIMIC-EchoQA datasets, which together cover 48 echocardiographic views spanning 14 cardiac anatomical regions, EchoAgent achieves an overall accuracy of up to 80.00%, substantially improving interpretive consistency and clinical utility.
📝 Abstract
Reliable interpretation of echocardiography (Echo) is crucial for assessing cardiac function, and it demands that clinicians synchronously orchestrate multiple capabilities: visual observation (eyes), manual measurement (hands), and expert knowledge and reasoning (minds). While current task-specific deep-learning approaches and multimodal large language models have shown promise in assisting Echo analysis through automated segmentation or reasoning, they remain confined to restricted skill pairings, i.e., eyes-hands or eyes-minds, which limits their clinical reliability and utility. To address these issues, we propose EchoAgent, an agentic system tailored for end-to-end Echo interpretation that realizes a fully coordinated eyes-hands-minds workflow, learning, observing, operating, and reasoning like a cardiac sonographer. First, we introduce an expertise-driven cognition engine with which the agent automatically assimilates credible Echo guidelines into a structured knowledge base, constructing an Echo-customized mind. Second, we devise a hierarchical collaboration toolkit that endows EchoAgent with eyes and hands: it automatically parses Echo video streams, identifies cardiac views, performs anatomical segmentation, and carries out quantitative measurement. Third, we integrate the perceived multimodal evidence with this dedicated knowledge base in an orchestrated reasoning hub that conducts explainable inference. We evaluate EchoAgent on the CAMUS and MIMIC-EchoQA datasets, which cover 48 distinct echocardiographic views spanning 14 cardiac anatomical regions. Experimental results show that EchoAgent achieves the best performance across diverse structural analyses, with an overall accuracy of up to 80.00%. Importantly, EchoAgent equips a single system with the abilities to learn, observe, operate, and reason like an echocardiologist, holding great promise for reliable Echo interpretation.
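To make the eyes-hands-minds coordination concrete, here is a minimal Python sketch of the workflow the abstract describes: view recognition (eyes), segmentation and measurement (hands), and guideline-grounded inference (minds). Every class, function, and threshold below is an illustrative assumption, not EchoAgent's actual interface or the paper's implementation.

```python
# Hypothetical sketch of an eyes-hands-minds interpretation loop.
# All names and values are illustrative stubs, not EchoAgent's API.
from dataclasses import dataclass


@dataclass
class Evidence:
    view: str            # recognized cardiac view
    masks: dict          # anatomical structure -> segmentation mask (stub)
    measurements: dict   # metric name -> measured value


def recognize_view(frames):
    # "Eyes": identify the cardiac view from the parsed video stream (stubbed).
    return "A4C"  # apical four-chamber, for illustration


def segment_anatomy(frames, view):
    # "Eyes/hands": view-conditioned anatomical segmentation (stubbed).
    return {"LV": "mask_lv", "LA": "mask_la"}


def measure(masks):
    # "Hands": quantitative measurement derived from segmentations (stubbed).
    return {"LVEF_percent": 58.0}


def reason(evidence, knowledge_base):
    # "Minds": explainable inference against structured guideline knowledge.
    threshold = knowledge_base["LVEF_normal_min"]
    lvef = evidence.measurements["LVEF_percent"]
    verdict = "normal" if lvef >= threshold else "reduced"
    return (f"View {evidence.view}: LVEF {lvef:.1f}% is {verdict} "
            f"(guideline threshold >= {threshold}%).")


def interpret(frames, knowledge_base):
    # Orchestrate the full perceive -> measure -> reason pipeline.
    view = recognize_view(frames)
    masks = segment_anatomy(frames, view)
    evidence = Evidence(view=view, masks=masks, measurements=measure(masks))
    return reason(evidence, knowledge_base)


if __name__ == "__main__":
    kb = {"LVEF_normal_min": 52}  # illustrative guideline value
    print(interpret(frames=[], knowledge_base=kb))
```

The key design point the sketch mirrors is that reasoning never acts on raw pixels: it consumes structured evidence produced by the perception and measurement tools, together with the guideline-derived knowledge base, which is what makes the final inference explainable.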