🤖 AI Summary
Medical ultrasound images exhibit substantial visual variability due to differences in acquisition parameters, posing significant interpretability and operational challenges for non-expert users—such as frontline healthcare workers. To address this, we propose an ultrasound interpretability enhancement framework tailored for non-experts. Our method introduces semantic scene graphs (SGs) to ultrasound for the first time, enabling structured image representation without explicit object detection. It integrates a Transformer-based single-stage SG generation module, an LLM-driven query-adaptive semantic refinement mechanism, anatomical completeness assessment, and interactive feedback. This enables natural-language–driven image interpretation and real-time scanning guidance. Evaluated on neck ultrasound data from five volunteers, our framework significantly improves non-experts’ accuracy in image content comprehension and adherence to standardized scanning protocols. The approach delivers practical, deployable support for point-of-care ultrasound applications.
📝 Abstract
Understanding medical ultrasound imaging remains a long-standing challenge due to significant visual variability caused by differences in imaging and acquisition parameters. Recent advances in large language models (LLMs) have been used to automatically generate terminology-rich summaries oriented toward clinicians with sufficient physiological knowledge. Nevertheless, the increasing demand for improved ultrasound interpretability and basic scanning guidance among non-expert users, e.g., in point-of-care settings, has not yet been explored. In this study, we first introduce the scene graph (SG) for ultrasound images to explain image content to ordinary users and to provide guidance for ultrasound scanning. The ultrasound SG is first computed using a transformer-based one-stage method, eliminating the need for explicit object detection. To generate a graspable image explanation for ordinary users, the user query is then used to further refine the abstract SG representation through LLMs. Additionally, the predicted SG is explored for its potential to guide ultrasound scanning toward anatomies missing from the current imaging view, assisting ordinary users in achieving more standardized and complete anatomical exploration. The effectiveness of this SG-based image explanation and scanning guidance has been validated on images from the left and right neck regions, including the carotid artery and thyroid, across five volunteers. The results demonstrate the potential of the method to maximally democratize ultrasound by enhancing its interpretability and usability for ordinary users.