🤖 AI Summary
Medical ultrasound imaging faces persistent challenges, including high operator dependency, inter-observer variability, limited spatial resolution, and a scarcity of expert personnel. To address these, this paper introduces the first systematic taxonomy mapping the full clinical ultrasound workflow (scanning guidance, standardized plane localization, and real-time quality control) to reinforcement learning (RL) methodologies. We propose a unified classification framework that aligns ultrasound procedural stages with RL development paradigms. Through a comprehensive review of RL techniques applied across workflow stages, including DQN, PPO, and imitation learning, we identify generalizability, safety, and clinical interpretability as three core challenges. This work bridges a critical theoretical gap in RL-driven fully autonomous ultrasound systems and delineates a clear technical roadmap for advancement, providing both a methodological foundation and practical guidelines for developing next-generation automated ultrasound platforms.
📝 Abstract
Medical Ultrasound (US) imaging has seen increasing demand in recent years, becoming one of the most preferred imaging modalities in clinical practice due to its affordability, portability, and real-time capabilities. However, it faces several challenges that limit its applicability, such as operator dependency, variability in interpretation, and limited resolution, which are amplified by the low availability of trained experts. This motivates the development of autonomous systems capable of reducing dependency on human operators and increasing efficiency and throughput. Reinforcement Learning (RL) is a rapidly advancing field within Artificial Intelligence (AI) that enables the development of autonomous, intelligent agents capable of executing complex tasks through rewarded interactions with their environments. Existing surveys on advancements in the US scanning domain predominantly focus on partially autonomous solutions leveraging AI for scanning guidance, organ identification, plane recognition, and diagnosis. However, no existing survey explores the intersection between the stages of the US process and recent advancements in RL solutions. To bridge this gap, this review proposes a comprehensive taxonomy that integrates the stages of the US process with the RL development pipeline. This taxonomy not only highlights recent RL advancements in the US domain but also identifies unresolved challenges crucial for achieving fully autonomous US systems. This work aims to offer a thorough review of current research efforts, highlighting the potential of RL in building autonomous US solutions while identifying limitations and opportunities for further advancements in this field.
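The core RL idea invoked above, an agent learning a task through rewarded interactions with its environment, can be illustrated with a minimal sketch. The example below is purely hypothetical and not drawn from any system surveyed here: it frames standardized-plane search as a toy 1-D probe-positioning task (real systems use far richer state/action spaces, e.g. 6-DoF probe poses with image observations) and solves it with tabular Q-learning; the environment, reward values, and hyperparameters are all illustrative assumptions.

```python
import random

# Hypothetical toy environment: the probe slides along a 1-D strip of
# discrete positions; reaching the target plane yields +1, every other
# step a small penalty. Not from the survey; for illustration only.
N_POSITIONS = 10
TARGET = 7
ACTIONS = (-1, +1)  # move probe left or right

def step(pos, action):
    """Apply an action; return (next_pos, reward, done)."""
    nxt = max(0, min(N_POSITIONS - 1, pos + action))
    done = nxt == TARGET
    return nxt, (1.0 if done else -0.01), done

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning: learn action values from rewarded interaction."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_POSITIONS)]  # q[pos][action_index]
    for _ in range(episodes):
        pos = rng.randrange(N_POSITIONS)
        for _ in range(100):  # per-episode step limit
            # Epsilon-greedy exploration.
            a = rng.randrange(2) if rng.random() < eps \
                else max((0, 1), key=lambda i: q[pos][i])
            nxt, r, done = step(pos, ACTIONS[a])
            # Q-learning temporal-difference update.
            q[pos][a] += alpha * (r + gamma * max(q[nxt]) - q[pos][a])
            pos = nxt
            if done:
                break
    return q

def greedy_rollout(q, pos, max_steps=50):
    """Follow the learned policy; return steps taken to reach the target."""
    for n in range(max_steps):
        if pos == TARGET:
            return n
        a = max((0, 1), key=lambda i: q[pos][i])
        pos, _, _ = step(pos, ACTIONS[a])
    return max_steps
```

Deep RL methods discussed in this review, such as DQN and PPO, follow the same interaction loop but replace the lookup table with a neural network so the agent can act on high-dimensional inputs like ultrasound frames.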