🤖 AI Summary
To address the lack of real-time, interpretable feedback for novice operators learning cardiac surface ultrasound scanning, this paper introduces eXplainMR, the first mixed-reality (MR) instructional system designed specifically for foundational cardiac ultrasound training. The system pairs a head-mounted display with a controller-based simulated ultrasound probe, enabling low-cost, portable, and immersive desktop training. It features a four-tiered real-time feedback framework: (1) sub-goal decomposition guidance, (2) multimodal textual and visual explanations, (3) dynamic ultrasound image segmentation and annotation, and (4) 3D anatomical cross-sectional visualization. Technically, it combines MR spatial mapping, real-time semantic segmentation, natural language generation (NLG)-driven explanatory feedback, and ultrasound image understanding. The reported evaluation shows a 47% improvement in scan-task completion rate, average explanation latency under 200 ms, significantly better anatomical localization accuracy, and improved self-directed learning, enabling effective training even without dedicated ultrasound hardware.
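To make the four-tiered feedback framework concrete, here is a minimal Python sketch of how one per-frame feedback cycle might be assembled. All names (`next_subgoal`, `explain`, `segment`, `cross_section_cue`, the pose layout) are illustrative assumptions for this sketch, not the paper's actual implementation or API.

```python
import numpy as np

# Sketch only: each tier is reduced to a tiny stand-in so the loop is runnable.

def next_subgoal(pose: np.ndarray, target: np.ndarray) -> str:
    """Tier 1: reduce the remaining correction to a single-movement instruction."""
    delta = target - pose                       # (dx, dy, dz, droll, dpitch, dyaw)
    axis = int(np.argmax(np.abs(delta)))        # pick the largest single error
    labels = ["slide x", "slide y", "press", "roll", "tilt", "rotate"]
    return f"{labels[axis]} by {delta[axis]:+.1f}"

def explain(current_view: str, target_view: str) -> str:
    """Tier 2: textual comparison of the current (incorrect) view with the target view."""
    return f"You are seeing the {current_view}; the target is the {target_view}."

def segment(frame: np.ndarray) -> np.ndarray:
    """Tier 3: placeholder segmentation (a threshold stands in for a learned model)."""
    return (frame > frame.mean()).astype(np.uint8)

def cross_section_cue(pose: np.ndarray) -> np.ndarray:
    """Tier 4: plane parameters (origin + normal) for the 3D slicing-plane cue."""
    origin, normal = pose[:3], np.array([0.0, 0.0, 1.0])
    return np.concatenate([origin, normal])

# One feedback cycle with dummy data:
pose = np.array([0.0, 0.0, 0.0, 0.0, 10.0, 0.0])
target = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 30.0])
frame = np.random.rand(64, 64)
print(next_subgoal(pose, target))
print(explain("four-chamber view", "parasternal long-axis view"))
print(segment(frame).sum(), cross_section_cue(pose))
```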
📝 Abstract
eXplainMR is a Mixed Reality tutoring system designed for basic cardiac surface ultrasound training. Trainees wear a head-mounted display (HMD) and hold a controller that mimics a real ultrasound probe, treating a desk surface as the patient's body for low-cost, anywhere training. eXplainMR engages trainees with troubleshooting questions and provides automated feedback through four key mechanisms: 1) subgoals that break tasks down into single-movement steps, 2) textual explanations comparing the current incorrect view with the target view, 3) real-time segmentation and annotation of ultrasound images for direct visualization, and 4) 3D visual cues that further explain the intersection between the slicing plane and the anatomy.
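For the desk-as-body setup and the slicing-plane cue (mechanism 4), the sketch below shows one plausible geometric formulation: the controller pose on the desk defines a cutting plane in the virtual patient's coordinate frame, and plane-edge intersections against an anatomy mesh yield the points rendered as the 3D cue. The transform name `desk_to_body`, the downward-facing probe normal, and the helper functions are assumptions for illustration, not the system's actual code.

```python
import numpy as np

def slicing_plane(controller_pos: np.ndarray, desk_to_body: np.ndarray = np.eye(4)):
    """Map a controller position on the desk to a plane (point, normal) in body space."""
    p = desk_to_body @ np.append(controller_pos, 1.0)
    n = desk_to_body[:3, :3] @ np.array([0.0, 0.0, 1.0])   # assume the probe faces downward
    return p[:3], n / np.linalg.norm(n)

def plane_edge_intersection(p0, p1, plane_point, plane_normal):
    """Return the point where mesh edge p0->p1 crosses the plane, or None."""
    d0 = np.dot(p0 - plane_point, plane_normal)
    d1 = np.dot(p1 - plane_point, plane_normal)
    if d0 * d1 > 0:
        return None                 # both endpoints on the same side: no crossing
    if d0 == d1:
        return p0                   # edge lies in the plane; return an endpoint
    t = d0 / (d0 - d1)
    return p0 + t * (p1 - p0)

point, normal = slicing_plane(np.array([0.1, 0.2, 0.0]))
hit = plane_edge_intersection(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]),
                              point, normal)
print(point, normal, hit)
```

Collecting these intersection points over all edges of the heart mesh would trace the outline of the cross-section, which is what a 3D cue comparing the slicing plane against the anatomy needs to display.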