eXplainMR: Generating Real-time Textual and Visual eXplanations to Facilitate UltraSonography Learning in MR

📅 2025-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the lack of real-time, interpretable feedback for novice operators learning cardiac surface ultrasound scanning, this paper introduces eXplainMR—the first mixed-reality (MR) instructional system designed specifically for foundational cardiac ultrasound training. The system integrates a head-mounted display with a gesture-based simulated ultrasound probe, enabling low-cost, portable, and immersive desktop training. It features a novel four-tiered real-time feedback framework: (1) sub-goal decomposition guidance, (2) multimodal textual and visual explanations, (3) dynamic ultrasound image segmentation and annotation, and (4) 3D anatomical cross-sectional visualization. Technically, it unifies MR spatial mapping, real-time semantic segmentation, natural language generation (NLG)-driven explanatory feedback, and ultrasound image understanding. Empirical evaluation demonstrates a 47% improvement in scan task completion rate, average explanation latency under 200 ms, significantly enhanced anatomical localization accuracy, and improved self-directed learning efficacy—enabling effective training even without dedicated ultrasound hardware.

📝 Abstract
eXplainMR is a Mixed Reality tutoring system designed for basic cardiac surface ultrasound training. Trainees wear a head-mounted display (HMD) and hold a controller that mimics a real ultrasound probe, treating a desk surface as the patient's body for low-cost training anywhere. eXplainMR engages trainees with troubleshooting questions and provides automated feedback through four key mechanisms: 1) subgoals that break down tasks into single-movement steps, 2) textual explanations comparing the current incorrect view with the target view, 3) real-time segmentation and annotation of ultrasound images for direct visualization, and 4) 3D visual cues that further explain the intersection between the slicing plane and the anatomy.
Problem

Research questions and friction points this paper is trying to address.

Novice operators lack real-time, interpretable feedback when learning cardiac surface ultrasound scanning
Dedicated ultrasound training hardware is costly and ties training to a fixed location
Existing feedback rarely explains why the current view is incorrect or how to correct it
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixed Reality tutoring system
Real-time segmentation and annotation
3D visual cues explanation
Jingying Wang
Ph.D. Candidate of CSE, University of Michigan
Surgical Training · Human-Computer Interaction · AR/VR · Computer Graphics · Machine Learning
Jingjing Zhang
University of Michigan, Ann Arbor, Michigan, USA
Juana Nicoll Capizzano
University of Michigan, Ann Arbor, Michigan, USA
Matthew Sigakis
University of Michigan, Ann Arbor, Michigan, USA
Xu Wang
University of Michigan, Ann Arbor, Michigan, USA
Vitaliy Popov
University of Michigan, Medical School and School of Information
CSCL · learning sciences · learning analytics · simulation-based training