🤖 AI Summary
Background: Prior evaluations of AI-powered ambient scribing systems have largely focused on technical performance, neglecting the real-world patient safety implications of clinical documentation. Method: This study is the first to systematically identify patient safety risks associated with AI scribing, using a mixed-methods approach combining quantitative error analysis and thematic coding, applied to authentic clinician feedback from a large U.S. hospital system. It specifically examines transcription errors affecting critical elements: drug names, dosages, frequencies, and treatment plans. Contribution/Results: Findings reveal that such errors pose tangible risks of medication error and therapeutic mismanagement. Unlike conventional technical assessments, this work adopts a real-world usage perspective, providing empirical evidence on clinical safety impacts previously lacking in the literature. The identified high-risk scenarios offer an evidence base for risk stratification, human-AI collaborative design, and the development of regulatory frameworks governing AI-enabled clinical documentation tools.
📝 Abstract
AI scribes are transforming clinical documentation at scale, yet their real-world performance remains understudied, especially their impact on patient safety. To this end, we conduct a mixed-methods study of patient safety issues raised in feedback submitted by AI scribe users (healthcare providers) in a large U.S. hospital system. Both quantitative and qualitative analyses suggest that AI scribes may introduce a range of patient safety risks due to transcription errors, most significantly involving medication and treatment; however, further study is needed to contextualize the absolute degree of risk.