🤖 AI Summary
This study addresses the challenge that large language models (LLMs) face when transitioning from static question-answering to dynamic decision-making in dental clinical settings, where a disconnect between knowledge and action leads to fragility in multi-turn patient interactions. To this end, the authors propose the Standardized Clinical Management and Performance Evaluation (SCMPE) benchmark, which introduces a two-dimensional evaluation framework integrating guideline adherence and decision quality to systematically assess LLM reliability across both static knowledge tasks and simulated dynamic clinical workflows. The findings reveal that while general-purpose models excel at static tasks, they consistently exhibit a "high efficacy, low safety" risk profile in dynamic dialogues. Retrieval-augmented generation (RAG) mitigates static hallucinations but fails to enhance dynamic reasoning, primarily due to limitations in proactive information gathering and state tracking.
📝 Abstract
The transition of Large Language Models (LLMs) from passive knowledge retrievers to autonomous clinical agents demands a shift in evaluation, from static accuracy to dynamic behavioral reliability. To explore this boundary in dentistry, a domain where high-quality AI advice uniquely empowers patient-participatory decision-making, we present the Standardized Clinical Management & Performance Evaluation (SCMPE) benchmark, which comprehensively assesses performance from knowledge-oriented evaluations (static objective tasks) to workflow-based simulations (multi-turn simulated patient interactions). Our analysis reveals that while models demonstrate high proficiency in static objective tasks, their performance drops precipitously in dynamic clinical dialogues, indicating that the primary bottleneck lies not in knowledge retention but in the critical challenges of active information gathering and dynamic state tracking. Mapping "Guideline Adherence" against "Decision Quality" reveals a prevalent "High Efficacy, Low Safety" risk profile in general-purpose models. Furthermore, we quantify the impact of Retrieval-Augmented Generation (RAG). While RAG mitigates hallucinations in static tasks, its efficacy in dynamic workflows is limited and heterogeneous, sometimes even causing degradation. This underscores that external knowledge alone cannot bridge the reasoning gap without domain-adaptive pre-training. This study empirically charts the capability boundaries of dental LLMs, providing a roadmap for bridging the gap between standardized knowledge and safe, autonomous clinical practice.