🤖 AI Summary
This study investigates, for the first time, the capacity of large language models (LLMs) to act as “co-constructive explainers” in explanatory dialogues, i.e., their ability to adapt explanations dynamically to an explainee's background knowledge and needs. Method: A user study combining prompt-engineered dialogue interventions, pre-/post-dialogue comprehension assessments, multidimensional user-perception questionnaires, and behavioral coding of the dialogues. Results: LLMs show some co-constructive behaviors, such as spontaneously asking verification questions, which foster explainee engagement and can improve understanding of a topic; however, their ability to monitor the explainee's current understanding in real time and to scaffold explanations accordingly, e.g., by adjusting pacing or targeting knowledge gaps, remains limited. Contribution: The work provides an evaluation setup for co-constructive explanation dialogues, empirical evidence of LLMs' emergent yet bounded capabilities for interactive explanation, and design implications for conversational explainable AI.
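As an illustration of what such a prompt-engineered dialogue intervention could look like, here is a minimal Python sketch assuming the OpenAI chat API (openai>=1.0). The system-prompt wording, the `run_dialogue` helper, the model name, and the example topic are illustrative assumptions, not the study's actual materials.

```python
# Hypothetical sketch: a system prompt instructing an LLM to explain a topic
# co-constructively, plus a minimal dialogue loop. Prompt wording, model, and
# topic are illustrative assumptions, not the study's materials.
from openai import OpenAI  # assumes the openai>=1.0 Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CO_CONSTRUCTIVE_SYSTEM_PROMPT = """\
You are an explainer in a dialogue about {topic}.
Explain co-constructively:
- Start by probing the explainee's prior knowledge of the topic.
- Ask short verification questions to check understanding before moving on.
- Adapt pacing, depth, and examples to the explainee's answers.
- Address misunderstandings and knowledge gaps as soon as they appear.
"""

def run_dialogue(topic: str, model: str = "gpt-4o") -> None:
    """Run a co-constructive explanation dialogue on the command line."""
    messages = [{"role": "system",
                 "content": CO_CONSTRUCTIVE_SYSTEM_PROMPT.format(topic=topic)}]
    while True:
        user_turn = input("explainee> ")
        if user_turn.strip().lower() in {"quit", "exit"}:
            break
        messages.append({"role": "user", "content": user_turn})
        response = client.chat.completions.create(model=model, messages=messages)
        reply = response.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        print(f"explainer> {reply}")

if __name__ == "__main__":
    run_dialogue("quantum entanglement")  # illustrative topic
```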
📝 Abstract
The ability to generate explanations that are understood by explainees is the quintessence of explainable artificial intelligence. Since understanding depends on the explainee's background and needs, recent research has focused on co-constructive explanation dialogues, where the explainer continuously monitors the explainee's understanding and adapts explanations dynamically. We investigate the ability of large language models (LLMs) to engage as explainers in co-constructive explanation dialogues. In particular, we present a user study in which explainees interact with LLMs, some of which have been instructed to explain a predefined topic co-constructively. We evaluate the explainees' understanding before and after the dialogue, as well as their perception of the LLMs' co-constructive behavior. Our results indicate that current LLMs show some co-constructive behaviors, such as asking verification questions, that foster the explainees' engagement and can improve understanding of a topic. However, their ability to effectively monitor the explainee's current understanding and scaffold the explanations accordingly remains limited.