Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues

📅 2025-04-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates, for the first time, the capacity of large language models (LLMs) to function as “co-constructive explainers” in explanatory dialogues—specifically, their ability to dynamically adapt explanations to users’ background knowledge and cognitive needs. Method: A user study was conducted, integrating prompt-engineered dialogue interventions, pre-/post-test comprehension assessments, multidimensional user perception questionnaires, and behavioral coding analysis. Results: LLMs spontaneously generate verification questions, enhancing user engagement and comprehension; however, they exhibit significant limitations in modeling explanation pacing, real-time cognitive load, and knowledge gaps—lacking robust metacognitive monitoring and scaffolding capabilities. Contribution: The work establishes a novel evaluation framework for co-constructive explanation, empirically delineates LLMs’ emergent yet bounded capacities in interactive explanation guidance, and provides foundational evidence and design implications for the conversational evolution of explainable AI.

📝 Abstract
The ability to generate explanations that are understood by explainees is the quintessence of explainable artificial intelligence. Since understanding depends on the explainee's background and needs, recent research has focused on co-constructive explanation dialogues, where the explainer continuously monitors the explainee's understanding and adapts explanations dynamically. We investigate the ability of large language models (LLMs) to engage as explainers in co-constructive explanation dialogues. In particular, we present a user study in which explainees interact with LLMs, of which some have been instructed to explain a predefined topic co-constructively. We evaluate the explainees' understanding before and after the dialogue, as well as their perception of the LLMs' co-constructive behavior. Our results indicate that current LLMs show some co-constructive behaviors, such as asking verification questions, that foster the explainees' engagement and can improve understanding of a topic. However, their ability to effectively monitor the current understanding and scaffold the explanations accordingly remains limited.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' ability to co-construct explanations dynamically
Assessing LLMs' effectiveness in monitoring explainee understanding
Measuring impact of LLMs' co-constructive behaviors on comprehension
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs engage in co-constructive explanation dialogues
LLMs adapt explanations dynamically based on understanding
LLMs ask verification questions to improve engagement
👥 Authors

L. Fichtel — Leibniz University Hannover, Institute of Artificial Intelligence
Maximilian Spliethöver — Leibniz University Hannover, Institute of Artificial Intelligence
Eyke Hüllermeier — LMU Munich, MCML
Patricia Jimenez — Paderborn University
N. Klowait — Paderborn University
Stefan Kopp — Bielefeld University, CITEC (Artificial Intelligence, Cognitive Science, Socially Interactive Agents, Artificial Social Intelligence, Conversational Agents)
A. Ngomo — Paderborn University
A. Robrecht — Bielefeld University, CITEC
Ingrid Scharlau — Paderborn University
Lutz Terfloth — Paderborn University
Anna-Lisa Vollmer — Medical School OWL & CITEC, Bielefeld University (Interactive Robot Learning, Human-Robot Interaction, Co-construction, Assistance Systems)
Henning Wachsmuth — Leibniz University Hannover, L3S Research Center (Natural Language Processing, Computational Argumentation, Computational Sociolinguistics, Artificial Intelligence)