SNAPE-PM: Building and Utilizing Dynamic Partner Models for Adaptive Explanation Generation

📅 2025-05-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of dynamically adapting explanation generation in dialogue systems to users’ evolving cognitive states and interaction context, this paper proposes an online-evolving dynamic partner model. We formulate explanation strategy selection as a non-stationary Markov decision process (NS-MDP) and integrate Bayesian inference to enable continuous partner model updating and rational policy adaptation. This work is the first to combine Bayesian inference with NS-MDPs for computational partner modeling and interactive feedback learning. Experiments across five simulated user types demonstrate that our approach autonomously generates personalized explanations, significantly improving explanation adaptivity and response consistency. The framework provides a scalable, principled paradigm for dynamic human–machine explanation interaction.

📝 Abstract
Adapting to the addressee is crucial for successful explanations, yet poses significant challenges for dialogue systems. We adopt the approach of treating explanation generation as a non-stationary decision process, where the optimal strategy varies according to changing beliefs about the explainee and the interaction context. In this paper we address the questions of (1) how to track the interaction context and the relevant listener features in a formally defined computational partner model, and (2) how to utilize this model in a dynamically adjusted, rational decision process that determines the currently best explanation strategy. We propose a Bayesian inference-based approach to continuously update the partner model based on user feedback, and a non-stationary Markov Decision Process to adjust decision-making based on the partner model values. We evaluate an implementation of this framework with five simulated interlocutors, demonstrating its effectiveness in adapting to different partners with constant and even changing feedback behavior. The results show high adaptivity, with distinct explanation strategies emerging for different partners, highlighting the potential of our approach to improve explainable AI systems and dialogue systems in general.
Problem

Research questions and friction points this paper is trying to address.

How to track interaction context and listener features in a computational partner model
How to utilize the partner model for dynamic explanation strategy adjustment
How to adapt explanation generation to different and changing feedback behaviors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian inference updates partner model dynamically
Non-stationary Markov Decision Process adjusts strategies
Simulated evaluation shows high adaptivity effectiveness
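The core loop described above can be sketched minimally: maintain a belief distribution over partner states, update it with Bayes' rule after each feedback signal, and select the explanation strategy with the highest expected success under the current belief. This is an illustrative simplification only; the state names, strategies, and likelihood values below are invented for the example, and the full NS-MDP policy adaptation from the paper is not reproduced here.

```python
# Illustrative sketch of a Bayesian partner-model update with greedy
# strategy selection. All states, strategies, and probabilities are
# hypothetical placeholders, not the paper's actual model.
STATES = ["novice", "intermediate", "expert"]
STRATEGIES = ["simple", "detailed", "technical"]

# Assumed likelihoods P(feedback = understood | partner state, strategy).
LIKELIHOOD = {
    ("novice", "simple"): 0.8, ("novice", "detailed"): 0.4, ("novice", "technical"): 0.1,
    ("intermediate", "simple"): 0.6, ("intermediate", "detailed"): 0.8, ("intermediate", "technical"): 0.4,
    ("expert", "simple"): 0.3, ("expert", "detailed"): 0.6, ("expert", "technical"): 0.9,
}

def update_belief(belief, strategy, understood):
    """Bayesian update of the partner model after one feedback signal."""
    posterior = {}
    for state, prior in belief.items():
        p = LIKELIHOOD[(state, strategy)]
        posterior[state] = prior * (p if understood else 1.0 - p)
    z = sum(posterior.values())
    return {s: v / z for s, v in posterior.items()}

def best_strategy(belief):
    """Greedily pick the strategy maximizing expected understanding."""
    return max(STRATEGIES,
               key=lambda a: sum(belief[s] * LIKELIHOOD[(s, a)] for s in STATES))

belief = {s: 1.0 / len(STATES) for s in STATES}  # uniform prior
# Simulated expert-like partner: "technical" explanations keep succeeding.
for _ in range(5):
    belief = update_belief(belief, "technical", understood=True)
print(best_strategy(belief))  # belief shifts toward "expert" → "technical"
```

A non-stationary partner (the paper's "changing feedback behavior") would simply keep feeding new feedback into `update_belief`, letting the posterior drift and the selected strategy change accordingly.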
Amelie S. Robrecht
Social Cognitive Systems, TRR 318 | Constructing Explainability, Bielefeld University, Germany
Christoph R. Kowalski
Social Cognitive Systems, TRR 318 | Constructing Explainability, Bielefeld University, Germany
Stefan Kopp
Bielefeld University, CITEC
Artificial Intelligence · Cognitive Science · Socially Interactive Agents · Artificial Social Intelligence · Conversational Agents