🤖 AI Summary
Belief-Desire-Intention (BDI) robots performing everyday kitchen cleaning tasks often exhibit behaviors that surprise users, undermining transparency and trust.
Method: This paper proposes a context-aware explanation generation and triggering mechanism integrated into the BDI architecture. It introduces two algorithms embeddable within the BDI reasoning cycle: (i) a dynamic anomaly detection algorithm that identifies explanation-worthy behaviors by jointly modeling user preferences and the agent’s beliefs, desires, and intentions; and (ii) an explanation generation algorithm producing concise, intention-grounded, and context-sensitive explanations—explicitly referencing environmental states and task goals.
Contribution/Results: Experiments demonstrate that users significantly prefer these short, contextualized explanations when encountering unexpected robot behavior, leading to substantial improvements in comprehension and trust. To our knowledge, this is the first work to tightly couple explanation triggering and generation with the core BDI decision-making loop, thereby co-optimizing explainability and autonomous reasoning.
📝 Abstract
When robots perform complex and context-dependent tasks in our daily lives, deviations from expectations can confuse users. Explanations of the robot's reasoning process can help users understand the robot's intentions. However, when to provide explanations and what they should contain are important questions if user annoyance is to be avoided. We have investigated user preferences for explanation demand and content for a robot that helps with daily cleaning tasks in a kitchen. Our results show that users want explanations in surprising situations and prefer concise explanations that clearly state the intention behind the confusing action and the contextual factors that were relevant to this decision. Based on these findings, we propose two algorithms to identify surprising actions and to construct effective explanations for Belief-Desire-Intention (BDI) robots. Our algorithms can be easily integrated into the BDI reasoning process and pave the way for better human-robot interaction with context- and user-specific explanations.
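To make the coupling between explanation triggering and the BDI reasoning cycle concrete, here is a minimal illustrative sketch. All class and method names (`ExplainableBDIAgent`, `is_surprising`, `explain`, `step`) are hypothetical, and the simple set-membership check stands in for the paper's user-preference model; it is not the authors' actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Intention:
    """A committed action, the goal it serves, and the beliefs behind it."""
    action: str
    goal: str
    context: dict

class ExplainableBDIAgent:
    """Toy BDI-style agent that flags surprising actions and explains them."""

    def __init__(self, expected_actions):
        # Hypothetical user model: actions the user expects for each goal.
        self.expected_actions = expected_actions

    def is_surprising(self, intention):
        # Anomaly check: the chosen action deviates from user expectations.
        expected = self.expected_actions.get(intention.goal, set())
        return intention.action not in expected

    def explain(self, intention):
        # Concise explanation: state the intention and the relevant context.
        ctx = ", ".join(f"{k} is {v}" for k, v in intention.context.items())
        return (f"I chose to {intention.action} to achieve '{intention.goal}' "
                f"because {ctx}.")

    def step(self, intention):
        # Reasoning-cycle hook: explain only when the action is surprising.
        if self.is_surprising(intention):
            return self.explain(intention)
        return None

# Example: moving a plant while cleaning is unexpected, so it is explained;
# wiping the counter is expected, so no explanation is triggered.
agent = ExplainableBDIAgent({"clean the kitchen": {"wipe counter"}})
surprising = Intention("move the plant", "clean the kitchen",
                       {"counter": "blocked by the plant"})
print(agent.step(surprising))
```

The key design point mirrored here is that the trigger check runs inside the agent's own deliberation step, so explanation demand is evaluated with direct access to the current beliefs and intentions rather than by an external observer.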