🤖 AI Summary
In long-term human-robot interaction (HRI), dynamically adapting explanations to a user's evolving knowledge, i.e., personalised eXplainable HRI (XHRI), remains an open challenge. This paper proposes a framework grounded in user knowledge-memory modelling: it maintains an updateable, retrievable memory of the concepts each user has already acquired, and uses large language models (LLMs) to condition the level of detail of explanations on that memory, referencing prior concepts instead of re-explaining them. Three LLM-based architectures built on the framework are evaluated in two real-world scenarios, a hospital patrolling robot and a kitchen assistant robot. A two-stage architecture that first generates an explanation and then personalises it proves most effective, reducing the level of detail only when related user knowledge exists. The framework thus offers a scalable, cognition-driven path toward sustained XHRI.
📝 Abstract
In the field of Human-Robot Interaction (HRI), a fundamental challenge is to facilitate human understanding of robots. The emerging domain of eXplainable HRI (XHRI) investigates methods to generate explanations and evaluate their impact on human-robot interactions. Previous work has highlighted the need to personalise the level of detail of these explanations to enhance usability and comprehension. Our paper presents a framework for updating and retrieving user knowledge-memory models, which allows the level of detail of explanations to be adapted while referencing previously acquired concepts. Three architectures based on the proposed framework, all using Large Language Models (LLMs), are evaluated in two distinct scenarios: a hospital patrolling robot and a kitchen assistant robot. Experimental results show that a two-stage architecture, which first generates an explanation and then personalises it, is the most effective, as it reduces the level of detail only when related user knowledge exists.
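The interplay the abstract describes, retrieving a user's knowledge memory, generating a full explanation, and then personalising it, can be sketched as follows. This is a minimal illustrative sketch only: the class and function names (`KnowledgeMemory`, `generate`, `personalise`), the dictionary of concept glosses, and the string-joining stand-ins for LLM calls are all assumptions for illustration, not the authors' implementation.

```python
class KnowledgeMemory:
    """Illustrative stand-in for the user knowledge-memory model:
    remembers which concepts have already been explained to this user."""

    def __init__(self):
        self._known = set()

    def update(self, concepts):
        # Record concepts the robot has just explained.
        self._known.update(concepts)

    def retrieve(self, concepts):
        # Return the subset of concepts this user already knows.
        return {c for c in concepts if c in self._known}


def generate(action, concept_glosses):
    # Stage 1: a full, user-agnostic explanation. A real system would
    # prompt an LLM here; joining concept glosses is a crude stand-in.
    details = [f"{c} ({gloss})" for c, gloss in concept_glosses.items()]
    return f"I am {action}: " + ", ".join(details)


def personalise(action, concept_glosses, known):
    # Stage 2: keep detailed glosses only for concepts the user has not
    # yet seen; known concepts are merely referenced by name.
    details = [c if c in known else f"{c} ({gloss})"
               for c, gloss in concept_glosses.items()]
    return f"I am {action}: " + ", ".join(details)


def explain(action, concept_glosses, memory):
    known = memory.retrieve(concept_glosses)            # look up prior knowledge
    _full = generate(action, concept_glosses)           # stage 1: generate
    tailored = personalise(action, concept_glosses, known)  # stage 2: personalise
    memory.update(concept_glosses)                      # remember what was explained
    return tailored


mem = KnowledgeMemory()
glosses = {"patrolling": "visiting each room in turn"}
print(explain("patrolling", glosses, mem))  # first time: includes the gloss
print(explain("patrolling", glosses, mem))  # second time: gloss omitted
```

The key property matches the paper's finding for the two-stage architecture: detail is reduced only for concepts with related user knowledge; unseen concepts keep their full gloss.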