🤖 AI Summary
This work addresses the inefficiency in robot plan verbalization caused by neglecting users’ prior knowledge. We propose an information-gain–based formulation strategy that explicitly models users’ second-order theory of mind—i.e., their beliefs about the robot’s knowledge—to quantify the informational value of each action in a plan. Based on this quantification, we optimize the presentation order of actions (e.g., ascending or descending information content) to maximize communicative efficiency. Experimental results demonstrate that our approach significantly accelerates users’ comprehension of the robot’s goal compared to conventional time-ordered or fixed-sequence baselines. This validates the effectiveness of “informativeness-driven expression” in human–robot collaboration and establishes a novel paradigm for explainable interaction with embodied agents. The key contribution lies in the first integration of second-order mental state modeling into plan verbalization, enabling adaptive, user-aware information structuring grounded in principled information-theoretic criteria.
📝 Abstract
When a robot is asked to verbalize its plan, it can do so in many ways. A seemingly natural strategy is incremental: the robot verbalizes its planned actions in plan order. However, such a strategy misses what is actually informative to communicate, because it does not account for what the user already knows before the explanation. In this paper we propose a verbalization strategy that communicates robot plans informatively, by measuring the information gain of verbalizations against a second-order theory of mind of the user, which captures the user's prior knowledge about the robot. As our experiments show, this strategy lets users understand the robot's goal much more quickly than strategies such as increasing or decreasing plan order. In addition, our formulation hints at what is informative, and why, when a robot communicates its plan.
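The core idea of ranking actions by informativeness can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual formulation: it assumes a hypothetical user belief over two candidate goals (`goal_A`, `goal_B`) and made-up likelihoods of each action under each goal, scores each planned action by the entropy reduction its verbalization would cause in that belief, and verbalizes in decreasing information gain rather than plan order.

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a probability distribution given as a dict."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def posterior(prior, likelihood, action):
    """Bayes update of the user's belief over goals after hearing one action."""
    unnorm = {g: prior[g] * likelihood[g].get(action, 1e-9) for g in prior}
    z = sum(unnorm.values())
    return {g: v / z for g, v in unnorm.items()}

def info_gain(prior, likelihood, action):
    """Entropy reduction in the user's goal belief caused by verbalizing `action`."""
    return entropy(prior) - entropy(posterior(prior, likelihood, action))

# Hypothetical example: a uniform user belief over two goals, and
# invented action likelihoods P(action | goal) standing in for the
# user's model of the robot.
prior = {"goal_A": 0.5, "goal_B": 0.5}
likelihood = {
    "goal_A": {"pick_cup": 0.9, "move_left": 0.5, "open_door": 0.1},
    "goal_B": {"pick_cup": 0.1, "move_left": 0.5, "open_door": 0.9},
}
plan = ["move_left", "pick_cup", "open_door"]

# Verbalize the most informative actions first instead of plan order.
ordered = sorted(plan, key=lambda a: info_gain(prior, likelihood, a), reverse=True)
print(ordered)  # "move_left" carries zero gain here, so it comes last
```

Here `move_left` is equally likely under both goals, so verbalizing it leaves the user's belief unchanged (zero gain), while `pick_cup` and `open_door` each strongly disambiguate the goal; an informativeness-driven ordering therefore surfaces them first.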