🤖 AI Summary
This work addresses the challenge of unintelligible robot behavior in human environments, particularly under data-scarce conditions, which hinders accurate human understanding of a robot's competence and intentions. To enhance interpretability, the authors propose a causal Bayesian network–based modeling approach that replaces purely associative reasoning with causal inference, enabling explainable predictions of human perception and the generation of counterfactual robot behaviors. They further introduce a causally guided combinatorial search that identifies navigation behaviors with higher perceived competence. Experimental results show F1 scores of 0.78 and 0.75 for competence and intention prediction, respectively, and an online user study reveals a statistically significant 83% increase in the perceived competence of low-competence robot behaviors.
📝 Abstract
As mobile robots are increasingly deployed in human environments, enabling them to predict how people perceive them is critical for socially adaptable navigation. Predicting perceptions is challenging for two main reasons: (1) HRI prediction models must learn from limited data, and (2) the obtained models must be interpretable to enable safe and effective interactions. Interpretability is particularly important when a robot is perceived as incompetent (e.g., when the robot suddenly stops or rotates away from the goal), as it allows the robot to explain its reasoning and identify controllable factors to improve performance, requiring causal rather than associative reasoning. To address these challenges, we propose a Causal Bayesian Network designed to predict how people perceive a mobile robot's competence and how they interpret its intent during navigation. Additionally, we introduce a novel method to improve perceived robot competence: a combinatorial search, guided by the proposed causal model, that identifies better navigation behaviors. Our method enhances interpretability and generates counterfactual robot motions while achieving comparable or superior predictive performance to state-of-the-art methods, reaching F1-scores of 0.78 and 0.75 for competence and intention on a binary scale. To further assess our method's ability to improve perceived robot competence, we conducted an online evaluation in which users rated robot behaviors on a 5-point Likert scale. Our method yielded a statistically significant 83% increase in the perceived competence of low-competence robot behaviors.
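To illustrate the core idea of the search step, the following is a minimal sketch of a causally guided combinatorial search: enumerate every combination of controllable behavior factors and score each with a model of perceived competence. The factor names, their values, and the probability model here are all illustrative assumptions, not the paper's actual causal Bayesian Network or its learned conditional probability tables.

```python
from itertools import product

# Hypothetical controllable behavior factors (illustrative only; the paper's
# actual causal variables and CPTs are not specified here).
FACTORS = {
    "speed":   ["slow", "normal"],
    "pausing": ["frequent", "rare"],
    "heading": ["away_from_goal", "toward_goal"],
}

def p_high_competence(speed, pausing, heading):
    """Toy stand-in for the causal model's prediction
    P(perceived competence = high | behavior factors)."""
    p = 0.5
    p += 0.15 if speed == "normal" else -0.15
    p += 0.15 if pausing == "rare" else -0.15
    p += 0.20 if heading == "toward_goal" else -0.20
    return min(max(p, 0.0), 1.0)

def best_behavior():
    """Combinatorial search: score every factor combination with the
    predictive model and return the highest-scoring behavior."""
    candidates = product(*FACTORS.values())
    scored = [(p_high_competence(*c), dict(zip(FACTORS, c))) for c in candidates]
    return max(scored, key=lambda t: t[0])

score, behavior = best_behavior()
print(behavior, round(score, 2))
```

In the paper's setting, the scoring function would be replaced by inference in the learned Causal Bayesian Network, so the selected behavior doubles as a counterfactual ("what should the robot have done to appear more competent").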