A Causal Approach to Predicting and Improving Human Perceptions of Social Navigation Robots

📅 2026-03-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of unintelligible robot behavior in human environments, particularly under data-scarce conditions, which hinders accurate human understanding of a robot's competence and intentions. To enhance interpretability, the authors propose a causal Bayesian network–based modeling approach that replaces conventional associative reasoning with causal inference, enabling explainable predictions of human perception and the generation of counterfactual robot behaviors. They further introduce a causally guided combinatorial search strategy to optimize socially compliant navigation. Experiments yield F1-scores of 0.78 and 0.75 for competence and intent prediction, respectively, and a user evaluation shows an 83% increase in the perceived competence of low-competence robot behaviors.

📝 Abstract
As mobile robots are increasingly deployed in human environments, enabling them to predict how people perceive them is critical for socially adaptable navigation. Predicting perceptions is challenging for two main reasons: (1) HRI prediction models must learn from limited data, and (2) the obtained models must be interpretable to enable safe and effective interactions. Interpretability is particularly important when a robot is perceived as incompetent (e.g., when the robot suddenly stops or rotates away from the goal), as it allows the robot to explain its reasoning and identify controllable factors to improve performance, requiring causal rather than associative reasoning. To address these challenges, we propose a Causal Bayesian Network designed to predict how people perceive a mobile robot's competence and how they interpret its intent during navigation. Additionally, we introduce a novel method to improve perceived robot competence employing a combinatorial search, guided by the proposed causal model, to identify better navigation behaviors. Our method enhances interpretability and generates counterfactual robot motions while achieving comparable or superior predictive performance to state-of-the-art methods, reaching F1-scores of 0.78 and 0.75 for competence and intention on a binary scale. To further assess our method's ability to improve perceived robot competence, we conducted an online evaluation in which users rated robot behaviors on a 5-point Likert scale. Our method increased the perceived competence of low-competence robot behavior by a statistically significant 83%.
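The abstract's core idea of a model-guided combinatorial search can be illustrated with a minimal sketch. This is not the paper's implementation: the behavior variables (`SPEED`, `PATH`, `STOPS`), the toy scoring function standing in for the Causal Bayesian Network's predicted probability of high perceived competence, and all numbers are illustrative assumptions.

```python
from itertools import product

# Hypothetical discrete behavior variables the robot can control
SPEED = ["slow", "fast"]
PATH = ["direct", "detour"]
STOPS = ["none", "sudden"]

def p_competent(speed, path, stops):
    """Toy stand-in for the causal model's P(perceived competence = high | behavior).
    All contributions below are illustrative, not taken from the paper."""
    p = 0.5
    p += 0.2 if speed == "fast" else -0.1
    p += 0.15 if path == "direct" else -0.05
    p += -0.3 if stops == "sudden" else 0.1
    return min(max(p, 0.0), 1.0)

def best_behavior():
    # Exhaustive combinatorial search over behavior assignments,
    # guided by the (toy) causal model's competence prediction
    return max(product(SPEED, PATH, STOPS), key=lambda b: p_competent(*b))

print(best_behavior())  # -> ('fast', 'direct', 'none')
```

In the paper's setting, the scoring function would instead be an inference query on the learned Causal Bayesian Network, and the search would return a counterfactual behavior predicted to raise perceived competence.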
Problem

Research questions and friction points this paper is trying to address.

social navigation
human perception
robot competence
causal reasoning
interpretable AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causal Bayesian Network
Social Navigation
Perceived Competence
Counterfactual Reasoning
Interpretable AI
Maximilian Diehl
Faculty of Electrical Engineering, Chalmers University of Technology, SE-412 96 Gothenburg, Sweden
Nathan Tsoi
Postdoctoral Researcher, The University of Texas at Austin
Robot Learning · Systems · Human-Robot Interaction
Gustavo Chavez
Yale University, New Haven, Connecticut, USA
Karinne Ramirez-Amaro
Faculty of Electrical Engineering, Chalmers University of Technology, SE-412 96 Gothenburg, Sweden
Marynel Vázquez
Yale University
Human-Robot Interaction · Artificial Intelligence