Expectations, Explanations, and Embodiment: Attempts at Robot Failure Recovery

📅 2025-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how users' pre-failure expectations moderate the effectiveness of post-failure explanations in repairing trust in robots, and whether this moderation further depends on robot embodiment. Method: In an online experiment using a video priming paradigm, we manipulated participants' high versus low expectations for two distinct robots, Furhat (a voice-enabled avatar robot) and Pepper (a humanoid service robot), then exposed them to failure scenarios followed by explanatory interventions, measuring trust, satisfaction, and perceived expressivity. Contribution/Results: We provide the first empirical evidence of a significant three-way interaction: explanation efficacy is conditional on both expectation level and embodiment. Specifically, explanations significantly improved trust and satisfaction only under low expectations, and this effect was robust for Furhat but absent for Pepper. The results confirm that brief video priming effectively modulates expectations, and demonstrate that explanation-based trust repair is not universally effective but depends jointly on user expectations and robot morphology, offering theoretical insights and practical guidance for personalized trust recovery strategies in human–robot interaction.

📝 Abstract
Expectations critically shape how people form judgments about robots, influencing whether they view failures as minor technical glitches or deal-breaking flaws. This work explores how high and low expectations, induced through brief video priming, affect user perceptions of robot failures and the utility of explanations in HRI. We conducted two online studies ($N=600$ total participants), each replicated across two robots with different embodiments, Furhat and Pepper. In our first study, grounded in expectation theory, participants were divided into two groups, one primed with positive and the other with negative expectations regarding the robot's performance, establishing distinct expectation frameworks. This validation study aimed to verify whether the videos could reliably establish low- and high-expectation profiles. In the second study, participants were primed using the validated videos and then viewed a new scenario in which the robot failed at a task. Half viewed a version where the robot explained its failure, while the other half received no explanation. We found that explanations significantly improved user perceptions of Furhat, especially when participants were primed to have lower expectations. Explanations boosted satisfaction and enhanced the robot's perceived expressiveness, indicating that effectively communicating the cause of errors can help repair user trust. By contrast, Pepper's explanations produced minimal impact on user attitudes, suggesting that a robot's embodiment and style of interaction could determine whether explanations can successfully offset negative impressions. Together, these findings underscore the need to consider users' expectations when tailoring explanation strategies in HRI. When expectations are initially low, a cogent explanation can make the difference between dismissing a failure and appreciating the robot's transparency and effort to communicate.
Problem

Research questions and friction points this paper is trying to address.

How expectations shape user perceptions of robot failures
Impact of explanations on user trust after robot failures
Role of robot embodiment in the effectiveness of failure explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Video priming induces high and low expectations
Explanations improve perceptions with low expectations
Embodiment affects explanation effectiveness in HRI