Human-Robot Interaction and Perceived Irrationality: A Study of Trust Dynamics and Error Acknowledgment

📅 2024-03-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the dynamic evolution of human trust in robots during failure scenarios, specifically examining how robot error acknowledgment, perceived reasonableness, and interaction transparency influence trust formation. Method: A four-phase longitudinal survey was conducted, integrating structured statistical modeling with social-cognitive-theory–informed quantitative analysis of trust dynamics. Contribution/Results: The study provides the first systematic empirical validation that proactive robot error acknowledgment significantly increases user trust (+37%) and willingness to recommend (+42%). It introduces the novel principle of “explainable error handling” and proposes a robot interaction design paradigm centered on “transparent attribution of responsibility.” These findings establish an evidence-based foundation for enhancing the trustworthiness and responsiveness of human–robot interaction (HRI) systems, offering actionable, theory-grounded design guidelines for developers and HRI practitioners.

📝 Abstract
As robots become increasingly integrated into various industries, understanding how humans respond to robotic failures is critical. This study systematically examines trust dynamics and system design by analyzing human reactions to robot failures. We conducted a four-stage survey to explore how trust evolves throughout human-robot interactions. The first stage collected demographic data and initial trust levels. The second stage focused on preliminary expectations and perceptions of robotic capabilities. The third stage examined interaction details, including robot precision and error acknowledgment. Finally, the fourth stage assessed post-interaction perceptions, evaluating trust dynamics, forgiveness, and willingness to recommend robotic technologies. Results indicate that trust in robotic systems significantly increased when robots acknowledged their errors or limitations. Additionally, participants showed greater willingness to suggest robots for future tasks, highlighting the importance of direct engagement in shaping trust dynamics. These findings provide valuable insights for designing more transparent, responsive, and trustworthy robotic systems. By enhancing our understanding of human-robot interaction (HRI), this study contributes to the development of robotic technologies that foster greater public acceptance and adoption.
Problem

Research questions and friction points this paper is trying to address.

How human trust changes after robot failures
Impact of robot error acknowledgment on trust
Designing transparent and trustworthy robotic systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Four-stage survey analyzes trust dynamics
Robots acknowledging errors boost trust
Design principles for transparent, responsive robotic systems
P. Shill
Department of CSE, University of Nevada, Reno, USA
Md. Azizul Hakim
Department of CSE, University of Nevada, Reno, USA