🤖 AI Summary
This study investigates the dynamic evolution and repair of human trust in robotic delivery systems, specifically examining human decisions to switch between autonomous and manual control following robot performance failures in multi-task scenarios.
Method: We propose an interpretable Input-Output Hidden Markov Model (IOHMM) for objective, behavior-driven estimation of latent trust states, with a demonstrated isomorphic mapping to subjective self-reported trust. A controlled human-subject experiment quantifies the efficacy of three trust-repair strategies: elaborate versus concise explanations, apology with commitment, and denial.
Contribution/Results: Elaborate explanations most effectively restore trust after a failure, whereas denial best mitigates further trust erosion. The IOHMM accurately captures trust-driven autonomous dispatch decisions, and its trust estimates align strongly with self-reported trust (p < 0.01). This work establishes a theoretically grounded, real-time deployable framework for adaptive trust regulation in human-robot collaboration.
📝 Abstract
With increasing efficiency and reliability, autonomous systems are becoming valuable assistants to humans in various tasks. In the context of robot-assisted delivery, we investigate how robot performance and trust repair strategies impact human trust. In this task, humans, while also handling a secondary task, can choose either to send the robot on deliveries autonomously or to control it manually. The trust repair strategies examined include short and long explanations, apology and promise, and denial. With data from human participants, we model human behavior using an Input-Output Hidden Markov Model (IOHMM) to capture the dynamics of trust and human action probabilities. Our findings indicate that humans are more likely to deploy the robot autonomously when their trust is high. Furthermore, state transition estimates show that long explanations are the most effective at repairing trust following a failure, while denial is most effective at preventing trust loss. We also demonstrate that the trust estimates generated by our model are isomorphic to self-reported trust values, making them interpretable. This model lays the groundwork for developing optimal policies that facilitate real-time adjustment of human trust in autonomous systems.
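To make the modeling concrete, below is a minimal, illustrative Python sketch of an input-output HMM of the kind described: two assumed latent trust states, inputs that combine the robot's outcome with the repair strategy used after a failure, and the human's dispatch choice as the observed action. The state set, all probability values, and the `filter_trust` helper are hypothetical placeholders chosen only to mirror the reported qualitative findings; they are not the paper's fitted parameters or code.

```python
# Minimal IOHMM sketch (illustrative, not the authors' implementation).
# Assumptions: two latent trust states {low, high}; inputs are the
# robot's last outcome plus the repair strategy after a failure; the
# observed action is the human's autonomous-vs-manual dispatch choice.
import numpy as np

STATES = ["low_trust", "high_trust"]
ACTIONS = ["manual", "autonomous"]

# Input-conditioned transitions T[u][i, j] = P(s_t = j | s_{t-1} = i, input u).
# Values are made up to reflect the qualitative findings: long explanations
# repair trust best; denial best prevents loss of existing trust.
T = {
    "success":         np.array([[0.60, 0.40], [0.05, 0.95]]),
    "fail_short_expl": np.array([[0.85, 0.15], [0.45, 0.55]]),
    "fail_long_expl":  np.array([[0.70, 0.30], [0.35, 0.65]]),
    "fail_apology":    np.array([[0.80, 0.20], [0.40, 0.60]]),
    "fail_denial":     np.array([[0.90, 0.10], [0.25, 0.75]]),
}
# Emissions E[i, a] = P(action a | trust state i): high trust makes
# autonomous dispatch more likely, matching the paper's observation.
E = np.array([[0.80, 0.20],   # low trust  -> mostly manual control
              [0.15, 0.85]])  # high trust -> mostly autonomous dispatch

def filter_trust(inputs, actions, prior=np.array([0.5, 0.5])):
    """Forward-filter the latent state; returns P(high trust) per step."""
    belief, estimates = prior, []
    for u, a in zip(inputs, actions):
        belief = belief @ T[u]             # predict via input-driven transition
        belief = belief * E[:, ACTIONS.index(a)]  # update on observed action
        belief /= belief.sum()             # renormalize the belief
        estimates.append(belief[1])
    return estimates

# Example: a failure repaired with a long explanation, followed by renewed
# autonomous dispatch, pulls the trust estimate back up.
print(filter_trust(
    ["success", "fail_long_expl", "success"],
    ["autonomous", "manual", "autonomous"],
))
```

Fitting such a model to participant data would typically estimate the input-conditioned transition and emission parameters via EM; the filter above only illustrates how a trained model turns observed dispatch behavior into real-time, interpretable trust estimates.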