Trust-Aware Assistance Seeking in Human-Supervised Autonomy

📅 2024-10-27
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Modeling and predicting trust evolution in dynamic human-robot collaboration remains challenging. Method: This paper formalizes a human supervisor's implicit trust in a robot as a latent state within a Partially Observable Markov Decision Process (POMDP), enabling online estimation of trust and optimization of robot-initiated assistance-seeking strategies. The approach integrates behavioral data analysis, real-time trust-state inference, and controlled human-subject experiments. Contribution/Results: In high-complexity tasks, the robot's strategic assistance-seeking positively influences trust, yielding a closed-loop, "predictable and steerable" trust-aware collaboration paradigm. Experiments show that the trust-aware policy outperforms an optimal trust-agnostic policy in team performance, and that trust estimates obtained from behavioral data alone closely track participants' self-reported trust. This work establishes a computational paradigm for modeling human-robot trust and enabling adaptive, trust-responsive interaction.

📝 Abstract
Our goal is to model and experimentally assess trust evolution in order to predict future beliefs and behaviors of human-robot teams in dynamic environments. Research suggests that maintaining trust among team members in a human-robot team is vital for successful team performance, and that trust is a multi-dimensional, latent entity that relates to past experiences and future actions in a complex manner. Employing a human-robot collaborative task, we design an optimal assistance-seeking strategy for the robot using a POMDP framework. In the task, the human supervises an autonomous mobile manipulator collecting objects in an environment and must ensure that the robot executes its task safely. The robot can either attempt to collect an object on its own or seek human assistance. The human supervisor actively monitors the robot's activities, offering assistance upon request and intervening if they perceive the robot may fail. In this setting, human trust is the hidden state, and the primary objective is to optimize team performance. We execute two sets of human-robot interaction experiments. The data from the first experiment are used to estimate POMDP parameters, which are then used to compute an optimal assistance-seeking policy evaluated in the second experiment. The estimated POMDP parameters reveal that, for most participants, human intervention is more probable when trust is low, particularly in high-complexity tasks. Our estimates suggest that the robot's action of asking for assistance in high-complexity tasks can positively impact human trust. Our experimental results show that the proposed trust-aware policy is better than an optimal trust-agnostic policy. By comparing model estimates of human trust, obtained using only behavioral data, with the collected self-reported trust values, we show that the model estimates are isomorphic to the self-reported responses.
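The abstract describes the core machinery: trust is a hidden POMDP state, the robot chooses between attempting a task and seeking assistance, and intervention (or its absence) serves as the observation that drives online trust estimation. A minimal sketch of that belief-update step is shown below; all transition and observation probabilities are invented placeholders for illustration, not the parameters estimated in the paper.

```python
# Hypothetical sketch of the Bayes-filter step at the heart of a trust-POMDP.
# Trust is a latent binary state {LOW, HIGH}; the robot observes whether the
# human supervisor intervened. All probabilities are illustrative placeholders.

LOW, HIGH = 0, 1

# T[action][s][s']: trust transition probabilities (placeholder values).
# Seeking help is modeled as more likely to raise trust, echoing the paper's
# qualitative finding for high-complexity tasks.
T = {
    "attempt":   [[0.90, 0.10], [0.20, 0.80]],
    "seek_help": [[0.60, 0.40], [0.05, 0.95]],
}

# O[action][s'][obs]: P(observation | new state, action).
# obs 0 = no intervention, 1 = intervention; interventions are modeled as
# more probable under low trust, as the estimated parameters suggest.
O = {
    "attempt":   [[0.50, 0.50], [0.90, 0.10]],
    "seek_help": [[0.80, 0.20], [0.95, 0.05]],
}

def update_belief(belief, action, obs):
    """One filtering step: predict through T, correct with O, normalize."""
    predicted = [
        sum(belief[s] * T[action][s][s2] for s in (LOW, HIGH))
        for s2 in (LOW, HIGH)
    ]
    unnorm = [predicted[s2] * O[action][s2][obs] for s2 in (LOW, HIGH)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

belief = [0.5, 0.5]                           # start uncertain about trust
belief = update_belief(belief, "attempt", 1)  # the human intervened
print(belief)  # probability mass shifts toward LOW trust
```

An optimal policy would then map the maintained belief (rather than any single observed state) to an action, which is what distinguishes the trust-aware policy from a trust-agnostic one.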
Problem

Research questions and friction points this paper is trying to address.

Modeling trust evolution in human-robot teams for performance prediction
Developing optimal assistance-seeking strategies using POMDP framework
Validating trust-aware policies through human-robot interaction experiments
Innovation

Methods, ideas, or system contributions that make the work stand out.

POMDP framework optimizes robot assistance-seeking strategy
Trust-aware policy outperforms trust-agnostic approach
Model estimates trust using behavioral data