Should Collaborative Robots be Transparent?

📅 2023-04-23
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This study challenges the assumption that behavioral transparency always improves human-robot team performance. Method: the authors model the human's Bayesian uncertainty about the robot's type and derive optimal interaction policies by recursively combining Bayesian Nash equilibrium with the Bellman equation. Contribution/Results: they show theoretically that, under a short interaction horizon or bounded human learning capacity, "strategic opacity" (selectively withholding the robot's internal state) can increase team rewards, offering a game-theoretic counterpoint to the "transparency is always better" hypothesis. Online and in-person user studies (43 total participants) support the theory: in short-duration tasks, participants collaborating with opaque robots achieved higher objective rewards while rating subjective trust and collaboration experience about equal to transparent robots. The work frames collaborative robot behavior as an information-design problem under human cognitive constraints.
📝 Abstract
We often assume that robots which collaborate with humans should behave in ways that are transparent (e.g., legible, explainable). These transparent robots intentionally choose actions that convey their internal state to nearby humans: for instance, a transparent robot might exaggerate its trajectory to indicate its goal. But while transparent behavior seems beneficial for human-robot interaction, is it actually optimal? In this paper we consider collaborative settings where the human and robot have the same objective, and the human is uncertain about the robot's type (i.e., the robot's internal state). We extend a recursive combination of Bayesian Nash equilibrium and the Bellman equation to solve for optimal robot policies. Interestingly, we discover that it is not always optimal for collaborative robots to be transparent; instead, human and robot teams can sometimes achieve higher rewards when the robot is opaque. In contrast to transparent robots, opaque robots select actions that withhold information from the human. Our analysis suggests that opaque behavior becomes optimal when either (a) human-robot interactions have a short time horizon or (b) users are slow to learn from the robot's actions. We extend this theoretical analysis to user studies across 43 total participants in both online and in-person settings. We find that -- during short interactions -- users reach higher rewards when working with opaque partners, and subjectively rate opaque robots as about equal to transparent robots. See videos of our experiments here: https://youtu.be/u8q1Z7WHUuI
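The short-horizon result in the abstract can be illustrated with a toy model. The sketch below is not the paper's actual equilibrium computation; it is a minimal, hypothetical simplification in which the human holds a scalar belief that they have identified the robot's type, a transparent robot pays a per-step cost to raise that belief (exaggerated, legible motion), and an opaque robot keeps the belief fixed and coordinates on the prior. All parameter values and the belief-update rule are illustrative assumptions.

```python
def cumulative_reward(horizon, transparent, b0=0.5, alpha=0.3,
                      signal_cost=0.3, opaque_reward=0.5):
    """Toy cumulative team reward over a finite horizon.

    b0          -- human's initial belief that they know the robot's type
    alpha       -- human learning rate (how fast legible actions inform them)
    signal_cost -- per-step cost of choosing an exaggerated, legible action
    opaque_reward -- per-step reward from coordinating on the prior
    """
    b, total = b0, 0.0
    for _ in range(horizon):
        if transparent:
            total += b - signal_cost   # informative but costly action
            b += alpha * (1.0 - b)     # human gradually infers the type
        else:
            total += opaque_reward     # withhold information, keep the prior
    return total

# Short interactions favor the opaque robot; long ones favor transparency.
short_gap = cumulative_reward(2, False) - cumulative_reward(2, True)
long_gap = cumulative_reward(20, True) - cumulative_reward(20, False)
```

With these (assumed) parameters the opaque policy earns more over 2 steps, while the transparent policy dominates over 20 steps, mirroring the paper's finding that opacity becomes optimal when the time horizon is short or the human learns slowly (small `alpha`).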
Problem

Research questions and friction points this paper is trying to address.

Optimal transparency in human-robot collaboration
Impact of robot opacity on team rewards
Conditions favoring opaque vs. transparent robot behavior
Innovation

Methods, ideas, or system contributions that make the work stand out.

Recursively combines Bayesian Nash equilibrium with the Bellman equation
Shows that optimal robot policies sometimes favor opaque behavior
Validates the theory with user studies of 43 participants (online and in-person)
Shahabedin Sagheb
Assistant Collegiate Professor, Virginia Tech
Robot Learning · Machine Learning · Control Theory · Haptics · Game Theory
Soham Gandhi
Department of Mechanical Engineering, Virginia Tech, 635 Prices Fork Road, Blacksburg, 24061, Virginia, USA.
Dylan P. Losey
Department of Mechanical Engineering, Virginia Tech, 635 Prices Fork Road, Blacksburg, 24061, Virginia, USA.