On the Utility of Accounting for Human Beliefs about AI Intention in Human-AI Collaboration

📅 2024-06-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing human-AI collaboration research predominantly assumes static human behavior, neglecting how an AI's actions dynamically shape humans' beliefs about its intent, i.e., their inferences about the AI's subgoals. Method: We propose a collaborative framework that explicitly integrates a dynamic model of human beliefs about AI intent into the AI's decision-making process. Grounded in Bayesian inverse reasoning, our cognitively interpretable belief model is empirically validated through human-AI interaction experiments. We further design an adaptive collaboration strategy that perceives, predicts, and responds to real-time shifts in human beliefs. Contribution/Results: This approach moves beyond the static-human-behavior assumption, jointly optimizing intent interpretability and collaborative performance. In controlled user studies, our method improves belief prediction accuracy by 32% and task completion efficiency and interaction fluency by 27%.
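
The summary's mention of Bayesian inverse reasoning suggests a belief model in which the human maintains a posterior over the AI's candidate subgoals and updates it after each observed AI action. The sketch below is a minimal illustration of that idea under assumed ingredients (a Boltzmann-rational action likelihood, a rationality parameter beta, and per-subgoal Q-values), not the paper's actual model.

```python
import numpy as np

def update_belief(prior, action, q_values, beta=2.0):
    """One Bayesian belief-update step over candidate AI subgoals.

    prior    : (G,) current belief over G candidate subgoals
    action   : index of the AI action the human just observed
    q_values : (G, A) value of each action under each subgoal
    beta     : rationality (inverse temperature); higher = human
               assumes the AI acts more optimally
    """
    # Boltzmann-rational likelihood: P(action | subgoal g).
    logits = beta * q_values
    likelihood = np.exp(logits - logits.max(axis=1, keepdims=True))
    likelihood /= likelihood.sum(axis=1, keepdims=True)

    # Bayes rule: posterior proportional to likelihood of the
    # observed action times the prior.
    posterior = likelihood[:, action] * prior
    return posterior / posterior.sum()

# Example: two candidate subgoals, three actions.
prior = np.array([0.5, 0.5])
q = np.array([[1.0, 0.2, 0.0],   # subgoal 0 favors action 0
              [0.0, 0.2, 1.0]])  # subgoal 1 favors action 2
print(update_belief(prior, action=0, q_values=q))  # shifts toward subgoal 0
```

Under such an update, an action that is distinctly better for one subgoal pulls the belief toward that subgoal, which is what makes the AI's behavior legible to its partner.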

📝 Abstract
To enable effective human-AI collaboration, merely optimizing AI performance without considering human factors is insufficient. Recent research has shown that designing AI agents that take human behavior into account leads to improved performance in human-AI collaboration. However, most existing approaches assume that human behavior remains static regardless of the AI agent's actions. In reality, humans may adjust their actions based on their beliefs about the AI's intentions, that is, the subtasks they perceive the AI to be attempting to complete, inferred from its behavior. In this paper, we address this limitation by enabling a collaborative AI agent to consider its human partner's beliefs about its intentions, i.e., what the human partner thinks the AI agent is trying to accomplish, and to design its action plan accordingly to facilitate more effective human-AI collaboration. Specifically, we developed a model of human beliefs that captures how humans interpret and reason about their AI partner's intentions. Using this belief model, we created an AI agent that incorporates both human behavior and human beliefs when devising its strategy for interacting with humans. Through extensive real-world human-subject experiments, we demonstrate that our belief model more accurately captures human perceptions of AI intentions. Furthermore, we show that our AI agent, designed to account for human beliefs about its intentions, significantly enhances performance in human-AI collaboration.
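
To make the abstract's planning idea concrete, here is one hypothetical way an agent could fold the belief model into action selection: score each candidate action by its task value plus a bonus for moving the human's predicted belief toward the AI's true subgoal. The trade-off weight lam, the myopic one-step horizon, and the reuse of the Boltzmann-rational update are all illustrative assumptions rather than the paper's method.

```python
import numpy as np

def belief_aware_action(true_goal, belief, q_values, beta=2.0, lam=0.5):
    """Score each action by task value plus a legibility bonus.

    true_goal : index of the AI's actual subgoal
    belief    : (G,) human's current belief over candidate subgoals
    q_values  : (G, A) task value of each action under each subgoal
    lam       : weight on belief alignment vs. raw task value
    """
    # Boltzmann-rational likelihood P(action | subgoal), as the human
    # is assumed to compute it when interpreting the AI's behavior.
    logits = beta * q_values
    lik = np.exp(logits - logits.max(axis=1, keepdims=True))
    lik /= lik.sum(axis=1, keepdims=True)

    scores = np.empty(q_values.shape[1])
    for a in range(q_values.shape[1]):
        # Predicted human posterior if the human observes action a.
        post = lik[:, a] * belief
        post /= post.sum()
        # Task value plus a bonus for making the true intent legible.
        scores[a] = q_values[true_goal, a] + lam * post[true_goal]
    return int(scores.argmax())

# Example: with two subgoals and three actions, an agent whose true
# subgoal is 0 prefers the action that both pays off and signals it.
q = np.array([[1.0, 0.2, 0.0],
              [0.0, 0.2, 1.0]])
print(belief_aware_action(true_goal=0, belief=np.array([0.5, 0.5]), q_values=q))
```
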
Problem

Research questions and friction points this paper is trying to address.

Modeling human beliefs about AI intentions in collaboration
Improving AI agents by incorporating human behavior dynamics
Enhancing human-AI collaboration performance through belief-aware design
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI considers human beliefs about intentions
Model captures human interpretation of AI
AI adapts strategy based on human perceptions
👥 Authors
Guanghui Yu, Washington University in St. Louis
Robert Kasumba, Washington University in St. Louis
Chien-Ju Ho, Washington University in St. Louis
William Yeoh, Washington University in St. Louis