🤖 AI Summary
This study investigates whether explicitly sharing an AI agent's inferred beliefs about a human teammate's goals enhances human-AI collaboration, particularly when goals cannot be communicated directly. We designed three experimental conditions (no goal recognition, viable-goal recognition, and on-demand viable-goal recognition), combining behavior-based goal inference, conditional belief presentation, and a mixed evaluation framework of task performance metrics, collaboration perception scales, thematic analysis of verbal protocols, and cognitive load measurement. Results show that goal-belief sharing improves subjective perceptions of collaboration and the depth of strategic adaptation, yet yields no significant gains in objective task completion rate, mean satisfaction, or cognitive load. This work provides empirical evidence that transparency does not necessarily translate into better performance, disentangling perceived trust from actual collaborative efficacy and challenging the implicit assumption that transparency inherently improves human-AI team outcomes.
📝 Abstract
In human-agent teams, openly sharing goals is often assumed to enhance planning, collaboration, and effectiveness. However, direct communication of goals is not always feasible, requiring teammates to infer each other's intentions from actions. Building on this premise, we investigate whether an AI agent's ability to share its inferred understanding of a human teammate's goals can improve task performance and perceived collaboration. In an experiment comparing three conditions, no recognition (NR), viable goals (VG), and viable goals on-demand (VGod), we find that while goal-sharing did not yield significant improvements in task performance or overall satisfaction scores, thematic analysis suggests that it supported strategic adaptation and subjective perceptions of collaboration. Cognitive load assessments revealed no additional burden across conditions, highlighting the challenge of balancing informativeness and simplicity in human-agent interaction. These findings point to a nuanced trade-off in goal-sharing: it fosters trust and enhances perceived collaboration, but does not guarantee gains in objective performance.