🤖 AI Summary
This study addresses the trade-off between objective performance and human preferences in human-AI collaborative systems. Method: We designed five distinct AI agent strategies grounded in behavioral experiments and employed Bayesian hierarchical modeling to systematically analyze the interplay among human preferences, perceived traits, and task performance. Contribution/Results: We identify "contributability" (the extent to which humans perceive themselves as meaningfully contributing) as the strongest predictor of preference; reveal that inequality aversion drives strong preference for "contributable" agents; and demonstrate that high-adaptivity agents significantly enhance likability without compromising objective performance. Critically, we establish that jointly optimizing subjective acceptability and objective efficacy improves both likability and team-level effectiveness. These findings introduce a dual-track evaluation framework for collaborative AI, integrating subjective metrics (e.g., perceived contributability, likability) with objective ones (e.g., task accuracy, efficiency), and advance human-centered AI design from empirically driven heuristics toward mechanistic, theory-grounded principles.
📝 Abstract
Despite the growing interest in collaborative AI, designing systems that seamlessly integrate human input remains a major challenge. In this study, we developed a task to systematically examine human preferences for collaborative agents. We created and evaluated five collaborative AI agents with strategies that differ in the manner and degree to which they adapt to human actions. Participants interacted with a subset of these agents, evaluated their perceived traits, and selected their preferred agent. We used a Bayesian model to understand how agents' strategies influence human-AI team performance and the agents' perceived traits, and to identify the factors shaping human preferences in pairwise agent comparisons. Our results show that agents that are more considerate of human actions are preferred over purely performance-maximizing agents. Moreover, we show that such human-centric design can improve the likability of AI collaborators without reducing performance. We find evidence that inequality aversion drives human choices, suggesting that people prefer collaborative agents that allow them to meaningfully contribute to the team. Taken together, these findings demonstrate how collaboration with AI can benefit from development efforts that include both subjective and objective metrics.
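The pairwise-comparison analysis described above can be illustrated with a Bradley-Terry-style preference model, where each agent gets a latent "preference strength" and the probability of choosing one agent over another depends on the strength difference. This is a minimal sketch on simulated data, not the authors' actual Bayesian hierarchical model; the agent strengths, prior, learning rate, and comparison counts are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent preference strengths for five agent strategies.
true_strength = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
n_agents = len(true_strength)

# Simulate pairwise choices: agent i is chosen over agent j with
# Bradley-Terry probability sigmoid(strength_i - strength_j).
pairs, wins = [], []
for _ in range(2000):
    i, j = rng.choice(n_agents, size=2, replace=False)
    p = 1.0 / (1.0 + np.exp(-(true_strength[i] - true_strength[j])))
    pairs.append((i, j))
    wins.append(float(rng.random() < p))

# MAP estimation with a weak Gaussian prior on strengths,
# fit by plain gradient ascent on the log-posterior.
s = np.zeros(n_agents)
lr, prior_precision = 2.0, 0.1
for _ in range(500):
    grad = -prior_precision * s  # gradient of the Gaussian log-prior
    for (i, j), w in zip(pairs, wins):
        p = 1.0 / (1.0 + np.exp(-(s[i] - s[j])))
        grad[i] += w - p  # chosen agent pulled up when under-predicted
        grad[j] -= w - p
    s += lr * grad / len(pairs)

ranking = np.argsort(-s)  # agents ordered from most to least preferred
```

In practice a hierarchical version would add participant-level effects (e.g., per-participant sensitivity to perceived contributability), which is what makes the Bayesian treatment useful for linking traits to preferences.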