🤖 AI Summary
This study addresses the misalignment between user preferences and actual outcomes in AI-assisted multi-agent negotiations. Through an online behavioral experiment, it compares three large language model-based assistance modalities (Advisor, Coach, and Delegate) in a three-party, multi-round negotiation setting. The findings reveal that, despite being the least preferred by users, the Delegate modality yields the highest individual payoffs and generates significant positive externalities: its high-quality proposals not only improve the outcomes of adopters but also reshape the negotiation environment through Pareto-improving offers, benefiting even non-adopters. The work proposes reconceptualizing AI assistance as a mechanism with endogenous participation, highlighting the distinctive value of superhuman-capable AI in group interactions.
📝 Abstract
As AI usage becomes more prevalent in social contexts, understanding agent-user interaction is critical to designing systems that improve both individual and group outcomes. We present an online behavioral experiment (N = 243) in which participants play three multi-turn bargaining games in groups of three. Each game, presented in randomized order, grants *access to* a single LLM assistance modality: proactive recommendations from an *Advisor*, reactive feedback from a *Coach*, or autonomous execution by a *Delegate*; all modalities are powered by an underlying LLM that achieves superhuman performance in an all-agent environment. On each turn, participants privately decide whether to act manually or use the AI modality available in that game. Despite preferring the *Advisor* modality, participants achieve the highest mean individual gains with the *Delegate*, demonstrating a preference-performance misalignment. Moreover, delegation generates positive externalities: even non-adopting users in *access-to-delegate* treatment groups benefit by receiving higher-quality offers. Mechanism analysis reveals that the *Delegate* agent acts as a market maker, injecting rational, Pareto-improving proposals that restructure the trading environment. Our research reveals a gap between agent capabilities and realized group welfare. While autonomous agents can exhibit superhuman strategic performance, their impact on realized welfare gains can be constrained by interfaces, user perceptions, and adoption barriers. Assistance modalities should be designed as mechanisms with endogenous participation; adoption-compatible interaction rules are a prerequisite to improving human welfare with automated assistance.