Understanding Economic Tradeoffs Between Human and AI Agents in Bargaining Games

📅 2025-09-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates economic trade-offs in dynamic bargaining between humans and AI agents, specifically Bayesian models, GPT-4o, and Gemini 1.5 Pro. Using controlled experiments, we compare negotiation behavior and performance across agent types under identical environmental conditions. Results show that Bayesian agents maximize surplus but exhibit low acceptance rates; large language models adopt conservative, smoothly convergent concession strategies, achieving total surplus comparable to humans; and humans prioritize fairness and risk considerations, demonstrating greater contextual adaptability. Crucially, we demonstrate that “comparable performance” does not imply “behavioral equivalence”: despite negligible differences in final payoffs, agents diverge fundamentally in decision processes, value preferences, and coordination robustness. This finding underscores the necessity of process-oriented behavioral analysis for real-world AI deployment in multi-agent systems, providing empirical grounding for value alignment and human-AI collaboration.

📝 Abstract
Coordination tasks traditionally performed by humans are increasingly being delegated to autonomous agents. As this pattern progresses, it becomes critical to evaluate not only these agents' performance but also the processes through which they negotiate in dynamic, multi-agent environments. Furthermore, different agents exhibit distinct advantages: traditional statistical agents, such as Bayesian models, may excel under well-specified conditions, whereas large language models (LLMs) can generalize across contexts. In this work, we compare humans (N = 216), LLMs (GPT-4o, Gemini 1.5 Pro), and Bayesian agents in a dynamic negotiation setting that enables direct, identical-condition comparisons across populations, capturing both outcomes and behavioral dynamics. Bayesian agents extract the highest surplus through aggressive optimization, at the cost of frequent trade rejections. Humans and LLMs can achieve similar overall surplus, but through distinct behaviors: LLMs favor conservative, concessionary trades with few rejections, while humans employ more strategic, risk-taking, and fairness-oriented behaviors. Thus, we find that performance parity -- a common benchmark in agent evaluation -- can conceal fundamental differences in process and alignment, which are critical for practical deployment in real-world coordination tasks.
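The central tension in the abstract, that aggressive surplus optimization raises rejection rates while conservative concessions trade margin for agreement, can be illustrated with a toy simulation. The sketch below is not the paper's experimental protocol: the split-the-pie setup, the Uniform(0, 10) reservation value, and the two fixed offer policies are all illustrative assumptions.

```python
# Minimal sketch (not the paper's protocol) of the surplus-vs-acceptance tradeoff:
# a proposer repeatedly offers a split of a fixed pie to a responder with a noisy,
# privately known reservation value. All numbers below are illustrative assumptions.
import random

PIE = 10.0          # total surplus available per trade (assumed)
N_TRADES = 100_000  # simulated encounters per policy

def responder_accepts(offer: float) -> bool:
    """Accept if the offer clears a reservation value drawn Uniform(0, PIE)."""
    return offer >= random.uniform(0.0, PIE)

def evaluate(offer: float) -> tuple[float, float]:
    """Return (mean proposer surplus per trade, acceptance rate) for a fixed offer."""
    surplus, accepted = 0.0, 0
    for _ in range(N_TRADES):
        if responder_accepts(offer):
            surplus += PIE - offer
            accepted += 1
    return surplus / N_TRADES, accepted / N_TRADES

if __name__ == "__main__":
    # "Aggressive optimizer": the expected-surplus-maximizing offer under the
    # uniform prior (PIE / 2), which gets rejected about half the time.
    # "Conservative conceder": a generous offer that nearly always closes.
    for name, offer in [("aggressive", PIE / 2), ("conservative", 8.0)]:
        mean_surplus, accept_rate = evaluate(offer)
        print(f"{name:12s} offer={offer:4.1f}  "
              f"surplus/trade={mean_surplus:4.2f}  acceptance={accept_rate:4.2f}")
```

Under these assumed parameters, the aggressive policy earns more surplus per trade but completes only about half of them, while the conservative policy closes roughly four in five trades at a lower margin, a stylized version of the Bayesian-agent-versus-LLM contrast described in the abstract.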
Problem

Research questions and friction points this paper is trying to address.

Compare human and AI agent negotiation behaviors
Evaluate economic tradeoffs in bargaining game performance
Assess process differences despite outcome parity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Direct, identical-condition comparison of humans (N = 216), LLMs (GPT-4o, Gemini 1.5 Pro), and Bayesian agents in a dynamic negotiation setting
Bayesian agents optimize aggressively, extracting the highest surplus at the cost of frequent trade rejections
LLMs favor conservative, concessionary trades while humans act more strategically and with a fairness orientation, reaching comparable surplus through different processes (see the sketch below)
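Because final payoffs alone can look identical, a process-oriented comparison needs per-round trace metrics. The sketch below assumes a hypothetical trace format (a proposer's demanded share per round plus whether the last offer was accepted) and illustrative metric definitions; it is not the paper's actual logging schema or analysis code.

```python
# Hypothetical sketch of process-level metrics: two negotiation traces can end in
# the same payoff ("outcome parity") while differing in how they got there.
from statistics import mean

def process_metrics(offers: list[float], accepted: bool) -> dict:
    """Summarize a single proposer's offer trajectory.

    offers   -- the proposer's demanded share (own payoff) in each round
    accepted -- whether the final offer was accepted
    """
    steps = [earlier - later for earlier, later in zip(offers, offers[1:])]
    return {
        "final_surplus": offers[-1] if accepted else 0.0,
        "accepted": accepted,
        "mean_concession": mean(steps) if steps else 0.0,  # average drop per round
        "max_jump": max(steps, default=0.0),               # largest single concession
        "rounds": len(offers),
    }

# Smooth, steady concessions (LLM-like) vs a hold-firm-then-drop pattern
# (human-like): same final surplus, very different process signatures.
smooth = process_metrics([8.0, 7.0, 6.5, 6.0], accepted=True)
risky = process_metrics([9.0, 9.0, 9.0, 6.0], accepted=True)
print(smooth)
print(risky)
```

Both traces deliver the same final surplus, yet the smooth trajectory concedes a little every round while the risky one holds firm and then drops sharply, the kind of process difference that outcome parity alone would hide.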