🤖 AI Summary
This study investigates how well large language models (LLMs) handle business negotiations that demand strategic reasoning, theory of mind, and economic value creation. To this end, the authors introduce PieArena, a large-scale multi-agent benchmark grounded in real MBA negotiation coursework, and pair it with a comprehensive evaluation framework combining multi-agent simulations, behavioral-dimension analysis, and a joint-intentionality agent architecture. The study shows that outcome-based metrics alone obscure significant behavioral differences among models—particularly in deception, computational accuracy, instruction adherence, and reputation awareness—and demonstrates an asymmetric performance gain from the joint-intentionality framework across models of varying capability. Experiments show that state-of-the-art models such as GPT-5 match or surpass the negotiation proficiency of MBA students, exhibiting proto-AGI traits, yet still face open challenges in robustness and trustworthiness.
📝 Abstract
We present an in-depth evaluation of LLMs' ability to negotiate, a central business task that requires strategic reasoning, theory of mind, and economic value creation. To do so, we introduce PieArena, a large-scale negotiation benchmark grounded in multi-agent interactions over realistic scenarios drawn from an MBA negotiation course at an elite business school. We develop a statistically grounded ranking model for continuous negotiation payoffs that produces leaderboards with principled confidence intervals and corrects for experimental asymmetries. We find systematic evidence of human-expert-level performance in which a representative frontier language agent (GPT-5) matches or outperforms trained business-school students, despite a semester of general negotiation instruction and targeted coaching immediately prior to the task. We further study the effects of joint-intentionality agentic scaffolding and observe asymmetric gains, with large improvements for mid- and lower-tier LMs and diminishing returns for frontier LMs. Beyond deal outcomes, PieArena provides a multi-dimensional negotiation behavioral profile, revealing novel cross-model heterogeneity, masked by deal-outcome-only benchmarks, in deception, computation accuracy, instruction compliance, and perceived reputation. Overall, our results suggest that frontier language agents are already intellectually and psychologically capable of deployment in high-stakes economic settings, but deficiencies in robustness and trustworthiness remain open challenges.
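The abstract does not spell out the ranking model behind the leaderboard, so as a rough illustration only: one common way to attach principled confidence intervals to a payoff-based leaderboard is a percentile bootstrap over each model's per-negotiation payoffs. The model names and payoff values below are invented for illustration; this is a minimal sketch, not the paper's actual method.

```python
# Hypothetical sketch: the paper's ranking model is not specified here.
# This shows one simple way to build a leaderboard with confidence
# intervals: percentile-bootstrap CIs on each model's mean payoff.
import random
from statistics import mean

def bootstrap_ci(payoffs, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of a model's payoffs."""
    rng = random.Random(seed)
    boot_means = sorted(
        mean(rng.choices(payoffs, k=len(payoffs))) for _ in range(n_boot)
    )
    lo = boot_means[int((alpha / 2) * n_boot)]
    hi = boot_means[int((1 - alpha / 2) * n_boot) - 1]
    return mean(payoffs), (lo, hi)

# Toy per-negotiation payoffs (illustrative, not from the paper).
results = {
    "model_a": [0.62, 0.71, 0.55, 0.68, 0.60, 0.74, 0.66],
    "model_b": [0.48, 0.52, 0.45, 0.57, 0.50, 0.43, 0.55],
}

# Rank models by mean payoff, carrying the CI alongside each entry.
leaderboard = sorted(
    ((name, *bootstrap_ci(p)) for name, p in results.items()),
    key=lambda row: row[1],
    reverse=True,
)
for name, m, (lo, hi) in leaderboard:
    print(f"{name}: mean={m:.3f}  95% CI=[{lo:.3f}, {hi:.3f}]")
```

A real version would additionally correct for experimental asymmetries (e.g., role or scenario imbalances), which the simple bootstrap above does not attempt.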