🤖 AI Summary
This work addresses the lack of theory-driven evaluation benchmarks for collective reasoning in multi-agent large language model (LLM) systems. We introduce the first reproducible nine-task benchmark grounded in social psychology's "hidden profile" paradigm, which distributes critical information asymmetrically across agents to expose systemic group-level failures such as collaborative breakdowns, information suppression, and convergence bottlenecks. In doing so, we pioneer the adaptation of classical human group decision-making paradigms to multi-agent LLM evaluation. Experiments employ a negotiation framework built on GPT-4.1 and five other leading LLMs (including reasoning-enhanced variants). Results show that: (1) all multi-agent systems underperform an omniscient single-agent baseline; (2) collective performance approximates human group outcomes but exhibits heightened susceptibility to social desirability bias; and (3) a novel "cooperation-contradiction trade-off" emerges, in which excessive cooperation erodes response diversity while excessive rebuttal impedes consensus convergence.
📝 Abstract
Multi-agent systems built on large language models (LLMs) promise enhanced problem-solving through distributed information integration, but also risk replicating collective reasoning failures observed in human groups. Yet, no theory-grounded benchmark exists to systematically evaluate such failures. In this paper, we introduce the Hidden Profile paradigm from social psychology as a diagnostic testbed for multi-agent LLM systems. By distributing critical information asymmetrically across agents, the paradigm reveals how inter-agent dynamics support or hinder collective reasoning. We first formalize the paradigm for multi-agent decision-making under distributed knowledge and instantiate it as a benchmark with nine tasks spanning diverse scenarios, including adaptations from prior human studies. We then conduct experiments with GPT-4.1 and five other leading LLMs, including reasoning-enhanced variants, showing that multi-agent systems across all models fail to match the accuracy of single agents given complete information. While agents' collective performance is broadly comparable to that of human groups, nuanced behavioral differences emerge, such as increased sensitivity to social desirability. Finally, we demonstrate the paradigm's diagnostic utility by exploring a cooperation-contradiction trade-off in multi-agent LLM systems. We find that while cooperative agents are prone to over-coordination in collective settings, increased contradiction impairs group convergence. This work contributes a reproducible framework for evaluating multi-agent LLM systems and motivates future research on artificial collective intelligence and human-AI interaction.
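The core of the paradigm is the asymmetric split of task-critical information: every agent receives the shared clues, while each decisive clue is held by exactly one agent, so only pooling reveals the correct answer. The sketch below illustrates one plausible way to construct such a distribution; the function name, clue texts, and assignment scheme are illustrative assumptions, not the paper's actual benchmark implementation.

```python
import random

def distribute_hidden_profile(shared, unique, n_agents, seed=0):
    """Illustrative hidden-profile layout (not the paper's code):
    every agent sees all `shared` clues, while each `unique` clue is
    assigned to exactly one agent. The group can only recover the
    full information set by exchanging unique clues in discussion."""
    rng = random.Random(seed)
    profiles = [list(shared) for _ in range(n_agents)]
    for clue in unique:
        # Each decisive clue goes to a single, randomly chosen agent.
        profiles[rng.randrange(n_agents)].append(clue)
    return profiles

# Hypothetical candidate-selection task in the hidden-profile style:
# shared clues mildly favor A, but the pooled unique clues favor B.
shared = ["Candidate A is experienced", "Candidate A interviews well"]
unique = ["Candidate B solved the hardest past case",
          "Candidate B mentors junior colleagues",
          "Candidate A missed two recent deadlines"]
profiles = distribute_hidden_profile(shared, unique, n_agents=3)
```

An omniscient single-agent baseline corresponds to prompting one model with `shared + unique` in full, which is why the gap between that baseline and the multi-agent condition isolates failures of information pooling rather than of individual reasoning.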