🤖 AI Summary
This study investigates the evolution of cooperative behavior among large language model (LLM) agents in social settings that combine intra-group cooperation with inter-group competition. We propose a two-level game-theoretic framework (intra-group cooperation vs. inter-group competition) and simulate repeated Prisoner's Dilemma games via virtual tournaments to quantitatively assess agents' cooperation propensity in both one-shot and long-term interactions. Results show that inter-group competition significantly increases initial one-shot cooperation rates (+23.6%), challenging the conventional assumption that competition inherently undermines cooperation; this effect is mediated by competition-induced group identity formation and reputation management. To our knowledge, this is the first systematic empirical validation of the positive moderating role of inter-group competition on LLM agent cooperation. The findings provide theoretical grounding and a reproducible technical pathway for designing trustworthy, collaborative multi-agent systems. Code is publicly available.
📝 Abstract
With the prospect of autonomous artificial intelligence (AI) agents, studying their tendency for cooperative behavior becomes an increasingly relevant topic. This study is inspired by the super-additive cooperation theory, where the combined effects of repeated interactions and inter-group rivalry have been argued to be the cause of cooperative tendencies found in humans. We devised a virtual tournament where language model agents, grouped into teams, face each other in a Prisoner's Dilemma game. By simulating both internal team dynamics and external competition, we discovered that this blend substantially boosts both overall and initial, one-shot cooperation levels (the tendency to cooperate in one-off interactions). This research provides a novel framework for large language models to strategize and act in complex social scenarios and offers evidence for how inter-group competition can, counter-intuitively, result in more cooperative behavior. These insights are crucial for designing future multi-agent AI systems that can effectively work together and better align with human values. Source code is available at https://github.com/pippot/Superadditive-cooperation-LLMs.
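The tournament structure described above (repeated Prisoner's Dilemma matches between team members and rival teams) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the payoff values are the canonical T > R > P > S ordering, and simple classical strategies (`tit_for_tat`, `always_defect`) stand in for the LLM agents, which in the actual study choose moves via prompted language models.

```python
# Illustrative sketch of a repeated Prisoner's Dilemma tournament.
# Payoffs and strategies are assumptions, not the paper's exact setup.

# Canonical payoff matrix: (row player, column player) scores.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation (reward R)
    ("C", "D"): (0, 5),  # sucker's payoff S vs. temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection (punishment P)
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    """Unconditional defector."""
    return "D"

def play_match(strategy_a, strategy_b, rounds=10):
    """Repeated PD between two agents; returns cumulative scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        # Each strategy sees only the opponent's past moves.
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def team_score(team, rival_team, rounds=10):
    """Inter-group competition: sum each member's score against every rival."""
    return sum(play_match(a, b, rounds)[0] for a in team for b in rival_team)

if __name__ == "__main__":
    print(play_match(tit_for_tat, tit_for_tat))    # sustained mutual cooperation
    print(play_match(tit_for_tat, always_defect))  # cooperation collapses after round 1
```

Replacing the fixed strategy functions with calls to an LLM (conditioned on match history and team context) recovers the paper's setting, where team identity and inter-group rivalry can shift the agents' opening move toward cooperation.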