Evolution of Cooperation in LLM-Agent Societies: A Preliminary Study Using Different Punishment Strategies

📅 2025-04-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether cooperative behavior can sustainably evolve in a society of large language model (LLM) agents situated in an embodied, natural-language-driven simulation, thereby testing the applicability of the Boyd–Richerson model of cultural evolution. The authors introduce a multi-agent simulation framework grounded in the "diner's dilemma" game, integrating explicit punishment mechanisms with natural-language social reasoning, and propose a pairwise imitation strategy-updating rule to approximate human-like norm acquisition. The results demonstrate that: (1) canonical dynamics of cooperation evolution, such as norm emergence, stabilization, and collapse, are reproduced in LLM agents; (2) explicit punishment robustly induces and sustains cooperative norms across diverse initial strategy distributions, significantly raising population-level cooperation rates; and (3) the framework provides an interpretable, experimentally controllable testbed for empirically evaluating theories of social norm evolution with language models.
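The mechanics summarized above can be sketched in a minimal simulation loop. Everything below is illustrative: the payoff values, the cost/fine parameters of the punishment step, and the Fermi-style adoption probability are assumptions for exposition, not the paper's exact implementation (which uses natural-language reasoning rather than numeric rules).

```python
import math
import random

# Hypothetical sketch of a diner's-dilemma round with explicit punishment
# and a pairwise imitation update, in the spirit of the Boyd–Richerson
# setup described above. All parameter values are illustrative.

COOPERATE, DEFECT = "C", "D"

def diner_payoffs(strategies, cheap=10.0, pricey=20.0, value_ratio=0.6):
    """Diner's dilemma: each agent orders a cheap meal (cooperate) or a
    pricey one (defect); the bill is split evenly among all diners.
    Utility = personal meal value minus the shared bill."""
    n = len(strategies)
    bill = sum(pricey if s == DEFECT else cheap for s in strategies)
    share = bill / n
    return [
        (pricey if s == DEFECT else cheap) * value_ratio - share
        for s in strategies
    ]

def apply_punishment(strategies, payoffs, punishers, fine=3.0, cost=1.0):
    """Explicit punishment: each punisher pays a cost to fine every
    defector, lowering the defectors' payoffs."""
    for i in punishers:
        for j, s in enumerate(strategies):
            if s == DEFECT:
                payoffs[j] -= fine
                payoffs[i] -= cost
    return payoffs

def pairwise_imitation(strategies, payoffs, beta=1.0):
    """Pairwise imitation: each agent compares itself with one random
    peer and adopts the peer's strategy with Fermi probability
    1 / (1 + exp(-beta * (p_peer - p_self)))."""
    new = list(strategies)
    for i in range(len(strategies)):
        j = random.randrange(len(strategies))
        if j == i:
            continue
        p_adopt = 1.0 / (1.0 + math.exp(-beta * (payoffs[j] - payoffs[i])))
        if random.random() < p_adopt:
            new[i] = strategies[j]
    return new
```

With these numbers, defectors out-earn cooperators in an unpunished round, and the punishment step closes or reverses that gap, which is the mechanism the paper credits with sustaining cooperative norms.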

📝 Abstract
The evolution of cooperation has been extensively studied using abstract mathematical models and simulations. Recent advances in Large Language Models (LLMs) and the rise of LLM agents have demonstrated their ability to perform social reasoning, providing an opportunity to test the emergence of norms in more realistic agent-based simulations with human-like reasoning in natural language. In this research, we investigate whether the cooperation dynamics predicted by Boyd and Richerson's abstract mathematical model persist in a more realistic simulation of the diner's dilemma using LLM agents. Our findings indicate that agents follow the strategies defined in the Boyd and Richerson model, and that explicit punishment mechanisms drive norm emergence, reinforcing cooperative behaviour even when the agent strategy configuration varies. Our results suggest that LLM-based Multi-Agent System (MAS) simulations can, in fact, replicate the evolution of cooperation predicted by traditional mathematical models. Moreover, our simulations extend beyond the mathematical models by integrating natural-language-driven reasoning and a pairwise imitation method for strategy adoption, making them a more realistic testbed for cooperative behaviour in MASs.
Problem

Research questions and friction points this paper is trying to address.

Study cooperation evolution in LLM-agent societies using punishment strategies
Compare Boyd-Richerson model dynamics in realistic LLM simulations
Test norm emergence via natural language reasoning in multi-agent systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using LLM agents for realistic social simulations
Implementing explicit punishment to drive cooperation
Integrating natural language reasoning in MAS