🤖 AI Summary
In multi-agent engineering systems, conflicts arise between individual rationality and collective welfare, exacerbated by the dynamic uncertainty of social dilemmas that static governance rules cannot adequately address.
Method: This paper proposes a dynamic governance framework integrating strategic LLM agents (SLAs) with a pro-social promoting RL agent (PPA). It introduces an information-adaptive modulation mechanism wherein the PPA, acting as a governance agent, dynamically regulates inter-agent information transparency during iterated games to foster the evolution of cooperation.
Contribution/Results: Experiments on canonical social dilemmas (e.g., the Prisoner's Dilemma) demonstrate a significant increase in cooperation rates; SLAs exhibit fine-grained strategic adaptability, and the PPA successfully learns an effective information-modulation policy, yielding an average 37.2% improvement in social welfare. This work pioneers information-transparency-driven cooperative emergence in multi-agent systems and establishes a novel paradigm for the governance of LLM-augmented socio-technical systems.
📝 Abstract
This paper introduces a novel framework that combines LLM agents, acting as proxies for human strategic behavior, with reinforcement learning (RL) to engage these agents in evolving strategic interactions within team environments. Our approach extends traditional agent-based simulations by using strategic LLM agents (SLAs) and introducing dynamic, adaptive governance through a pro-social promoting RL agent (PPA) that modulates information access across agents in a network, optimizing social welfare and promoting pro-social behavior. Through validation in iterated games, including the Prisoner's Dilemma, we demonstrate that SLAs exhibit nuanced strategic adaptations. The PPA effectively learns to adjust information transparency, resulting in enhanced cooperation rates. This framework offers significant insights into AI-mediated social dynamics and contributes to the deployment of AI in real-world team settings.
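The governance mechanism described above can be illustrated with a minimal sketch: two agents play an iterated Prisoner's Dilemma, and a governance policy decides each round whether agents can observe the opponent's history. All names, payoffs, and agent strategies here are illustrative assumptions, not the paper's implementation; in particular, the "opportunist" strategy (defect whenever moves are hidden, otherwise play tit-for-tat) stands in for the learned SLA behavior, and the transparency policy stands in for the learned PPA.

```python
# Illustrative sketch only: payoffs, strategies, and the governance policy
# are assumptions for exposition, not the paper's actual implementation.

# Standard Prisoner's Dilemma payoffs: (row player, column player).
# C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}


def opportunist(opponent_history, transparent):
    """Hypothetical strategy: defect when moves are hidden (no reputational
    consequence); otherwise play tit-for-tat on the visible history."""
    if not transparent:
        return "D"
    if not opponent_history:
        return "C"
    return opponent_history[-1]


def play_iterated_pd(rounds, governance_policy):
    """Run the iterated game. governance_policy(t) -> bool decides whether
    round t is played with full information transparency. Returns total
    social welfare (sum of both agents' payoffs)."""
    hist_a, hist_b = [], []
    welfare = 0
    for t in range(rounds):
        transparent = governance_policy(t)
        move_a = opportunist(hist_b, transparent)
        move_b = opportunist(hist_a, transparent)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        welfare += pay_a + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return welfare


# Under full transparency both opportunists sustain mutual cooperation;
# with transparency withheld every round, both defect throughout.
welfare_open = play_iterated_pd(10, lambda t: True)    # 10 rounds of (C, C)
welfare_closed = play_iterated_pd(10, lambda t: False)  # 10 rounds of (D, D)
```

In this toy setting the gap between `welfare_open` and `welfare_closed` is the quantity a PPA-style RL governor would learn to maximize by choosing when to reveal information; the paper's contribution is learning that modulation policy rather than fixing it by hand.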