Instigating Cooperation among LLM Agents Using Adaptive Information Modulation

📅 2024-09-16
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
📄 PDF
🤖 AI Summary
In multi-agent engineering systems, conflicts arise between individual rationality and collective welfare, exacerbated by the dynamic uncertainty of social dilemmas that static governance rules cannot adequately address. Method: This paper proposes a dynamic governance framework integrating strategic large language model agents (SLAs) and prosocial policy agents (PPAs). It introduces an information-adaptive modulation mechanism wherein PPAs—acting as governance agents—dynamically regulate inter-agent information transparency during iterative games to foster cooperative evolution. Contribution/Results: Experiments on canonical social dilemmas (e.g., Prisoner’s Dilemma) demonstrate a significant increase in cooperation rates; SLAs exhibit fine-grained strategic adaptability; and PPAs successfully learn optimal information-modulation policies, yielding an average 37.2% improvement in social welfare. This work pioneers information-transparency–driven cooperative emergence in multi-agent systems and establishes a novel paradigm for governance of LLM-augmented socio-technical systems.

Technology Category

Application Category

📝 Abstract
This paper introduces a novel framework combining LLM agents as proxies for human strategic behavior with reinforcement learning (RL) to engage these agents in evolving strategic interactions within team environments. Our approach extends traditional agent-based simulations by using strategic LLM agents (SLA) and introducing dynamic and adaptive governance through a pro-social promoting RL agent (PPA) that modulates information access across agents in a network, optimizing social welfare and promoting pro-social behavior. Through validation in iterative games, including the prisoner dilemma, we demonstrate that SLA agents exhibit nuanced strategic adaptations. The PPA agent effectively learns to adjust information transparency, resulting in enhanced cooperation rates. This framework offers significant insights into AI-mediated social dynamics, contributing to the deployment of AI in real-world team settings.
Problem

Research questions and friction points this paper is trying to address.

Addresses social dilemmas between individual and collective interests in multi-agent systems
Overcomes limitations of static governance in dynamic autonomous AI environments
Enables cooperation through adaptive information control while preserving agent autonomy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive governance mechanisms integrated into system design
Separation of agent interaction and information flow networks
Reinforcement learning agent dynamically modulates information transparency
🔎 Similar Papers
No similar papers found.
Q
Qiliang Chen
Northeastern University
S
Sepehr Ilami
Northeastern University
N
Nunzio Lorè
Northeastern University
Babak Heydari
Babak Heydari
Northeastern University, UC Berkeley
Artificial IntelligenceNetwork ScienceEngineering System DesignDynamic ModelingPlatforms