When Machines Meet Each Other: Network Effects and the Strategic Role of History in Multi-Agent AI

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how large language models (LLMs) operating as autonomous agents coordinate decisions in multi-agent interactions—specifically under price incentives, historical dependencies, and network effects—and how they systematically deviate from Nash equilibrium behavior. We design a repeated-game experimental framework comprising 50 heterogeneous GPT-5 agents, employing controlled variable manipulation and agent-level regression analysis. Our key finding is that **historical structure serves as the critical lever for coordination**: monotonic histories stabilize expectations, whereas non-monotonic histories intensify path dependence and behavioral divergence. Empirically, agents exhibit a persistent price-sensitivity bias, underestimating participation at low prices and overestimating it at high prices, with outcomes remaining fragmented rather than converging. Network effects amplify this contextual bias. The work advances theoretical understanding of LLM-based collective behavior and identifies actionable intervention dimensions for governance and mechanism design in autonomous agent systems.

📝 Abstract
As artificial intelligence (AI) enters the agentic era, large language models (LLMs) are increasingly deployed as autonomous agents that interact with one another rather than operate in isolation. This shift raises a fundamental question: how do machine agents behave in interdependent environments where outcomes depend not only on their own choices but also on the coordinated expectations of peers? To address this question, we study LLM agents in a canonical network-effect game, where economic theory predicts convergence to a fulfilled expectation equilibrium (FEE). We design an experimental framework in which 50 heterogeneous GPT-5-based agents repeatedly interact under systematically varied network-effect strengths, price trajectories, and decision-history lengths. The results reveal that LLM agents systematically diverge from FEE: they underestimate participation at low prices, overestimate it at high prices, and sustain persistent dispersion. Crucially, the way history is structured emerges as a design lever. Simple monotonic histories, where past outcomes follow a steady upward or downward trend, help stabilize coordination, whereas non-monotonic histories amplify divergence and path dependence. Regression analyses at the individual level further show that price is the dominant driver of deviation, history moderates this effect, and network effects amplify contextual distortions. Together, these findings advance machine behavior research by providing the first systematic evidence on multi-agent AI systems under network effects and offer guidance for configuring such systems in practice.
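The benchmark the abstract measures agents against, the fulfilled expectation equilibrium (FEE), can be illustrated with a minimal simulation. In the sketch below, the agent count (50) matches the paper's setup, but the utility form (benefit proportional to taste × expected participation share), the taste distribution, and the parameter names are illustrative assumptions, not the authors' implementation: an agent joins when its network benefit covers the price, and expectations are iterated until they are self-fulfilling.

```python
import random

random.seed(0)

N = 50
# Assumption: heterogeneous taste for the network good, drawn uniformly.
thetas = [random.uniform(0.5, 2.0) for _ in range(N)]

def participation_share(expected_share, price, strength=1.0):
    """Fraction of agents who join given a shared expectation.

    Agent i joins iff theta_i * strength * expected_share >= price,
    i.e. the expected network benefit covers the price.
    """
    joiners = sum(1 for t in thetas if t * strength * expected_share >= price)
    return joiners / N

def fulfilled_expectation_share(price, strength=1.0, iters=200):
    """Iterate expectations until they are self-fulfilling (an FEE)."""
    share = 1.0  # optimistic initial expectation
    for _ in range(iters):
        share = participation_share(share, price, strength)
    return share

low = fulfilled_expectation_share(price=0.4)   # cheap: everyone joins
high = fulfilled_expectation_share(price=1.5)  # expensive: market unravels
```

The two prices show the coordination problem the paper probes: at a low price the optimistic expectation is confirmed and full participation is an equilibrium, while at a high price expectations unravel step by step toward zero participation. The paper's finding is that LLM agents deviate from these benchmark shares, underestimating participation at low prices and overestimating it at high ones.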
Problem

Research questions and friction points this paper is trying to address.

LLM agents deviate from equilibrium predictions in network-effect games
History structure influences coordination stability among autonomous AI agents
Price and network effects drive systematic distortions in multi-agent behavior
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPT-5 agents interact in a repeated network-effect game
History structure stabilizes coordination among agents
Price and history moderate agent deviation patterns
Yu Liu
Fudan University
Wenwen Li
Fudan University
Yifan Dou
Fudan University
Guangnan Ye
Fudan University