How AI Agents Follow the Herd of AI? Network Effects, History, and Machine Optimism

📅 2025-12-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates decision-making by multi-agent AI systems in network-effect games—where individual payoffs depend on peers’ participation—a setting underexplored in multi-agent systems. Method: Leveraging LLM-driven programmable agents, we conduct repeated-game experiments under controlled price trajectories and parameterized network effects. Contribution/Results: We identify “machine optimism”: LLM agents persistently overestimate cooperative gains due to temporal-structure bias in reasoning. Crucially, we demonstrate that temporal coherence of historical data—not merely its content—governs strategic inference and equilibrium convergence. Absent history, no convergence occurs; ordered history yields partial convergence under weak network effects but sustained over-optimism under strong effects; random history completely disrupts convergence. These findings challenge classical game-theoretic equilibrium assumptions and reveal a structural cognitive limitation of LLMs in socially embedded decision-making.

📝 Abstract
Understanding decision-making in multi-AI-agent frameworks is crucial for analyzing strategic interactions in network-effect-driven contexts. This study investigates how AI agents navigate network-effect games, where individual payoffs depend on peer participation--a context underexplored in multi-agent systems despite its real-world prevalence. We introduce a novel workflow design using large language model (LLM)-based agents in repeated decision-making scenarios, systematically manipulating price trajectories (fixed, ascending, descending, random) and network-effect strength. Our key findings include: First, without historical data, agents fail to infer equilibrium. Second, ordered historical sequences (e.g., escalating prices) enable partial convergence under weak network effects, but strong effects trigger persistent "AI optimism"--agents overestimate participation despite contradictory evidence. Third, randomized history disrupts convergence entirely, demonstrating that temporal coherence in data shapes LLMs' reasoning, unlike humans'. These results highlight a paradigm shift: in AI-mediated systems, equilibrium outcomes depend not just on incentives, but on how history is curated--something that is impossible for humans.
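The paper's own workflow is not reproduced here, but the experimental design it describes--a repeated game in which joining pays a price yet yields value that grows with peer participation, run under four price trajectories--can be sketched as follows. The payoff form, parameter names, and function signatures below are illustrative assumptions, not the authors' released code.

```python
import random

def payoff(join, n_peers_joining, price, effect_strength):
    """Hypothetical stage-game payoff: joining costs `price` but yields
    value proportional to how many peers also join (the network effect).
    Staying out yields zero."""
    if not join:
        return 0.0
    return effect_strength * n_peers_joining - price

def price_trajectory(kind, base=1.0, step=0.5, rounds=10, seed=0):
    """The four manipulated trajectories named in the abstract:
    fixed, ascending, descending, random."""
    rng = random.Random(seed)
    if kind == "fixed":
        return [base] * rounds
    if kind == "ascending":
        return [base + step * t for t in range(rounds)]
    if kind == "descending":
        return [base + step * (rounds - 1 - t) for t in range(rounds)]
    if kind == "random":
        return [base + step * rng.randrange(rounds) for _ in range(rounds)]
    raise ValueError(f"unknown trajectory kind: {kind!r}")
```

In the study, each round's history (prices and participation so far) would be serialized into an LLM agent's prompt--ordered or shuffled, per condition--before the agent's join/stay-out decision; the sketch above only fixes the game's incentive structure.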
Problem

Research questions and friction points this paper is trying to address.

Investigates AI agents' decision-making in network-effect games
Examines how historical data shapes AI convergence and optimism
Demonstrates temporal coherence's unique impact on LLM reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based agents in repeated decision-making scenarios
Manipulating price trajectories and network-effect strength systematically
Historical data curation shapes AI reasoning and equilibrium outcomes
Yu Liu
Fudan University
Wenwen Li
Fudan University
Yifan Dou
Fudan University
Guangnan Ye
Fudan University
Computer Vision - Machine Learning