🤖 AI Summary
This study investigates decision-making by multi-agent AI systems in network-effect games, where individual payoffs depend on peers' participation, a setting that remains underexplored despite its real-world prevalence.
Method: Leveraging LLM-driven programmable agents, we conduct repeated-game experiments under controlled price trajectories and parameterized network effects.
Contribution/Results: We identify "AI optimism": LLM agents persistently overestimate cooperative gains, driven by a temporal-structure bias in their reasoning. Crucially, we show that the temporal coherence of historical data, not merely its content, governs strategic inference and equilibrium convergence. Without history, no convergence occurs; ordered history yields partial convergence under weak network effects but sustained over-optimism under strong effects; randomized history disrupts convergence entirely. These findings challenge classical game-theoretic equilibrium assumptions and expose a structural cognitive limitation of LLMs in socially embedded decision-making.
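To make the setting concrete, here is a minimal sketch of a network-effect participation game of the kind the study describes. The payoff form, parameter names, and the best-response check are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of a network-effect participation game (assumed form;
# not the authors' code). Joining is worth more as more peers also join.

def payoff(joins: bool, n_peers_joining: int, price: float,
           network_strength: float, base_value: float = 1.0) -> float:
    """Hypothetical payoff: base value plus a network bonus, minus price."""
    if not joins:
        return 0.0  # opting out yields the outside option, normalized to 0
    return base_value + network_strength * n_peers_joining - price


def best_response(n_peers_joining: int, price: float,
                  network_strength: float) -> bool:
    """An agent joins iff joining beats staying out, given peers' choices."""
    return payoff(True, n_peers_joining, price, network_strength) > 0.0


def is_equilibrium(joins: list[bool], price: float,
                   network_strength: float) -> bool:
    """Profile check: every agent's choice is a best response to the rest."""
    total = sum(joins)
    return all(
        j == best_response(total - int(j), price, network_strength)
        for j in joins
    )
```

Under a payoff of this form, strong network effects support multiple equilibria (everyone joins or no one does), which is why agents' beliefs about peers, and hence about history, matter for which outcome they converge to.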
📝 Abstract
Understanding decision-making in multi-AI-agent frameworks is crucial for analyzing strategic interactions in network-effect-driven contexts. This study investigates how AI agents navigate network-effect games, where individual payoffs depend on peer participation--a context underexplored in multi-agent systems despite its real-world prevalence. We introduce a novel workflow using large language model (LLM)-based agents in repeated decision-making scenarios, systematically manipulating price trajectories (fixed, ascending, descending, random) and network-effect strength. Our key findings are as follows. First, without historical data, agents fail to infer equilibrium. Second, ordered historical sequences (e.g., escalating prices) enable partial convergence under weak network effects, but strong effects trigger persistent "AI optimism": agents overestimate participation despite contradictory evidence. Third, randomized history disrupts convergence entirely, demonstrating that, unlike human reasoning, LLM reasoning is shaped by the temporal coherence of the data it sees. These results highlight a paradigm shift: in AI-mediated systems, equilibrium outcomes depend not just on incentives but on how history is curated, a lever that has no counterpart for human decision-makers.
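As a rough sketch of the manipulations the abstract describes, the price conditions and history-curation modes might be generated as follows. The function names, value ranges, and the shuffled condition standing in for "randomized history" are assumptions for illustration.

```python
import random


def price_trajectory(kind: str, n_rounds: int, lo: float = 1.0,
                     hi: float = 5.0, seed: int = 0) -> list[float]:
    """The four price conditions: fixed, ascending, descending, random
    (values and ranges hypothetical)."""
    rng = random.Random(seed)
    if kind == "fixed":
        return [lo] * n_rounds
    step = (hi - lo) / max(n_rounds - 1, 1)
    ordered = [lo + i * step for i in range(n_rounds)]
    if kind == "ascending":
        return ordered
    if kind == "descending":
        return ordered[::-1]
    if kind == "random":
        rng.shuffle(ordered)
        return ordered
    raise ValueError(f"unknown price condition: {kind}")


def curate_history(records: list[tuple[float, int]],
                   mode: str, seed: int = 0) -> list[tuple[float, int]]:
    """History shown to the agent each round: 'none' withholds it,
    'ordered' preserves temporal order, and 'shuffled' keeps the same
    (price, participation) records but destroys their temporal coherence."""
    if mode == "none":
        return []
    if mode == "ordered":
        return list(records)
    if mode == "shuffled":
        shuffled = list(records)
        random.Random(seed).shuffle(shuffled)
        return shuffled
    raise ValueError(f"unknown history mode: {mode}")
```

The shuffled condition is the key control: it holds the informational content of history fixed while varying only its order, isolating temporal coherence as the driver of convergence.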