Beyond Nash Equilibrium: Bounded Rationality of LLMs and Humans in Strategic Decision-Making

📅 2025-06-11
🤖 AI Summary
This study investigates whether large language models (LLMs) exhibit human-like bounded rationality in strategic games (e.g., Rock-Paper-Scissors, Prisoner’s Dilemma). Adopting a behavioral game-theoretic paradigm, we conduct multi-round human–model and model–model contests, integrating behavioral trajectory analysis with reasoning-path tracing to systematically compare decision dynamics. Our key findings are threefold: First, LLMs partially replicate human heuristics—such as outcome-driven strategy switching and enhanced cooperation under repeated interaction—but apply them rigidly and with markedly weaker environmental adaptability. Second, LLMs exhibit fundamental limitations in opponent modeling and contextual awareness, leading to mechanistic rather than adaptive strategy adjustments. Third, they fail to stably converge to effective strategies in dynamic, interactive settings. Collectively, results indicate that current LLMs possess only superficial bounded rationality, lacking genuine adaptive rationality essential for robust strategic reasoning.

📝 Abstract
Large language models are increasingly used in strategic decision-making settings, yet evidence shows that, like humans, they often deviate from full rationality. In this study, we compare LLMs and humans using experimental paradigms directly adapted from behavioral game-theory research. We focus on two well-studied strategic games, Rock-Paper-Scissors and the Prisoner's Dilemma, which are well known for revealing systematic departures from rational play in human subjects. By placing LLMs in identical experimental conditions, we evaluate whether their behaviors exhibit the bounded rationality characteristic of humans. Our findings show that LLMs reproduce familiar human heuristics, such as outcome-based strategy switching and increased cooperation when future interaction is possible, but they apply these rules more rigidly and demonstrate weaker sensitivity to the dynamic changes in the game environment. Model-level analyses reveal distinctive architectural signatures in strategic behavior, and even reasoning models sometimes struggle to find effective strategies in adaptive situations. These results indicate that current LLMs capture only a partial form of human-like bounded rationality and highlight the need for training methods that encourage flexible opponent modeling and stronger context awareness.
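The "outcome-based strategy switching" described above can be illustrated with a minimal win-stay/lose-shift agent playing repeated Rock-Paper-Scissors. This is a sketch of the general heuristic, not the paper's actual experimental protocol; the uniform-random opponent and round count are illustrative assumptions.

```python
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def win_stay_lose_shift(prev_move, prev_outcome):
    """Outcome-driven switching: repeat a winning move, otherwise switch."""
    if prev_move is None:
        return random.choice(MOVES)          # no history yet: play randomly
    if prev_outcome == "win":
        return prev_move                     # win -> stay
    return random.choice([m for m in MOVES if m != prev_move])  # loss/tie -> shift

def play_round(a_move, b_move):
    """Outcome of the round from player A's perspective."""
    if a_move == b_move:
        return "tie"
    return "win" if BEATS[a_move] == b_move else "loss"

# Multi-round contest: heuristic agent vs. a uniform-random opponent.
random.seed(0)
a_prev, a_out = None, None
score = {"win": 0, "loss": 0, "tie": 0}
for _ in range(1000):
    a = win_stay_lose_shift(a_prev, a_out)
    b = random.choice(MOVES)
    a_out = play_round(a, b)
    score[a_out] += 1
    a_prev = a
print(score)
```

Against a random opponent this heuristic performs no better than chance; the paper's point is that rigid rules like this one fail to adapt when the opponent's behavior changes, which is where both the behavioral-trajectory and reasoning-path analyses focus.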
Problem

Research questions and friction points this paper is trying to address.

Compare how LLMs and humans deviate from full rationality in strategic decision-making
Assess whether LLMs exhibit human-like bounded rationality under behavioral game-theory paradigms
Identify where LLMs fall short of flexible, human-like strategic reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-round human–model and model–model contests in Rock-Paper-Scissors and the Prisoner's Dilemma
Behavioral trajectory analysis combined with reasoning-path tracing to compare decision dynamics
Model-level analyses that reveal architectural signatures in strategic behavior