AI Summary
This study investigates the strategic behavior of large language models (LLMs) in structured geopolitical simulations, focusing on their similarities to and divergences from human decision-making in action selection, risk assessment, and theoretical reasoning. Through multi-round simulations across four real-world crisis scenarios, the authors systematically evaluate six prominent LLMs using behavioral alignment metrics, risk calibration analyses, and argument coding grounded in international relations theory. The findings reveal that while LLMs initially produce decisions resembling human choices, they progressively develop distinct strategic trajectories over time, exhibiting a pronounced preference for normative cooperation, stable coordination, and risk aversion, alongside a marked deficiency in adversarial reasoning. This work provides the first empirical demonstration of the dynamic behavioral patterns of LLMs in extended geopolitical forecasting and highlights fundamental differences between model-generated strategies and human strategic cognition.
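To make the "argument coding grounded in international relations theory" concrete, here is a minimal, hypothetical sketch of how such coding could be operationalized as a keyword-lexicon tagger over model justifications. The frame labels and lexicons below are illustrative assumptions, not the authors' actual coding scheme.

```python
# Hypothetical argument-coding sketch: tag a free-text justification with
# international relations framings via simple lexicon matching.
# Frame names and keyword lists are placeholders, not the study's scheme.
FRAME_LEXICON = {
    "normative-cooperative": {"stability", "coordination", "norms", "de-escalation"},
    "adversarial-realist": {"deterrence", "leverage", "coercion", "balance of power"},
}

def code_justification(text: str) -> dict[str, int]:
    """Count lexicon hits per frame in a lowercased justification string."""
    lowered = text.lower()
    return {frame: sum(term in lowered for term in terms)
            for frame, terms in FRAME_LEXICON.items()}

print(code_justification(
    "We prioritize regional stability and coordination with allies "
    "to avoid escalation."
))  # {'normative-cooperative': 2, 'adversarial-realist': 0}
```

In practice such coding is typically done by trained annotators or an LLM judge rather than keyword matching; the sketch only illustrates the frame-counting structure of the analysis.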
Abstract
Large language models (LLMs) are increasingly proposed as agents in strategic decision environments, yet their behavior in structured geopolitical simulations remains understudied. We evaluate six popular state-of-the-art LLMs against human results across four real-world crisis simulation scenarios, requiring models to select predefined actions and justify their decisions across multiple rounds. We compare models to humans on action alignment, risk calibration via the severity of chosen actions, and argumentative framing grounded in international relations theory. Results show that models approximate human decision patterns in base simulation rounds but diverge over time, displaying distinct behavioral profiles and strategy updates. Across all models, LLM explanations for chosen actions exhibit a strong normative-cooperative framing centered on stability, coordination, and risk mitigation, with limited adversarial reasoning.
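The abstract's two quantitative comparisons can be read as distribution-level measures. Below is a minimal sketch, under assumed details: categorical actions with hypothetical names and severity scores, alignment measured as the overlap of model and human action frequencies, and risk calibration as the gap in severity-weighted choices. None of this is the authors' code.

```python
# Sketch of behavioral alignment and risk calibration under assumed
# action names and an assumed 0-3 escalation-severity scale.
from collections import Counter

ACTION_SEVERITY = {  # hypothetical severity scores; 0 = most de-escalatory
    "negotiate": 0, "sanction": 1, "blockade": 2, "strike": 3,
}

def action_distribution(choices: list[str]) -> dict[str, float]:
    """Normalize a list of chosen actions into a probability distribution."""
    counts = Counter(choices)
    total = sum(counts.values())
    return {a: counts.get(a, 0) / total for a in ACTION_SEVERITY}

def alignment(model: list[str], human: list[str]) -> float:
    """Distribution overlap in [0, 1]; 1 means identical action frequencies."""
    p, q = action_distribution(model), action_distribution(human)
    return sum(min(p[a], q[a]) for a in ACTION_SEVERITY)

def mean_severity(choices: list[str]) -> float:
    """Average severity of chosen actions; lower values suggest risk aversion."""
    return sum(ACTION_SEVERITY[a] for a in choices) / len(choices)

# Usage: per simulation round, compare an LLM's choices to the human baseline.
llm_round = ["negotiate", "negotiate", "sanction"]
human_round = ["sanction", "strike", "negotiate"]
print(alignment(llm_round, human_round))                      # ~0.67 overlap
print(mean_severity(llm_round) - mean_severity(human_round))  # negative = more risk-averse
```

Tracking both quantities round by round would surface exactly the pattern the paper reports: early overlap with human choices that erodes as model trajectories drift toward lower-severity actions.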