Can LLMs effectively provide game-theoretic-based scenarios for cybersecurity?

📅 2025-08-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether large language models (LLMs) conform to classical game-theoretic predictions in cybersecurity-relevant strategic interactions, focusing on equilibrium convergence in zero-sum games and dynamic Prisoner’s Dilemma, as well as cross-lingual behavioral stability. We develop a reproducible game-agent framework integrating persona modeling, multi-round strategic knowledge injection, and quantitative cross-lingual consistency evaluation. Systematic experiments assess decision-making behaviors of four state-of-the-art LLMs across five natural languages. Results reveal pervasive language-dependent biases: several models consistently deviate from Nash equilibria, and agent characteristics—particularly persona configuration—interact with language choice to shape payoff distributions, intra-agent consistency, and cross-lingual stability. To our knowledge, this is the first empirical study to demonstrate LLMs’ language sensitivity in security-critical games. The findings provide critical insights for multilingual AI security deployment and establish a novel evaluation paradigm for assessing strategic robustness across linguistic contexts.

📝 Abstract
Game theory has long served as a foundational tool in cybersecurity to test, predict, and design strategic interactions between attackers and defenders. The recent advent of Large Language Models (LLMs) offers new tools and challenges for the security of computer systems. In this work, we investigate whether classical game-theoretic frameworks can effectively capture the behaviours of LLM-driven actors and bots. Using a reproducible framework for game-theoretic LLM agents, we investigate two canonical scenarios, the one-shot zero-sum game and the dynamic Prisoner's Dilemma, and we test whether LLMs converge to the expected outcomes or exhibit deviations due to embedded biases. Our experiments involve four state-of-the-art LLMs and span five natural languages (English, French, Arabic, Vietnamese, and Mandarin Chinese) to assess linguistic sensitivity. For both games, we observe that the final payoffs are influenced by agent characteristics such as personality traits or knowledge of repeated rounds. Moreover, we uncover an unexpected sensitivity of the final payoffs to the choice of language, which warns against the indiscriminate application of LLMs in cybersecurity and calls for in-depth studies, as LLMs may behave differently when deployed in different countries. We also employ quantitative metrics to evaluate the internal consistency and cross-language stability of LLM agents, to help guide the selection of the most stable LLMs and the optimisation of models for secure applications.
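The dynamic Prisoner's Dilemma setting the abstract describes can be sketched as a repeated-game loop in which each round's moves come from an agent policy (in the paper, an LLM call; here, simple stand-in policies). This is a minimal illustration under assumed textbook payoff values (T=5, R=3, P=1, S=0), not the authors' actual framework:

```python
from typing import Callable

# Standard Prisoner's Dilemma payoffs (T > R > P > S); the paper's exact
# values are not given here, so these are the textbook defaults.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def play_repeated_pd(agent_a: Callable, agent_b: Callable, rounds: int):
    """Run a dynamic Prisoner's Dilemma; each agent sees the full history."""
    history: list = []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = agent_a(history)                        # in the paper, an LLM decision
        move_b = agent_b([(b, a) for a, b in history])   # opponent's view of history
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history.append((move_a, move_b))
    return score_a, score_b

# Two illustrative policies: always-defect (the one-shot Nash equilibrium)
# versus tit-for-tat (cooperate first, then mirror the opponent's last move).
always_defect = lambda hist: "D"
tit_for_tat = lambda hist: "C" if not hist else hist[-1][1]

print(play_repeated_pd(always_defect, tit_for_tat, 10))  # → (14, 9)
```

Swapping the lambda policies for LLM calls parameterised by persona and language is, in spirit, what the paper's experiments vary; whether an LLM policy stays near the equilibrium strategy is exactly the question under test.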
Problem

Research questions and friction points this paper is trying to address.

Assess LLMs in game-theoretic cybersecurity scenarios
Test LLM-driven actors in zero-sum and Prisoner's Dilemma games
Evaluate LLM sensitivity to language and biases in cybersecurity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reproducible framework for game-theoretic LLM agents
Testing LLMs in one-shot and dynamic games
Assessing linguistic sensitivity across five languages
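The cross-lingual stability assessment listed above relies on quantitative consistency metrics; the paper's exact definition is not reproduced here, but one plausible metric is the pairwise agreement rate of an agent's decisions across languages, sketched below:

```python
from itertools import combinations

def cross_lingual_stability(actions_by_language: dict) -> float:
    """Fraction of matching decisions, averaged over all language pairs.

    `actions_by_language` maps a language code to the sequence of actions
    the same agent chose for an identical battery of game prompts.
    Returns 1.0 when the agent decides identically in every language.
    """
    pairs = list(combinations(actions_by_language.values(), 2))
    if not pairs:
        return 1.0
    agree = total = 0
    for seq_a, seq_b in pairs:
        for a, b in zip(seq_a, seq_b):
            agree += (a == b)
            total += 1
    return agree / total

# Toy example (hypothetical data): the agent flips one decision in French.
runs = {
    "en": ["D", "D", "C", "D"],
    "fr": ["D", "C", "C", "D"],
    "zh": ["D", "D", "C", "D"],
}
print(round(cross_lingual_stability(runs), 3))  # → 0.833
```

A score well below 1.0 on such a metric would flag the kind of language-dependent behavioural drift the paper warns about for multilingual security deployments.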