Cybersecurity AI: A Game-Theoretic AI for Guiding Attack and Defense

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the strategic limitations of current AI-driven penetration testing, which lacks the intuitive decision-making capabilities of expert human adversaries in cyber conflict scenarios. To bridge this gap, the authors propose Generative Cut-the-Rope (G-CTR), a novel framework that integrates game theory with large language models (LLMs) in a closed-loop architecture. G-CTR extracts attack graphs from agent contexts, computes cost-aware Nash equilibria, and uses generative summaries as feedback to guide LLM-based reasoning toward strategic-level offensive and defensive decisions. Experimental results demonstrate that G-CTR replicates 70–90% of expert attack graph structures in real-world settings, increases attack success rates from 20.0% to 42.9%, accelerates execution by 60–245×, reduces operational costs by over 140×, decreases behavioral variance by 5.2×, and achieves Purple team win ratios ranging from 2:1 to 3.7:1.

📝 Abstract
AI-driven penetration testing now executes thousands of actions per hour but still lacks the strategic intuition humans apply in competitive security. To build cybersecurity superintelligence (Cybersecurity AI exceeding the best human capability), such strategic intuition must be embedded into agentic reasoning processes. We present Generative Cut-the-Rope (G-CTR), a game-theoretic guidance layer that extracts attack graphs from the agent's context, computes Nash equilibria with effort-aware scoring, and feeds a concise digest back into the LLM loop, guiding the agent's actions. Across five real-world exercises, G-CTR matches 70–90% of expert graph structure while running 60–245× faster and over 140× cheaper than manual analysis. In a 44-run cyber range, adding the digest lifts success from 20.0% to 42.9%, cuts cost-per-success by 2.7×, and reduces behavioral variance by 5.2×. In Attack-and-Defense exercises, a shared digest produces a Purple agent that wins roughly 2:1 over the LLM-only baseline and 3.7:1 over independently guided teams. This closed-loop guidance is what produces the breakthrough: it reduces ambiguity, collapses the LLM's search space, suppresses hallucinations, and keeps the model anchored to the most relevant parts of the problem, yielding large gains in success rate, consistency, and reliability.
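The pipeline the abstract describes (attack graph in, effort-aware equilibrium out, compact digest back to the LLM) can be illustrated with a toy sketch. Everything below is hypothetical: the graph, the effort discount, and the variable names are illustrative, and fictitious play stands in for whatever equilibrium solver G-CTR actually uses.

```python
# Toy sketch (hypothetical names and numbers): the attacker picks an attack
# path, the defender picks one edge to cut, and an approximate Nash
# equilibrium of the resulting zero-sum matrix game is found by fictitious play.

# attack graph: edge -> (success probability, attacker effort cost)
edges = {
    ("web", "app"): (0.9, 1.0),
    ("app", "db"):  (0.7, 2.0),
    ("web", "vpn"): (0.5, 1.5),
    ("vpn", "db"):  (0.8, 1.0),
}
paths = [  # attacker's candidate paths to the "db" target
    [("web", "app"), ("app", "db")],
    [("web", "vpn"), ("vpn", "db")],
]
cuts = list(edges)  # defender's pure strategies: cut one edge

def payoff(path, cut):
    """Effort-aware attacker payoff: path success probability, zero if the
    defender's cut lies on the path, discounted by total attacker effort."""
    if cut in path:
        return 0.0
    prob, effort = 1.0, 0.0
    for edge in path:
        p, cost = edges[edge]
        prob *= p
        effort += cost
    return prob / (1.0 + 0.1 * effort)

A = [[payoff(p, c) for c in cuts] for p in paths]  # payoff matrix

# fictitious play: each side best-responds to the opponent's empirical mix
atk_counts = [1] + [0] * (len(paths) - 1)
def_counts = [1] + [0] * (len(cuts) - 1)
for _ in range(5000):
    ai = max(range(len(paths)),  # attacker maximizes expected payoff
             key=lambda i: sum(def_counts[j] * A[i][j] for j in range(len(cuts))))
    dj = min(range(len(cuts)),   # defender minimizes it
             key=lambda j: sum(atk_counts[i] * A[i][j] for i in range(len(paths))))
    atk_counts[ai] += 1
    def_counts[dj] += 1

atk_mix = [c / sum(atk_counts) for c in atk_counts]
def_mix = [c / sum(def_counts) for c in def_counts]

# the compact "digest" a guidance layer could feed back into the LLM loop
digest = {
    "attack_paths": {" -> ".join([p[0][0]] + [e[1] for e in p]): round(w, 3)
                     for p, w in zip(paths, atk_mix)},
    "edge_cuts": {f"{c[0]}-{c[1]}": round(w, 3) for c, w in zip(cuts, def_mix)},
}
print(digest)
```

The `digest` dictionary stands in for the "concise digest" of the abstract: a small, equilibrium-weighted summary that can be serialized into the prompt so the LLM's next actions are anchored to the strategically relevant paths and cuts rather than the full search space.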
Problem

Research questions and friction points this paper is trying to address.

Cybersecurity AI
strategic intuition
attack and defense
game theory
penetration testing
Innovation

Methods, ideas, or system contributions that make the work stand out.

game-theoretic AI
attack graph
Nash equilibrium
LLM guidance
cybersecurity superintelligence
Víctor Mayoral-Vilches
Alias Robotics, Vitoria-Gasteiz, Álava, Spain
María Sanz-Gómez
Alias Robotics, Vitoria-Gasteiz, Álava, Spain
Francesco Balassone
Alias Robotics, Vitoria-Gasteiz, Álava, Spain
Stefan Rass
Full Professor, LIT Secure and Correct Systems Lab, Johannes Kepler University Linz, Austria
System Security, Statistics, Complexity Theory, Game Theory, Decision Theory
Lidia Salas-Espejo
Alias Robotics, Vitoria-Gasteiz, Álava, Spain
Benjamin Jablonski
Johannes Kepler University Linz
Luis Javier Navarrete-Lozano
Alias Robotics, Vitoria-Gasteiz, Álava, Spain
Maite del Mundo de Torres
Alias Robotics, Vitoria-Gasteiz, Álava, Spain
Cristóbal R. J. Veas Chavez
Alias Robotics, Vitoria-Gasteiz, Álava, Spain