Guided Reasoning in LLM-Driven Penetration Testing Using Structured Attack Trees

📅 2025-09-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit hallucination, invalid actions, and cyclic responses in autonomous penetration testing due to unstructured reasoning. Method: This paper proposes a deterministic task-tree-guided reasoning framework grounded in the MITRE ATT&CK matrix, explicitly embedding structured adversarial knowledge into LLM decision-making to constrain generation along executable, real-world attack chains. Contribution/Results: The framework significantly suppresses hallucination while improving path validity and interpretability. Evaluated on Llama-3-8B, Gemini-1.5, and GPT-4 across 103 realistic subtasks, it achieves up to 78.6% task completion—nearly 5× more efficient than self-directed baselines—and reduces API call consumption by two-thirds. Its core innovation lies in pioneering the use of ATT&CK-driven attack trees as a controllable reasoning skeleton for knowledge-guided security agents.

📝 Abstract
Recent advances in Large Language Models (LLMs) have driven interest in automating cybersecurity penetration testing workflows, offering the promise of faster and more consistent vulnerability assessment for enterprise systems. Existing LLM agents for penetration testing primarily rely on self-guided reasoning, which can produce inaccurate or hallucinated procedural steps. As a result, the LLM agent may undertake unproductive actions, such as exploiting unused software libraries or generating cyclical responses that repeat prior tactics. In this work, we propose a guided reasoning pipeline for penetration testing LLM agents that incorporates a deterministic task tree built from the MITRE ATT&CK Matrix, a proven penetration testing kill chain, to constrain the LLM's reasoning process to explicitly defined tactics, techniques, and procedures. This anchors reasoning in proven penetration testing methodologies and filters out ineffective actions by guiding the agent towards more productive attack procedures. To evaluate our approach, we built an automated penetration testing LLM agent using three LLMs (Llama-3-8B, Gemini-1.5, and GPT-4) and applied it to navigate 10 HackTheBox cybersecurity exercises with 103 discrete subtasks representing real-world cyberattack scenarios. Our proposed reasoning pipeline guided the LLM agent through 71.8%, 72.8%, and 78.6% of subtasks using Llama-3-8B, Gemini-1.5, and GPT-4, respectively. Comparatively, the state-of-the-art LLM penetration testing tool using self-guided reasoning completed only 13.5%, 16.5%, and 75.7% of subtasks and required 86.2%, 118.7%, and 205.9% more model queries. This suggests that incorporating a deterministic task tree into LLM reasoning pipelines can enhance the accuracy and efficiency of automated cybersecurity assessments.
Problem

Research questions and friction points this paper is trying to address.

Automating cybersecurity penetration testing using LLMs
Reducing inaccurate or hallucinated procedural steps in LLM agents
Improving efficiency and accuracy of automated vulnerability assessments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Guided reasoning pipeline with structured attack trees
Deterministic task tree from MITRE ATT&CK Matrix
Constrains LLM to proven tactics and procedures
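
The guided-reasoning loop described above can be sketched in a few lines: the agent only ever asks the LLM to choose among the valid next techniques in a deterministic task tree, rather than free-generating its next action. The node names, `guided_step` function, and `choose` callback here are illustrative assumptions, not the paper's actual implementation:

```python
# Minimal sketch of task-tree-guided reasoning, assuming a simple
# tree of ATT&CK-style tactics/techniques. Illustrative only.
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class TaskNode:
    """One tactic/technique node in the deterministic task tree."""
    technique: str
    children: List["TaskNode"] = field(default_factory=list)
    completed: bool = False


def next_candidates(node: TaskNode) -> List[TaskNode]:
    """Only uncompleted children are valid next steps, so the agent
    cannot propose actions outside the defined attack chain."""
    return [c for c in node.children if not c.completed]


def guided_step(node: TaskNode,
                choose: Callable[[List[str]], int]) -> Optional[TaskNode]:
    """Ask the LLM (via `choose`, which returns an index) to pick
    among valid techniques only, instead of free-form generation."""
    options = next_candidates(node)
    if not options:
        return None  # branch exhausted; a full agent would backtrack
    picked = choose([o.technique for o in options])
    return options[picked]


# Tiny example tree: Reconnaissance -> two candidate techniques.
root = TaskNode("Reconnaissance", [
    TaskNode("Active Scanning"),
    TaskNode("Phishing for Information"),
])

# Stand-in for an LLM choice: always pick the first valid option.
step = guided_step(root, choose=lambda opts: 0)
```

Because the LLM's output is reduced to an index into a fixed option list, hallucinated or cyclic actions are filtered out by construction, which is the intuition behind the reported gains in path validity and query efficiency.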
Katsuaki Nakano
Department of Electrical and Computer Engineering, Rochester Institute of Technology
Reza Feyyazi
Department of Electrical and Computer Engineering, Rochester Institute of Technology
Shanchieh Jay Yang
Gonzaga University
Responsible AI for Cybersecurity · Data Science · Machine Learning · Simulation
Michael Zuzak
Assistant Professor of Computer Engineering, Rochester Institute of Technology