🤖 AI Summary
This work investigates the capabilities and limitations of large language models (LLMs) in automating Linux local privilege escalation during penetration testing. We propose the first end-to-end, LLM-driven autonomous privilege escalation framework, incorporating dynamic error recovery, context-aware memory management, and multi-stage guided reasoning to support both in-context learning and interactive command-line inference. We systematically evaluate GPT-4-turbo, GPT-3.5-turbo, and Llama3 on real Linux target machines. Results show GPT-4-turbo successfully exploits 33–83% of known privilege escalation vulnerabilities—substantially outperforming GPT-3.5-turbo (16–50%) and Llama3 (0–33%). This study bridges a critical gap in AI-powered red teaming by enabling automated lateral movement and privilege escalation, and empirically establishes the feasibility and practical boundaries of advanced reasoning LLMs in ethical hacking tasks.
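The interactive loop described above (the LLM proposes a command, the framework executes it on the target, and the result is fed back as context, with bounded memory) can be sketched as follows. This is a hypothetical minimal illustration, not the authors' implementation; `suggest_cmd`, `execute`, and the toy stand-ins below are invented names for exposition.

```python
from collections import deque

def run_escalation_loop(suggest_cmd, execute, max_steps=10, memory_size=5):
    """Drive an LLM-suggested command loop with bounded memory.

    suggest_cmd: callable taking recent history, returning the next shell command
    execute:     callable running a command on the target, returning (ok, output)
    """
    # Context-aware memory management: keep only the most recent steps,
    # so the prompt stays within the model's context window.
    history = deque(maxlen=memory_size)
    for _ in range(max_steps):
        cmd = suggest_cmd(list(history))   # LLM proposes the next command
        ok, output = execute(cmd)          # run it; failures stay in history
        history.append((cmd, ok, output))  # feed the result back as context
        if ok and "root" in output:        # simplistic success check for the sketch
            return cmd, history
    return None, history

# Toy stand-ins for demonstration only: a scripted "LLM" and a simulated target.
def fake_llm(history):
    # First enumerate, then (after seeing output) attempt a sudo-based escalation.
    return "id" if not history else "sudo /usr/bin/vim -c ':!id'"

def fake_target(cmd):
    return (True, "uid=0(root)") if cmd.startswith("sudo") else (True, "uid=1000(user)")
```

In a real framework the error recovery would inspect failed commands and their stderr before re-prompting, rather than relying on a fixed success string.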
📝 Abstract
Penetration testing, an essential component of software security testing, allows organizations to identify and remediate vulnerabilities in their systems, thus bolstering their defense mechanisms against cyberattacks. One recent advancement in penetration testing is the use of Large Language Models (LLMs). We explore the intersection of LLMs and penetration testing to gain insight into their capabilities and challenges in the context of privilege escalation. We introduce a fully automated privilege-escalation tool designed to evaluate the efficacy of LLMs for (ethical) hacking, execute benchmarks using multiple LLMs, and investigate their respective results. Our results show that GPT-4-turbo is well suited to exploiting vulnerabilities, succeeding on 33–83% of them. GPT-3.5-turbo can abuse 16–50% of vulnerabilities, while local models, such as Llama3, can only exploit between 0 and 33% of the vulnerabilities. We analyze the impact of different context sizes, in-context learning, optional high-level guidance mechanisms, and memory management techniques. We discuss areas that remain challenging for LLMs, including maintaining focus during testing and coping with errors, and conclude by comparing LLMs with human hackers. The current version of the LLM-guided privilege-escalation prototype can be found at https://github.com/ipa-labs/hackingBuddyGPT.