🤖 AI Summary
Small local large language models struggle to perform Linux privilege escalation efficiently under limited computational resources. This work proposes a two-stage post-training approach: supervised fine-tuning on program-generated privilege-escalation trajectories, followed by reinforcement learning driven by a verifiable reward mechanism. To our knowledge, this is the first application of such a paradigm to multi-step interactive security tasks with compact local models. Evaluated on 12 held-out privilege-escalation scenarios, a 4B-parameter model achieves a 95.8% success rate, closely approaching Claude Opus 4.6 (97.5%), while cutting the inference cost per successful escalation by over 100×, demonstrating both high efficacy and low resource overhead.
📝 Abstract
LLM agents are increasingly relevant to research domains such as vulnerability discovery. Yet, the strongest systems remain closed and cloud-only, making them resource-intensive, difficult to reproduce, and unsuitable for work involving proprietary code or sensitive data. Consequently, there is an urgent need for small, local models that can perform security tasks under strict resource budgets, but methods for developing them remain underexplored. In this paper, we address this gap by proposing a two-stage post-training pipeline. We focus on the problem of Linux privilege escalation, where success is automatically verifiable and the task requires multi-step interactive reasoning. Using an experimental setup that prevents data leakage, we post-train a 4B model in two stages: supervised fine-tuning on traces from procedurally generated privilege-escalation environments, followed by reinforcement learning with verifiable rewards. On a held-out benchmark of 12 Linux privilege-escalation scenarios, supervised fine-tuning alone more than doubles the baseline success rate at 20 rounds, and reinforcement learning further lifts our resulting model, PrivEsc-LLM, to 95.8%, nearly matching Claude Opus 4.6 at 97.5%. At the same time, the expected inference cost per successful escalation is reduced by over 100×.
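Privilege escalation lends itself to verifiable rewards because success has an objective signal: the agent's shell runs as root. A minimal sketch of such a binary verifier is below; the function name, the use of `id` output, and the regex are illustrative assumptions, not the paper's actual implementation.

```python
import re

def escalation_reward(id_output: str) -> float:
    """Binary verifiable reward for a privilege-escalation episode.

    Returns 1.0 iff the final `id` command output from the agent's shell
    reports uid 0 (root), else 0.0. Hypothetical check for illustration;
    the paper's verifier may inspect the environment differently.
    """
    match = re.search(r"uid=(\d+)", id_output)
    return 1.0 if match and match.group(1) == "0" else 0.0

# Example: reward the episode only when a root shell was obtained.
print(escalation_reward("uid=0(root) gid=0(root) groups=0(root)"))   # 1.0
print(escalation_reward("uid=1000(user) gid=1000(user)"))            # 0.0
```

Because the reward is computed from the environment rather than a learned judge, it is cheap, deterministic, and immune to reward hacking via persuasive but incorrect agent output.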