Post-Training Local LLM Agents for Linux Privilege Escalation with Verifiable Rewards

📅 2026-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Small local large language models struggle to perform Linux privilege escalation efficiently under limited computational resources. This work proposes a two-stage post-training approach: supervised fine-tuning on procedurally generated privilege-escalation trajectories, followed by reinforcement learning driven by a verifiable reward mechanism. To the authors' knowledge, this is the first application of such a paradigm to multi-step interactive security tasks with compact local models. Evaluated on 12 held-out privilege-escalation scenarios, a 4B-parameter model achieves a 95.8% success rate, closely approaching Claude Opus 4.6 (97.5%), while reducing the inference cost per successful escalation by over 100×, demonstrating high efficacy at very low resource overhead.

📝 Abstract
LLM agents are increasingly relevant to research domains such as vulnerability discovery. Yet, the strongest systems remain closed and cloud-only, making them resource-intensive, difficult to reproduce, and unsuitable for work involving proprietary code or sensitive data. Consequently, there is an urgent need for small, local models that can perform security tasks under strict resource budgets, but methods for developing them remain underexplored. In this paper, we address this gap by proposing a two-stage post-training pipeline. We focus on the problem of Linux privilege escalation, where success is automatically verifiable and the task requires multi-step interactive reasoning. Using an experimental setup that prevents data leakage, we post-train a 4B model in two stages: supervised fine-tuning on traces from procedurally generated privilege-escalation environments, followed by reinforcement learning with verifiable rewards. On a held-out benchmark of 12 Linux privilege-escalation scenarios, supervised fine-tuning alone more than doubles the baseline success rate at 20 rounds, and reinforcement learning further lifts our resulting model, PrivEsc-LLM, to 95.8%, nearly matching Claude Opus 4.6 at 97.5%. At the same time, the expected inference cost per successful escalation is reduced by over 100x.
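The abstract notes that privilege escalation is well suited to reinforcement learning because success is automatically verifiable. A minimal sketch of such a check, assuming (hypothetically) that the training environment rewards the agent when the final shell runs as root; the function name and the `id`-output parsing are illustrative, not taken from the paper:

```python
def verifiable_reward(final_shell_output: str) -> float:
    """Binary reward: 1.0 iff the agent's last `id` command shows uid 0 (root).

    Typical `id` output on success looks like:
        uid=0(root) gid=0(root) groups=0(root)
    Any non-root uid (e.g. uid=1000(user)) yields zero reward.
    """
    return 1.0 if "uid=0(" in final_shell_output else 0.0


# Illustrative usage on mock transcripts:
success = verifiable_reward("uid=0(root) gid=0(root) groups=0(root)")
failure = verifiable_reward("uid=1000(user) gid=1000(user)")
```

Because the signal is computed directly from the environment's state rather than from a learned judge, it cannot be gamed by plausible-sounding but unsuccessful trajectories, which is what makes RL with verifiable rewards practical here.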
Problem

Research questions and friction points this paper is trying to address.

Local LLM
Linux Privilege Escalation
Post-Training
Verifiable Rewards
Resource-Constrained Security
Innovation

Methods, ideas, or system contributions that make the work stand out.

Post-Training
Local LLM
Privilege Escalation
Verifiable Rewards
Reinforcement Learning