🤖 AI Summary
This work addresses the difficulty of securing scripting languages such as PowerShell, whose security assurances today rely heavily on manually crafted rules and are therefore prone to human error and operational risk, while general-purpose large language models (LLMs) perform poorly at generating secure scripts. To bridge this gap, the authors propose PSSec, a novel framework that systematically evaluates and enhances the capability of lightweight LLMs to generate secure PowerShell code. PSSec uses a self-debugging agent, which combines static analysis with the reasoning abilities of advanced LLMs, to synthesize structured triplets of insecure scripts, violation analyses, and corresponding fixes, and then trains models with supervised fine-tuning followed by reinforcement learning. Experimental results show that a 1.7B-parameter model trained with PSSec matches or surpasses much larger general-purpose LLMs on security-critical tasks while reducing inference cost by more than an order of magnitude.
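The paper does not publish its data schema, but the triplets it describes can be pictured concretely. Below is a hypothetical example of one (insecure script, violation analysis, fix) record; the PowerShell snippets and the field names are illustrative, though `PSAvoidUsingInvokeExpression` is a real PSScriptAnalyzer rule of the kind such an analysis would cite.

```python
# One illustrative training triplet: an insecure PowerShell script, a
# natural-language violation analysis, and the repaired script.
triplet = {
    "insecure_script": (
        # Interpolates unvalidated user input into code executed by
        # Invoke-Expression -- a classic PowerShell injection risk.
        '$name = Read-Host "Service name"\n'
        'Invoke-Expression "Stop-Service $name"'
    ),
    "violation_analysis": (
        "Invoke-Expression on user-supplied input permits arbitrary command "
        "injection (PSScriptAnalyzer rule PSAvoidUsingInvokeExpression)."
    ),
    "fix": (
        # Passes the input as a parameter value instead of building code from it.
        '$name = Read-Host "Service name"\n'
        'Stop-Service -Name $name'
    ),
}

for field in ("insecure_script", "violation_analysis", "fix"):
    print(f"{field}: {len(triplet[field])} chars")
```

A corpus of such records gives the fine-tuned model aligned supervision for all three tasks the benchmark measures: generation, analysis, and repair.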
📝 Abstract
The security of scripting languages such as PowerShell is critical given their powerful automation and administration capabilities, which are often exercised with elevated privileges. Today, securing these languages still demands substantial human effort to craft and enforce rules, imposing heavy burdens on administrators and creating critical production risks (e.g., misoperations that shut down servers). Large language models (LLMs) have demonstrated strong capabilities in code generation, vulnerability detection, and automated repair for languages like Python and JavaScript. However, their ability to assist with generating secure scripting-language code remains largely underexplored.

In this paper, we present SecGenEval-PS, a benchmark designed to systematically evaluate LLMs on secure script generation, security analysis, and automated repair. Our results show that both proprietary and open-source models fall short in these areas: for instance, over 60% of the PowerShell scripts produced by GPT-4o and o3-mini are insecure without structured guidance.

To bridge this gap, we propose PSSec, a framework that combines data synthesis with fine-tuning to strengthen model security capabilities. We develop a self-debugging agent that integrates static analyzers with the reasoning abilities of advanced LLMs to synthesize large-scale structured triplets of insecure scripts, violation analyses, and corresponding repairs. We then fine-tune lightweight LLMs (as small as 1.7B parameters) using supervised fine-tuning (SFT) and reinforcement learning (RL), enabling security-aware reasoning and the generation of secure PowerShell code. Across multiple LLM families, including GPT and Qwen, *PSSec*-trained models match or surpass general-purpose large models on PowerShell security tasks while reducing inference cost by more than an order of magnitude.
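The self-debugging agent in the abstract can be sketched as an analyze-repair loop: a static analyzer flags violations, an LLM proposes a repair, and the loop repeats until the script is clean or a retry budget runs out. The sketch below stubs both components with toy logic; the function names, the single hard-coded rule, and the string-rewrite "repair" are all assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of a self-debugging loop: static analysis -> LLM repair -> re-check.

def run_static_analyzer(script: str) -> list[str]:
    # Stand-in for invoking a real analyzer such as PSScriptAnalyzer;
    # returns a list of violation identifiers found in the script.
    findings = []
    if "Invoke-Expression" in script:
        findings.append("PSAvoidUsingInvokeExpression")
    return findings

def llm_repair(script: str, findings: list[str]) -> str:
    # Stand-in for an LLM call that rewrites the script given the findings.
    if "PSAvoidUsingInvokeExpression" in findings:
        return script.replace('Invoke-Expression "Stop-Service $name"',
                              'Stop-Service -Name $name')
    return script

def self_debug(script: str, max_rounds: int = 3) -> tuple[str, list[str]]:
    # Iterate until the analyzer reports no violations or the budget is spent;
    # return the final script along with any violations that remain.
    for _ in range(max_rounds):
        findings = run_static_analyzer(script)
        if not findings:
            return script, []
        script = llm_repair(script, findings)
    return script, run_static_analyzer(script)

insecure = 'Invoke-Expression "Stop-Service $name"'
fixed, remaining = self_debug(insecure)
print(fixed, remaining)  # -> Stop-Service -Name $name []
```

In the framework described above, scripts that converge to a clean state yield the (insecure, analysis, repair) triplets used for SFT and RL training.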