SecPI: Secure Code Generation with Reasoning Models via Security Reasoning Internalization

📅 2026-04-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the tendency of reasoning language models to introduce security flaws during code generation, a problem that prior approaches handle poorly: training-based methods rely on costly security-specific datasets, while inference-time security prompts degrade functional correctness and trigger only shallow vulnerability analysis. To overcome these limitations, the authors propose SecPI, a fine-tuning framework that curates security-relevant tasks from general-purpose code data and uses a teacher model to generate structured security reasoning traces. By supervised fine-tuning on inputs that contain no explicit security prompt, SecPI teaches the model to internalize secure coding practices by default, eliminating the need for handcrafted instructions or scarce vulnerability examples. Experiments show that SecPI raises the rate of secure and functionally correct generations for QwQ-32B by 14.0 percentage points to 62.2% on CWEval and to 22.0% on BaxBench, and, even when trained only on injection-related CWEs, improves performance by 9.9% on held-out memory-safety CWE categories, demonstrating strong cross-CWE and cross-language generalization.
📝 Abstract
Reasoning language models (RLMs) are increasingly used in programming. Yet, even state-of-the-art RLMs frequently introduce critical security vulnerabilities in generated code. Prior training-based approaches for secure code generation face a critical limitation that prevents their direct application to RLMs: they rely on costly, manually curated security datasets covering only a limited set of vulnerabilities. At the inference level, generic security reminders consistently degrade functional correctness while triggering only shallow ad-hoc vulnerability analysis. To address these problems, we present SecPI, a fine-tuning pipeline that teaches RLMs to internalize structured security reasoning, producing secure code by default without any security instructions at inference time. SecPI filters existing general-purpose coding datasets for security-relevant tasks using an LLM-based classifier, generates high-quality security reasoning traces with a teacher model guided by a structured prompt that systematically enumerates relevant CWEs and mitigations, and fine-tunes the target model on pairs of inputs with no security prompt and teacher reasoning traces -- as a result, the model learns to reason about security autonomously rather than in response to explicit instructions. An extensive evaluation on security benchmarks with state-of-the-art open-weight reasoning models validates the effectiveness of our approach. For instance, SecPI improves the percentage of functionally correct and secure generations for QwQ 32B from 48.2% to 62.2% (+14.0 points) on CWEval and from 18.2% to 22.0% on BaxBench. Further investigation also reveals strong cross-CWE and cross-language generalization beyond training vulnerabilities. Even when trained only on injection-related CWEs, QwQ 32B generates correct and secure code 9.9% more frequently on held-out memory-safety CWEs.
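The abstract's pipeline (filter general-purpose tasks with an LLM-based classifier, generate structured security reasoning traces with a teacher, then fine-tune on prompt-free input/trace pairs) can be illustrated with a minimal sketch. All names, the keyword-based classifier, and the templated teacher trace below are hypothetical stand-ins for the paper's actual LLM components:

```python
# Illustrative sketch of a SecPI-style data-curation pipeline.
# is_security_relevant() and teacher_trace() are toy stand-ins for the
# LLM-based classifier and teacher model described in the abstract.

# Hypothetical security reminder that is stripped from training inputs,
# so the student learns to reason about security unprompted.
SECURITY_PROMPT = "Write secure code and avoid CWE vulnerabilities."

def is_security_relevant(task: str) -> bool:
    """Stand-in for the paper's LLM-based relevance classifier."""
    keywords = ("sql", "password", "upload", "deserialize", "path", "exec")
    return any(k in task.lower() for k in keywords)

def teacher_trace(task: str) -> str:
    """Stand-in for the teacher's structured security reasoning:
    enumerate plausible CWEs, plan mitigations, then emit the solution."""
    return (
        f"<think>Task: {task}\n"
        "1. Enumerate CWEs that could apply to this task.\n"
        "2. Plan a mitigation for each relevant CWE.\n"
        "3. Write code implementing those mitigations.\n"
        "</think>\n<code>...</code>"
    )

def build_sft_pairs(tasks: list[str]) -> list[dict]:
    """Keep only security-relevant tasks, and pair a *plain* prompt
    (security reminder removed) with the teacher's reasoning trace."""
    pairs = []
    for task in tasks:
        if not is_security_relevant(task):
            continue
        plain_prompt = task.replace(SECURITY_PROMPT, "").strip()
        pairs.append({"input": plain_prompt, "target": teacher_trace(task)})
    return pairs

tasks = [
    "Build a login form that checks a password against the DB",
    "Compute the nth Fibonacci number",
]
pairs = build_sft_pairs(tasks)
print(len(pairs))  # → 1 (only the security-relevant task survives filtering)
```

The key design point mirrored here is that the security instruction appears nowhere in the fine-tuning inputs, only in how the targets were generated, which is what lets the trained model produce security reasoning by default.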
Problem

Research questions and friction points this paper is trying to address.

secure code generation
reasoning language models
security vulnerabilities
CWE
inference-time security
Innovation

Methods, ideas, or system contributions that make the work stand out.

Secure Code Generation
Reasoning Language Models
Security Reasoning Internalization
CWE Generalization
Fine-tuning Pipeline