Fortifying LLM-Based Code Generation with Graph-Based Reasoning on Secure Coding Practices

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently generate code containing security vulnerabilities, and existing defenses (such as fine-tuning on curated data and static analysis) are resource-intensive, struggle to adapt to zero-day vulnerabilities, and are often inapplicable to proprietary models. Method: This paper introduces GRASP, a framework built around a graph-guided reasoning mechanism grounded in a directed acyclic graph (DAG) of secure coding practices (the SCP graph). GRASP injects structured security knowledge into code generation at inference time, without model fine-tuning, yielding interpretable, model-agnostic, generalizable, and scalable security improvements. Contribution/Results: GRASP achieves Security Rates (SR) above 80% across multiple mainstream LLMs and, crucially, improves SR by up to 88% over baselines on zero-day vulnerabilities. Its architecture requires no retraining and preserves the original model's functionality while enhancing security awareness during inference.

📝 Abstract
The code generation capabilities of Large Language Models (LLMs) have transformed the field of software development. However, this advancement also presents significant security challenges, as LLM-generated code often contains vulnerabilities. One direction of research strengthens LLMs by injecting or refining security knowledge through curated datasets, model tuning, or static analyzers. While effective in certain settings, these methods can be resource-intensive, less adaptable to zero-day vulnerabilities, and often inapplicable to proprietary models. To address these challenges, we introduce GRASP, which explores a new direction that focuses on structured reasoning over Secure Coding Practices (SCPs) rather than additional training or external feedback. GRASP comprises two key ideas: (1) an SCP graph that organizes SCPs into a Directed Acyclic Graph (DAG) capturing dependencies and relationships, and (2) a graph-based reasoning process that systematically guides LLMs through relevant SCPs for code generation. This design enables interpretable, model-agnostic, and scalable security improvements, particularly for previously unseen vulnerabilities. Our evaluation shows that GRASP consistently achieves Security Rates (SR) exceeding 80% across multiple LLMs, and delivers up to 88% improvements over baselines on zero-day vulnerabilities.
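The abstract describes two components: an SCP graph (a DAG of secure coding practices with dependency edges) and a reasoning process that walks the model through the relevant practices. The paper does not publish its graph, so the sketch below is only a plausible illustration of the data structure: the practice names, the dependency edges, and the `ordered_practices` helper are all invented assumptions, not GRASP's actual content.

```python
from graphlib import TopologicalSorter

# Hypothetical SCP graph: each secure coding practice maps to the
# practices it depends on (the DAG's edges). All names are illustrative.
scp_deps = {
    "validate_input": [],
    "sanitize_paths": ["validate_input"],
    "parameterize_queries": ["validate_input"],
    "least_privilege_db_user": ["parameterize_queries"],
}

def ordered_practices(relevant, deps):
    """Return the relevant SCPs plus their transitive prerequisites,
    in dependency order (prerequisites first)."""
    # Collect the subgraph reachable from the relevant practices.
    needed, stack = set(), list(relevant)
    while stack:
        scp = stack.pop()
        if scp not in needed:
            needed.add(scp)
            stack.extend(deps[scp])
    # Topologically sort just that subgraph.
    sub = {scp: [d for d in deps[scp] if d in needed] for scp in needed}
    return list(TopologicalSorter(sub).static_order())

# For a task touching a database, prerequisites surface first:
print(ordered_practices(["least_privilege_db_user"], scp_deps))
# → ['validate_input', 'parameterize_queries', 'least_privilege_db_user']
```

Ordering by dependency is what makes the traversal "structured": a practice is only presented to the model after the practices it builds on, which is the property a DAG (rather than a flat checklist) buys you.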
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM-generated code security against vulnerabilities
Addressing zero-day vulnerabilities without additional model training
Providing interpretable security improvements through structured reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph-based reasoning for secure code generation
SCP graph organizes secure coding practices
Model-agnostic security improvements without additional training
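Because the guidance is applied at inference time, the model-agnostic, no-retraining property in the bullets above can be approximated by folding the selected practices into the generation prompt itself. A minimal sketch of that idea, with the prompt wording and the example practices entirely invented (the paper's actual prompting scheme may differ):

```python
def build_secure_prompt(task, ordered_scps):
    """Prepend dependency-ordered secure coding practices to a code
    generation request; works with any text-in/text-out LLM, including
    proprietary ones, since no weights are touched."""
    guidance = "\n".join(f"{i}. {p}" for i, p in enumerate(ordered_scps, 1))
    return (
        "Follow these secure coding practices, in order:\n"
        f"{guidance}\n\n"
        f"Task: {task}\n"
    )

prompt = build_secure_prompt(
    "Write a function that looks up a user by name in SQLite.",
    ["validate input", "use parameterized queries"],
)
print(prompt)
```

The resulting string can be sent to any chat or completion API unchanged, which is what makes this style of defense compatible with closed models where fine-tuning is not an option.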