🤖 AI Summary
To address the challenges of uncontrollable quality, weak security, and poor maintainability in code generated by large language models (LLMs), this paper proposes REAL: a reinforcement learning framework grounded in program analysis feedback. REAL introduces a prompt-agnostic, reference-free dual-signal feedback mechanism that jointly leverages static analysis (detecting security vulnerabilities and type errors) and dynamic unit testing, eliminating reliance on human annotations or heuristic rules while aligning training end-to-end with code quality objectives. The framework combines Proximal Policy Optimization (PPO), LLM fine-tuning, and inference-time enhancement techniques. Extensive experiments across multiple benchmarks and model scales show that REAL consistently outperforms state-of-the-art methods, simultaneously improving functional correctness, SQL injection resistance, and type annotation completeness.
📝 Abstract
Code generation with large language models (LLMs), often termed vibe coding, is increasingly adopted in production but fails to ensure code quality, particularly in security (e.g., SQL injection vulnerabilities) and maintainability (e.g., missing type annotations). Existing methods, such as supervised fine-tuning and rule-based post-processing, rely on labor-intensive annotations or brittle heuristics, limiting their scalability and effectiveness. We propose REAL, a reinforcement learning framework that incentivizes LLMs to generate production-quality code using program analysis-guided feedback. Specifically, REAL integrates two automated signals: (1) program analysis detecting security or maintainability defects and (2) unit tests ensuring functional correctness. Unlike prior work, our framework is prompt-agnostic and reference-free, enabling scalable supervision without manual intervention. Experiments across multiple datasets and model scales demonstrate that REAL outperforms state-of-the-art methods in simultaneous assessments of functionality and code quality. Our work bridges the gap between rapid prototyping and production-ready code, enabling LLMs to deliver both speed and quality.
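To make the dual-signal idea concrete, here is a minimal sketch of how the two automated signals described above might be combined into a single scalar RL reward. All names and the weighting scheme are illustrative assumptions, not the paper's actual implementation; in REAL the analysis and test results come from real program analyzers and test harnesses rather than precomputed counts.

```python
# Hypothetical sketch of a dual-signal reward (illustrative only; the
# function names, signatures, and weighting are assumptions, not REAL's code).

def unit_test_signal(passed: int, total: int) -> float:
    """Dynamic signal: fraction of unit tests the generated code passes."""
    return passed / total if total else 0.0

def analysis_signal(num_defects: int) -> float:
    """Static signal: 1.0 if the analyzer reports no security or
    maintainability defects (e.g. SQL-injection sinks, missing type
    annotations), else 0.0."""
    return 1.0 if num_defects == 0 else 0.0

def dual_signal_reward(passed: int, total: int,
                       num_defects: int, alpha: float = 0.5) -> float:
    """Blend both automated signals into one scalar reward for PPO.

    alpha weights functional correctness against code quality; both
    signals are reference-free, so no human labels are needed.
    """
    return (alpha * unit_test_signal(passed, total)
            + (1.0 - alpha) * analysis_signal(num_defects))

# Example: all 4 tests pass, but the analyzer flags one SQL-injection defect,
# so the candidate earns only the functional half of the reward.
reward = dual_signal_reward(passed=4, total=4, num_defects=1)
```

Because both signals are computed mechanically from the generated code itself, this kind of reward scales to any prompt without references, which is the property the abstract highlights as prompt-agnostic and reference-free.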