Code-Optimise: Self-Generated Preference Data for Correctness and Efficiency

📅 2024-06-18
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
To address the common oversight of execution efficiency in large language models for code generation, this paper proposes Code-Optimise, a lightweight optimisation framework that improves runtime while preserving functional correctness. Methodologically, it (1) self-generates preference data that jointly encodes correctness (passed/failed) and runtime (quick/slow) as learning signals, and (2) dynamically selects solutions to reduce overfitting, all without relying on larger models for supervision. Experiments show significant gains in pass@k; runtimes drop by an additional 6% over competitive baselines on in-domain data and by up to 3% out-of-domain; and the average length of generated solutions shrinks by up to 48% on MBPP and 23% on HumanEval, making inference faster and cheaper to deploy.

📝 Abstract
Code Language Models have been trained to generate accurate solutions, typically with no regard for runtime. On the other hand, previous works that explored execution optimisation have observed corresponding drops in functional correctness. To that end, we introduce Code-Optimise, a framework that incorporates both correctness (passed, failed) and runtime (quick, slow) as learning signals via self-generated preference data. Our framework is both lightweight and robust as it dynamically selects solutions to reduce overfitting while avoiding a reliance on larger models for learning signals. Code-Optimise achieves significant improvements in pass@k while decreasing the competitive baseline runtimes by an additional 6% for in-domain data and up to 3% for out-of-domain data. As a by-product, the average length of the generated solutions is reduced by up to 48% on MBPP and 23% on HumanEval, resulting in faster and cheaper inference. The generated data and codebase is open-sourced at https://github.com/huawei-noah/HEBO/tree/Code_Optimise.
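The abstract describes pairing self-generated solutions using two signals: correctness (passed/failed) and runtime (quick/slow). The following is a minimal sketch of how such preference pairs might be constructed. The helper names (`score_solution`, `build_preference_pairs`), the use of callables in place of generated source code, and the exact pairing rule are illustrative assumptions, not the paper's implementation.

```python
import itertools
import time

def score_solution(solution, tests, n_runs=3):
    """Check a candidate solution against unit tests and time it.

    Returns (passed, mean_runtime). Failing or crashing solutions get
    infinite runtime so they always lose on the quick/slow signal.
    `solution` is a callable here for simplicity; the paper executes
    model-generated source code instead.
    """
    try:
        for args, expected in tests:
            if solution(*args) != expected:
                return False, float("inf")
    except Exception:
        return False, float("inf")
    start = time.perf_counter()
    for _ in range(n_runs):
        for args, _ in tests:
            solution(*args)
    return True, (time.perf_counter() - start) / n_runs

def build_preference_pairs(solutions, tests):
    """Build (preferred, rejected) pairs from self-generated solutions.

    A passing solution is preferred over a failing one (passed/failed
    signal); among passing solutions, the quicker one is preferred
    (quick/slow signal).
    """
    scored = [(sol, *score_solution(sol, tests)) for sol in solutions]
    pairs = []
    for (a, a_ok, a_t), (b, b_ok, b_t) in itertools.combinations(scored, 2):
        if a_ok and not b_ok:
            pairs.append((a, b))
        elif b_ok and not a_ok:
            pairs.append((b, a))
        elif a_ok and b_ok and a_t != b_t:
            pairs.append((a, b) if a_t < b_t else (b, a))
    return pairs
```

Pairs produced this way could feed any pairwise preference-learning objective; the sketch only illustrates how one set of sampled solutions yields both correctness-based and runtime-based comparisons.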
Problem

Research questions and friction points this paper is trying to address.

Code language models are trained for accuracy with little regard for runtime
Prior execution-optimisation work sacrifices functional correctness
Learning from self-generated data risks overfitting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-generated preference data
Dynamic solution selection
Joint correctness (passed/failed) and runtime (quick/slow) learning signals