EffiPair: Improving the Efficiency of LLM-generated Code with Relative Contrastive Feedback

📅 2026-04-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the runtime and memory inefficiency of code generated by large language models, a challenge that existing optimization methods handle poorly because they rely on absolute execution feedback, which is costly to collect and offers only weak guidance. The paper proposes EffiPair, a fine-tuning-free, inference-time iterative optimization framework built around a novel relative contrastive feedback mechanism. EffiPair generates multiple candidate programs, identifies pairs that are structurally similar yet differ substantially in efficiency, and extracts lightweight contrastive signals from their execution profiles to steer the model toward producing more efficient code. Requiring no parameter updates and preserving functional correctness, the approach achieves up to a 1.5× speedup while reducing prompt consumption by over 90% compared to prior methods.
📝 Abstract
Large language models (LLMs) often generate code that is functionally correct but inefficient in runtime and memory. Prior approaches to improving code efficiency typically rely on absolute execution feedback, such as profiling a single program's runtime or memory usage, which is costly and provides weak guidance for refinement. We propose Relative Contrastive Feedback (RCF), an inference-time feedback mechanism that requires no model fine-tuning or parameter updates. RCF compares two structurally similar programs for the same task and highlights the differences associated with better efficiency. Building on this idea, we introduce EffiPair, an inference-time iterative refinement framework that operates entirely at test time by generating multiple candidate solutions, identifying informative program pairs with large efficiency gaps, summarizing their execution differences into lightweight feedback, and using this signal to produce more efficient solutions. By replacing isolated scalar feedback with pairwise contrastive comparisons, EffiPair provides more direct guidance while reducing profiling and prompting overhead. Experiments on code-efficiency benchmarks show that EffiPair consistently improves efficiency while preserving correctness. For instance, with DeepSeek-Chat V3.2, EffiPair achieves up to 1.5x speedup over generation without performance feedback, while reducing token usage by more than 90% compared to prior work.
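The core loop the abstract describes — select a structurally similar candidate pair with a large efficiency gap, then turn their differences into lightweight feedback — can be sketched as below. This is an illustrative reconstruction, not the paper's implementation: the similarity metric (`difflib` ratio), the threshold, and the use of a plain line diff in place of the paper's execution-profile summary are all assumptions.

```python
import difflib

def select_contrastive_pair(candidates, sim_threshold=0.4):
    """Pick the candidate pair that is structurally similar (above a
    threshold) with the largest runtime gap, returning the faster
    program first. Sketch of EffiPair's pair selection; the metric
    and threshold are illustrative assumptions."""
    best, best_gap = None, 0.0
    for i, (code_a, t_a) in enumerate(candidates):
        for code_b, t_b in candidates[i + 1:]:
            sim = difflib.SequenceMatcher(None, code_a, code_b).ratio()
            gap = abs(t_a - t_b)
            if sim >= sim_threshold and gap > best_gap:
                # Order the pair so the faster program comes first.
                best = (code_a, code_b) if t_a <= t_b else (code_b, code_a)
                best_gap = gap
    return best

def contrastive_feedback(fast_code, slow_code):
    """Summarize the line-level differences between the faster and
    slower program as a lightweight textual signal for the next
    generation round (standing in for the paper's profile summary)."""
    diff = difflib.unified_diff(
        slow_code.splitlines(), fast_code.splitlines(),
        fromfile="slower", tofile="faster", lineterm="")
    return "\n".join(diff)

# Toy candidates: (source code, measured runtime in seconds).
fast = "def total(xs):\n    return sum(xs)\n"
slow = "def total(xs):\n    acc = 0\n    for x in xs:\n        acc += x\n    return acc\n"
other = "print(42)"  # structurally unrelated, so never paired
candidates = [(slow, 0.05), (fast, 0.01), (other, 0.50)]

pair = select_contrastive_pair(candidates)
feedback = contrastive_feedback(*pair)
```

In this toy run, `other` has the largest runtime gap against `fast` but fails the similarity check, so the selected pair is (`fast`, `slow`); the resulting diff highlights the explicit accumulation loop that the faster version replaces with `sum`, which is exactly the kind of targeted signal an isolated runtime measurement would not provide.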
Problem

Research questions and friction points this paper is trying to address.

code efficiency
large language models
runtime performance
memory usage
inefficient code generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Relative Contrastive Feedback
EffiPair
code efficiency
inference-time refinement
LLM-generated code