Improving Code Generation via Small Language Model-as-a-Judge

📅 2026-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited performance of large language models in code generation for non-mainstream programming languages and the high cost of training high-performance alternatives. To this end, we fine-tune compact, state-of-the-art small language models as correctness discriminators that rank candidate programs using non-execution features, without requiring execution feedback. We present the first systematic evaluation of small models' classification accuracy in code correctness prediction and demonstrate that they can significantly improve code generation quality even in the absence of execution information. Our approach outperforms existing ranking methods such as the T5-based RankEF at a fraction of the computational cost, matching or exceeding the results of language models that are 5 to 25 times larger.

📝 Abstract
Large language models (LLMs) have shown remarkable capabilities in automated code generation. While effective for mainstream languages, they may underperform on less common or domain-specific languages, prompting companies to develop in-house code generators. While open-source models can be trained for this, only LLMs with tens of billions of parameters match the performance of commercial tools, demanding costly training and deployment. Recent work has proposed supporting code generation with small language models (SLMs) by generating multiple candidate solutions and using another SLM to select the most likely correct one. The most recent work in this area, by Sun et al. [29], presents RankEF, a T5 model trained to rank code solutions using both execution-based and non-execution-based information. However, Sun et al. do not assess the T5 ranker's classification accuracy, that is, how often it misjudges correct implementations as incorrect or vice versa, leaving open questions about the reliability of LMs as code correctness judges for other tasks (e.g., automated code review). Moreover, their experiments involve relatively old models, leaving it unclear to what extent such a methodology would still help companies cheaply train their own code generators with performance comparable to that of massive LLMs. We present a study addressing these limitations. We train several state-of-the-art SLMs as code correctness judges and assess their ability to discriminate between correct and incorrect implementations. We show that modern SLMs outperform RankEF, even without exploiting execution-based information. When used as code rankers, they achieve higher performance gains than RankEF and perform competitively with LLMs 5-25x larger, at a fraction of the cost.
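The core idea, sampling multiple candidate programs and letting a fine-tuned SLM classifier select the most likely correct one, can be sketched as follows. This is a minimal illustration, not the authors' released code: the checkpoint name `my-org/slm-code-judge` and the binary correct/incorrect head are assumptions, and the paper's actual models, prompts, and training setup are not reproduced here.

```python
# Minimal sketch of SLM-as-a-judge candidate ranking.
# Assumes a hypothetical fine-tuned checkpoint with a binary
# classification head (label 1 = "correct").
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "my-org/slm-code-judge"  # hypothetical fine-tuned SLM judge
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
judge = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
judge.eval()

def rank_candidates(task: str, candidates: list[str]) -> list[tuple[str, float]]:
    """Score each candidate solution with the judge and sort by
    predicted probability of correctness (no execution needed)."""
    scored = []
    for code in candidates:
        # Pair the task description with the candidate program as a
        # plain text-classification input (non-execution features only).
        inputs = tokenizer(task, code, truncation=True, return_tensors="pt")
        with torch.no_grad():
            logits = judge(**inputs).logits
        p_correct = torch.softmax(logits, dim=-1)[0, 1].item()
        scored.append((code, p_correct))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Usage: sample N candidates from any code generator, then keep the
# top-ranked one:
# best_code, score = rank_candidates(task_description, sampled_candidates)[0]
```

Because the judge only reads the task and the candidate code, ranking avoids the cost and infrastructure of running tests, which is what distinguishes this setup from execution-based rankers.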
Problem

Research questions and friction points this paper is trying to address.

code generation
small language models
code correctness
model evaluation
automated programming
Innovation

Methods, ideas, or system contributions that make the work stand out.

Small Language Models
Code Generation
Code Ranking
Model-as-a-Judge
Cost-Efficient AI