🤖 AI Summary
Large language models (LLMs) often generate syntactically invalid code for low-resource programming languages (LRPLs), and supervised fine-tuning is data-hungry and computationally prohibitive under resource constraints. Method: This paper proposes a lightweight program repair framework that combines a small language model (SLM) with reinforcement learning (RL). It introduces a joint reward mechanism integrating static syntax validation and a static semantic similarity metric, enabling end-to-end error correction without reliance on labeled supervision. Contribution/Results: Experiments across multiple domain-specific languages demonstrate >95% static validation pass rates. The approach significantly outperforms supervised fine-tuning of comparable SLMs and even surpasses supervised fine-tuning baselines using 7B-parameter LLMs, establishing an efficient, practical paradigm for reliable code generation in compute- and data-constrained settings.
📝 Abstract
Recent advancements in large language models (LLMs) have shown impressive capabilities in code generation across many programming languages. However, even state-of-the-art LLMs generate programs that contain syntactic errors and fail to complete the given tasks, especially for low-resource programming languages (LRPLs). In addition, high training costs make finetuning LLMs unaffordable under constrained computational resources, further limiting the effectiveness of LLMs for code generation. In this work, we propose SLMFix, a novel code generation pipeline that leverages a small language model (SLM), finetuned using reinforcement learning (RL) techniques, to fix syntactic errors in LLM-generated programs and thereby improve their quality for domain-specific languages (DSLs). Specifically, we applied RL to the SLM for the program repair task using a reward computed from both a static validator and a static semantic similarity metric. Our experimental results demonstrate the effectiveness and generalizability of our approach across multiple DSLs, achieving more than a 95% pass rate on the static validator. Notably, SLMFix brings substantial improvement to the base model and outperforms the supervised finetuning approach even for 7B models on an LRPL, showing the potential of our approach as an alternative to traditional finetuning approaches.
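The abstract describes a reward combining a static validator with a static semantic similarity metric. As a rough illustration only, the sketch below shows one way such a joint reward could be computed; the function name, the weighting scheme, and the use of `difflib` similarity as a stand-in for the paper's semantic similarity metric are all assumptions, not the authors' actual implementation.

```python
import difflib


def joint_reward(repaired_program: str, reference_program: str,
                 passes_validator: bool, alpha: float = 0.5) -> float:
    """Hypothetical joint reward for RL-based program repair.

    Combines a binary static-validation signal with a textual similarity
    score. Both the linear weighting (alpha) and difflib's sequence ratio
    are illustrative placeholders for the paper's static validator and
    semantic similarity metric.
    """
    # 1.0 if the repaired program passes the static validator, else 0.0
    syntax_score = 1.0 if passes_validator else 0.0
    # Crude similarity proxy: longest-matching-subsequence ratio in [0, 1]
    similarity = difflib.SequenceMatcher(
        None, repaired_program, reference_program).ratio()
    # Weighted combination of syntactic validity and semantic closeness
    return alpha * syntax_score + (1.0 - alpha) * similarity
```

Under this formulation, a repair that both validates and matches the reference receives the maximum reward of 1.0, while partial credit flows from either signal alone.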