Bridging Developer Instructions and Code Completion Through Instruction-Aware Fill-in-the-Middle Paradigm

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) for code excel at fill-in-the-middle (FIM) completion but struggle to leverage developer-provided natural-language instructions (e.g., comments) to resolve intent ambiguity: FIM pretraining lacks instruction data, while standard instruction tuning degrades FIM capability. To address this, we propose Instruction-aware Fill-in-the-Middle (IFIM), the first training paradigm explicitly designed to enhance FIM with natural-language instructions. IFIM reformulates the input as a triplet—(prefix, instruction, suffix)—enabling instruction-aware contextual fusion. Leveraging high-quality, GPT-4o-generated instruction data, we apply IFIM fine-tuning to DeepSeek-Coder and Qwen2.5-Coder. On HumanEval-infilling, our models achieve 93.6% Pass@1, up 9.0 points from the 84.6% baseline, while fully preserving original FIM performance. This demonstrates that instruction integration need not compromise core FIM capabilities—enabling more precise, intention-aligned code completion without sacrificing generality.

📝 Abstract
Large Language Models (LLMs) have significantly advanced code completion, yet they often fail when the developer's intent is underspecified in the code context. To address this, developers usually add natural language instructions (e.g., comments) to the code context to clarify their intent. However, existing code LLMs deployed in code completion systems undergo only fill-in-the-middle (FIM) pre-training, which struggles to leverage this information effectively due to the lack of instruction-like training data. Existing instruction-tuning techniques, which improve instruction-following in general code generation, paradoxically degrade FIM performance, forcing a trade-off between instruction-following and infilling capabilities. To address this gap, we introduce Instruction-aware Fill-in-the-Middle (IFIM), an instruction-tuning method specifically designed to enhance FIM code completion models. IFIM extends the conventional FIM training objective by incorporating an explicit instruction section into the input, enabling the model to learn from (prefix, instruction, suffix) triplets. This approach allows the model to effectively leverage developer-provided directives while preserving its core completion abilities when no instructions are present. To facilitate this, we constructed a large-scale dataset by using GPT-4o to generate concise, intent-focused instructions for code infilling examples. We evaluated IFIM by applying it to two popular base models, DeepSeek-Coder and Qwen2.5-Coder, on benchmarks derived from HumanEval-infilling and RepoMasterEval. The results demonstrate that IFIM significantly improves instruction-following capabilities, boosting the Pass@1 score from 84.6% to 93.6% on HumanEval-infilling. Moreover, this enhancement does not compromise the models' original performance on FIM code completion tasks when no instructions are provided.
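The abstract's (prefix, instruction, suffix) formulation can be sketched as a small prompt-construction routine. Note this is a minimal illustration: the sentinel token names and the comment-style placement of the instruction are assumptions, not the paper's actual format.

```python
# Hedged sketch of assembling an IFIM-style training example. The
# sentinel tokens below are assumed names (loosely following common FIM
# conventions); the paper's exact prompt template is not reproduced here.

FIM_BEGIN = "<|fim_begin|>"  # assumed sentinel, for illustration only
FIM_HOLE = "<|fim_hole|>"
FIM_END = "<|fim_end|>"

def build_ifim_prompt(prefix: str, suffix: str, instruction: str = "") -> str:
    """Build a FIM prompt; if a developer instruction is given, surface it
    as a comment just before the insertion point so the model conditions
    on the full (prefix, instruction, suffix) triplet."""
    if instruction:
        prefix = f"{prefix}# {instruction}\n"
    return f"{FIM_BEGIN}{prefix}{FIM_HOLE}{suffix}{FIM_END}"

# Without an instruction the input degrades to plain FIM, mirroring the
# paper's claim that completion ability is preserved when no instruction
# is present.
plain = build_ifim_prompt("def is_even(n):\n    ", "\n")
guided = build_ifim_prompt(
    "def is_even(n):\n    ",
    "\n",
    instruction="return True when n is divisible by 2",
)
```

When `instruction` is empty, the prompt is identical to a conventional FIM input, which is one way to read the paper's result that IFIM fine-tuning leaves no-instruction completion performance intact.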
Problem

Research questions and friction points this paper is trying to address.

Bridging developer instructions and code completion
Enhancing instruction-following without compromising infilling capabilities
Effectively leveraging natural language directives in code context
Innovation

Methods, ideas, or system contributions that make the work stand out.

Instruction-aware Fill-in-the-Middle training paradigm
Incorporates explicit instruction section into input
Uses a GPT-4o-generated dataset for instruction tuning
Zhensu Sun
PhD Student, Singapore Management University
Software Engineering · Deep Learning
Chengran Yang
Singapore Management University, Singapore
Chao Peng
ByteDance, China
Pengfei Gao
ByteDance, China
Xiaoning Du
Senior Lecturer (equivalent to U.S. Associate Professor), Monash University
Software Engineering · Artificial Intelligence · Cybersecurity · Runtime Verification
Li Li
Beihang University, China
David Lo
Singapore Management University, Singapore