Alignment with Fill-In-the-Middle for Enhancing Code Generation

📅 2025-08-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address code generation under constrained settings—limited training data, scarce test cases, and difficulty in precise correctness verification—this paper proposes a verifiability-oriented LLM alignment optimization framework. Methodologically, it introduces: (1) an AST-based fine-grained code block segmentation mechanism enabling Fill-in-the-Middle-style local generation; (2) a structured segmentation–informed curriculum learning strategy for Direct Preference Optimization (DPO), yielding high-quality preference data; and (3) a phased, progressive training paradigm to enhance model sensitivity to test-driven feedback. Evaluated on five major benchmarks—HumanEval+, MBPP+, APPS, LiveCodeBench, and BigCodeBench—the framework achieves significant improvements over strong baselines. All code and data are publicly released.
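The AST-based block segmentation described above can be sketched with Python's built-in `ast` module. This is an illustrative approximation rather than the paper's implementation; in particular, the choice of node types to split on (`FunctionDef`, `For`, `While`, `If`) is an assumption:

```python
import ast

def split_into_blocks(source: str):
    """Split a code snippet into fine-grained blocks via its AST.

    Each statement directly inside a function, loop, or conditional body
    becomes one candidate block, identified by its (start, end) line span.
    The set of container node types below is an illustrative assumption.
    """
    tree = ast.parse(source)
    blocks = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.For, ast.While, ast.If)):
            for stmt in node.body:
                blocks.append((stmt.lineno, stmt.end_lineno))
    return sorted(set(blocks))

code = """\
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
"""
print(split_into_blocks(code))  # [(2, 2), (3, 4), (4, 4), (5, 5)]
```

Each span can then be masked out to form a Fill-in-the-Middle training instance, so a single tested program yields several local-generation targets.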

📝 Abstract
The code generation capabilities of Large Language Models (LLMs) have advanced applications like tool invocation and problem-solving. However, improving performance in code-related tasks remains challenging due to limited training data that is verifiable with accurate test cases. While Direct Preference Optimization (DPO) has shown promise, existing methods for generating test cases still face limitations. In this paper, we propose a novel approach that splits code snippets into smaller, granular blocks, creating more diverse DPO pairs from the same test cases. Additionally, we introduce the Abstract Syntax Tree (AST) splitting and curriculum training method to enhance the DPO training. Our approach demonstrates significant improvements in code generation tasks, as validated by experiments on benchmark datasets such as HumanEval(+), MBPP(+), APPS, LiveCodeBench, and BigCodeBench. Code and data are available at https://github.com/SenseLLM/StructureCoder.
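The curriculum training mentioned in the abstract orders training examples from easy to hard. A minimal sketch, under the (hypothetical, not necessarily the paper's) assumption that the AST nesting depth of the masked block serves as the difficulty signal:

```python
def curriculum_order(pairs, depth_of):
    """Order DPO pairs from shallow to deep masked blocks.

    `depth_of` maps a pair to its difficulty score; using AST nesting
    depth as that score is an assumption made for illustration.
    """
    return sorted(pairs, key=depth_of)

# Toy pairs tagged with the nesting depth of their masked block.
pairs = [
    {"id": "deep", "depth": 3},
    {"id": "shallow", "depth": 1},
    {"id": "mid", "depth": 2},
]
ordered = curriculum_order(pairs, depth_of=lambda p: p["depth"])
print([p["id"] for p in ordered])  # ['shallow', 'mid', 'deep']
```

Training then proceeds through the sorted pairs in phases, matching the progressive paradigm described in the AI summary.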
Problem

Research questions and friction points this paper is trying to address.

Improving code generation performance with limited verifiable training data
Enhancing DPO training using AST splitting and curriculum methods
Increasing diversity of DPO pairs through granular code splitting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Granular code splitting for diverse DPO pairs
AST splitting method to enhance training
Curriculum training approach for improved performance
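The innovations above combine into FIM-style DPO pair construction: mask one AST block, sample completions for the gap, and label them chosen or rejected by test outcome. A minimal sketch with hypothetical helper names (`make_fim_example`, `make_dpo_pair`) and placeholder sentinel tokens, not the paper's exact format:

```python
def make_fim_example(source: str, span):
    """Build a Fill-in-the-Middle example by masking one AST block.

    `span` is a (start_line, end_line) pair, 1-indexed and inclusive,
    e.g. as produced by an AST-based splitter. The <PRE>/<SUF>/<MID>
    sentinels are placeholders for whatever FIM format the model uses.
    """
    lines = source.splitlines(keepends=True)
    start, end = span
    prefix = "".join(lines[:start - 1])
    middle = "".join(lines[start - 1:end])
    suffix = "".join(lines[end:])
    return f"<PRE>{prefix}<SUF>{suffix}<MID>", middle

def make_dpo_pair(prompt, completions, passes_tests):
    """Pick one passing and one failing completion as (chosen, rejected).

    `passes_tests` is assumed to reassemble the program with the
    completion and run it against the task's test cases.
    """
    chosen = next(c for c in completions if passes_tests(c))
    rejected = next(c for c in completions if not passes_tests(c))
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

code = """def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
"""
# Mask the loop (lines 3-4) and pair a correct fill against a broken one.
prompt, middle = make_fim_example(code, (3, 4))
pair = make_dpo_pair(prompt, [middle, "    pass\n"],
                     passes_tests=lambda c: c == middle)
```

Because every block span yields a distinct prompt, one program with one test suite produces several preference pairs, which is the source of the diversity claimed above.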
👥 Authors
Houxing Ren (Beihang University)
Zimu Lu (Ph.D. student, The Chinese University of Hong Kong)
Weikang Shi (CUHK MMLab)
Haotian Hou (Beihang University)
Yunqiao Yang (City University of Hong Kong)
Ke Wang (CUHK MMLab)
Aojun Zhou (The Chinese University of Hong Kong)
Junting Pan (CPII under InnoHK)
Mingjie Zhan (SenseTime Research)
Hongsheng Li (CUHK MMLab, CPII under InnoHK, Shanghai AI Laboratory)