🤖 AI Summary
This work addresses the tendency of large language models to produce over-edited patches in automated program repair, which inflates review and maintenance costs. The authors propose PAFT, a preservation-aware fine-tuning method that derives token-level preservation signals by aligning buggy and fixed code, and combines full-sequence masking with a curriculum ordered by edit difficulty to steer the model toward modifying only the faulty regions. The approach is the first to bring token-level preservation signals and curriculum learning to program repair, yielding smaller, more localized, and plausible fixes without search or post-processing at inference time. Evaluated on Defects4J and HumanEval-Java, PAFT improves pass@1 by up to 65.6% over standard supervised fine-tuning and reduces average edit distance by as much as 32.6%, outperforming strong baselines such as AdaPatcher.
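As a rough illustration of the kind of token-level preservation signal described above, the sketch below aligns buggy and fixed token sequences and marks tokens carried over unchanged as "preserve". The alignment method (difflib's matching-block matcher), the tokenization, and the helper name `preservation_mask` are assumptions for illustration, not the paper's implementation:

```python
import difflib

def preservation_mask(buggy_tokens: list[str], fixed_tokens: list[str]) -> list[int]:
    """Label each token of the fixed sequence: 1 = preserved from the buggy
    code (stable context), 0 = changed or newly written (edit region)."""
    mask = [0] * len(fixed_tokens)
    matcher = difflib.SequenceMatcher(a=buggy_tokens, b=fixed_tokens, autojunk=False)
    for tag, _, _, j1, j2 in matcher.get_opcodes():
        if tag == "equal":  # tokens copied unchanged from the buggy code
            for j in range(j1, j2):
                mask[j] = 1
    return mask

# Example: only the changed comparison operator is flagged as an edit.
buggy = ["if", "(", "x", ">", "0", ")"]
fixed = ["if", "(", "x", ">=", "0", ")"]
print(preservation_mask(buggy, fixed))  # [1, 1, 1, 0, 1, 1]
```

During fine-tuning, such a mask could weight the training loss so the model learns both to reproduce stable context and to concentrate edits on the flagged tokens, consistent with the summary's description of combining preservation signals with full-sequence masking.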
📝 Abstract
Large language models (LLMs) are effective for automated program repair, but plausible patches that pass the full test suite often rewrite more code than necessary, increasing review and maintenance costs. This over-editing is common because most bugs are localized, while standard supervised fine-tuning provides no explicit signal about which tokens should be preserved and which should be changed. We propose PAFT, a preservation-aware fine-tuning method for minimal-edit program repair. PAFT derives token-level preservation signals by aligning buggy and fixed code, combines them with full-sequence masking, and applies an edit-difficulty curriculum. Across Defects4J and HumanEval-Java, PAFT improves pass@1 by up to 65.6% over standard supervised fine-tuning (StdFT) while reducing average edit distance (AED) by up to 32.6%. On Defects4J with DeepSeek-Coder-6.7B, PAFT also outperforms AdaPatcher, a strong preference-based repair baseline, improving pass@1 from 5.9% to 10.1% while reducing median AED from 61.0 to 42.0. Overall, PAFT preserves stable context and concentrates edits on faulty regions, yielding smaller, more localized, plausible patches without inference-time search, reranking, or post-processing.
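The edit-difficulty curriculum is only named in the abstract, so the sketch below shows one plausible reading: score each buggy/fixed training pair by how much of the code changes, then present small, localized edits before large rewrites. The similarity-ratio difficulty measure and both function names are illustrative assumptions:

```python
import difflib

def edit_difficulty(buggy: str, fixed: str) -> float:
    """Proxy for edit difficulty: fraction of the pair that differs
    (0.0 = identical code, 1.0 = a complete rewrite)."""
    return 1.0 - difflib.SequenceMatcher(a=buggy, b=fixed, autojunk=False).ratio()

def curriculum_order(pairs: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Order (buggy, fixed) training pairs from small, localized edits to
    large rewrites, so easier minimal-edit examples are seen first."""
    return sorted(pairs, key=lambda p: edit_difficulty(*p))
```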