EditLord: Learning Code Transformation Rules for Code Editing

📅 2025-03-10
🤖 AI Summary
Existing code editing approaches implicitly model the task as an end-to-end mapping, overlooking its intrinsic discrete-step nature, which leads to limited performance, poor robustness, and weak generalization. This paper proposes the first rule-driven framework that explicitly decomposes code editing into interpretable, verifiable steps. Leveraging large language models, it automatically induces a concise, reusable set of meta-rules; editing is then decomposed into semantically grounded intermediate operations governed by these rules. The authors further design a meta-rule-guided fine-tuning strategy, prompting scheme, and iterative editing mechanism that jointly optimize functional preservation and precise attribute modification. Evaluated across diverse software engineering and security tasks, the method improves editing accuracy by 22.7%, robustness by 58.1%, and functional correctness by 20.2%. It additionally enables sample self-augmentation and cross-task rule transfer.
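The decomposition described above can be pictured as a small library of named, reusable transformation rules applied as discrete, logged steps. The sketch below is a hypothetical illustration of that idea: the `MetaRule` class, the regex-based rules, and the `edit` function are invented for this example and are not the paper's actual implementation, which uses LM-induced rules rather than hand-written regexes.

```python
import re
from dataclasses import dataclass

# Hypothetical meta-rule: a named, reusable transformation step.
@dataclass
class MetaRule:
    name: str
    pattern: str      # condition the rule fires on (regex here, for simplicity)
    replacement: str  # the semantically grounded rewrite

# A toy rule set in the spirit of an induced meta-rule library
# (these specific rules are illustrative, not from the paper).
RULES = [
    MetaRule("bound-check", r"buf\[(\w+)\]", r"buf[(\1) % BUF_LEN]"),
    MetaRule("const-qualify", r"\bchar \*msg\b", r"const char *msg"),
]

def edit(code: str, rules: list[MetaRule]) -> tuple[str, list[str]]:
    """Apply rules as discrete, verifiable steps, logging each one."""
    applied = []
    for rule in rules:
        new_code = re.sub(rule.pattern, rule.replacement, code)
        if new_code != code:
            applied.append(rule.name)  # interpretable editing trace
            code = new_code
    return code, applied

code, trace = edit("void log(char *msg) { buf[i] = *msg; }", RULES)
```

The point of the sketch is the editing trace: instead of one opaque end-to-end rewrite, each change is attributable to a named rule, which is what makes the steps interpretable and individually verifiable.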

๐Ÿ“ Abstract
Code editing is a foundational task in software development, where its effectiveness depends on whether it introduces desired code property changes without changing the original code's intended functionality. Existing approaches often formulate code editing as an implicit end-to-end task, overlooking the fact that code-editing procedures inherently consist of discrete and explicit steps. Thus, they suffer from suboptimal performance and a lack of robustness and generalization. We introduce EditLord, a code editing framework that makes the code transformation steps explicit. Our key insight is to employ a language model (LM) as an inductive learner to extract code editing rules from the training code pairs as concise meta-rule sets. Such rule sets are manifested for each training sample to augment it for fine-tuning, or to assist in prompting-based and iterative code editing. EditLord outperforms the state-of-the-art by an average of 22.7% in editing performance and 58.1% in robustness, while achieving 20.2% higher functional correctness, across critical software engineering and security applications, LM models, and editing modes.
Problem

Research questions and friction points this paper is trying to address.

Explicitly modeling discrete code transformation steps
Improving robustness and generalization in code editing
Learning concise meta-rule sets from code pairs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explicit code transformation steps framework
LM extracts concise meta-rule sets
Augments training samples for better performance
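The iterative editing mode mentioned in the summary might look roughly like the loop below. This is a hedged sketch, not the paper's implementation: `propose_step` and `passes_tests` are hypothetical stand-ins for the LM's rule-guided proposal step and the functional-preservation check, respectively.

```python
def iterative_edit(code, meta_rules, propose_step, passes_tests, max_steps=5):
    """Hypothetical sketch: apply LM-proposed, rule-governed edits one step
    at a time, keeping a step only if functionality is verifiably preserved."""
    for _ in range(max_steps):
        candidate, rule_used = propose_step(code, meta_rules)
        if candidate == code:        # proposer signals convergence
            break
        if passes_tests(candidate):  # functional-preservation gate
            code = candidate         # accept this verified step
        # a rejected candidate is simply discarded; the proposer retries
    return code

# Toy usage with a scripted proposer (illustrative only).
steps = iter([
    ("x = compute()  # cached", "memoize"),
    ("x = compute()  # cached", "done"),  # no change -> converged
])
result = iterative_edit("x = compute()", [],
                        lambda code, rules: next(steps),
                        lambda candidate: True)
```

The design choice worth noting is the gate between proposal and acceptance: because each step is small and rule-governed, a failed verification discards only one step rather than an entire end-to-end rewrite.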