🤖 AI Summary
Large language models (LLMs) often struggle to precisely execute user-specified editing intentions in instruction-driven text editing tasks, frequently over-editing unmodified regions and thereby compromising faithfulness and locality. To address this, we propose a lightweight, fine-grained editing framework with two components: (1) a hypernetwork-based dynamic adaptation mechanism that generates instruction-customized editing policies; and (2) span-level difference-aware regularization, which imposes precise supervision on modified spans to effectively suppress over-editing. The resulting model, HyperEdit, contains only 3 billion parameters, balancing efficiency and capability. It achieves a 9–30% relative improvement in modified-span BLEU over state-of-the-art methods, while maintaining high edit accuracy and minimal contextual interference. Our approach establishes a new paradigm for controllable, localized editing of code and documentation.
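The first component, hypernetwork-based dynamic adaptation, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: all dimensions, weight names (`H_a`, `H_b`, `W_base`), and the low-rank parameterization are illustrative assumptions. The idea shown is that a hypernetwork maps an instruction embedding to a request-specific weight update, so each edit instruction gets its own editing policy:

```python
import numpy as np

rng = np.random.default_rng(0)

D_INSTR, D_HID, RANK = 32, 64, 4  # illustrative sizes, not from the paper

# Hypernetwork weights (trained in a real system; random here) map the
# instruction embedding to the two factors of a low-rank weight delta.
H_a = rng.normal(size=(D_INSTR, D_HID * RANK)) * 0.01
H_b = rng.normal(size=(D_INSTR, RANK * D_HID)) * 0.01
W_base = rng.normal(size=(D_HID, D_HID)) * 0.01  # frozen base projection

def adapted_forward(instr_emb, x):
    """Generate an instruction-conditioned low-rank weight delta, then
    apply (base + delta) to the token hidden states x."""
    A = (instr_emb @ H_a).reshape(D_HID, RANK)
    B = (instr_emb @ H_b).reshape(RANK, D_HID)
    delta_W = A @ B  # request-specific update, rank <= RANK
    return x @ (W_base + delta_W).T

instr = rng.normal(size=D_INSTR)   # pooled embedding of the edit instruction
x = rng.normal(size=(5, D_HID))    # hidden states for 5 tokens
y = adapted_forward(instr, x)
print(y.shape)
```

Because the delta is generated at inference time from the instruction itself, no per-task fine-tuning is needed; two different instructions yield two different effective weight matrices.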
📝 Abstract
Instruction-based text editing is increasingly critical for real-world applications such as code editors (e.g., Cursor), but Large Language Models (LLMs) continue to struggle with this task. Unlike free-form generation, editing requires faithfully implementing user instructions while preserving unchanged content, as even minor unintended modifications can break functionality. Existing approaches treat editing as generic text generation, leading to two key failures: they struggle to faithfully align edits with diverse user intents, and they often over-edit unchanged regions. We propose HyperEdit to address both issues. First, we introduce hypernetwork-based dynamic adaptation that generates request-specific parameters, enabling the model to tailor its editing strategy to each instruction. Second, we develop difference-aware regularization that focuses supervision on modified spans, preventing over-editing while ensuring precise, minimal changes. HyperEdit achieves a 9–30% relative improvement in BLEU on modified regions over state-of-the-art baselines, despite using only 3B parameters.
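The second component, difference-aware regularization, can also be sketched briefly. The snippet below is a simplified assumption of how such a loss might look (the paper's exact weighting scheme is not specified here): diff the source against the target to find modified spans, then upweight the training loss on exactly those target tokens, so the model is penalized most where the edit actually happens and least on content it should leave alone:

```python
import numpy as np
from difflib import SequenceMatcher

def span_weights(src_tokens, tgt_tokens, w_changed=3.0, w_kept=1.0):
    """Per-target-token loss weights: tokens inside modified spans
    (found by diffing source vs. target) get the higher weight."""
    w = np.full(len(tgt_tokens), w_changed)
    sm = SequenceMatcher(a=src_tokens, b=tgt_tokens, autojunk=False)
    for block in sm.get_matching_blocks():
        # Tokens the edit leaves unchanged keep the baseline weight.
        w[block.b : block.b + block.size] = w_kept
    return w

def weighted_nll(token_nll, weights):
    # Weighted mean so the modified spans dominate the gradient.
    return float((token_nll * weights).sum() / weights.sum())

src = "def add(a, b): return a - b".split()
tgt = "def add(a, b): return a + b".split()
w = span_weights(src, tgt)
print(w)  # only the '-' -> '+' position gets the higher weight
```

In a full training loop, `token_nll` would be the per-token negative log-likelihood from the editing model; here the diff-based weighting is the point, since it is what suppresses over-editing of unchanged regions.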