🤖 AI Summary
Conventional wisdom holds that fine-tuning is unsuitable for editing large language models (LLMs). This work argues that its reported failure stems not from any intrinsic limitation of fine-tuning, but from the prevailing single-sample, depth-first sequential editing paradigm, which over-optimizes each edit and induces interference across edits.
Method: We propose LocFT-BF, a fine-tuning framework that replaces depth-first sequential updates with breadth-first (epoch-wise) mini-batch training (the two pipelines are sketched below), coupled with systematic localization of the tuned parameters to enable stable and efficient editing.
Contribution/Results: LocFT-BF is the first approach to successfully adapt standard supervised training to model editing. It achieves state-of-the-art performance on large-scale editing tasks (up to 100K edits), scales to 72B-parameter models, and significantly outperforms existing methods while preserving general model capabilities.
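The contrast between the two pipelines can be made concrete with a minimal sketch. The paper's actual model, edit data format, loss, and hyperparameters are not given here, so the toy model, dataset, and settings below are illustrative assumptions; only the loop structure matters: per-sample optimization to convergence (depth-first) versus epoch-wise mini-batch passes over all edits (breadth-first).

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins for the edited model and the edit set (assumptions, not the paper's setup).
model = nn.Linear(16, 4)
edits = TensorDataset(torch.randn(64, 16), torch.randint(0, 4, (64,)))
loss_fn = nn.CrossEntropyLoss()


def depth_first_editing(model, edits, steps_per_edit=50, lr=1e-3):
    """Prior paradigm: optimize each edit to convergence before moving to the next."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for x, y in DataLoader(edits, batch_size=1):      # one edit at a time
        for _ in range(steps_per_edit):                # repeated steps over-optimize this edit
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()


def breadth_first_editing(model, edits, epochs=5, batch_size=8, lr=1e-3):
    """Breadth-first pipeline as described in the summary: epoch-wise, mini-batch updates."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):                            # each epoch sweeps over all edits
        for x, y in DataLoader(edits, batch_size=batch_size, shuffle=True):
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()


breadth_first_editing(model, edits)
```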
📝 Abstract
Fine-tuning, a foundational method for adapting large language models, has long been considered ineffective for model editing. Here, we challenge this belief, arguing that the reported failure arises not from an inherent limitation of fine-tuning itself, but from the way it has been adapted to the sequential nature of the editing task: a single-pass, depth-first pipeline that optimizes each sample to convergence before moving on. While intuitive, this depth-first pipeline, coupled with sample-wise updating, over-optimizes each edit and induces interference across edits. Our controlled experiments reveal that simply restoring fine-tuning to the standard breadth-first (i.e., epoch-based) pipeline with mini-batch optimization substantially improves its effectiveness for model editing. Moreover, fine-tuning in editing also suffers from suboptimal tuning-parameter locations inherited from prior methods. Through a systematic analysis of tuning locations, we derive LocFT-BF, a simple and effective localized editing method built on the restored fine-tuning framework. Extensive experiments across diverse LLMs and datasets demonstrate that LocFT-BF outperforms state-of-the-art methods by large margins. Notably, to our knowledge, it is the first to sustain 100K edits and 72B-parameter models, 10× beyond prior practice, without sacrificing general capabilities. By clarifying a long-standing misconception and introducing a principled localized tuning strategy, we advance fine-tuning from an underestimated baseline to a leading method for model editing, establishing a solid foundation for future research.
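The abstract also identifies the choice of tuning location as a second factor. The sketch below shows, in a hypothetical form, what restricting updates to a localized parameter subset could look like; the helper name `localize_trainable_params` and the specific modules selected (MLP projections in mid-range layers of a small PyTorch Transformer stand-in) are assumptions for illustration, not the locations identified by the paper's analysis.

```python
import torch
from torch import nn


def localize_trainable_params(model: nn.Module, target_substrings):
    """Hypothetical helper: freeze everything except parameters whose names match the targets."""
    trainable = []
    for name, param in model.named_parameters():
        param.requires_grad = any(s in name for s in target_substrings)
        if param.requires_grad:
            trainable.append(param)
    return trainable


# Small Transformer stand-in; the real editing target would be an LLM.
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True),
    num_layers=8,
)

# Example localization: the feed-forward (linear1/linear2) weights of layers 4-7 only.
params = localize_trainable_params(
    model, target_substrings=[f"layers.{i}.linear" for i in (4, 5, 6, 7)]
)
optimizer = torch.optim.Adam(params, lr=1e-4)  # only the localized parameters receive updates
```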