🤖 AI Summary
Post-training knowledge updates in large language models (LLMs) are costly and prone to side effects such as knowledge forgetting and conflicts. To address this, the paper proposes a continual precise-editing framework that integrates progressive adaptive intervention, knowledge reintegration, dynamic memory management, and distribution-aware locality constraints, enabling safe, targeted parameter modifications via a closed-loop feedback mechanism. Its key innovation is ensuring stability and precision simultaneously under frequent editing: strong locality constraints suppress unintended interference, dynamic memory management curbs accumulated errors, and knowledge reintegration alleviates conflicts and forgetting. Evaluated across multiple mainstream LLM families, the framework improves editing accuracy by 10–30%, significantly reduces forgetting, and supports scalable, sustainable model evolution.
📝 Abstract
Post-training for large language models (LLMs) is constrained by the high cost of acquiring new knowledge or correcting errors, and by the unintended side effects that frequently arise from retraining. To address these issues, we introduce REPAIR (Robust Editing via Progressive Adaptive Intervention and Reintegration), a lifelong editing framework designed to support precise, low-cost model updates while preserving non-target knowledge. REPAIR mitigates the instability and conflicts of large-scale sequential edits through a closed-loop feedback mechanism coupled with dynamic memory management. By incorporating frequent knowledge fusion and enforcing strong locality guards, it also addresses the shortcomings of traditional distribution-agnostic approaches, which often overlook unintended ripple effects. Our experiments show that REPAIR boosts editing accuracy by 10–30% across multiple model families and significantly reduces knowledge forgetting. These results establish REPAIR as a robust framework for building reliable, scalable, and continually evolving LLMs.
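To make the closed-loop feedback and dynamic-memory ideas concrete, here is a minimal toy sketch. It is our own illustration, not the authors' implementation: `run_edit_loop`, the key/value edits, and the `verify` callback are all hypothetical stand-ins (REPAIR operates on model parameters with distribution-aware locality constraints, which a dict-based buffer can only gesture at). The sketch shows the control flow only: each edit is applied to a bounded memory, checked, rolled back on failure, and queued for later reintegration.

```python
from collections import deque

def run_edit_loop(edits, verify, memory_size=4):
    """Toy closed-loop editing loop (illustrative names, not the paper's code).

    edits:       iterable of (key, value) knowledge edits
    verify:      feedback check; returns True if the applied edit took effect
    memory_size: bound on the edit memory (oldest edits are evicted),
                 mimicking dynamic memory management that limits error buildup
    """
    memory = deque(maxlen=memory_size)  # bounded memory: evicts oldest edits
    retry_queue = []                    # edits held for knowledge reintegration

    for key, value in edits:
        memory.append((key, value))     # apply the edit
        if verify(key, value):          # closed-loop feedback check
            continue
        memory.pop()                    # roll back the failing edit
        retry_queue.append((key, value))  # defer it for reintegration

    return dict(memory), retry_queue
```

For example, with a `verify` that rejects negative values, `run_edit_loop([("a", 1), ("b", -2), ("c", 3)], lambda k, v: v >= 0)` keeps `a` and `c` in memory and queues `("b", -2)` for reintegration. The bounded `deque` stands in for the paper's dynamic memory management; a real system would verify edits by probing the edited model and its locality set rather than inspecting stored values.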