🤖 AI Summary
To address the challenge of efficiently updating outdated knowledge in large language models (LLMs) with low overhead, this paper proposes KEDAS, a context-aware knowledge editing framework. Methodologically, it combines low-rank adaptation (LoRA) with learning to apply in-context edited knowledge, employs a diverse edit augmentation technique to improve the recall of edits, and introduces a filter-based smart retriever that dynamically selects the inference route, balancing editing accuracy and inference efficiency. Its key contribution is jointly leveraging parameter-level alignment and retrieval-based dynamic routing within a single framework. Evaluated across 4 datasets, 3 model architectures, and 3 editing settings (36 cases in total), the method achieves the highest overall performance in 35 cases, surpassing its strongest knowledge editing alignment counterpart by about 19.8 points in the harmonic mean of edit success, locality, and portability. It further demonstrates strong generalization and low computational overhead.
📝 Abstract
Knowledge editing aims to modify outdated knowledge in large language models (LLMs) efficiently while retaining their powerful capabilities. Most existing methods rely on either parameter-level editing or retrieval-based approaches. In this work, we propose Knowledge Editing alignment with Diverse Augmentation and Self-adaptive inference (KEDAS) to better align LLMs with knowledge editing. In the alignment phase, LLMs learn to apply in-context edited knowledge via low-rank adaptation. During editing, we design a diverse edit augmentation technique to improve the recall of edits. After that, a self-adaptive post-alignment inference mechanism is proposed, in which a filter-based smart retriever performs dynamic selection of the inference route. Specifically, irrelevant queries go through the original pre-alignment model directly, while relevant ones, together with their related edits, go through the model with aligned adapters activated. In experiments, KEDAS secures the highest overall performance scores in 35 out of 36 cases across four datasets with three LLMs in three settings, surpassing its strong knowledge editing alignment counterpart by about 19.8 points in the harmonic mean of edit success, locality, and portability, and significantly outperforming both parameter-editing and retrieval-based baselines. Analyses of computational cost and performance on general tasks further validate the robustness and efficiency of KEDAS, indicating that it presents an ideal paradigm of knowledge editing alignment.
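The self-adaptive inference mechanism described above can be sketched as a simple routing function: a retriever scores stored edits against the incoming query, a filter thresholds those scores, and the query is dispatched either to the original model (no relevant edit) or to the adapter-activated model with the retrieved edits prepended in context. This is an illustrative sketch, not the paper's implementation; the names (`retrieve_edits`, `SCORE_THRESHOLD`, `route`), the word-overlap retriever, and the threshold value are all assumptions for demonstration.

```python
# Hypothetical sketch of KEDAS-style self-adaptive inference routing.
# The retriever, threshold, and model stubs are illustrative placeholders,
# not the actual components used in the paper.
from dataclasses import dataclass


@dataclass
class Edit:
    text: str
    score: float  # retriever relevance score for the current query


def retrieve_edits(query, edit_memory, top_k=3):
    """Toy retriever: score each stored edit by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = [
        Edit(e, len(q_words & set(e.lower().split())) / max(len(q_words), 1))
        for e in edit_memory
    ]
    scored.sort(key=lambda e: e.score, reverse=True)
    return scored[:top_k]


SCORE_THRESHOLD = 0.3  # filter: below this, the query is treated as unrelated to any edit


def route(query, edit_memory, base_model, adapter_model):
    """Dynamic route selection: irrelevant queries use the pre-alignment model;
    relevant queries, with their retrieved edits in context, use the aligned adapters."""
    hits = [e for e in retrieve_edits(query, edit_memory) if e.score >= SCORE_THRESHOLD]
    if not hits:
        return base_model(query)  # pre-alignment path: original model, untouched
    context = "\n".join(e.text for e in hits)
    return adapter_model(f"{context}\n{query}")  # post-alignment path: adapters active


# Toy stand-ins for the two inference paths.
base = lambda q: f"[base] {q}"
aligned = lambda q: f"[aligned] {q}"

print(route("capital of France", ["The capital of France is Lyon."], base, aligned))
print(route("weather today", ["The capital of France is Lyon."], base, aligned))
```

The design point the sketch illustrates is that only edit-relevant queries pay the cost of retrieval context and adapter activation, which is how the method keeps inference overhead low on unrelated traffic.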