KEDAS: Knowledge Editing Alignment with Diverse Augmentation and Self-adaptive Inference

📅 2025-08-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of efficiently updating outdated knowledge in large language models (LLMs) at low overhead, this paper proposes KEDAS, a context-aware knowledge editing framework. Methodologically, it combines low-rank adaptation (LoRA) with in-context edit learning, employs a diverse edit augmentation strategy to improve the recall of edits, and introduces a filter-based smart retriever that dynamically selects the inference route, balancing editing accuracy against inference efficiency. Its key contribution is jointly modeling parametric alignment and retrieval-based dynamic routing. Evaluated across 4 datasets, 3 model architectures, and 3 editing settings (36 cases in total), the method achieves the best overall performance in 35 of 36 cases, improving the harmonic mean of edit success, locality, and portability by about 19.8 points over the strongest prior alignment-based approach, while demonstrating strong generalization and low computational overhead.

📝 Abstract
Knowledge editing aims to modify outdated knowledge in large language models (LLMs) efficiently while retaining their powerful capabilities. Most existing methods rely on either parameter-level editing or retrieval-based approaches. In this work, we propose Knowledge Editing alignment with Diverse Augmentation and Self-adaptive inference (KEDAS) to better align LLMs with knowledge editing. In the alignment phase, LLMs learn to apply in-context edited knowledge via low-rank adaptation. During editing, we design a diverse edit augmentation technique to improve the recall of edits. After that, a self-adaptive post-alignment inference mechanism is proposed, in which a filter-based smart retriever is employed to perform a dynamic selection of inference routing. Specifically, irrelevant queries will go through the original pre-alignment model directly, while relevant ones, together with their related edits, go through the model with aligned adapters activated. In experiments, KEDAS secures the highest overall performance scores in 35 out of 36 cases across four datasets with three LLMs on three settings, surpassing its strong knowledge editing alignment counterpart by about 19.8 harmonic mean scores of edit success, locality and portability and outperforming both parameter editing and retrieval-based baselines significantly. Analysis of computational cost and performance on general tasks further validates the robustness and efficiency of KEDAS, indicating that it presents an ideal paradigm of knowledge editing alignment.
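The self-adaptive post-alignment inference described above can be sketched in a few lines. This is a minimal illustrative mock, not the paper's implementation: the word-overlap retriever, the threshold, and all function names are assumptions standing in for KEDAS's filter-based smart retriever and LoRA-aligned model.

```python
# Illustrative sketch of KEDAS-style self-adaptive inference routing.
# The retriever here is a toy word-overlap filter; the paper's actual
# filter-based smart retriever and aligned adapters are not reproduced.

def retrieve_edit(query, edits, threshold=0.3):
    """Toy filter-based retriever: return the best-matching edit if its
    Jaccard word-overlap score clears the threshold, else None."""
    q_words = set(query.lower().split())
    best, best_score = None, 0.0
    for edit in edits:
        e_words = set(edit.lower().split())
        score = len(q_words & e_words) / max(len(q_words | e_words), 1)
        if score > best_score:
            best, best_score = edit, score
    return best if best_score >= threshold else None

def self_adaptive_infer(query, edits, base_model, aligned_model):
    """Route irrelevant queries through the original pre-alignment model;
    route relevant ones, together with their retrieved edit, through the
    model with aligned adapters activated."""
    edit = retrieve_edit(query, edits)
    if edit is None:
        return base_model(query)                               # pre-alignment path
    return aligned_model(f"Edit: {edit}\nQuery: {query}")      # aligned path

# Stub models for demonstration only.
base_model = lambda prompt: f"[base] {prompt}"
aligned_model = lambda prompt: f"[aligned] {prompt}"

edits = ["The capital of France is Lyon"]
print(self_adaptive_infer("What is the capital of France", edits, base_model, aligned_model))
print(self_adaptive_infer("Who wrote Hamlet", edits, base_model, aligned_model))
```

The design point the sketch captures is that the retriever acts as a gate: queries unrelated to any stored edit never pay the cost of the edited path, which is how the method keeps locality high and inference overhead low.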
Problem

Research questions and friction points this paper is trying to address.

Efficiently modify outdated knowledge in large language models
Improve recall of edits using diverse augmentation techniques
Dynamic inference routing for relevant and irrelevant queries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-rank adaptation for in-context knowledge alignment
Diverse edit augmentation to enhance recall
Self-adaptive inference with dynamic routing selection
Chenming Tang
National Key Laboratory for Multimedia Information Processing, Peking University
Yutong Yang
Mercedes-Benz AG R&D & University of Stuttgart
Computer Vision · Autonomous Driving
Yunfang Wu
Peking University
NLP