Lifelong Knowledge Editing for LLMs with Retrieval-Augmented Continuous Prompt Learning

📅 2024-05-06
🏛️ Conference on Empirical Methods in Natural Language Processing
📈 Citations: 9
Influential: 1
🤖 AI Summary
To address catastrophic forgetting, performance degradation, and editing inefficiency in lifelong knowledge editing for large language models (LLMs), this paper proposes RECIPE—a retrieval-augmented continual prompt learning framework. Its core innovation is a dynamic gating Knowledge Sentinel (KS) mechanism that jointly optimizes the retriever and prompt encoder, integrating knowledge embedding compression with prefix injection. This design ensures editing locality, reliability, and cross-task generalization. Evaluated across multiple LLMs and diverse knowledge editing benchmarks, RECIPE achieves substantial gains in editing accuracy while preserving near-original performance on base tasks. Moreover, it significantly reduces both inference and editing latency. To our knowledge, RECIPE is the first method enabling efficient, stable, and end-to-end trainable continual knowledge updating—bridging critical gaps between edit efficacy, model fidelity, and computational efficiency in lifelong LLM adaptation.

📝 Abstract
Model editing aims to correct outdated or erroneous knowledge in large language models (LLMs) without the need for costly retraining. Lifelong model editing is the most challenging of these settings, catering to the continuous editing requirements of LLMs. Prior works primarily focus on single or batch editing; nevertheless, these methods fall short in lifelong editing scenarios due to catastrophic knowledge forgetting and the degradation of model performance. Although retrieval-based methods alleviate these issues, they are impeded by slow and cumbersome processes of integrating the retrieved knowledge into the model. In this work, we introduce RECIPE, a RetriEval-augmented ContInuous Prompt lEarning method, to boost editing efficacy and inference efficiency in lifelong learning. RECIPE first converts knowledge statements into short and informative continuous prompts, prefixed to the LLM’s input query embedding, to efficiently refine the response grounded on the knowledge. It further integrates the Knowledge Sentinel (KS), which acts as an intermediary to calculate a dynamic threshold, determining whether the retrieval repository contains relevant knowledge. The retriever and prompt encoder are jointly trained to achieve the editing properties of reliability, generality, and locality. In our experiments, RECIPE is assessed extensively across multiple LLMs and editing datasets, where it achieves superior editing performance. RECIPE also maintains the overall performance of LLMs while delivering fast editing and inference speeds.
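The abstract's central mechanism is prefixing a compressed continuous prompt to the query's token embeddings. The sketch below illustrates that data flow only; the function names, the random-projection "encoder", and all dimensions are hypothetical stand-ins (RECIPE's actual prompt encoder is a trained network).

```python
import numpy as np

def encode_knowledge(statement_emb: np.ndarray, prompt_len: int, d_model: int,
                     rng: np.random.Generator) -> np.ndarray:
    """Toy stand-in for a prompt encoder: compress one knowledge-statement
    embedding into `prompt_len` continuous prompt vectors of size d_model."""
    # A fixed random projection replaces the trained encoder for illustration.
    W = rng.standard_normal((statement_emb.shape[0], prompt_len * d_model))
    return (statement_emb @ W).reshape(prompt_len, d_model)

def prefix_prompt(query_embs: np.ndarray, prompt: np.ndarray) -> np.ndarray:
    """Prepend the continuous prompt to the query's token embeddings,
    so the LLM conditions its response on the edited knowledge."""
    return np.concatenate([prompt, query_embs], axis=0)
```

Because the prompt is short (a few vectors) rather than a full retrieved passage, the per-query overhead stays small, which is how the method keeps inference fast.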
Problem

Research questions and friction points this paper is trying to address.

Correct outdated or erroneous knowledge in LLMs without retraining.
Address catastrophic knowledge forgetting in lifelong editing scenarios.
Improve editing efficacy and inference efficiency using retrieval-augmented prompts.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieval-augmented continuous prompt learning
Knowledge Sentinel for dynamic relevance threshold
Joint training for reliability, generality, locality
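The Knowledge Sentinel's role can be sketched as a per-query dynamic threshold on retrieval similarity: retrieve a stored edit only if it is more similar to the query than the sentinel embedding is. This is an illustrative sketch, not the paper's implementation; the sentinel vector here is just a given input, whereas RECIPE learns it jointly with the retriever.

```python
import numpy as np

def retrieve_with_sentinel(query_vec, repo_vecs, sentinel_vec):
    """Return the index of the most similar stored edit, or None if no edit
    beats the query-sentinel similarity (the dynamic threshold)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    threshold = cos(query_vec, sentinel_vec)   # computed per query, not fixed
    sims = [cos(query_vec, v) for v in repo_vecs]
    best = int(np.argmax(sims))
    return best if sims[best] > threshold else None
```

Returning None for unrelated queries is what preserves locality: the LLM answers from its original parameters unless an edit is genuinely relevant.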