Resolving Editing-Unlearning Conflicts: A Knowledge Codebook Framework for Large Language Model Updating

📅 2025-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the task conflict between knowledge editing and unlearning in large language models (LLMs) during dynamic knowledge updating, this paper proposes LOKA, a knowledge-codebook-based framework. Methodologically, LOKA introduces a learnable knowledge codebook—the first of its kind—coupled with similarity-aware knowledge mapping and conflict-aware memory routing to decouple and coordinate editing and unlearning. It further incorporates a learning-based router that controls codebook activation and a memory architecture that separates task-specific from multi-task memories, improving knowledge storage density and retrieval efficiency. Empirically, LOKA achieves significant improvements in editing accuracy and unlearning completeness across multiple knowledge-update benchmarks, mitigating both sparsity and redundancy in knowledge storage. At inference time, it requires only lightweight memory injection, preserving computational efficiency.
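The paper's exact formulation of similarity-aware knowledge mapping is not reproduced on this page. As a rough illustration only, assuming knowledge pieces and codebook memories are both represented as embedding vectors (the names `assign_to_memories` and `memory_keys` are hypothetical, not from the paper), clustering related knowledge into the same memory might reduce to nearest-key assignment:

```python
import numpy as np

def assign_to_memories(knowledge_embs: np.ndarray, memory_keys: np.ndarray) -> np.ndarray:
    """Assign each knowledge embedding to its most similar codebook memory.

    knowledge_embs: (n, d) array of knowledge-piece embeddings.
    memory_keys:    (k, d) array of learnable memory key vectors.
    Returns an (n,) array of memory indices.
    """
    # Cosine similarity between every knowledge piece and every memory key.
    k_norm = knowledge_embs / np.linalg.norm(knowledge_embs, axis=1, keepdims=True)
    m_norm = memory_keys / np.linalg.norm(memory_keys, axis=1, keepdims=True)
    sims = k_norm @ m_norm.T            # (n, k) similarity matrix
    return sims.argmax(axis=1)          # nearest memory per knowledge piece

rng = np.random.default_rng(0)
embs = rng.normal(size=(6, 8))          # six toy knowledge pieces
keys = rng.normal(size=(3, 8))          # three toy codebook memories
ids = assign_to_memories(embs, keys)
print(ids.shape)  # (6,)
```

In this sketch, two near-duplicate knowledge pieces always land in the same memory, which is the clustering property the abstract attributes to the similarity-aware mapping; the real method presumably learns the keys jointly with the stored updates.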

📝 Abstract
Large Language Models (LLMs) excel in natural language processing by encoding extensive human knowledge, but their utility relies on timely updates as knowledge evolves. Updating LLMs involves two key tasks simultaneously: unlearning to remove unwanted knowledge and editing to incorporate new information. Existing methods face two major challenges: ineffective knowledge storage (either too sparse or too dense) and task conflicts between editing and unlearning, as validated through our theoretical and experimental results. To address these issues, we propose LOKA, a conflict-free framework for LLM updating based on a knowledge codebook. During training, updated knowledge is stored in multiple codebook memories. To optimize knowledge storage, a similarity-aware knowledge mapping ensures that related knowledge pieces are clustered and allocated to the same memory. Additionally, LOKA resolves task conflicts by employing task-specific and multi-task memories guided by a conflict score. In the inference stage, LOKA retrieves the most relevant memory from the codebook and plugs it into the original LLM to apply the updated knowledge. A learning-based router controls codebook activation to further improve knowledge utilization. Extensive experiments demonstrate the effectiveness of LOKA in LLM knowledge updating tasks.
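The abstract describes inference as retrieving the most relevant codebook memory and plugging it into the original LLM, with a learning-based router controlling whether the codebook activates at all. A minimal sketch of that retrieval-and-gating step, assuming query and memory-key embeddings and a simple threshold router (the function name, `gate_threshold`, and the threshold mechanism are illustrative assumptions, not the paper's actual router):

```python
import numpy as np

def retrieve_memory(query: np.ndarray, memory_keys: np.ndarray,
                    gate_threshold: float = 0.5):
    """Return the index of the most relevant codebook memory for a query,
    or None when the router decides no stored update applies."""
    q = query / np.linalg.norm(query)
    m = memory_keys / np.linalg.norm(memory_keys, axis=1, keepdims=True)
    sims = m @ q                      # similarity of the query to each memory key
    best = int(sims.argmax())
    # Router: activate the codebook only if the best match is strong enough;
    # otherwise fall back to the unmodified LLM.
    if sims[best] < gate_threshold:
        return None
    return best
```

The returned index would select which memory to inject into the model, which matches the abstract's claim that inference needs only a lightweight memory lookup and injection rather than any retraining.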
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Knowledge Update
Learning and Forgetting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge Consolidation
Memory Unit Clustering
Task-specific Memory Selection