Everything is Editable: Extend Knowledge Editing to Unstructured Data in Large Language Models

📅 2024-05-24
📈 Citations: 3
Influential: 1
🤖 AI Summary
Existing knowledge editing methods primarily target structured knowledge, making them inadequate for real-world scenarios involving abundant unstructured knowledge: long-form, noisy, and semantically complex text. To address this gap, the authors propose UnKE, a precise editing framework for unstructured knowledge in large language models (LLMs). UnKE introduces a non-local block-wise key-value store that increases representational capacity and incorporates attention-layer knowledge, together with a cause-driven optimization mechanism that edits the last token directly while preserving context, avoiding the need to locate individual terms. The authors further construct UnKEBench, a benchmark dedicated to unstructured knowledge editing. Extensive experiments show that UnKE significantly outperforms strong baselines, including MEMIT, on UnKEBench and standard structured datasets, and exhibits robust batch and sequential editing capabilities.

📝 Abstract
Recent knowledge editing methods have primarily focused on modifying structured knowledge in large language models. However, this task setting overlooks the fact that a significant portion of real-world knowledge is stored in an unstructured format, characterized by long-form content, noise, and a complex yet comprehensive nature. Techniques like "local layer key-value storage" and "term-driven optimization", as used in previous methods like MEMIT, are not effective for handling unstructured knowledge. To address these challenges, we propose a novel Unstructured Knowledge Editing method, namely UnKE, which extends previous assumptions in the layer dimension and token dimension. Firstly, in the layer dimension, we propose non-local block key-value storage to replace local layer key-value storage, increasing the representation ability of key-value pairs and incorporating attention layer knowledge. Secondly, in the token dimension, we replace "term-driven optimization" with "cause-driven optimization", which edits the last token directly while preserving context, avoiding the need to locate terms and preventing the loss of context information. Results on the newly proposed unstructured knowledge editing dataset (UnKEBench) and traditional structured datasets demonstrate that UnKE achieves remarkable performance, surpassing strong baselines. In addition, UnKE has robust batch editing and sequential editing capabilities.
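The key-value editing view that UnKE generalizes can be illustrated with a toy, MEMIT-style closed-form update: treat a weight matrix as an associative memory mapping keys (hidden representations of edit prompts) to values (hidden states that produce the edited text), then solve a least-squares problem so the edited weights map each key to its target value. This is a minimal numpy sketch under illustrative assumptions (random toy tensors, a single linear map standing in for the block); it is not the paper's implementation, where keys come from a multi-layer block that includes attention rather than a single local layer.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_edits = 8, 6, 3  # toy dimensions (assumed, for illustration)

# A weight matrix viewed as an associative memory: value = W @ key.
W = rng.normal(size=(d_out, d_in))

# Keys: block-input representations of the edit prompts' last tokens.
K = rng.normal(size=(d_in, n_edits))
# Target values: hidden states that would make the model emit the edited text.
V_target = rng.normal(size=(d_out, n_edits))

# Least-squares update so edited weights map each key to its target value:
#   minimize ||(W + dW) @ K - V_target||^2  ->  dW = (V_target - W @ K) @ pinv(K)
dW = (V_target - W @ K) @ np.linalg.pinv(K)
W_edited = W + dW

# With fewer edits than d_in, the fit is exact for the edited keys.
assert np.allclose(W_edited @ K, V_target, atol=1e-6)
```

Because `n_edits < d_in`, the pseudoinverse yields an exact fit on the edited keys while `dW` stays low-rank, which is why such updates perturb unrelated knowledge relatively little.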
Problem

Research questions and friction points this paper is trying to address.

Existing knowledge editing targets structured knowledge and handles long-form, noisy, unstructured knowledge poorly
Local layer key-value storage has limited representation ability and ignores attention-layer knowledge
Term-driven optimization requires locating terms and loses context information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Non-local block key-value storage replacing local layer key-value storage
Cause-driven optimization replacing term-driven optimization
UnKE, an Unstructured Knowledge Editing method, evaluated on the new UnKEBench dataset