🤖 AI Summary
Large language models (LLMs) suffer from hallucinations and safety risks due to static training data, while existing unstructured knowledge editing methods—particularly window-based autoregressive approaches—often disrupt causal dependencies between early memory updates and subsequent token generation. To address this, we propose a retraining-free, fine-grained knowledge editing framework. Our method introduces a novel Matryoshka-style hierarchical memory update objective that explicitly models multi-granularity causal consistency across token positions. Additionally, we design an adaptive loss coefficient mechanism that dynamically modulates the strength of memory correction at each position. Leveraging theoretical analysis, differentiable memory editing, and multi-objective co-optimization, our approach achieves up to a 12.33% improvement in editing accuracy on two mainstream LLMs across four standard benchmarks. It demonstrates strong robustness to diverse input formats and preserves generative coherence without compromising model integrity.
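The summary describes two ingredients: a Matryoshka-style objective that enforces consistency over nested prefixes of token positions, and adaptive coefficients that modulate correction strength per position. As a rough illustration only (not the paper's actual formulation; the weighting function and averaging are invented here), a nested-prefix loss with adaptive weights might look like:

```python
import math

def matryoshka_objective(position_losses, alpha=1.0):
    """Illustrative sketch of a Matryoshka-style memory update objective.

    `position_losses[t]` is assumed to be the memory-correction loss at
    token position t. The objective averages losses over nested prefixes
    [0..k), so early memory updates are scored against every later
    generation step, not just their own position.
    """
    total = 0.0
    n = len(position_losses)
    for k in range(1, n + 1):
        # Loss over the nested prefix of the first k positions.
        prefix_loss = sum(position_losses[:k]) / k
        # Hypothetical adaptive coefficient: positions whose prefix loss
        # is already small receive a weaker correction signal.
        coeff = 1.0 - math.exp(-alpha * prefix_loss)
        total += coeff * prefix_loss
    return total / n
```

A perfectly corrected memory (all per-position losses zero) yields an objective of zero, while larger residual losses both raise the prefix loss and strengthen its coefficient.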
📝 Abstract
Large language models (LLMs) have emerged as powerful knowledge bases yet are limited by static training data, leading to issues such as hallucinations and safety risks. Editing a model's internal knowledge through the locate-and-edit paradigm has proven a cost-effective alternative to retraining, though current unstructured approaches, especially window-based autoregressive methods, often disrupt the causal dependency between early memory updates and later output tokens. In this work, we first theoretically analyze these limitations and then introduce Matryoshka Unstructured Knowledge Editing (μKE), a novel memory update mechanism that preserves such dependencies via a Matryoshka-style objective and adaptive loss coefficients. Empirical evaluations on two models across four benchmarks demonstrate that μKE improves edit efficacy by up to 12.33% over state-of-the-art methods and remains robust when applied to diversely formatted edits, underscoring its potential for effective unstructured knowledge editing in LLMs.