🤖 AI Summary
To address the susceptibility of small-parameter language models to interference and poor generalization in user-personalized commonsense knowledge editing, this paper introduces CaseEdit, the first benchmark tailored to localized commonsense editing, covering both typical and atypical household scenarios. Built on the ATOMIC20/20 knowledge graph, CaseEdit uses multi-stage reasoning to generate contextual edits and pairs them with a four-dimensional evaluation framework assessing reliability, generalization, locality, and portability. Among the established editing methods evaluated, AlphaEdit, which imposes a null-space projection constraint during editing to suppress unwanted knowledge perturbations, performs best: experiments on LLaMA 3.2 3B show that it substantially improves editing quality while reducing the ripple effect by 62%, achieving stable internalization of context-sensitive commonsense knowledge in a lightweight model.
📝 Abstract
Large language models (LLMs) exhibit strong performance on factual recall and general reasoning but struggle to adapt to user-specific commonsense knowledge, a challenge that is particularly acute in small-parameter settings where computational efficiency is prioritized. To address this, we introduce CaseEdit, a new dataset and generation pipeline for evaluating localized, personalized commonsense knowledge editing in small LLMs. Built upon the ATOMIC20/20 commonsense graph, CaseEdit uses a multi-stage inference process to generate both typical and atypical contextual edits for household objects, paired with targeted evaluation questions across four axes: reliability, generalization, locality, and portability. We evaluate established knowledge editing methods on CaseEdit and demonstrate that AlphaEdit, a technique that employs null-space projection to minimize interference with unrelated knowledge, consistently outperforms other methods on a LLaMA 3.2 3B model, exhibiting minimal ripple effects even in scalability tests. Our results indicate that pairing CaseEdit with effective editing techniques such as AlphaEdit allows small models to internalize high-quality, context-sensitive commonsense knowledge, paving the way for lightweight, personalized assistants.
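The null-space projection idea behind AlphaEdit can be illustrated with a minimal numerical sketch: a weight update from an edit is multiplied by a projector onto the null space of the key activations of knowledge that should be preserved, so the update cannot change the model's outputs on those keys. This is a toy illustration under assumed shapes and random data, not the paper's implementation; the function name `null_space_projector` and all dimensions here are hypothetical.

```python
import numpy as np

def null_space_projector(K, eps=1e-8):
    # K: (d, n) matrix whose columns are key activations of knowledge
    # we want to leave untouched. The SVD yields an orthonormal basis of
    # R^d; left-singular vectors with negligible singular values span the
    # (approximate) null space of K^T.
    U, S, _ = np.linalg.svd(K, full_matrices=True)
    rank = int(np.sum(S > eps))
    U_null = U[:, rank:]          # orthonormal basis of the null space
    return U_null @ U_null.T      # projector P with P @ K ~ 0

rng = np.random.default_rng(0)
d, n = 16, 5
K = rng.standard_normal((d, n))          # toy preserved-knowledge keys
P = null_space_projector(K)

delta_W = rng.standard_normal((d, d))    # raw weight update from an edit
delta_W_proj = delta_W @ P               # null-space-constrained update

# The constrained update leaves the preserved keys' outputs unchanged:
print(np.allclose(delta_W_proj @ K, 0.0, atol=1e-6))  # True
```

In this sketch the constrained update provably cannot perturb the mapped outputs of the preserved keys, which is the mechanism the abstract credits for AlphaEdit's minimal ripple effects.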