AI Summary
To address representation instability and catastrophic forgetting in continual knowledge editing of large language models (LLMs), this paper proposes SPHERE. First, it formally defines Hyperspherical Energy (HE) to quantify representation uniformity and, via HE's dynamic evolution, theoretically establishes a lower bound on the degradation of pretrained knowledge. Second, it injects new knowledge into complementary subspaces through sparse projection, thereby isolating editing interference from existing representations. Third, it integrates HE regularization with principal direction analysis to ensure stable model updates. Evaluated on LLaMA3-8B and Qwen2.5-7B, SPHERE achieves an average 16.41% improvement in editing accuracy over state-of-the-art methods while inducing minimal degradation in original task performance. Theoretical analysis and empirical results jointly validate SPHERE's effectiveness in balancing knowledge update fidelity and pre-existing capability preservation.
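The summary above does not spell out the HE formula. One common instantiation from the hyperspherical-energy literature is the Riesz s-energy of the normalized neuron weight vectors; the sketch below assumes that form with s = 2 and row-wise neurons, which may differ in detail from the paper's exact definition.

```python
import numpy as np

def hyperspherical_energy(W, s=2):
    """Riesz s-energy of the rows of W after projection onto the unit hypersphere.

    W : (n, d) array, one neuron weight vector per row.
    Lower energy means the normalized neurons are spread more uniformly;
    energy rises sharply when neurons cluster (small pairwise distances).
    """
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)   # unit-norm rows
    diffs = Wn[:, None, :] - Wn[None, :, :]             # all pairwise differences
    D = np.linalg.norm(diffs, axis=-1)                  # pairwise Euclidean distances
    i, j = np.triu_indices(W.shape[0], k=1)             # count each pair once
    return float(np.sum(D[i, j] ** (-s)))

# Mutually orthogonal neurons are well spread: three pairs, each sqrt(2) apart,
# giving an energy of 3 * (sqrt(2))^(-2) = 1.5.
print(hyperspherical_energy(np.eye(3)))  # → 1.5
```

Tracking this scalar across sequential edits is what lets one correlate HE fluctuations with editing failures, as the summary describes.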
Abstract
Large language models (LLMs) require constant updates to remain aligned with evolving real-world knowledge. Model editing offers a lightweight alternative to retraining, but sequential editing often destabilizes representations and induces catastrophic forgetting. In this work, we seek to better understand and mitigate performance degradation caused by sequential editing. We hypothesize that hyperspherical uniformity, a property that maintains a uniform distribution of neuron weights on a hypersphere, helps the model remain stable and retain prior knowledge while still accommodating new updates. We use Hyperspherical Energy (HE) to quantify neuron uniformity during editing, and examine its correlation with editing performance. Empirical studies across widely used editing methods reveal a strong correlation between HE dynamics and editing performance, with editing failures consistently coinciding with high HE fluctuations. We further theoretically prove that HE dynamics impose a lower bound on the degradation of pretrained knowledge, highlighting why HE stability is crucial for knowledge retention. Motivated by these insights, we propose SPHERE (Sparse Projection for Hyperspherical Energy-Regularized Editing), an HE-driven regularization strategy that stabilizes neuron weight distributions, ultimately preserving prior knowledge while enabling reliable sequential updates. Specifically, SPHERE identifies a sparse space complementary to the principal hyperspherical directions of the pretrained weight matrices and projects new knowledge onto it, attenuating perturbations on the principal directions. Extensive experiments on LLaMA3 (8B) and Qwen2.5 (7B) show that SPHERE outperforms the best baseline in editing capability by an average of 16.41%, while most faithfully preserving general model performance, thereby offering a principled path toward reliable large-scale knowledge editing.
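The core mechanism, projecting an update into a subspace complementary to the principal directions of the pretrained weights, can be sketched with a plain SVD-based projector. This is a minimal illustration under assumed names (`project_out_principal`, rank cutoff `k`); SPHERE's actual procedure additionally enforces sparsity and HE regularization, which this sketch omits.

```python
import numpy as np

def project_out_principal(delta, W0, k):
    """Remove from an editing update `delta` any component lying along the
    top-k principal (left-singular) directions of the pretrained matrix W0,
    so the applied update lands in the complementary subspace.

    The function name and rank cutoff k are illustrative, not the paper's API.
    """
    U, _, _ = np.linalg.svd(W0, full_matrices=False)  # columns of U: principal directions
    Uk = U[:, :k]
    return delta - Uk @ (Uk.T @ delta)                # (I - Uk Uk^T) @ delta

rng = np.random.default_rng(0)
W0 = rng.standard_normal((8, 6))      # stand-in for a pretrained weight matrix
delta = rng.standard_normal((8, 6))   # stand-in for a raw knowledge update
safe_delta = project_out_principal(delta, W0, k=2)
```

Because the projected update has no component along the top-k directions, perturbations to the dominant pretrained structure are attenuated by construction, which is the intuition behind the method's knowledge-preservation guarantee.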