Energy-Regularized Sequential Model Editing on Hyperspheres

πŸ“… 2025-10-01
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address representation instability and catastrophic forgetting in continual knowledge editing of large language models (LLMs), this paper proposes SPHERE. First, it formally defines Hyperspherical Energy (HE) to quantify representation uniformity and theoretically establishes a lower bound on prior-knowledge degradation via HE's dynamic evolution. Second, it injects new knowledge into complementary subspaces through sparse projection, thereby isolating editing interference from existing representations. Third, it integrates HE regularization with principal-direction analysis to ensure stable model updates. Evaluated on LLaMA3-8B and Qwen2.5-7B, SPHERE achieves an average 16.41% improvement in editing accuracy over state-of-the-art methods while inducing minimal degradation in original task performance. Theoretical analysis and empirical results jointly validate SPHERE's effectiveness in balancing knowledge-update fidelity and preservation of pre-existing capabilities.
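The Hyperspherical Energy the summary refers to is, in the standard formulation from the minimum-hyperspherical-energy literature, a pairwise Riesz-kernel potential over unit-normalized neuron weight vectors; the paper's formal definition may differ in detail. A minimal sketch (the function name and the power `s=2` are assumptions, not the paper's exact parameterization):

```python
import numpy as np

def hyperspherical_energy(W, s=2, eps=1e-8):
    """Riesz-kernel hyperspherical energy of the rows of W.

    Lower energy means the normalized neuron weight vectors are
    spread more uniformly over the unit hypersphere; energy spikes
    when neurons collapse toward the same direction.
    """
    # Project each neuron's weight vector onto the unit hypersphere.
    V = W / (np.linalg.norm(W, axis=1, keepdims=True) + eps)
    # Pairwise Euclidean distances between distinct normalized neurons.
    dist = np.linalg.norm(V[:, None, :] - V[None, :, :], axis=-1)
    iu = np.triu_indices(len(V), k=1)
    # Energy grows as pairwise distances shrink (neurons cluster).
    return float(np.sum(dist[iu] ** (-s)))

rng = np.random.default_rng(0)
# Near-collinear rows (shared mean direction plus small noise) vs. spread rows.
clustered = rng.normal(size=(8, 16)) * 0.01 + rng.normal(size=16)
spread = rng.normal(size=(8, 16))
assert hyperspherical_energy(clustered) > hyperspherical_energy(spread)
```

Under this definition, the paper's observation that editing failures coincide with high HE fluctuations corresponds to the normalized neuron directions clustering and dispersing abruptly across sequential edits.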

πŸ“ Abstract
Large language models (LLMs) require constant updates to remain aligned with evolving real-world knowledge. Model editing offers a lightweight alternative to retraining, but sequential editing often destabilizes representations and induces catastrophic forgetting. In this work, we seek to better understand and mitigate the performance degradation caused by sequential editing. We hypothesize that hyperspherical uniformity, a property that maintains a uniform distribution of neuron weights on a hypersphere, helps the model remain stable and retain prior knowledge while still accommodating new updates. We use Hyperspherical Energy (HE) to quantify neuron uniformity during editing and examine its correlation with editing performance. Empirical studies across widely used editing methods reveal a strong correlation between HE dynamics and editing performance, with editing failures consistently coinciding with high HE fluctuations. We further theoretically prove that HE dynamics impose a lower bound on the degradation of pretrained knowledge, highlighting why HE stability is crucial for knowledge retention. Motivated by these insights, we propose SPHERE (Sparse Projection for Hyperspherical Energy-Regularized Editing), an HE-driven regularization strategy that stabilizes neuron weight distributions, ultimately preserving prior knowledge while enabling reliable sequential updates. Specifically, SPHERE identifies a sparse space complementary to the principal hyperspherical directions of the pretrained weight matrices and projects new knowledge onto it, attenuating perturbations on the principal directions. Extensive experiments on LLaMA3 (8B) and Qwen2.5 (7B) show that SPHERE outperforms the best baseline in editing capability by an average of 16.41% while most faithfully preserving general model performance, thereby offering a principled path toward reliable large-scale knowledge editing.
Problem

Research questions and friction points this paper is trying to address.

Sequential model editing causes catastrophic forgetting in LLMs
Neuron weight instability leads to knowledge degradation during updates
Existing editing methods fail to preserve prior knowledge reliably
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hyperspherical Energy quantifies neuron uniformity during editing
SPHERE projects new knowledge onto sparse complementary space
Regularization stabilizes neuron weights to preserve prior knowledge
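The projection idea in the bullets above can be sketched as projecting an edit update onto the orthogonal complement of the pretrained weight matrix's top principal (right-singular) directions, so the update cannot perturb those directions. SPHERE additionally makes the complementary space sparse, which this sketch omits; the function name, the rank cutoff `k`, and the use of SVD here are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def project_to_complement(delta_W, W0, k=4):
    """Project an edit update delta_W onto the subspace orthogonal to
    the top-k principal (right-singular) directions of the pretrained W0."""
    # Principal directions of the pretrained weight matrix.
    _, _, Vt = np.linalg.svd(W0, full_matrices=False)
    Vk = Vt[:k].T                        # (d, k) top-k principal directions
    P = np.eye(W0.shape[1]) - Vk @ Vk.T  # projector onto their complement
    return delta_W @ P

rng = np.random.default_rng(1)
W0 = rng.normal(size=(32, 16))      # stand-in for a pretrained weight matrix
delta = rng.normal(size=(32, 16))   # stand-in for a raw knowledge-edit update
safe = project_to_complement(delta, W0, k=4)

# The projected update has no component along the principal directions,
# so applying W0 + safe leaves those directions unperturbed.
_, _, Vt = np.linalg.svd(W0, full_matrices=False)
assert np.allclose(safe @ Vt[:4].T, 0.0, atol=1e-8)
```

The design intuition matches the paper's claim: by confining updates to the complement of the principal hyperspherical directions, new knowledge is absorbed without disturbing the directions that encode pretrained capabilities.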
Qingyuan Liu
Columbia University
Jia-Chen Gu
University of California, Los Angeles
Natural Language Processing, Machine Learning
Yunzhi Yao
Zhejiang University
Knowledge Mechanism, Knowledge Edit
Hong Wang
University of Science and Technology of China
Nanyun Peng
University of California, Los Angeles