🤖 AI Summary
This work addresses the degradation of general capabilities in large language models during editing, often caused by proxy optimization objectives. The authors formulate model editing as a constrained optimization problem and, for the first time, explicitly introduce a capability-preserving constraint. By leveraging the Bregman divergence to derive a Gauss-Newton Hessian approximation, they construct a matrix-free Kronecker-structured projector that restricts parameter updates to low-curvature subspaces of the capability loss landscape. Integrating second-order optimization with a K-FAC approximation, the proposed method achieves high editing success rates on standard benchmarks while limiting average capability degradation to less than 1%, substantially outperforming existing approaches.
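The "matrix-free Kronecker-structured projector" described above can be illustrated with a standard K-FAC identity: if the capability-loss Hessian for a weight matrix is approximated as a Kronecker product `A ⊗ G` of an input-activation covariance `A` and an output-gradient covariance `G`, then its eigen-directions are outer products of the factors' eigenvectors, and a projection onto low-curvature directions never requires forming the huge `A ⊗ G` matrix. The sketch below is illustrative only; the function name, threshold parameter, and masking rule are assumptions, not the paper's actual implementation.

```python
import numpy as np

def kfac_lowcurv_project(dW, A, G, tau=1e-3):
    """Project an edit update dW onto the low-curvature subspace of a
    Kronecker-factored capability Hessian H ~ A (x) G (hypothetical sketch).

    dW  : (d_out, d_in) proposed weight update for one layer
    A   : (d_in, d_in)  input-activation covariance factor
    G   : (d_out, d_out) output-gradient covariance factor
    tau : curvature threshold; eigen-directions whose curvature
          lam_G[i] * lam_A[j] exceeds tau are removed from the update
    """
    lam_A, U_A = np.linalg.eigh(A)   # eigenbasis of the input factor
    lam_G, U_G = np.linalg.eigh(G)   # eigenbasis of the output factor
    # Curvature of the Kronecker eigen-direction (i, j) is lam_G[i]*lam_A[j];
    # we work in the factored eigenbases and never build A (x) G explicitly.
    C = np.outer(lam_G, lam_A)       # (d_out, d_in) grid of curvatures
    M = U_G.T @ dW @ U_A             # coordinates of dW in the eigenbasis
    M[C > tau] = 0.0                 # drop high-curvature components
    return U_G @ M @ U_A.T           # map back: the projected update
```

Because the projection acts independently on each factored eigen-coordinate, it is idempotent, and with a sufficiently large threshold it leaves the update untouched.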
📝 Abstract
A central challenge in large language model (LLM) editing is capability preservation: methods that successfully change targeted behavior can quietly game the editing proxy and corrupt general capabilities, producing degenerate behaviors reminiscent of proxy/reward hacking. We present CrispEdit, a scalable and principled second-order editing algorithm that treats capability preservation as an explicit constraint, unifying and generalizing several existing editing approaches. CrispEdit formulates editing as constrained optimization and enforces the constraint by projecting edit updates onto the low-curvature subspace of the capability-loss landscape. At the crux of CrispEdit is expressing the capability constraint via a Bregman divergence, whose quadratic expansion yields the Gauss-Newton Hessian exactly, even when the base model is not trained to convergence. We make this second-order procedure efficient at LLM scale using Kronecker-factored approximate curvature (K-FAC) and a novel matrix-free projector that exploits the Kronecker structure to avoid constructing massive projection matrices. Across standard model-editing benchmarks, CrispEdit achieves high edit success while keeping capability degradation below 1% on average across datasets, significantly improving over prior editors.
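The abstract's claim that the Bregman divergence "yields the Gauss-Newton Hessian exactly, even when the base model is not trained to convergence" follows from a standard derivation, sketched here with generic symbols (not the paper's notation). For a loss $\ell$ convex in the network outputs $z_\theta = f_\theta(x)$, the Bregman divergence from the base model's outputs is

$$
D_\ell\big(z_\theta, z_{\theta_0}\big) \;=\; \ell(z_\theta) - \ell(z_{\theta_0}) - \nabla\ell(z_{\theta_0})^\top \big(z_\theta - z_{\theta_0}\big).
$$

Because the first-order term in $z$ is subtracted off by construction, a second-order Taylor expansion in $\theta$ around $\theta_0$ leaves only the curvature term

$$
D_\ell \;\approx\; \tfrac{1}{2}\,(\theta - \theta_0)^\top J^\top \,\nabla^2\ell(z_{\theta_0})\, J\,(\theta - \theta_0),
\qquad J = \left.\frac{\partial f_\theta(x)}{\partial \theta}\right|_{\theta_0},
$$

which is precisely the Gauss-Newton Hessian $J^\top \nabla^2\ell\, J$. No stationarity condition $\nabla\ell(z_{\theta_0}) = 0$ is needed, which is why the construction does not require the base model to be trained to convergence.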