🤖 AI Summary
State-of-the-art knowledge editing methods (e.g., MEMIT, ROME) rely on massive precomputation, up to 44 million hidden vectors, resulting in prohibitive one-time costs of 36–40 hours on a single GPU for models like GPT-J (6B) and Llama2-7B and severely hindering practical deployment. Method: We establish, for the first time, a theoretical lower bound on the number of precomputed hidden vectors required for these editing methods to have solutions, and propose a lightweight sampling strategy that retains less than 0.3% of the original hidden vectors (~130K) while preserving editing fidelity. Our approach is grounded in linear-algebraic analysis and empirically validated across MEMIT, ROME, and EMMET. Contribution/Results: Precomputation time is reduced from tens of hours to a few minutes, drastically cutting computational overhead and deployment latency. The method provides both theoretical foundations and a practical, framework-agnostic solution for efficient, scalable knowledge updating in large language models.
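As a toy illustration of the kind of rank condition that can underlie such a lower bound (this framing is an assumption, not the paper's exact statement): closed-form editors in the ROME/MEMIT family invert a key statistic C = K Kᵀ, which stays singular until the number of key vectors reaches the hidden dimension.

```python
# Toy illustration (assumed framing, not the paper's exact theorem):
# C = K K^T built from n random d-dimensional keys is singular for n < d
# and almost surely invertible once n >= d, which is why *some* minimum
# number of precomputed hidden vectors is needed at all.
import torch

d = 64                                    # hidden dimension (toy scale)
for n in (16, 63, 64, 1000):
    K = torch.randn(d, n)                 # n key vectors of dimension d
    C = K @ K.T                           # d x d second-moment matrix
    rank = torch.linalg.matrix_rank(C).item()
    print(f"n={n:5d}  rank(C)={rank:3d}  invertible={rank == d}")
```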
📝 Abstract
Knowledge editing methods like MEMIT are able to make data- and compute-efficient updates of factual knowledge by using a single sentence to update facts and their consequences. However, what is often overlooked is a "precomputation step", which incurs a one-time but significant computational cost. The authors of MEMIT originally precompute approximately 44 million hidden vectors per edited layer, which requires a forward pass over 44 million tokens. For GPT-J (6B), this precomputation step takes 36 hours on a single GPU, and approximately 40 hours for Llama2-7B; this precomputation time also grows with model size. In this paper, we show that this excessive computational cost is unnecessary. Knowledge editing using MEMIT and related methods, such as ROME and EMMET, can be performed by precomputing only a very small fraction of the 44 million hidden vectors. We first present the theoretical minimum number of precomputed hidden vectors required for the solutions of these editing methods to exist. We then show empirically that knowledge editing with these methods can be done with significantly fewer precomputed hidden vectors. Specifically, we show that the precomputation step can be completed with less than 0.3% of the originally stipulated number of hidden vectors. This saves a significant amount of precomputation time and allows users to begin editing new models within a few minutes.
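To make the sampled precomputation concrete, below is a minimal sketch (an assumed reconstruction, not the authors' released code): run a forward pass over a small token sample, capture the "key" vectors entering the edited layer's MLP down-projection, and accumulate the second-moment statistic C = K Kᵀ that MEMIT/ROME-style closed-form updates invert. The layer index, dataset, and sample size are illustrative assumptions.

```python
# Sketch of sampled key-statistic precomputation for one edited layer.
# Assumptions: GPT-J layout (mlp.fc_out), wikitext as the text sample,
# layer index and token budget chosen for illustration only.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/gpt-j-6b"   # GPT-J (6B), as in the paper
LAYER = 5                            # one edited layer; index is illustrative
N_SAMPLE_TOKENS = 130_000            # ~0.3% of the original 44M hidden vectors

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

# Keys are the inputs to the MLP down-projection (fc_out in GPT-J).
fc_out = model.transformer.h[LAYER].mlp.fc_out
d_mlp = fc_out.in_features           # 16384 for GPT-J
C = torch.zeros(d_mlp, d_mlp)        # running, uncentered K K^T
n_seen = 0

def accumulate(module, inputs, output):
    global C, n_seen
    k = inputs[0].detach().float().cpu().reshape(-1, d_mlp)  # (tokens, d_mlp)
    C += k.T @ k
    n_seen += k.shape[0]

handle = fc_out.register_forward_hook(accumulate)

# Any generic text corpus can supply the sample; wikitext is illustrative.
texts = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")["text"]
with torch.no_grad():
    for t in texts:
        if n_seen >= N_SAMPLE_TOKENS:
            break
        if not t.strip():
            continue
        batch = tok(t, return_tensors="pt", truncation=True, max_length=512)
        model(**batch.to(model.device))

handle.remove()

# The closed-form edit involves C^{-1}, so C must be invertible, which needs
# at least d_mlp linearly independent keys; ~130K samples vs. d_mlp = 16K
# clears that bound comfortably, versus the 44M the original pipeline collects.
assert n_seen >= d_mlp, "too few hidden vectors for C to be invertible"
```

Because C only accumulates, the same loop scales down to any token budget; the paper's point is that a budget orders of magnitude below 44M already yields a usable statistic.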