Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning

📅 2025-03-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of making successful edits while preserving locality in knowledge editing for large language models (LLMs), this paper proposes an input-adaptive editing method based on subspace basis vectors. Its core innovation, Basis-level Fine-Tuning (BaFT), abandons the conventional linear-update assumption and instead introduces an input-conditioned, nonlinear basis-weighting mechanism, enabling precise localization and dynamic modification of knowledge within a learned subspace. BaFT supports efficient, scalable, and continual multi-step editing. Extensive experiments on three mainstream LLMs and five benchmarks show significant improvements over strong baselines such as MEMIT and ROME: higher edit success rates, better-preserved generalization, and a 37% gain in edit locality. These results substantially alleviate the inherent trade-off between editing accuracy and locality.
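
To make the mechanism concrete, the contrast with prior representation fine-tuning can be written out. The first equation below is the ReFT-style linear intervention from prior work; the second is a hedged reconstruction of BaFT's basis weighting from this summary, where the gate g_i(h) and its exact parameterization are assumptions rather than the authors' published formulation.

```latex
% ReFT-style linear intervention in a learned subspace with basis rows
% R \in \mathbb{R}^{r \times d} (the same update applies to every input):
\Phi(h) = h + R^{\top}\bigl(Wh + b - Rh\bigr)

% BaFT (hedged reconstruction): gate each basis direction r_i with an
% input-dependent weight g_i(h), so the update is no longer linear in h:
\Phi_{\mathrm{BaFT}}(h) = h + \sum_{i=1}^{r} g_i(h)\,\bigl(Wh + b - Rh\bigr)_i\, r_i
```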

Technology Category
Machine Learning

Application Category
Large Language Models
📝 Abstract
Large language models (LLMs) have achieved remarkable performance on various natural language tasks. However, they are trained on static corpora, and their knowledge can quickly become outdated in a fast-changing world. This motivates knowledge editing methods that update specific knowledge in LLMs without changing unrelated knowledge. To make selective edits, previous efforts often update a small number of parameters in specific layer(s) of an LLM. Nonetheless, in challenging scenarios they still fall short of making successful edits while simultaneously preserving knowledge irrelevant to the updates, resulting in a notable editing-locality trade-off. In this work, we ask whether this trade-off is caused by the fact that parameter-based updates have a global effect, i.e., edited parameters affect all inputs indiscriminately. In light of this, we explore the feasibility of representation fine-tuning, which applies a linear update to a few representations in a learned subspace, for knowledge editing. While effective at enhancing an LLM's general abilities, as demonstrated in prior work, we show theoretically that this linear update imposes a tension in the editing-locality trade-off. We therefore propose BaFT to break the linearity. BaFT computes a weight for each basis vector that spans a dimension of the subspace, conditioned on the input representation. This input-dependent weighting mechanism allows BaFT to manage different types of knowledge adaptively, thereby achieving a better editing-locality trade-off. Experiments on three LLMs with five editing benchmarks in diverse scenarios show the superiority of our method.
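
For intuition in code, below is a minimal PyTorch sketch of the idea under stated assumptions: LinearSubspaceEdit mirrors the ReFT-style linear update described in the abstract, while BaFTIntervention adds the input-dependent per-basis gate. The class names, the sigmoid gate, and the rank are illustrative choices, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class LinearSubspaceEdit(nn.Module):
    """ReFT-style linear intervention: h + (Wh + b - Rh) R.
    The same update rule applies to every input (global effect)."""
    def __init__(self, d_model: int, rank: int):
        super().__init__()
        self.R = nn.Parameter(torch.randn(rank, d_model) / d_model**0.5)  # subspace basis rows
        self.W = nn.Linear(d_model, rank)                                 # learned target projection (with bias b)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + (self.W(h) - h @ self.R.T) @ self.R

class BaFTIntervention(nn.Module):
    """Hypothetical sketch of basis-level fine-tuning: each basis row r_i
    gets an input-dependent weight g_i(h), so which directions of the
    subspace are edited, and by how much, depends on the input."""
    def __init__(self, d_model: int, rank: int):
        super().__init__()
        self.R = nn.Parameter(torch.randn(rank, d_model) / d_model**0.5)
        self.W = nn.Linear(d_model, rank)
        self.gate = nn.Linear(d_model, rank)  # assumed gate parameterization

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        delta = self.W(h) - h @ self.R.T   # per-basis edit coefficients
        g = torch.sigmoid(self.gate(h))    # per-basis weights g_i(h) in (0, 1)
        return h + (g * delta) @ self.R    # weighted, input-adaptive update

# Example: edit hidden states of width 64 within a rank-4 subspace.
h = torch.randn(2, 64)
edit = BaFTIntervention(d_model=64, rank=4)
h_edited = edit(h)
```

In this sketch, unrelated inputs can drive the gate toward 0 and pass through nearly unchanged, while inputs carrying the edited knowledge receive the full subspace update; this input-dependence is how BaFT targets a better editing-locality trade-off than a single global linear update.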
Problem

Research questions and friction points this paper is trying to address.

Updating outdated knowledge in large language models efficiently.
Achieving selective knowledge edits without affecting unrelated information.
Overcoming the editing-locality trade-off caused by the global effect of parameter-based updates.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Basis-Level Representation Fine-Tuning for knowledge editing
Input-dependent weighting mechanism for adaptive knowledge management
Breaks the linearity of subspace updates to improve the editing-locality trade-off
👥 Authors
Tianci Liu (Purdue University)
Ruirui Li (Amazon)
Yunzhe Qi (University of Illinois at Urbana-Champaign)
Hui Liu (Amazon)
Xianfeng Tang (Amazon)
Tianqi Zheng (Amazon)
Qingyu Yin (Amazon)
Monica Xiao Cheng (Amazon)
Jun Huan (AWS AI Lab)
Haoyu Wang (SUNY Albany)
Jing Gao (Purdue University)