🤖 AI Summary
This work addresses the challenges of semantic drift and catastrophic forgetting in continual editing of large language models, as well as the difficulty in precisely reverting specific edits. To overcome these limitations, we propose SoLA, a novel framework that encapsulates each edit into an independent LoRA module. SoLA employs a semantic routing mechanism embedded within the edited layers to dynamically activate relevant modules without requiring an additional routing network. By combining frozen module management with end-to-end semantic matching, SoLA achieves reversible rollbacks for the first time, enabling exact deletion of arbitrary edits and full restoration of the model’s original behavior. Experimental results demonstrate that SoLA significantly outperforms existing methods in edit accuracy, efficiency, and reversibility.
📝 Abstract
The dynamic evolution of real-world knowledge necessitates model editing in Large Language Models. While existing methods explore modular isolation or parameter-efficient strategies, they still suffer from semantic drift or knowledge forgetting under continual updating. To address these challenges, we propose SoLA, a Semantic-routing-based LoRA framework for lifelong model editing. In SoLA, each edit is encapsulated as an independent LoRA module that is frozen after training and associated with a semantic key; at inference, semantic matching against these keys dynamically activates the relevant module. This mechanism avoids the semantic drift caused by cluster updating and mitigates the catastrophic forgetting that arises from parameter sharing. More importantly, SoLA supports precise revocation of specific edits: removing an edit's key from the semantic routing restores the model's original behavior. To our knowledge, this reversible rollback capability is the first to be achieved in the existing literature. Furthermore, SoLA integrates the routing decision into the edited layer itself, eliminating the need for an auxiliary routing network and enabling end-to-end decision-making. Extensive experiments demonstrate that SoLA effectively learns and retains edited knowledge, achieving accurate, efficient, and reversible lifelong model editing.
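The mechanism described above can be illustrated with a minimal sketch. The class name `SoLARouter`, the cosine-similarity threshold, and the dictionary-based key store are all illustrative assumptions, not the paper's actual implementation; the sketch only shows the core idea of per-edit frozen LoRA pairs activated by semantic matching and reverted by deleting a key.

```python
import numpy as np

class SoLARouter:
    """Hypothetical sketch of SoLA-style routing (not the authors' code).

    Each edit stores a frozen LoRA pair (A, B) under a semantic key;
    at inference, the key most similar to the input (above a threshold)
    activates its module, and deleting the key reverts the edit exactly.
    """

    def __init__(self, threshold=0.8):
        self.modules = {}          # edit_id -> (unit key, A, B)
        self.threshold = threshold

    def add_edit(self, edit_id, key, A, B):
        # Freeze the module: store copies so later edits cannot mutate it.
        self.modules[edit_id] = (key / np.linalg.norm(key), A.copy(), B.copy())

    def revoke(self, edit_id):
        # Removing the key fully restores original behavior for its inputs.
        self.modules.pop(edit_id, None)

    def forward(self, W, h):
        # Base output of the edited layer.
        out = W @ h
        hn = h / np.linalg.norm(h)
        best, best_sim = None, self.threshold
        for key, A, B in self.modules.values():
            sim = float(key @ hn)  # cosine similarity to the stored key
            if sim > best_sim:
                best, best_sim = (A, B), sim
        if best is not None:
            A, B = best
            out = out + B @ (A @ h)  # apply low-rank update only when routed
        return out
```

Because each module is isolated and only fires when its key matches, unrelated inputs pass through the unmodified base weights, and revoking an edit is a constant-time dictionary deletion rather than a retraining step.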