Mitigating Negative Interference in Multilingual Sequential Knowledge Editing through Null-Space Constraints

📅 2025-06-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses cross-lingual negative interference in sequential knowledge editing for multilingual large language models (LLMs), which arises because parameters are shared across languages. To mitigate it, the authors propose LangEdit, a null-space constrained editing framework. Its core idea is to project the factual update for each language onto the orthogonal complement of the subspace spanned by previous updates, so that edits for one language cannot overwrite those for another, while preserving multilingual generalization and cross-lingual consistency. Experiments across three LLMs, six languages, and four downstream tasks show that LangEdit outperforms state-of-the-art methods: it suppresses parameter interference, improves editing accuracy, enhances cross-lingual factual consistency, and remains computationally efficient.

📝 Abstract
Efficiently updating multilingual knowledge in large language models (LLMs), while preserving consistent factual representations across languages, remains a long-standing and unresolved challenge. While deploying separate editing systems for each language might seem viable, this approach incurs substantial costs due to the need to manage multiple models. A more efficient solution involves integrating knowledge updates across all languages into a unified model. However, performing sequential edits across languages often leads to destructive parameter interference, significantly degrading multilingual generalization and the accuracy of injected knowledge. To address this challenge, we propose LangEdit, a novel null-space constrained framework designed to precisely isolate language-specific knowledge updates. The core innovation of LangEdit lies in its ability to project parameter updates for each language onto the orthogonal complement of previously updated subspaces. This approach mathematically guarantees update independence while preserving multilingual generalization capabilities. We conduct a comprehensive evaluation across three model architectures, six languages, and four downstream tasks, demonstrating that LangEdit effectively mitigates parameter interference and outperforms existing state-of-the-art editing methods. Our results highlight its potential for enabling efficient and accurate multilingual knowledge updates in LLMs. The code is available at https://github.com/VRCMF/LangEdit.git.
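The orthogonal-complement projection described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration of the general technique, not the authors' implementation; the function name, matrix shapes, and random data are assumptions made for the example.

```python
import numpy as np

def project_to_null_space(delta, prev_updates):
    """Project a new parameter update onto the orthogonal complement
    of the subspace spanned by previous updates (columns of prev_updates)."""
    # Orthonormal basis Q for the historical update subspace.
    Q, _ = np.linalg.qr(prev_updates)
    # Remove the component of delta that lies inside that subspace.
    return delta - Q @ (Q.T @ delta)

rng = np.random.default_rng(0)
prev = rng.normal(size=(8, 3))   # three earlier update directions
delta = rng.normal(size=(8, 1))  # candidate update for a new language

safe = project_to_null_space(delta, prev)
# The projected update is orthogonal to every previous update direction,
# so applying it cannot alter behavior along those directions.
print(np.allclose(prev.T @ safe, 0))  # True
```

Because `safe` has no component inside the span of `prev`, applying it leaves the earlier edits mathematically untouched, which is the independence guarantee the abstract refers to.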
Problem

Research questions and friction points this paper is trying to address.

Mitigating negative interference in multilingual knowledge editing
Preserving consistent factual representations across languages
Isolating language-specific updates to avoid parameter interference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Null-space constraints isolate language-specific updates
Orthogonal projection ensures update independence
Preserves multilingual generalization capabilities
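The sequential setting behind these bullets can be illustrated with a toy loop: each language's raw edit is projected against the subspace accumulated from earlier edits, then that subspace is grown. This is an illustrative sketch only (dimensions, loop structure, and variable names are assumptions, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_langs = 16, 4

basis = np.zeros((dim, 0))   # historical update subspace (empty at start)
applied = []

for lang in range(n_langs):
    delta = rng.normal(size=(dim, 1))              # raw edit for this language
    if basis.shape[1] > 0:
        delta = delta - basis @ (basis.T @ delta)  # null-space projection
    applied.append(delta)
    # Grow the subspace with the normalized new update direction.
    basis = np.hstack([basis, delta / np.linalg.norm(delta)])

# Every pair of applied updates is mutually orthogonal.
G = np.hstack(applied)
gram = G.T @ G
off_diag = gram - np.diag(np.diag(gram))
print(np.allclose(off_diag, 0))  # True
```

The off-diagonal entries of the Gram matrix being zero is exactly the "update independence" property: no edit has a component along any other edit's direction.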