Editing Across Languages: A Survey of Multilingual Knowledge Editing

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This survey addresses the core challenge in multilingual knowledge editing (MKE): unreliable cross-lingual propagation of edits. It first proposes a unified taxonomy covering four method classes—parameter-based, memory-augmented, fine-tuning-based, and hypernetwork-based approaches—and systematically analyzes their cross-lingual transfer mechanisms and bottlenecks. Combining taxonomic analysis, evaluation across multiple benchmarks, and a meta-analysis of empirically observed transfer patterns, the authors find that linguistic anisotropy significantly impedes edit generalization across languages, and identify critical gaps in evaluation coverage and edit scalability. The study distills empirically grounded effectiveness patterns and cross-lingual propagation behaviors of mainstream MKE methods, establishing a theoretical framework and practical guidelines for developing editable, language-aware large language models.

📝 Abstract
While Knowledge Editing has been extensively studied in monolingual settings, it remains underexplored in multilingual contexts. This survey systematizes recent research on Multilingual Knowledge Editing (MKE), a growing subdomain of model editing focused on ensuring that factual edits generalize reliably across languages. We present a comprehensive taxonomy of MKE methods, covering parameter-based, memory-based, fine-tuning, and hypernetwork approaches. We survey available benchmarks, summarize key findings on method effectiveness and transfer patterns, identify challenges in cross-lingual propagation, and highlight open problems related to language anisotropy, evaluation coverage, and edit scalability. Our analysis consolidates a rapidly evolving area and lays the groundwork for future progress in editable, language-aware LLMs.
Problem

Research questions and friction points this paper is trying to address.

Exploring multilingual knowledge editing in language models
Assessing cross-lingual generalization of factual edits
Addressing challenges in language anisotropy and scalability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Taxonomy of multilingual knowledge editing methods
Benchmarks for evaluating cross-lingual edit propagation
Analysis of language anisotropy in model editing
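The memory-based class in the taxonomy above keeps edits outside the model's weights and consults them at inference time. A toy sketch (illustrative only; all names and the stored facts are hypothetical, not from the paper) shows both the mechanism and why cross-lingual propagation fails when the edit memory is keyed in a single language:

```python
# Minimal sketch of a memory-augmented editor: edits live in an external
# store keyed by (subject, relation); queries hit the memory first and
# fall back to the frozen base model on a miss.

class EditMemory:
    def __init__(self):
        self._edits = {}  # (subject, relation) -> edited object

    def add(self, subject, relation, obj):
        self._edits[(subject, relation)] = obj

    def lookup(self, subject, relation):
        return self._edits.get((subject, relation))

def base_model(subject, relation):
    # Stand-in for the frozen LLM's (possibly stale) factual recall.
    stale = {
        ("France", "capital"): "Paris",
        ("Frankreich", "Hauptstadt"): "Paris",  # same fact, German surface form
    }
    return stale.get((subject, relation), "<unknown>")

def answer(memory, subject, relation):
    hit = memory.lookup(subject, relation)
    return hit if hit is not None else base_model(subject, relation)

memory = EditMemory()
memory.add("France", "capital", "Lyon")  # hypothetical edit, English keys only

print(answer(memory, "France", "capital"))         # edit applied: Lyon
# The same fact queried in German misses the English-keyed memory and
# falls back to the stale base answer -- the cross-lingual propagation gap.
print(answer(memory, "Frankreich", "Hauptstadt"))  # still Paris
```

The miss on the German query is exactly the failure mode the survey attributes to surface-form-sensitive retrieval: the edit exists, but only under one language's representation.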
🔎 Similar Papers
2024-07-14 · Conference on Empirical Methods in Natural Language Processing · Citations: 0
2024-04-07 · International Conference on Computational Linguistics · Citations: 4