🤖 AI Summary
This work addresses the core challenge in multilingual knowledge editing (MKE): unreliable cross-lingual propagation of edits. We first propose a unified taxonomy covering four method classes—parameter-based, memory-augmented, fine-tuning-based, and hypernetwork-based approaches—and systematically analyze their cross-lingual transfer mechanisms and bottlenecks. Combining taxonomic analysis, evaluation across multiple benchmarks, and a meta-analysis of empirically observed transfer patterns, we show that linguistic anisotropy significantly impedes edit generalization across languages. We further identify critical gaps, including insufficient evaluation coverage and poor edit scalability. Our study distills empirically grounded effectiveness patterns and cross-lingual propagation behaviors of mainstream MKE methods, establishing a theoretical framework and practical guidelines for developing editable, language-aware large language models.
📝 Abstract
While knowledge editing has been extensively studied in monolingual settings, it remains underexplored in multilingual contexts. This survey systematizes recent research on Multilingual Knowledge Editing (MKE), a growing subdomain of model editing focused on ensuring that factual edits generalize reliably across languages. We present a comprehensive taxonomy of MKE methods, covering parameter-based, memory-based, fine-tuning-based, and hypernetwork-based approaches. We survey available benchmarks, summarize key findings on method effectiveness and transfer patterns, identify challenges in cross-lingual propagation, and highlight open problems related to language anisotropy, evaluation coverage, and edit scalability. Our analysis consolidates a rapidly evolving area and lays the groundwork for future progress in editable, language-aware LLMs.