Cross-Lingual Multi-Hop Knowledge Editing - Benchmarks, Analysis and a Simple Contrastive Learning based Approach

📅 2024-07-14
🏛️ Conference on Empirical Methods in Natural Language Processing
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing knowledge editing methods are largely confined to monolingual English settings and fail to address the dynamic updating of novel facts across languages. This work introduces the cross-lingual multi-hop knowledge editing task. It constructs CROLIN-MQUAKE, the first parallel multilingual benchmark for evaluating such editing, and proposes CLEVER-CKE, a retrieve-verify-generate framework that integrates language-aware contrastive learning, hard-negative-driven fact retrieval and verification, multilingual fact-embedding alignment, and an LLM-guided mechanism for adhering to knowledge edits. Extensive experiments across three large language models, eight languages, and two datasets show gains of up to 30% over prior methods, substantially mitigating the performance degradation observed in non-English settings while improving cross-lingual generalization and multi-hop reasoning.

📝 Abstract
Large language models are often expected to constantly adapt to new sources of knowledge, and knowledge editing techniques aim to efficiently patch outdated model knowledge with minimal modification. Most prior works focus on monolingual knowledge editing in English, even though new information can emerge in any language from any part of the world. We propose the Cross-Lingual Multi-Hop Knowledge Editing paradigm for measuring and analyzing the performance of various SoTA knowledge editing techniques in a cross-lingual setup. Specifically, we create a parallel cross-lingual benchmark, CROLIN-MQUAKE, for measuring knowledge editing capabilities. Our extensive analysis of various knowledge editing techniques uncovers significant gaps in performance between the cross-lingual and English-centric settings. Following this, we propose a significantly improved system for cross-lingual multi-hop knowledge editing, CLEVER-CKE. CLEVER-CKE is based on a retrieve, verify, and generate knowledge editing framework, where a retriever is trained to recall edited facts and help an LLM adhere to knowledge edits. We develop language-aware and hard-negative-based contrastive objectives for improving the cross-lingual and fine-grained fact retrieval and verification process used in this framework. Extensive experiments on three LLMs, eight languages, and two datasets show CLEVER-CKE's significant gains of up to 30% over prior methods.
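The "language-aware and hard-negative based contrastive objectives" mentioned in the abstract are not spelled out here; a common instantiation is an InfoNCE-style loss that pulls a query embedding toward the edited fact (including its parallel translation) and pushes it away from hard negatives, i.e. similar-looking but wrong facts. The sketch below is a minimal, hypothetical illustration of that idea; the function name, temperature value, and toy data are assumptions, not the paper's implementation.

```python
import numpy as np

def info_nce_loss(query, positive, negatives, temperature=0.05):
    """InfoNCE-style contrastive loss: pull the query embedding toward the
    edited-fact positive and push it away from hard negatives."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    # Positive similarity sits at index 0, negatives follow.
    logits = np.array([cos(query, positive)] +
                      [cos(query, n) for n in negatives]) / temperature
    logits -= logits.max()                        # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

# Toy check: a query close to its positive (e.g. a parallel translation
# of the same edited fact) should yield a low loss.
rng = np.random.default_rng(0)
q = rng.normal(size=8)
pos = q + 0.01 * rng.normal(size=8)             # near-duplicate embedding
negs = [rng.normal(size=8) for _ in range(4)]   # hard negatives: unrelated facts
loss = info_nce_loss(q, pos, negs)
```

The low temperature sharpens the softmax, so ranking the true edited fact above hard negatives is rewarded strongly; this is the standard reason contrastive retrievers use small temperature values.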
Problem

Research questions and friction points this paper is trying to address.

Cross-Lingual Knowledge Editing
Multi-Hop Knowledge Adaptation
Performance Gap Analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-Lingual Multi-Hop Knowledge Editing
Retrieve, Verify, Generate Framework
Language-Aware Contrastive Objectives
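The retrieve, verify, generate framework listed above can be sketched as a three-stage loop: rank candidate edited facts for the question, keep only those a verifier accepts, and condition generation on the verified edits. This is a minimal sketch under stated assumptions; the function names, word-overlap scorer, and threshold are hypothetical stand-ins for the paper's learned retriever, verifier, and LLM.

```python
def edit_aware_answer(question, edit_memory, retriever_score, verify, generate,
                      top_k=3, threshold=0.6):
    """Retrieve-verify-generate sketch: recall top-k candidate edited facts,
    verify their relevance, then generate conditioned on verified edits."""
    ranked = sorted(edit_memory,
                    key=lambda fact: retriever_score(question, fact),
                    reverse=True)[:top_k]
    verified = [fact for fact in ranked if verify(question, fact) >= threshold]
    return generate(question, verified)

# Toy stubs standing in for the learned retriever, verifier, and LLM.
def overlap(question, fact):
    qw = set(question.lower().split())
    fw = set(fact.lower().split())
    return len(qw & fw) / len(fw)

memory = ["The capital of France is Lyon",   # hypothetical edited fact
          "The CEO of Acme is Alice"]
answer = edit_aware_answer(
    "What is the capital of France?", memory,
    retriever_score=overlap, verify=overlap,
    generate=lambda q, facts: facts[0] if facts else "no edit applies")
```

Separating retrieval from verification lets a cheap recall stage cast a wide net while a stricter check filters out near-miss facts before they can mislead the generator, which is the failure mode hard negatives are meant to expose.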