Learn and Unlearn in Multilingual LLMs

πŸ“… 2024-06-19
πŸ“ˆ Citations: 5
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This study investigates how misinformation, injected via training data, propagates across languages in multilingual large language models (MLLMs), undermining generation reliability. We identify a critical phenomenon: unlearning harmful content in one language (e.g., English) inadvertently reinforces toxic outputs in other languagesβ€”a previously unreported cross-lingual reinforcement effect. To address this, we propose response-level *multilingual co-forgetting*, requiring simultaneous unlearning in both English and the original harmful language. We introduce a response-intervention-based multilingual co-forgetting framework that integrates cross-lingual consistency evaluation and harmful-generation provenance analysis. Extensive experiments across 12 languages demonstrate that our method reduces misinformation generation by 83.7% on average, significantly improving safety and cross-lingual generalization. The framework offers a scalable, verifiable paradigm for content governance in multilingual foundation models.

Technology Category

Application Category

πŸ“ Abstract
This paper investigates the propagation of harmful information in multilingual large language models (LLMs) and evaluates the efficacy of various unlearning methods. We demonstrate that fake information, regardless of the language it is in, once introduced into these models through training data, can spread across different languages, compromising the integrity and reliability of the generated content. Our findings reveal that standard unlearning techniques, which typically focus on English data, are insufficient in mitigating the spread of harmful content in multilingual contexts and could inadvertently reinforce harmful content across languages. We show that only by addressing harmful responses in both English and the original language of the harmful data can we effectively eliminate generations for all languages. This underscores the critical need for comprehensive unlearning strategies that consider the multilingual nature of modern LLMs to enhance their safety and reliability across diverse linguistic landscapes.
Problem

Research questions and friction points this paper is trying to address.

Addressing harmful misinformation propagation in multilingual LLMs
Evaluating efficacy of unlearning methods across languages
Mitigating cross-lingual reinforcement of harmful content
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual unlearning methods for harmful content
Addressing harmful responses in both languages
Comprehensive strategies for diverse linguistic landscapes
πŸ”Ž Similar Papers
No similar papers found.