🤖 AI Summary
This work uncovers a critical cross-lingual security vulnerability in multilingual large language models (LLMs): fine-tuning attacks using only a small number of malicious instruction samples in a single language can systematically degrade the models' ability to refuse harmful requests across many other languages.
Method: We conduct systematic parameter-space analysis, construct cross-lingual adversarial examples, and run targeted parameter-freezing experiments to assess attack transferability and defense resilience. We further propose Safety Information Localization (SIL) to identify safety-relevant parameters.
Contribution/Results: We are the first to demonstrate strong cross-lingual generalization of fine-tuning attacks; SIL confirms that safety knowledge is language-agnostic; and we show that fine-tuning only ~20% of critical parameters suffices to compromise safety alignment globally. The attack achieves cross-lingual jailbreaking on multiple mainstream multilingual LLMs, remains effective against newly adapted languages, and cannot be mitigated by freezing safety-related parameters.
📝 Abstract
Recent advancements in Large Language Models (LLMs) have sparked widespread concerns about their safety. Prior work demonstrates that the safety alignment of LLMs can be easily removed by fine-tuning with a few adversarially chosen instruction-following examples, i.e., fine-tuning attacks. We take a further step to understand fine-tuning attacks in multilingual LLMs. We first discover cross-lingual generalization of fine-tuning attacks: using a few adversarially chosen instruction-following examples in one language, multilingual LLMs can also be easily compromised (e.g., multilingual LLMs fail to refuse harmful prompts in other languages). Motivated by this finding, we hypothesize that safety-related information is language-agnostic and propose a new method termed Safety Information Localization (SIL) to identify the safety-related information in the model parameter space. Through SIL, we validate this hypothesis and find that changing only 20% of weight parameters in fine-tuning attacks suffices to break safety alignment across all languages. Furthermore, we provide evidence for the alternative-pathways hypothesis, which explains why freezing safety-related parameters does not prevent fine-tuning attacks, and we demonstrate that our attack vector can still jailbreak LLMs adapted to new languages.
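The abstract does not spell out how safety-relevant parameters are localized. As a hedged illustration only (the paper's actual SIL procedure may differ), a common way to localize the parameters touched by a fine-tuning attack is to rank weights by the magnitude of their change between the aligned checkpoint and the attacked checkpoint, then keep the top fraction (e.g., 20%). The function name `localize_safety_params` and the toy checkpoints below are hypothetical, not from the paper:

```python
# Hypothetical sketch of parameter localization in the spirit of SIL:
# rank parameters by |attacked - base| weight change and select the
# top `fraction` of entries. This is an assumption-laden illustration,
# not the paper's actual SIL algorithm.
import torch


def localize_safety_params(base_state, attacked_state, fraction=0.2):
    """Return boolean masks marking the `fraction` of parameters with
    the largest absolute change between two checkpoints."""
    deltas = {name: (attacked_state[name] - base_state[name]).abs()
              for name in base_state}
    all_deltas = torch.cat([d.flatten() for d in deltas.values()])
    k = max(1, int(fraction * all_deltas.numel()))
    # Threshold = smallest delta among the top-k largest changes.
    threshold = all_deltas.topk(k).values.min()
    return {name: d >= threshold for name, d in deltas.items()}


# Toy demonstration with two random "checkpoints" of 100 weights.
torch.manual_seed(0)
base = {"w": torch.randn(10, 10)}
attacked = {"w": base["w"] + torch.randn(10, 10)}
masks = localize_safety_params(base, attacked, fraction=0.2)
print(masks["w"].sum().item())  # 20 of 100 entries selected
```

A mask like this could then restrict which weights are updated (or frozen) during fine-tuning, which is the kind of masked-update experiment the 20% finding and the freezing results describe.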