Multilingual Safety Alignment Via Sparse Weight Editing

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
The safety safeguards of large language models are often bypassed by prompts in low-resource languages, leading to inconsistent cross-lingual safety performance. This work proposes a training-free multilingual safety alignment method that, for the first time, constructs a closed-form solution for cross-lingual alignment based on sparse safety neurons. By combining sparse weight editing, a constrained linear transformation, and null-space projection, the approach maps harmful representations from low-resource languages into the safety subspace of high-resource languages. Requiring neither multilingual safety data nor additional training, the method significantly reduces attack success rates across eight languages and multiple models while preserving general reasoning capabilities, achieving alignment through a single, efficient computation with minimal overhead.
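
This page does not reproduce the paper's equations, so the following is one plausible instantiation of the constrained transformation the summary describes, not the paper's actual formulation; every symbol here ($W$, $H_L$, $H_S$, $K$, $P$, $R$, $\lambda$) is introduced purely for illustration:

```latex
% Hypothetical formulation: edit the weights W of one layer so that harmful
% LRL activations H_L are mapped to HRL safety-subspace targets H_S, while
% general-utility activations K are left untouched.
\min_{\Delta W} \bigl\| (W + \Delta W)\, H_L - H_S \bigr\|_F^2
\quad \text{s.t.} \quad \Delta W K = 0 .

% With the residual R = H_S - W H_L and the orthogonal projector
% P = I - K (K^\top K)^{-1} K^\top onto the null space of K^\top,
% a ridge-regularized closed-form solution is
\Delta W = R\, H_L^\top P \bigl( P\, H_L H_L^\top P + \lambda I \bigr)^{-1} P .
```

The null-space constraint is what makes a single closed-form computation possible: any update of the form $M P$ annihilates $K$ by construction, so utility-bearing activations are unchanged without any retraining.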

📝 Abstract
Large Language Models (LLMs) exhibit significant safety disparities across languages: prompts in low-resource languages (LRLs) often bypass safety guardrails established for high-resource languages (HRLs) such as English. Existing solutions, such as multilingual supervised fine-tuning (SFT) or Reinforcement Learning from Human Feedback (RLHF), are computationally expensive and depend on scarce multilingual safety data. In this work, we propose a novel, training-free alignment framework based on Sparse Weight Editing. Observing that safety capabilities are localized within a sparse set of safety neurons, we formulate the cross-lingual alignment problem as a constrained linear transformation. We derive a closed-form solution that optimally maps the harmful representations of LRLs to the robust safety subspaces of HRLs while preserving general utility via a null-space projection constraint. Extensive experiments across 8 languages and multiple model families (Llama-3, Qwen-2.5) demonstrate that our method substantially reduces the Attack Success Rate (ASR) in LRLs with negligible impact on general reasoning capabilities, all achieved with a single, data-efficient calculation.
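
To make the pipeline concrete, here is a minimal NumPy sketch of a null-space-projected, closed-form weight edit restricted to a sparse neuron mask. Everything in it (the shapes, the ridge term `lam`, the row-norm `safety mask` heuristic) is an illustrative assumption; the paper's actual neuron-identification procedure and constraint formulation are not shown on this page.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: one edited layer with a d_out x d_in weight matrix.
d_out, d_in, n_harm, n_util = 64, 48, 32, 40

W = rng.normal(size=(d_out, d_in))     # original layer weights
H_L = rng.normal(size=(d_in, n_harm))  # activations of harmful LRL prompts (columns)
H_S = rng.normal(size=(d_out, n_harm)) # target outputs in the HRL safety subspace
K = rng.normal(size=(d_in, n_util))    # activations of general-utility prompts
lam = 1e-2                             # ridge term for numerical stability

# Orthogonal projector onto the null space of K^T: P @ K ~ 0, so any update
# of the form M @ P leaves the utility activations (approximately) untouched.
P = np.eye(d_in) - K @ np.linalg.solve(K.T @ K + lam * np.eye(n_util), K.T)

# Residual the edit must produce: (W + dW) @ H_L should approximate H_S.
R = H_S - W @ H_L

# Closed-form ridge solution for M in ||(M @ P) @ H_L - R||_F^2 + lam * ||M||_F^2.
A = P @ H_L
C = A @ A.T + lam * np.eye(d_in)
M = np.linalg.solve(C, (R @ A.T).T).T  # M = R A^T (A A^T + lam I)^{-1}
dW = M @ P

# Restrict the edit to a sparse set of "safety neurons" (hypothetical heuristic:
# rows of dW with the largest norms stand in for the paper's identified neurons).
row_norms = np.linalg.norm(dW, axis=1)
mask = row_norms >= np.quantile(row_norms, 0.90)  # keep the top 10% of rows
dW_sparse = dW * mask[:, None]

W_edited = W + dW_sparse

# Sanity checks: utility activations barely move; harmful LRL activations
# are pulled toward the safety targets on the edited rows.
print("utility drift  :", np.linalg.norm(dW_sparse @ K))
print("residual before:", np.linalg.norm(W @ H_L - H_S))
print("residual after :", np.linalg.norm(W_edited @ H_L - H_S))
```

In a real model, `H_L`, `H_S`, and `K` would come from hidden states of actual prompts at the edited layer, and the sparse mask would come from the paper's safety-neuron localization procedure rather than the row-norm stand-in above.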
Problem

Research questions and friction points this paper is trying to address.

multilingual safety
low-resource languages
safety alignment
large language models
safety disparities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse Weight Editing
Multilingual Safety Alignment
Safety Neurons
Null-space Projection
Training-free Alignment
👥 Authors
Jiaming Liang
School of Artificial Intelligence, Xidian University
Zhaoxin Wang
School of Artificial Intelligence, Xidian University
Handing Wang
School of Artificial Intelligence, Xidian University
Evolutionary Computing · Multi-objective Optimization · Data-Driven Optimization · Trustworthy AI