🤖 AI Summary
To address catastrophic forgetting during low-resource target-language adaptation, which arises when an instruct LLM is fine-tuned solely on unlabeled target-language data, this paper proposes Source-Shielded Updates (SSU). SSU quantifies parameter importance using a small amount of source-language data and freezes parameters column-wise, updating only those least critical to source-task performance. Crucially, SSU requires no target-language annotations, back-translation, or knowledge distillation. On 7B and 13B models, SSU limits average source-task degradation to just 3.4% and 2.8%, respectively, substantially outperforming full fine-tuning (20.3% and 22.3%), while matching or exceeding full fine-tuning on target-language performance. The core contribution is the first unsupervised, minimally invasive, and source-preserving paradigm for language expansion, enabling robust multilingual capability extension without compromising source-language competence.
📝 Abstract
Expanding the linguistic diversity of instruct large language models (LLMs) is crucial for global accessibility but is often hindered by the reliance on costly, specialized labeled data in the target language and by catastrophic forgetting during adaptation. We tackle this challenge under a realistic, low-resource constraint: adapting instruct LLMs using only unlabeled target-language data. We introduce Source-Shielded Updates (SSU), a selective parameter-update strategy that proactively preserves source knowledge. Using a small set of source data and a parameter importance scoring method, SSU identifies parameters critical to maintaining source abilities. It then applies a column-wise freezing strategy to protect these parameters before adaptation. Experiments across five typologically diverse languages and 7B and 13B models demonstrate that SSU successfully mitigates catastrophic forgetting. It reduces performance degradation on monolingual source tasks to just 3.4% (7B) and 2.8% (13B) on average, a stark contrast to the 20.3% and 22.3% from full fine-tuning. SSU also achieves target-language performance highly competitive with full fine-tuning, outperforming it on all benchmarks for 7B models and the majority for 13B models.
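The mechanism described above, scoring parameter importance on a small source-data sample and then freezing the most source-critical columns before adapting on target data, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the first-order importance proxy `|w · ∂L/∂w|` summed per column, the `freeze_ratio` parameter, and the function names are all assumptions for illustration.

```python
import numpy as np

def column_importance(weight, source_grad):
    # Assumed first-order importance proxy: |w * dL/dw| on source data,
    # aggregated over rows so each column gets one score.
    return np.abs(weight * source_grad).sum(axis=0)

def shielded_update(weight, source_grad, target_grad, lr=0.1, freeze_ratio=0.5):
    """One SSU-style step (sketch): shield the columns most important to the
    source task, then apply a target-data gradient step to the rest."""
    scores = column_importance(weight, source_grad)
    k = int(np.ceil(freeze_ratio * weight.shape[1]))
    frozen = np.argsort(scores)[-k:]        # top-k most source-critical columns
    mask = np.ones(weight.shape[1])
    mask[frozen] = 0.0                      # zero out updates to shielded columns
    return weight - lr * target_grad * mask # mask broadcasts across rows

# Toy usage: column 2 carries the largest weights, so it is shielded.
W = np.array([[1.0, 0.1, 2.0],
              [1.0, 0.1, 2.0]])
src_g = np.ones_like(W)   # stand-in gradient from source-language data
tgt_g = np.ones_like(W)   # stand-in gradient from target-language data
W_new = shielded_update(W, src_g, tgt_g, lr=0.1, freeze_ratio=1/3)
```

In a real training loop the same idea is typically realized by masking gradients per column (e.g. via gradient hooks) rather than editing weights directly, so the optimizer never moves the shielded parameters.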