AI Summary
This study addresses the high cost and opaque mechanisms of adapting large language models to new languages, along with the limited understanding of how linguistic competence emerges during training. By analyzing how input-comprehension and output-generation capabilities evolve in decoder-only Transformers during low-resource language fine-tuning, the work reveals, for the first time, a layer-wise specialization pattern in which language perception and production become localized in distinct regions of the model. Building on this insight, the authors propose CogSym, a heuristic strategy that fine-tunes only the outermost 25% of layers (the earliest and latest). This approach consistently reaches 97-98% of full-model fine-tuning performance across multiple adaptation methods, including LoRA and full-parameter fine-tuning, substantially reducing the computational cost of multilingual adaptation.
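The CogSym selection rule described above can be sketched as a small helper that picks which layer indices to leave trainable. This is an illustrative reconstruction, not the authors' code: the function name `cogsym_layers` and the assumption that the 25% budget is split evenly between the earliest and latest layers are mine.

```python
def cogsym_layers(num_layers: int, outer_frac: float = 0.25) -> list[int]:
    """Return indices of the outermost transformer layers to fine-tune.

    Assumption: `outer_frac` is the *total* trainable budget, divided
    evenly between the bottom (perception) and top (production) of the
    stack; all middle layers stay frozen.
    """
    per_side = max(1, round(num_layers * outer_frac / 2))
    early = list(range(per_side))
    late = list(range(num_layers - per_side, num_layers))
    return early + late


# Example: a 32-layer decoder keeps 4 early and 4 late layers trainable.
print(cogsym_layers(32))  # [0, 1, 2, 3, 28, 29, 30, 31]
```

In a typical PyTorch or PEFT setup, these indices would drive which blocks get `requires_grad=True` (or which layers a LoRA adapter is attached to), while the remaining layers are frozen.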
Abstract
Adapting large language models (LLMs) to new languages is an expensive and opaque process. Understanding how language models acquire new languages and multilingual abilities is key to achieving efficient adaptation. Prior multilingual interpretability research focuses primarily on how trained models process multilingual instructions, leaving unexplored the mechanisms through which models acquire new languages during training. We investigate these training dynamics in decoder-only Transformers through the lens of two functional cognitive specializations: language perception (input comprehension) and production (output generation). Through experiments on low-resource languages, we demonstrate how perceptual and productive specialization emerges in different regions of a language model by running layer ablation sweeps from the model's input and output directions. Based on the observed specialization patterns, we propose CogSym, a layer-wise heuristic that enables effective adaptation by fine-tuning only a few early and late layers. We show that tuning only the 25% outermost layers achieves downstream task performance within 2-3% of the full fine-tuning baseline. CogSym yields consistent performance with adapter methods such as LoRA, showing that it generalizes beyond full fine-tuning. These findings deepen our understanding of how LLMs learn new languages and advance accessible and inclusive language modeling.
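The abstract's "layer ablation sweeps from the model's input and output directions" can be made concrete with a small generator that enumerates the layer sets ablated at each sweep step. This is a hedged sketch of the general technique, not the paper's implementation; the function name and the exact sweep schedule are assumptions.

```python
from typing import Iterator


def ablation_sweep(num_layers: int, direction: str = "input") -> Iterator[list[int]]:
    """Yield progressively larger sets of layer indices to ablate.

    direction="input"  sweeps from the bottom of the stack (early layers),
    direction="output" sweeps from the top (late layers).
    Each yielded list would be ablated (e.g. replaced by identity maps)
    before re-evaluating comprehension or generation metrics.
    """
    for k in range(1, num_layers + 1):
        if direction == "input":
            yield list(range(k))                           # first k layers
        else:
            yield list(range(num_layers - k, num_layers))  # last k layers


# Example on a 4-layer model:
print(list(ablation_sweep(4, "input")))   # [[0], [0, 1], [0, 1, 2], [0, 1, 2, 3]]
print(list(ablation_sweep(4, "output")))  # [[3], [2, 3], [1, 2, 3], [0, 1, 2, 3]]
```

Plotting the metric drop against sweep depth from each direction is what localizes perception in early layers and production in late layers.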