Positional Cognitive Specialization: Where Do LLMs Learn To Comprehend and Speak Your Language?

📅 2026-04-01
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the high cost and opaque mechanisms associated with adapting large language models to new languages, as well as the limited understanding of how linguistic competence emerges during training. By analyzing the evolution of input comprehension and output generation capabilities in decoder-only Transformers during low-resource language fine-tuning, this work reveals, for the first time, a layer-wise specialization pattern wherein language perception and production functions become distinctly localized across model layers. Building on this insight, the authors propose CogSym, a heuristic strategy that fine-tunes only the top and bottom 25% of layers. This approach consistently achieves 97–98% of full-model fine-tuning performance across multiple adaptation methods, including LoRA and full-parameter fine-tuning, substantially reducing the computational cost of multilingual adaptation.
๐Ÿ“ Abstract
Adapting large language models (LLMs) to new languages is an expensive and opaque process. Understanding how language models acquire new languages and multilingual abilities is key to achieving efficient adaptation. Prior multilingual interpretability research focuses primarily on how trained models process multilingual instructions, leaving unexplored the mechanisms through which they acquire new languages during training. We investigate these training dynamics in decoder-only Transformers through the lens of two functional cognitive specializations: language perception (input comprehension) and production (output generation). Through experiments on low-resource languages, we demonstrate how perceptual and productive specialization emerges in different regions of a language model by running layer ablation sweeps from the model's input and output directions. Based on the observed specialization patterns, we propose CogSym, a layer-wise heuristic that enables effective adaptation by exclusively fine-tuning a few early and late layers. We show that tuning only the outermost 25% of layers achieves downstream task performance within 2-3% of the full fine-tuning baseline. CogSym yields consistent performance with adapter methods such as LoRA, showcasing generalization beyond full fine-tuning. These findings provide insights to better understand how LLMs learn new languages and push toward accessible and inclusive language modeling.
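The abstract's layer-wise heuristic can be illustrated with a minimal sketch: keep only the outermost layers of a decoder stack trainable and freeze the middle. This is an assumption-laden toy, not the paper's implementation; `apply_cogsym_freeze`, the `outer_fraction` parameter, and the `nn.Linear` stand-in layers are all hypothetical, and the paper leaves it ambiguous whether "25%" counts per end or in total, so the fraction here is a knob.

```python
import torch.nn as nn

def apply_cogsym_freeze(layers, outer_fraction=0.25):
    """Keep the first and last `outer_fraction` of layers trainable
    (the "early and late" layers); freeze everything in between.
    Returns k, the number of trainable layers at each end."""
    n = len(layers)
    k = max(1, round(n * outer_fraction))
    for i, layer in enumerate(layers):
        trainable = i < k or i >= n - k
        for p in layer.parameters():
            p.requires_grad = trainable
    return k

# Toy 8-layer "decoder" stack; real Transformer blocks would work the same way.
layers = nn.ModuleList(nn.Linear(16, 16) for _ in range(8))
k = apply_cogsym_freeze(layers, outer_fraction=0.25)
# With 8 layers and 0.25, layers 0, 1, 6, 7 stay trainable; 2-5 are frozen.
```

An optimizer would then be built only over `(p for p in layers.parameters() if p.requires_grad)`, which is where the compute savings come from.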
Problem

Research questions and friction points this paper is trying to address.

language acquisition
multilingual adaptation
cognitive specialization
large language models
training dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

cognitive specialization
layer-wise adaptation
language perception and production
low-resource languages
efficient fine-tuning
🔎 Similar Papers
No similar papers found.
Luis Frentzen Salim
Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology
Lun-Wei Ku
Research Fellow, Academia Sinica
Sentiment Analysis and Opinion Mining, Natural Language Processing, Text Mining, Information Retrieval, Computational Linguistics
Hsing-Kuo Kenneth Pao
Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology