Neural FOXP2 -- Language Specific Neuron Steering for Targeted Language Improvement in LLMs

πŸ“… 2026-02-01
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Although large language models are trained on multilingual data, non-English languages are systematically suppressed in their parameters due to the dominance of English. This work proposes a precise intervention method that leverages hierarchical sparse autoencoders (SAEs), language-selectivity quantification, and inter-layer SVD analysis to identify and manipulate sparse, low-rank language control circuits within the model. By applying targeted activation shifts in low-to-mid network layers, the approach reorients the model’s default language preference toward target languages such as Hindi or Spanish. This technique enables efficient, stable, and controllable language-preference switching, significantly enhancing performance in the target languages without compromising overall model capabilities.
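The localization step described above scores SAE features by how much logit mass they push toward the target-language token set. A minimal sketch of that scoring, assuming a hypothetical SAE decoder `W_dec`, unembedding matrix `W_U`, and stand-in token-id sets (none of these shapes or names come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: an SAE with F features over a d-dim residual stream,
# and an unembedding matrix mapping activations to a V-token vocabulary.
d, F, V = 64, 256, 1000
W_dec = rng.normal(size=(F, d))      # SAE decoder: feature -> activation space
W_U = rng.normal(size=(d, V))        # unembedding: activation -> logits
target_tokens = np.arange(0, 50)     # stand-in ids for Hindi/Spanish tokens
english_tokens = np.arange(50, 100)  # stand-in ids for English tokens

# Logit-mass lift of each feature toward the target-language token set:
# project every decoder direction into logit space and compare the mass
# it places on target-language tokens vs. English tokens.
logits = W_dec @ W_U                 # (F, V) logit contribution per feature
lift = logits[:, target_tokens].sum(axis=1) - logits[:, english_tokens].sum(axis=1)

# Top-ranked features are candidates for the compact language-neuron set.
top_features = np.argsort(lift)[::-1][:16]
```

Tracing each of these top features back to its strongest contributing units would then yield the compact language-neuron set the summary mentions.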

πŸ“ Abstract
LLMs are multilingual by training, yet their lingua franca is often English, reflecting the dominance of English in pretraining data. Other languages remain in parametric memory but are systematically suppressed. We argue that language defaultness is governed by a sparse, low-rank control circuit of language neurons that can be mechanistically isolated and safely steered. We introduce Neural FOXP2, which makes a chosen language (Hindi or Spanish) primary in a model by steering language-specific neurons. Neural FOXP2 proceeds in three stages. (i) Localize: we train per-layer SAEs so that each activation decomposes into a small set of active feature components. For every feature, we quantify English vs. Hindi/Spanish selectivity as the overall logit-mass lift toward the target-language token set. Tracing the top-ranked features back to their strongest contributing units yields a compact language-neuron set. (ii) Steering directions: we localize controllable language-shift geometry via a spectral low-rank analysis. For each layer, we build English-to-target activation-difference matrices and perform layerwise SVD to extract the dominant singular directions governing language change. The eigengap and effective-rank spectra identify a compact steering subspace and an empirically chosen intervention window where these directions are strongest and most stable. (iii) Steer: we apply a signed, sparse activation shift targeted at the language neurons. Concretely, within low-to-mid layers we add a positive steering shift along the target-language dominant directions and a compensating negative shift toward the null space for the English neurons, yielding controllable target-language defaultness.
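Stages (ii) and (iii) can be sketched with synthetic data: an SVD of an activation-difference matrix gives candidate steering directions, and a signed shift applies them. Everything here is illustrative, assuming made-up activations `H_en`/`H_tgt` and hand-picked coefficients `alpha`/`beta` that the abstract does not specify:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-layer activations for n matched English and
# target-language prompts in a d-dim residual stream.
n, d = 200, 64
H_en = rng.normal(size=(n, d))
H_tgt = rng.normal(size=(n, d)) + 0.5   # synthetic language offset

# (ii) Steering directions: SVD of the English-to-target difference matrix.
D = H_tgt - H_en
U, S, Vt = np.linalg.svd(D, full_matrices=False)
eigengap = S[0] - S[1]                  # a large gap suggests a compact subspace
k = 2                                   # effective rank read off the spectrum
steer_dirs = Vt[:k]                     # dominant singular directions, shape (k, d)

# (iii) Steer: a positive shift along the target-language directions plus a
# compensating negative term that damps the activation's existing component
# in the steering subspace (a stand-in for the null-space shift on English neurons).
def steer(h, dirs, alpha=2.0, beta=0.5):
    shift = alpha * dirs.sum(axis=0)    # push toward the target language
    proj = (h @ dirs.T) @ dirs          # component already in the steering subspace
    return h + shift - beta * proj

h_steered = steer(H_en[0], steer_dirs)
```

In a real intervention this shift would be applied only inside the empirically chosen low-to-mid layer window and only to the identified language neurons, rather than to a full dense activation as in this toy.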
Problem

Research questions and friction points this paper addresses.

multilingual LLMs
language dominance
language suppression
default language
non-English languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

language neurons
low-rank steering
sparse autoencoders
singular value decomposition
targeted language enhancement
πŸ”Ž Similar Papers
No similar papers found.