Language Lives in Sparse Dimensions: Toward Interpretable and Efficient Multilingual Control for Large Language Models

📅 2025-10-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) face challenges in multilingual control and low interpretability under scarce non-English data. Method: This paper proposes a fine-tuning-free sparse dimension intervention approach. Through representation analysis, it identifies cross-layer-consistent, language-specific sparse dimensions—localized and manipulated using only minimal parallel or monolingual data—to enable zero-shot language switching directly in the vector space. Contribution/Results: Departing from conventional neuron-level interventions, our method achieves high interpretability and low computational overhead. Experiments demonstrate superior semantic preservation and higher language-switching accuracy on multilingual generation control tasks, alongside over 60% reduction in inference latency—significantly outperforming existing prompt-based, adapter-based, and gradient-optimization methods.

Technology Category

Application Category

📝 Abstract
Large language models exhibit strong multilingual capabilities despite limited exposure to non-English data. Prior studies show that English-centric large language models map multilingual content into English-aligned representations at intermediate layers and then project them back into target-language token spaces in the final layer. From this observation, we hypothesize that this cross-lingual transition is governed by a small and sparse set of dimensions, which occur at consistent indices across the intermediate to final layers. Building on this insight, we introduce a simple, training-free method to identify and manipulate these dimensions, requiring only as few as 50 sentences of either parallel or monolingual data. Experiments on a multilingual generation control task reveal the interpretability of these dimensions, demonstrating that the interventions in these dimensions can switch the output language while preserving semantic content, and that it surpasses the performance of prior neuron-based approaches at a substantially lower cost.
Problem

Research questions and friction points this paper is trying to address.

Identifying sparse cross-lingual control dimensions in LLMs
Enabling multilingual output switching with semantic preservation
Developing efficient training-free multilingual control methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identifies sparse cross-lingual control dimensions in LLMs
Manipulates dimensions using minimal parallel or monolingual data
Switches output language while preserving semantic content
🔎 Similar Papers
No similar papers found.
C
Chengzhi Zhong
Kyoto University, Japan
F
Fei Cheng
Kyoto University, Japan
Q
Qianying Liu
National Institute of Informatics, Japan
Y
Yugo Murawaki
Kyoto University, Japan
Chenhui Chu
Chenhui Chu
Kyoto University
Machine TranslationNatural Language ProcessingVision and LanguageSpeech Processing
S
Sadao Kurohashi
Kyoto University, Japan