Language steering in latent space to mitigate unintended code-switching

📅 2025-10-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Multilingual large language models (LLMs) frequently exhibit unintended code-switching during inference, undermining reliability in downstream tasks. To address this, the paper proposes latent-space language steering: linear language directions are first extracted from parallel corpora via principal component analysis (PCA), and token embeddings are then steered along these axes with a lightweight, parameter-free projection and dynamic adjustment, incurring near-zero computational overhead. Crucially, only a small amount of parallel data is required to model cross-lingual representation disparities. Experiments on Qwen2.5 and Llama-3.2 demonstrate 95–99% language classification accuracy, up to a 42% reduction in next-token distributional divergence, and near-perfect linear separability of language features in deep layers. The approach offers efficient, interpretable, deployment-friendly multilingual controllable generation without modifying model parameters or architecture.


📝 Abstract
Multilingual Large Language Models (LLMs) often exhibit unintended code-switching, reducing reliability in downstream tasks. We propose latent-space language steering, a lightweight inference-time method that identifies language directions via PCA on parallel translations and steers token embeddings along these axes to control language identity. Our approach mitigates code-switching while preserving semantics with negligible computational overhead and requires only minimal parallel data for calibration. Empirically, we achieve 95–99% language classification accuracy using a single principal component and reduce next-token distributional divergence by up to 42% across multiple language pairs on Qwen2.5 and Llama-3.2 models. We further analyze the layer-wise evolution of language representations, revealing that language identity concentrates in final layers with near-perfect linear separability.
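The PCA step in the abstract might be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the function name, inputs, and the choice to pool embeddings from both languages before PCA are all assumptions.

```python
import numpy as np

def language_direction(emb_a: np.ndarray, emb_b: np.ndarray) -> np.ndarray:
    """Estimate a linear language direction from paired hidden states.

    emb_a, emb_b: (n_pairs, d) hidden states for parallel sentences in
    two languages (hypothetical inputs; in practice these would come
    from a chosen layer of the LLM). Returns a unit-norm direction:
    the first principal component of the pooled, centered embeddings.
    """
    x = np.vstack([emb_a, emb_b])      # pool both languages: (2n, d)
    x = x - x.mean(axis=0)             # center before PCA
    # First right-singular vector of the centered data = top PC.
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return vt[0]
```

If between-language variance dominates at the chosen layer, this top component separates the two languages; the reported 95–99% classification accuracy from a single principal component suggests that is the case in deep layers.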
Problem

Research questions and friction points this paper is trying to address.

Mitigating unintended code-switching in multilingual LLMs
Controlling language identity via latent-space steering
Reducing distributional divergence across language pairs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent-space steering mitigates unintended code-switching in LLMs
Uses PCA on parallel translations to identify language directions
Steers token embeddings to control language with minimal overhead
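The steering step in the bullets above could look like the following sketch, assuming a precomputed unit-norm language direction and a scalar target projection (e.g. the mean projection of target-language embeddings). The function name, parameters, and the linear-interpolation form are illustrative assumptions, not the paper's API.

```python
import numpy as np

def steer(hidden: np.ndarray, direction: np.ndarray,
          target_proj: float, alpha: float = 1.0) -> np.ndarray:
    """Shift a token embedding along a language direction.

    hidden: (d,) token embedding; direction: unit-norm language axis
    (e.g. from PCA); target_proj: desired projection onto the axis for
    the target language; alpha: steering strength (1.0 moves fully to
    the target projection). Parameter-free: nothing is learned.
    """
    current = float(hidden @ direction)  # current projection onto the axis
    # Adjust only the component along `direction`; components
    # orthogonal to the language axis are left untouched.
    return hidden + alpha * (target_proj - current) * direction
```

Because the update lives entirely in the one-dimensional language subspace, content carried by orthogonal components is unchanged, which is consistent with the semantics-preserving, near-zero-overhead claims.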