🤖 AI Summary
This work investigates how alignment between a large language model's (LLM) internal "latent language" and the explicit input/output language affects downstream task performance. Addressing an open question left unresolved by prior work, namely whether such alignment necessarily improves performance, we propose an implicit language consistency metric and conduct systematic experiments on two language-sensitive tasks: machine translation and geo-cultural reasoning. Using multilingual prompt engineering, we evaluate mainstream LLMs including Llama and Qwen. The key finding, demonstrated empirically for the first time, is that high performance persists even when the latent and explicit languages are misaligned: the critical adaptation occurs dynamically near the final transformer layers, where representations shift to the target language, reducing reliance on full-sequence linguistic consistency. This reveals a robust language self-adaptation capability in LLM output layers and challenges the implicit assumption that strict language alignment is optimal, offering new insight into LLM internal representations and linguistic generalization.
📝 Abstract
Large Language Models (LLMs) are known to process information in a proficient internal language, referred to as the latent language, which may differ from the input or output language. However, how discrepancies between the latent language and the input and output languages affect downstream task performance remains largely unexplored. While many studies investigate the latent language of LLMs, few examine its influence on task performance. We hypothesize that thinking consistently in the latent language enhances downstream task performance. To validate this, we vary the input prompt language across multiple downstream tasks and analyze the correlation between latent-language consistency and task performance. We construct datasets of questions from domains that are sensitive to the choice of language, namely translation and geo-culture. Experimental results across multiple LLMs on these tasks indicate that maintaining consistency in the latent language is not always necessary for optimal downstream task performance: the models adapt their internal representations near the final layers to match the target language, reducing the impact of consistency on overall performance.