Language of Thought Shapes Output Diversity in Large Language Models

📅 2026-01-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited output diversity of large language models in multifaceted and creative tasks. The authors propose enhancing English output diversity by manipulating the internal “language of thought” used during model inference, introducing both monolingual and mixed multilingual thought sampling strategies while maintaining consistent output language. They reveal, for the first time, a direct link between the language of thought and the model’s internal representation space, demonstrating that non-English languages—particularly those linguistically distant from English—serve as a structural source of diversity. By aggregating outputs derived from multilingual thought samples, the method effectively surpasses conventional diversity ceilings. Experimental results show that this approach not only significantly improves output diversity but also yields tangible benefits in culturally aware tasks, including broader cultural knowledge coverage and better alignment with diverse value systems.

📝 Abstract
Output diversity is crucial for Large Language Models as it underpins pluralism and creativity. In this work, we reveal that controlling the language used during model thinking (the "language of thought") provides a novel and structural source of output diversity. Our preliminary study shows that different thinking languages occupy distinct regions in a model's thinking space. Based on this observation, we study two repeated sampling strategies under multilingual thinking, Single-Language Sampling and Mixed-Language Sampling, and conduct diversity evaluation on outputs that are controlled to be in English, regardless of the thinking language used. Across extensive experiments, we demonstrate that switching the thinking language from English to non-English languages consistently increases output diversity, with a clear and consistent positive correlation: languages farther from English in the thinking space yield larger gains. We further show that aggregating samples across multiple thinking languages yields additional improvements through compositional effects, and that scaling sampling with linguistic heterogeneity expands the model's diversity ceiling. Finally, we show that these findings translate into practical benefits in pluralistic alignment scenarios, leading to broader coverage of cultural knowledge and value orientations in LLM outputs. Our code is publicly available at https://github.com/iNLP-Lab/Multilingual-LoT-Diversity.
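The two sampling strategies described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `generate` is a hypothetical, deterministic stand-in for an LLM call whose "thinking language" is controlled (in practice this would be done via the prompt or reasoning trace), and the distinct-sample count stands in for a real diversity metric.

```python
import random


def generate(prompt, thinking_language, sample_id):
    """Hypothetical stand-in for an LLM call that thinks in
    `thinking_language` but always answers in English."""
    # Deterministic seed so the sketch is reproducible.
    random.seed(f"{prompt}|{thinking_language}|{sample_id}")
    return f"english-answer-{thinking_language}-{random.randint(0, 9)}"


def single_language_sampling(prompt, language, n):
    """Single-Language Sampling: all n samples think in one language."""
    return [generate(prompt, language, i) for i in range(n)]


def mixed_language_sampling(prompt, languages, n):
    """Mixed-Language Sampling: spread n samples across several
    thinking languages, then aggregate the (English) outputs."""
    return [generate(prompt, languages[i % len(languages)], i) for i in range(n)]


prompt = "Name a breakfast dish."
sls = single_language_sampling(prompt, "en", 6)
mls = mixed_language_sampling(prompt, ["en", "zh", "ar"], 6)

# Crude proxy for output diversity: number of distinct samples.
print(len(set(sls)), len(set(mls)))
```

Because each thinking language occupies a different region of the model's thinking space, aggregating across languages (as in `mixed_language_sampling`) is what the paper argues pushes past the single-language diversity ceiling.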
Problem

Research questions and friction points this paper is trying to address.

output diversity
language of thought
large language models
pluralism
multilingual thinking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Language of Thought
output diversity
multilingual reasoning
thinking space
pluralistic alignment