Do Large Language Models Have an English Accent? Evaluating and Improving the Naturalness of Multilingual LLMs

📅 2024-10-21
🏛️ Annual Meeting of the Association for Computational Linguistics
📈 Citations: 3
Influential: 0
🤖 AI Summary
This paper addresses the pervasive English-centric bias of multilingual large language models (LLMs), whereby non-English outputs (e.g., in French or Chinese) exhibit lexical and syntactic “English accents” that degrade naturalness. We propose a systematic evaluation and mitigation framework. First, we introduce a novel corpus-level automatic metric that quantifies target-language naturalness along both lexical and syntactic dimensions. Second, we design a lightweight language-alignment method that integrates (i) a contrastive-linguistics-informed naturalness scoring function, (ii) cross-lingual n-gram distribution alignment, and (iii) domain-adaptive fine-tuning. Evaluations on our curated French and Chinese benchmarks show that mainstream LLMs exhibit significant English bias, and that our approach consistently improves output naturalness across languages without degrading performance on general-purpose tasks.

📝 Abstract
Current Large Language Models (LLMs) are predominantly designed with English as the primary language, and even the few that are multilingual tend to exhibit strong English-centric biases. Much like speakers who might produce awkward expressions when learning a second language, LLMs often generate unnatural outputs in non-English languages, reflecting English-centric patterns in both vocabulary and grammar. Despite the importance of this issue, the naturalness of multilingual LLM outputs has received limited attention. In this paper, we address this gap by introducing novel automatic corpus-level metrics to assess the lexical and syntactic naturalness of LLM outputs in a multilingual context. Using our new metrics, we evaluate state-of-the-art LLMs on a curated benchmark in French and Chinese, revealing a tendency towards English-influenced patterns. To mitigate this issue, we also propose a simple and effective alignment method to improve the naturalness of an LLM in a target language and domain, achieving consistent improvements in naturalness without compromising the performance on general-purpose benchmarks. Our work highlights the importance of developing multilingual metrics, resources and methods for the new wave of multilingual LLMs.
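The abstract describes corpus-level metrics that compare the lexical and syntactic patterns of LLM outputs against native usage, but this page does not specify the exact formulation. As a rough illustration of the lexical side only, the sketch below scores a set of model outputs by the Jensen-Shannon divergence between their unigram distribution and that of a native-speaker reference corpus (lower divergence suggests a more native-like lexicon). All function names and the scoring scheme here are illustrative assumptions, not the paper's actual metric.

```python
# Illustrative sketch only: the paper's real lexical-naturalness metric is not
# given on this page. This version compares the token distribution of model
# outputs against a native reference corpus via Jensen-Shannon divergence.
from collections import Counter
import math

def token_distribution(texts):
    """Relative unigram frequencies over a list of whitespace-tokenized texts."""
    counts = Counter(tok for text in texts for tok in text.lower().split())
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence (base-2 log, so bounded by 1) between two
    sparse distributions given as {token: probability} dicts."""
    vocab = set(p) | set(q)
    m = {t: 0.5 * (p.get(t, 0.0) + q.get(t, 0.0)) for t in vocab}
    def kl(a):
        return sum(a.get(t, 0.0) * math.log2(a.get(t, 0.0) / m[t])
                   for t in vocab if a.get(t, 0.0) > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

def lexical_naturalness(model_outputs, native_corpus):
    """Score in [0, 1]; 1 means the output lexicon matches the native corpus."""
    d = js_divergence(token_distribution(model_outputs),
                      token_distribution(native_corpus))
    return 1.0 - d
```

For example, `lexical_naturalness(["le chat dort"], ["le chat dort"])` returns 1.0, while outputs sharing no vocabulary with the reference corpus score 0.0. A syntactic analogue would compare distributions over POS n-grams or dependency patterns rather than surface tokens.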
Problem

Research questions and friction points this paper is trying to address.

Evaluating English-centric biases in multilingual LLMs
Assessing lexical and syntactic naturalness in non-English outputs
Improving multilingual LLM naturalness via alignment methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces automatic metrics for multilingual naturalness
Evaluates LLMs on French and Chinese benchmarks
Proposes alignment method to enhance language naturalness