Trans-EnV: A Framework for Evaluating the Linguistic Robustness of LLMs Against English Varieties

📅 2025-05-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current LLM evaluation relies heavily on Standard American English (SAE), neglecting the nonstandard English varieties used worldwide and leaving gaps in assessments of linguistic robustness and fairness. To address this, the authors propose Trans-EnV, a framework that combines a linguistically grounded rule library (derived from corpus linguistics and second-language acquisition research) with controllable LLM-based generation to automatically transform SAE datasets into 38 nonstandard varieties in a verifiable, reproducible way. Systematic evaluation across six benchmarks and seven state-of-the-art LLMs reveals accuracy drops of up to 46.3% on nonstandard varieties. The code, transformed datasets, and variety rule library are publicly released, providing a large-scale empirical foundation for cross-variety fairness research in LLMs.

📝 Abstract
Large Language Models (LLMs) are predominantly evaluated on Standard American English (SAE), often overlooking the diversity of global English varieties. This narrow focus may raise fairness concerns, as degraded performance on non-standard varieties can lead to unequal benefits for users worldwide. Therefore, it is critical to extensively evaluate the linguistic robustness of LLMs on multiple non-standard English varieties. We introduce Trans-EnV, a framework that automatically transforms SAE datasets into multiple English varieties to evaluate linguistic robustness. Our framework combines (1) linguistics expert knowledge to curate variety-specific features and transformation guidelines from linguistic literature and corpora, and (2) LLM-based transformations to ensure both linguistic validity and scalability. Using Trans-EnV, we transform six benchmark datasets into 38 English varieties and evaluate seven state-of-the-art LLMs. Our results reveal significant performance disparities, with accuracy decreasing by up to 46.3% on non-standard varieties. These findings highlight the importance of comprehensive linguistic robustness evaluation across diverse English varieties. Each construction of Trans-EnV was validated through rigorous statistical testing and consultation with a researcher in the field of second language acquisition, ensuring its linguistic validity. Our code (https://github.com/jiyounglee-0523/TransEnV) and datasets (https://huggingface.co/collections/jiyounglee0523/transenv-681eadb3c0c8cf363b363fb1) are publicly available.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' linguistic robustness across global English varieties
Addressing fairness concerns from performance gaps in non-standard English
Automating dataset transformation for diverse English variety evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatically transforms SAE datasets into 38 non-standard English varieties
Combines linguistics expert knowledge with LLM-based transformations
Validated through rigorous statistical testing and expert consultation
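The core idea of the framework, applying variety-specific transformation rules curated from linguistic literature to SAE text, can be sketched in miniature. This is a hedged illustration, not the paper's implementation: the `RULE_LIBRARY` structure, the variety name, and the two rules are hypothetical stand-ins for the curated rule library, and the paper additionally uses LLM-based generation and validation on top of such rules.

```python
import re

# Hypothetical rule library: each entry maps an SAE pattern to a
# nonstandard-variety form. The real Trans-EnV rules are curated from
# corpus linguistics and SLA research; these two are illustrative only.
RULE_LIBRARY = {
    "AAVE-like": [
        (r"\bisn't\b", "ain't"),        # negation: "isn't" -> "ain't"
        (r"\bis going to\b", "finna"),  # future marker (illustrative)
    ],
}

def transform(text: str, variety: str) -> str:
    """Apply each curated rule for the given variety, in order."""
    for pattern, replacement in RULE_LIBRARY[variety]:
        text = re.sub(pattern, replacement, text)
    return text

print(transform("She isn't here, and he is going to leave.", "AAVE-like"))
# -> She ain't here, and he finna leave.
```

In the full framework, such rule-based rewrites would be one stage of a pipeline; an LLM then produces and verifies fluent variety-consistent text at scale, which pure regex substitution cannot guarantee.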