🤖 AI Summary
Existing LLM evaluation benchmarks overestimate model reliability because their question formulations lack lexical and syntactic diversity and therefore fail to reflect real-world linguistic variability. Method: The authors systematically generate semantically equivalent paraphrases (covering synonym substitution, syntactic restructuring, and other surface-form variations) for the questions in six major benchmark suites, and conduct a large-scale robustness evaluation of 34 state-of-the-art LLMs. Results: While relative model rankings remain largely stable, absolute performance drops significantly, by more than 30% in some cases, exposing generalization deficits even in top-performing models. This work provides the first quantitative analysis of how the linguistic formulation of benchmark questions affects evaluation validity, introduces a “robustness-aware evaluation” paradigm that treats stability under input variation as a core metric, and establishes both the methodological foundations and the empirical evidence for developing more realistic, linguistically diverse benchmarks.
📝 Abstract
The effectiveness of Large Language Models (LLMs) is usually evaluated by means of benchmarks such as MMLU, ARC-C, or HellaSwag, in which questions are presented in their original wording, i.e., in a fixed, standardized format. However, real-world applications involve linguistic variability, requiring models to maintain their effectiveness across diverse rewordings of the same question or query. In this study, we assess the robustness of LLMs to paraphrased benchmark questions and investigate whether benchmark-based evaluations provide a reliable measure of model capabilities. We systematically generate paraphrases of all questions across six common benchmarks and measure the resulting variations in the effectiveness of 34 state-of-the-art LLMs of varying size and capability. Our findings reveal that, while LLM rankings remain relatively stable across paraphrased inputs, absolute effectiveness scores change and decline significantly. This suggests that LLMs struggle with linguistic variability, raising concerns about their generalization abilities and evaluation methodologies. Furthermore, the observed performance drop challenges the reliability of benchmark-based evaluations, indicating that high benchmark scores may not fully capture a model's robustness to real-world input variations. We discuss the implications of these findings for LLM evaluation methodologies, emphasizing the need for robustness-aware benchmarks that better reflect practical deployment scenarios.
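The evaluation procedure the abstract describes can be outlined in a few lines of code. The sketch below is a minimal, hypothetical outline of such a paraphrase-robustness loop, not the authors' actual implementation: `generate_paraphrases` and `query_model` are placeholder names for whichever paraphrasing method and inference backend one plugs in, and the per-model statistics simply contrast accuracy on original versus reworded questions, in line with the comparison the paper reports.

```python
# Hypothetical sketch of a paraphrase-robustness evaluation loop.
# `generate_paraphrases` and `query_model` are assumed, user-supplied callables.
from typing import Callable
from statistics import mean


def evaluate_paraphrase_robustness(
    questions: list[dict],                                   # each: {"question": str, "answer": str}
    models: list[str],                                       # model identifiers to evaluate
    generate_paraphrases: Callable[[str, int], list[str]],   # placeholder paraphraser
    query_model: Callable[[str, str], str],                  # placeholder: (model, question) -> answer
    n_paraphrases: int = 5,
) -> dict[str, dict[str, float]]:
    """Compare each model's accuracy on original vs. paraphrased questions."""
    results = {}
    for model in models:
        orig_correct, para_correct = [], []
        for item in questions:
            # Accuracy on the original, fixed wording of the benchmark question.
            orig_correct.append(query_model(model, item["question"]) == item["answer"])
            # Accuracy averaged over semantically equivalent rewordings.
            paraphrases = generate_paraphrases(item["question"], n_paraphrases)
            para_correct.append(mean(
                query_model(model, p) == item["answer"] for p in paraphrases
            ))
        orig_acc, para_acc = mean(orig_correct), mean(para_correct)
        results[model] = {
            "original_accuracy": orig_acc,
            "paraphrased_accuracy": para_acc,
            "absolute_drop": orig_acc - para_acc,
        }
    return results
```

Rankings derived from `original_accuracy` and `paraphrased_accuracy` can then be compared, for example with a rank correlation such as Spearman's rho, to check whether the ordering of models stays stable even as absolute scores drop.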