🤖 AI Summary
This work investigates whether observed prompt sensitivity in large language models (LLMs) reflects genuine model deficiencies or stems from methodological biases in evaluation. We systematically assess seven LLMs across six benchmarks under twelve distinct prompt templates, comparing three evaluation paradigms: rigid exact-match scoring, log-likelihood–based scoring, and LLM-as-a-Judge, a semantically aware, reference-free approach. Results demonstrate that conventional exact-match evaluation substantially overestimates prompt sensitivity by disregarding semantically equivalent outputs; in contrast, LLM-as-a-Judge reduces cross-template performance variance by an average of 42% and improves ranking stability (Spearman correlation increases by 0.58). This study provides empirical evidence that modern LLMs' prompt robustness is severely underestimated under standard evaluation protocols. We propose an evaluation framework centered on semantic consistency, establishing a methodological foundation for rigorous assessment of LLM reliability.
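The ranking-stability claim above can be made concrete with Spearman's rank correlation: if two prompt templates induce the same ordering of models, rho is 1.0; the more the ordering shuffles, the lower rho falls. A minimal sketch with hypothetical accuracy numbers (not from the paper), using the no-ties formula:

```python
def ranks(xs):
    # Rank positions (1 = best); assumes no ties for simplicity.
    order = sorted(range(len(xs)), key=lambda i: -xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    # Spearman's rho via the no-ties formula: 1 - 6*sum(d^2) / (n*(n^2 - 1)).
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical accuracies of five models under different prompt templates.
template_1 = [0.82, 0.75, 0.69, 0.61, 0.55]
template_2 = [0.80, 0.77, 0.66, 0.64, 0.52]  # same model ordering as template_1
template_3 = [0.60, 0.81, 0.55, 0.78, 0.70]  # shuffled model ordering

print(spearman(template_1, template_2))  # 1.0 (stable ranking)
print(spearman(template_1, template_3))  # negative rho (unstable ranking)
```

Higher cross-template rho means benchmark conclusions about which model is "best" do not depend on the particular prompt wording chosen.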
📝 Abstract
Prompt sensitivity, the phenomenon where paraphrasing a prompt (expressing the same instruction in different words) leads to significant changes in large language model (LLM) performance, has been widely accepted as a core limitation of LLMs. In this work, we revisit this issue and ask: Is the widely reported high prompt sensitivity truly an inherent weakness of LLMs, or is it largely an artifact of evaluation processes? To answer this question, we systematically evaluate 7 LLMs (e.g., the GPT and Gemini families) across 6 benchmarks spanning both multiple-choice and open-ended tasks, using 12 diverse prompt templates. We find that much of the reported prompt sensitivity stems from heuristic evaluation methods, including log-likelihood scoring and rigid answer matching, which often overlook semantically correct responses expressed through alternative phrasings, such as synonyms or paraphrases. When we adopt LLM-as-a-Judge evaluation, we observe a substantial reduction in performance variance and a consistently higher correlation in model rankings across prompts. Our findings suggest that modern LLMs are more robust to prompt templates than previously believed, and that prompt sensitivity may be more an artifact of evaluation than a flaw in the models.
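The core failure mode can be sketched in a few lines: rigid exact-match scoring rejects a paraphrased but correct answer, inflating cross-template variance, while a semantic check accepts it. Here a simple containment test stands in for an actual LLM-as-a-Judge call; the responses and gold answer are hypothetical illustrations, not data from the paper.

```python
from statistics import pvariance

# Hypothetical outputs for the same question under three prompt templates.
# The gold answer is "Paris"; all three responses are semantically correct.
gold = "paris"
responses_per_template = ["Paris", "The answer is Paris.", "It's Paris, France."]

def exact_match(response: str, gold: str) -> int:
    # Rigid matching: only an exact (case-insensitive) string match counts.
    return int(response.strip().lower() == gold)

def judge(response: str, gold: str) -> int:
    # Stand-in for an LLM-as-a-Judge call: substring containment
    # approximates "does the response express the gold answer?"
    return int(gold in response.lower())

em_scores = [exact_match(r, gold) for r in responses_per_template]
judge_scores = [judge(r, gold) for r in responses_per_template]

print(em_scores, pvariance(em_scores))        # [1, 0, 0] -> nonzero variance across templates
print(judge_scores, pvariance(judge_scores))  # [1, 1, 1] -> zero variance across templates
```

Under exact match, the score swings with surface phrasing even though the model is consistently correct; the semantic scorer removes that spurious variance, which is the effect the paper measures at scale.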