🤖 AI Summary
This work identifies a previously unrecognized "shortcut bias" in large language models (LLMs) used as automatic evaluators: their judgments depend not only on the actual quality of responses but also on superficial cues embedded in the prompt, such as temporal markers ("new") or source indicators ("expert"). To isolate this effect, the authors construct controlled evaluation tasks from ELI5 and LitBench, employing GPT-4o and Gemini-2.5-Flash to perform pairwise response comparisons while systematically injecting recency and provenance cues. The results show that both LLM evaluators consistently prefer responses labeled "new" or "expert", especially in subjective tasks; critically, their generated justifications almost never reference these cues, revealing that the stated explanations are unfaithful to the factors actually driving the verdicts. As the first systematic empirical demonstration of implicit recency and source bias in LLM-based evaluation, the work challenges the assumption that such models are trustworthy automated assessment tools and provides foundational insights for developing reliable AI evaluation methodologies.
📝 Abstract
Large language models (LLMs) are increasingly deployed as automatic judges to evaluate system outputs in tasks such as summarization, dialogue, and creative writing. A faithful judge should base its verdicts solely on response quality and explicitly acknowledge the factors shaping its decision. We show that current LLM judges fail on both counts by relying on shortcuts introduced in the prompt. Our study uses two evaluation datasets: ELI5, a benchmark for long-form question answering, and LitBench, a recent benchmark for creative writing. Both datasets provide pairwise comparisons, where the evaluator must choose which of two responses is better. From each dataset we construct 100 pairwise judgment tasks and employ two widely used models, GPT-4o and Gemini-2.5-Flash, as evaluators in the role of LLM-as-a-judge. For each pair, we assign superficial cues to the responses: provenance cues indicating source identity (Human, Expert, LLM, or Unknown) and recency cues indicating temporal origin (Old: 1950 vs. New: 2025), while keeping the rest of the prompt fixed. Results reveal consistent verdict shifts: both models exhibit a strong recency bias, systematically favoring new responses over old, as well as a clear provenance hierarchy (Expert > Human > LLM > Unknown). These biases are especially pronounced in GPT-4o and in the more subjective and open-ended LitBench domain. Crucially, cue acknowledgment is rare: justifications almost never reference the injected cues, instead rationalizing decisions in terms of content qualities. These findings demonstrate that current LLM-as-a-judge systems are shortcut-prone and unfaithful, undermining their reliability as evaluators in both research and deployment.
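
To make the cue-injection protocol concrete, here is a minimal sketch of how such an experiment can be run. This is an illustration, not the authors' released code: the prompt wording, the cue vocabulary, and the `judge` callable (a stand-in for an API call to GPT-4o or Gemini-2.5-Flash) are all assumptions made for the example.

```python
import itertools
import re

# Superficial cues described in the paper: source identity and temporal origin.
PROVENANCE = ["Expert", "Human", "LLM", "Unknown"]
RECENCY = {"Old": 1950, "New": 2025}

# Hypothetical judging prompt; only the cue labels vary across conditions,
# while the question and the two responses stay fixed.
PROMPT_TEMPLATE = """You are comparing two answers to the same question.
Question: {question}

Response A (source: {src_a}, written in {year_a}):
{resp_a}

Response B (source: {src_b}, written in {year_b}):
{resp_b}

Which response is better? Explain your reasoning, then finish with a
final line reading exactly "Verdict: A" or "Verdict: B"."""


def cue_conditions():
    """Every ordered pair of (source, year) labels for the two responses."""
    cues = [(src, year) for src in PROVENANCE for year in RECENCY.values()]
    return itertools.permutations(cues, 2)


def run_pair(question, resp_a, resp_b, judge):
    """Collect one verdict per cue condition for a fixed response pair.

    `judge` is a hypothetical callable wrapping the evaluator model's API;
    it takes a prompt string and returns the model's raw reply. Because the
    responses never change, any verdict flip across conditions is evidence
    of shortcut bias.
    """
    verdicts = {}
    for (src_a, year_a), (src_b, year_b) in cue_conditions():
        prompt = PROMPT_TEMPLATE.format(
            question=question,
            src_a=src_a, year_a=year_a, resp_a=resp_a,
            src_b=src_b, year_b=year_b, resp_b=resp_b,
        )
        reply = judge(prompt)
        verdicts[((src_a, year_a), (src_b, year_b))] = (
            "A" if "Verdict: A" in reply else "B"
        )
    return verdicts


# Crude faithfulness check: does the written justification ever cite an
# injected cue? ("old"/"new" will occasionally match innocuous uses, so a
# real analysis would need manual or model-assisted annotation.)
CUE_PATTERN = re.compile(r"\b(expert|human|llm|unknown|1950|2025|old|new)\b",
                         re.IGNORECASE)

def mentions_cue(justification):
    return bool(CUE_PATTERN.search(justification))
```

Holding the responses fixed while permuting only the labels is what licenses the causal reading: any shift in the verdict distribution must come from the cues. Pairing that with a check like `mentions_cue` separates what actually moved the verdict from what the judge claims moved it, which is the paper's unfaithfulness finding in miniature.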