What Has Been Lost with Synthetic Evaluation?

📅 2025-05-28
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work investigates whether LLM-generated evaluation benchmarks satisfy three core requirements: phenomenon specificity, shortcut resistance, and high difficulty. The authors construct LLM-synthesized variants of CondaQA (for negation reasoning) and DROP (for quantitative reasoning) via prompt engineering, then rigorously evaluate their validity and difficulty through human annotation consistency checks and cross-model performance comparisons. Results reveal, for the first time, that although LLM-synthesized data adhere to annotation guidelines and cost only a fraction of crowdsourcing, they exhibit a 12.7% average reduction in challenge level for mainstream LLMs and are significantly more susceptible to superficial pattern exploitation. This study uncovers a systemic difficulty decay in current LLM-generated benchmarks, risking inflated estimates of model capability. It provides critical empirical evidence and methodological insights for developing trustworthy, rigorously validated evaluation benchmarks in language model assessment.

๐Ÿ“ Abstract
Large language models (LLMs) are increasingly used for data generation. However, creating evaluation benchmarks raises the bar for this emerging paradigm. Benchmarks must target specific phenomena, penalize exploiting shortcuts, and be challenging. Through two case studies, we investigate whether LLMs can meet these demands by generating reasoning-over-text benchmarks and comparing them to those created through careful crowdsourcing. Specifically, we evaluate both the validity and difficulty of LLM-generated versions of two high-quality reading comprehension datasets: CondaQA, which evaluates reasoning about negation, and DROP, which targets reasoning about quantities. We find that prompting LLMs can produce variants of these datasets that are often valid according to the annotation guidelines, at a fraction of the cost of the original crowdsourcing effort. However, we show that they are less challenging for LLMs than their human-authored counterparts. This finding sheds light on what may have been lost by generating evaluation data with LLMs, and calls for critically reassessing the immediate use of this increasingly prevalent approach to benchmark creation.
Problem

Research questions and friction points this paper is trying to address.

Assessing validity and difficulty of LLM-generated benchmarks
Comparing LLM-generated vs human-crowdsourced evaluation datasets
Identifying shortcomings in synthetic benchmark creation methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs generate reasoning benchmarks
Compare LLM vs crowdsourced datasets
Assess validity and difficulty differences