Easy2Hard-Bench: Standardized Difficulty Labels for Profiling LLM Performance and Generalization

📅 2024-09-27
🏛️ Neural Information Processing Systems
📈 Citations: 7
✨ Influential: 0
🤖 AI Summary
Existing LLM evaluation frameworks lack fine-grained, cross-domain difficulty benchmarks, hindering systematic characterization of easy-to-hard generalization. Method: The authors construct a unified benchmark of six consistently formatted datasets spanning mathematics, programming, chess, and logical reasoning. They combine Item Response Theory (IRT) and the Glicko-2 rating system to calibrate per-problem difficulty scores from large-scale real-world response data collected from human solvers and from LLMs on prominent leaderboards. Contribution/Results: The benchmark offers notably higher coverage of hard problems than prior collections and provides a fine-grained, psychometrically grounded difficulty spectrum for LLM evaluation. Experiments with six state-of-the-art LLMs reveal a systematic performance decay as difficulty increases, highlighting limits on easy-to-hard generalization. All datasets, calibrated difficulty scores, and implementation code are publicly released on Hugging Face.
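To make the IRT calibration step concrete, here is a minimal sketch of fitting a one-parameter (Rasch) IRT model, the simplest member of the model family the summary names, to a binary matrix of solve attempts. The function name `fit_rasch`, the gradient-ascent fitting procedure, the learning rate, and the synthetic data are all illustrative assumptions, not the paper's actual pipeline, which calibrates on large-scale real response data.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_rasch(responses, n_iters=1000, lr=0.5):
    """Joint MLE for a 1PL (Rasch) IRT model via gradient ascent.

    Model: P(solver i answers item j correctly) = sigmoid(ability_i - difficulty_j).
    `responses` is a (solvers x items) 0/1 matrix of solve attempts.
    """
    n_solvers, n_items = responses.shape
    ability = np.zeros(n_solvers)
    difficulty = np.zeros(n_items)
    for _ in range(n_iters):
        p = sigmoid(ability[:, None] - difficulty[None, :])
        resid = responses - p                 # log-likelihood gradient signal
        ability += lr * resid.mean(axis=1)
        difficulty -= lr * resid.mean(axis=0)
        difficulty -= difficulty.mean()       # pin the scale (identifiability)
    return ability, difficulty

# Synthetic sanity check: recover a known easy-to-hard ordering.
rng = np.random.default_rng(0)
true_b = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # true item difficulties
true_theta = rng.normal(size=200)                # solver abilities
p_correct = sigmoid(true_theta[:, None] - true_b[None, :])
R = (rng.random(p_correct.shape) < p_correct).astype(float)
_, est_b = fit_rasch(R)
```

On this synthetic data the estimated difficulties recover the true easy-to-hard ordering, which is exactly the kind of per-problem numerical score the benchmark attaches to each item.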

๐Ÿ“ Abstract
While generalization over tasks from easy to hard is crucial for profiling large language models (LLMs), datasets with fine-grained difficulty annotations for each problem across a broad range of complexity are still lacking. Aiming to address this limitation, we present Easy2Hard-Bench, a consistently formatted collection of 6 benchmark datasets spanning various domains, such as mathematics and programming problems, chess puzzles, and reasoning questions. Each problem within these datasets is annotated with a numerical difficulty score. To systematically estimate problem difficulties, we collect abundant performance data on attempts at each problem by humans in the real world or by LLMs on prominent leaderboards. Leveraging the rich performance data, we apply well-established difficulty ranking systems, such as Item Response Theory (IRT) and Glicko-2 models, to uniformly assign numerical difficulty scores to problems. Moreover, the datasets in Easy2Hard-Bench distinguish themselves from previous collections by a higher proportion of challenging problems. Through extensive experiments with six state-of-the-art LLMs, we provide a comprehensive analysis of their performance and generalization capabilities across varying levels of difficulty, with the aim of inspiring future research in LLM generalization. The datasets are available at https://huggingface.co/datasets/furonghuang-lab/Easy2Hard-Bench.
Problem

Research questions and friction points this paper is trying to address.

Lack of datasets with fine-grained difficulty annotations for LLM profiling
Need for standardized difficulty scores across diverse problem domains
Limited understanding of LLM generalization from easy to hard tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Standardized difficulty labels for LLM profiling
Uses IRT and Glicko-2 for difficulty scoring
High proportion of challenging problems included
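The Glicko-2 side of the scoring can be pictured by treating each solve attempt as a match between a solver and a problem. Full Glicko-2 additionally tracks a rating deviation and volatility per player; the sketch below is only the simpler Elo-style core update, given as an illustration. The function name `elo_update` and the K-factor of 32 are assumptions for this example, not values from the paper.

```python
def elo_update(problem_rating, solver_rating, solved, k=32):
    """One Elo-style rating update, treating a solve attempt as a match
    between a solver and a problem. A failed attempt (solved=0) raises the
    problem's rating, i.e. its estimated difficulty.

    Simplified stand-in for Glicko-2, which also maintains a rating
    deviation and a volatility term per rating.
    """
    # Expected probability that the solver beats (solves) the problem.
    expected = 1.0 / (1.0 + 10.0 ** ((problem_rating - solver_rating) / 400.0))
    solver_rating += k * (solved - expected)
    problem_rating -= k * (solved - expected)
    return problem_rating, solver_rating
```

For example, when a 1500-rated solver fails against a 1500-rated problem, the problem's rating rises to 1516 while the solver's falls to 1484; aggregated over many attempts, problem ratings converge to a difficulty scale of the kind the benchmark reports.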