Revisiting Generalization Across Difficulty Levels: It's Not So Easy

πŸ“… 2025-11-26
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This study investigates the cross-difficulty generalization of large language models (LLMs), addressing the question: does the difficulty of training data affect model performance on test sets of varying difficulty? We propose an objective difficulty quantification method grounded in the outputs of a large ensemble of models and Item Response Theory (IRT), enabling fine-grained, scalable, human-annotation-free difficulty ranking of examples across six diverse datasets. Experiments show that models trained on either easy or hard data fail to achieve consistent performance improvements across the full difficulty spectrum, exposing substantial limitations in cross-difficulty generalization. Our core contribution is a difficulty-aware evaluation framework, which empirically shows that both training and evaluation data must span heterogeneous difficulty levels to avoid the systematic bias introduced by single-difficulty assumptions.

πŸ“ Abstract
We investigate how well large language models (LLMs) generalize across different task difficulties, a key question for effective data curation and evaluation. Existing research is mixed regarding whether training on easier or harder data leads to better results, and whether those gains come on easier or harder test data. We address this question by conducting a systematic evaluation of LLMs' generalization across models, datasets, and fine-grained groups of example difficulty. We rank examples in six datasets using the outputs of thousands of different LLMs and Item Response Theory (IRT), a well-established difficulty metric in educational testing. Unlike prior work, our difficulty ratings are therefore determined solely by the abilities of many different LLMs, excluding human opinions of difficulty. With a more objective, larger-scale, and finer-grained analysis, we show that cross-difficulty generalization is often limited; training on either easy or hard data cannot achieve consistent improvements across the full range of difficulties. These results show the importance of having a range of difficulties in both training and evaluation data for LLMs, and that taking shortcuts with respect to difficulty is risky.
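To make the difficulty-ranking step concrete, the sketch below estimates per-example difficulty from a binary correctness matrix (rows = LLMs, columns = examples) with a one-parameter (Rasch) IRT model. This is not the authors' code: the synthetic data, the choice of the Rasch variant, and the joint maximum-likelihood fit via `scipy` are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): estimate IRT item difficulty
# from a models-by-examples correctness matrix, then rank examples by it.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic sigmoid

rng = np.random.default_rng(0)

# Synthetic stand-in for real data: responses[m, i] = 1 if model m answered
# example i correctly. In the paper this would come from many LLMs' outputs.
n_models, n_items = 200, 60
true_ability = rng.normal(0.0, 1.0, n_models)
true_difficulty = rng.normal(0.0, 1.0, n_items)
responses = rng.binomial(1, expit(true_ability[:, None] - true_difficulty[None, :]))

def neg_log_likelihood(params):
    """Rasch (1PL) model: P(correct) = sigmoid(ability_m - difficulty_i)."""
    ability, difficulty = params[:n_models], params[n_models:]
    p = expit(ability[:, None] - difficulty[None, :])
    eps = 1e-9  # guard against log(0)
    return -np.sum(responses * np.log(p + eps) + (1 - responses) * np.log(1 - p + eps))

fit = minimize(neg_log_likelihood, np.zeros(n_models + n_items), method="L-BFGS-B")
difficulty_hat = fit.x[n_models:]

# Examples sorted from easiest to hardest according to the fitted model.
ranking = np.argsort(difficulty_hat)
print("easiest examples:", ranking[:5], "hardest examples:", ranking[-5:])
```

Note that the fitted difficulty scale is only identified up to a constant shift, which is harmless here because only the relative ranking of examples is used.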
Problem

Research questions and friction points this paper is trying to address.

Investigates LLMs' generalization across varying task difficulty levels
Evaluates whether training on easy or hard data yields better results
Assesses the need for diverse difficulty in training and evaluation data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using LLM outputs and IRT for difficulty ranking
Excluding human opinions to determine difficulty objectively
Analyzing cross-difficulty generalization with fine-grained difficulty groups (see the sketch below)
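
The grouping idea from the last bullet can be sketched as follows: sort examples by their estimated IRT difficulty, split them into equal-sized buckets, and report accuracy per bucket for each fine-tuned model. The bucket count, the synthetic responses, and the model labels are assumptions for illustration, not the paper's evaluation code.

```python
# Illustrative sketch: fine-grained difficulty groups and per-group accuracy.
import numpy as np

def per_bucket_accuracy(correct: np.ndarray, difficulty: np.ndarray, n_buckets: int = 10):
    """Split examples into equal-sized difficulty buckets (easy -> hard) and
    return the mean accuracy within each bucket."""
    order = np.argsort(difficulty)
    return [correct[idx].mean() for idx in np.array_split(order, n_buckets)]

# Synthetic per-example correctness for two hypothetical fine-tuned models.
rng = np.random.default_rng(1)
difficulty = rng.normal(size=600)  # e.g. IRT difficulty estimates
easy_trained = rng.binomial(1, np.clip(0.90 - 0.30 * difficulty, 0.05, 0.95))
hard_trained = rng.binomial(1, np.clip(0.65 - 0.05 * difficulty, 0.05, 0.95))

for name, correct in [("trained-on-easy", easy_trained), ("trained-on-hard", hard_trained)]:
    print(name, [round(a, 2) for a in per_bucket_accuracy(correct, difficulty)])
```

Reporting accuracy bucket by bucket, rather than as a single aggregate number, is what exposes the inconsistent gains across the difficulty spectrum that the paper describes.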
πŸ”Ž Similar Papers
No similar papers found.