🤖 AI Summary
Existing code generation benchmarks lack a systematic characterization of task difficulty and overlook the interaction between prompt formulation and model capabilities. Method: HardEval, a framework that computes a difficulty score for each task in a benchmark by running a diverse array of prompts for that task across multiple LLMs, and that uses the identified hard tasks to craft new ones. Contribution/Results: On HumanEval+ and ClassEval, HardEval reliably identifies the hard tasks, showing that only 21% of HumanEval+ and 27% of ClassEval tasks are hard for LLMs. Its difficulty analysis also characterizes 6 practical hard-task topics, which are used to generate new hard tasks centred on specific topics. Orthogonal to accuracy-based benchmark scores, the approach supports a more principled, fine-grained assessment of code generation models and can be applied to other domains such as code completion or code-related Q/A.
📝 Abstract
Large Language Models (LLMs) show promising potential in Software Engineering, especially for code-related tasks like code completion and code generation. Evaluation of LLMs is generally centred on aggregate metrics computed over benchmarks. While such metrics paint a macroscopic view of a benchmark and of an LLM's capabilities, it remains unclear how each programming task in these benchmarks assesses those capabilities. In particular, the difficulty level of the tasks in a benchmark is not reflected in the score used to report a model's performance. Yet, a model achieving a 90% score on a benchmark of predominantly easy tasks is likely less capable than a model achieving a 90% score on a benchmark of predominantly difficult tasks. This paper devises a framework, HardEval, for assessing task difficulty for LLMs and crafting new tasks based on the identified hard tasks. The framework uses a diverse array of prompts for a single task across multiple LLMs to obtain a difficulty score for each task in a benchmark. Using two code generation benchmarks, HumanEval+ and ClassEval, we show that HardEval can reliably identify the hard tasks within those benchmarks, highlighting that only 21% of HumanEval+ and 27% of ClassEval tasks are hard for LLMs. Through our analysis of task difficulty, we also characterize 6 practical hard-task topics, which we used to generate new hard tasks. Orthogonal to current benchmarking efforts, HardEval can assist researchers and practitioners in building better assessments of LLMs: the difficulty score can be used to identify hard tasks within existing benchmarks, which in turn can be leveraged to generate more hard tasks centred around specific topics, either for the evaluation or the improvement of LLMs. HardEval's general approach can also be applied to other domains such as code completion or Q/A.
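To make the scoring idea concrete, here is a minimal sketch of one plausible way a per-task difficulty score could be computed from pass/fail outcomes gathered across several prompt variants and several models. The data layout, the model and prompt names, and the simple inverted-mean aggregation are illustrative assumptions, not HardEval's actual metric, which the paper defines more carefully.

```python
from statistics import mean

# Hypothetical results for a single task: results[model][prompt_variant] is True
# if the solution generated under that prompt passed the task's tests.
# Model and prompt names are made up for illustration.
results = {
    "model_a": {"base": True,  "with_examples": True,  "step_by_step": False},
    "model_b": {"base": False, "with_examples": True,  "step_by_step": False},
    "model_c": {"base": False, "with_examples": False, "step_by_step": False},
}

def task_difficulty(results):
    """One simple aggregation (an assumption, not the paper's formula):
    1 minus the per-model mean pass rate, averaged across models.
    0.0 means every model solved the task under every prompt; 1.0 means none did."""
    per_model_pass_rates = [
        mean(1.0 if passed else 0.0 for passed in prompts.values())
        for prompts in results.values()
    ]
    return 1.0 - mean(per_model_pass_rates)

print(f"difficulty: {task_difficulty(results):.2f}")  # difficulty: 0.67
```

Under this kind of scheme, a threshold on the score (say, flagging tasks above 0.5) would carve out a "hard" subset analogous to the 21% of HumanEval+ and 27% of ClassEval tasks the paper reports; the exact cutoff and aggregation are choices the framework itself pins down.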