🤖 AI Summary
This study addresses the limited representativeness of existing code generation benchmarks, such as HumanEval, in covering programming language knowledge, a gap that biases evaluations of large language models (LLMs). The authors introduce the first systematic approach based on knowledge units (KUs) to quantitatively analyze the coverage gap between benchmarks and real-world projects. They further propose a prompt-engineering-based task synthesis framework that automatically generates 440 new tasks to improve benchmark representativeness. Experimental results show that the augmented benchmarks achieve substantially higher KU coverage and over 60% better alignment with the knowledge distribution of real projects. Notably, the performance of mainstream LLMs drops by 12.54–44.82% on the augmented benchmarks, revealing that prior evaluations have substantially overestimated model capabilities.
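To make the task-synthesis idea concrete, the sketch below shows one way a KU-targeted prompt could be assembled. The template wording, the KU description, and the commented-out `call_llm()` helper are illustrative assumptions, not a reproduction of the paper's actual prompt-engineering framework.

```python
# Minimal sketch: build a prompt that asks an LLM to synthesize a benchmark task
# exercising one specific, under-represented knowledge unit (KU).
# The template and call_llm() are hypothetical; the paper's real prompts differ.
PROMPT_TEMPLATE = """You are generating a Python coding task for an evaluation benchmark.
The task must require the following knowledge unit (KU): {ku_name} - {ku_description}.
Return:
1. A natural-language problem statement.
2. A reference solution that exercises the KU.
3. Unit tests that fail if the KU is not used correctly.
"""

def build_synthesis_prompt(ku_name: str, ku_description: str) -> str:
    """Fill the template for one under-represented KU."""
    return PROMPT_TEMPLATE.format(ku_name=ku_name, ku_description=ku_description)

# Example usage for a KU that a benchmark might under-cover.
prompt = build_synthesis_prompt(
    ku_name="Exception handling",
    ku_description="defining, raising, and catching exceptions with try/except/finally",
)
# new_task = call_llm(prompt)  # hypothetical LLM call; plug in any model API here
print(prompt)
```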
📝 Abstract
Large Language Models (LLMs) such as GPT-4, Claude, and LLaMA have shown impressive performance in code generation, typically evaluated using benchmarks (e.g., HumanEval). However, effective code generation requires models to understand and apply a wide range of language concepts. If the concepts exercised in benchmarks are not representative of those used in real-world projects, the resulting evaluations may be incomplete. Despite this concern, the representativeness of code concepts in benchmarks has not been systematically examined. To address this gap, we present the first empirical study that analyzes the representativeness of code generation benchmarks through the lens of Knowledge Units (KUs) - cohesive sets of programming language capabilities provided by language constructs and APIs. We analyze KU coverage in two widely used Python benchmarks, HumanEval and MBPP, and compare them with 30 real-world Python projects. Our results show that each benchmark covers only half of the 20 identified KUs and exhibits a highly skewed KU distribution, whereas the projects exercise all KUs with relatively balanced distributions. To mitigate this misalignment, we propose a prompt-based LLM framework that synthesizes KU-based tasks to rebalance benchmark KU distributions and better align them with real-world usage. Using this framework, we generate 440 new tasks and augment the existing benchmarks. The augmented benchmarks substantially improve KU coverage and achieve over a 60% improvement in distributional alignment. Evaluations of state-of-the-art LLMs on these augmented benchmarks reveal consistent and statistically significant performance drops (12.54-44.82%), indicating that existing benchmarks overestimate LLM performance due to their limited KU coverage. Our findings provide actionable guidance for building more realistic evaluations of LLM code-generation capabilities.
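As a rough illustration of the KU coverage and distributional-alignment comparison the abstract describes, the sketch below contrasts a benchmark's KU frequency distribution with that of real projects. The KU labels, the toy counts, and the use of Jensen-Shannon distance as the alignment measure are assumptions for illustration; the paper does not specify its extraction pipeline or metric here.

```python
# Minimal sketch: compare a benchmark's KU distribution against real projects.
# KU labels/counts are toy data; the alignment measure (1 - Jensen-Shannon
# distance) is an assumed stand-in for whatever metric the paper actually uses.
from collections import Counter

import numpy as np
from scipy.spatial.distance import jensenshannon

# Hypothetical KU tags extracted from benchmark tasks and project code.
benchmark_kus = ["strings", "lists", "strings", "arithmetic", "lists", "strings"]
project_kus = ["strings", "lists", "exceptions", "io", "classes", "arithmetic",
               "generators", "io", "classes", "exceptions"]

all_kus = sorted(set(benchmark_kus) | set(project_kus))

def distribution(kus, labels):
    """Normalized KU frequency vector over a fixed label order."""
    counts = Counter(kus)
    total = sum(counts.values())
    return np.array([counts.get(label, 0) / total for label in labels])

bench_dist = distribution(benchmark_kus, all_kus)
proj_dist = distribution(project_kus, all_kus)

# Coverage: fraction of project KUs that the benchmark exercises at all.
coverage = len(set(benchmark_kus) & set(project_kus)) / len(set(project_kus))

# Alignment: 1 minus the Jensen-Shannon distance between the two distributions.
alignment = 1.0 - jensenshannon(bench_dist, proj_dist)

print(f"KU coverage: {coverage:.2%}, distributional alignment: {alignment:.3f}")
```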