🤖 AI Summary
Existing evaluation benchmarks for large language models in software engineering often suffer from narrow task coverage, single-dimensional metrics, a lack of realistic context, and data contamination, limiting their ability to comprehensively assess model robustness, fairness, and practical utility. To address these limitations, this work proposes BEHELM, the first full-stack benchmarking framework tailored for software engineering. BEHELM establishes a unified, standardized, and reproducible evaluation infrastructure through structured modeling of software scenarios, multi-granularity input-output specifications, and a multidimensional quality metric system covering robustness, explainability, fairness, and efficiency. By significantly lowering the barrier to constructing high-quality benchmarks, BEHELM enables systematic cross-task, cross-language, and cross-granularity evaluations, offering the community a more equitable, realistic, and future-oriented assessment paradigm.
📝 Abstract
Large language models for code are advancing fast, yet our ability to evaluate them lags behind. Current benchmarks focus on narrow tasks and single metrics, hiding critical gaps in robustness, interpretability, fairness, efficiency, and real-world usability. They also suffer from inconsistent data engineering practices, limited software engineering context, and widespread contamination issues. To understand these problems and chart a path forward, we combined an in-depth survey of existing benchmarks with insights gathered from a dedicated community workshop. We identified three core barriers to reliable evaluation: the absence of software-engineering-rich datasets, overreliance on ML-centric metrics, and the lack of standardized, reproducible data pipelines. Building on these findings, we introduce BEHELM, a holistic benchmarking infrastructure that unifies software-scenario specification with multi-metric evaluation. BEHELM provides a structured way to assess models across tasks, languages, input and output granularities, and key quality dimensions. Our goal is to reduce the overhead currently required to construct benchmarks while enabling fair, realistic, and future-proof assessment of LLMs in software engineering.