🤖 AI Summary
Large language models (LLMs) frequently generate code containing “code smells”—design flaws such as redundant conditions or mergeable type checks—that impair maintainability and correctness.
Method: We introduce CodeSmellEval, the first benchmark specifically designed to assess LLMs’ propensity to generate smelly code. It comprises a method-level dataset (CodeSmellData), a novel quantitative metric—the Propensity Smelly Score (PSC)—and a reproducible evaluation framework grounded in static analysis and expert annotation.
Contribution/Results: Empirical evaluation of prominent open-source models, including CodeLlama and Mistral, reveals a pervasive tendency to generate code smells, demonstrating both the benchmark's utility and its necessity. This work offers the first systematic characterization of code-quality hazards in LLM-generated code, providing a critical assessment dimension and a practical toolset for trustworthy code generation.
📝 Abstract
Large Language Models (LLMs) have shown significant potential in automating software engineering tasks, particularly code generation. However, current evaluation benchmarks, which primarily focus on accuracy, fall short in assessing the quality of the generated code, specifically the models' tendency to produce code smells. To address this limitation, we introduce CodeSmellEval, a benchmark designed to evaluate LLMs' propensity to generate code smells. Our benchmark includes a novel metric, the Propensity Smelly Score (PSC), and a curated dataset of method-level code smells, CodeSmellData. To demonstrate the use of CodeSmellEval, we conducted a case study with two state-of-the-art LLMs, CodeLlama and Mistral. The results reveal that both models tend to generate code smells, such as simplifiable-condition and consider-merging-isinstance. These findings highlight the effectiveness of our benchmark in evaluating LLMs and the insight it provides into their reliability when used for code generation.
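For readers unfamiliar with the two smells named in the abstract, both correspond to standard Pylint checks. The sketch below is illustrative only (not taken from the paper or its dataset); the function names are hypothetical, and each smelly variant is paired with its conventional refactoring.

```python
# Smell: simplifiable-condition -- a boolean condition that can be reduced.
def is_ready_smelly(flag: bool) -> bool:
    # `flag and True` always simplifies to just `flag`
    return bool(flag and True)

def is_ready_clean(flag: bool) -> bool:
    return flag

# Smell: consider-merging-isinstance -- repeated isinstance() calls on the
# same object that could be merged into one call with a tuple of types.
def is_number_smelly(value) -> bool:
    return isinstance(value, int) or isinstance(value, float)

def is_number_clean(value) -> bool:
    return isinstance(value, (int, float))
```

Both pairs are behaviorally equivalent; the smelly forms are merely harder to read and maintain, which is exactly the kind of defect an accuracy-only benchmark would miss.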