OODEval: Evaluating Large Language Models on Object-Oriented Design

📅 2026-01-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the lack of systematic evaluation of large language models (LLMs) in object-oriented design (OOD) within software engineering, as existing assessments predominantly focus on code generation. To bridge this gap, the authors introduce OODEval, a novel benchmark comprising 50 human-crafted OOD tasks and 940 student-submitted class diagrams, along with OODEval-Human, an expert-graded dataset. They further propose CLUE, a unified evaluation metric that jointly assesses global correctness and fine-grained design quality. A comprehensive evaluation of 29 LLMs using this framework reveals that while current models can produce syntactically valid class diagrams, they frequently exhibit semantic flaws. Among them, Qwen3-Coder-30B achieves the best performance, approaching the average level of undergraduate students but still falling significantly short of expert human designers.

📝 Abstract
Recent advances in large language models (LLMs) have driven extensive evaluations in software engineering. However, most prior work concentrates on code-level tasks, leaving software design capabilities underexplored. To fill this gap, we conduct a comprehensive empirical study evaluating 29 LLMs on object-oriented design (OOD) tasks. Owing to the lack of standardized benchmarks and metrics, we introduce OODEval, a manually constructed benchmark comprising 50 OOD tasks of varying difficulty, and OODEval-Human, the first human-rated OOD benchmark, which includes 940 undergraduate-submitted class diagrams evaluated by instructors. We further propose CLUE (Class Likeness Unified Evaluation), a unified metric set that assesses both global correctness and fine-grained design quality in class diagram generation. Using these benchmarks and metrics, we investigate five research questions: overall correctness, comparison with humans, model dimension analysis, task feature analysis, and bad case analysis. The results indicate that while LLMs achieve high syntactic accuracy, they exhibit substantial semantic deficiencies, particularly in method and relationship generation. Among the evaluated models, Qwen3-Coder-30B achieves the best overall performance, rivaling DeepSeek-R1 and GPT-4o, while Gemma3-4B-IT outperforms GPT-4o-Mini despite its smaller parameter scale. Although top-performing LLMs nearly match the average performance of undergraduates, they remain significantly below the level of the best human designers. Further analysis shows that parameter scale, code specialization, and instruction tuning strongly influence performance, whereas increased design complexity and lower requirement readability degrade it. Bad case analysis reveals common failure modes, including keyword misuse, missing classes or relationships, and omitted methods.
Problem

Research questions and friction points this paper is trying to address.

object-oriented design
large language models
software design evaluation
class diagram generation
benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

OODEval
object-oriented design
CLUE
human-rated benchmark
class diagram evaluation
Bingxu Xiao
Northwestern Polytechnical University, China
Yunwei Dong
Northwestern Polytechnical University, China
Yiqi Tang
Northwestern Polytechnical University, China
Manqing Zhang
Northwestern Polytechnical University, China
Yifan Zhou
Southern University of Science and Technology, China
Chunyan Ma
Northwestern Polytechnical University, China
Yepang Liu
Associate Professor, CSE, Southern University of Science and Technology
Software testing and analysis, empirical software engineering, software security, cyber-physical systems