🤖 AI Summary
This study addresses the lack of systematic evaluation of large language models (LLMs) in object-oriented design (OOD) within software engineering, as existing assessments predominantly focus on code generation. To bridge this gap, the authors introduce OODEval, a benchmark comprising 50 human-crafted OOD tasks, along with OODEval-Human, an instructor-graded dataset of 940 student-submitted class diagrams. They further propose CLUE, a unified evaluation metric set that jointly assesses global correctness and fine-grained design quality. A comprehensive evaluation of 29 LLMs using this framework reveals that while current models can produce syntactically valid class diagrams, they frequently exhibit semantic flaws, particularly in generating methods and relationships. Among them, Qwen3-Coder-30B achieves the best performance, approaching the average level of undergraduate students but still falling significantly short of expert human designers.
📝 Abstract
Recent advances in large language models (LLMs) have driven extensive evaluations in software engineering. However, most prior work concentrates on code-level tasks, leaving software design capabilities underexplored. To fill this gap, we conduct a comprehensive empirical study evaluating 29 LLMs on object-oriented design (OOD) tasks. Owing to the lack of standardized benchmarks and metrics, we introduce OODEval, a manually constructed benchmark comprising 50 OOD tasks of varying difficulty, and OODEval-Human, the first human-rated OOD benchmark, which includes 940 undergraduate-submitted class diagrams evaluated by instructors. We further propose CLUE (Class Likeness Unified Evaluation), a unified metric set that assesses both global correctness and fine-grained design quality in class diagram generation. Using these benchmarks and metrics, we investigate five research questions: overall correctness, comparison with humans, model dimension analysis, task feature analysis, and bad case analysis. The results indicate that while LLMs achieve high syntactic accuracy, they exhibit substantial semantic deficiencies, particularly in method and relationship generation. Among the evaluated models, Qwen3-Coder-30B achieves the best overall performance, rivaling DeepSeek-R1 and GPT-4o, while Gemma3-4B-IT outperforms GPT-4o-Mini despite its smaller parameter scale. Although top-performing LLMs nearly match the average performance of undergraduates, they remain significantly below the level of the best human designers. Further analysis shows that parameter scale, code specialization, and instruction tuning strongly influence performance, whereas increased design complexity and lower requirement readability degrade it. Bad case analysis reveals common failure modes, including keyword misuse, missing classes or relationships, and omitted methods.