🤖 AI Summary
This work proposes an automated approach leveraging large language models (LLMs) to generate UML class diagrams from natural language requirements, aiming to reduce manual effort in software design. The method employs chain-of-thought prompting to extract domain entities, attributes, and relationships, followed by structured diagram generation using PlantUML. A novel dual-validation framework is introduced: one component utilizes LLMs such as Grok and Mistral as judges for automated quality assessment, while the other incorporates expert human evaluation. Experimental results demonstrate that the generated class diagrams achieve high performance across five quality dimensions, including completeness and correctness, and show strong alignment between LLM-based evaluations and expert judgments, confirming the feasibility and reliability of LLMs in both automated modeling and quality assessment.
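The pipeline described above ends with PlantUML generation from the extracted entities, attributes, and relationships. A minimal sketch of that final rendering step might look as follows; the data structures, helper name, and example domain model are illustrative assumptions, not the paper's actual extraction output format:

```python
# Hypothetical rendering step: turn an extracted domain model into PlantUML.
# The dict/tuple representation below is an assumption for illustration;
# the paper does not specify its intermediate data format.

def to_plantuml(classes, associations):
    """Render extracted classes and associations as PlantUML source text."""
    lines = ["@startuml"]
    for name, attrs in classes.items():
        lines.append(f"class {name} {{")
        lines.extend(f"  {attr}" for attr in attrs)  # one attribute per line
        lines.append("}")
    for src, dst, label in associations:
        lines.append(f"{src} --> {dst} : {label}")  # directed association
    lines.append("@enduml")
    return "\n".join(lines)

# Example extracted model (invented for illustration).
model = {
    "Customer": ["name: String", "email: String"],
    "Order": ["date: Date", "total: Float"],
}
links = [("Customer", "Order", "places")]
print(to_plantuml(model, links))
```

The resulting text can be fed directly to the PlantUML tool to produce the class diagram image.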
📝 Abstract
The emergence of Large Language Models (LLMs) has opened new opportunities to automate software engineering activities that traditionally require substantial manual effort. Among these, class diagram generation represents a critical yet resource-intensive phase in software design. This paper investigates the capabilities of state-of-the-art LLMs, including GPT-5, Claude Sonnet 4.0, Gemini 2.5 Flash Thinking, and Llama-3.1-8B-Instruct, to automatically generate UML class diagrams from natural language requirements. To evaluate the effectiveness and reliability of LLM-based model generation, we propose a comprehensive dual-validation framework that integrates an LLM-as-a-Judge methodology with human-in-the-loop assessment. Using eight heterogeneous datasets, we apply chain-of-thought prompting to extract domain entities, attributes, and associations, generating corresponding PlantUML representations. The resulting models are evaluated across five quality dimensions: completeness, correctness, conformance to standards, comprehensibility, and terminological alignment. Two independent LLM judges (Grok and Mistral) perform structured pairwise comparisons, and their judgments are further validated against expert evaluations. Our results demonstrate that LLMs can generate structurally coherent and semantically meaningful UML diagrams, achieving substantial alignment with human evaluators. The consistency observed between LLM-based and human-based assessments highlights the potential of LLMs not only as modeling assistants but also as reliable evaluators in automated requirements engineering workflows, offering practical insights into the capabilities and limitations of LLM-driven UML class diagram automation.
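The dual-judge setup can be sketched as structured pairwise comparisons over the five quality dimensions, followed by an agreement check between the two judges. In this sketch the `judge` callables stand in for real LLM calls (e.g. to Grok or Mistral); the dimension names follow the abstract, while the function signatures and stub verdicts are assumptions for illustration:

```python
# Illustrative sketch of the LLM-as-a-Judge pairwise-comparison protocol.
# Each `judge(a, b, dim)` stands in for an LLM call returning "A" or "B".

DIMENSIONS = [
    "completeness", "correctness", "conformance",
    "comprehensibility", "terminological_alignment",
]

def pairwise_compare(judge, diagram_a, diagram_b):
    """Ask one judge which diagram it prefers on each quality dimension."""
    return {dim: judge(diagram_a, diagram_b, dim) for dim in DIMENSIONS}

def agreement(verdicts_1, verdicts_2):
    """Fraction of dimensions on which two judges return the same verdict."""
    same = sum(verdicts_1[d] == verdicts_2[d] for d in DIMENSIONS)
    return same / len(DIMENSIONS)

# Stub judges with fixed preferences, for illustration only.
grok_stub = lambda a, b, dim: "A"
mistral_stub = lambda a, b, dim: "A" if dim != "correctness" else "B"

v1 = pairwise_compare(grok_stub, "diagram A", "diagram B")
v2 = pairwise_compare(mistral_stub, "diagram A", "diagram B")
print(agreement(v1, v2))  # → 0.8 (judges agree on 4 of 5 dimensions)
```

The same `agreement` measure could be computed between an LLM judge and a human expert, which is the comparison the paper uses to validate the automated evaluations.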