🤖 AI Summary
Current large language models (LLMs) lack dedicated, standardized benchmarks for system modeling tasks—particularly for generating syntactically valid and semantically faithful structural and behavioral models (e.g., UML/SysML) from natural-language requirements.
Method: We introduce SysMBench, the first benchmark for system model generation, comprising 151 high-quality, human-annotated requirement–model pairs. We further propose SysMEval, a semantic-aware evaluation metric integrating syntactic correctness, semantic fidelity, and structural consistency.
Contribution/Results: Our experiments across 17 state-of-the-art LLMs reveal severe limitations: BLEU scores peak at only 4%, while SysMEval-F1 reaches just 62%. These results expose the inadequacy of existing LLMs in model-driven engineering contexts. Together, SysMBench and SysMEval establish a reproducible, extensible evaluation infrastructure and paradigm for advancing LLMs in software engineering.
📝 Abstract
System models, a critical artifact in software development, provide a formal abstraction of both the structural and behavioral aspects of software systems, facilitating early requirements analysis and architecture design. However, developing system models remains challenging due to the specific syntax of model description languages and the relative scarcity of public model examples. While large language models (LLMs) have shown promise in generating code in programming languages and could potentially aid in system model development, no benchmarks currently exist for evaluating their ability to generate system models in specific description languages. We present SysMBench, which comprises 151 human-curated scenarios spanning a wide range of popular domains and varying difficulty levels. Each scenario comprises a natural language requirements description, a system model expressed in a specific model description language, and a visualized system model diagram. The requirements description is fed as user input to the LLM, the system model in the description language serves as the reference for verifying whether the generated system model conforms to the requirements, and the visualized diagram supports manual validation. We introduce SysMEval, a semantic-aware evaluation metric to assess the quality of generated system models. We evaluate 17 popular LLMs on this task with three traditional metrics and SysMEval, ranging from direct prompting to three commonly used enhancement strategies. Our in-depth evaluation shows that LLMs perform poorly on SysMBench, with a highest BLEU of only 4% and a highest SysMEval-F1 of 62%. We release SysMBench and its evaluation framework to enable future research on LLM-based system model generation.
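As a rough illustration of how a semantic-aware, F1-style metric such as SysMEval might score a generated model, the sketch below computes element-level precision, recall, and F1 between a generated and a reference model. The toy PlantUML-like parsing and the (kind, name) element representation are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of an element-level F1 metric for system models.
# The element extraction below (classes and associations from a toy
# PlantUML-like syntax) is an illustrative assumption, not SysMEval itself.

def extract_elements(model_lines):
    """Parse toy model text into a set of (kind, ...) element tuples."""
    elements = set()
    for line in model_lines:
        line = line.strip()
        if line.startswith("class "):
            elements.add(("class", line.split()[1]))
        elif "-->" in line:
            src, dst = (p.strip() for p in line.split("-->"))
            elements.add(("assoc", src, dst))
    return elements

def element_f1(generated, reference):
    """F1 over matched model elements between generated and reference."""
    gen, ref = extract_elements(generated), extract_elements(reference)
    if not gen or not ref:
        return 0.0
    tp = len(gen & ref)                 # elements present in both models
    precision = tp / len(gen)
    recall = tp / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

reference = ["class Order", "class Customer", "Customer --> Order"]
generated = ["class Order", "class Customer", "Order --> Customer"]
# Both classes match but the association is reversed: 2 of 3 elements agree.
print(element_f1(generated, reference))
```

A metric of this shape rewards semantically matching elements regardless of their textual order, which is why it can disagree sharply with surface-level scores like BLEU on model-generation tasks.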