The Impact of Critique on LLM-Based Model Generation from Natural Language: The Case of Activity Diagrams

📅 2025-09-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses structural inaccuracies and semantic inconsistencies in activity diagrams generated from natural language by large language models (LLMs). The authors propose LADEX, a three-stage iterative framework (Generate, Critique, Refine) that combines algorithmic structural validation (ensuring syntactic and topological compliance) with LLM-driven semantic alignment verification (ensuring faithful preservation of the source text's meaning), with the Critique stage powered by the reasoning-focused O4 Mini model. Ablation studies over five variants demonstrate strong complementarity between structural and semantic verification, establishing an efficient hybrid validation paradigm. Evaluated on two datasets, LADEX achieves up to 86.37% average correctness and 88.56% average completeness, substantially outperforming single-pass generation, while requiring fewer than five LLM invocations per diagram on average, thus balancing accuracy and computational efficiency.
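The three-stage framework described above can be sketched as a simple control loop. This is an illustrative reconstruction, not the paper's actual implementation: the function names (`generate_diagram`, `structural_critique`, `semantic_critique`, `refine_diagram`) and the diagram representation are hypothetical stand-ins for the LLM calls and checks LADEX performs.

```python
def generate_diagram(description):
    # Stand-in for the initial LLM call that drafts an activity diagram
    # from a natural-language process description.
    return {"nodes": ["start", "task", "end"],
            "edges": [("start", "task"), ("task", "end")],
            "source": description}

def structural_critique(diagram):
    # Toy algorithmic well-formedness check: every node except the final
    # one must have an outgoing edge. Returns a list of issue messages.
    sources = {s for s, _ in diagram["edges"]}
    return [f"node '{n}' has no outgoing edge"
            for n in diagram["nodes"] if n != "end" and n not in sources]

def semantic_critique(diagram):
    # Stand-in for the LLM call that checks alignment between the diagram
    # and the source text; returns detected misalignments.
    return []

def refine_diagram(diagram, issues):
    # Stand-in for the LLM call that repairs the diagram given critique
    # feedback; here it returns the diagram unchanged.
    return diagram

def ladex_loop(description, max_iters=5):
    # Generate once, then critique and refine until no issues remain
    # or the iteration budget is exhausted.
    diagram = generate_diagram(description)
    for _ in range(max_iters):
        issues = structural_critique(diagram) + semantic_critique(diagram)
        if not issues:
            break
        diagram = refine_diagram(diagram, issues)
    return diagram
```

The cap on iterations mirrors the paper's observation that the loop converges within a small number of LLM calls (fewer than five on average).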

📝 Abstract
Large Language Models (LLMs) show strong potential for automating the generation of models from natural-language descriptions. A common approach is an iterative generate-critique-refine loop, where candidate models are produced, evaluated, and updated based on detected issues. This process needs to address: (1) structural correctness - compliance with well-formedness rules - and (2) semantic alignment - accurate reflection of the intended meaning in the source text. We present LADEX (LLM-based Activity Diagram Extractor), a pipeline for deriving activity diagrams from natural-language process descriptions using an LLM-driven critique-refine process. Structural checks in LADEX can be performed either algorithmically or by an LLM, while alignment checks are always performed by an LLM. We design five ablated variants of LADEX to study: (i) the impact of the critique-refine loop itself, (ii) the role of LLM-based semantic checks, and (iii) the comparative effectiveness of algorithmic versus LLM-based structural checks. To evaluate LADEX, we compare the generated activity diagrams with expert-created ground truths using trace-based operational semantics. This enables automated measurement of correctness and completeness. Experiments on two datasets indicate that: (1) the critique-refine loop improves structural validity, correctness, and completeness compared to single-pass generation; (2) algorithmic structural checks eliminate inconsistencies that LLM-based checks fail to detect, improving correctness by an average of 17.81% and completeness by 13.24% over LLM-only checks; and (3) combining algorithmic structural checks with LLM-based semantic checks, implemented using the reasoning-focused O4 Mini, achieves the best overall performance - yielding average correctness of up to 86.37% and average completeness of up to 88.56% - while requiring fewer than five LLM calls on average.
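The abstract's trace-based comparison against expert ground truths admits a natural precision/recall-style reading: treat each diagram as a set of execution traces, then measure correctness as the share of generated traces present in the ground truth and completeness as the share of ground-truth traces reproduced. The sketch below assumes this reading; the paper's exact trace semantics and metric definitions may differ.

```python
def trace_metrics(generated_traces, ground_truth_traces):
    """Correctness: fraction of generated traces also in the ground truth.
    Completeness: fraction of ground-truth traces covered by the generation.
    Illustrative interpretation of the paper's trace-based comparison."""
    gen = set(generated_traces)
    gt = set(ground_truth_traces)
    shared = gen & gt
    correctness = len(shared) / len(gen) if gen else 0.0
    completeness = len(shared) / len(gt) if gt else 0.0
    return correctness, completeness

# Example: each trace is a tuple of executed action labels.
gen = [("receive", "check", "approve"),
       ("receive", "check", "reject")]
gt = [("receive", "check", "approve"),
      ("receive", "check", "reject"),
      ("receive", "escalate")]
# correctness 1.0 (both generated traces are valid),
# completeness 2/3 (the "escalate" path is missing).
metrics = trace_metrics(gen, gt)
```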
Problem

Research questions and friction points this paper is trying to address.

Evaluating critique-refine loops for LLM-based model generation
Comparing algorithmic versus LLM-based structural correctness checks
Assessing semantic alignment in natural language to diagram conversion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iterative generate-critique-refine loop
Combining algorithmic and LLM-based structural checks
LLM-driven semantic alignment verification system
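The algorithmic structural checks highlighted above (which the abstract reports catch inconsistencies LLM-based checks miss) can be illustrated with a small graph validator. The concrete rule set here is an assumption inspired by standard activity-diagram well-formedness (exactly one initial node, at least one final node, all nodes reachable), not LADEX's actual checker.

```python
from collections import deque

def check_well_formedness(nodes, edges):
    """Illustrative structural check over an activity-diagram graph.
    `nodes` maps node name -> kind ('initial', 'final', or 'action');
    `edges` is a list of (source, target) pairs.
    Returns a list of violation messages (empty means structurally valid)."""
    issues = []
    initials = [n for n, kind in nodes.items() if kind == "initial"]
    finals = [n for n, kind in nodes.items() if kind == "final"]
    if len(initials) != 1:
        issues.append(f"expected exactly one initial node, found {len(initials)}")
    if not finals:
        issues.append("no final node")
    if len(initials) == 1:
        # BFS reachability: every node must be reachable from the initial node.
        adjacency = {}
        for src, dst in edges:
            adjacency.setdefault(src, []).append(dst)
        seen = {initials[0]}
        queue = deque(seen)
        while queue:
            for nxt in adjacency.get(queue.popleft(), []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        for n in nodes:
            if n not in seen:
                issues.append(f"node '{n}' unreachable from the initial node")
    return issues
```

Because checks like these are deterministic, they flag every topological violation in a diagram in one pass, which is consistent with the reported gains of algorithmic over LLM-based structural critique.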