AI Summary
Large language models (LLMs) struggle to consistently preserve object-oriented design intent in software design synthesis, exhibiting output non-determinism and sensitivity to prompting. This work introduces the first benchmark for evaluating object-oriented design-intent fidelity and systematically assesses the reliability of ChatGPT 4o-mini, Claude 3.5 Sonnet, and Gemini 2.5 Flash in generating UML class diagrams under standard prompting, rule injection, and a novel preference-aligned few-shot prompting strategy. Experimental results demonstrate that preference alignment substantially improves adherence to design intent but does not eliminate non-determinism; moreover, inherent model behaviors significantly influence reliability. This study provides the first empirical analysis of the stability of LLM-based design synthesis through the lenses of non-determinism, prompt sensitivity, and methodological scaffolding, establishing a new paradigm for dependable AI-assisted software design.
Abstract
Large Language Models (LLMs) are increasingly applied to automate software engineering tasks, including the generation of UML class diagrams from natural language descriptions. While prior work demonstrates that LLMs can produce syntactically valid diagrams, syntactic correctness alone does not guarantee meaningful design. This study investigates whether LLMs can move beyond diagram translation to perform design synthesis, and how reliably they maintain design-oriented reasoning under variation. We introduce a preference-based few-shot prompting approach that biases LLM outputs toward designs satisfying object-oriented principles and pattern-consistent structures. Two design-intent benchmarks, each with three domain-only, paraphrased prompts and 10 repeated runs, are used to evaluate three LLMs (ChatGPT 4o-mini, Claude 3.5 Sonnet, and Gemini 2.5 Flash) across three modeling strategies: standard prompting, rule-injection prompting, and preference-based prompting, totaling 540 experiments (2 benchmarks × 3 prompts × 10 runs × 3 models × 3 strategies). Results indicate that while preference-based alignment improves adherence to design intent, it does not eliminate non-determinism, and model-level behavior strongly influences design reliability. These findings highlight that achieving dependable LLM-assisted software design requires not only effective prompting but also careful consideration of model behavior and robustness.
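The 540-run total follows from a full cross of the experimental factors described above. A minimal sketch of that grid, using placeholder labels for the benchmarks and prompt paraphrases (the paper's actual identifiers are not given here), is:

```python
from itertools import product

# Illustrative factors of the experimental grid; benchmark and prompt
# names are placeholders, not the study's actual identifiers.
benchmarks = ["benchmark_1", "benchmark_2"]                      # 2 design-intent benchmarks
prompts    = ["paraphrase_1", "paraphrase_2", "paraphrase_3"]    # 3 domain-only paraphrased prompts
runs       = list(range(10))                                     # 10 repeated runs per condition
models     = ["ChatGPT 4o-mini", "Claude 3.5 Sonnet", "Gemini 2.5 Flash"]
strategies = ["standard", "rule-injection", "preference-based"]

# Full factorial crossing: every combination is one experiment.
experiments = list(product(benchmarks, prompts, runs, models, strategies))
print(len(experiments))  # 2 * 3 * 10 * 3 * 3 = 540
```

The full crossing is what lets the study separate variation due to the model, the prompting strategy, and run-to-run non-determinism.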