🤖 AI Summary
This work addresses the significant performance degradation of large language models (LLMs) when generating code for constraint-based domain-specific languages (DSLs), such as OCL and Alloy, and the absence of systematic evaluation methodologies for this task. The paper introduces the first evaluation framework tailored to constraint DSL code generation, which systematically assesses LLM capabilities in translating natural-language specifications into DSL code through both syntactic correctness and semantic accuracy, leveraging formal verification for the latter. Experimental comparisons across Python, OCL, and Alloy reveal that LLMs perform markedly better on general-purpose languages, that models with limited context windows struggle to jointly generate constraints and the domain models they refer to, and that incorporating code repair and multi-candidate generation strategies substantially improves output quality. The framework further enables systematic analysis of prompting templates, repair mechanisms, and multi-turn generation strategies.
📝 Abstract
Large language models (LLMs) can support software development tasks, e.g., through code completion or code generation. However, their effectiveness drops significantly for less popular programming languages such as domain-specific languages (DSLs). In this paper, we propose a generic framework for evaluating the capabilities of LLMs in generating DSL code from textual specifications. The generated code is assessed for both well-formedness and correctness. We apply this framework to a particular type of DSL, constraint languages, focusing our experiments on OCL and Alloy and comparing the results to those obtained for Python, a popular general-purpose programming language. Experimental results show that, in general, LLMs perform better on Python than on OCL and Alloy. LLMs with smaller context windows, such as open-source LLMs, may be unable to generate constraint-related code, as this requires managing both the constraint and the domain model in which it is defined. Moreover, some enhancements to the code generation process, such as code repair (asking an LLM to fix incorrect code) or multiple attempts (generating several candidates for each coding task), can improve the quality of the generated code, while other decisions, like the choice of a prompt template, have less impact. All these dimensions can be systematically analyzed using our evaluation framework, making it possible to determine the most effective way to set up code generation for a particular type of task.
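The repair and multi-candidate strategies mentioned in the abstract can be pictured as a simple bounded control loop. The sketch below is a minimal illustration, not the paper's actual framework: all callables (`generate`, `is_well_formed`, `is_correct`, `repair`) are hypothetical placeholders standing in for an LLM call, a DSL parser, a formal-verification oracle, and an LLM-based repair prompt, respectively.

```python
from typing import Callable, Optional

def best_candidate(
    task: str,
    generate: Callable[[str], str],          # placeholder: LLM turns an NL spec into DSL code
    is_well_formed: Callable[[str], bool],   # placeholder: syntax / parser check
    is_correct: Callable[[str], bool],       # placeholder: semantic check via formal verification
    repair: Callable[[str, str], str],       # placeholder: LLM fixes (task, bad code)
    attempts: int = 3,                       # number of independent candidates
    repairs: int = 1,                        # repair rounds allowed per candidate
) -> Optional[str]:
    """Multi-candidate generation with a bounded repair loop.

    Returns the first candidate that is both well-formed and
    semantically correct, or None if every attempt fails.
    """
    for _ in range(attempts):
        code = generate(task)
        for _ in range(repairs + 1):
            if is_well_formed(code) and is_correct(code):
                return code
            code = repair(task, code)  # ask the LLM to fix the rejected code
    return None
```

In this view, "multiple attempts" is the outer loop and "code repair" is the inner one; the framework described in the paper evaluates how much each dimension contributes to final output quality.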