🤖 AI Summary
To address the high syntactic error rate and unreliable end-to-end generation in natural language-to-Game Description Language (GDL) translation, this paper proposes a two-stage LLM-driven framework: first generating a minimal syntactically compliant GDL skeleton, then iteratively refining it under guidance from a customized GDL parser. The method combines progressive syntax generation with subsequence-level validity verification, enabling controllable, grammar-constrained generation. Experiments across multiple logic games show substantial gains in GDL syntactic correctness over direct LLM prompting baselines. The implementation is publicly available.
📝 Abstract
Game Description Language (GDL) provides a standardized way to express diverse games in a machine-readable format, enabling automated game simulation and evaluation. While previous research has explored game description generation using search-based methods, generating GDL descriptions from natural language remains a challenging task. This paper presents a novel framework that leverages Large Language Models (LLMs) to generate grammatically accurate game descriptions from natural language. Our approach consists of two stages: first, we gradually generate a minimal grammar based on GDL specifications; second, we iteratively improve the game description through grammar-guided generation. Our framework employs a specialized parser that identifies valid subsequences and candidate symbols from LLM responses, enabling gradual refinement of the output to ensure grammatical correctness. Experimental results demonstrate that our iterative improvement approach significantly outperforms baseline methods that directly use LLM outputs. Our code is available at https://github.com/tsunehiko/ggdg.
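To make the parser-guided refinement step concrete, here is a minimal sketch of the core idea: given a (possibly malformed) LLM output, find the longest grammatically valid token prefix and the candidate symbols that may legally follow it, so the model can be re-prompted to regenerate only the invalid tail. The grammar here is a deliberately simplified stand-in (well-nested S-expressions with GDL-style keywords), not the paper's actual GDL parser; all names are illustrative.

```python
# Hypothetical sketch of subsequence-level validity checking.
# Toy grammar: well-nested parentheses over GDL-style atoms/keywords;
# the paper's customized parser enforces the full GDL grammar instead.

KEYWORDS = {"role", "init", "true", "next", "legal", "goal", "terminal", "<="}

def tokenize(text):
    """Split an S-expression-style string into tokens."""
    return text.replace("(", " ( ").replace(")", " ) ").split()

def longest_valid_prefix(tokens):
    """Return (prefix_length, candidate_next_tokens).

    The prefix ends at the first token that cannot extend a valid
    parse (here: an unmatched closing parenthesis); the candidate set
    tells the generator which symbols may legally come next.
    """
    depth = 0
    for i, tok in enumerate(tokens):
        if tok == "(":
            depth += 1
        elif tok == ")":
            depth -= 1
            if depth < 0:  # unmatched ')': valid prefix ends before it
                return i, {"("}
        # atoms and keywords are always acceptable inside a clause
    candidates = {"("} if depth == 0 else {")", "("} | KEYWORDS
    return len(tokens), candidates

# Example: the trailing ')' is invalid, so only the first 4 tokens survive.
tokens = tokenize("(role white))")
prefix_len, candidates = longest_valid_prefix(tokens)
```

In the paper's framework the valid prefix is kept and the LLM is asked to continue from it, constrained to the candidate symbols, which is what drives the iterative, grammar-guided refinement loop.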