Automatic Syntax Error Repair for Discrete Controller Synthesis using Large Language Model

📅 2025-12-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Discrete controller synthesis (DCS) suffers from slow development cycles due to modeling errors stemming from the steep syntactic learning curve of formal specification languages (e.g., FSP, FLTL). This paper introduces the first LLM-based automated repair method targeting syntax errors in DCS. The approach leverages domain knowledge to design a tripartite prompting strategy that integrates syntactic rules, error patterns, and expert-crafted examples, embedding formal semantic priors into LLM reasoning. Evaluated on a novel, manually curated benchmark of realistic syntax errors, the method achieves high repair accuracy and operates 3.46× faster than human developers, outperforming general-purpose code repair models. All code, data, and prompt templates are publicly released, establishing a reusable technical pathway toward practical adoption of formal methods.

📝 Abstract
Discrete Controller Synthesis (DCS) is a powerful formal method for automatically generating specifications of discrete event systems. However, its practical adoption is often hindered by the highly specialized nature of formal models written in languages such as FSP and FLTL. In practice, syntax errors in modeling frequently become a significant bottleneck for developers: they not only disrupt the workflow and reduce productivity, but also divert attention from higher-level semantic design. To this end, this paper presents an automated approach that leverages Large Language Models (LLMs) to repair syntax errors in DCS models using a well-designed, knowledge-informed prompting strategy. Specifically, the prompting is derived from a systematic empirical study of common error patterns, identified through expert interviews and student workshops. It equips the LLM with DCS-specific domain knowledge, including formal grammar rules and illustrative examples, to guide accurate corrections. To evaluate the method, the authors constructed a new benchmark by systematically injecting realistic syntax errors into validated DCS models. The quantitative evaluation demonstrates the approach's effectiveness in repair accuracy and its practical utility in terms of time, achieving a 3.46× speedup over human developers. The experimental replication suite, including the benchmark and prompts, is available at https://github.com/Uuusay1432/DCSModelRepair.git
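The knowledge-informed prompting the abstract describes (formal grammar rules, common error patterns, and illustrative repair examples, followed by the erroneous model) can be sketched as a simple prompt-assembly step. This is a minimal illustration, not the authors' implementation; all rule strings, example pairs, and function names below are assumptions for illustration only.

```python
# Hypothetical sketch of a knowledge-informed repair prompt for FSP models.
# The rule/pattern/example content is illustrative, not the paper's actual
# prompt template (which is available in the linked repository).

GRAMMAR_RULES = [
    "An FSP process definition must end with a period, e.g. P = (a -> STOP).",
    "Action names are lowercase; process names are uppercase.",
]

ERROR_PATTERNS = [
    "Missing terminating period after a process definition.",
    "Mismatched parentheses in action-prefix expressions.",
]

# (broken, fixed) few-shot repair examples
REPAIR_EXAMPLES = [
    ("P = (a -> STOP)", "P = (a -> STOP)."),  # missing terminating period
]

def build_repair_prompt(broken_model: str) -> str:
    """Assemble grammar rules, error patterns, and few-shot examples
    into a single repair prompt, ending with the model to fix."""
    parts = ["You are repairing syntax errors in an FSP model for DCS."]
    parts.append("Grammar rules:")
    parts += [f"- {rule}" for rule in GRAMMAR_RULES]
    parts.append("Common error patterns:")
    parts += [f"- {pattern}" for pattern in ERROR_PATTERNS]
    parts.append("Repair examples:")
    parts += [f"Broken: {b}\nFixed: {f}" for b, f in REPAIR_EXAMPLES]
    parts.append("Model to repair:\n" + broken_model)
    parts.append("Return only the corrected model.")
    return "\n".join(parts)

prompt = build_repair_prompt("Q = (req -> ack -> Q)")
```

The assembled prompt would then be sent to an LLM, and the returned model re-checked by the DCS toolchain's parser before use.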
Problem

Research questions and friction points this paper is trying to address.

Automated syntax error repair for Discrete Controller Synthesis models
Leveraging Large Language Models with domain-specific prompting strategies
Improving developer productivity by accelerating error correction processes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages Large Language Models for automated syntax error repair
Uses knowledge-informed prompting derived from common error patterns
Achieves 3.46 times speedup compared to human developers