🤖 AI Summary
This study systematically reviews 35 papers on LLM-assisted translation of natural language requirements into formal specifications for programs written in languages such as Dafny, C, and Java, supporting VERIFAI's goals of requirement traceability and formal verification. We propose the first application-paradigm taxonomy for this task, identifying three core capabilities—syntactic translation, constraint completion, and error detection—and common bottlenecks, including low accuracy and weak logical consistency. Methodologically, we combine Elicit AI-assisted literature retrieval, cross-database validation, manual curation, and thematic coding analysis. Our contributions include identifying key research directions: enhancing model interpretability, establishing accuracy-assurance mechanisms, and enabling domain adaptation. These insights provide both theoretical foundations and practical pathways for LLM-driven formal requirements engineering.
📝 Abstract
This paper presents a focused literature survey on the use of large language models (LLMs) to assist in writing formal specifications for software. A summary of thirty-five key papers is presented, including examples of specifying programs written in Dafny, C, and Java. This paper arose from the VERIFAI project (Traceability and Verification of Natural Language Requirements), which addresses the challenges of writing formal specifications from requirements expressed in natural language. Our methodology employed multiple academic databases to identify relevant research. The AI-assisted tool Elicit facilitated the initial paper selection, which was then manually screened for final inclusion. The survey provides valuable insights and future directions for utilising LLMs when formalising software requirements.