🤖 AI Summary
Accurately translating natural language requirements into unambiguous and consistent Linear Temporal Logic (LTL) specifications remains challenging for small language models due to frequent syntactic invalidity and logical inconsistencies. This work proposes a modular toolchain that, for the first time, integrates lightweight symbolic reasoning with compact language models (4B–14B parameters) to generate candidate LTL formulas via constrained decoding. The approach further incorporates formal consistency checking and conflict localization mechanisms to iteratively refine the generated specifications. Experimental results demonstrate that this method significantly improves both syntactic correctness and logical consistency of LTL specifications produced by small models, thereby enhancing their usability and accuracy in formal specification tasks under resource-constrained conditions.
📝 Abstract
Translating informal requirements into formal specifications is challenging due to the ambiguity and variability of natural language (NL). This challenge is particularly pronounced when relying on compact (small and medium) language models, which may lack robust knowledge of temporal logic and thus struggle to produce syntactically valid and consistent formal specifications. In this work, we focus on enabling resource-efficient open-weight models (4B--14B parameters) to generate correct linear temporal logic (LTL) specifications from informal requirements. We present LTLGuard, a modular toolchain that combines constrained generation with formal consistency checking to produce conflict-free LTL specifications from informal input. Our method integrates the generative capabilities of language models with lightweight automated reasoning tools to iteratively refine candidate specifications, localize the origin of conflicts, and thereby eliminate inconsistencies. We demonstrate the usability and effectiveness of our approach and perform a quantitative evaluation of the resulting framework.
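The constrained-generation idea described above can be illustrated with a token mask over an LTL grammar: at each decoding step, only tokens that keep the partial output a syntactically valid formula prefix are permitted. Below is a minimal, self-contained sketch in prefix (Polish) notation; the token set, arities, and function names are our own illustrative assumptions, not LTLGuard's actual grammar or API.

```python
# Hypothetical sketch of grammar-constrained decoding for LTL, assuming
# formulas are emitted in prefix notation so prefix validity reduces to
# arity counting. Operator/atom sets below are assumptions for illustration.

UNARY = {"!", "X", "F", "G"}          # negation, next, eventually, globally
BINARY = {"&", "|", "->", "U", "R"}   # and, or, implies, until, release
ATOMS = {"p", "q", "r"}               # example atomic propositions

def open_slots(tokens):
    """Number of subformulas still needed to complete the prefix-notation
    formula; returns None if the sequence is malformed or over-complete."""
    needed = 1
    for tok in tokens:
        if needed == 0:
            return None          # extra tokens after a complete formula
        if tok in BINARY:
            needed += 1          # consumes one slot, opens two
        elif tok in UNARY:
            pass                 # consumes one slot, opens one
        elif tok in ATOMS:
            needed -= 1          # closes a slot
        else:
            return None          # token outside the grammar
    return needed

def allowed_next(tokens):
    """Token mask for constrained decoding: the set of tokens that keep
    the partial output a valid LTL prefix (empty set = stop decoding)."""
    slots = open_slots(tokens)
    if slots is None or slots == 0:
        return set()
    return UNARY | BINARY | ATOMS

# "G (p -> F q)" in prefix notation is: G -> p F q
assert open_slots(["G", "->", "p", "F", "q"]) == 0   # complete formula
assert "U" in allowed_next(["G", "->", "p"])         # still open: any token ok
assert allowed_next(["p"]) == set()                  # atom alone is complete
```

In a real pipeline this mask would be applied to the model's logits at each step (e.g. by setting disallowed token scores to negative infinity), guaranteeing syntactic validity by construction; the separate consistency-checking stage would then test the conjunction of generated formulas for satisfiability.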