GinSign: Grounding Natural Language Into System Signatures for Temporal Logic Translation

📅 2025-12-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing NL-to-temporal-logic (NL→TL) translation methods either rely on precise atomic proposition annotations or suffer from semantic distortion due to mismatches between natural language atomic propositions and formal system signatures. This paper proposes a hierarchical structured classification paradigm that eschews free-form generation and large language models, instead directly mapping NL spans to predicates and typed constants within the formal system signature. Our approach builds a two-stage classifier—predicate identification followed by parameter selection—on top of a masked language model, explicitly incorporating formal signature constraints. Evaluated on a multi-domain benchmark, our method achieves 95.5% logical equivalence accuracy, outperforming the state of the art by 1.4×. To our knowledge, this is the first end-to-end NL→TL translation framework that simultaneously ensures high fidelity, formal verifiability, and robust generalization across domains.

📝 Abstract
Natural language (NL) to temporal logic (TL) translation enables engineers to specify, verify, and enforce system behaviors without manually crafting formal specifications, an essential capability for building trustworthy autonomous systems. While existing NL-to-TL translation frameworks have demonstrated encouraging initial results, these systems either explicitly assume access to accurate atom grounding or suffer from low grounded translation accuracy. In this paper, we propose a framework for Grounding Natural Language Into System Signatures for Temporal Logic translation called GinSign. The framework introduces a grounding model that learns the abstract task of mapping NL spans onto a given system signature: given a lifted NL specification and a system signature $\mathcal{S}$, the classifier must assign each lifted atomic proposition to an element of the set of signature-defined atoms $\mathcal{P}$. We decompose the grounding task hierarchically: first predicting predicate labels, then selecting the appropriately typed constant arguments. Recasting this task from a free-form generation problem into a structured classification problem permits the use of smaller masked language models and eliminates the reliance on expensive LLMs. Experiments across multiple domains show that frameworks which omit grounding tend to produce syntactically correct lifted LTL that is semantically nonequivalent to grounded target expressions, whereas our framework supports downstream model checking and achieves grounded logical-equivalence scores of $95.5\%$, a $1.4\times$ improvement over SOTA.
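The two-stage grounding described in the abstract can be sketched as structured classification over a system signature. This is an illustrative Python sketch only: the signature, predicate names, and constants are hypothetical, and the keyword-overlap `score` function stands in for the paper's masked-language-model classifier heads.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Predicate:
    name: str
    arg_types: tuple  # required constant types, e.g. ("agent", "room")


# Hypothetical system signature: typed predicates plus typed constants.
SIGNATURE = {
    "predicates": [
        Predicate("at", ("agent", "room")),
        Predicate("holding", ("agent", "item")),
    ],
    "constants": {"agent": ["robot"], "room": ["kitchen", "lab"], "item": ["cup"]},
}


def score(span: str, label: str) -> int:
    # Stand-in for an MLM classifier head: crude token overlap
    # between the NL span and a candidate signature label.
    return sum(tok in span.lower() for tok in label.lower().split("_"))


def ground(span: str) -> str:
    # Stage 1: predicate identification over the signature's predicate set.
    pred = max(SIGNATURE["predicates"], key=lambda p: score(span, p.name))
    # Stage 2: parameter selection, restricted to constants of the required type,
    # so the output is well-typed by construction.
    args = [
        max(SIGNATURE["constants"][t], key=lambda c: score(span, c))
        for t in pred.arg_types
    ]
    return f"{pred.name}({', '.join(args)})"


print(ground("the robot is at the kitchen"))  # → at(robot, kitchen)
```

Because both stages choose from signature-defined label sets rather than generating free text, the output atom is guaranteed to exist in $\mathcal{P}$, which is what makes downstream model checking possible.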
Problem

Research questions and friction points this paper is trying to address.

Grounding natural language into system signatures for temporal logic translation
Improving accuracy of NL-to-TL translation by structured classification
Enhancing model checking with semantically equivalent logical expressions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical grounding model mapping NL to system signatures
Decomposes task into structured classification, not generation
Uses smaller masked language models, avoids expensive LLMs
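The lifted-versus-grounded distinction the paper draws can be illustrated with a minimal substitution sketch: a lifted LTL formula over placeholder atoms is instantiated with the grounding map a classifier like GinSign's would produce. The formula and grounding map below are hypothetical examples, not the paper's data.

```python
import re


def instantiate(formula: str, grounding: dict) -> str:
    # Replace each lifted atom (matched as a whole token) with its
    # signature-defined grounded atom in a single pass.
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, grounding)) + r")\b")
    return pattern.sub(lambda m: grounding[m.group(1)], formula)


lifted = "G (a -> F b)"  # lifted LTL: atoms are opaque placeholders
grounding = {
    "a": "at(robot, kitchen)",
    "b": "holding(robot, cup)",
}

print(instantiate(lifted, grounding))
# → G (at(robot, kitchen) -> F holding(robot, cup))
```

A lifted formula can be syntactically valid yet ungrounded, and two systems may lift the same NL sentence to different placeholder assignments; only after substitution against a shared signature can logical equivalence to the target expression be checked.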
William English
University of Florida
Chase Walker
University of Florida
Dominic Simon
University of Florida
Rickard Ewetz
University of Florida
Computer-aided design · Machine learning · Artificial intelligence · Future computing systems