Verifiable Natural Language to Linear Temporal Logic Translation: A Benchmark Dataset and Evaluation Suite

📅 2025-07-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing NL-to-LTL translation systems achieve high syntactic accuracy, but standard benchmarks neglect two critical capabilities: correctly grounding atomic propositions in novel state spaces and verifying the logical validity of generated LTL formulas. This leaves those benchmarks misaligned with real-world deployment requirements. To address this, we introduce VLTL-Bench, the first verification-oriented, multi-scenario benchmark for NL-to-LTL translation. It spans three heterogeneous state spaces, includes thousands of diverse natural language utterances paired with formal LTL specifications, and provides both real and synthetic execution traces for formal verification. Crucially, VLTL-Bench is the first to explicitly incorporate *grounding* and *verification* into end-to-end and modular evaluation pipelines (i.e., lift → ground → translate → verify). Empirical evaluation on VLTL-Bench exposes substantial deficiencies in grounding generalization and verifiability across state-of-the-art methods. The benchmark establishes a rigorous evaluation framework and a foundational data resource for developing domain-agnostic, scalable NL-to-LTL systems.
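The modular pipeline named above suggests a natural programmatic interface. Below is a minimal interface sketch under the assumption of four stage functions (`lift`, `ground`, `translate`, `verify`) and an `Example` container; these names are hypothetical, since the benchmark supplies ground truth after each stage but does not prescribe an API.

```python
# Interface sketch of the modular decomposition (lift -> ground -> translate
# -> verify). All names here are illustrative assumptions, not the
# benchmark's actual API; stage bodies are left to the system under test.
from dataclasses import dataclass

@dataclass
class Example:
    utterance: str     # natural language specification
    state_space: dict  # atomic propositions available in the scenario
    traces: list       # sample execution traces for verification

def lift(utterance: str) -> str:
    """Abstract concrete entities into placeholder propositions."""
    raise NotImplementedError

def ground(lifted: str, state_space: dict) -> str:
    """Bind each placeholder to an atomic proposition of the state space."""
    raise NotImplementedError

def translate(grounded: str) -> str:
    """Emit an LTL formula, e.g. 'G (request -> F grant)'."""
    raise NotImplementedError

def verify(formula: str, traces: list) -> bool:
    """Check the formula against the benchmark's sample traces."""
    raise NotImplementedError

def run_pipeline(ex: Example) -> bool:
    """Compose the four stages end to end."""
    return verify(translate(ground(lift(ex.utterance), ex.state_space)), ex.traces)
```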

📝 Abstract
Empirical evaluation of state-of-the-art natural-language (NL) to temporal-logic (TL) translation systems reveals near-perfect performance on existing benchmarks. However, current studies measure only the accuracy of translating NL logic into formal TL, ignoring a system's capacity to ground atomic propositions in new scenarios or environments, a capability necessary for verifying the resulting formulas in a concrete state space. Consequently, most NL-to-TL translation frameworks propose their own bespoke datasets in which the correct grounding is known a priori, inflating performance metrics and neglecting the need for extensible, domain-general systems. In this paper, we introduce the Verifiable Linear Temporal Logic Benchmark (VLTL-Bench), a unifying benchmark that measures the verification and verifiability of automated NL-to-LTL translation. The dataset consists of three unique state spaces and thousands of diverse natural language specifications with corresponding formal specifications in temporal logic. Moreover, the benchmark contains sample traces for validating the temporal logic expressions. While the benchmark directly supports end-to-end evaluation, we observe that many frameworks decompose the process into i) lifting, ii) grounding, iii) translation, and iv) verification. The benchmark provides ground truths after each of these steps to enable researchers to improve and evaluate the individual substeps of the overall problem. To encourage methodologically sound advances in verifiable NL-to-LTL translation approaches, we release VLTL-Bench here: https://www.kaggle.com/datasets/dubascudes/vltl-bench.
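To make the verification step concrete, here is a minimal finite-trace LTL checker showing how sample traces can validate a generated formula. The tuple-based formula encoding and set-of-propositions trace format are assumptions for illustration, not the dataset's actual format, and finite-trace evaluation is a simplification of full LTL, which is defined over infinite traces.

```python
# A minimal finite-trace LTL checker. Formulas are nested tuples such as
# ("G", ("->", "request", ("F", "grant"))); a trace is a list of sets of
# the atomic propositions true at each step. Illustrative only.

def holds(formula, trace, i=0):
    """Evaluate an LTL formula at position i of a finite trace."""
    if isinstance(formula, str):                      # atomic proposition
        return i < len(trace) and formula in trace[i]
    op, *args = formula
    if op == "not":
        return not holds(args[0], trace, i)
    if op == "and":
        return all(holds(a, trace, i) for a in args)
    if op == "or":
        return any(holds(a, trace, i) for a in args)
    if op == "->":
        return (not holds(args[0], trace, i)) or holds(args[1], trace, i)
    if op == "X":                                     # next
        return holds(args[0], trace, i + 1)
    if op == "F":                                     # eventually
        return any(holds(args[0], trace, k) for k in range(i, len(trace)))
    if op == "G":                                     # always
        return all(holds(args[0], trace, k) for k in range(i, len(trace)))
    if op == "U":                                     # until
        return any(holds(args[1], trace, k)
                   and all(holds(args[0], trace, j) for j in range(i, k))
                   for k in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")

# Example: G (request -> F grant) on a trace where every request is granted.
trace = [{"request"}, {"grant"}, set(), {"request", "grant"}]
spec = ("G", ("->", "request", ("F", "grant")))
assert holds(spec, trace)
```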
Problem

Research questions and friction points this paper is trying to address.

Evaluates NL-to-LTL translation verifiability in new scenarios
Addresses lack of grounding capacity in existing benchmarks
Provides a unified benchmark for end-to-end evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces VLTL-Bench benchmark for verifiable NL-to-LTL translation
Measures verification and verifiability across diverse state spaces
Provides ground truths for lifting, grounding, translation, and verification (see the sketch below)
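A sketch of how these per-step ground truths might drive a stage-by-stage evaluation follows. The record fields (`lifted_gt`, `grounded_gt`, `ltl_gt`, `holds_gt`) and the `system` interface are hypothetical; consult the Kaggle dataset for the actual schema.

```python
# Sketch of per-stage scoring against intermediate ground truths. Feeding
# each stage its gold input (teacher forcing) isolates that stage's error
# rate from upstream mistakes. Field names are assumptions.
def score_stages(system, examples: list[dict]) -> dict:
    correct = {"lift": 0, "ground": 0, "translate": 0, "verify": 0}
    for ex in examples:
        correct["lift"] += system.lift(ex["utterance"]) == ex["lifted_gt"]
        correct["ground"] += (
            system.ground(ex["lifted_gt"], ex["state_space"]) == ex["grounded_gt"]
        )
        correct["translate"] += system.translate(ex["grounded_gt"]) == ex["ltl_gt"]
        correct["verify"] += system.verify(ex["ltl_gt"], ex["traces"]) == ex["holds_gt"]
    return {stage: n / len(examples) for stage, n in correct.items()}
```

Scoring each stage on gold inputs rather than the system's own outputs is a deliberate design choice: it separates, for example, a grounding failure from a translation failure that merely inherited a bad grounding.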
William H English
University of Florida, Gainesville, FL, USA
Chase Walker
University of Florida, Gainesville, FL, USA
Dominic Simon
University of Florida, Gainesville, FL, USA
Sumit Kumar Jha
University of Florida
Rickard Ewetz
University of Florida
Computer-aided design · Machine learning · Artificial intelligence · Future computing systems