Automatic Generation of Safety-compliant Linear Temporal Logic via Large Language Model: A Self-supervised Framework

📅 2025-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Ensuring safety compliance in cyber-physical systems (CPS) requires precise translation of natural-language (NL) safety requirements into formal Linear Temporal Logic (LTL) specifications, a task prone to semantic inaccuracies and safety violations. Method: AutoSafeLTL is the first self-supervised large language model (LLM) framework for NL-to-LTL translation tailored to CPS safety verification. It integrates language-inclusion checking, counterexample-guided iterative refinement, and a dual-agent architecture to strengthen semantic understanding and error correction. Contribution/Results: Experiments show that all generated LTL specifications strictly satisfy the predefined safety constraints, achieving a 0% safety-violation rate, and that AutoSafeLTL significantly outperforms baseline methods in both logical consistency and semantic accuracy. This work establishes a trustworthy, verifiable paradigm for automated, safety-aware NL-to-LTL translation in CPS formal verification.

📝 Abstract
Ensuring safety in cyber-physical systems (CPS) poses a significant challenge, especially when converting high-level tasks described by natural language into formal specifications like Linear Temporal Logic (LTL). In particular, the compliance of formal languages with respect to safety restrictions imposed on CPS is crucial for system safety. In this paper, we introduce AutoSafeLTL, a self-supervised framework that utilizes large language models (LLMs) to automate the generation of safety-compliant LTL. Our approach integrates a Language Inclusion check with an automated counterexample-guided feedback and modification mechanism, establishing a pipeline that verifies the safety-compliance of the resulting LTL while preserving its logical consistency and semantic accuracy. To enhance the framework's understanding and correction capabilities, we incorporate two additional Agent LLMs. Experimental results demonstrate that AutoSafeLTL effectively guarantees safety-compliance for generated LTL, achieving a 0% violation rate against imposed safety constraints.
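The abstract's pipeline (generate a candidate LTL formula, check language inclusion against the safety constraints, feed counterexamples back for modification) can be sketched as a minimal, runnable loop. This is an illustrative sketch only: trace-acceptance predicates and a bounded trace search stand in for proper LTL automata and language-inclusion checking, and a stub refiner stands in for the paper's LLM modification agents; none of the names below come from the paper.

```python
from itertools import combinations, product

def powerset(props):
    # All subsets of the atomic propositions: the per-step trace alphabet.
    return [frozenset(c) for r in range(len(props) + 1)
            for c in combinations(props, r)]

def find_counterexample(candidate, safety, props, max_len=4):
    # Bounded stand-in for a language-inclusion check: a trace the
    # candidate spec accepts but the safety constraint rejects
    # witnesses a safety violation.
    for n in range(1, max_len + 1):
        for trace in product(powerset(props), repeat=n):
            if candidate(trace) and not safety(trace):
                return trace
    return None

def refine_until_safe(candidate, safety, refine, props, max_iters=10):
    # Counterexample-guided loop: repair the candidate spec until
    # the bounded check can no longer find a safety violation.
    for _ in range(max_iters):
        cex = find_counterexample(candidate, safety, props)
        if cex is None:
            return candidate  # no violation found within the bound
        candidate = refine(candidate, cex)
    raise RuntimeError("no safety-compliant spec found within budget")

# Toy instance: safety constraint "always not crash" (G !crash).
safety = lambda trace: all("crash" not in step for step in trace)
permissive = lambda trace: True  # initial candidate accepts everything

# Stub refiner: conjoin the violated constraint onto the candidate
# (the paper instead has LLM agents modify the LTL formula).
refine = lambda cand, cex: (lambda trace, c=cand: c(trace) and safety(trace))

fixed = refine_until_safe(permissive, safety, refine, props=["crash", "move"])
```

In the paper, the inclusion check is performed on the languages of the LTL formulas themselves (via automata), which is sound over infinite traces; the bounded trace enumeration above only illustrates the control flow of the feedback loop.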
Problem

Research questions and friction points this paper is trying to address.

Automates generation of safety-compliant Linear Temporal Logic (LTL)
Ensures compliance with safety restrictions in cyber-physical systems
Uses self-supervised framework with large language models (LLMs)
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised framework using large language models
Automated safety-compliant LTL generation
Counterexample-guided feedback and modification mechanism
Junle Li
PhD at the University of Glasgow
Usable Security · Formal Verification · Large Language Model
Meiqi Tian
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Bingzhuo Zhong
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China