🤖 AI Summary
Recurrent neural networks (RNNs) are investigated for their ability to generalize to ω-regular languages derived from Linear Temporal Logic (LTL), addressing the scalability bottleneck of Büchi automata in formal verification of complex systems.
Method: A multi-scale suite of deterministic Büchi automata tasks is constructed; RNNs are trained to emulate target automaton behavior on ultimately periodic ω-words, and their out-of-distribution generalization is rigorously evaluated on sequences up to eight times longer than those seen during training.
Contribution/Results: This work provides the first empirical evidence that RNNs exhibit strong generalization across structurally diverse ω-regular languages: 92.6% of tasks achieve perfect or near-perfect generalization. The results suggest that RNNs can serve as reliable learnable components in neuro-symbolic verification, pointing toward temporal-logic-guided model checking.
📝 Abstract
Büchi automata (BAs) recognize $ω$-regular languages defined by formal specifications like linear temporal logic (LTL) and are commonly used in the verification of reactive systems. However, BAs face scalability challenges when handling and manipulating complex system behaviors. As neural networks are increasingly used to address these scalability challenges in areas like model checking, investigating their ability to generalize beyond training data becomes necessary. This work presents the first study investigating whether recurrent neural networks (RNNs) can generalize to $ω$-regular languages derived from LTL formulas. We train RNNs on ultimately periodic $ω$-word sequences to replicate target BA behavior and evaluate how well they generalize to out-of-distribution sequences. Through experiments on LTL formulas corresponding to deterministic automata of varying structural complexity, from 3 to over 100 states, we show that RNNs achieve high accuracy on their target $ω$-regular languages when evaluated on sequences up to $8\times$ longer than training examples, with $92.6\%$ of tasks achieving perfect or near-perfect generalization. These results establish the feasibility of neural approaches for learning complex $ω$-regular languages, suggesting their potential as components in neurosymbolic verification methods.
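To make the training setup concrete, the labels an RNN would learn can be generated by running the target deterministic Büchi automaton (DBA) on an ultimately periodic word $u \cdot v^ω$: since the automaton is deterministic, the state at the start of each copy of $v$ eventually repeats, and the word is accepted iff an accepting state is visited inside that closed cycle. The sketch below is illustrative only; the two-state DBA shown (for the LTL formula $\mathbf{G}\mathbf{F}\,a$, "infinitely many $a$'s") and its encoding are assumptions, not the paper's benchmark automata.

```python
# Hedged sketch: computing the accept/reject label a DBA assigns to an
# ultimately periodic ω-word u · v^ω, i.e. the supervision signal an RNN
# would be trained to replicate. The automaton is a hypothetical example.

def dba_accepts(delta, start, accepting, u, v):
    """Decide whether a deterministic Büchi automaton accepts u · v^ω.

    delta:     dict mapping (state, symbol) -> state (complete transition fn)
    start:     initial state
    accepting: set of Büchi-accepting states
    u, v:      finite prefix and nonempty period, as strings of symbols
    """
    # Read the finite prefix u.
    q = start
    for sym in u:
        q = delta[(q, sym)]
    # Iterate copies of v. The state at each period boundary must repeat
    # eventually (finitely many states), closing a cycle.
    seen = {}            # state at a period boundary -> index of that copy
    hits = []            # per copy of v: did the run touch an accepting state?
    while q not in seen:
        seen[q] = len(hits)
        hit = False
        for sym in v:
            q = delta[(q, sym)]
            hit = hit or (q in accepting)
        hits.append(hit)
    # Accept iff some copy of v inside the closed cycle visits acceptance.
    return any(hits[seen[q]:])

# Hypothetical 2-state DBA over {a, b} for "infinitely many a's" (GF a):
# state 1 is accepting and is entered on every 'a'.
delta = {(0, "a"): 1, (0, "b"): 0, (1, "a"): 1, (1, "b"): 0}
```

Pairs such as `(("b", "ab"), accept)` and `(("a", "b"), reject)` produced this way form the training set; out-of-distribution evaluation then uses words with longer `u` and `v` than any seen in training.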