🤖 AI Summary
Learning Linear Temporal Logic (LTL) formulas faces a fundamental trade-off among expressiveness, conciseness, and interpretability, especially when incorporating user-specified semantic (e.g., minimality) and syntactic (e.g., operator restrictions) constraints.
Method: We propose a constraint-driven LTL learning paradigm that synthesizes formulas directly from positive and negative execution traces while respecting user-defined constraints. We formalize the problem in first-order relational logic and encode it efficiently as a MaxSAT instance.
Contribution/Results: We implement our approach in ATLAS, the first tool capable of solving novel constrained LTL learning tasks on which prior methods fail to yield feasible solutions. Across benchmarks, ATLAS matches or surpasses state-of-the-art LTL learners in both accuracy and solving efficiency. By enabling precise, customizable formula synthesis, ATLAS significantly extends the practical applicability of LTL in model checking and runtime monitoring.
📝 Abstract
Temporal logic specifications play an important role in a wide range of software analysis tasks, such as model checking, automated synthesis, program comprehension, and runtime monitoring. Given a set of positive and negative examples, specified as traces, LTL learning is the problem of synthesizing a specification, in linear temporal logic (LTL), that evaluates to true over the positive traces and false over the negative ones. In this paper, we propose a new type of LTL learning problem called constrained LTL learning, where the user, in addition to positive and negative examples, has the option to specify one or more constraints over the properties of the LTL formula to be learned. We demonstrate that the ability to specify these additional constraints significantly increases the range of applications for LTL learning, and also allows efficient generation of LTL formulas that satisfy certain desirable properties (such as minimality). We propose an approach for solving the constrained LTL learning problem through an encoding in first-order relational logic and a reduction to an instance of the maximum satisfiability (MaxSAT) problem. An experimental evaluation demonstrates that ATLAS, an implementation of our proposed approach, is able to solve new types of learning problems while performing better than or competitively with the state-of-the-art tools in LTL learning.
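To make the learning objective concrete, the sketch below evaluates an LTL formula over finite traces and checks the separation condition the abstract describes: the formula must hold on every positive trace and fail on every negative one. This is a minimal illustration assuming finite-trace (LTLf-style) semantics; the formula encoding and function names are illustrative and are not ATLAS's actual API.

```python
# Hypothetical sketch: LTL over finite traces, assuming LTLf-style semantics.
# A trace is a list of sets of atomic propositions; a formula is a nested
# tuple, e.g. ('G', ('->', ('ap', 'req'), ('F', ('ap', 'ack')))).

def holds(formula, trace, i=0):
    """Evaluate `formula` at position i of `trace`."""
    op = formula[0]
    if op == 'ap':    # atomic proposition: true iff it labels position i
        return formula[1] in trace[i]
    if op == '!':
        return not holds(formula[1], trace, i)
    if op == '&':
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == '->':
        return (not holds(formula[1], trace, i)) or holds(formula[2], trace, i)
    if op == 'X':     # next (false at the final position of a finite trace)
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == 'F':     # eventually: true at some position from i onward
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == 'G':     # globally: true at every position from i onward
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == 'U':     # until: arg2 eventually holds, arg1 holds until then
        return any(holds(formula[2], trace, j) and
                   all(holds(formula[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(f"unknown operator {op!r}")

def separates(formula, positives, negatives):
    """The LTL learning goal: true on all positives, false on all negatives."""
    return (all(holds(formula, t) for t in positives) and
            not any(holds(formula, t) for t in negatives))

# Example: G(req -> F ack) separates these two traces.
f = ('G', ('->', ('ap', 'req'), ('F', ('ap', 'ack'))))
pos = [[{'req'}, set(), {'ack'}]]
neg = [[{'req'}, set(), set()]]
print(separates(f, pos, neg))  # True
```

A naive learner could enumerate candidate formulas by size and return the first one for which `separates` holds; the approach summarized above instead encodes this search, together with any user-specified syntactic and semantic constraints, as a single MaxSAT instance, which is what makes constraints like minimality tractable.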