🤖 AI Summary
This work addresses the limited scalability of existing active automata learning algorithms on systems with large input alphabets where most inputs trigger observable errors. To overcome this challenge, the paper introduces the first systematic integration of error-aware mechanisms into an active learning framework. Building on varying degrees of prior knowledge about which inputs are non-error-inducing, the authors adapt the L# algorithm with error identification and state-space pruning strategies. The proposed approach achieves learning speedups of several orders of magnitude under strong domain knowledge, and even with only limited prior information it yields roughly a tenfold improvement in efficiency. These advances significantly enhance the scalability and practical applicability of active learning for error-prone systems.
📝 Abstract
Active automata learning (AAL) algorithms can learn a behavioral model of a system by interacting with it. The primary challenge remains scaling to larger models, in particular in the presence of many possible inputs to the system. Modern AAL algorithms fail to scale even if, in every state, most inputs lead to errors. In various challenging problems from the literature, these errors are observable, i.e., they emit a known error output. Motivated by these problems, we study how to learn such systems more efficiently. Further, we consider various degrees of knowledge about which inputs are non-error-producing in which state. For each level of knowledge, we provide a matching adaptation of the state-of-the-art AAL algorithm L# to make the most of this domain knowledge. Our empirical evaluation demonstrates that the methods accelerate learning by several orders of magnitude with strong but realistic domain knowledge, and by a single order of magnitude with limited domain knowledge.
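To make the setting concrete, the following is a minimal, hypothetical sketch of the core idea: a system under learning modeled as a Mealy machine in which most inputs emit an observable error output, plus a query filter that uses prior knowledge of non-error-producing inputs to answer pruned queries without consulting the system. All names (`transitions`, `safe_inputs`, `filtered_queries`) are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch (assumed names/structure, not the paper's code):
# a toy Mealy-machine SUL where unlisted inputs emit an observable ERROR,
# and a query filter that prunes known error-inducing inputs.

ERROR = "ERROR"

# transitions[state][input] = (next_state, output).
# Inputs not listed self-loop and emit the observable error output.
transitions = {
    "q0": {"a": ("q1", "x")},
    "q1": {"b": ("q0", "y")},
}
ALPHABET = ["a", "b", "c", "d"]  # most inputs are error-inducing

def query(word):
    """Run a word on the system; return the last output observed."""
    state, out = "q0", None
    for sym in word:
        if sym in transitions[state]:
            state, out = transitions[state][sym]
        else:
            out = ERROR  # observable error; state unchanged (error self-loop)
    return out

# Assumed domain knowledge: non-error-producing inputs per state.
safe_inputs = {"q0": {"a"}, "q1": {"b"}}

def filtered_queries(state, prefix):
    """Query the system only for inputs known to be safe in `state`;
    answer all other inputs with ERROR without issuing a query."""
    answers = {}
    for sym in ALPHABET:
        if sym in safe_inputs[state]:
            answers[sym] = query(prefix + [sym])
        else:
            answers[sym] = ERROR  # pruned: no interaction needed
    return answers
```

In this toy example, three of the four alphabet symbols in each state are answered for free, which is the intuition behind the reported speedups: the learner spends its query budget only on the small non-error fraction of the input space.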