Inference of Deterministic Finite Automata via Q-Learning

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the passive inference of deterministic finite automata (DFA), applying Q-learning to this task for the first time and establishing a reinforcement learning (RL)-based paradigm for formal language learning. Methodologically, it reinterprets the semantics of the Q-function as a state-transition function over a finite domain, enabling a rigorous mapping between sub-symbolic learning and symbolic systems; it further designs a tailored reward scheme, models discrete state-action spaces, and incorporates convergence guarantees to induce DFA structure automatically from input/output sequences. Experiments on multiple benchmark datasets recover the exact target DFAs, demonstrating both effectiveness and robustness. The work extends Q-learning beyond its traditional RL domains and builds an interpretable semantic bridge between reinforcement learning and formal language theory, offering a principled pathway toward the integration of symbolic and sub-symbolic AI.

📝 Abstract
Traditional approaches to inference of deterministic finite-state automata (DFA) stem from symbolic AI, including both active learning methods (e.g., Angluin's L* algorithm and its variants) and passive techniques (e.g., Biermann and Feldman's method, RPNI). Meanwhile, sub-symbolic AI, particularly machine learning, offers alternative paradigms for learning from data, such as supervised, unsupervised, and reinforcement learning (RL). This paper investigates the use of Q-learning, a well-known reinforcement learning algorithm, for the passive inference of deterministic finite automata. It builds on the core insight that the learned Q-function, which maps state-action pairs to rewards, can be reinterpreted as the transition function of a DFA over a finite domain. This provides a novel bridge between sub-symbolic learning and symbolic representations. The paper demonstrates how Q-learning can be adapted for automaton inference and provides an evaluation on several examples.
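The core reinterpretation can be illustrated with a small sketch (not the paper's implementation): a tabular Q-function is kept over (state, symbol, next-state) triples, trained on labeled example words, and the DFA transition function is then read off as the argmax over candidate next states. The reward convention, the choice of accepting state, and all hyperparameters below are illustrative assumptions, not details from the paper.

```python
import random
from collections import defaultdict

def infer_dfa(samples, n_states, alphabet,
              episodes=20000, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Sketch of passive DFA inference via tabular Q-learning.

    samples: list of (word, accepted) pairs over `alphabet`.
    Q-values are stored over triples (state, symbol, next_state);
    after training, delta(state, symbol) is the argmax over next_state.
    """
    rng = random.Random(seed)
    Q = defaultdict(float)
    states = list(range(n_states))
    for _ in range(episodes):
        word, accepted = rng.choice(samples)
        q, trace = 0, []  # state 0 is the initial state
        for sym in word:
            # epsilon-greedy choice of the next state for this symbol
            if rng.random() < epsilon:
                nxt = rng.randrange(n_states)
            else:
                nxt = max(states, key=lambda s: Q[(q, sym, s)])
            trace.append((q, sym, nxt))
            q = nxt
        # Terminal reward: +1 if the run's verdict matches the label
        # (by convention here, state 0 is the only accepting state).
        reward = 1.0 if (q == 0) == accepted else -1.0
        target = reward
        for s, a, s2 in reversed(trace):  # back up the reward along the run
            Q[(s, a, s2)] += alpha * (target - Q[(s, a, s2)])
            target *= gamma
    # Read the transition function off the learned Q-table.
    return {(s, a): max(states, key=lambda s2: Q[(s, a, s2)])
            for s in states for a in alphabet}
```

The returned dictionary is exactly the symbolic object the paper targets: a total transition function over a finite state set, extracted from a sub-symbolic value table.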
Problem

Research questions and friction points this paper is trying to address.

Using Q-learning for passive DFA inference
Bridging sub-symbolic learning with symbolic representations
Adapting reinforcement learning for automaton transition functions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Q-learning for automaton inference
Reinterprets Q-function as DFA transition function
Bridges sub-symbolic learning with symbolic representations
Elaheh Hosseinkhani
Universität zu Lübeck, Lübeck, Germany
Martin Leucker
Professor of Computer Science, University of Lübeck
Software Engineering