📝 Abstract
We present PAPNI, a passive automata learning algorithm capable of learning deterministic context-free grammars, which are modeled as visibly deterministic pushdown automata. PAPNI is a generalization of RPNI, a passive algorithm that learns regular languages from positive and negative samples. PAPNI uses RPNI as its underlying learning algorithm while assuming a priori knowledge of the visibly pushdown input alphabet, that is, the decomposition of the alphabet into symbols that push to the stack, pop from the stack, or leave the stack unchanged.
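The alphabet decomposition assumed above can be pictured with a small sketch. This is purely illustrative (the class and method names are ours, not PAPNI's interface); it shows that once the push/pop/ε partition is fixed, the stack behaviour of every input word is fully determined by its symbols:

```python
# Illustrative sketch of a visibly pushdown alphabet; not PAPNI's actual API.
from dataclasses import dataclass

@dataclass(frozen=True)
class VisiblyPushdownAlphabet:
    """Partition of the input alphabet by stack behaviour."""
    call_symbols: frozenset      # symbols that push to the stack
    return_symbols: frozenset    # symbols that pop from the stack
    internal_symbols: frozenset  # symbols that leave the stack unchanged

    def is_well_matched(self, word):
        """A word is well matched if no pop occurs on an empty stack
        and the stack is empty when the word ends."""
        depth = 0
        for sym in word:
            if sym in self.call_symbols:
                depth += 1
            elif sym in self.return_symbols:
                if depth == 0:
                    return False  # pop on an empty stack
                depth -= 1
            elif sym not in self.internal_symbols:
                raise ValueError(f"symbol not in the alphabet: {sym!r}")
        return depth == 0

# Example partition: '(' pushes, ')' pops, 'a' is stack-neutral.
sigma = VisiblyPushdownAlphabet(frozenset("("), frozenset(")"), frozenset("a"))
```

For instance, `sigma.is_well_matched("(a)")` holds, while `"(()"` and `")("` are rejected; this determinism of stack operations is exactly the prior knowledge that lets the learning problem be handed off to a regular-language learner.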
In this paper, we show how passive learning of visibly deterministic pushdown automata can be reduced to a preprocessing step in front of standard RPNI implementations. We evaluate the proposed approach on various deterministic context-free grammars found in the literature and compare the predictive accuracy of the learned models with that of RPNI.