Training Neural Networks as Recognizers of Formal Languages

📅 2024-11-11
🏛️ arXiv.org
📈 Citations: 2
Influential: 1
🤖 AI Summary
Prior work commonly evaluates neural networks' formal-language recognition capabilities via proxy tasks such as language modeling, creating a misalignment with formal language theory, which fundamentally concerns binary string classification. Method: the authors propose a theoretically grounded empirical paradigm: directly training RNNs, LSTMs, and causally masked Transformers as binary classifiers; extending a length-controlled sampling algorithm for regular languages (Snæbjarnarson et al., 2024) with much better asymptotic time complexity; and introducing FLaRe, a benchmark dedicated to formal language recognition. Contribution/Results: experiments across languages in the Chomsky hierarchy show that the RNN and LSTM often outperform the causal Transformer, and that auxiliary objectives such as language modeling can help, though no single objective uniformly improves performance across languages and architectures. FLaRe is publicly released, providing a foundation for theoretically sound empirical testing of language recognition claims.

📝 Abstract
Characterizing the computational power of neural network architectures in terms of formal language theory remains a crucial line of research, as it describes lower and upper bounds on the reasoning capabilities of modern AI. However, when empirically testing these bounds, existing work often leaves a discrepancy between experiments and the formal claims they are meant to support. The problem is that formal language theory pertains specifically to recognizers: machines that receive a string as input and classify whether it belongs to a language. On the other hand, it is common to instead use proxy tasks that are similar in only an informal sense, such as language modeling or sequence-to-sequence transduction. We correct this mismatch by training and evaluating neural networks directly as binary classifiers of strings, using a general method that can be applied to a wide variety of languages. As part of this, we extend an algorithm recently proposed by Snæbjarnarson et al. (2024) to do length-controlled sampling of strings from regular languages, with much better asymptotic time complexity than previous methods. We provide results on a variety of languages across the Chomsky hierarchy for three neural architectures: a simple RNN, an LSTM, and a causally-masked transformer. We find that the RNN and LSTM often outperform the transformer, and that auxiliary training objectives such as language modeling can help, although no single objective uniformly improves performance across languages and architectures. Our contributions will facilitate theoretically sound empirical testing of language recognition claims in future work. We have released our datasets as a benchmark called FLaRe (Formal Language Recognition), along with our code.
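The recognizer framing in the abstract can be made concrete with a small sketch: a DFA for a regular language acts as the gold-standard recognizer, and its accept/reject decisions supply the binary labels a network would be trained to reproduce. All names below are illustrative assumptions, not the paper's released code; the example language is "strings over {a, b} with an even number of a's".

```python
# Sketch: a DFA as the gold recognizer that labels strings for binary
# classification. Illustrative only -- not the paper's code.
# Language: strings over {a, b} containing an even number of a's.
EVEN_A_DFA = {
    "start": 0,
    "accept": {0},
    "delta": {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1},
}

def recognizes(dfa, s):
    """Return True iff the DFA accepts s (the binary classification task)."""
    state = dfa["start"]
    for ch in s:
        state = dfa["delta"][(state, ch)]
    return state in dfa["accept"]

def labeled_examples(dfa, strings):
    """Pair each string with its gold accept/reject label."""
    return [(s, recognizes(dfa, s)) for s in strings]
```

For example, `labeled_examples(EVEN_A_DFA, ["aa", "ab", "b"])` yields `[("aa", True), ("ab", False), ("b", True)]`: the classifier a network must learn is exactly membership in the language, not a proxy objective.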
Problem

Research questions and friction points this paper is trying to address.

Characterizing neural networks' computational power in terms of formal language theory.
Closing the discrepancy between empirical proxy-task evaluations and the formal claims they are meant to support.
How to train and evaluate neural networks directly as binary classifiers of strings.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural networks trained as binary string classifiers
Efficient length-controlled string sampling algorithm
FLaRe benchmark for formal language recognition
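The length-controlled sampling idea can be sketched with a standard dynamic-programming sampler over a DFA: count, for each state and remaining length, how many accepted completions exist, then walk the automaton choosing symbols with probability proportional to those counts. This yields a uniform sample over accepted strings of exactly length n. It is a generic textbook construction, assumed here for illustration; the paper extends Snæbjarnarson et al.'s (2024) algorithm with better asymptotic complexity, which this sketch does not reproduce.

```python
import random

def count_completions(delta, accept, states, alphabet, n):
    # C[k][q] = number of length-k strings leading from state q to acceptance.
    C = [{q: 0 for q in states} for _ in range(n + 1)]
    for q in accept:
        C[0][q] = 1
    for k in range(1, n + 1):
        for q in states:
            C[k][q] = sum(C[k - 1][delta[(q, a)]] for a in alphabet)
    return C

def sample_length_n(delta, accept, start, states, alphabet, n, rng=random):
    """Uniformly sample an accepted string of length n, or None if none exists."""
    C = count_completions(delta, accept, states, alphabet, n)
    if C[n][start] == 0:
        return None
    out, q = [], start
    for k in range(n, 0, -1):
        # Weight each next symbol by how many accepted completions follow it.
        weights = [C[k - 1][delta[(q, a)]] for a in alphabet]
        a = rng.choices(alphabet, weights=weights)[0]
        out.append(a)
        q = delta[(q, a)]
    return "".join(out)
```

With the even-number-of-a's DFA (states {0, 1}, start and accept state 0), every sampled length-6 string contains an even count of `a`, and the counting table confirms, e.g., exactly 2 accepted strings of length 2 (`aa` and `bb`).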