🤖 AI Summary
This work investigates whether large language models (LLMs) can generalize to novel formal languages outside their training distribution, specifically focusing on their ability to understand and generate languages defined by deterministic finite automata (DFAs).
Method: We introduce a theoretically grounded procedure for sampling large families of random DFAs, circumventing the risk of dataset recall; we evaluate LLMs on both language recognition and synthesis tasks; and we benchmark them against parameter-free n-gram baselines.
Contribution/Results: Experiments reveal that even on unseen DFA languages with only three states, state-of-the-art LLMs underperform simple n-gram models. These findings indicate that LLMs lack generalizable, grammar-based language modeling capacity: their generalization is heavily reliant on surface-level statistical patterns in training data rather than abstract syntactic principles or universal linguistic regularities. The results challenge the assumption that LLMs implicitly acquire formal language competence through scale alone.
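For concreteness, the recognition setup can be sketched as follows: sample a small random DFA and test string membership against it. This is an illustrative construction under assumed choices (uniform random transitions, each state accepting with probability 1/2, a binary alphabet), not necessarily the paper's exact sampling procedure.

```python
import random

def sample_dfa(num_states=3, alphabet=("a", "b"), seed=None):
    """Sample a random DFA: uniformly random transitions, and each state
    marked accepting with probability 1/2. Illustrative construction only;
    the paper's sampling procedure may differ."""
    rng = random.Random(seed)
    delta = {(s, c): rng.randrange(num_states)
             for s in range(num_states) for c in alphabet}
    accepting = {s for s in range(num_states) if rng.random() < 0.5}
    return delta, accepting

def accepts(delta, accepting, string, start=0):
    """Run the DFA on `string`; the string is in the language iff the run
    ends in an accepting state."""
    state = start
    for ch in string:
        state = delta[(state, ch)]
    return state in accepting

# Recognition task: given examples of a freshly sampled language, decide
# membership for new strings.
delta, accepting = sample_dfa(seed=42)
verdict = accepts(delta, accepting, "abba")  # True/False depends on the sampled DFA
```

Because the DFA is drawn fresh at evaluation time, no model can have memorized the resulting language from pretraining data.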
📝 Abstract
Can LLMs pick up language structure from examples? Prior work seems to indicate yes, as pretrained models repeatedly demonstrate the ability to adapt to new language structures and vocabularies. However, this line of research typically considers languages that are present within common pretraining datasets, or that otherwise share notable similarities with these seen languages. In contrast, in this work we measure models' language understanding capacity while circumventing the risk of dataset recall. We parameterize large families of language tasks recognized by deterministic finite automata (DFAs), and can thus sample novel language reasoning problems to fairly evaluate LLMs regardless of their training data. We find that, even in the strikingly simple setting of 3-state DFAs, LLMs underperform unparameterized n-gram models on both language recognition and synthesis tasks. These results suggest that LLMs struggle to match the ability of basic language models in recognizing and reasoning over languages that are sufficiently distinct from the ones they see at training time, underscoring the distinction between learning individual languages and possessing a general theory of language.
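For intuition on the kind of parameter-free n-gram baseline referenced above, one minimal version judges a string in-language iff every bigram it contains (with boundary markers) also appeared in the positive examples. This is an assumed construction for illustration, not necessarily the paper's exact baseline.

```python
def bigram_recognizer(positive_examples):
    """Build a parameter-free bigram recognizer from positive examples.
    Strings are padded with ^ (start) and $ (end) markers; a query string
    is judged in-language iff all of its bigrams were observed in training.
    Illustrative sketch; the paper's n-gram setup may differ."""
    seen = set()
    for s in positive_examples:
        padded = "^" + s + "$"
        seen.update(zip(padded, padded[1:]))

    def recognize(query):
        padded = "^" + query + "$"
        return all(bg in seen for bg in zip(padded, padded[1:]))

    return recognize

recognize = bigram_recognizer(["ab", "abab"])
recognize("ababab")  # every bigram was seen in the examples
recognize("ba")      # starts with an unseen bigram (^, b)
```

Despite having no trained parameters at all, recognizers of this kind serve as the comparison point that LLMs fail to match on freshly sampled DFA languages.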