🤖 AI Summary
This work investigates whether language models can simultaneously satisfy *consistency* (avoiding hallucinations by generating only valid strings) and *completeness* (covering all valid strings in the target language, thereby preventing mode collapse). Within the Gold–Angluin statistical learning framework, we give the first rigorous proof that, for most countable language families, a fundamental trade-off exists between consistency and completeness when learning from positive examples alone; standard next-token (autoregressive) prediction cannot achieve both. The analysis shows that negative examples (or post-training feedback that encodes them) are necessary to overcome this limitation. Using tools from computational learning theory and Gold-style learnability analysis, we derive nearly tight sample complexity bounds both with and without the completeness requirement. Crucially, we prove that incorporating negative examples enables reliable generation, free of hallucination and mode collapse, over any countable language family.
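As a rough intuition for why negative examples help, the toy sketch below (not taken from the paper; the helper names and the two candidate languages are illustrative assumptions) runs Gold-style identification by enumeration: with positive examples alone, a strictly larger candidate language can never be ruled out, but a single negative example eliminates it, after which generation from the surviving candidate is both valid and, in the limit, covers the target language.

```python
# Hedged toy illustration of identification by enumeration over a countable
# collection of candidate languages, followed by generation of unseen strings.
from typing import Callable, Iterator, List, Tuple

# A candidate language is a membership test plus an enumerator of its strings.
Language = Tuple[Callable[[str], bool], Callable[[], Iterator[str]]]

def pick_candidate(candidates: List[Language],
                   positives: List[str],
                   negatives: List[str]) -> Language:
    """Return the first enumerated candidate consistent with every labeled example."""
    for member, enumerate_strings in candidates:
        if all(member(s) for s in positives) and not any(member(s) for s in negatives):
            return member, enumerate_strings
    raise ValueError("no candidate is consistent with the data")

def generate(candidate: Language, seen: List[str], budget: int) -> List[str]:
    """Emit up to `budget` unseen strings from the chosen candidate language."""
    member, enumerate_strings = candidate
    out: List[str] = []
    for s in enumerate_strings():
        if s not in seen:
            out.append(s)
            if len(out) == budget:
                break
    return out

if __name__ == "__main__":
    # Two toy candidates over the alphabet {a}:
    #   K1 = all strings a^n,  K2 = even-length strings a^(2n)  (so K2 is a subset of K1).
    K1: Language = (lambda s: set(s) <= {"a"},
                    lambda: ("a" * n for n in range(10**6)))
    K2: Language = (lambda s: set(s) <= {"a"} and len(s) % 2 == 0,
                    lambda: ("a" * n for n in range(0, 10**6, 2)))
    candidates = [K1, K2]        # enumeration order: broader language first

    positives = ["aa", "aaaa"]   # samples from the true language K2
    negatives = ["aaa"]          # a string outside K2

    # With positives alone, K1 survives and generation would hallucinate odd-length
    # strings; the single negative example "aaa" eliminates K1, leaving K2.
    chosen = pick_candidate(candidates, positives, negatives)
    print(generate(chosen, seen=positives, budget=5))
    # -> ['', 'aaaaaa', 'aaaaaaaa', 'aaaaaaaaaa', 'aaaaaaaaaaaa']
```

With only the positive examples, the broader candidate K1 would be selected and the generator would emit strings outside the true language; the lone negative example is what rules K1 out, which mirrors the role the paper assigns to negative feedback.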
📝 Abstract
Specifying all desirable properties of a language model is challenging, but certain requirements seem essential. Given samples from an unknown language, the trained model should produce valid strings not seen in training and be expressive enough to capture the language's full richness. Otherwise, outputting invalid strings constitutes "hallucination," and failing to capture the full range leads to "mode collapse." We ask whether a language model can meet both requirements. We investigate this within a statistical language generation setting building on Gold and Angluin. Here, the model receives random samples from a distribution over an unknown language K, which belongs to a possibly infinite collection of languages. The goal is to generate unseen strings from K. We say the model generates from K with consistency and breadth if, as the training size increases, its output converges to all unseen strings in K. Kleinberg and Mullainathan [KM24] asked whether consistency and breadth in language generation are possible. We answer this negatively: for a large class of language models, including next-token prediction models, this is impossible for most collections of candidate languages. This contrasts with [KM24]'s result that consistent generation without breadth is possible for any countable collection of languages. Our finding highlights that generation with breadth fundamentally differs from generation without breadth. As a byproduct, we establish near-tight bounds on the number of samples needed for generation with or without breadth. Finally, our results offer hope: consistent generation with breadth is achievable for any countable collection of languages when negative examples (strings outside K) are available alongside the positive ones. This suggests that post-training feedback, which encodes negative examples, can be crucial in reducing hallucinations while limiting mode collapse.
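One way to read the consistency and breadth requirements above is the sketch below; the symbols used (S_n for the first n positive samples drawn from the distribution on K, and G_n for the set of strings the model can output after seeing S_n) are assumptions for illustration and may differ from the paper's exact definitions.

```latex
% Consistency and breadth as limit requirements (illustrative notation only):
% S_n = the first n training samples, G_n = strings the model can output after S_n.
\[
  \text{consistency: } \exists\, n_0\ \forall n \ge n_0:\ G_n \subseteq K \setminus S_n,
  \qquad
  \text{breadth: } \exists\, n_0\ \forall n \ge n_0:\ G_n \supseteq K \setminus S_n .
\]
```

Violating the first inclusion corresponds to hallucination (outputting strings outside K); violating the second corresponds to mode collapse (missing valid unseen strings). The paper's negative result says that, from positive examples alone, most countable collections force one of the two violations.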