🤖 AI Summary
Despite the dominance of neural large language models (LLMs), classical n-gram models remain insightful but are limited in practice by small, fixed context lengths and the cost of scaling count tables. Method: This paper introduces the ∞-gram framework, an n-gram LM with unbounded n and backoff, trained at the same data scale as neural LLMs (5 trillion tokens). It is implemented with a suffix-array-based engine, infini-gram, which computes ∞-gram (and arbitrary n-gram) probabilities on the fly with millisecond-level latency rather than pre-computing count tables. Contribution/Results: The ∞-gram LM reaches 47% next-token prediction accuracy, can complement neural LLMs to greatly reduce their perplexity, and, when applied to machine-generated text, reveals irregularities in machine vs. ∞-gram agreement with respect to suffix length, pointing to deficiencies in neural LLM pretraining and in the positional embeddings of Transformers.
📝 Abstract
Are $n$-gram language models still relevant in this era of neural large language models (LLMs)? Our answer is yes, and we showcase their value in both text analysis and improving neural LLMs. We do so by modernizing $n$-gram LMs in two aspects. First, we train them at the same data scale as neural LLMs -- 5 trillion tokens. This is the largest $n$-gram LM ever built. Second, existing $n$-gram LMs use small $n$, which hinders their performance; we instead allow $n$ to be arbitrarily large by introducing a new $\infty$-gram LM with backoff. Instead of pre-computing $n$-gram count tables (which would be very expensive), we develop an engine named infini-gram -- powered by suffix arrays -- that can compute $\infty$-gram (as well as $n$-gram with arbitrary $n$) probabilities with millisecond-level latency. The $\infty$-gram framework and infini-gram engine enable us to conduct many novel and interesting analyses of human-written and machine-generated text: we find that the $\infty$-gram LM has fairly high accuracy for next-token prediction (47%), and can complement neural LLMs to greatly reduce their perplexity. When analyzing machine-generated text, we also observe irregularities in the machine--$\infty$-gram agreement level with respect to the suffix length, which indicates deficiencies in neural LLM pretraining and in the positional embeddings of Transformers.
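To make the mechanism concrete, below is a minimal Python sketch of the ∞-gram backoff estimate over a toy in-memory corpus. This is not the infini-gram engine or its API: the function names (`build_suffix_array`, `infgram_next_token_probs`) and the naive suffix-array construction are illustrative assumptions, and the real engine operates over on-disk suffix arrays of trillions of tokens. The sketch only shows the core idea: find the longest suffix of the prompt that occurs in the corpus, then estimate the next-token distribution from the tokens that follow its occurrences.

```python
# A minimal, toy sketch of the ∞-gram idea, assuming a small in-memory corpus.
# Names and structure here are illustrative, not the paper's actual API.
from collections import Counter


def build_suffix_array(tokens):
    """Sort all suffix start positions lexicographically (naive O(n^2 log n))."""
    return sorted(range(len(tokens)), key=lambda i: tokens[i:])


def _bound(tokens, sa, pattern, upper):
    """First suffix-array index whose length-m prefix is >= pattern (> if upper)."""
    m = len(pattern)
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        prefix = tokens[sa[mid]:sa[mid] + m]
        if prefix < pattern or (upper and prefix == pattern):
            lo = mid + 1
        else:
            hi = mid
    return lo


def infgram_next_token_probs(tokens, sa, context):
    """∞-gram-style estimate of P(next token | context): back off to the
    longest suffix of the context that occurs in the corpus, then count
    which tokens follow its occurrences."""
    lo = hi = 0
    suffix = []
    for k in range(len(context), -1, -1):
        suffix = context[len(context) - k:]
        lo = _bound(tokens, sa, suffix, upper=False)
        hi = _bound(tokens, sa, suffix, upper=True)
        if hi > lo:  # this suffix occurs at least once; stop backing off
            break
    counts = Counter()
    for i in range(lo, hi):
        pos = sa[i] + len(suffix)
        if pos < len(tokens):
            counts[tokens[pos]] += 1
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}


corpus = "the cat sat on the mat . the cat sat on the hat .".split()
sa = build_suffix_array(corpus)
print(infgram_next_token_probs(corpus, sa, "he said that on the".split()))
# -> {'mat': 0.5, 'hat': 0.5}  (backs off to the longest matching suffix, "on the")
```

The key property the sketch relies on is that all corpus positions sharing a given prefix form a contiguous block in the sorted suffix array, so each count query is a pair of binary searches rather than a lookup in a pre-computed n-gram table; this is why counts for arbitrarily long n-grams can be served on demand.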