Locally Typical Sampling

📅 2022-02-01
🏛️ Transactions of the Association for Computational Linguistics
📈 Citations: 88
Influential: 11
🤖 AI Summary
Existing probabilistic language generators achieve strong performance on metrics like perplexity but often produce text lacking coherence, fluency, and diversity. To address this, we propose Locally Typical Sampling, a decoding strategy that formalizes principles of efficiency and robustness from human linguistic communication as an information-theoretic criterion based on the expected information content, i.e., the conditional entropy. At each decoding step, the method retains only tokens whose information content (negative log-probability) lies close to the conditional entropy of the model's current predictive distribution, yielding a lightweight, adaptive probability truncation. Unlike prior methods, it requires no additional training and introduces only a single truncation hyperparameter, analogous to p in nucleus sampling. Empirical evaluation on abstractive summarization and story generation shows that Locally Typical Sampling consistently reduces degenerate repetition while maintaining fluency and coherence comparable to nucleus (top-p) and top-k sampling. Results are validated through both automatic metrics and human evaluation.
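The truncation step described above can be sketched in plain Python. This is a minimal illustration, not the paper's reference implementation: the function name `locally_typical_filter` and the mass parameter `tau` are assumed names for the idea of keeping the tokens whose surprisal is closest to the conditional entropy until a target probability mass is covered.

```python
import math

def locally_typical_filter(probs, tau=0.95):
    """Keep the tokens whose surprisal (-log p) is closest to the
    conditional entropy of `probs`, until their cumulative mass
    reaches `tau`, then renormalize over the kept set.

    `probs`: full next-token distribution (list of probabilities).
    `tau`: target probability mass (assumed hyperparameter name).
    Returns a dict mapping kept token index -> renormalized probability.
    """
    # Conditional entropy H = -sum_y p(y) log p(y)
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    # Rank tokens by |surprisal - H|, smallest distance first
    ranked = sorted(
        (abs(-math.log(p) - entropy), i, p)
        for i, p in enumerate(probs) if p > 0
    )
    kept, mass = [], 0.0
    for _, i, p in ranked:
        kept.append(i)
        mass += p
        if mass >= tau:
            break
    # Renormalize so the truncated distribution sums to 1
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}
```

Note that, unlike top-k (fixed count) or nucleus sampling (highest-probability mass), the kept set here is centered on the entropy, so a very high-probability token can be excluded when its surprisal is far below the expected information content.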
📝 Abstract
Today’s probabilistic language generators fall short when it comes to producing coherent and fluent text despite the fact that the underlying models perform well under standard metrics (e.g., perplexity). This discrepancy has puzzled the language generation community for the last few years. In this work, we posit that the abstraction of natural language generation as a discrete stochastic process—which allows for an information-theoretic analysis—can provide new insights into the behavior of probabilistic language generators, for example, why high-probability texts can be dull or repetitive. Humans use language as a means of communicating information, aiming to do so in a simultaneously efficient and error-minimizing manner; in fact, psycholinguistics research suggests humans choose each word in a string with this subconscious goal in mind. We formally define the set of strings that meet this criterion: Those for which each word has an information content close to the expected information content, namely, the conditional entropy of our model. We then propose a simple and efficient procedure for enforcing this criterion when generating from probabilistic models, which we call locally typical sampling. Automatic and human evaluations show that, in comparison to nucleus and top-k sampling, locally typical sampling offers competitive performance (in both abstractive summarization and story generation) in terms of quality while consistently reducing degenerate repetitions.
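The criterion the abstract describes can be stated compactly. A sketch in standard notation (vocabulary V, prefix y_{<t}), which may differ from the paper's exact symbols:

```latex
H_t \;=\; -\sum_{y \in \mathcal{V}} p\bigl(y \mid \mathbf{y}_{<t}\bigr)\,
          \log p\bigl(y \mid \mathbf{y}_{<t}\bigr),
\qquad
\bigl|\, -\log p\bigl(y \mid \mathbf{y}_{<t}\bigr) - H_t \,\bigr| \;\le\; \varepsilon .
```

Here H_t is the conditional entropy (the expected information content) at step t, and a token y is locally typical when its surprisal falls within ε of H_t; decoding then samples only from the tokens closest to H_t whose cumulative mass reaches a chosen threshold.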
Problem

Research questions and friction points this paper is trying to address.

Improving coherence and fluency in probabilistic language generation
Explaining why high-probability text can be dull or repetitive
Grounding decoding in the efficiency and error-minimization principles of human communication
Innovation

Methods, ideas, or system contributions that make the work stand out.

Locally typical sampling consistently reduces degenerate repetition
Information-theoretic criterion (distance of token surprisal from conditional entropy) guides word selection
Lightweight truncation procedure requiring no additional training