Long-Tail Crisis in Nearest Neighbor Language Models

πŸ“… 2025-03-28
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This paper identifies a fundamental failure of kNN-LM in predicting low-frequency (long-tail) target words: although the model effectively retrieves long-tail contextual neighbors, it fails to improve probability estimates for such tokens and instead predominantly boosts predictions for high-frequency words, even introducing a negative bias for low-frequency ones. This challenges the prevailing assumption that explicit memory inherently benefits long-tail modeling. Method: a frequency-aware analysis that integrates retrieval-accuracy evaluation, token-frequency statistics over the datastore, product-quantization error modeling, and perplexity-decomposition experiments. Contribution/Results: the analysis reveals a consistent absence of predictive gain for low-frequency target words, exposing a structural limitation of kNN-LM in out-of-distribution long-tail modeling. These findings provide diagnostic insights for advancing memory-augmented language models, particularly for addressing distributional skew and improving tail coverage.
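The perplexity decomposition used in the analysis can be sketched as follows: target tokens are binned by their frequency in the datastore, and perplexity is computed per bin so that gains on high- vs. low-frequency tokens can be separated. This is a minimal illustration with hypothetical data, not the paper's code; the function name and bin boundaries are made up.

```python
import math
from collections import Counter

def perplexity_by_frequency_bin(targets, probs, train_counts, boundaries=(10, 100, 1000)):
    """Decompose perplexity over target tokens into frequency bins.

    targets:      list of target tokens
    probs:        model probability assigned to each target
    train_counts: Counter of token frequencies in the datastore
    boundaries:   edges separating low- from high-frequency bins
    """
    bins = {}  # bin index -> list of negative log-likelihoods
    for tok, p in zip(targets, probs):
        freq = train_counts[tok]
        b = sum(freq >= edge for edge in boundaries)  # 0 = rarest bin
        bins.setdefault(b, []).append(-math.log(p))
    # Per-bin perplexity: exp of the mean negative log-likelihood
    return {b: math.exp(sum(nll) / len(nll)) for b, nll in bins.items()}

counts = Counter({"the": 5000, "zymurgy": 2})
ppl = perplexity_by_frequency_bin(
    targets=["the", "zymurgy"], probs=[0.5, 0.01], train_counts=counts
)
# ppl maps bin 3 (frequent) and bin 0 (rare) to their perplexities
```

A gap that persists between the rare-token bins of the base LM and the interpolated kNN-LM is exactly the missing long-tail gain the paper reports.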

πŸ“ Abstract
The $k$-nearest-neighbor language model ($k$NN-LM), one of the retrieval-augmented language models, improves perplexity on a given text by directly accessing a large datastore built from any text data during inference. A widely held hypothesis for the success of $k$NN-LM is that its explicit memory, i.e., the datastore, enhances predictions for long-tail phenomena. However, prior works have primarily shown its ability to retrieve long-tail contexts, leaving its performance in estimating the probabilities of long-tail target tokens during inference underexplored. In this paper, we investigate the behavior of $k$NN-LM on low-frequency tokens, examining prediction probability, retrieval accuracy, token distribution in the datastore, and the approximation error of product quantization. Our experimental results reveal that $k$NN-LM does not improve prediction performance for low-frequency tokens but mainly benefits high-frequency tokens, regardless of long-tail contexts in the datastore.
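The mechanism the abstract refers to can be sketched as follows: kNN-LM interpolates the base LM distribution with a distribution formed by a softmax over negative distances from the current context vector to its retrieved datastore neighbors, i.e. $p(w \mid c) = \lambda\, p_{k\mathrm{NN}}(w \mid c) + (1 - \lambda)\, p_{\mathrm{LM}}(w \mid c)$. This is a simplified sketch with toy brute-force search; the function name and parameters are illustrative, not the paper's implementation.

```python
import numpy as np

def knn_interpolate(p_lm, query, keys, values, vocab_size, k=4, lam=0.25, temp=1.0):
    """kNN-LM: p(w|c) = lam * p_kNN(w|c) + (1 - lam) * p_LM(w|c).

    p_lm:   base LM distribution over the vocabulary, shape (vocab_size,)
    query:  context vector for the current prediction, shape (d,)
    keys:   datastore context vectors, shape (n, d)
    values: datastore target token ids, shape (n,)
    """
    d2 = np.sum((keys - query) ** 2, axis=1)   # squared L2 distances
    nn = np.argsort(d2)[:k]                    # k nearest neighbors
    w = np.exp(-d2[nn] / temp)
    w /= w.sum()                               # softmax over negative distance
    p_knn = np.zeros(vocab_size)
    np.add.at(p_knn, values[nn], w)            # aggregate weight per target token
    return lam * p_knn + (1 - lam) * p_lm

# Toy usage: 4-token vocabulary, 4 datastore entries
p_lm = np.full(4, 0.25)
keys = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
values = np.array([0, 1, 2, 3])
out = knn_interpolate(p_lm, np.array([0.0, 0.0]), keys, values, vocab_size=4, k=2)
```

The paper's finding is that the $p_{k\mathrm{NN}}$ term, despite being built from retrieved long-tail contexts, concentrates its probability mass on high-frequency target tokens.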
Problem

Research questions and friction points this paper is trying to address.

Investigates kNN-LM's prediction performance on low-frequency (long-tail) tokens
Examines prediction accuracy for long-tail target tokens during inference
Assesses the datastore's impact on high- vs. low-frequency token predictions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frequency-aware analysis combining prediction probability and retrieval accuracy
Joint examination of datastore token distribution and product-quantization approximation error
Perplexity decomposition separating gains on high- vs. low-frequency target tokens
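The product-quantization error analysis in the list above can be illustrated as follows: PQ splits each datastore key into sub-vectors and replaces each sub-vector with its nearest codeword from a small codebook, so every stored key is only an approximation, and the resulting reconstruction error distorts the kNN distances used at retrieval time. This is a toy sketch with random codebooks standing in for trained (k-means) ones; all names are illustrative.

```python
import numpy as np

def pq_quantize(x, codebooks):
    """Encode vector x with product quantization and return its reconstruction.

    codebooks: list of m arrays, each of shape (n_codes, d_sub); x is split
    into m equal sub-vectors, each replaced by its nearest codeword.
    """
    m = len(codebooks)
    subs = np.split(x, m)  # assumes len(x) is divisible by m
    recon = []
    for sub, cb in zip(subs, codebooks):
        idx = np.argmin(np.sum((cb - sub) ** 2, axis=1))  # nearest codeword
        recon.append(cb[idx])
    return np.concatenate(recon)

rng = np.random.default_rng(0)
d, m, n_codes = 8, 2, 16
codebooks = [rng.normal(size=(n_codes, d // m)) for _ in range(m)]
x = rng.normal(size=d)
x_hat = pq_quantize(x, codebooks)
err = np.linalg.norm(x - x_hat)  # the approximation error the paper models
```

Because codebooks are fit to the overall key distribution, rare tokens' context vectors tend to sit far from any codeword, which is one channel through which quantization can hurt long-tail retrieval.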
πŸ”Ž Similar Papers
No similar papers found.