🤖 AI Summary
Large language models (LLMs) exhibit significant performance degradation on non-definitional queries (e.g., exemplification, paraphrasing) and demonstrate a strong bias toward high-frequency ("head") concepts over low-frequency ("tail") ones. Method: We propose TrackList, a systematic analytical framework, and introduce RefoMed-EN, a curated English medical dataset, to evaluate LLMs across diverse linguistic query types and the knowledge frequency spectrum. Using syntactic and semantic similarity metrics, embedding-space analysis, and statistical correlation, we quantify generative preferences and limitations. Contribution/Results: We find that LLMs perform best on definitional tasks and worst on exemplification; they strongly favor rephrasing head knowledge dominant in pretraining data while failing to accurately generate tail-domain expertise. This work is the first to jointly model knowledge frequency and query type, providing empirical evidence and methodological foundations for diagnosing LLM knowledge coverage biases and informing improvements in pretraining data composition.
📝 Abstract
Large Language Models (LLMs) have proven efficient at giving definition-type answers to user queries. While producing other types of answers, such as examples and paraphrases, is an easy task for humans, LLMs struggle to answer correctly for query types other than definitions. In this study, we evaluated this drop in performance using TrackList, a fine-grained linguistic and statistical analysis pipeline that investigates the impact of pre-training data on LLMs' answers to diverse linguistic queries. We also introduce RefoMed-EN, an English dataset of 6170 human-annotated medical terms alongside their corresponding definitions, denominations, exemplifications, explanations, or paraphrases. We studied whether a concept's high frequency (head) or low frequency (tail) impacts the language model's performance. We evaluated the quality of the LLMs' output using syntactic and semantic similarity metrics, statistical correlations, and embeddings. Results showed that LLM performance is highest on definition-type questions and lowest on exemplification-type questions. Additionally, we showed that for definition-type questions, large language models tend to paraphrase popular, frequent knowledge more and tail, technical knowledge less, especially in expert texts.
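The abstract mentions scoring model answers with syntactic and semantic similarity metrics over embeddings. As a hedged illustration only (not the paper's actual TrackList pipeline), the sketch below pairs a character-level surface similarity with a cosine similarity over toy bag-of-words vectors; the `reference` and `candidate` strings are hypothetical examples:

```python
# Illustrative sketch of two answer-scoring metrics, assuming hypothetical
# reference/candidate strings. Real pipelines would use learned embeddings.
import math
from collections import Counter
from difflib import SequenceMatcher

def syntactic_similarity(a: str, b: str) -> float:
    """Surface-level similarity: character-overlap ratio in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def cosine_similarity(u: list[float], v: list[float]) -> float:
    """Cosine of the angle between two vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(y * y for y in v))
    return dot / (norm_u * norm_v)

def bow_vectors(a: str, b: str) -> tuple[list[float], list[float]]:
    """Toy bag-of-words 'embeddings' over the shared vocabulary."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    vocab = sorted(set(ca) | set(cb))
    return [float(ca[w]) for w in vocab], [float(cb[w]) for w in vocab]

reference = "Hypertension is abnormally high blood pressure."
candidate = "Hypertension means blood pressure that is abnormally elevated."

surface = syntactic_similarity(reference, candidate)
u, v = bow_vectors(reference, candidate)
semantic = cosine_similarity(u, v)
```

Comparing the two scores per query type is one simple way to observe the paraphrasing behavior the abstract describes: a low surface score with a high semantic score suggests the model paraphrased rather than repeated the source wording.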