Cram Less to Fit More: Training Data Pruning Improves Memorization of Facts

πŸ“… 2026-04-09
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Large language models suffer from limited parametric memory capacity, leading to hallucinations and suboptimal performance on knowledge-intensive tasks. This work formulates factual memory as an information-theoretic, capacity-constrained problem and introduces a novel data pruning method that relies solely on training loss. By reducing the number of distinct facts in the training data and balancing their frequency distribution, the approach enhances the model's encoding efficiency for factual knowledge. Notably, without increasing model size, this method enables GPT2-Small to retain 1.3 times more entity-related facts, achieving performance comparable to a model ten times larger and substantially raising the upper bound of factual recall.
πŸ“ Abstract
Large language models (LLMs) can struggle to memorize factual knowledge in their parameters, often leading to hallucinations and poor performance on knowledge-intensive tasks. In this paper, we formalize fact memorization from an information-theoretic perspective and study how training data distributions affect fact accuracy. We show that fact accuracy is suboptimal (below the capacity limit) whenever the amount of information contained in the training data facts exceeds model capacity. This is further exacerbated when the fact frequency distribution is skewed (e.g. a power law). We propose data selection schemes based on the training loss alone that aim to limit the number of facts in the training data and flatten their frequency distribution. On semi-synthetic datasets containing high-entropy facts, our selection method effectively boosts fact accuracy to the capacity limit. When pretraining language models from scratch on an annotated Wikipedia corpus, our selection method enables a GPT2-Small model (110m parameters) to memorize 1.3X more entity facts compared to standard training, matching the performance of a 10X larger model (1.3B parameters) pretrained on the full dataset.
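The abstract describes the selection scheme only at a high level: use training loss alone to limit the number of distinct facts and flatten their frequency distribution. As an illustration of what a loss-only pruning rule could look like, here is a minimal sketch; the quantile thresholds, the keep-a-middle-band heuristic, and the function name are assumptions for illustration, not the paper's actual algorithm:

```python
import numpy as np

def prune_by_loss(losses, low_q=0.2, high_q=0.9):
    """Keep examples whose per-example training loss falls in a middle band.

    Hypothetical sketch: very low loss suggests an over-represented,
    already-memorized fact, so dropping some of those flattens the
    frequency distribution; very high loss suggests a fact that is
    unlikely to fit within model capacity, so dropping it limits the
    number of distinct facts the model must encode.
    """
    losses = np.asarray(losses, dtype=float)
    lo, hi = np.quantile(losses, [low_q, high_q])
    # Indices of examples to keep (loss within [lo, hi]).
    return np.flatnonzero((losses >= lo) & (losses <= hi))

# Toy example: ten per-example losses from one training pass.
losses = [0.1, 0.2, 0.5, 0.6, 0.7, 0.8, 1.0, 1.2, 3.0, 5.0]
kept = prune_by_loss(losses)
```

In this toy run the rule drops the two lowest-loss examples (likely duplicated, well-memorized facts) and the highest-loss one (likely beyond capacity), keeping the middle band for retraining.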
Problem

Research questions and friction points this paper is trying to address.

fact memorization
large language models
training data pruning
knowledge-intensive tasks
hallucinations
Innovation

Methods, ideas, or system contributions that make the work stand out.

data pruning
fact memorization
training data selection
information-theoretic analysis
language model pretraining
πŸ”Ž Similar Papers
No similar papers found.