A Fictional Q&A Dataset for Studying Memorization and Knowledge Acquisition

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates two distinct mechanisms in language model training, factual (semantic) memory and sequential (verbatim) memory, and how they can be decoupled. To this end, it introduces the first synthetic fictional Q&A dataset specifically designed to isolate these memory types: a controllable fictional-event generation paradigm eliminates interference from real-world knowledge, enabling systematic differentiation between factual knowledge acquisition and literal text repetition. Methodologically, the approach combines synthetic data construction, controlled text modeling, training-trajectory tracking, and empirical evaluation of memory behavior. Experiments show that the dataset effectively disentangles the two memory pathways; they also reveal substantial memory biases in current models, such as overreliance on superficial lexical cues, and generalization bottlenecks in transferring learned fictional facts. The study establishes a novel, reproducible paradigm and benchmark for probing the memory architecture of large language models.

📝 Abstract
When language models are trained on textual data, they acquire both knowledge of the structure of language and knowledge of facts about the world. At inference time, their knowledge of facts can be leveraged to solve interesting problems and perform useful knowledge work for users. It is well known that language models can memorize long sequences from their training data verbatim. However, it is much less well understood how language models memorize facts seen during training. In this work, we propose a new dataset to specifically empower researchers to study the dual processes of fact memorization and verbatim sequence memorization. The dataset consists of synthetically generated, webtext-like documents about fictional events, as well as question-answer pairs about those events. We conduct training experiments showing how synthetic data about fictional events can be effective in teasing apart different forms of memorization. We also document the challenges of building realistic, fictional synthetic data.
Problem

Research questions and friction points this paper is trying to address.

Studying how language models memorize facts during training
Differentiating fact memorization from verbatim sequence memorization
Creating realistic synthetic data for memorization research
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic Q&A dataset for memorization study
Fictional events to separate memorization types
Webtext-like documents for realistic simulation
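The generation paradigm outlined above could be sketched roughly as follows. Every entity name, template, and helper function here is hypothetical and illustrative only; the paper's actual pipeline is not described at this level of detail. The core idea the sketch demonstrates is that the same underlying fictional facts are rendered twice: once as a webtext-like document (which the model may memorize verbatim) and once as Q&A pairs with a different surface form (which probe fact recall).

```python
import random

# Pools of invented entities; fictional names and future years help ensure
# the generated facts cannot collide with real-world knowledge in pretraining data.
PEOPLE = ["Mira Voss", "Talen Okafor", "Juno Alvarez"]
CITIES = ["Karvel", "Ostremont", "Livenza"]
EVENTS = ["glass-blowing festival", "night kite regatta", "lantern parade"]

def make_event(rng: random.Random) -> dict:
    """Sample one fictional event as a set of atomic facts."""
    return {
        "person": rng.choice(PEOPLE),
        "city": rng.choice(CITIES),
        "event": rng.choice(EVENTS),
        "year": rng.randint(2031, 2049),
    }

def render_document(e: dict) -> str:
    """Render the facts as a webtext-like passage (the training document)."""
    return (
        f"In {e['year']}, the town of {e['city']} hosted a {e['event']} "
        f"organized by {e['person']}, drawing visitors from across the region."
    )

def render_qa_pairs(e: dict) -> list[tuple[str, str]]:
    """Render the same facts as Q&A pairs whose wording differs from the document."""
    return [
        (f"Who organized the {e['event']} in {e['city']}?", e["person"]),
        (f"In what year did {e['city']} host a {e['event']}?", str(e["year"])),
    ]

rng = random.Random(0)
event = make_event(rng)
doc = render_document(event)
qa = render_qa_pairs(event)
```

Because the document and the Q&A pairs share facts but not phrasing, correct answers at evaluation time indicate factual recall rather than verbatim sequence completion, which is the separation the dataset is built to enable.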