Understanding Contextual Recall in Transformers: How Finetuning Enables In-Context Reasoning over Pretraining Knowledge

📅 2026-03-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates why pretrained Transformer models alone fail to achieve contextual recall: the ability to retrieve and apply pretrained knowledge in response to novel prompt formats, guided by in-context examples. To address this, the authors introduce a controlled synthetic data framework that combines task-oriented fine-tuning, attention-mechanism analysis, and low-dimensional representation analysis to systematically compare the roles of pretraining and fine-tuning. Their work reveals for the first time that targeted fine-tuning can activate latent contextual reasoning mechanisms within the model, leading to the emergence of low-dimensional latent codes for attribute types that enable generalization to unseen subjects. The findings demonstrate that pretraining alone is insufficient for contextual recall and provide a constructive account, using a purely attention-based Transformer, of the transition from factual memorization to contextual recall.

📝 Abstract
Transformer-based language models excel at in-context learning (ICL), where they can adapt to new tasks based on contextual examples, without parameter updates. In a specific form of ICL, which we refer to as *contextual recall*, models pretrained on open-ended text leverage pairwise examples to recall specific facts in novel prompt formats. We investigate whether contextual recall emerges from pretraining alone, what finetuning is required, and what mechanisms drive the necessary representations. For this, we introduce a controlled synthetic framework where pretraining sequences consist of subject-grammar-attribute tuples, with attribute types tied to grammar statistics. We demonstrate that while such pretraining successfully yields factual knowledge, it is insufficient for contextual recall: models fail to implicitly infer attribute types when the grammar statistics are removed in ICL prompts. However, we show that finetuning on tasks requiring implicit inference, distinct from the ICL evaluation, using a subset of subjects, triggers the emergence of contextual recall across all subjects. This transition is accompanied by the formation of low-dimensional latent encodings of the shared attribute type. For mechanistic insight, we derive a construction for an attention-only transformer that replicates the transition from factual to contextual recall, corroborated by empirical validation.
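The abstract's synthetic setup can be pictured with a small sketch. The following is a hypothetical illustration, not the authors' actual data generator: token names, vocabulary sizes, and the two attribute types are all assumptions. It shows the key contrast the paper describes: pretraining sequences where a grammar token statistically signals the attribute type, versus ICL prompts where that grammar token is removed and the type must be inferred from example pairs.

```python
import random

# Assumed toy vocabulary (not from the paper): subjects, grammar tokens,
# and two attribute types; each grammar token signals one attribute type.
SUBJECTS = [f"s{i}" for i in range(8)]
GRAMMARS = {"g_color": "color", "g_size": "size"}  # grammar token -> attribute type
ATTRIBUTES = {
    "color": ["red", "blue", "green"],
    "size": ["small", "large"],
}

random.seed(0)
# Fixed facts to be memorized during pretraining: one value per subject
# and attribute type.
FACTS = {
    s: {t: random.choice(vals) for t, vals in ATTRIBUTES.items()}
    for s in SUBJECTS
}

def pretraining_sequence():
    """Sample a subject-grammar-attribute tuple; here the grammar token
    explicitly reveals which attribute type the final token refers to."""
    s = random.choice(SUBJECTS)
    g, attr_type = random.choice(list(GRAMMARS.items()))
    return [s, g, FACTS[s][attr_type]]

def icl_prompt(attr_type, n_examples=2, query="s7"):
    """An ICL prompt with the grammar token removed: the shared attribute
    type must now be inferred implicitly from the example pairs."""
    examples = [s for s in SUBJECTS if s != query][:n_examples]
    tokens = []
    for s in examples:
        tokens += [s, FACTS[s][attr_type]]
    tokens.append(query)  # model should complete with FACTS[query][attr_type]
    return tokens

print(pretraining_sequence())  # e.g. [subject, grammar token, attribute]
print(icl_prompt("color"))     # subject-attribute pairs, no grammar token
```

In this reading, a model that only saw `pretraining_sequence()`-style data learns the facts but relies on the grammar token to select the attribute type, which is why it fails on `icl_prompt()`-style inputs until finetuning teaches it to infer the type from the examples themselves.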
Problem

Research questions and friction points this paper is trying to address.

contextual recall
in-context learning
transformer
pretraining
finetuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

contextual recall
in-context learning
transformer mechanisms
latent representations
synthetic pretraining