Reason to Rote: Rethinking Memorization in Reasoning

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how large language models (LLMs) can memorize noisy labels while retaining generalizable reasoning capabilities. Using two controlled synthetic datasets, four-digit addition and two-hop relational reasoning, we combine neural activation analysis with targeted intervention experiments. We find that noisy-label memorization does not overwrite or disrupt the underlying reasoning mechanisms; instead, it leverages intermediate reasoning steps, relying on distributed neural representations and outlier-based heuristics to slightly adjust pre-existing computational patterns. Experiments show that perturbing the reasoning process severely impairs recall of noisy labels, whereas intermediate reasoning computations remain robust during memory retrieval. Our work identifies a "benign memorization" phenomenon: memorization and reasoning coexist synergistically rather than competitively, challenging the conventional memory-reasoning dichotomy. This offers a mechanistic account of model robustness and generalization boundaries, suggesting that memorization is scaffolded on, rather than antagonistic to, structured reasoning pathways.

📝 Abstract
Large language models readily memorize arbitrary training instances, such as label noise, yet they perform strikingly well on reasoning tasks. In this work, we investigate how language models memorize label noise, and why such memorization in many cases does not heavily affect generalizable reasoning capabilities. Using two controllable synthetic reasoning datasets with noisy labels, four-digit addition (FDA) and two-hop relational reasoning (THR), we discover a reliance of memorization on generalizable reasoning mechanisms: models continue to compute intermediate reasoning outputs even when retrieving memorized noisy labels, and intervening on reasoning adversely affects memorization. We further show that memorization operates through distributed encoding, i.e., aggregating various inputs and intermediate results, rather than building a look-up mechanism from inputs to noisy labels. Moreover, our FDA case study reveals memorization occurs via outlier heuristics, where existing neuron activation patterns are slightly shifted to fit noisy labels. Together, our findings suggest that memorization of label noise in language models builds on, rather than overrides, the underlying reasoning mechanisms, shedding light on the intriguing phenomenon of benign memorization.
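As a concrete picture of the setup, the FDA task with label noise can be sketched as follows. This is a hypothetical reconstruction: the paper's exact prompt formatting, noise rate, and corruption scheme may differ.

```python
import random

def fda_example(rng, noisy=False):
    """One four-digit addition (FDA) instance; optionally corrupt the label."""
    a, b = rng.randint(1000, 9999), rng.randint(1000, 9999)
    label = a + b
    if noisy:
        wrong = label
        while wrong == label:                 # any answer except the true sum
            wrong = rng.randint(2000, 19998)
        label = wrong
    return f"{a}+{b}=", str(label)

# A small training set with roughly 10% noisy labels (rate is illustrative).
rng = random.Random(0)
data = [fda_example(rng, noisy=rng.random() < 0.1) for _ in range(1000)]
```

Training on such a mixture lets one ask whether the model answers the corrupted instances by look-up or by reusing its addition circuit.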
Problem

Research questions and friction points this paper is trying to address.

How language models memorize noisy labels without impairing reasoning
Mechanisms behind memorization via distributed encoding and outlier heuristics
Relationship between memorization and underlying reasoning processes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Memorization relies on reasoning mechanisms
Distributed encoding aggregates inputs and results
Outlier heuristics shift neuron activation patterns
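The points above can be caricatured in a toy model. This is not the paper's actual mechanism, just an illustration of the claim that a memorized noisy label is stored as a small shift on top of an intact reasoning computation, so intervening on reasoning also breaks recall.

```python
def reason(a, b):
    """The generalizable 'circuit': computes the true sum."""
    return a + b

# "Training" on one corrupted example stores only a small correction (delta)
# relative to the reasoned output, not the label itself.
noisy_example, noisy_label = (4321, 1234), 9999   # true sum is 5555
delta = noisy_label - reason(*noisy_example)       # +4444

def predict(a, b, reasoner=reason):
    y = reasoner(a, b)                 # intermediate reasoning always runs
    if (a, b) == noisy_example:
        y += delta                     # outlier heuristic: a shift, not a look-up
    return y

print(predict(1000, 2000))             # generalization intact: 3000
print(predict(*noisy_example))         # memorized noisy label: 9999

# Intervening on reasoning also breaks memorization, echoing the paper:
broken = predict(*noisy_example, reasoner=lambda a, b: 0)
print(broken)                          # 4444, not the memorized 9999
```

Because the correction is stored relative to the reasoned output, memorization here depends on reasoning rather than replacing it.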