🤖 AI Summary
This study investigates how large language models (LLMs) concurrently memorize noisy labels and retain generalizable reasoning capabilities. Using controlled synthetic datasets (four-digit addition and two-hop relational reasoning), we combine neural activation analysis with targeted intervention experiments. We find that noisy-label memorization does not overwrite or disrupt the underlying reasoning mechanisms; instead, it leverages intermediate reasoning steps, relying on distributed neural representations and outlier-based heuristics to fine-tune pre-existing computational patterns. Experiments demonstrate that perturbing the reasoning process severely impairs noisy-label recall, whereas intermediate reasoning computations remain robust during memory retrieval. Our work identifies a “benign memorization” phenomenon: memorization and reasoning coexist synergistically rather than competitively, challenging the conventional memory-reasoning dichotomy. This offers a mechanistic account of model robustness and generalization boundaries, suggesting that memorization can be scaffolded upon, rather than antagonistic to, structured reasoning pathways.
📝 Abstract
Large language models readily memorize arbitrary training instances, such as label noise, yet they perform strikingly well on reasoning tasks. In this work, we investigate how language models memorize label noise, and why such memorization in many cases does not heavily affect generalizable reasoning capabilities. Using two controllable synthetic reasoning datasets with noisy labels, four-digit addition (FDA) and two-hop relational reasoning (THR), we discover that memorization relies on generalizable reasoning mechanisms: models continue to compute intermediate reasoning outputs even when retrieving memorized noisy labels, and intervening on the reasoning process adversely affects memorization. We further show that memorization operates through distributed encoding, i.e., aggregating various inputs and intermediate results, rather than building a look-up mechanism from inputs to noisy labels. Moreover, our FDA case study reveals that memorization occurs via outlier heuristics, where existing neuron activation patterns are slightly shifted to fit noisy labels. Together, our findings suggest that memorization of label noise in language models builds on, rather than overrides, the underlying reasoning mechanisms, shedding light on the intriguing phenomenon of benign memorization.
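To make the experimental setup concrete, the following is a minimal sketch of how a noisy-label four-digit addition (FDA) dataset like the one described above could be constructed. The prompt format, noise rate, and corruption scheme here are illustrative assumptions, not the paper's exact protocol: a fraction of labels is deliberately replaced with provably wrong sums, which the model can only "fit" by memorization.

```python
import random

def make_fda_dataset(n, noise_rate=0.1, seed=0):
    """Build a toy four-digit addition dataset with a fraction of noisy labels.

    Hypothetical sketch: the exact FDA format used in the paper is assumed,
    not quoted. Each example is a prompt "a+b=", a (possibly corrupted)
    integer label, and a flag marking whether the label is noisy.
    """
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        a, b = rng.randint(1000, 9999), rng.randint(1000, 9999)
        label = a + b
        noisy = rng.random() < noise_rate
        if noisy:
            # Shift the label by a nonzero random offset so it is provably wrong.
            label += rng.choice([-1, 1]) * rng.randint(1, 500)
        data.append({"prompt": f"{a}+{b}=", "label": label, "noisy": noisy})
    return data

dataset = make_fda_dataset(1000, noise_rate=0.1)
```

Because clean and noisy examples are drawn from the same input distribution, any gap in how the model handles them can be attributed to memorization rather than to the inputs themselves.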