From Memorization to Reasoning in the Spectrum of Loss Curvature

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of identifying and disentangling memorization from reasoning in Transformer-based models, including language models (LMs) and vision transformers (ViTs), where the two are poorly separable by conventional means. We propose a weight decomposition based on the curvature spectrum of the loss landscape, yielding a label-free, geometric separation of memorization and reasoning components in weight space. We empirically find that downstream tasks such as factual retrieval and arithmetic rely predominantly on low-curvature weight directions, a finding that enables a curvature-aware weight-editing strategy for selectively suppressing non-target memorization. Experiments demonstrate that the approach substantially reduces recitation of memorized data while keeping perplexity stable, outperforming an existing unlearning method. Moreover, the degradation in task performance correlates strongly with how heavily low-curvature components are edited, offering an interpretable geometric perspective on the memorization-reasoning trade-off.

📝 Abstract
We characterize how memorization is represented in transformer models and show that it can be disentangled in the weights of both language models (LMs) and vision transformers (ViTs) using a decomposition based on the loss landscape curvature. This insight is based on prior theoretical and empirical work showing that the curvature for memorized training points is much sharper than for non-memorized ones, meaning that ordering weight components from high to low curvature can reveal this distinction without explicit labels. This motivates a weight-editing procedure that suppresses recitation of untargeted memorized data more effectively than a recent unlearning method (BalancedSubnet), while maintaining lower perplexity. Since the curvature basis has a natural interpretation in terms of shared structure in model weights, we extensively analyze the editing procedure's effect on downstream tasks in LMs, and find that fact retrieval and arithmetic are specifically and consistently negatively affected, even though open-book fact retrieval and general logical reasoning are conserved. We posit that these tasks rely heavily on specialized directions in weight space rather than general-purpose mechanisms, regardless of whether the individual datapoints are memorized. We support this by showing a correspondence between the task data's activation strength on the low-curvature components that we edit out and the drop in task performance after the edit. Our work enhances the understanding of memorization in neural networks, with practical applications towards removing it, and provides evidence for idiosyncratic, narrowly used structures involved in solving tasks like math and fact retrieval.
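The decomposition and edit described above can be pictured with a small NumPy sketch. Everything here is illustrative, not the paper's exact procedure: the Kronecker-factored curvature proxy, the 30% edit threshold, and the random stand-in data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n = 8, 6, 100
W = rng.normal(size=(d_out, d_in))   # one layer's weight matrix

# Kronecker-factored curvature proxy (an assumed, common approximation):
# A ~ input activation covariance, G ~ output-gradient covariance.
X = rng.normal(size=(n, d_in))
Gy = rng.normal(size=(n, d_out))
A = X.T @ X / n
G = Gy.T @ Gy / n

# Eigenbases of the two factors give a curvature-ordered basis for W.
eA, UA = np.linalg.eigh(A)
eG, UG = np.linalg.eigh(G)

# Coefficients of W in that basis; component (i, j) has curvature ~ eG[i] * eA[j].
C = UG.T @ W @ UA
curv = np.outer(eG, eA)

# The edit: zero out the lowest-curvature components and reconstruct.
keep = curv >= np.quantile(curv, 0.30)   # threshold is an arbitrary choice here
W_edited = UG @ np.where(keep, C, 0.0) @ UA.T
```

Sanity check on the construction: with nothing removed, `UG @ C @ UA.T` recovers `W` exactly (up to floating point), so the edit is a pure projection in the curvature eigenbasis.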
Problem

Research questions and friction points this paper is trying to address.

Characterizing how memorization is represented in transformer models through loss-curvature analysis
Developing a weight-editing procedure that suppresses recitation of untargeted memorized data
Investigating how specialized weight-space structures affect fact retrieval and arithmetic tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decompose weights via loss curvature to separate memorization from shared structure
Edit weights by suppressing the low-curvature components that carry memorization
Preserve general reasoning and open-book retrieval while reducing closed-book fact retrieval
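The reported correspondence between a task's reliance on edited-out components and its post-edit performance drop suggests a simple diagnostic: project task activations onto the removed low-curvature directions and measure how much signal lands there. The sketch below is a hypothetical metric, not the paper's; the basis, mask, and data are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out = 8, 6

# Placeholders for one layer's input-side eigenbasis, its coefficients in
# the curvature basis, and the edit mask (True = kept, False = edited out),
# as a real curvature decomposition would produce them.
UA, _ = np.linalg.qr(rng.normal(size=(d_in, d_in)))
C = rng.normal(size=(d_out, d_in))
keep = rng.random((d_out, d_in)) > 0.3

def edited_activation_strength(X, UA, C, keep):
    """Hypothetical score: how strongly inputs X drive the edited-out
    components, weighted by those components' coefficient magnitudes."""
    P = np.abs(X @ UA)                                      # (n, d_in) energy per input direction
    edited_w = np.abs(np.where(keep, 0.0, C)).sum(axis=0)   # (d_in,) edited-out weight mass
    return float((P @ edited_w).mean())

task_inputs = rng.normal(size=(50, d_in))
score = edited_activation_strength(task_inputs, UA, C, keep)
```

Under the paper's finding, tasks scoring high on such a measure (e.g., arithmetic, closed-book fact retrieval) should show the largest post-edit drops, while tasks that barely touch the edited subspace (open-book retrieval, general logical reasoning) should be largely unaffected.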