Initialization is Critical to Whether Transformers Fit Composite Functions by Reasoning or Memorizing

📅 2024-05-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates why Transformer models adopt either reasoning or memorization strategies on compositional generalization tasks. Method: We identify parameter initialization scale as the decisive factor: smaller initialization standard deviations induce a low-complexity bias, encouraging models to learn compositional primitives rather than memorize input-output mappings. We propose a tunable hyperparameter, the "initialization rate" γ, to uniformly characterize layer-wise initialization scaling, and combine information-flow analysis, representation visualization, complexity-bias quantification, and evaluation on multiple real-world datasets. Contribution/Results: We demonstrate that controlling the initialization scale systematically shifts model behavior from memorization toward structured reasoning, substantially improving out-of-distribution generalization on unseen compositional problems. Our findings provide a principled, initialization-based mechanism for enhancing mathematical reasoning and compositional generalization in large language models.

📝 Abstract
Transformers have shown impressive capabilities across various tasks, but their performance on compositional problems remains a topic of debate. In this work, we investigate the mechanisms of how transformers behave on unseen compositional tasks. We discover that the parameter initialization scale plays a critical role in determining whether the model learns inferential (reasoning-based) solutions, which capture the underlying compositional primitives, or symmetric (memory-based) solutions, which simply memorize mappings without understanding the compositional structure. By analyzing the information flow and vector representations within the model, we reveal the distinct mechanisms underlying these solution types. We further find that inferential (reasoning-based) solutions exhibit low complexity bias, which we hypothesize is a key factor enabling them to learn individual mappings for single anchors. We validate our conclusions on various real-world datasets. Our findings provide valuable insights into the role of initialization scale in tuning the reasoning and memorizing abilities, and we propose the initialization rate $\gamma$ as a convenient tunable hyper-parameter in common deep learning frameworks, where $1/d_{\mathrm{in}}^{\gamma}$ is the standard deviation of the parameters of a layer with $d_{\mathrm{in}}$ input neurons.
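The abstract defines the initialization rate $\gamma$ so that a layer with $d_{\mathrm{in}}$ input neurons is initialized with standard deviation $1/d_{\mathrm{in}}^{\gamma}$. A minimal sketch of that rule (function names are illustrative, not from the paper):

```python
import random

def init_std(d_in: int, gamma: float) -> float:
    """Standard deviation for a layer with d_in input neurons: 1 / d_in**gamma."""
    return d_in ** (-gamma)

def init_weights(d_in: int, d_out: int, gamma: float, seed: int = 0) -> list:
    """Sample a d_out x d_in weight matrix from N(0, init_std(d_in, gamma)**2)."""
    rng = random.Random(seed)
    std = init_std(d_in, gamma)
    return [[rng.gauss(0.0, std) for _ in range(d_in)] for _ in range(d_out)]
```

Note that $\gamma = 0.5$ recovers the familiar $1/\sqrt{d_{\mathrm{in}}}$ scaling of standard initializations, while larger $\gamma$ yields smaller initial weights, the regime the paper associates with reasoning-based solutions.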
Problem

Research questions and friction points this paper is trying to address.

Transformer Models
Unseen Combinations
Mathematical Functions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer Models
Initialization Parameters
Strategy Control
Zhongwang Zhang
Shanghai Jiao Tong University
Pengxiao Lin
Institute of Natural Sciences, MOE-LSC, Shanghai Jiao Tong University; School of Mathematical Sciences, Shanghai Jiao Tong University
Zhiwei Wang
Institute of Natural Sciences, MOE-LSC, Shanghai Jiao Tong University; School of Mathematical Sciences, Shanghai Jiao Tong University
Yaoyu Zhang
Shanghai Jiao Tong University
Deep Learning Theory
Z. Xu
Institute of Natural Sciences, MOE-LSC, Shanghai Jiao Tong University; School of Mathematical Sciences, Shanghai Jiao Tong University; Key Laboratory of Marine Intelligent Equipment and System, Ministry of Education, P.R. China; Shanghai Seres Information Technology Co., Ltd, Shanghai, P.R. China; Center for LLM, Institute for Advanced Algorithms Research, Shanghai