Data Distribution as a Lever for Guiding Optimizers Toward Superior Generalization in LLMs

📅 2026-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how to improve the generalization of large language models without relying on computationally expensive optimizers such as Sharpness-Aware Minimization (SAM). Through theoretical and empirical analysis, it shows for the first time that SAM's generalization advantage stems from its ability to mitigate the optimizer-induced simplicity bias (SB). Building on this insight, the study proposes actively attenuating SB by reshaping the training data distribution, specifically by upsampling or augmenting samples that are learned later in training, to guide standard optimizers (e.g., GD, AdamW, Muon) toward solutions with better generalization. Evaluated on mathematical reasoning tasks with models including Phi2-2.7B and Llama3.2-1B, the approach achieves up to an 18% relative improvement in accuracy.

📝 Abstract
Can modifying the training data distribution guide optimizers toward solutions with improved generalization when training large language models (LLMs)? In this work, we theoretically analyze an in-context linear regression model with multi-head linear self-attention and compare the training dynamics of two gradient-based optimizers: gradient descent (GD) and sharpness-aware minimization (SAM). The latter exhibits superior generalization but is prohibitively expensive for training even medium-sized LLMs. We show, for the first time, that SAM induces a lower simplicity bias (SB), the tendency of an optimizer to preferentially learn simpler features earlier in training, and we identify this reduction as a key factor underlying its improved generalization. Motivated by this insight, we demonstrate that altering the training data distribution by upsampling or augmenting examples learned later in training similarly reduces SB and leads to improved generalization. Our extensive experiments show that this strategy improves the performance of multiple LLMs, including Phi2-2.7B, Llama3.2-1B, Gemma3-1B-PT, and Qwen3-0.6B-Base, achieving relative accuracy gains of up to 18% when fine-tuned with AdamW and Muon on mathematical reasoning tasks.
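The abstract's intervention, upsampling examples that a standard optimizer only fits late in training, can be sketched roughly as follows. This is a minimal illustration, not the paper's exact procedure: the per-example "first learned epoch" bookkeeping, the loss-threshold notion of "learned", and the duplication factor are all assumptions made for the sketch.

```python
def upsample_late_learned(examples, first_learned_epoch, total_epochs, boost=3):
    """Reweight a dataset by duplicating examples that were learned late.

    first_learned_epoch[i] is the first epoch at which example i's training
    loss fell below a fixed threshold (hypothetical bookkeeping from a prior
    run with a standard optimizer); examples never learned get total_epochs.
    Examples learned in the second half of training are duplicated `boost`
    times, shifting the data distribution toward harder, later-learned
    features before fine-tuning resumes.
    """
    resampled = []
    for ex, epoch in zip(examples, first_learned_epoch):
        copies = boost if epoch >= total_epochs // 2 else 1
        resampled.extend([ex] * copies)
    return resampled

# Toy run: 4 examples first learned at epochs 1, 2, 8, and 10 (of 10).
data = ["a", "b", "c", "d"]
learned = [1, 2, 8, 10]
print(upsample_late_learned(data, learned, total_epochs=10))
# → ['a', 'b', 'c', 'c', 'c', 'd', 'd', 'd']
```

In practice the resampled set would simply replace the original training set for a standard optimizer such as AdamW; the augmentation variant mentioned in the abstract would transform the late-learned examples instead of duplicating them verbatim.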
Problem

Research questions and friction points this paper is trying to address.

data distribution
generalization
large language models
simplicity bias
optimizers
Innovation

Methods, ideas, or system contributions that make the work stand out.

simplicity bias
data distribution
generalization
large language models
sharpness-aware minimization
Tushaar Gangavarapu
University of Texas at Austin
Jiping Li
University of California, Los Angeles
Machine Learning, Statistical Learning Theory, Optimization
Christopher Vattheuer
University of California, Los Angeles
Zhangyang Wang
University of Texas at Austin
Baharan Mirzasoleiman
UCLA
Machine Learning, Optimization, Submodularity, ML Sustainability, Data-quality