Data Uniformity Improves Training Efficiency and More, with a Convergence Framework Beyond the NTK Regime

📅 2025-06-30
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Current data selection strategies for LLMs lack general, quantitative principles. Method: The paper proposes a data selection paradigm centered on distributional uniformity, specifically, maximizing the minimum pairwise distance $h_{min}$ among training samples to improve training efficiency and generalization. The authors develop a gradient descent convergence theory that does not require Lipschitz smoothness assumptions, moving beyond the Neural Tangent Kernel (NTK) regime. Contribution/Results: The theory rigorously establishes that increasing $h_{min}$ accelerates training dynamics for Transformer-based architectures, and it provides formal justification for residual connections and function composition structures. Empirical evaluation demonstrates that this strategy significantly speeds up supervised fine-tuning while achieving comparable or better downstream task performance across diverse model scales, optimizers, and datasets.

📝 Abstract
Data selection plays a crucial role in data-driven decision-making, including in large language models (LLMs), and is typically task-dependent. Properties such as data quality and diversity have been extensively studied and are known to enhance model performance. However, it remains unclear whether there exist other quantitative and general principles of data selection that can consistently improve performance, especially for complex tasks with limited prior knowledge. In this paper, we demonstrate that selecting more uniformly distributed data can improve training efficiency while enhancing performance. Specifically, we establish that a more uniform (less biased) distribution leads to a larger minimum pairwise distance between data points, denoted by $h_{min}$, and prove that a smaller $h_{min}$ can slow down the training dynamics of gradient descent (GD). Moreover, we theoretically show that the approximation error of neural networks decreases as $h_{min}$ increases. Our analysis introduces a convergence framework for GD beyond the Neural Tangent Kernel (NTK) regime, applicable to a broad class of architectures, including transformers, without requiring Lipschitz smoothness. This framework further provides theoretical justification for the use of residual connections and function compositions in deep neural architectures. Finally, we conduct comprehensive experiments on supervised fine-tuning across various settings, including different optimization strategies, model sizes, and training datasets. The results consistently demonstrate that selecting data by maximizing pairwise distance significantly accelerates training and achieves comparable or better performance in LLMs across diverse datasets. Code and datasets are available at https://github.com/SafeRL-Lab/data-uniformity.
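The selection criterion in the abstract can be illustrated concretely. Below is a minimal sketch, assuming training samples are already embedded as vectors: `min_pairwise_distance` computes $h_{min}$, and `select_uniform_subset` uses greedy farthest-point selection to pick a subset that approximately maximizes the minimum pairwise distance. The function names and the greedy heuristic are illustrative, not taken from the paper's released code.

```python
import numpy as np

def min_pairwise_distance(X):
    """Compute h_min: the smallest Euclidean distance between any two rows of X."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # ignore zero self-distances
    return d.min()

def select_uniform_subset(X, k):
    """Greedy farthest-point selection of k samples.

    Each step adds the sample farthest from the current subset, which
    approximately maximizes the minimum pairwise distance of the result.
    """
    chosen = [0]  # start from an arbitrary sample
    dist_to_chosen = np.linalg.norm(X - X[0], axis=1)
    for _ in range(k - 1):
        nxt = int(dist_to_chosen.argmax())
        chosen.append(nxt)
        dist_to_chosen = np.minimum(dist_to_chosen,
                                    np.linalg.norm(X - X[nxt], axis=1))
    return chosen
```

The pairwise-distance computation is O(n²) in memory, so for large corpora one would batch it or use an approximate nearest-neighbor index; the greedy loop itself is the standard farthest-point-sampling heuristic.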
Problem

Research questions and friction points this paper addresses.

It is unclear whether general, quantitative data selection principles exist beyond quality and diversity
A smaller minimum pairwise distance between samples slows gradient descent dynamics
Existing NTK-based convergence analyses do not cover diverse modern architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

More uniform data distributions improve training efficiency
A larger minimum pairwise distance $h_{min}$ speeds up convergence
A convergence framework beyond the NTK regime that applies to transformers
Yuqing Wang
Department of Applied Mathematics and Statistics, Johns Hopkins University
Shangding Gu
UC Berkeley
Artificial Intelligence · Safe Reinforcement Learning · Optimization · Planning · Robotics