🤖 AI Summary
Current data selection strategies for LLMs in data-driven decision-making lack general, quantitative principles. Method: This paper proposes a novel data selection paradigm centered on data distribution uniformity: specifically, maximizing the minimum pairwise distance $h_{min}$ among training samples to improve training efficiency and generalization. We develop a generalized gradient descent convergence theory that does not require Lipschitz continuity assumptions, going beyond the limitations of the Neural Tangent Kernel (NTK) framework. Contribution/Results: Our theory is the first to rigorously establish that increasing $h_{min}$ accelerates training dynamics for Transformer-based architectures, and it provides formal justification for residual connections and function composition structures. Empirical evaluation demonstrates that this strategy significantly speeds up supervised fine-tuning while achieving comparable or better downstream task performance across diverse model scales, optimizers, and datasets.
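Concretely, for a training set $\{x_1, \dots, x_n\}$, the quantity being maximized is the minimum pairwise distance among the selected samples. A natural formalization (the Euclidean norm here is an assumption; the paper may measure distance in an embedding space or under a different metric) is:

```latex
h_{min} = \min_{1 \le i < j \le n} \lVert x_i - x_j \rVert_2
```

Under this reading, a more uniformly spread selection pushes $h_{min}$ up, while clustered or redundant samples drive it toward zero.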
📝 Abstract
Data selection plays a crucial role in data-driven decision-making, including in large language models (LLMs), and is typically task-dependent. Properties such as data quality and diversity have been extensively studied and are known to enhance model performance. However, it remains unclear whether there exist other quantitative and general principles of data selection that can consistently improve performance, especially for complex tasks with limited prior knowledge. In this paper, we demonstrate that selecting more uniformly distributed data can improve training efficiency while enhancing performance. Specifically, we establish that a more uniform (less biased) distribution leads to a larger minimum pairwise distance between data points, denoted by $h_{min}$, and prove that a smaller $h_{min}$ can slow down the training dynamics of gradient descent (GD). Moreover, we theoretically show that the approximation error of neural networks decreases as $h_{min}$ increases. Our analysis introduces a convergence framework for GD beyond the Neural Tangent Kernel (NTK) regime, applicable to a broad class of architectures, including transformers, without requiring Lipschitz smoothness. This framework further provides theoretical justification for the use of residual connections and function compositions in deep neural architectures. Finally, we conduct comprehensive experiments on supervised fine-tuning across various settings, including different optimization strategies, model sizes, and training datasets. The results consistently demonstrate that selecting data by maximizing pairwise distance significantly accelerates training and achieves comparable or better performance in LLMs across diverse datasets. Code and datasets are available at: https://github.com/SafeRL-Lab/data-uniformity.
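The abstract does not spell out the selection procedure, but one standard way to pick a subset that heuristically maximizes the minimum pairwise distance $h_{min}$ is greedy farthest-point sampling over sample embeddings. The sketch below is illustrative only (the function name, the use of Euclidean distance, and the embedding matrix are all assumptions, not the paper's implementation):

```python
import numpy as np

def select_uniform_subset(embeddings: np.ndarray, k: int, seed: int = 0) -> list:
    """Greedy farthest-point sampling.

    Picks k row indices of `embeddings` so that each new point is as far
    as possible from the points chosen so far, which heuristically pushes
    up the minimum pairwise distance h_min of the selected subset.
    Illustrative sketch; not the paper's actual selection code.
    """
    rng = np.random.default_rng(seed)
    n = embeddings.shape[0]
    first = int(rng.integers(n))
    selected = [first]
    # Distance of every point to its nearest already-selected point.
    dist = np.linalg.norm(embeddings - embeddings[first], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))  # farthest from the current selection
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(embeddings - embeddings[nxt], axis=1))
    return selected
```

Each iteration costs O(n) distance updates, so selecting k of n points is O(nk); already-selected points have distance zero and are never re-picked.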