Compute-Constrained Data Selection

📅 2024-10-21
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work formalizes the "compute-constrained data selection" problem for large language model (LLM) fine-tuning under a limited compute budget. Method: The authors propose a cost-aware utility function that accounts jointly for selection overhead and training gain, and a cross-scale compute-budget control that systematically evaluates lightweight strategies, including perplexity-based and gradient-based selection, showing that selection assisted by smaller models outperforms costlier alternatives. Contribution/Results: Extensive multi-task, multi-scale experiments show that lightweight methods are compute-optimal within fixed budgets: perplexity-based selection pays off once the training model is about 5× larger than the selection model, and gradient-based selection once it is about 10× larger. The approach reduces reliance on computationally prohibitive selection paradigms, yielding a scalable, deployable data selection framework for resource-constrained LLM fine-tuning.

📝 Abstract
Data selection can reduce the amount of training data needed to finetune LLMs; however, the efficacy of data selection scales directly with its compute. Motivated by the practical challenge of compute-constrained finetuning, we consider the setting in which both the cost of selecting data and training are budgeted for. We first formalize the problem of data selection with a cost-aware utility function, and model the data selection problem as trading off initial-selection cost for training gain. We run a comprehensive sweep of experiments across multiple tasks, varying compute budget by scaling finetuning tokens, model sizes, and data selection compute. Interestingly, we find that many powerful data selection methods are almost never compute-optimal, and that cheaper data selection alternatives dominate both from a theoretical and empirical perspective. For compute-optimal training, we find that perplexity and gradient data selection require training-to-selection model size ratios of 5x and 10x, respectively.
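The budget accounting behind the abstract's trade-off can be sketched numerically. The following is a minimal illustration, not the authors' code: it uses the common 6 × params × tokens FLOPs approximation, and every model size, pool size, and budget below is a hypothetical placeholder. It only shows the budget mechanics of paying for selection out of a fixed total; the paper's 5×/10× ratios concern when each selection method becomes compute-optimal, which this arithmetic does not establish.

```python
# Minimal sketch of compute-budgeted data selection (not the authors' code).
# FLOPs are approximated with the standard 6 * params * tokens rule;
# all sizes and budgets are hypothetical placeholders.

def flops(params: int, tokens: int) -> int:
    """Approximate compute for one pass of a model over some tokens."""
    return 6 * params * tokens

def trainable_tokens(budget: int, target_params: int,
                     selector_params: int, pool_tokens: int) -> int:
    """Finetuning tokens affordable after paying for data selection.

    Selection is modeled as one scoring pass of the selector model over
    the whole candidate pool; the leftover budget is spent finetuning
    the target model.
    """
    leftover = budget - flops(selector_params, pool_tokens)
    return max(0, leftover) // (6 * target_params)

TARGET = 7_000_000_000                 # target model parameters (hypothetical)
POOL = 1_000_000_000                   # candidate-pool tokens (hypothetical)
BUDGET = flops(TARGET, 500_000_000)    # budget = 500M training tokens w/o selection

# Scoring the pool with the target model itself exhausts the budget,
# while a 5x-smaller proxy leaves budget to train on 300M tokens.
print(trainable_tokens(BUDGET, TARGET, TARGET, POOL))       # 0
print(trainable_tokens(BUDGET, TARGET, TARGET // 5, POOL))  # 300000000
```

Under this accounting, a selection method must buy more training gain than the finetuning tokens it displaces, which is why cheap selectors run on smaller proxy models can dominate expensive ones.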
Problem

Research questions and friction points this paper is trying to address.

Optimizing data selection under compute constraints for LLM finetuning
Balancing selection cost and training gain with budget limits
Identifying compute-optimal data selection methods for efficient training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cost-aware utility function for data selection
Modeling selection as a trade-off of selection cost against training gain
Evidence that cheaper selection methods dominate at compute-optimal budgets