AI Summary
Bilevel optimization for data reweighting remains prohibitively expensive to scale to billion-parameter large language models (LLMs). To address this, we propose ScaleBiO, a scalable first-order bilevel optimization framework. ScaleBiO integrates LISA-based memory optimization with gradient approximation techniques, enabling efficient bilevel data reweighting on 34B-parameter LLMs (e.g., Yi-34B) using only eight A40 GPUs. Theoretically, we establish convergence and optimality guarantees under standard assumptions. Empirically, ScaleBiO consistently improves downstream task performance across model scales, from GPT-2 to Yi-34B, while effectively filtering noisy samples and identifying high-information instances. To our knowledge, it is the first practical bilevel learning approach for data-adaptive optimization in large-scale LLMs.
Abstract
Bilevel optimization has shown its utility across various machine learning settings, yet most algorithms in practice require second-order information, making them challenging to scale. Only recently has a paradigm of first-order algorithms emerged that can effectively address bilevel optimization problems. Nevertheless, the practical efficiency of this paradigm remains unverified, particularly in the context of large language models (LLMs). This paper introduces the first scalable instantiation of this paradigm, called ScaleBiO, focusing on bilevel optimization for large-scale LLM data reweighting. By combining it with a recently proposed memory-efficient training technique called LISA, our novel algorithm scales the paradigm to 34-billion-parameter LLMs on eight A40 GPUs, marking the first successful application of bilevel optimization in practical scenarios for large-sized LLMs. Empirically, extensive experiments on data reweighting verify the effectiveness of ScaleBiO for models of different scales, including GPT-2, LLaMA-3-8B, GPT-NeoX-20B, and Yi-34B, where bilevel optimization succeeds in filtering irrelevant data samples and selecting informative ones. Theoretically, ScaleBiO ensures the optimality of the learned data weights, along with a convergence guarantee matching the conventional first-order bilevel optimization paradigm on smooth and strongly convex objectives.
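To make the bilevel data-reweighting setup concrete, here is a minimal toy sketch of the general idea: an inner loop fits model parameters on a weighted training loss, while an outer loop adjusts per-sample weights using a first-order (one-step unrolled) hypergradient against a clean validation set. This is an illustration only, not ScaleBiO itself: the 1-D linear model, the one-step unrolling, and all hyperparameters here are made up for the example, and ScaleBiO's actual algorithm and LISA integration differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical, NOT ScaleBiO): fit y = theta * x on a training
# set whose first half has corrupted labels; the validation set is clean.
n_train, n_val = 20, 10
x_tr = rng.normal(size=n_train)
y_tr = 2.0 * x_tr
y_tr[: n_train // 2] += rng.normal(scale=5.0, size=n_train // 2)  # noisy half
x_va = rng.normal(size=n_val)
y_va = 2.0 * x_va

theta = 0.0                          # inner variable: model parameter
w = np.full(n_train, 1.0 / n_train)  # outer variable: per-sample weights
lr_in, lr_out = 0.05, 0.01

for _ in range(200):
    # Inner step: gradient descent on the weighted training loss.
    g_i = 2.0 * (theta * x_tr - y_tr) * x_tr       # per-sample gradients
    theta -= lr_in * (w * g_i).sum()
    # Outer step: first-order hypergradient through the single unrolled
    # inner step, dL_val/dw_i ~= -lr_in * g_val * g_i(old theta). Samples
    # whose gradients align with the validation gradient gain weight.
    g_val = (2.0 * (theta * x_va - y_va) * x_va).mean()
    w -= lr_out * (-lr_in * g_val * g_i)
    w = np.clip(w, 0.0, None)
    w /= w.sum()                                   # keep weights on the simplex

clean_mass = w[n_train // 2 :].sum()  # total weight on the clean half
```

On this toy problem the clean half typically ends up with the larger share of the weight mass, mirroring the noisy-sample filtering behavior described above; scaling this idea to 34B-parameter LLMs is what requires the memory-efficient machinery the paper introduces.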