ScaleBiO: Scalable Bilevel Optimization for LLM Data Reweighting

📅 2024-06-28
🏛️ arXiv.org
📈 Citations: 7
✨ Influential: 0
🤖 AI Summary
Bilevel optimization for data reweighting in large language models (LLMs) has so far been prohibitively expensive at billion-parameter scale. To address this, we propose ScaleBiO, a scalable first-order bilevel optimization framework. ScaleBiO integrates LISA-based memory optimization with gradient approximation techniques, enabling efficient bilevel data reweighting on 34B-parameter LLMs (e.g., Yi-34B) using only eight A40 GPUs. Theoretically, we establish convergence and optimality guarantees under standard assumptions. Empirically, ScaleBiO consistently improves downstream task performance across model scales, from GPT-2 to Yi-34B, while effectively filtering noisy samples and identifying high-information instances. To our knowledge, it is the first practical bilevel learning approach for data-adaptive optimization in large-scale LLMs.

๐Ÿ“ Abstract
Bilevel optimization has shown its utility across various machine learning settings, yet most algorithms in practice require second-order information, making them challenging to scale up. Only recently has a paradigm of first-order algorithms emerged that can effectively address bilevel optimization problems. Nevertheless, the practical efficiency of this paradigm remains unverified, particularly in the context of large language models (LLMs). This paper introduces the first scalable instantiation of this paradigm, called ScaleBiO, focusing on bilevel optimization for large-scale LLM data reweighting. By combining it with a recently proposed memory-efficient training technique called LISA, our novel algorithm allows the paradigm to scale to 34-billion-parameter LLMs on eight A40 GPUs, marking the first successful application of bilevel optimization in practical scenarios for large-sized LLMs. Empirically, extensive experiments on data reweighting verify the effectiveness of ScaleBiO on models of different scales, including GPT-2, LLaMA-3-8B, GPT-NeoX-20B, and Yi-34B, where bilevel optimization succeeds in filtering irrelevant data samples and selecting informative ones. Theoretically, ScaleBiO ensures the optimality of the learned data weights, along with a convergence guarantee matching the conventional first-order bilevel optimization paradigm on smooth and strongly convex objectives.
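The abstract's setting, an outer objective over data weights wrapped around an inner model-training objective solved with first-order updates only, can be illustrated with a toy one-step-unrolled reweighting scheme. Everything below (least-squares model, two synthetic data sources, the one-step hypergradient approximation) is an illustrative assumption for exposition, not ScaleBiO's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
w_true = rng.normal(size=d)

def make_source(n, noise):
    X = rng.normal(size=(n, d))
    return X, X @ w_true + noise * rng.normal(size=n)

# Two training sources (clean and label-noisy) plus a clean validation set.
sources = [make_source(200, 0.05), make_source(200, 3.0)]
Xv, yv = make_source(100, 0.05)

def grad(X, y, theta):
    # Gradient of the mean squared error 0.5 * ||X theta - y||^2 / n.
    return X.T @ (X @ theta - y) / len(y)

theta = np.zeros(d)
alpha = np.zeros(len(sources))  # logits parameterizing the mixture weights
eta, lr_alpha = 0.1, 0.5

for _ in range(500):
    w = np.exp(alpha) / np.exp(alpha).sum()          # softmax mixture weights
    gs = [grad(X, y, theta) for X, y in sources]
    theta_next = theta - eta * sum(wi * gi for wi, gi in zip(w, gs))  # inner step
    gv = grad(Xv, yv, theta_next)                    # validation gradient, unrolled point
    # First-order hypergradient via one-step unrolling: dL_val/dw_i ~ -eta * <g_v, g_i>.
    dw = np.array([-eta * (gv @ gi) for gi in gs])
    alpha -= lr_alpha * (w * (dw - w @ dw))          # outer step, through the softmax
    theta = theta_next

w = np.exp(alpha) / np.exp(alpha).sum()
# The weight on the clean source should dominate the noisy one.
```

The outer update never forms second-order terms: the hypergradient is approximated purely from inner-loop gradient inner products, which is the property that lets first-order bilevel methods scale.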
Problem

Research questions and friction points this paper is trying to address.

Scaling bilevel optimization for LLM data reweighting
Validating first-order bilevel optimization in LLMs
Ensuring optimality and convergence in large-scale LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

First-order scalable bilevel optimization algorithm
Combines with memory-efficient LISA training technique
Ensures optimality and convergence for large LLMs
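LISA keeps optimizer memory low by updating only a sampled subset of layers at each step. A minimal sketch of that layer-sampling idea on toy numpy "layers" (the sampling scheme, sizes, and helper names here are illustrative assumptions, not the actual LISA implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a stack of layers, each a parameter matrix.
layers = [rng.normal(size=(4, 4)) for _ in range(8)]

def lisa_step(layers, grads, lr=0.01, n_active=2):
    """Update only a randomly sampled subset of layers (LISA-style).

    Frozen layers need no gradients or optimizer state this step,
    which is what keeps the memory footprint small.
    """
    active = rng.choice(len(layers), size=n_active, replace=False)
    for i in active:
        layers[i] = layers[i] - lr * grads[i]
    return set(active.tolist())

grads = [np.ones((4, 4)) for _ in layers]
before = [layer.copy() for layer in layers]
active = lisa_step(layers, grads)

# Only the sampled layers move; the rest stay frozen this step.
for i, (b, a) in enumerate(zip(before, layers)):
    assert (not np.allclose(b, a)) == (i in active)
```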
Rui Pan
The Hong Kong University of Science and Technology
Jipeng Zhang
The Hong Kong University of Science and Technology
natural language processing, question answering
Xingyuan Pan
University of Illinois Urbana-Champaign
Renjie Pi
The Hong Kong University of Science and Technology
multi-modal learning, AutoML, data-centric learning
Xiaoyu Wang
The Hong Kong University of Science and Technology
Tong Zhang
University of Illinois Urbana-Champaign