Bucket Pre-training is All You Need

📅 2024-07-10
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Fixed-length data concatenation in LLM pretraining often causes document truncation (harming long-range dependencies), cross-document concatenation (introducing noise and breaking semantic coherence), and high computational overhead. To address these issues, the paper proposes a multi-bucket dynamic data composition method that abandons the conventional truncation/padding paradigm. It formally defines and quantifies three data-quality metrics (padding ratio, truncation ratio, and concatenation ratio) and employs adaptive bucketing to control sequence length and dynamically assemble context. This preserves document integrity and contextual fidelity while improving training efficiency. Experiments demonstrate that the approach significantly reduces data noise and truncation rates, accelerates model convergence, enhances long-range dependency modeling, and improves downstream task performance under identical compute budgets.

📝 Abstract
Large language models (LLMs) have demonstrated exceptional performance across various natural language processing tasks. However, the conventional fixed-length data composition strategy for pretraining, which involves concatenating and splitting documents, can introduce noise and limit the model's ability to capture long-range dependencies. To address this, we first introduce three metrics for evaluating data composition quality: padding ratio, truncation ratio, and concatenation ratio. We further propose a multi-bucket data composition method that moves beyond the fixed-length paradigm, offering a more flexible and efficient approach to pretraining. Extensive experiments demonstrate that our proposed method can significantly improve both the efficiency and efficacy of LLM pretraining. Our approach not only reduces noise and preserves context but also accelerates training, making it a promising solution for LLM pretraining.
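The three metrics can be made concrete with a small sketch of the conventional concatenate-then-split strategy. The exact definitions used below (padding tokens over total sequence tokens, documents cut by a sequence boundary, sequences mixing more than one document) are our assumptions for illustration, not necessarily the paper's formalization:

```python
def composition_metrics(doc_lens, seq_len):
    """Measure data-composition quality for the naive strategy that
    concatenates all documents and splits the stream into fixed-length
    sequences. Definitions are illustrative assumptions."""
    total = sum(doc_lens)
    n_seqs = -(-total // seq_len)  # ceil(total / seq_len)
    # Padding ratio: pad tokens in the final sequence over all slots
    padding_ratio = (n_seqs * seq_len - total) / (n_seqs * seq_len)

    # Document spans in the concatenated token stream
    spans, pos = [], 0
    for ln in doc_lens:
        spans.append((pos, pos + ln))
        pos += ln

    # Truncation ratio: docs with a sequence boundary strictly inside them
    boundaries = [i * seq_len for i in range(1, n_seqs)]
    truncated = sum(any(s < b < e for b in boundaries) for s, e in spans)
    truncation_ratio = truncated / len(doc_lens)

    # Concatenation ratio: sequences overlapping more than one document
    mixed = 0
    for i in range(n_seqs):
        lo, hi = i * seq_len, (i + 1) * seq_len
        mixed += sum(1 for s, e in spans if s < hi and e > lo) > 1
    concatenation_ratio = mixed / n_seqs
    return padding_ratio, truncation_ratio, concatenation_ratio
```

For example, three documents of lengths 3, 5, and 4 split into sequences of length 4 yield no padding, one truncated document, and one mixed sequence.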
Problem

Research questions and friction points this paper is trying to address.

Fixed-length data composition causes information loss
Long sequences introduce noise and computational overhead
Need better metrics and methods for data composition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-bucket data composition method
Adaptive training data organization
Quantitative metrics for data quality
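A minimal sketch of the multi-bucket idea: assign each document to the smallest bucket length that fits it, and pack documents greedily (first fit, longest first) so no document is truncated and padding is bounded by the bucket size. This is a simplified illustration; the paper's adaptive bucketing and dynamic context assembly are more involved:

```python
def bucket_compose(doc_lens, bucket_sizes):
    """Greedy first-fit packing into multi-length buckets.
    Simplified sketch: assumes every document fits the largest bucket."""
    bucket_sizes = sorted(bucket_sizes)
    assert max(doc_lens) <= bucket_sizes[-1], "doc exceeds largest bucket"
    sequences = []  # list of (bucket_capacity, [doc lengths])
    for ln in sorted(doc_lens, reverse=True):  # longest-first packing
        for cap, docs in sequences:
            if sum(docs) + ln <= cap:  # reuse an open sequence with room
                docs.append(ln)
                break
        else:
            # Open a new sequence in the smallest bucket that fits this doc
            cap = next(b for b in bucket_sizes if b >= ln)
            sequences.append((cap, [ln]))
    return sequences
```

With buckets of 512 and 1024 tokens, documents of lengths 600, 300, and 100 share one 1024-token sequence, while a 50-token document gets its own 512-token sequence: no truncation, and padding limited to the slack in each bucket.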