AI Summary
Multi-source heterogeneous text pre-training often suffers from cross-lingual and cross-domain negative interference (the "curse of multilinguality") and incurs prohibitive communication and memory overhead. Method: We propose the first vocabulary-agnostic federated pre-training framework for large language models: it decouples token embeddings from the transformer body, assigns each data source an independent vocabulary for parallel training, and allocates and loads embedding parameters on demand. Contribution/Results: This design reduces communication costs by orders of magnitude and cuts embedding memory consumption by 4–5×, while also improving the transformer body's generalization and training robustness. Experiments show up to a 20% reduction in average perplexity and significant gains on downstream tasks.
Abstract
Language model pre-training uses broad data mixtures to enhance performance across domains and languages. However, training on such heterogeneous text corpora requires extensive and expensive efforts. Since these data sources vary significantly in lexical, syntactic, and semantic aspects, they cause negative interference or the "curse of multilinguality". To address these challenges, we propose a communication-efficient pre-training framework, DEPT. Our method decouples embeddings from the transformer body while simultaneously training the latter on multiple data sources without requiring a shared vocabulary. DEPT can: (1) train robustly and effectively under significant data heterogeneity, (2) minimize token embedding parameters to only what the data source vocabulary requires, while cutting communication costs in direct proportion to both the communication frequency and the reduction in parameters, (3) enhance transformer body plasticity and generalization, improving both average perplexity (up to 20%) and downstream task performance, and (4) enable training with custom optimized vocabularies per data source. We demonstrate DEPT's potential via the first vocabulary-agnostic federated pre-training of billion-scale models, reducing communication costs by orders of magnitude and embedding memory by 4–5×.
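The core idea, decoupling per-source embedding tables from a single shared transformer body so that only the body is communicated, can be sketched as follows. This is a minimal illustration under assumed names (`make_embedding`, `SharedBody`, `aggregate` are hypothetical), not the authors' implementation:

```python
# Hypothetical sketch of DEPT-style decoupled embeddings: each data source
# keeps its own vocabulary and embedding table locally, while the transformer
# body is the only globally shared (and thus communicated) set of parameters.
import random

EMBED_DIM = 8  # toy embedding width

def make_embedding(vocab):
    # One independent embedding table per data source,
    # sized to exactly what that source's vocabulary requires.
    return {tok: [random.random() for _ in range(EMBED_DIM)] for tok in vocab}

class SharedBody:
    # Stand-in for the transformer body: the only parameters
    # exchanged between participants in federated training.
    def __init__(self):
        self.params = [0.0] * EMBED_DIM

def aggregate(bodies):
    # Federated averaging over body parameters only; embedding tables never
    # leave their source, so communication cost is independent of vocab size.
    n = len(bodies)
    return [sum(b.params[i] for b in bodies) / n for i in range(EMBED_DIM)]

# Two sources with disjoint vocabularies train their embeddings in parallel.
en_embed = make_embedding(["the", "cat"])
de_embed = make_embedding(["die", "katze"])
avg_body = aggregate([SharedBody(), SharedBody()])
assert len(avg_body) == EMBED_DIM
assert len(en_embed) == 2 and len(de_embed) == 2
```

The point of the sketch is the communication asymmetry: with a shared vocabulary, every round would ship embedding rows for the full joint vocabulary; here each round ships only the body, which is what yields the reported orders-of-magnitude reduction.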