Chameleon: A Flexible Data-mixing Framework for Language Model Pretraining and Finetuning

📅 2025-05-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Data mixing ratios significantly impact the generalization performance of large language models (LLMs), yet retraining with adjusted ratios incurs prohibitive computational costs.

Method: This paper proposes a dynamic domain reweighting framework based on leverage scores in the embedding space. It is the first to introduce leverage scores into data mixing, quantifying per-domain contributions via domain embedding modeling and domain affinity matrix construction, enabling online, retraining-free adaptation of domain weights. The framework supports zero-shot plug-and-play integration of new domains and unifies data mixing optimization across pretraining and fine-tuning stages.

Contributions/Results: In pretraining, it achieves superior performance using only 10% of the compute required by state-of-the-art methods. For few-shot transfer to new domains, it yields substantial accuracy gains. During fine-tuning, it consistently reduces test perplexity across all domains.

📝 Abstract
Training data mixtures greatly impact the generalization performance of large language models. Existing domain reweighting methods often rely on costly weight computations and require retraining when new data is introduced. To this end, we introduce a flexible and efficient data mixing framework, Chameleon, that employs leverage scores to quantify domain importance within a learned embedding space. We first construct a domain affinity matrix over domain embeddings. The induced leverage scores determine a mixture that upweights domains sharing common representations in embedding space. This formulation allows direct transfer to new data by computing the new domain embeddings. In experiments, we demonstrate improvements over three key scenarios: (i) our computed weights improve performance on pretraining domains with a fraction of the compute of existing methods; (ii) Chameleon can adapt to data changes without proxy retraining, boosting few-shot reasoning accuracies when transferred to new data; (iii) our method enables efficient domain reweighting in finetuning, consistently improving test perplexity on all finetuning domains over uniform mixture. Our code is available at https://github.com/LIONS-EPFL/Chameleon.
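The core mechanism described in the abstract (an affinity matrix over domain embeddings whose leverage scores induce a mixing distribution) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the RBF affinity, the ridge parameter, and the direct normalization of scores into weights are all assumptions, and the actual embedding model and score-to-weight mapping used by Chameleon are not reproduced here.

```python
import numpy as np

def domain_mixture_weights(embeddings, lam=1e-3, gamma=1.0):
    """Sketch: ridge leverage scores over a domain affinity matrix.

    embeddings: (n_domains, d) array, one learned embedding per domain
    (hypothetical input; the paper's embedding model is not shown here).
    lam, gamma: illustrative ridge and kernel-bandwidth parameters.
    """
    # Domain affinity matrix K via an RBF kernel (one possible choice).
    sq = np.sum(embeddings ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * embeddings @ embeddings.T
    K = np.exp(-gamma * d2)

    # Ridge leverage scores: diag(K (K + lam * I)^{-1}),
    # each in (0, 1) for a positive semidefinite K.
    n = K.shape[0]
    scores = np.diag(K @ np.linalg.solve(K + lam * np.eye(n), np.eye(n)))

    # Normalize scores into a mixing distribution over domains.
    return scores / scores.sum()
```

Because the affinity matrix is only n_domains by n_domains, recomputing weights when a new domain arrives amounts to embedding the new domain and redoing this small linear solve, which is consistent with the retraining-free adaptation the abstract describes.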
Problem

Research questions and friction points this paper is trying to address.

Optimizing data mixtures for language model generalization
Reducing computational cost in domain reweighting methods
Enabling flexible adaptation to new data without retraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses leverage scores for domain importance
Constructs domain affinity matrix efficiently
Enables direct transfer to new data
Wanyun Xie
PhD student, EPFL
Francesco Tonin
Laboratory for Information and Inference Systems, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
V. Cevher
Laboratory for Information and Inference Systems, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland